Best Practices · May 2025

Voice AI in the Access Center: How to Augment with Automation Without Losing the Human Touch

For many patients, the phone is still the front door, especially when something is confusing, urgent, or emotionally loaded.

And the work behind "just scheduling an appointment" has become more complex: more sites, more modalities, more insurance rules, more provider availability constraints, and more patients who need help navigating it all. As complexity rises, so do hold times and staff burnout.

Voice AI can help, but the winning model today is augmentation, not replacement. Let AI handle the high-volume, repetitive scheduling transactions, and let staff schedulers focus on the requests that require judgment, empathy, and exception-handling.

Today's contact center and scheduling teams handle two distinct types of work

1. Routine, high-volume transactions that (mostly) follow predictable rules

Great for automation and voice AI

  • Confirming upcoming appointments
  • Cancelling or rescheduling a single appointment
  • Adding a patient to a waitlist and offering earlier slots
  • Collecting or verifying demographics, when appropriate
  • "What time is my appointment?" / "Where do I go?" / "What are your hours?"
  • Handling messages and requests to care teams (prescription renewals, work/school notes, etc.)

These calls are typically predictable, scriptable, and measurable.

2. Complex, exception-heavy requests that require judgment

Difficult for automation and voice AI

  • Coordinating multiple appointments across multiple departments and locations (e.g., imaging + consult + lab)
  • Handling insurance complexity, like out-of-network questions, referrals, and authorization dependencies
  • Supporting patients with accessibility needs or low health literacy who need guidance
  • Resolving scheduling conflicts and clinical constraints (provider-specific rules, special equipment, clinical sequencing)
  • De-escalating anxious or frustrated patients, where empathy matters most
  • "Not sure what I need" calls that require navigation

These calls are where experienced scheduling agents create huge value, and where voice AI should (at most) assist in getting the patient to the right place.

The evidence: automation works best on reminders and simple access tasks

A large body of literature supports that reminders and simple outreach reduce missed appointments -- the classic "routine work" bucket that consumes contact-center capacity.

  • A systematic review in the Journal of Telemedicine and Telecare found that reminders via phone, SMS, or automated calls generally reduced non-attendance; the pooled estimates suggested a weighted mean relative reduction of about 34% from baseline, and automated reminders were somewhat less effective than manual calls (29% vs 39% of baseline). [8]
  • A randomized study in The American Journal of Medicine found that both staff reminders and automated reminders lowered no-show rates compared with no reminders (with staff reminders performing best). [5]
  • A broader systematic review in Patient Preference and Adherence concluded that reminder systems are consistently effective across settings and populations, and can also increase cancellations and rescheduling, which is often a good thing operationally because it creates reusable capacity. [7]

What this means for voice AI: If your voice AI starts with confirmations, cancellations, and reschedules, you're choosing workflows where we already have strong evidence that automation and structured messaging will improve attendance and throughput.

What Voice AI can safely take off the queue

A useful way to think about "safe" for voice AI is: Can this task be completed using objective rules and system-of-record data, without giving medical advice? If yes, it's a candidate.
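That safety test can be sketched as a small routing gate. This is an illustrative sketch only: the intent names and the two lists are assumptions for the example, not MDfit's actual taxonomy.

```python
# Illustrative "safe to automate" gate. An intent is a candidate for voice AI
# only if it is rules-based, resolvable from system-of-record data, and
# involves no medical advice. Intent names here are hypothetical.

AUTOMATABLE_INTENTS = {
    "confirm_appointment",
    "cancel_appointment",
    "reschedule_appointment",
    "waitlist_add",
    "faq_location_hours",
}

ALWAYS_HUMAN_INTENTS = {
    "symptom_triage",            # anything resembling medical advice
    "multi_dept_coordination",   # judgment-heavy sequencing
    "billing_dispute",
    "unclear_need",
}

def is_safe_to_automate(intent: str) -> bool:
    """Return True only for rules-based, data-backed, non-clinical tasks."""
    if intent in ALWAYS_HUMAN_INTENTS:
        return False
    return intent in AUTOMATABLE_INTENTS
```

Note the default: an intent that appears on neither list is treated as not automatable, so anything new or ambiguous falls to a human by design.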

Here are scheduling workflows that MDfit's customers commonly consider low-risk and high-volume candidates:

Appointment confirmations and reminders

Reminder systems (including automated calls/SMS) are associated with better attendance and improved rescheduling/cancellations. [7][8]

Voice AI can improve this one step further by letting patients confirm, cancel, or reschedule within the same call, rather than pressing 1 and then calling back later.

Cancellations, reschedules, and waitlist fill

The fastest additional capacity comes from quickly recovering cancelled slots. Automation, including voice AI, can improve that speed. It's especially useful in unplanned situations (weather closures, emergencies, provider call-offs, etc.).

(Separately, studies of automated rescheduling and waitlist-like workflows show meaningful improvements in earlier appointment access and captured capacity. [6])

"FAQ" calls that really shouldn't require staff involvement

If your patients routinely use your schedulers as a human search engine for things like:

  • Location, directions, parking, hours
  • Required paperwork
  • Referral receipt status

These calls, whose answers are readily available on your website or in structured system data, create long hold times for everyone else.

Structured intake for eligible visit types

For well-defined appointments, voice AI can collect:

  • Demographics and preferred contact method
  • Chief condition/complaint
  • Patient appointment constraints, such as days of the week or time
  • Referral details and ordering clinician name
  • Consent to text/email reminders

Then it can either book directly (if rules allow) or hand a structured "call summary" to staff schedulers.

What your staff should own

Even the most capable AI voice agent will fail in edge or complex cases. That's a reality in healthcare operations. Here's what staff should retain ownership of:

  • Exception rules and edge cases (anything off-protocol)
  • Sensitive situations (angry, distressed, or confused patients)
  • Clinical sequencing constraints
  • Ambiguity ("I don't know what I need")
  • Anything that looks like symptom triage
  • Multi-appointment coordination, especially across departments and providers

This maps directly to what we see in conversational-agent research -- the best use cases are task-oriented, tightly scoped, and integrated into rules-based workflows. [4][5]

Design patterns that make voice AI feel like "a good teammate" to schedulers

There are many best practices when it comes to making a voice AI work for healthcare organizations. Here are a few we've learned (sometimes the hard way) during our last few years offering voice AI powered by MDfit:

1. Make the AI's goal resolution or escalation, not "completion"

Most patients don't care whether AI or a person helps them. They care whether the outcome is correct and respectful. That means your AI needs a default rule: if the AI is uncertain, or the patient has had to repeat themselves twice without being understood, escalate immediately. This aligns with evidence that speech-recognition failures can create dissatisfaction and increase demand for human support. [3]
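That default rule is simple enough to express as a tiny check. The confidence threshold and repeat limit below are illustrative assumptions to be tuned per deployment, not recommended values.

```python
# Illustrative escalation rule: hand off to a human as soon as the AI is
# uncertain or the caller has had to repeat themselves twice.

CONFIDENCE_FLOOR = 0.75   # assumed threshold; tune per deployment
MAX_REPEATS = 2           # repeats tolerated before escalating

def should_escalate(recognition_confidence: float, repeat_count: int) -> bool:
    """Escalate on low recognition confidence or too many repeats."""
    if recognition_confidence < CONFIDENCE_FLOOR:
        return True
    return repeat_count >= MAX_REPEATS
```

The point of keeping the rule this small is that it is easy to audit: anyone reviewing the call flow can verify exactly when a caller reaches a person.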

2. Always provide a human off-ramp

  • "Press 0 to speak with a scheduler."
  • "If you'd rather not use our automated system, say 'representative'."
  • You should also consider immediate transfer for certain intents (complaints, billing disputes, symptom triage, etc.)

3. Capture a clean handoff package

A good handoff is more than transferring the call; it carries context and a transcript, including:

  • Patient identity confirmation status
  • Intent classification ("reschedule," "new patient," "referral," etc.)
  • Any structured answers (preferred times, location, provider preference)

And if possible, a short natural-language summary for the scheduler.
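A handoff package like this can be rendered for the scheduler as a short structured summary. The function below is a minimal sketch; the intent labels and field names are hypothetical, not a defined schema.

```python
def format_handoff(identity_verified: bool, intent: str,
                   answers: dict[str, str]) -> str:
    """Build a scheduler-facing summary from the AI's structured handoff.

    `answers` holds whatever structured fields the AI collected
    (preferred times, location, provider preference, etc.).
    """
    lines = [
        f"Identity verified: {'yes' if identity_verified else 'no'}",
        f"Intent: {intent}",
    ]
    for key, value in answers.items():
        lines.append(f"{key}: {value}")
    return "\n".join(lines)
```

The scheduler sees the verified-identity flag first, so they know immediately whether to re-verify before making any changes.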

4. Use "confirmations" the way great schedulers do

For any booking change, the AI should repeat back:

  • Provider / clinic
  • Date and time
  • Location, likely including address
  • Special instructions, if applicable

This reduces downstream callbacks caused by misunderstanding.

5. Offer an SMS text-message link as a fallback

Speech recognition isn't equally easy for every caller or every environment. Depending on where the voice AI is in the workflow, offering a secure text link to confirm or pick a slot can help when the voice channel is noisy, or the AI has trouble understanding accents.

A final word of caution -- Integration is the make-or-break factor

Voice AI that can't safely interact with scheduling data is just a nicer version of your phone system. Here are the two main integration principles we use at MDfit:

1) Real-time availability. Never stale templates. Patients will (usually) forgive an automated voice. They will not forgive being offered slots that aren't real.

2) A single source of truth for appointment status. If the AI cancels or reschedules, the action must land in the same scheduling system everyone uses, and it must be visible in real time.
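Principle 1 translates into a simple code-level pattern: never present a slot that wasn't fetched from the scheduling system in the same turn. The sketch below assumes a hypothetical `fetch_open_slots` client function; it exists only to illustrate the no-caching rule.

```python
from datetime import datetime

def offer_slots(fetch_open_slots, clinic_id: str, max_offers: int = 3) -> list[str]:
    """Offer only slots the scheduling system confirms right now.

    `fetch_open_slots` is a hypothetical callable that queries the live
    system of record and returns datetimes; we never read a cached template.
    """
    slots = fetch_open_slots(clinic_id)               # live call, every time
    fresh = [s for s in slots if s > datetime.now()]  # drop anything in the past
    return [s.strftime("%A %b %d, %I:%M %p") for s in sorted(fresh)[:max_offers]]
```

Fetching live on every offer costs an extra round trip per turn, but it is the only way to keep the promise in principle 1: the patient is never told about a slot that no longer exists.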

The Bottom Line

A case study in JAMIA on automated self-scheduling illustrates the broader point: when scheduling is fully integrated into the system of record, patients will use it at scale, and outcomes like no-show rates can improve for those self-scheduled appointments. [7] Voice AI can bring similar "self-service at scale" benefits to patients who prefer the phone.

It's most valuable when it minimizes the "busywork" of a scheduler's role. In that sense, if you treat your voice AI as another scheduling teammate -- with clear boundaries, fast escalation, and tight integration -- it can meaningfully improve access, reduce hold times, and improve appointment utilization, all while making staff scheduling work less transactional.

References

  1. Reddy A, et al. Telephone Access Management in Primary Care: Cross-Case Analysis of High-Performing Primary Care Access Sites. J Gen Intern Med. 2022. pmc.ncbi.nlm.nih.gov
  2. U.S. Bureau of Labor Statistics. Medical Secretaries and Administrative Assistants (OEWS, May 2023). bls.gov
  3. Ermolina A, Tiberius V. Voice-Controlled Intelligent Personal Assistants in Health Care: International Delphi Study. J Med Internet Res. 2021;23(4):e25312. jmir.org
  4. Laranjo L, Dunn AG, Tong HL, et al. Conversational agents in healthcare: a systematic review. J Am Med Inform Assoc. 2018;25(9):1248-1258. pmc.ncbi.nlm.nih.gov
  5. Tudor Car L, Dhinagaran DA, Kyaw BM, et al. Conversational Agents in Health Care: Scoping Review and Conceptual Analysis. J Med Internet Res. 2020;22(8):e17158. pmc.ncbi.nlm.nih.gov
  6. Adams SJ, Acosta JN, Rajpurkar P. How generative AI voice agents will transform medicine. 2025. pmc.ncbi.nlm.nih.gov
  7. McLean S, Booth A, Gee M, et al. Appointment reminder systems are effective but not optimal: results of a systematic review and evidence synthesis employing realist principles. Patient Prefer Adherence. 2016;10:479-499. pmc.ncbi.nlm.nih.gov
  8. Hasvold PE, Wootton R. Use of telephone and SMS reminders to improve attendance at hospital appointments: a systematic review. J Telemed Telecare. 2011;17(7):358364. pubmed.ncbi.nlm.nih.gov
  9. NIST. Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. 2023. nist.gov (PDF)
  10. NIST. Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. NIST AI 600-1. 2024. nist.gov (PDF)
  11. U.S. Department of Health & Human Services (OCR). Guidance on How the HIPAA Rules Permit Covered Health Care Providers and Health Plans to Use Remote Communication Technologies for Audio-Only Telehealth. 2022. hhs.gov
  12. U.S. Food & Drug Administration. Software as a Medical Device (SaMD). fda.gov