How Agentic AI is Disrupting Medical Support: Exploring Doctolib’s Efficient Intelligent System

At Doctolib, our mission is not just to build the healthcare system we dream of: we are also changing the way health professionals interact with technology. Two ambitious goals drive us: ensuring the satisfaction of the health professionals who use our solutions and accelerating our pace of innovation. But ambition comes with great responsibility, especially when it comes to supporting our users.

As our platform grows, so does the number of support requests. The traditional approach is to scale the support team linearly with demand. But we saw an opportunity to think differently: what if we could keep support costs sustainable while maintaining high customer satisfaction? What if technology could help our support team focus on what they do best: providing compassionate, human service for complex cases?

This challenge prompted us to explore the forefront of AI technology, and agentic AI in particular. We are building a system that does not just answer questions: it can think, analyze, and act like an experienced support agent. This is not about replacing human interaction, but about enhancing it. By handling routine queries intelligently, we free our support team to focus on the cases where human expertise and compassion matter most.

What Is an Agentic System?

The term “agentic” contains “agent,” and this is no coincidence.

An agentic system is essentially a network of specialized AI agents working toward a common goal, like a well-coordinated team of experts. Think of it as a virtual organization where each member has specific skills and responsibilities.


Each agent is powered by a large language model (LLM) but is strictly constrained in the following ways:

  • A specialized prompt defining its role, context, and expertise
  • A set of specific tools available for use

Let’s illustrate this with an example from our support system. One of our agents is the “Data Retriever,” an expert focused on gathering customer information. While it has access to our customer data API, it can only use a carefully curated set of endpoints. This specialization ensures both efficiency and security (the principle of least privilege).
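
To make this concrete, here is a minimal sketch of what such a constrained toolset could look like; the endpoint names and the http_get helper are illustrative assumptions, not our production code:

# Illustrative sketch: the "Data Retriever" agent only ever sees a curated,
# read-only allow-list of endpoints (principle of least privilege).
ALLOWED_ENDPOINTS = {
    "get_practitioner": "/api/v1/practitioners/{practitioner_id}",
    "list_agendas": "/api/v1/practitioners/{practitioner_id}/agendas",
}

def call_customer_api(http_get, tool_name: str, **params) -> dict:
    """Resolve a tool name to its whitelisted endpoint; anything else is refused."""
    if tool_name not in ALLOWED_ENDPOINTS:
        raise PermissionError(f"Tool '{tool_name}' is not available to this agent")
    return http_get(ALLOWED_ENDPOINTS[tool_name].format(**params))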

The interactions between agents are managed by a directed graph structure, where:

  • Each node is a computational/processing step: it can be an LLM-based agent or just a deterministic function
  • Each edge defines possible communication paths
  • Information flows along these predefined paths depending on the output of the previous node

In the background, our agentic system is built on LangGraph, a powerful framework for coordinating these complex agent interactions.

Over the past few quarters, we have developed an enhanced retrieval-augmented generation (RAG) engine that enriches AI responses with content from our support knowledge base. Guess what? It will now become a specialized agent in our agentic system!
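
For readers who have not used LangGraph, here is a heavily simplified sketch of what such a graph could look like; the node names, state fields, and routing logic are illustrative, not our production topology:

from typing import TypedDict
from langgraph.graph import StateGraph, END

class SupportState(TypedDict, total=False):
    question: str
    customer_data: dict
    answer: str

def data_retriever(state: SupportState) -> SupportState:
    # LLM-backed agent restricted to a curated set of customer-data endpoints
    return {"customer_data": {"practitioner_id": 42}}

def knowledge_base_rag(state: SupportState) -> SupportState:
    # Our RAG engine, wrapped as one more specialized agent (node) in the graph
    return {"answer": "Here is how to share an agenda with a colleague..."}

def route_after_retrieval(state: SupportState) -> str:
    # Deterministic routing: the edge taken depends on the previous node's output
    return "knowledge_base_rag" if state.get("customer_data") else END

graph = StateGraph(SupportState)
graph.add_node("data_retriever", data_retriever)
graph.add_node("knowledge_base_rag", knowledge_base_rag)
graph.set_entry_point("data_retriever")
graph.add_conditional_edges("data_retriever", route_after_retrieval)
graph.add_edge("knowledge_base_rag", END)

app = graph.compile()
result = app.invoke({"question": "How do I share my agenda with a colleague?"})

Each node receives the shared state and returns only the keys it updates, which is what lets deterministic functions and LLM-backed agents coexist in the same graph.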

Beyond Chatbots: Rethinking Support Interactions

Poor Virtual Assistant Chat Experiences

We have all experienced the frustration of talking to a chatbot when all we really want is to reach a human.


Either the options provided by the chatbot are limited, with none meeting your needs, or there is a free text field that lets you vent your frustration at the machine (🤘), but fails to yield the results you want.

This is precisely what we absolutely do not want.

Alfred 🛎️! Please Bring Me the Big Red Button


We envision something fundamentally different. Instead of forcing users through rigid decision trees or leaving them to shout helplessly into a free-text box, we are building a digital butler that provides the same thoughtful, intelligent service as a skilled human agent.

Like a trained butler, Alfred:

  • Understands customer needs, even when they are not perfectly expressed
  • Knows what questions to ask and when to ask them
  • Cleverly gathers the information already available in our system
  • Provides clear, actionable solutions: our “big red button”
  • Does all of this through a dynamic user interface

From Vision to Reality: A Real-World Example

For Q4 2024, we chose to address a specific but common scenario: managing calendar access for doctors. A typical request might be: “Hey Alfred, I want to give Maria Smith read-only access to my family consultation calendar.”

But to be honest, customers rarely make such precise requests right off the bat.

A more realistic scenario might look like this:

  • HCP: I want to provide access to my schedule
  • Alfred: Sure. I see several people in your organization; which one do you mean? 1. Maria Smith 2. John Doe
  • HCP: Maria Smith
  • Alfred: I also see you have multiple schedules. Which one do you want Maria to access? 1. Schedule 1 2. Schedule 2 3. Schedule 3
  • HCP: Schedule 1
  • Alfred: Okay. What level of access do you want to give Maria? 1. Read-only 2. Booking management 3. Full access
  • HCP: Booking management
  • Alfred: I understand you want to give Maria Smith access to Schedule 1 with the access level “Booking management.” To proceed, please press the confirm button.

The HCP presses the confirm button.

✨ Maria Smith now has access to Schedule 1 ✨
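
Behind this back-and-forth, Alfred is essentially filling a small set of slots before anything gets executed. A minimal sketch of that conversational state (the field names are illustrative assumptions):

from dataclasses import dataclass
from typing import Optional

@dataclass
class AccessRequestDraft:
    # Illustrative slots Alfred must fill before proposing the confirmation step
    collaborator_id: Optional[int] = None   # e.g. Maria Smith
    agenda_id: Optional[int] = None         # e.g. Schedule 1
    access_right: Optional[str] = None      # "read_only" | "booking_management" | "full_access"

    def missing_fields(self) -> list[str]:
        """Whatever is still None drives Alfred's next clarifying question."""
        return [name for name, value in vars(self).items() if value is None]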


Pretty stylish, right? 😄

Technical Challenges and Design Decisions

Managing AI Hallucinations

Large language models are non-deterministic, and they do hallucinate: sometimes a little, sometimes a lot. This is a fact, and a part of the equation we cannot ignore.

After extensive discussions with engineers, legal departments, and leadership, we established a key principle: LLMs should never directly execute sensitive operations. The final step of changing agenda access is always in the hands of the user. This “human-in-the-loop” approach ensures security while maintaining efficiency.

However, this decision brings its own complexity: how do we ensure that what we show the user is actually what will be executed when they click confirm? In other words, how do we guarantee that when we display “Maria Smith,” the ID sent in the request body is not John Doe’s?

Security and Access Control

Some AI agents need access to customer data to effectively complete their work. However, adhering to the principle of least privilege, we decided not to grant them elevated “admin” access. Instead, we implemented a more nuanced approach:

  • Agents inherit the exact same permissions as the users they assist
  • This requires complex application context propagation
  • Each API call adheres to existing authorization boundaries
  • Security remains consistent with our regular user interactions

Running at Production Scale

Let’s look at the numbers:

  • ~1,700 support cases per day
  • Assuming ~10 interactions per conversation
  • ~17,000 messages generated daily

While this volume is manageable from a pure throughput perspective, it presents interesting challenges:

  • Maintaining conversational context across multiple interactions
  • Ensuring consistency in response times
  • Monitoring and logging to ensure quality

Technical Implementation

Now it’s time to dive into the technical details. As you can imagine, there is a lot to say, so to keep this article readable I have picked a few highlights. Hold your breath and put on your swimsuit!


Service-to-Service Authentication

Our services communicate using JSON Web Tokens (JWTs), implementing a robust authentication scheme:

Service A (Alfred) → JWT → Service B (Agenda)

Each JWT contains two key pieces of information (claims):

  • Audience (aud): “Who are you talking to” — the target service
  • Issuer (iss): “Who are you” — the calling service

Think of it as a secure introduction letter: “Dear Agenda Service (aud), I am Alfred Service (iss), here are my credentials, signed with our shared key.”

But we go a step further. Each service maintains a clear list of allowed callers. Even if the signature is completely valid, if Alfred is not on a service’s “approved callers” list, the request will be denied. This double-check mechanism ensures that services only communicate with those they explicitly trust.
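
As an illustration, here is roughly what the receiving side could look like using PyJWT; the key handling, claim values, and allow-list are simplified assumptions, not our actual configuration:

import jwt  # PyJWT, used here as one possible way to implement the checks

SHARED_KEY = "change-me"              # simplified: a single shared signing key
ALLOWED_CALLERS = {"alfred-service"}  # illustrative allow-list on the Agenda side

def authenticate_service_call(token: str) -> dict:
    # 1. Verify the signature and the audience: the token must be addressed to us.
    claims = jwt.decode(
        token,
        SHARED_KEY,
        algorithms=["HS256"],
        audience="agenda-service",
    )
    # 2. Double check: even with a valid signature, the issuer must be on our
    #    explicit "approved callers" list, otherwise the request is denied.
    issuer = claims.get("iss")
    if issuer not in ALLOWED_CALLERS:
        raise PermissionError(f"Service '{issuer}' is not an approved caller")
    return claims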

User Context Propagation

Remember our principle that Alfred should have the same permissions as the users he assists? Here’s how we implement it:

Users authenticate through our identity provider (Keycloak). As a result, they receive a JWT as proof of identity, which is propagated with the request.

When Alfred makes a request, he carries two tokens:

  1. Service-to-Service JWT (proving Alfred’s identity)
  2. User’s Keycloak token (carrying user identity)

In this way, the target service can:

  • Verify that Alfred is allowed to make the call
  • Apply the same permission checks as for a direct user request
  • Maintain consistent security boundaries

It’s like Alfred having both his butler credentials and an authorization letter from the user he’s assisting — both are required to perform actions on behalf of the user.
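
A minimal sketch of what such a dual-token call could look like; the URL and header names are assumptions for the example, the important part is that both credentials travel with every request:

import requests

def update_agenda_authorization(service_jwt: str, user_token: str, payload: dict):
    # Illustrative sketch: both credentials travel with every call Alfred makes.
    return requests.post(
        "https://agenda.internal/api/v1/agenda_authorizations",
        json=payload,
        headers={
            "Authorization": f"Bearer {service_jwt}",  # service-to-service identity
            "X-User-Token": user_token,                # user context propagation
        },
        timeout=5,
    )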

Secure Operation Execution

One of our core principles is that AI agents should not directly execute sensitive operations. But how do we achieve this while maintaining a smooth user experience? Our approach is as follows:


Whenever the AI agent decides it is time to update the agenda authorization, it constructs a complete request (URL, HTTP method, and payload) and passes it to a deterministic node. This node will:

  1. Ensure the parameters are not hallucinations generated by the LLM (fact-checking)
  2. Send the request to an operation request checker, which retrieves fresh data about every referenced resource and returns it in both technical and human-readable forms

For example, suppose the AI agent constructs the following payload:

{
  "method": "POST",
  "endpoint": "/api/v1/agenda_authorizations",
  "payload": {
    "user_id": 42,
    "agenda_id": 123,
    "access_right": "read_only"
  }
}

The operation request checker will fetch the relevant data to present its meaning to the user:

  • John Doe
  • Agenda A
  • Read-only access

This way, the front end can present accurate, human-readable content: when we display “John Doe,” it is because the request really contains John Doe’s ID, not something hallucinated by the LLM.
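
Here is a simplified sketch of what that checker could look like; the api helper and its methods are hypothetical, but the shape of the output (technical payload plus human-readable summary) matches the flow described above:

def check_operation_request(request: dict, api) -> dict:
    # Deterministic checker: re-fetch every referenced resource by ID and build
    # the human-readable summary shown to the user, so nothing displayed comes
    # from LLM-generated text. The `api` helper and its methods are hypothetical.
    payload = request["payload"]

    user = api.get_user(payload["user_id"])        # raises if the ID does not exist
    agenda = api.get_agenda(payload["agenda_id"])
    access = payload["access_right"]
    if access not in {"read_only", "booking_management", "full_access"}:
        raise ValueError(f"Unknown access right: {access}")

    return {
        "technical": request,  # exactly what will be executed on confirmation
        "human_readable": {
            "collaborator": user["name"],   # e.g. "John Doe"
            "agenda": agenda["name"],       # e.g. "Agenda A"
            "access_right": access.replace("_", " ").capitalize(),  # e.g. "Read only"
        },
    }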

Evaluation

For this important task, we leverage Literal.ai, a platform specifically for AI evaluation.

Our core metrics are:

  1. Achievement level: a score from 1 to 3, comparing Alfred’s output against established benchmarks
  2. Efficiency:
     • Execution delay of the graph
     • Number of steps: the number of nodes traversed during execution, compared against the optimal number of steps
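
As a sketch of how these two metrics can be represented per evaluation run (this is not the Literal.ai API, just the shape of the data we track):

from dataclasses import dataclass

@dataclass
class RunEvaluation:
    # The shape of the two metrics tracked for each evaluated run (illustrative)
    achievement_level: int   # 1 to 3, scored against established benchmarks
    latency_seconds: float   # end-to-end execution delay of the graph
    steps_taken: int         # number of nodes traversed during execution
    optimal_steps: int       # best-case path length for this scenario

    @property
    def step_efficiency(self) -> float:
        """1.0 means the graph took the optimal path; lower means detours."""
        return self.optimal_steps / self.steps_taken if self.steps_taken else 0.0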

Looking Ahead 🔭

We are still in the early stages of our journey with Alfred. While we initially focused on calendar access management as a proof of concept, this is just the beginning, and we are exploring other support scenarios where this agentic approach can bring value.

The foundation we have built — after careful consideration of security, user experience, and technical limitations — provides a solid platform for expanding Alfred’s skill set.

Stay tuned for more updates as we continue to push the boundaries of automation in medical support. After all, every great butler needs time to perfect their service. 🎩

