Discovery Health sits in a difficult communication lane: it has to explain medical aid decisions, benefit rules, exclusions, claims processes, and plan changes to millions of people, many of whom do not read policy documents for a living. In South Africa, that challenge is amplified by uneven financial literacy, variable health literacy, and a wide spread of language and reading comfort. The result is predictable: members miss details, misunderstand limits, and contact support when a written explanation would have prevented the confusion.
The problem: policy language that works on paper, not in practice
Medical aid communication often starts from the institution’s point of view. It uses formal terms, internal categories, and legal framing because that is how the product is built. The problem is that members do not experience a plan as a system diagram. They experience it as a moment of uncertainty: a procedure is approved or declined, a limit is reached, or a claim does not behave as expected.
That is where generic messaging breaks down. A standard notice can be technically correct and still fail the only test that matters: can the member answer the question "what does this mean for me right now?"
Why scale makes the problem harder
Discovery Health serves a very large member base, which means every unclear message has a multiplied cost. If a policy change requires a call centre explanation, that call is not just an inconvenience. It is a signal that the written version did not do its job.
At this scale, communication quality becomes an operational issue. Confusing language increases support volume, slows self-service, and makes benefits feel more complicated than they need to be. It also weakens trust. When people have to decode their own cover, the brand begins to feel distant, even when the underlying product is strong.
The AI opportunity: explain, translate, adapt
This is where AI can be useful, but only if it is used as a clarity system rather than a content generator. The goal is not to produce more words. The goal is to turn dense policy text into member-specific explanations that are shorter, clearer, and more relevant.
Done well, AI can take one policy rule and produce several versions of the same answer: a plain-language summary, a step-by-step explanation, a short mobile-friendly version, and a more detailed version for members who want the full context. It can also adapt tone, so the message sounds calm and helpful instead of defensive or bureaucratic.
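As a sketch of how one rule could fan out into several renderings, the snippet below builds one prompt per variant. The variant names and template wording are illustrative assumptions, not any actual Discovery Health system:

```python
# Illustrative sketch: fan one policy rule out into several prompt variants.
# Variant names and template text are assumptions for demonstration only.

VARIANT_TEMPLATES = {
    "plain_summary": "Summarise this benefit rule in plain language, two sentences at most:\n{rule}",
    "step_by_step": "Explain this benefit rule as numbered steps a member can follow:\n{rule}",
    "mobile_short": "Rewrite this benefit rule as one short, SMS-length sentence:\n{rule}",
    "detailed": "Explain this benefit rule in full, including limits and exceptions, "
                "for a member who wants the complete context:\n{rule}",
}

def build_variant_prompts(rule: str) -> dict[str, str]:
    """Return one prompt per audience-facing variant of the same rule."""
    return {name: template.format(rule=rule) for name, template in VARIANT_TEMPLATES.items()}

prompts = build_variant_prompts("MRI scans require pre-authorisation on all plans.")
```

Each prompt then goes to the model separately, so one source rule stays the single point of truth while the renderings differ.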
How the teardown works
A useful AI workflow for policy communication starts with a single rule: the system must explain the benefit from the member’s perspective. That means the prompt should not ask for a polished rewrite alone. It should specify the audience, the action the member needs to take, and the outcome that matters.
For example, a good internal prompt would ask the model to explain what changed, who is affected, what the member should do next, and what common misunderstanding to avoid. That structure forces the output to answer real questions instead of producing generic prose.
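That four-question structure can be enforced mechanically. The scaffold below is a minimal sketch, assuming a simple string template; the heading names mirror the questions above and are not an official format:

```python
# Illustrative prompt scaffold: force the model to answer four member-facing
# questions instead of producing a generic rewrite. Headings are assumptions.

REQUIRED_SECTIONS = (
    "What changed",
    "Who is affected",
    "What you should do next",
    "A common misunderstanding to avoid",
)

def build_change_notice_prompt(policy_text: str) -> str:
    """Wrap a raw policy change in a prompt that demands a fixed structure."""
    sections = "\n".join(f"- {s}" for s in REQUIRED_SECTIONS)
    return (
        "Rewrite the policy change below for a member, in plain language.\n"
        "Structure the answer under exactly these headings:\n"
        f"{sections}\n\n"
        f"Policy change:\n{policy_text}"
    )

prompt = build_change_notice_prompt("Day-to-day benefit limits reset on 1 January.")
```

Because the headings are fixed, the output can also be checked automatically: a draft missing any section fails review before a human ever reads it.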
The same logic applies to multilingual and mixed-literacy audiences. AI can help simplify sentence structure, reduce jargon, and surface the practical meaning first. But it should not invent medical advice, infer benefits that are not present, or blur the difference between guidance and official plan terms. The best use case is controlled explanation, not open-ended interpretation.
Prompt design and audience alignment
If the output feels hollow, the problem is usually not the model. It is the brief. Policy communication fails when the prompt does not define the reader well enough. A member in a formal employment setting, a young family managing day-to-day cover, and an older client checking chronic medication benefits do not need the same explanation.
That means the prompt has to include audience intent: what the person is trying to understand, what they are worried about, and what decision they need to make next. It should also define the reading level and the tone. A clear output is not just shorter; it is structurally better. It leads with the important point, strips out filler, and keeps the next step visible.
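One way to make that concrete is to encode the audience brief as data, so the prompt carries the context rather than hoping the model infers it. The profile fields below are illustrative assumptions:

```python
# Sketch: encode audience intent explicitly so the brief, not the model,
# carries the reader context. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AudienceProfile:
    trying_to_understand: str   # what the person wants to know
    worried_about: str          # the anxiety behind the question
    next_decision: str          # the choice the answer must support
    reading_level: str          # e.g. "plain language, short sentences"
    tone: str                   # e.g. "calm, factual"

def build_brief(profile: AudienceProfile, policy_text: str) -> str:
    """Turn an audience profile plus policy text into a model-ready brief."""
    return (
        f"Audience wants to understand: {profile.trying_to_understand}\n"
        f"Audience is worried about: {profile.worried_about}\n"
        f"Decision they must make next: {profile.next_decision}\n"
        f"Reading level: {profile.reading_level}\n"
        f"Tone: {profile.tone}\n"
        "Lead with the most important point and end with the next step.\n\n"
        f"Policy text:\n{policy_text}"
    )

brief = build_brief(
    AudienceProfile(
        trying_to_understand="whether chronic medication is still covered",
        worried_about="an unexpected co-payment",
        next_decision="whether to switch to a formulary medicine",
        reading_level="plain language, short sentences",
        tone="calm, factual",
    ),
    "Chronic medicine benefit changes apply from the next benefit year.",
)
```

The same policy text with a different profile produces a different brief, which is the point: the reader definition, not the policy wording, drives the output.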
This is where many AI systems drift into sludge. They sound fluent, but they do not resolve uncertainty. A useful system is one that can be tested against a real member question and still give an answer that feels specific, grounded, and usable.
Pitfalls that weaken the output
The first pitfall is over-automation. If every policy explanation is generated the same way, the result will sound efficient but indifferent. Members notice that quickly.
The second pitfall is vague prompting. If the model is told to “simplify the policy,” it may remove detail that actually matters. Clear communication is not the same as thin communication.
The third pitfall is weak review. In healthcare, factual accuracy and tone both matter. Any AI layer needs human oversight, especially when wording could affect decisions, claims understanding, or perceived eligibility.
What success would look like
A strong implementation would show up in three places. Members would understand benefit changes faster. Routine support questions would decrease. And written communication would feel more like guidance than administration.
There is also a broader business benefit. Better explanations reduce friction across the service journey. They make self-service more viable and help members use their cover with more confidence. In a product category built on trust, that matters.
Checklist
Before using AI for medical aid communication, the system should be able to:
Explain one policy rule in plain language without losing meaning.
Adapt the same message for different member needs and reading levels.
Keep the language factual, empathetic, and consistent with official terms.
Highlight the action the member should take next.
Flag uncertainty instead of guessing or overpromising.
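Parts of this checklist can be approximated as automatic yes-or-no pre-release checks. The word lists and thresholds below are illustrative assumptions; in healthcare, heuristics like these supplement human review, never replace it:

```python
# Illustrative pre-release checks approximating the checklist above.
# Word lists and the length threshold are assumptions, not validated rules;
# a human reviewer still signs off on every member-facing message.

OVERPROMISE = ("guaranteed", "always covered", "never declined")
ACTION_CUES = ("next step", "you should", "please", "contact", "submit")

def passes_release_checks(draft: str) -> dict[str, bool]:
    """Run simple yes-or-no heuristics over a drafted member message."""
    text = draft.lower()
    return {
        "has_next_step": any(cue in text for cue in ACTION_CUES),
        "no_overpromising": not any(word in text for word in OVERPROMISE),
        "fits_on_mobile": len(draft) <= 600,  # rough single-screen limit
    }

checks = passes_release_checks(
    "Your MRI needs pre-authorisation. Next step: submit the referral via the app."
)
```

A draft that fails any check goes back for rewriting before it reaches a member, which keeps the "flag uncertainty instead of guessing" rule enforceable rather than aspirational.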
Beyond the basics
The bigger opportunity is not just simplifying policy documents. It is building a communication layer that understands intent. That means a member asking about a hospital stay, chronic cover, or benefit limits should not have to read a long generic page and infer the answer. They should get a message shaped around the question they actually asked.
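A minimal sketch of that intent-first routing is shown below. The keyword lists are illustrative assumptions; a production system would use a trained classifier with a fallback to a human agent, but the shape is the same: classify the question, then answer with a topic-specific template instead of a generic page:

```python
# Sketch of intent-first routing: map a member's question to a topic
# template instead of a generic page. Keyword lists are illustrative
# assumptions; real routing would use a proper classifier with fallback.

TOPIC_KEYWORDS = {
    "hospital": ("hospital", "admission", "pre-auth"),
    "chronic": ("chronic", "medication", "formulary"),
    "limits": ("limit", "savings", "day-to-day"),
}

def route_question(question: str) -> str:
    """Return the first topic whose keywords match, else a general fallback."""
    q = question.lower()
    for topic, keywords in TOPIC_KEYWORDS.items():
        if any(keyword in q for keyword in keywords):
            return topic
    return "general"  # hand off to a human or a generic flow

topic = route_question("How much of my day-to-day limit is left?")
```

The routing decision is cheap; the value comes from what it selects, namely an answer shaped around the question the member actually asked.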
That is the practical value of AI in this context. Not hype, not novelty, but better explanation at scale. For a medical aid provider serving millions of South Africans, that is a meaningful upgrade.
