Is your new doctor an AI?

You have a headache. It's really annoying, but you're not ready to visit the doctor because it doesn't feel that serious. You google your symptoms, and the worst-case scenarios pop up. Still, you don't want to make an appointment, talk to nurses, sit in the waiting room, and probably get coughed on by someone.
So what do people reach for instead? Increasingly, it's AI - whether that's ChatGPT, Gemini, or Claude. This growing trend is exactly why OpenAI launched ChatGPT Health. The new feature was created with the help of 260 physicians from over 60 countries. OpenAI also underlines its security: medical records, documents, and chats are all kept separate and, allegedly, safely stored. Users also have full control over how their information is stored, and there are useful integrations with Apple Health, MyFitnessPal, and Function. This all sounds reassuring, right? Let's look at it carefully.
Is ChatGPT Health replacing doctors?
The simplest answer is no - it simply can't. Many obstacles prevent AI from replacing trained physicians. Diagnosing a patient isn't as simple as reading records and analysing lab results.
Medical history awareness: One of the most important factors in diagnosis is the patient's medical and family history. This becomes especially relevant when the specialist suspects a genetic component - think of diabetes, hypertension, some mental illnesses such as schizophrenia, and some cancers. AI doesn't have access to this information, so it can't immediately come to the correct conclusion.
Environmental awareness: This is especially important in infectious disease medicine. The reports of these specialists run pages long and take into account every little detail, even what kind of pets you own. Doctors know exactly what questions to ask because they are trained to recognise the patterns of diseases and illnesses. AI can't ask the important questions because it only works with the information you give it. If you don't know which details matter for a diagnosis, you may unknowingly withhold facts, and those gaps can mislead the AI.
No extra senses: A human doctor sees slurred speech, tremors, gait changes, and facial expressions. An AI? None of it. It's blind to the body, deaf to tone, numb to touch. Let's not forget orthopaedic specialists, who rely on the physical presence of the patient for hands-on tests such as assessing muscle tension and strength. Since AI lacks senses like touch, sight, and hearing, it can't match the accuracy of health professionals.
Empathy and accountability: OpenAI has been under fire for a while for removing warmth from its newest models (5.1 and 5.2). After being sued over some tragic incidents, OpenAI's answer was stricter guardrails. Users noticed: the models started sending help hotlines the moment users expressed discomfort, and one gets rerouted to a safety model even when mentioning something as harmless as being sleepy. ChatGPT can't take accountability or responsibility for a user's health. A new phenomenon called AI psychosis has also emerged, making the case that AI can be detrimental to users with mental health issues. Legally, only a human can take responsibility for a diagnosis. Only a doctor answers to peers, collaborates with other professionals, and makes decisions that carry weight.
So what is it good for?
It's not all doom and gloom. There are some positive aspects to this new technology, which can help patients and doctors alike.
OpenAI themselves mention that ChatGPT Health isn't supposed to be used as a diagnostic tool. It's designed to help users understand medical jargon, track symptoms, recognise patterns, interpret results, and report relevant information to a medical professional. It's a tool to help you understand and organise information.
This is exactly how it should be treated - an assistant for learning and preparation, not an instant doctor or therapist. This new feature could raise the overall health literacy of users, which would make the work of doctors easier.
ChatGPT is great at explaining jargon, walking users through concepts, and helping calm panic or stress, but it is not a medical professional, and it shouldn't be asked to make clinical calls.
Think of it like the world's chillest medical translator and prep partner - useful for questions, not treatment plans.
Genio - our mental health project for professionals
We saw the potential of AI to support users through coaching. Coaching is a part of mental health hygiene that is sometimes overlooked. Our project, Genio, supports professionals across many industries.
Building on Stephen Gribben's experience and expertise, and working together with our client, we designed a platform that supports personal development and career growth, and empowers professionals to feel more confident.
We focused on healthy, precise, and transparent collaboration between technology and the user. Responsibility and ethics are at the heart of this project, making sure that users keep full control over their progress.
Final thoughts
We are confident that AI can be used responsibly. This technology can educate and support people in many ways. It's important not to delegate agency and critical thinking to AI. The most important decisions should always be up to a human.
We are excited to follow the progress of this technology closely, without blindly jumping onto trends. Stay curious. Stay critical. Let AI support you - but never replace your judgment.
Appify Digital is a leading web and mobile app development company in Dublin, serving clients across Ireland and the UK. We specialize in creating innovative, AI-powered solutions that deliver exceptional user experiences and drive business growth.