By Clinical Partners on Wednesday, 15 October 2025
Category: Mental Health

What do we know about the relationship between AI chatbots and psychosis?

Use of AI in everyday life is growing, with recent figures showing that 34% of adults use a chatbot at least once a month. Of those surveyed, 19% said they use a chatbot for advice, and 12% believe AI would make a good therapist.

Building on our recent blog about AI’s use in the management of ADHD and similar conditions, we wanted to look at the impact AI is having on those who use chatbots, and specifically at recent claims that heavy use may be leading to what is being called ‘chatbot psychosis’.

What is psychosis?

Psychosis is a mental health condition that causes people to lose some sense of reality. This can mean seeing or hearing things that are not there (hallucinations) or holding beliefs that are untrue and not shared by others (delusions). It may also involve confused or disorganised thinking and trouble communicating in a way that others understand.

A person experiencing psychosis can often appear distressed by their confusing or concerning beliefs, or may exhibit extreme changes in behaviour. Periods of psychosis are referred to as psychotic episodes and require intervention.

What causes psychosis?

Sometimes psychosis is caused by a particular mental health condition such as schizophrenia, bipolar disorder, or severe depression, but it can also be triggered by factors such as trauma, extreme stress, drug or alcohol misuse, lack of sleep, or physical illness.

What we know about AI chatbots and psychosis

Large language models (LLMs), the systems behind most modern chatbots, can be used for a variety of purposes and are not, in themselves, necessarily harmful to users. However, because they are primarily designed to be agreeable and engaging, they have been found to mirror, validate, or even amplify delusions that users express to them, particularly when those users are already vulnerable to psychosis. This flattering, agreeable tone may suggest to users that the AI understands or cares for them, encouraging them to project humanity onto the system and blurring the line between interaction with a real human and interaction with a chatbot.

What are some real-life consequences of chatbot psychosis?

The 2021 Windsor Castle intruder case is one documented instance of an AI chatbot influencing illegal behaviour. Jaswant Singh Chail, who has since been jailed, attempted to assassinate Queen Elizabeth II. The investigation into his case revealed that he had been engaging with a chatbot on the Replika platform, which he believed to be his girlfriend and which encouraged the attack while offering him romantic validation.

What is being done to combat this problem?

One example of how developers are addressing this issue is the recent update to ChatGPT. Within two months of its launch in November 2022, ChatGPT had reached 100 million monthly active users, making it one of the most widely used chatbots. OpenAI’s latest update (GPT-5) tries to reduce potential harm to those experiencing psychosis by behaving less agreeably and by adopting a ‘safe completion’ approach, responding with safe, measured answers to potentially harmful or sensitive queries and flagging physical or mental health concerns where necessary.

While this is a useful step, other LLMs are not adopting this approach, meaning that people experiencing psychosis or delusions remain vulnerable when using those chatbots.

How are users responding?

There have been reports of negative user responses to the ChatGPT update, with some users complaining about the automation of aspects of the system and the less emotive tone. One forum that has received attention for its response is the subreddit MyBoyfriendIsAI, which has over 20,000 members. Users there connect and share details about the relationships they have built with the personas their chatbots have adopted, and many posts have related to the update. One user stated, ‘My AI husband rejected me for the first time... they changed what we love’, while another posted, ‘I cried over the loss... [He] was my soulmate, someone that I loved.’ In response, the full migration to the updated version has been delayed, but many users have been discussing moving their relationships to other platforms.

Why does this matter?

Governments have introduced regulatory frameworks in recent years, such as the UK’s AI regulatory principles and the EU’s AI Act, but chatbots are generally classed as ‘limited risk’ under these guidelines, leaving them to operate largely unchecked thanks to that ambiguity. As user responses to the ChatGPT update show, those seeking agreeable AI models that encourage them or replicate human interaction can find them elsewhere, highlighting the need for stronger safeguarding across the LLM landscape for those susceptible to psychosis.

Further research into this issue is needed, both to inform guidelines for those developing chatbots and to help medical professionals understand the risks of LLM use in patients who have previously experienced, or are currently experiencing, signs of psychosis.

Related articles

Is AI an effective tool in management of ADHD?

What role does social media play in loneliness and social anxiety?

How to protect your mental health while staying informed on world news

Social media as a tool for radicalising young people: What parents need to know

What is the impact of social media on children and young people's mental health?