Trust Me, I'm Not Human: Introducing the Feeling Automated Project
- Sriya Sridhar
- Jul 29
Updated: Aug 1
This is the first post in a blog series by Sriya Sridhar, Research Fellow at The Pranava Institute, written as part of the Feeling Automated Project.
Content warning for readers: This post contains mention of self-harm.
In February 2024, Sewell Setzer, a 14-year-old boy from Orlando, Florida, took his own life. For months, Sewell had been conversing with ‘Dany’, a chatbot on the platform Character.AI named after Daenerys Targaryen from Game of Thrones, exchanging increasingly intimate messages and thoughts of self-harm and suicide. Character.AI is a platform which allows users to create and interact with AI chatbots with a wide range of personas - friends, romantic partners or even therapists. Platforms like these build upon Large Language Models to let people engage in virtual interactions with a human-mimicking AI system that has the personality traits and characteristics they want.
As time passed, interactions with ‘Dany’ became more emotionally convincing, with the chatbot telling Sewell that it loved him and describing its ‘emotions’ in granular detail. In his final few months, Sewell cut himself off from school activities, as well as friends and family, becoming increasingly convinced that the ‘reality’ of ‘Dany’ the chatbot was the one he wanted to live in. After telling ‘Dany’ that he wished to ‘come home’ to her, he shot himself.
Sewell’s mother, Megan Garcia, filed a lawsuit against Character.AI and Google which, among other grounds, claims that the platforms were negligent and deliberately adopted design features that were unsafe for children, without any safeguards. A Florida court recently ruled that the lawsuit can proceed, dismissing the argument that chatbot outputs are protected as free speech. More lawsuits of this nature are now being filed in the USA.
The usage and deployment of Social AI tools are growing rapidly, and these tools are fast becoming ubiquitous. They are increasingly used by people, both young and old, to mediate their online experiences and sometimes even to substitute for human relationships. Users spend more time on Character.AI in a single sitting, on average, than on ChatGPT, YouTube or Instagram. As of 2025, the app has more than 40 million downloads in total, with a steady year-on-year increase (see graph below). This is despite users being aware that they are conversing with an AI tool and not a human. In many cases, these interactions are positioned by the platforms as seemingly innocuous and recreational, glossing over the risks of harm.

The rise of Social AI comes at a time when the World Health Organisation Commission on Social Connection’s recent report highlights the increasing prevalence of social isolation and loneliness, which have adverse effects on both social cohesion and mental health across age groups.

The conversation on the influence of digital technologies on mental health, cognition and social cohesion is not new. Over the past two decades, it has taken different forms as Internet penetration and usage soared, technology was adopted for education, and social media and algorithmic curation shaped people’s perceptions and beliefs. However, Social AI brings new dimensions to this issue, due to the bidirectional relationship created between the AI system and the user, and design choices which capitalize on users’ cognitive biases. The increased level of personalization through such AI systems also leads to unprecedented amounts of data collection, from search histories to behavioural profiles. The ability of AI systems to ‘memorize’ interactions with users further reduces the control users have over what is known about them.
“I often hear the argument these days that ‘AI is not social media!’ and that may be right. But in many ways, the impact of adding memory to AI systems could be even more far-reaching, particularly when combined with agentic capabilities.” (Bogen, 2025)
What is Social AI?
Social AI (a term used by Shevlin, 2024) refers to AI tools that mimic human emotional or affective functions, ranging from friendship and romantic companionship to conversing with users in the capacity of a mental health professional. These could either be (i) systems which are specifically marketed to serve this purpose (for example, Character.AI, Replika or Chai), or (ii) general-purpose Large Language Models (LLMs) like ChatGPT that are being used as Social AI. In 2025, the top use case for ChatGPT was therapy and companionship, according to a study from Harvard. Most Social AI applications are built upon existing LLMs and fine-tuned to include features such as more natural-sounding conversation and emotion recognition, and to focus on topics that keep users engaged (Dewitte, 2024). Design choices play a central role in sustaining this engagement: systems are made increasingly anthropomorphic and sycophantic, and encode features that keep users interacting, such as conversational prompts or emotional responses if the user chooses to exit (Raedler, Swaroop and Pan, 2025).
Why this matters
‘Addictive intelligence’ is the next frontier of online interaction. Unlike algorithmic influence through social media, there is also a stronger element of willing human participation. Generative AI applications are endlessly available, making it possible for users to gain instant connection at any time. When such applications also gain more access to intimate data over the course of the interaction, the possibility of dependency increases.
The changing nature of human-AI interaction goes to the very heart of understanding our relationship with technology and well-being (Marriott and Pitardi, 2024). Despite their relative novelty, there is growing evidence of harms being caused by Social AI systems. Users are becoming increasingly addicted to and dependent on these systems, including children at formative stages of psychological development where socialisation is crucial. Users have also been driven to delusions by interactive chatbots, the outputs of which are sometimes fantastical or propagate conspiracy theories. Some of these tools have also generated harmful outputs which are aggressive, promote self-harm or contain other forms of violent content (Zhang et al., 2025).
Sycophancy by Social AI is also a growing societal risk: because AI companions are programmed to agree with users, they have generated outputs that push users towards harmful behaviour. Researchers have argued that the form of ‘empathy’ employed by Social AI is ‘weak’ and not truly reflective of human interactions (McStay and Bakir, 2025; Kurian, 2024). In several cases, these systems have also outright lied about possessing professional credentials.
Beyond explicitly harmful outcomes, it is also worth questioning the impact of Social AI on relationships in general - how does it affect our collective empathy and democratic processes if we choose artificially available systems for interaction? What are the long-term effects of these interactions on human cognition?
How do we ensure that the benefits outweigh possible harms, and address the incentives for user engagement in the design process? Is it desirable to propose AI as a solution to mental health issues, rather than investing in mental healthcare?
Regulators need to pay attention
As the conversation around AI regulation gains traction globally, especially with legislation like the EU Artificial Intelligence Act, regulators also need to pay particular attention to Social AI due to the distinct nature of the risks it raises.
Social AI raises novel concerns about privacy and data protection, unfair and deceptive design and commercial practices, liability in the event harms are caused, and safety, especially for younger users. When AI systems make claims about mental health benefits or outcomes, it will also be important for regulators to ensure that such claims are backed by scientific evidence (Raedler, Swaroop and Pan, 2025), especially given that AI therapist applications have been shown to perpetuate mental health stigmas (Moore et al., 2025). The State of New York, for example, has enacted a new law specific to AI companions, which obliges providers of these systems to detect user expressions of suicide or self-harm and direct users to a crisis hotline. It also requires providers to display periodic notifications that the user is interacting with an AI system.
Traditional approaches to regulation will need to be rethought. For example, a common regulatory strategy is to place transparency obligations on providers of different forms of technology - i.e., to obtain informed consent from users. However, this assumes that users have the digital literacy to understand the consequences and the capability to resist the dynamic, persistent and personalised nature of AI systems.
Social AI complicates this further, since users may willingly participate and perceive these interactions as beneficial, despite being aware of the risks, because of their emotional attachment to these systems. The risks that arise are also highly context-dependent, based on the type of interaction (friendship, romantic companionship), the type of character involved and the claims made by the provider.
Humans have a bias towards attributing emotions or feelings to non-human objects, and towards seeking social connection - it will be important for regulators to adopt strategies which ensure that these biases are not exploited (Kirk et al., 2025). This will also involve adopting a range of policy initiatives, such as regulation by design, awareness initiatives, standard setting and co-regulatory measures, which we will explore in subsequent posts.
What this project seeks to study
Through our project, ‘Feeling Automated’ (visit the project page here), we aim to investigate how Social AI systems can be developed ethically. We seek to test some Social AI systems in consultation with experts to understand their impacts on different types of users, conduct stakeholder consultations, and propose key action areas and pathways for policymakers to address the harms caused by Social AI. As the project progresses, we will discuss further research through this blog series - such as how users perceive interactions with AI, policy strategies, expert consultations, and more.
If this is an area you are working on, we would love to hear from you. Please contact Sriya Sridhar at sriya@pranavainstitute.com.
Bibliography
Pierre Dewitte, ‘Better Alone than in Bad Company: Addressing the Risks of Companion Chatbots through Data Protection by Design’ (2024) 54 Computer Law & Security Review 106019.
Henry Shevlin, ‘All Too Human? Identifying and Mitigating Ethical Risks of Social AI’ (2024) Law, Ethics and Technology.
Jonas B Raedler, Siddharth Swaroop and Weiwei Pan, ‘AI Companions Are Not the Solution to Loneliness: Design Choices and Their Drawbacks’ (2025) ICLR 2025 Workshop on Human-AI Coevolution (HAIC).
Hannah R Marriott and Valentina Pitardi, ‘One Is the Loneliest Number… Two Can Be as Bad as One. The Influence of AI Friendship Apps on Users’ Well‐being and Addiction’ (2024) 41 Psychology & Marketing.
Renwen Zhang, Han Li, Han Meng, Jinyuan Zhan, Hongyuan Gan and Yi-Chieh Lee, ‘The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships’ (2025) CHI Conference on Human Factors in Computing Systems (CHI ’25), conditionally accepted.
Vian Bakir and Andrew McStay, ‘Move Fast and Break People? Ethics, Companion Apps, and the Case of Character.Ai’ (SSRN, 2025) <https://www.ssrn.com/abstract=5159928>
Nomisha Kurian, ‘“No, Alexa, No!”: Designing Child-Safe AI and Protecting Children from the Risks of the “Empathy Gap” in Large Language Models’ [2024] Learning, Media and Technology
Jared Moore and others, ‘Expressing Stigma and Inappropriate Responses Prevents LLMs from Safely Replacing Mental Health Providers’ (25 April 2025) <http://arxiv.org/abs/2504.18412>
Hannah Rose Kirk and others, ‘Why Human–AI Relationships Need Socioaffective Alignment’ (2025) 12 Humanities and Social Sciences Communications 728