
Feeling Automated:
Ethical Development of Emotional & Interactive AI Systems

The deployment and use of interactive, human-mimicking AI systems that emulate empathy (referred to in this project as ‘Social AI’) is rising sharply. These take the form of tools specifically marketed as AI companions, therapists or characters one can interact with, or general-purpose systems (such as ChatGPT and Meta AI) used for emotional or affective purposes.


Source: American Psychological Association Website

Evidence of harms (both individual and societal) arising from Social AI is growing and is being documented both in the APAC region and across the world. Identified harms include adverse mental health consequences, users developing addiction to and emotional dependency on AI systems, and harmful outputs that are aggressive or perpetuate stigmas. Social AI systems are often designed to be addictive, deceptively anthropomorphic, and sycophantic, raising concerns about manipulation and interference with user rights.


This project aims to examine the legal and societal dimensions of these systems, exploring (i) the risks and harms of these technologies, (ii) the need for policy action, and (iii) the shape that regulation can take. We aim to release a comprehensive report on policy pathways to ensure the ethical design and development of Social AI, drawing on a literature review, multi-stakeholder consultations, and testing of Social AI systems in consultation with experts.

The Bigger Questions

How can Social AI systems and emulated empathy create positive outcomes? Can emulated empathy be ethical, or is it fundamentally deceptive? Is it desirable to frame such systems as substitutes for mental health workers and human experts? We also aim to explore larger questions around the desirability of these systems and to interrogate our changing relationship with machines.


Interaction between Sriya and ChatGPT on 13 May 2025

Project Focus Areas

Regulation and Policy Responses for Ethical Design 

The evidence of harms posed by Social AI calls for regulatory attention that combines legislation with policy instruments, standard setting and design guidelines. This project will spotlight open questions and suggest key action pathways for multi-stakeholder coordination among regulators, designers, mental health professionals, educators and others, to ensure that such systems are deployed with ethical design and development, transparency, and harm prevention and mitigation in mind. We will also discuss the need to move beyond traditional frameworks of consent towards regulatory and policy approaches that apply at the design and testing stages.


Consumer Protection

Social AI is often marketed as a consumer product, for example through home devices or online services. We aim to explore how consumers can be protected against manipulative and addictive design features, and how they can integrate these systems into their lives safely. We also draw on learnings from our project ‘Design Beyond Deception’, where we investigated the impacts of 'dark patterns' on users, markets and society at large, using participatory methods and human-centered design to create a manual on responsible design for practitioners. We now take this work forward in the context of AI systems, expanding its scope to manipulative design tactics as well.


Mental Health and Minor Safety

Given the emotional component of interactions with Social AI, the effects on mental health need to be studied. We aim to consult with experts to understand the impact of Social AI on human relationships and psychology. Since these systems are frequently accessed by minors, we also discuss child protection as a policy pathway and the implications of using these systems at a formative stage of development.


Privacy and Data Protection

Social AI systems can access historically inaccessible data points on emotional states and intimate information. We assess the efficacy of current regulatory approaches to privacy and data protection for emotional use cases.

In the Media


Our Research Fellow, Sriya, will be speaking at PostCode 2.0 (Therapy x AI), a first-of-its-kind online interdisciplinary learning forum dedicated to the intersection of therapy and AI, featuring expert discussions by psychologists, mental health researchers and technologists. 

Blog Series


Trust Me, I'm Not Human: Introducing the Feeling Automated Project

Connect with us

Interested in this project? Get in touch with us!
We would love to hear from psychologists, educators, and experts in law, policy and academia working on human- and empathy-mimicking AI systems.

Project Lead: 

Sriya Sridhar, Research Fellow,

sriya@pranavainstitute.com

©2025 by The Pranava Institute.
