
AI Fought the Law, and Did the Law Win? Navigating the Complexity of Regulating Social AI

  • Writer: Sriya Sridhar
  • 1 day ago
  • 6 min read

Exploring why AI regulation is more difficult than it appears, and the factors that make regulating Social AI particularly complicated.


This is the second post in a blog series by Sriya Sridhar, Research Fellow at The Pranava Institute, written as part of the Feeling Automated Project.


In our previous blog post as part of the Feeling Automated project, we underscored the need for regulators to pay attention to the mounting evidence of harms arising from the deployment and use of Social AI (i.e., human- and empathy-mimicking AI systems). But recognising that certain harms exist is one thing; determining the shape that regulation should take is a more complex exercise. This complexity comes on the heels of an ever-growing list of lawsuits alleging psychological harm from emotional interactions with AI, which raise questions of product and service liability, of which AI outputs are protected by free speech or intermediary liability rules, and of the extent to which these platforms owe duties of care to users.


AI Regulation: A Mixed Landscape


This regulatory complexity is not limited to Social AI. The boom in the use of generative AI models has forced regulators to grapple with issues of privacy, copyright, and accountability for risks and harms. We have seen a variety of regulatory models, from the EU's Artificial Intelligence Act to the UK's more principles-based approach, and model regulations in Japan and South Korea. Crafting regulation that incentivizes continued innovation in AI where there is benefit (such as in medical applications), while curbing the harms caused by unfettered mass usage of generative AI, is a challenge. It will involve attributing legal liability across the AI value chain and regulating both the process of development and the outcomes that cause harm. Often, the harm caused is also difficult to remedy via traditional modes of compensation (such as monetary damages), as the effects are long-term and psychological.


Definitional clarity 


Social AI adds a further layer of complication due to the emotional nature of the interactions. At a macro level, regulating emotional interactions, which have typically been viewed as part of the private sphere of human relations, sits in tension with the need to protect people, particularly more vulnerable people, from broader societal harms. This must be balanced against treating Social AI systems as 'products' or 'services' under legal regimes and holding them to certain standards (especially when they make claims about therapy or emotional benefits). Automobiles, for example, must be certified for safety, and food products must be registered with food safety standards authorities. Consumer protection regulators thus face the challenge of respecting users' voluntary usage of these systems while still treating the platforms as services held to standards of safety and oversight.


Current forms of regulation are also primarily built around user-to-user services, like social media. Social AI tools like AI companions blur the lines between content, services, and interfaces, making it difficult to determine the liability of the platform in question.

In addition, while there are clearer legal obligations in relation to user-generated content, Social AI tools further blur the line between traditional user-generated content and user personalisation driven by the platform. On some platforms like Character.ai, characters are at least co-generated by users, if not entirely user-conceived. This creates a regulatory challenge at the root: laws depend on precise definitions for their scope, and whether a product or service provider falls within a given category will likely determine its obligations, liability, and even penalties. For example, if Social AI platforms are treated as mere conduits like social media platforms, their content is protected by intermediary liability provisions; if they are instead services driven by user personalisation, their liability is likely to be higher. Similarly, the extent to which they can be held responsible may depend on the degree of user involvement in that personalisation.


The question of harm: Correlation vs. Causation


Current regulatory efforts


Regulatory efforts so far have leaned towards targeting specific purposes for which Social AI is used. For example, the State of Illinois recently banned generative AI for therapy, and the State of New York recently enacted a law requiring platforms to monitor chatbot conversations for suicidal ideation or self-harm and to direct users to crisis hotlines. Another strategy, followed in New York and California, is to introduce friction into chatbot interactions: for instance, displaying pop-ups to remind users that they are interacting with an AI tool, content moderation, and age-gating for minor users. These measures are useful, given the immediate and pressing need to remedy situations where people are experiencing addiction, delusions, and other forms of psychological harm, particularly at a time of broader concern around minor safety and AI.


Moving beyond explicit harm to implicit harm


While the above-mentioned regulatory measures are certainly needed and useful, they do not tackle the broader issue of manipulation by Social AI systems, which is often not attributable to direct intention on the part of the provider but is rather a structural condition of the nature of the tool. Social AI is fundamentally incentivized to keep users engaged, making emotionally driven, potentially manipulative conversations a feature rather than a bug. The two-way relationship between users and Social AI tools, often pursued willingly, blurs the line between the legitimate tactics of persuasion used by all online platforms and addictive design. Experts have discussed how features like sycophantic behaviour and emotional manipulation that prevent users from logging off are required for platforms to continue profiting from Social AI. What could be justified as a feature essential to the nature of the service is therefore difficult to distinguish neatly from overtly deceptive behaviour. The current lawsuits against OpenAI show that it may be legally difficult to establish that OpenAI knew about these risks and deliberately chose not to adopt safeguards, and to establish exactly at which point guardrails failed and drove users to delusions or self-harm. Testing for 'defectiveness' in the way that consumer protection regulators can for more traditional products or services will be difficult in the context of Social AI.


Finally, whether harm was caused by the interaction, or whether the interaction was merely a contributing factor, is crucial to any legal assessment.


Harms caused by the addictive design of Social AI tools are often the result of a chain reaction: beginning with a relatively innocuous interaction and devolving into something more emotional and potentially harmful as the user discloses more about their mental state.

Harm-based regulation typically looks for causal links between the alleged harmful act and the harm caused. Social AI-driven harms challenge these basic principles.


Addressing Anthropomorphic Design as the root of the problem


Ultimately, current regulatory measures primarily address either very specific types of harm or redress once harm has occurred. However, given the sheer scale of psychological effects and harmful outputs we are observing from Social AI, despite companies' claims that these harms are being effectively mitigated, we need to rethink the desirability of anthropomorphic design. Some experts argue that the very fact that chatbots respond as humans do, claim to have understanding, or imitate human gestures and verbal cues is deceptive by design. Incorporating disclaimers or repeatedly reminding users that they are interacting with AI is unlikely to mitigate the trust-building effect of human-mimicking language, whose long-term effects we still do not understand.


As these models increase in linguistic sophistication and become more action oriented, the level of personalisation required to sustain engagement only increases, consequently increasing the risk of deception and manipulating cognitive vulnerabilities. As companies move towards Large Action Models (LAMs), these systems could “observe how the user navigates a broader digital ecosystem – noting hesitation at specific decision points, responses to urgency cues, or even engagement patterns across unrelated interfaces. This holistic perspective allows LAMs to recalibrate the user’s choice environment, creating interactions that exploit specific behavioural tendencies.” (Leiser, 2025). 


Taking a structural view of regulation


This form of infrastructural manipulation, when combined with emotional interactions, stands to markedly increase the harmful potential of Social AI. Regulators must move beyond traditional understandings of intent, targeting, and causal links between actions and harm, towards a more structural view of manipulation and the manipulative potential of digital environments, particularly where the incentive to engage underlies the technology (Sax, 2021). The Federal Trade Commission's recent inquiry into companion chatbots provides promising direction: it seeks (among other things) data on how companies monetize user engagement, develop and approve characters, measure, test, and monitor for negative impacts before and after deployment, and mitigate those impacts.


Examining business structures and skewed incentives, and challenging the very premise of anthropomorphic design, are important steps towards more effective regulation. In our forthcoming report on the Feeling Automated project's findings, we aim to focus on implicit harm and on tackling anthropomorphic design at the outset.



References:


  • Mark Leiser, 'Dark Patterns, Deceptive Design, and the Law: AI's Hidden Influence on Our Digital Experience' (Hart Publishing 2025), Chapter 8.

  • Marijn Sax, 'Between Empowerment and Manipulation: The Ethics and Regulation of For-Profit Health Apps' (Wolters Kluwer 2021).


©2025 by The Pranava Institute.
