By Dhanyashri Kamalakannan

TPI at the UN IGF 2023 - Project Launch and Insights

Our team at Pranava attended the Internet Governance Forum (IGF) 2023, an annual event hosted by the United Nations to discuss internet policy and governance issues with stakeholders from industry, government, academia, civil society and individual users. This year’s IGF was held in Kyoto, Japan, with over 300 sessions across 8 broad themes including AI & Emerging Technologies, Data Governance & Trust, Digital Divides & Inclusion, Global Digital Governance & Cooperation, and Sustainability & Environment, with the central overarching theme being ‘The Internet we want - empowering all people’.


Aside from attending various thought-provoking sessions, the Pranava Institute also launched its Design Beyond Deception project on Day 1 of the UN IGF 2023. At our launch event, a global line-up of speakers discussed deceptive design policy and law across the world. Here are some key takeaways and insights from the sessions we attended, along with a snippet of our project launch event and learnings from our session:


Design Beyond Deception Project launched at the UN IGF 2023


Deceptive design, also commonly known as “dark patterns”, obscures or impairs consumer autonomy and choice, tricking users into taking actions they may not otherwise take. These design choices undermine privacy, consumer protection, and trust in online products and services.


Why deceptive design at the UN IGF?

Regulators worldwide, including the FTC in the US, the European Commission, and consumer councils and authorities in the Netherlands, Norway, Australia and India, are investigating deceptive practices. Regulators and data protection authorities are examining the consumer harms, data and privacy harms, and competition harms that result from deceptive patterns.


Extensive academic literature exists on dark patterns, but very little material informs practitioners about responsible, ethical, and trusted design practices. The Design Beyond Deception project was an 18-month effort to bridge the gap between academia and practice, involving 4 large focus group consultations, over 50 global expert interviews from across disciplines, and more than 20 in-depth interviews. The project was supported by the University of Notre Dame and IBM’s Tech Ethics Lab. Using a human-centered design approach, the project led to the creation of a manual for practitioners and a research series.


The Design Beyond Deception manual consists of frameworks, activities and teamwork exercises for design practitioners to engage with in creating safe and trusted digital environments and experiences.


The Unpacking Deceptive Design research series is a collaborative attempt by multiple contributors from diverse disciplinary perspectives to investigate deceptive design within emerging technologies, communities and cases in the Global South. It focuses on understudied areas and harms, ranging from crafting a definition of deceptive design to deceptive design in voice interfaces.


Learnings from our session


Deceptive Design Policy and Law across the world

Our global line-up of speakers at the launch event spoke about deceptive design and its regulation in their regions and areas of expertise. Some excerpts and significant highlights from their talks:


Caroline Sinders is a critical designer, researcher, and founder of Convocation Research + Design. She spoke about the importance of design and interdisciplinary thinking in creating regulation and investigations that help mitigate the harms caused by deceptive design patterns. She noted that:

An interdisciplinary lens is important in research on healthy or trustworthy design because it will shape forms of regulation and also how users create safety. An understanding of how products are made, how they are tested, and the ability to conduct different kinds of analysis on the interface itself is crucial.

Caroline Sinders also emphasises that ‘Design can be an equalising action that distils code and policy into understandable interfaces. We need more collaborative interdisciplinary research between policy makers, regulators and designers.’

Chandni Gupta is the Deputy CEO and Digital Policy Director at the Consumer Policy Research Centre, Australia. She shared insights from the Centre’s evidence-based research on common deceptive patterns in Australia and their impact on consumers, with a focus on quantifying the harms. Some excerpts from her speech:

A survey of a nationally representative sample of 2000 Australians indicated that 83% of them experienced one or more negative consequences as a result of dark patterns on the web, and yet 8 out of 10 dark patterns can be used in Australia without any consequences for the businesses. The qualitative part of the research highlighted three crucial harms of these dark patterns:

  • Lack of meaningful choice

  • The pervasive pressure put on consumers, especially once their personal details have been shared

  • Businesses aren’t held accountable for any of these practices.

The impact was compounded among younger consumers, who experienced significant financial and data harms. For businesses, the impacts include the deterioration of consumer trust and loyalty in the long run.

Businesses, governments, regulators, designers and everyone in the digital ecosystem have a role to play in ensuring a fair, safe and inclusive digital economy for consumers.

Cristiana Santos is an Assistant Professor of Privacy and Data Protection Law at Utrecht University and an Expert of the Data Protection Unit, Council of Europe. She spoke about deceptive design from a legal standpoint, highlighting the following:

An analysis of more than 100 cases from consumer and data protection authorities across the US and EU showed that dark patterns are mentioned only in a general way, without each practice being qualified as a concrete, granular type of dark pattern.

Enforcers should name and publicise violations as dark patterns in their decisions, which will lead organisations to factor the risk of sanctions into their business calculations.

Claims for non-material damages arising from consent violations are not being used in the redress system, even though there are so many decisions related to dark patterns and violations of consent interactions.


Maitreya Shah is a lawyer and Fellow at the Berkman Klein Center for Internet and Society. He spoke about his article for our research series on accessibility overlays and their harms to people with disabilities, discussed his work on AI bias, fairness and ethics, and briefly touched upon dark patterns in AI and emerging technologies. Maitreya mentioned:

Accessibility overlay tools claim to make websites accessible for people with disabilities in line with multiple international standards and regulations, but in reality they make these changes only at the interface level, without any alterations to the source code. This leads to various harms for people with disabilities, such as obstruction of use, violations of their privacy, and being lured or manipulated into purchases through inaccurate descriptions.

Deceptive design practices in technologies like generative AI revolve around the anthropomorphic abilities of chatbots and interactive tools built using large language models, as well as data mining practices that violate users’ privacy.


Watch our session video here - https://www.youtube.com/watch?v=hWzm1nFCr48


UN IGF - Key Takeaways and Insights


The rapid evolution of emerging technologies and AI warrants standardisation, regulation and governance that evolve at the same pace. To facilitate this discussion at the IGF, multiple sessions, including main sessions, workshops, open forums, and more, were held under the sub-theme of AI & Emerging Technologies. The main session, titled ‘The AI we want’, focused on inclusivity in AI governance standards. The session emphasised that inclusivity at the national and global level ensures not only a diversity of perspectives but also that the unique needs and challenges of different communities and regions are addressed. The main session, along with multiple other sessions in this theme, spoke about the role of AI standards in the development and use of responsible AI technologies.

The sessions highlighted the importance of gathering expertise from across different stakeholder groups, including civil society, academia, the technical community, industry, and regulators/government, in order to address capacity building, promote participation, and advance global alignment in developing AI standards. They also emphasised that efforts should be made to include stakeholder groups that are underrepresented in this domain, in order to identify and address their needs.

An important call to action emerged for the development of AI technologies that can be both globally operationalised and flexible enough to adapt to local specificities and cultural contexts: formulating comprehensive and inclusive policies by meaningfully involving all stakeholders across all levels of the AI policy ecosystem, to facilitate responsible development, governance, regulation and capacity building.
