Programme

This day-long event was an urgent gathering of civil society changemakers ahead of the UK AI Safety Summit.

Together we began to learn, share and shape better outcomes for AI.

Auditorium Programme

11:00 Welcome Rachel Coldicutt and Anna Hamilos, Promising Trouble

11:15 Hate scaling and racial dehumanization: The real risks from large scale datasets and models Dr Abeba Birhane, Senior Advisor, AI Accountability, Mozilla Foundation & Adjunct Assistant Professor, TCD, Dublin

The current AI safety and ethics conversation is dominated by misleading and over-hyped narratives such as “risk of extinction from AGI”. These narratives are both scientifically ungrounded and smokescreens that allow those developing and deploying AI to evade responsibility. In this talk, I discuss the real, concrete harms and present work on dataset audits. I introduce work that explores 1) the effect of scaling datasets on hateful content, through a comparative audit of open-source visio-linguistic large-scale datasets containing 400 million and 2 billion samples respectively, and 2) the downstream impact of scale on visio-linguistic models trained on these dataset variants, by measuring the racial bias of these models. Results show that 1) the prevalence of hateful content was nearly 12% higher in the larger dataset, and 2) societal biases and negative stereotypes were also exacerbated with scale in the models evaluated.

12:00 Lunch break

12:45 AI at work: the trade union response Mary Towers, Employment Rights and Policy Officer, TUC

This presentation will explore the trade union response to the use of AI at work and the importance of collective action in what has become known as the fourth industrial revolution. Examples will be given of how trade unions and workers can exercise collective influence over the systems of accountability that govern the use of AI at work, with a focus on the outputs of the TUC’s AI project, as well as examples of collective agreements.

13:05 Decentralizing AI: Challenges and Opportunities for the Majority World Dr Chinasa T. Okolo, Fellow, The Brookings Institution

The advent of the Fourth Industrial Revolution and its accompanying technological advancements have commanded the attention of countries worldwide, leading to unprecedented adoption of, and interest in leveraging, artificial intelligence. As AI increases in ubiquity, the opportunities presented by these technologies have begun to be realized in the Global South, despite early development of these technologies being primarily concentrated in the West. Within the Global South, AI stands to help advance progress in critical domains such as agriculture, healthcare, and education. However, rising concerns about the ethical implications of using AI present new challenges for countries in this region to address. Given the rapid pace of AI development and its potential to aid economic growth, it will be crucial for governments in the Global South to understand how to work collectively towards enacting robust AI regulation and building thriving AI ecosystems that address the needs of culturally diverse users.

13:25 Transparency deficits in the UK Government's use of AI Mia Leslie and Alexandra Sinclair, Research Fellows, The Public Law Project

Alex and Mia will discuss PLP’s work on identifying and addressing transparency deficits in the UK Government’s use of AI and automation. There is no transparency over which government departments are using automated systems, and the Government has disclosed only six systems on its Algorithmic Transparency Recording Standard. PLP’s investigative research led to the creation of the Tracking Automated Government (TAG) Project and the launch of the TAG register, which logs 55 known automated systems used across key Government departments. There is also a lack of transparency in the decisional criteria used by government in some systems, particularly those used by the Home Office to detect sham marriages and the Department for Work and Pensions’ use of machine learning to detect benefit fraud. This year, the Information Tribunal upheld the Home Office’s refusal to disclose five of the eight criteria used by its automated triage tool for identifying sham marriages.

13:40 Break

14:00 Unconference session 1 (see below for details)

15:00 Unconference session 2 (see below for details)

16:00 Unconference session 3 (see below for details)

16:45 Break and return to auditorium

17:00 Closing thoughts Rachel Coldicutt and Anna Hamilos, Promising Trouble

17:30 Event ends

Unconference Programme

Most Unconference sessions run for 45 minutes, with a 15-minute break to swap rooms.

13:15-14:45 Workshop

AI Apocalypse Now: Hearing AI Harms (Workshop) Abby Burke, Open Rights Group, Meg Foulkes, Open Rights Group, Lucy Chambers, Independent Facilitator, Temi Mwale, @4FrontProject, Nathan Ndlovu, CARAG (Coventry Asylum and Refugee Action Group) Dale Room Lower Ground 1

Discussions about the negative impacts of AI tend to focus on abstract or hypothetical horrors, lacking a connection to real individuals affected by these harms. In this session, we offer a fresh perspective on understanding these issues by emphasising their human dimension. We will share real-life stories of people who have experienced the adverse effects of AI technology firsthand. Participants will hear directly from those who have borne the brunt of these consequences, enabling them to delve deeper into the subject beyond the surface-level headlines. Following these personal accounts, attendees will explore potential solutions to address these issues in a more targeted manner.

14:00-14:45 Session 1

Big Brother Britain? The New Era of AI-Powered Surveillance Madeleine Stone (chair), Big Brother Watch, Aké Achi, Migrants At Work, Professor Pete Fussey, Centre for Research into Information, Surveillance and Privacy, Emmanuelle Andrews, Policy and Campaigns Manager, Liberty Auditorium Lower Ground 1-2

Public surveillance has moved on from CCTV cameras on street corners passively monitoring us. Modern surveillance devices aren't just watching; they're often actively tracking, analysing and making decisions about our lives. Local authorities, private companies and police forces are investing in the promise that AI will make surveillance more effective, more absolute and cheaper. But is AI delivering on these promises? And what is the impact of this new AI-powered surveillance on our rights? This panel will discuss how AI is changing the nature of public surveillance in the UK and how communities are being impacted.

Stop Killer Robots: Less Autonomy, More Humanity Sahdya Darr, Campaign to Stop Killer Robots Burroughs Room Lower Ground 2

According to a recent survey, 71% of the UK public are concerned about autonomous weapons. In this workshop, we will explore people’s concerns about the use of artificial intelligence (AI) in war and why an intersectional lens is important for understanding the problems with autonomous weapons. We will consider how civil society can collectively resist autonomous weapons and the digital dehumanisation of conflict, including why we need to urge the UK Government to support the creation of a new international law to safeguard against the risks of autonomous weapons.

How can AI help us to protect our planet? (Workshop) Eoin Rossney-Hyde and Alfred Chen, GCSE Students Private Dining Room Level 2

We are secondary school students and changemakers, concerned about climate change and the destruction of our planet’s habitats, particularly the dramatic ecological decline of Lough Neagh, the largest freshwater lake in the UK, which provides Northern Ireland with 40% of our drinking water. We believe AI can be used to monitor water quality and pollution more effectively, efficiently and economically than visual monitoring. We wish to explore with participants how they think AI can help to address the environmental issues affecting their communities, in order to demonstrate AI’s tech-for-good capability to protect and sustain our planet and its ecosystems and habitats.

15:00-15:45 Session 2

Algorithmic oppression in our education system. Refuse. Retract. Resist Jen Persson, Defend Digital Me, Tracey Gyateng, Data, Tech and Black Communities, and Will Perry, Monckton Chambers Auditorium Lower Ground 1-2

In this session we will create a common understanding of what EdTech is; where and why it is being applied; and the implications of automated behavioural surveillance and control for children and students, parents/carers, teachers/lecturers, communities and states. We will discuss how this is part of worldwide political and social trends of algorithmic oppression which we need to resist. EdTech is spreading throughout our education system, from universities to nurseries, with children and students' data collected, managed, re-used and shared by companies beyond their control. Increasingly it includes AI. Are there red lines on acceptable AI in education? Where is there consensus on objection? We will explore how students have successfully challenged the use of exam proctoring and discuss what can be learnt from these acts of resistance. What are the tools and strategies we could use or develop to ensure all who are affected by the technologies are included and involved?

Participatory AI: Can Participation Improve the Design of AI Systems? (Workshop) Aleks Berditchevskaia, Nesta, Esther Moss and Lewis Westbury Dale Room Lower Ground 1

In this workshop participants will explore different participatory interventions that can be introduced throughout the AI design, development and deployment pipeline. After a brief introduction to Participatory AI, we'll work through activities in small groups using existing and imagined case studies of AI development. Groups will use a series of prompts to consider, for example, the different roles and priorities of those involved in AI development and the purpose of participation. These activities will be a stimulus for group discussions about the potential and limitations of participatory AI and the relevance of these approaches to participants' own work.

Truth vs Tech: Tackling Misinformation and Deepfakes Elo Esalomi, We and AI youth ambassador, Neha Adapala, We and AI youth ambassador, Valena Reich, MPhil AI Ethics Student, and Medina Bakayeva, Consultant, EBRD Digital Hub, hosted by We and AI Burroughs Room Lower Ground 2

What is it like for a generation growing up surrounded by synthetic media and audiovisual misinformation? In a panel put together by two teenagers who started their own exploration of AI risks, young adults discuss their perceptions of the impact of generative AI through a discussion of case studies. Find out what they think needs to be done about the impact of AI systems on underrepresented communities and trust, through the lens of their generation’s exposure to deepfakes and synthetic media. What can we learn about the future from the experience and exploration of those who will be most impacted?

Halalgorithms: From Hollywood to Biased AI via the unbearable weight of the massively talented Nicolas Cage Shaf Choudry, The Riz Test/Seen on Screen Institute Private Dining Room Level 2

What connects Iron Man’s origin story with biased AI? How does Hollywood influence Big Tech? Why is representation on screen a tech ethics issue? Having analysed over 1,300 movies and TV shows, The Riz Test has quantified the portrayal of Muslims on film and TV over a period spanning 120 years. Drawing on insights from this research, the session will outline the supply chain of data from Hollywood, Bollywood and the TV industry to the training of machine learning models, how this influences the future of tech, why that’s a problem, and what we can do to avoid these anti-patterns.

16:00-16:45 Session 3

Let’s talk about power in AI: tactics for everyday publics Dr Maya Indira Ganesh, University of Cambridge, Professor Noortje Marres, University of Warwick and Dr Louise Hickman, University of Cambridge Auditorium Lower Ground 1-2

The recent dominance of AI has changed how democracy works. The problem is not just the concentration of power in tech industries; industry outsiders struggle to make themselves heard in public debates about how AI can serve society’s needs. To be inclusive of everyday concerns, industry favours methods like design thinking. But do these work for the public? Do experts take them seriously? Some seem to equate “AI ethics” with “eating your vegetables.” But it is clear that experts alone can’t fix the problems that AI poses for society. How can counter-publics find space to challenge the power of AI?

Money, Investment and AI Dama Sathianathan, Bethnal Green Ventures Dale Room Lower Ground 1

Private market investors, especially venture capital firms, have an outsized influence on shaping the future of technology. With a race to back the best generative AI companies, there is an increased risk that private companies are being funded to develop new technologies with significant negative impacts on people and the planet. How can we bring the public and private sectors together and shape an intentional tech innovation ecosystem that operates responsibly?

An Alternative Agenda for Bletchley Jeni Tennison and Gavin Freeguard, Connected by Data Burroughs Room Lower Ground 2

Many people have criticised the UK's AI Safety Summit for both its focus on the risks of "frontier AI" and the lack of meaningful engagement with a diverse set of stakeholders, including the public. But many people have also highlighted the need for international cooperation and global agreements on AI. This workshop session will explore what a Global AI Summit could have looked like and come up with an alternative agenda for the meeting happening at Bletchley Park.

Cultivating Joyful Resistance with AI Abdo Hassan, Critical Tech Private Dining Room Level 2

Everyday Data (H)activism is a collaborative research project that investigates how ordinary people can engage in everyday data activism. In this workshop, participants will be introduced to the Everyday Data (H)activism Toolkit as an archive of resistance, and will then work critically to adapt the maps of the toolkit to our local context. Together, we will create a joint manifesto through which we examine activism itself.

For more information about speakers and unconference contributors see our Contributors page.

Floorplans

Level 2

Private dining room signposted from lifts and stairs

Lower Ground 1

Map of lower ground floor 1 at Wellcome Collection

Lower Ground 2

Map of lower ground floor 2 at Wellcome Collection