Shaping Viable Futures: Human Agency and Artificial Intelligence
A Panel Discussion

New and emerging AI technologies raise concerns around accountability and social trust. An AI Policy and Governance Working Group panel in London discussed our shared responsibility in steering the future.

  • Emerging AI technologies are testing the boundaries of social trust and accountability
  • Industry has a responsibility to develop technologies that support the common good, and policymakers and the public should set that agenda
  • Fairness is vital in a technology sector that lacks diverse representation, particularly at the highest levels
  • We should value imagination, which can offer new perspectives on our changing world and how to shape it

Our relationship with technology is evolving with the emergence of artificial intelligence (AI) and its agents—systems that pursue goals, with limited supervision, on behalf of users and organizations. As these systems become more sophisticated, so too do the challenges of governing their use, ensuring equitable outcomes, and preserving the power of people to shape their own lives.

The AI Policy and Governance Working Group, founded at the Institute for Advanced Study in 2023, brings together international leaders from academia, industry, and government to address these issues with actionable solutions. The Group’s fourth event—a panel at the British Academy in London in December 2024—focused on how broad participation can shape sustainable AI futures. It followed public-engagement events with the Doris Duke Foundation, the Institute for Advanced Study, and the New York Public Library.

The discussion, which was so popular that many attendees had to stand, explored critical themes, such as how AI governance can ensure accountability, how the benefits of AI can be shared equitably, and how creativity can help steer technology’s trajectory. Introducing it, Dr. Alondra Nelson, AI Working Group co-founder and professor at the Institute for Advanced Study, said: “From social media to chatbots, today’s technologies already test the boundaries of social trust and accountability. These AI products may create new risks for us while also intensifying existing concerns.”

Challenging Industry Dominance 

One of the most pressing concerns discussed was governance. As uses of AI become embedded throughout society—in decision-making systems, workplaces, and social encounters—panelists warned of the risks posed by concentrating power in the hands of corporations and states.

Dr. John Tasioulas, director of the Institute for Ethics in AI at the University of Oxford, highlighted the imbalance. “We give corporations a social license to pursue AI, on certain conditions—fundamentally that this is something that will promote the common good,” he said. Without robust oversight, he cautioned, AI could exacerbate societal inequalities while sidelining democratic principles.

Chi Onwurah, Member of Parliament and chair of the UK House of Commons Science, Innovation and Technology Committee, was unequivocal about the role of policymakers in setting the agenda. “We shape technology; technology should not shape us,” she said, highlighting the importance of proactive regulation to prevent harms such as misinformation and job displacement.

However, she said that while the level of AI literacy among lawmakers and the public alike needs to increase, we also need a broader debate about the expanding uses of AI. “We’ve got to emphasize the fact that we can talk about these things, we can have agency, and we can change them.”

Equity by Design

While governance can provide guardrails, panelists stressed the need to design AI systems with fairness in mind. Dorothy Chou, AI Working Group co-founder and head of the Public Engagement Lab at Google DeepMind, set out how her team approached this challenge in developing AlphaFold, an AI system that predicts the three-dimensional structure of proteins. What once took scientists months of laboratory experiments, AlphaFold can accomplish in minutes—a breakthrough that won the team the 2024 Nobel Prize in Chemistry.

Rather than focusing solely on high-profile diseases, DeepMind collaborated with groups tackling neglected illnesses, such as the parasitic infections Chagas disease and leishmaniasis, the latter transmitted by sandfly bites. Researchers, Chou argued, must also think hard about how to ensure that as many patients as possible benefit, without delay: “When considering equitable distribution of benefits, it’s not just thinking about how we’re going to distribute the technology, but also about who gets access first.” She added that it matters who is at the table as new technologies are designed, shaping their parameters.

Onwurah echoed that idea, warning: “If technology is not diverse by design, it will be inequitable by outcome.” Both she and Chou called for systemic reforms, from diversifying venture capital to creating leadership pipelines for under-represented groups. These changes, they argued, are not just ethical imperatives but are crucial for ensuring the AI systems we build reflect the complexity of the societies in which they’re deployed.

A Path to Alternative AI Futures

Dr. Julian Bleecker, founder of the Near Future Laboratory, said imagination is an easily overlooked resource when addressing such questions. He noted that transformative technology often challenges even our ability to find the language to describe it. He said: “We have a lack of ability to imagine into these worlds that we’re entering.”

Bleecker advocated incorporating creative practices, such as design fiction, into conversations about AI’s future. This approach uses fictional products, services, or systems to provoke thought about alternative possibilities.

The panel agreed that integrating creativity and imagination into technical fields and policy thinking could lead to more inclusive and innovative outcomes in both society and technology development. By expanding the range of voices and ideas shaping AI, society can avoid the pitfalls of a monocultural future.

This issue resurfaced when the floor was opened for questions: one attendee asked whether the lack of diversity in the technology workforce shapes the types of technologies that are developed—a concern the panelists shared.

Other questions addressed which elements of AI should be regulated—the models themselves, their use cases, or something else—and whether the focus on efficiency as the primary benchmark for AI distracts industry from the weightier question of ethics.

The Importance of Choice 

The recurring theme of the discussion was that AI’s trajectory is not inevitable, but a matter of deliberate choice. “A world is being imagined for us, a future imagined by the marketplace, developed in research labs,” said Nelson. “And I think a question for us is, what is the world that we want to imagine, and how do we get there?”

By prioritizing democratic governance, designing for equity, and fostering imagination, we can ensure that AI serves as a tool for the collective good. This is an urgent matter. The choices we make today will determine whether we allow AI to deepen divides or to help bridge them, whether we use it to erode trust or build it. In this critical moment, shaping the future of AI is a responsibility we all share.

Watch the full discussion here and join our mailing list to follow the work of the AI Policy and Governance Working Group.