AI Agents + Design Fiction
Governing Tomorrow: How Our Use of AI Agents Could Reshape Society
From managing workplaces to mediating neighborhood disputes, imagined futures with AI agents reveal the ethical and social questions we must answer today—and point to opportunities for research and for policy innovation.
- New artificial intelligence (AI) agents raise questions about privacy, access, and social isolation, as well as trust and human autonomy
- Policymakers and experts must meet these concerns with robust and clear governance frameworks
- Imagining AI agents of the future with practices such as design fiction can help illuminate solutions
- Consideration of possible futures is vital for research agendas, policy innovation, and creative solutions, ensuring that emerging technology applications, such as AI agents, deliver social benefit
The Reading Room of the British Academy in London has echoed with political debate for 150 years. This historic venue—once the dining room of 19th-century Prime Minister and influential policymaker William Gladstone—recently welcomed some of the 21st century’s sharpest minds to explore a new frontier of governance: the future of artificial intelligence (AI).
In Gladstone’s time, governance meant navigating the complexities of industrial power. Today, policymakers and experts face a similarly transformative era, shaped by new industry players, governments, civil society, and intelligent machines. Cutting-edge software programs can interact with their environments, gather data, and use it to accomplish predetermined tasks. How will society adapt as these AI agents become integral to work, leisure, and even personal relationships? What governance frameworks are needed now to address the ethical and social challenges that will inevitably arise?
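In software terms, that description amounts to a loop: sense the environment, decide on an action, act, and repeat. The sketch below is a minimal illustration of that loop, assuming a toy thermostat task; the class, names, and data format are hypothetical stand-ins for the models, sensors, and APIs a real agent would use.

```python
# Minimal sketch of the sense-decide-act loop behind software agents.
# Everything here is illustrative: a real AI agent would replace these
# stubs with models, sensors, and APIs suited to its task.

class ThermostatAgent:
    """Toy agent with the predetermined task of holding a room at 21 C."""

    TARGET_C = 21.0

    def sense(self, environment: dict) -> float:
        # Gather data from the environment (stubbed here as a dict).
        return environment["room_temp_c"]

    def decide(self, temp_c: float) -> str:
        # Use the gathered data to choose an action toward the goal.
        if temp_c < self.TARGET_C - 0.5:
            return "heat"
        if temp_c > self.TARGET_C + 0.5:
            return "cool"
        return "idle"

    def act(self, action: str) -> None:
        # Carry out the chosen action (here, simply report it).
        print(f"agent action: {action}")


agent = ThermostatAgent()
agent.act(agent.decide(agent.sense({"room_temp_c": 19.2})))  # prints "agent action: heat"
```

What distinguishes the AI agents discussed at the workshop from this toy is scale and autonomy: the sensing may span entire workplaces or neighborhoods, and the deciding may be done by large models whose behavior is far harder to predict.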
Gathered at the British Academy to explore these questions in December 2024 was the AI Policy and Governance Working Group. This was launched in 2023 by Alondra Nelson, the Harold F. Linder Professor and head of the Science, Technology, and Social Values Lab at the Institute for Advanced Study, and Dorothy Chou, head of the Public Engagement Lab at Google DeepMind. The group comprises researchers from academia, industry, policy, and civil society across five continents and is committed to embedding public safety in the design, development, and deployment of AI, while developing innovative approaches and guidelines to achieve these goals. For this, their fourth meeting, the group worked with Julian Bleecker of the Near Future Laboratory to envision a “near-future world” of AI agents and to begin to imagine the kind of research and policy innovation that is needed.
Imagining a World With AI Agents
Not only Gladstone but even the political leaders of a couple of decades ago might have been puzzled by the idea that autonomous computer programs could embed themselves in society to such an extent that new governance structures might be needed to anticipate and manage their impact. Yet this is what AI Working Group members set out to imagine.
Participants split into groups to envision products and services that might exist in a near future shaped by AI agents. This design fiction process revealed both possibilities and concerns, illustrating the profound shifts in society these technologies could trigger.
One group considered a human resources agent for companies. It would manage staff welfare, training, and office operations. While this could streamline administration and improve workplace efficiency, it also raised concerns about surveillance. Many companies already monitor the time employees spend at their desks or track their computer activity, but an AI agent could monitor them around the clock (“Our agents don’t sleep so you can,” read the team’s pitch for its “premier operating system,” titled “The Agency”). What privacy boundaries would be acceptable, what would count as a violation, and how would these be negotiated, including through the use of countervailing agents acting on employees’ behalf?
While workplace applications raise issues of privacy and surveillance, use in social spaces raises different challenges. Another team suggested an AI bartender to manage pubs and bars, adjusting music, lighting, and the overall atmosphere to suit different times of the day. By analyzing customer behavior and preferences, the agent could create tailored environments, such as a quiet, remote working space during the day and a lively party atmosphere in the evening. But attendees questioned whether such systems might prioritize commercial goals, such as increased sales, over customer wellbeing—perhaps encouraging overconsumption or manipulating social interactions.
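Stripped to its logic, such an agent is a policy that maps context to settings. A minimal sketch, assuming a made-up occupancy signal and invented setting names:

```python
# Illustrative ambience policy for the imagined bartender agent.
# The hours, thresholds, and settings are all invented for this sketch.

def choose_ambience(hour: int, occupancy: int) -> dict:
    """Map time of day and crowd size to music and lighting settings."""
    if 9 <= hour < 17 and occupancy < 15:
        return {"music": "ambient", "volume": 2, "lighting": "bright"}
    if hour >= 20 or occupancy >= 40:
        return {"music": "upbeat", "volume": 7, "lighting": "dim"}
    return {"music": "easy listening", "volume": 4, "lighting": "warm"}


print(choose_ambience(hour=11, occupancy=8))
# -> {'music': 'ambient', 'volume': 2, 'lighting': 'bright'}
```

The workshop’s worry fits in one line of this sketch: swap the return values for whatever maximizes bar revenue, and the same code quietly optimizes against customer wellbeing.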
A third team tackled a problem with AI agents themselves: the possibility that people might become addicted to using them. The system envisioned by the team would identify patterns of AI dependency and recommend treatment, potentially supported by additional AI tools. The irony of using AI to treat AI dependence underscored just how pervasive reliance on these agents could become.
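What would a “pattern of AI dependency” even look like in data? One crude possibility, sketched below with entirely invented thresholds and a made-up usage log, is a rolling average of daily hours spent delegating tasks to agents:

```python
# A deliberately crude heuristic for the team's imagined dependency
# detector. The threshold, window, and data shape are assumptions made
# for illustration, not a validated clinical measure.

from statistics import mean

DAILY_LIMIT_HOURS = 6.0   # hypothetical "heavy use" threshold
WINDOW_DAYS = 14          # look-back window


def flag_dependency(daily_agent_hours: list[float]) -> bool:
    """Flag users whose average recent agent use exceeds the limit."""
    recent = daily_agent_hours[-WINDOW_DAYS:]
    return len(recent) == WINDOW_DAYS and mean(recent) > DAILY_LIMIT_HOURS


# Two weeks of heavy use trips the flag; a human, not another agent,
# should arguably decide what happens next.
usage = [7.5, 8.0, 6.5, 7.0, 9.0, 8.5, 7.0,
         6.8, 7.2, 8.1, 7.9, 6.6, 7.4, 8.3]
print(flag_dependency(usage))  # True
```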
Finally, one team proposed AI agents that could manage hyper-local communities, such as a street. Residents could establish customized rules—bespoke noise limits, for example—which the AI agent would monitor and help enforce, supporting dispute resolution between neighbors. While this could reduce tensions and streamline conflict resolution, some participants were wary of removing human interaction and eroding real-life relationships.
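In practice, a street’s bespoke rules could be little more than data that the agent checks readings against. The sketch below assumes invented rule names, limits, and a noise-reading format:

```python
# Sketch of a street's bespoke rules as data for a community agent.
# Rule names, limits, and the reading format are invented here.

RULES = {
    "quiet_hours_noise_db": {"limit": 55, "between": ("22:00", "07:00")},
    "daytime_noise_db":     {"limit": 70, "between": ("07:00", "22:00")},
}


def check_noise(reading_db: float, time_hhmm: str) -> str | None:
    """Return the name of any rule the reading violates, else None."""
    for name, rule in RULES.items():
        start, end = rule["between"]
        # Handle windows that wrap past midnight (e.g., 22:00-07:00).
        in_window = (time_hhmm >= start or time_hhmm < end) if start > end \
            else (start <= time_hhmm < end)
        if in_window and reading_db > rule["limit"]:
            return name
    return None


# A loud reading at 23:30 breaches quiet hours; the agent might open
# a mediation thread between neighbors rather than issue a penalty.
print(check_noise(68.0, "23:30"))  # -> "quiet_hours_noise_db"
```

Even this toy exposes the governance questions the group raised: who sets the limits, who sees the readings, and whether the agent enforces rules or merely mediates between the people subject to them.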
Reactions to these ideas were mixed. Some participants were energized by the potential for efficiency and personalization. Others found the scenarios unsettlingly dystopian. “These aren’t worlds that I would want to live in,” one attendee remarked, proposing that we should use AI to handle the mundane parts of our lives, not to take over our work or social relationships.
The Social Dilemmas of AI Agents
Yet a world taken over by agents was precisely the scenario many attendees foresaw: we might come to rely on AI agents to such an extent that people who cannot access them, whether owing to disability, poverty, or personal choice, would be excluded from essential services. Governance frameworks for AI agents must therefore guarantee both access and inclusivity.
Related to this, experts pointed out, is the problem of cultural specificity. For example, an AI agent designed for dispute mediation might be optimized for the direct communication style of the U.S. and be inappropriate—or disastrous—in parts of the world where other styles of communication are the norm.
Reliance on AI agents could also mean that when things go wrong, it is not clear who is responsible: the creators of the technology, its operators, and the users themselves all play a role in producing the outcome. This prompted debate about the need for governance oversight to define accountability and provide mechanisms for recourse.
Some AI Working Group members suggested that AI systems could eventually produce outcomes their creators did not anticipate, driven by complex algorithms and emergent behaviors. Others said those agents would still be acting according to programming defined by their creators. This nuance is at the heart of the question of accountability and raises the challenge of setting sociotechnical boundaries when designing today’s AI tools.
Finally, the group examined the potential divergence between public and private AI agents. A privately developed AI system designed to support mental health, for example, could lighten the burden on public healthcare, but it might not come with the same protections for patients as those covering encounters with human therapists. This could have consequences for privacy and safety. AI-agent therapy, while cheaper than the human alternative, might also still be too expensive for everyone who needs it. Governments will need to consider developing public-sector AI agents to ensure transparency, accountability, and accessibility. Or they could leave innovation to the private sector, which might prioritize profit over social responsibility.
Envisioning Sociotechnical Pathways to Ethical AI
At the British Academy, the room that had hosted so many political debates once again echoed with a new one. This time, speculative approaches to imagined technology highlighted the shortcomings of existing governance frameworks, and AI Working Group members were encouraged to think beyond incremental changes. Yet there was a dystopian undertone to these imagined products, even where attendees applied a deliberate satirical edge. In response, participants turned to imagining an unambiguously utopian AI agent. The goal was to explore how these technologies could be designed to enrich lives, promote fairness, and address society’s shortcomings.
One group proposed an AI personal advocate to navigate complex bureaucratic systems, such as legal disputes or insurance claims. This agent would assist in presenting the user’s case and support efforts to ensure fair treatment. It could analyze policy documents, prepare legal cases, and negotiate settlements. The group created a demo in the style of Spotify’s end-of-year Wrapped feature, displaying all the victories the AI agent had helped its user achieve. Empowering as this would be, it raised new questions: would AI advocates be equally accessible, or might poorer people find themselves facing adversaries equipped with more powerful tools? This would require governance, as would ensuring that these advocates genuinely represented their users rather than their creators.
For a beneficial vision of AI to materialize, agents must both perform effectively and earn the trust of their users. Governance and ethical oversight are essential to keep these systems transparent, unbiased, and aligned with the values of the societies in which they operate.
Ultimately, this workshop underscored that while utopian AI agents may seem remote, their creation is deeply tied to pragmatic decisions made today. Inclusive design, robust policies, and participatory governance are not just desirable—they are necessary foundations for a future in which AI makes the world a better place.
A Framework for the AI of Tomorrow
The world has changed in ways that would be unrecognizable to researchers and policymakers of the past, just as the imagined futures considered by attendees at the AI Policy and Governance Working Group’s event seem uncomfortably strange to us today. But recognizing the transformative potential of this technology is crucial for devising the innovative governance needed to ensure it develops in ways that benefit everyone.
The two-day design fiction process showed how hard it can be to think creatively and critically about the future of AI agents. Ideating possible products doesn’t need specialized technical knowledge, but it does demand careful consideration of societal, ethical, and cultural implications—and governance structures that can address them. One key takeaway was that speculative exercises like these can be a powerful tool to spark conversations about the future of AI, its role in society, and the policies that will shape its development.
AI Working Group members suggested sharing the methods used during the event more broadly, encouraging other groups to explore possible AI futures and reflect on what these visions reveal about the technology—and about us. Ultimately, this collaborative approach may help guide the development of AI agents toward more inclusive, ethical, and human-centered outcomes, with governance systems that allow us to address challenges as they emerge.
AI agents will function as more than tools, influencing relationships and power dynamics through their design and application. Whether they serve as advocates, mediators, surveillors, or enablers, their design will reflect the priorities and values of those who create them: designers and developers who are far from representative of our societies as a whole. The ethical and social dilemmas discussed—questions of accountability, accessibility, cultural specificity, and commercialization—highlighted the need for intentional, human-centered approaches to both innovation and policymaking.