In her talk, Dr. Alondra Nelson urged policymakers, universities, and the public to focus on fairness, accountability, and trust rather than treating AI as a self-governing technology.

Northwestern University’s Institute for Policy Research (IPR) hosted Professor Alondra Nelson, a leading thinker at the intersection of science, technology, and democracy. Nelson, Harold F. Linder Professor at the Institute for Advanced Study and former Acting Director of the White House Office of Science and Technology Policy, spoke on “Governing the Future: AI, Public Policy, and Democracy.”
IPR Director Andrew Papachristos introduced Nelson as “exactly the type of person we want in this space: a scholar, a thinker, and a doer.” Nelson, he reminded the audience, helped craft the Biden administration’s Blueprint for an AI Bill of Rights and has since taken her expertise to the global stage, serving on the UN High-Level Advisory Board on AI. Her work, Papachristos noted, reflects a central tension: how to balance the hope of technological innovation with the risks of bias, inequality, and democratic harm.
Governing Outcomes, Not Objects
Nelson began by situating AI in historical perspective. The launch of ChatGPT in November 2022, she argued, marked a turning point: AI moved from invisible, background algorithms (such as predictive text or facial recognition) to widely accessible consumer products. “For the first time,” she said, “AI tools became explicit consumer products.”
She cautioned against what she called “object-oriented governance,” a policy focus narrowed to the latest technology itself rather than oriented around values and desired social outcomes. “Amazing science doesn’t equal amazing social outcomes,” she stressed. Pointing to the pandemic, she recalled how the rapid development of vaccines contrasted with the difficulties of distribution and uptake: a gap not of science, but of trust, infrastructure, and governance.
The AI Bill of Rights
Nelson recounted her work leading the White House team that developed the Blueprint for an AI Bill of Rights. The framework, she explained, emerged from listening sessions with students, clergy, workers, industry representatives, and civil society groups. It distilled five key expectations:
- Safe and effective systems
- Protection from algorithmic discrimination
- Data privacy
- Notice and explanation
- Human alternatives to automated decisions
Though nonbinding, the blueprint has already shaped legislation in Connecticut, California, Washington, and Oklahoma. “These are not radical claims,” Nelson said. “They are common-sense expectations that we should have of any technology that touches people’s lives.”
She rejected the common industry framing of innovation versus safety as a zero-sum trade-off. “Regulation provides rules of the road that actually enable innovation,” she argued, likening it to the three-point line in basketball: a constraint that makes new forms of play possible.
Testing AI in Democracy
Moving from principles to practice, Nelson described her recent collaboration with journalist Julia Angwin on the AI Democracy Projects. The team tested major AI chatbots on election-related questions, with election officials from across the country serving as evaluators.
The findings were sobering: nearly half of the responses were inaccurate or incomplete. Some failed to note Nevada’s same-day voter registration, while others fabricated nonexistent services like “Vote by Text.” In North Carolina, several models wrongly omitted student IDs from the list of valid voter identification, an error that could have kept eligible voters from the polls.
“These systems are not ready for prime time when it comes to democracy,” concluded one election official. Nelson emphasized that such errors, while not malicious, could mislead voters and undermine trust. “The cumulative effect of partially correct, partially misleading answers could be profoundly damaging,” she warned.
Lessons from History
In a Q&A moderated by Papachristos, Nelson reflected on lessons from past technological transformations. She argued that governance should not treat new technologies as silver-bullet solutions—what some scholars call “tech solutionism.” Instead, society must ensure that technologies align with democratic values and social priorities.
She returned to the pandemic example to illustrate that even the most groundbreaking science cannot guarantee equitable outcomes without trust and effective governance. “We know now that amazing science does not automatically equal amazing social outcomes,” she said.
Universities and Public Engagement
Audience questions turned to the role of universities. Nelson emphasized that academia has a crucial role in both the technical and social dimensions of AI governance: developing auditing methods, training students, and fostering socio-technical collaborations. She also criticized the “race to the bottom” dynamics of industry, where products are often rushed to market before being fully tested.
On the public side, Nelson urged engagement rather than fear. While polling shows deep skepticism about AI, she argued that trust could be rebuilt through regulation, transparency, and clear protections. “People should be able to use these tools without risking their privacy or having their data misused,” she said.
Global Governance
Nelson also highlighted her work on the UN’s AI advisory board. Earlier this year, all 193 UN member states adopted a U.S.-sponsored resolution on AI governance, grounding it in the UN Charter, human rights, and the Sustainable Development Goals. “It’s rare to see such unanimity at the UN,” she noted, suggesting it reflects the global urgency of AI governance.
A Hopeful Closing
Despite cataloguing risks, Nelson ended on an optimistic note. “I’m not pessimistic at all,” she said. “We can get this right. But it requires intention, collaboration, and the will to govern AI not just for what it is, but for the society we want it to help build.”