Recapping the Inaugural Vanderbilt AI Governance Symposium
On October 21, the Vanderbilt AI Law Lab (VAILL) hosted its first AI Governance Symposium, a day of engaging discussion featuring leading voices from industry, academia, and government.
The Symposium was a deliberate attempt to move beyond the hype and dig into the practical, thorny questions of accountability, transparency, and governance. During panel discussions and in informal conversations, our speakers didn’t offer easy answers—instead, they modeled the kind of interdisciplinary, nuanced thinking that this moment demands.
Panel 1: Building AI Governance from the Inside Out
Sean Perryman moderated a discussion with Bennett Bordon, Partner at Clarion AI, Katelyn Katsuki, Associate at Paul Hastings, and David Shin, Vice President of Legal at HCA Healthcare, on how organizations can operationalize AI governance in practical, sustainable ways. Panelists emphasized that AI is not a replacement for human judgment but an augmentation of it—“think Iron Man, not Terminator.” The conversation centered on the need for AI governance to start at the top, with boards of directors setting clear expectations and oversight structures. Panelists noted that while AI introduces new challenges, the core principles of risk management remain familiar: define accountability, assess impact, and ensure ongoing monitoring. They highlighted that effective AI governance must be cross-functional, bringing together technical, legal, compliance, and operational leaders to align on shared values and guardrails. The discussion also explored how translating abstract ideas of fairness and ethics into measurable, actionable frameworks remains one of the field’s toughest and most essential challenges.

Panel 2: Steering the AI Research and Development (R&D) Pipeline
Asad Ramzanali moderated a discussion with Steve Welby, Deputy Director for Research, Sensors and Intelligent Systems at the Georgia Tech Research Institute, and Olivia Zhu, Principal Technical Program Manager in the Office of the CTO at Microsoft. They shared perspectives on how today’s AI has roots in federally funded R&D dating back to the 1950s, and how industry, academia, civil society, and government each play distinct and important roles in enabling advances in AI R&D. They also discussed how the prohibitive compute costs of conducting AI R&D skew research horizons and constrain both corporate and non-corporate researchers. Each shared excitement for the future of AI.

Panel 3: AI’s Energy Appetite: Environmental Impacts and Governance
This panel, moderated by Vanderbilt Private Climate Governance Lab Fellow Ethan Thorpe, dove into the reality, the hype, and the unknowns surrounding the energy and environmental implications of AI proliferation. Claims about AI’s likely environmental footprint range from fatalist predictions of AI wreaking havoc on natural systems to utopian prognoses of AI’s potential to solve the myriad energy and environmental challenges we face.
In reality, panelists Michael Vandenbergh, Jonathan Gilligan, and Leslie Abrahams explained that opaque disclosure practices and a rapidly changing industry make such forecasts highly uncertain. The interdisciplinary panel focused on how decisionmakers can make rational, measured plans despite that uncertainty, and on how investing trillions of dollars in outdated infrastructure introduces massive new environmental and financial risks. They concluded by discussing steps we can take today to reduce the environmental footprint of AI, such as integrating flexible electric loads and encouraging behavioral changes that make demand more efficient, and the need to consciously govern AI development to advance environmental goals.
Panel 4: State-Level AI Regulation - The State of AI in the States
Asad Ramzanali moderated a panel with Tennessee Senate Majority Leader Jack Johnson, Connecticut Deputy Senate Majority Leader James Maroney, and leading consumer rights advocate Ben Winters. Sen. Johnson shared the story of how the local music industry is critical to Tennessee, leading him to partner with the Governor, federal policymakers, music industry stakeholders, and technologists to develop and pass the ELVIS Act, a nation-leading state law that protects musicians from AI-generated audio that sounds like them by adding “voice” to existing legal protections for name, image, and likeness. Sen. Maroney described success incorporating AI issues into state privacy laws, while facing challenges passing S.B. 2, a broader AI bill that failed to advance. Ben Winters described how states have filled the void left by an inactive Congress, but noted that these policy developments are under threat from a moratorium on state AI regulation that Congress rejected yet is reportedly still being considered. The Senators described various mechanisms for state legislators to coordinate, their willingness to engage with all stakeholders, and their view that federal action on several policy fronts would be welcome even if it ultimately preempts state laws.
Panel 5: Auditing the Black Box - A Fireside Chat With Cathy O’Neil
In a fireside chat moderated by Mark Williams, Cathy O’Neil discussed her evolution from Harvard mathematician to Wall Street quant to algorithmic auditor and critic. O’Neil, whose 2016 book “Weapons of Math Destruction” highlighted how algorithms deployed at scale can perpetuate inequality through feedback loops and biased data, shared insights from founding ORCAA, an auditing firm that has worked with companies to assess fairness in hiring tools, insurance algorithms, and risk models. She explained the concept of “data laundering,” where historical discrimination gets processed through seemingly neutral algorithms and emerges legitimized by mathematical authority. The discussion also explored practical challenges in algorithmic auditing: resistance from companies protecting proprietary models, disagreements over fairness definitions, and the difficulty of scaling audits to meet growing regulatory demand.
Reflections
A consistent theme across the symposium was the inherently interdisciplinary nature of AI governance. Technical expertise, legal analysis, ethical frameworks, and policy design each offer necessary but insufficient perspectives on their own. Effective governance requires bringing these different forms of expertise into genuine conversation—and creating institutional structures that enable that collaboration.
Several panelists emphasized the need to operationalize abstract principles, translating values and commitments into measurable practices and accountability mechanisms. The gap between stated principles and actual implementation remains a central challenge in the field.
The symposium also served as a reminder that progress in AI governance depends on building community among people working on these problems from different angles. The day’s discussions and the networking reception that followed reflected the value of creating space for practitioners, academics, and policymakers to share insights and challenges.
We’re grateful to all the participants who contributed to the symposium and look forward to continuing these conversations in future years.
The Vanderbilt AI Governance Symposium was organized by Mark Williams, Sean Perryman, and Asad Ramzanali, and hosted by the Vanderbilt AI Law Lab (VAILL).