Introduction
Helen Toner, a former OpenAI board member and director of strategy at Georgetown’s Center for Security and Emerging Technology, has voiced concerns about the state of AI policymaking in the U.S. She argues that Congress is dragging its feet, likely to act only when a crisis forces its hand rather than proactively shaping how the technology develops. Her remarks at TechCrunch’s StrictlyVC event highlight a broader issue: gridlock in U.S. AI policy.
Context on AI Policy
AI has emerged as one of the most transformative technologies of our time, carrying significant implications for both innovation and regulation. While it holds immense promise, it also poses risks that must be actively managed: AI systems can make consequential decisions with unintended effects, in domains ranging from automated lending to autonomous weapons.
The U.S. government has recognized this duality. In October 2023, President Joe Biden signed an executive order on AI that established consumer protections and required developers of the most powerful models to share safety test results with federal agencies. Earlier that year, the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework, along with a roadmap for identifying and mitigating emerging AI risks. Congress, however, has yet to pass comprehensive AI legislation, or even to advance a proposal comparable in scope to the EU’s recently adopted AI Act.
The Problem with Current Legislation
Despite these efforts, the current patchwork of rules governing AI is insufficient. Without robust regulation, AI systems lack accountability, raising concerns about bias, transparency, and safety. For example, while some states have established frameworks for ethical AI development, others rely on ad hoc approaches that leave gaping holes in oversight.
One particularly concerning issue is the variability in state laws. Some jurisdictions require little more than that a system be labeled as AI, while others impose far stricter requirements. This inconsistency creates uncertainty for businesses and consumers alike, who are caught between multiple, sometimes conflicting regulations.
Examples from State Laws
To illustrate the problem, consider a few examples of state-level AI policies:
- California: The state has introduced bills requiring AI systems to be more transparent in their decision-making. For instance, the proposed "AI Explainability Act" would mandate that AI algorithms explain their decisions to users within 10 seconds (a sketch of what such an explanation might look like follows this list). However, the legislation remains in draft form and lacks broad support from lawmakers.
- Texas: A recent bill aims to regulate AI-driven technologies like self-driving cars. The Texas Driverless Vehicle Rulemaking Task Force is expected to submit final regulations next year, but critics argue that it falls short of addressing critical issues like liability and access to transportation options for those unable to drive.
- Pennsylvania: A draft bill proposes to hold companies accountable for AI-related accidents by requiring them to establish liability protections for AI operators. However, the legislation has faced significant pushback from tech companies, which argue that it would stifle innovation.
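Bills like California’s raise a practical question: what would it actually mean for an algorithm to "explain" a decision to a user? One common approach is to report how much each input feature pushed the outcome one way or the other. The following is a minimal sketch of that idea for a toy linear loan-scoring model; the feature names, weights, and threshold are invented for illustration and are not drawn from any real system or from the proposed legislation.

```python
# Minimal sketch: per-decision explanation for a linear scoring model.
# Feature names, weights, and threshold are hypothetical illustrations.

WEIGHTS = {
    "credit_score": 0.6,     # positive weight favors approval
    "debt_to_income": -0.8,  # negative weight counts against approval
    "years_employed": 0.3,
}
THRESHOLD = 0.5  # approve when the weighted score exceeds this value

def explain_decision(applicant: dict) -> str:
    """Return the decision plus each feature's signed contribution."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in applicant.items()
    }
    score = sum(contributions.values())
    decision = "approved" if score > THRESHOLD else "denied"
    # Rank features by how strongly they pushed the decision either way.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    lines = [f"Decision: {decision} (score {score:.2f})"]
    lines += [f"  {name}: {contrib:+.2f}" for name, contrib in ranked]
    return "\n".join(lines)

if __name__ == "__main__":
    # Feature values are assumed pre-normalized to comparable scales.
    print(explain_decision({
        "credit_score": 0.9,
        "debt_to_income": 0.4,
        "years_employed": 0.5,
    }))
```

For a linear model these contributions are exact; for more complex models, attribution techniques such as SHAP approximate the same kind of per-feature breakdown.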
These state-level efforts show lawmakers trying to balance consumer protection against innovation. They also underscore how difficult it is to build comprehensive, cohesive policy state by state without creating overwhelming regulatory burdens.
Helen Toner’s Proposed Solutions
In light of these challenges, Toner has called for a federal approach to AI regulation. She argues that existing guidelines are too vague and that Congress must take charge to ensure consistency and accountability. To this end, she advocates for the creation of an independent regulator capable of overseeing AI development and deployment across state lines.
Toner also emphasizes the importance of investing in research to address critical questions about AI’s ethical implications. For instance, funding could be directed toward developing tools that help companies assess bias in AI systems or improve transparency in decision-making processes.
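As a concrete illustration of what such a bias-assessment tool might compute, one common fairness check is the demographic parity gap: the difference in positive-outcome rates between demographic groups. The sketch below is a minimal, hypothetical example; the outcome data and the 0.10 tolerance are invented for illustration.

```python
# Minimal sketch of a bias check: the demographic parity gap.
# Outcome data, group labels, and the 0.10 tolerance are hypothetical.

def positive_rate(outcomes: list[int]) -> float:
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a: list[int], group_b: list[int]) -> float:
    """Absolute gap in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

if __name__ == "__main__":
    # 1 = approved, 0 = denied, split by a (hypothetical) protected attribute.
    approvals_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% approved
    approvals_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 37.5% approved
    gap = demographic_parity_gap(approvals_a, approvals_b)
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.10:  # tolerance chosen purely for illustration
        print("Potential disparity detected: flag for human review.")
```

A real audit would use larger samples and multiple metrics (equalized odds, calibration, and others), but even a simple gap statistic shows the kind of tooling such research funding could support.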
Conclusion
The current state of AI policy reflects a critical gap in Congress’s response to a rapidly evolving technology. While progress has been made through the executive order and NIST guidance, comprehensive legislation remains elusive. Toner’s call for a federal approach to AI regulation offers a promising direction, but only if lawmakers are willing to prioritize the issue and take meaningful action.
As the debate over AI policy continues, it is clear that the U.S. must act decisively to ensure this transformative technology is developed responsibly and equitably. Without bold leadership and robust oversight, both AI’s potential and its risks will go unmanaged.