This week, in an opinion article published in The Information, I shared my thoughts on the pivotal role AI can play in addressing global challenges. I discussed its potential for good, its risks, and the importance of early, balanced, and expert-informed regulation.
Read on for a brief summary of the five key takeaways I believe are most important as we realize the future impact of artificial intelligence:
The Information: AI Can Reshape the Physical World—If Regulation Is Right
By Andrew Anagnost
“Artificial intelligence is quickly becoming an indispensable tool for how we imagine and design our physical world and make it more sustainable and inclusive.
Used—and regulated—correctly, AI can become society’s greatest tool in meeting challenges ranging from crumbling infrastructure and a growing housing crisis to a rapidly changing climate. Efforts to effectively regulate AI were front and center last month, as seen in President Joe Biden’s Executive Order on Artificial Intelligence, the AI Safety Summit in the U.K., and the Asia-Pacific Economic Cooperation summit in San Francisco. We are now at a crossroads when it comes to using AI to address society’s biggest problems in our physical world. We have two clear paths ahead.
Think of one path as good AI, which can augment human capacity when built ethically and used correctly. AI can meaningfully address the critical shortages we are facing in terms of the money, people and resources needed to reimagine buildings, cities and even entire countries…
Here are five things that need to happen to get [regulation] right:
1. Be early and be aligned. AI is already showing its potential to learn and grow exponentially, meaning that regulation will need to keep pace with advancements in this technology and maintain consistency. Regulators need to stay ahead of the curve, and the White House’s recent executive order and the EU AI Act are important first steps.
2. Strike the right balance. AI has the potential to collaborate with human stewards to solve massive problems and unlock productivity and innovation. We’re already seeing AI used to address real-world issues such as the housing crisis and extreme weather events. We absolutely need guardrails and regulation, but we must ensure that we are still enabling good AI to fulfill its potential while curtailing bad AI’s ability to do harm.
3. Bring in the experts. Computer scientists, engineers, and, yes, even big tech all have an intimate understanding of AI that most regulators simply do not possess. We need closer collaboration between the public and private sectors to inform lawmakers of the best approaches to top-of-mind issues.
4. Address high-risk flaws head on. We are all aware of some of the flaws in current popular AI models and the damage they can cause to society, impacting things like access to credit, employment and education. We must move quickly to regulate the highest risks in a targeted manner.
5. Put humanity first. We need to prioritize AI models that will help us more efficiently reach our goals for business and society, rather than ones that may contradict what we want. Governments should have a role in making this happen.”