From weather forecasts and disease diagnosis to chatbots and self-driving cars, new applications of artificial intelligence (AI) continue to multiply. More recently, the widespread availability of tools that can create content—whether code, text, images, audio, or video—such as ChatGPT and DALL-E, has thrust "generative AI" into the spotlight.
As applications proliferate, so do complex questions about how to ensure responsible use of generative AI. To explore the societal implications of AI technology and how policymakers might approach regulating it, the Caltech Center for Science, Society, and Public Policy (CSSPP) hosted a conversation among researchers, industry representatives, and the public on Caltech's campus. The CSSPP was established in early 2023 to examine the intersection of science and society, provide a forum for the discussion of scientific ethics, and help shape public science policy. The center is affiliated with The Ronald and Maxine Linde Institute of Economic and Management Sciences.
"We believe that scientific knowledge and technological prowess are essential to any meaningful evaluation of the impacts of AI on society," said Caltech president Thomas F. Rosenbaum, the Sonja and William Davidow Presidential Chair and professor of physics. "This is true for the positives and for the negatives: whether it be lifesaving improvements to health screening, powerful tools for artistic creation, and new ways of approaching science or potential upheavals in the job market, propagation of false information, and new weapons of war. Only through this type of informed evaluation can we amplify the salutary aspects of technological development and counter its dehumanizing capacity."
The event featured an introduction to the state of generative AI from New York Times technology columnist Kevin Roose (who famously had an unnerving conversation with Microsoft's Bing chatbot).
In his keynote, Roose reminded the audience of the power of shared responsibility and knowledge. "One of the advantages that AIs have over humans is that they have networked intelligence: When one node in a neural network learns something or makes a connection, it propagates it through to all the other nodes in the neural network. When one self-driving car in a fleet learns about a new kind of obstacle, it feeds that information back into the system," he said. "Humans don't do that, by and large. We silo information, we hoard it, we keep it to ourselves. And I think that if we want a realistic shot at competing and thriving and succeeding, and maintaining our agency and our relevance in this new era of generative AI, we really need to do it together." While on campus, Roose also participated in a Q&A session with nearly 50 Caltech students.
In a subsequent panel discussion moderated by R. Michael Alvarez, professor of political and computational social science and co-director of the CSSPP, experts in law, gaming and technology, and academic research shared thoughts on the positive and negative potential of generative AI.
The optimistic outlook centers on AI's power to advance science and engineering, for example, by making it possible to predict genome sequences of new COVID-19 variants before they appear in nature, design better medical equipment, and mitigate climate change.
"How do we capture CO2 and store it underground? How do we plan for the right reservoir and the right amount of CO2 to store? These are the kinds of complex processes that our human minds can't even grapple with," said Anima Anandkumar, Bren Professor of Computing and Mathematical Sciences and co-leader of the AI4Science initiative at Caltech. "We are using [generative] AI, and we are doing it much faster. Along with it comes the benefit of being able to come up with new discoveries, new inventions."
On the more skeptical side, panelists raised concerns about intellectual property and copyright, bias, and large-scale misinformation.
Additionally, when generative AI technologies are coupled with the massive amount of personal data consumers share with social media algorithms, our own biases can become vulnerable to manipulation, pointed out Carly Taylor, a data scientist and security strategist at Activision Publishing. "All of us are capable of being bamboozled," Taylor said. "Everyone has confirmation biases, and in many cases across social media, we have spent every single day for years telling Facebook, Instagram, and LinkedIn exactly what we are biased toward by what we search, what content we consume, or with whom we engage … As a risk, that can become completely exploitable."
Justin Levitt, Gerald T. McLaughlin Fellow and professor of law at Loyola Law School, shared his pessimism about AI's impact on democracy in the United States, including the ability to rapidly spread election misinformation. "Democracy depends on a set of different opinions and a set of common facts, and generative AI is going to be great for giving us an infinite array of disparate facts," he said. "That's a disaster for democracy."
Sean Comer, an applied researcher at Activision and its Infinity Ward development studio, saw a silver lining in recent anxiety over generative AI. "Maybe it gives us the elephant in the room to address the attention economy, which a lot of misinformation tends to stem from," he said. "Maybe it's a necessary evil that can force us to deal with these things."
Speakers also addressed the role of research institutions like Caltech. Avoiding biases in new AI models, for example, will take the kind of critical thinking and rigorous testing for which academia is known. The CSSPP will continue to foster these conversations by bringing policymakers to campus for lectures, colloquia, discussion panels, and workshops, and by developing undergraduate and graduate courses that cover issues in scientific ethics and policy and consider how policymaking can be informed by scientific ethics and expertise.