Fei-Fei Li heard the crackle of a cat’s brain cells a couple of decades ago and has never forgotten it. Researchers had inserted electrodes into the animal’s brain and connected them to a loudspeaker, filling a lab at Princeton with the eerie sound of firing neurons. “They played the symphony of a mammalian visual system,” Li told an audience Monday at Stanford, where she is now a professor.
The music of the brain helped convince the physics undergraduate to dedicate herself to studying intelligence, a path that led to Li specializing in artificial intelligence and helping catalyze the recent flourishing of AI technology and use cases like self-driving cars. These days, though, she is concerned that the technology she helped bring to prominence may not always make the world a better place.
Li’s keynote speech marked the opening of the Institute for Human-Centered Artificial Intelligence, or HAI, which will work on topics such as how to ensure algorithms make fair decisions in government or finance, and what new regulations may be required for AI applications. Luminaries from Silicon Valley and beyond, including Henry Kissinger and ex-Yahoo CEO Marissa Mayer, came to hear a day of discussions about how AI will shape society from a roster of academic and industry figures that included Bill Gates. Later, Li, a founder and co-director of HAI, told WIRED why AI research needs steering onto a new path.
WIRED: Stanford has one of the world’s longest-running AI labs, and around the world there is more AI R&D than ever before. Why create a new research institute?
Fei-Fei Li: AI started as a computer science discipline, but now we are in a new chapter. This technology has the potential to do so many good things, but there are also risks and pitfalls. We have to act and make sure it is benevolent to humans.
At HAI we are making AI an interdisciplinary field of study and education by working with many different thinkers and practitioners: social scientists, political scientists, economists, doctors, and neuroscientists. My aspiration is to come up with thoughtful frontier research as well as potential policy recommendations.
W: If people working on AI technology have to start engaging with such broader questions, will technical progress slow down?
FL: I never thought this has anything to do with slowing down. We are asking people to be more imaginative, collaborative, thoughtful, and human-centered. I don’t know if these adjectives imply slowing down. We want to broaden the horizon and deliver the positive potential in a more concrete way.
W: You have said there should be more work on using AI to help workers, not to replace them. What does that look like? At HAI’s launch symposium, one of your collaborators, Serena Yeung, mentioned a project placing depth cameras, which track motion in 3D, in hospital rooms.
FL: A patient’s mobility in the ICU will have a direct impact on how well he or she recovers. Hospitals have protocols to say, every one or two hours you have to monitor this, but nurses are overworked. A depth camera can watch patient mobility 24/7. AI can enhance and augment the work of clinicians.
I personally have been spending lots of time in the ICU with my mom over the past half year. I cannot imagine replacing nurses and doctors, but I can imagine their work being helped in so many different ways so that they can focus on care.
W: Stanford is located in the heart of Silicon Valley and HAI already has relationships with tech companies including Microsoft and Google. Can you become too close to the tech industry?
FL: Stanford is Stanford not because we’re close to Google but because of the tremendous amount of independent, world-changing research and education we’ve done for the past 130 years. No matter how much Silicon Valley companies love us, we wouldn’t have this reputation if we didn’t earn it ourselves. I’m very proud of the amazingly thought-provoking and sometimes controversial work we can do here.
I think it’s really important that we engage with different industries so that our researchers understand the challenges and our research becomes useful tools. It’s easy to think industry means tech industry, but in our books industry means manufacturing, agriculture, retail, health care, education, government.
W: You were chief scientist for AI and machine learning at Google’s cloud division until late last year. Then you were briefly listed as an adviser to the company, but recently cut ties. While you were there, leaked emails showed you discussing a Pentagon contract that led to employee protests, and Google announcing guidelines for acceptable uses of AI. Did that industry experience influence your thinking about how AI could shape society?
FL: The 20-month sabbatical at Google was extremely illuminating. I was inspired by listening to the pain points and challenges and opportunities that different industries have. It reinforced that there is a big role for AI to play in terms of helping the world in many important issues but we have to guide it in the most thoughtful and human-centric way. And as an AI scientist I was proud to contribute to the responsible AI guidelines.
W: If HAI is successful, how will the world be different 10 years from now?
FL: Above all I want to see HAI producing a very diverse workforce of AI practitioners, developers, and leaders. And I hope that we can deploy technologies that help humans live better and healthier and work more safely and productively.
I also have really high hopes that AI literacy becomes more prevalent—starting with journalists but also policy makers, teachers, civil society. This is not a professor wanting everybody to know how to code; it’s about more people participating in the guidance of AI.