Meet the Woman in Charge of Responsible AI at Microsoft

Like many lucky longhorns, Sarah Bird, BS ’07, vividly recalls attending the 2006 Rose Bowl, where quarterback Vince Young, BS ’13, Life Member, led UT to a national championship win over USC. Amid the football fever, however, it also proved an exciting time to be a budding engineer. Though cheering on the dominant team was a big enough commitment for any Longhorn, Bird—a computer engineering major at the time—also sought out coursework in subjects as varied as Spanish and art history. She credits these myriad influences for her eventual interest (and industry-leading expertise) in making artificial intelligence (AI) safe.

“Of course, UT was a top engineering school, which is incredibly important in my career—but it was also in a city with technology, industry, and it’s strong in many other departments,” Bird says. “And if you look at responsible AI, it is about bringing together society and government and people and culture with technology, so being at a school that was strong across all of these dimensions [was crucial].”  

Bird also gained hands-on experience as a student, doing research with Lizy John (who is now the Truchard Foundation Chair in Engineering in UT’s Chandra Family Department of Electrical & Computer Engineering) and working at IBM through the Cockrell School of Engineering’s co-op program. As early as 19 years old, Bird was helping to design the chip for the Xbox 360 processor.  

“I might not have ended up on this path if [John] had not just grabbed me and said, ‘I think this is what you should do,’” Bird recalls. “And that real-world experience of developing technology that then everyone used and I could have in my home, was so exciting.   

“I think that really propelled me to say, I want to be a creator. I want to build technology,” Bird says. She began working with artificial intelligence years before most of us started realizing it was AI that was finishing the sentences we started writing in an email and enabling our phones to use facial recognition. Now AI is virtually everywhere.  

“One of the exciting things about AI is it’s hard to think of other technologies—maybe the internet—that impact so many different industries all at once,” says Bird, who has been the chief product officer of responsible AI at Microsoft for the past five years. As Wired magazine put it in June 2023, Bird is one of the foremost “humans trying to keep us safe from AI,” by working to mitigate risk factors and malicious use.  

In her experience, the biggest risk of the technology might be if people don’t know how to use it—safely and morally. In 2017, Bird started the first research group at Microsoft for AI ethics. Her team works to prevent AI from being used to create harmful content, such as deep fakes, hate speech, misinformation, and propaganda. And this September, Microsoft unveiled more AI safety features for their public-facing products, including self-correcting capabilities and increased graphics processing unit (GPU) security. In an interview about the initiative with VentureBeat, Bird praised the new features as important advances, but affirmed that safety will require ongoing effort. 

“I see this as such an important part of making AI a reality in the world,” Bird says. “And so that’s why I’ve made my career in this specific element of AI.”

Sarah Bird, 2024.

Artificial intelligence refers to the ability of a computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. It’s a collection of technologies, not a single program.  

With AI, computer systems are trained to use a set of rules, called algorithms, to perform tasks normally performed by humans. In other words, a machine performs the cognitive functions we usually associate with human minds, such as learning.   

Early attempts to create artificial intelligence date back at least to the 1950s. In 1997, IBM’s Deep Blue computer made history by defeating world chess champion Garry Kasparov. But many people didn’t take notice until some 20 years later, when Siri began providing answers to voice commands on Apple’s iPhones. Things changed dramatically in 2022, with the launch of OpenAI’s ChatGPT, as computer users watched what’s known as “generative AI” produce human-like responses to questions, write term papers, and more.    

“I think some of what has been missing in the public discourse is that AI has been around for a long time, and the deployment of AI is quite ubiquitous,” computer scientist Sanmi Koyejo says. Koyejo is president of Black in AI, which works to increase the presence and inclusion of Black people in the field of AI. “Almost every online platform has some automated decision making—a lot of what Google does on the web related to search and fraud detection, [for example.] There are companies that do hiring decision-making based on automated systems.  

“People don’t really notice, so it seems to have flown under the radar, and it feels completely new,” Koyejo says. “But AI is everywhere and has been for years. The challenge with AI is that it has lots of promise, but it is highly dependent on the data that you feed into it.”  

The promise includes speeding up the development of pharmaceuticals and the detection of extreme weather—but risks include misinformation, the ability to engineer pandemic-capable pathogens, and the erosion of privacy.  

“This impending AI doom tech companies are telling us will come is actually here,” says Berhan Taye, a Stanford University researcher on technology and social justice whose work focuses on addressing the potential harms of AI. “Many communities worldwide are being harmed by AI and automated decision systems that were not designed for and by them.”  

The general public is becoming savvier to AI: A May 2024 survey conducted by Elon Poll and Elon University’s Imagining the Digital Future Center found that 78 percent of American adults believe that AI abuses will affect the outcome of the presidential election.  

But along with threats come great potential. Microsoft calls AI, “the defining technology of our time.”  

“When ChatGPT became available, it was much more noticeable to people because it can directly interact with you. So now it feels like the stuff we’ve seen in science fiction movies, where you can talk to it, and it talks back,” Bird says. “And while it is fun and exciting, what is so amazing about the potential for good is that the technology now can be an interface to all other technologies. It can empower people who aren’t good with technology today to be able to do so much more.  

“It’s so flexible, and it directly connects with humans. It’s easy to see how it has the potential to change every industry, every single person’s life.”  

For example, AI has enormous potential in changing education through services such as personalized tutoring, which might otherwise be prohibitively expensive. Microsoft works with New York City schools to use AI to tutor students. Such programs have been shown to help learners, perhaps those lacking English skills, who might be reluctant to seek out help directly.  

“One of the things they see is that students may not be comfortable with raising their hands and asking a question. We see them engaging a lot with the chat experience and … getting support in that way,” Bird says.  

AI also helps people navigate bureaucracies to get benefits and services they need. Bird points to governments using AI to create experiences that help people ask questions in the way they know to ask them and get answers in ways they understand.

But AI results are only as good as their input, so mistakes can happen.  

“AI tools, depending on their data source and the ideologies of the creators, can replicate real-world disparities,” Taye says. “If the data does not consider the circumstances of women, Black people, and other minorities, it will discriminate and often harm them.”  

Bird’s aim is to ensure that AI is accurate, equitable, and safe. Depending on the inputs, AI can repeat stereotypes or accidentally present inaccurate information—even giving bad medical advice. There are also fears around providing certain content to people who are considering self-harm.  

One way to improve safety and accuracy is what is known as red-teaming: convening cybersecurity experts who try to think like hackers. (The term became popular during the Cold War when the U.S. ran military simulations, with enemy teams represented in red.)  

“What are the worst things we can get the technology to do? What are the most exciting things? Then [we can] use that information to go and figure out how do we get all that exciting potential but address the risks in the technology,” Bird says.  

While AI can be used to create deepfakes, potentially misleading huge numbers of people, it also has a unique and lesser-known ability to detect its own errors. “It’s actually a really powerful tool for defending against these types of risks because it can understand so many different types of content, and so much more deeply,” Bird says. Her team also uses AI to detect whether a system has created any “hallucinations”—fabricated data that appears authentic.  

“We use AI to detect where our system might have made a mistake and to go and actually correct that,” she says.  

It’s all aimed at ensuring that AI does more good than harm.  

“Our job is to figure out how to solve these problems that haven’t been solved before and then make it easy for everybody to do [the same],” Bird says.  

“My mindset is that you don’t get to have the exciting AI future and all of the amazing applications that can change the world if we don’t figure out how to do it responsibly and safely.”  

CREDITS: Courtesy of Sarah Bird; illustration by Chad Hagen

 
 
 

No comments

Be the first one to leave a comment.

Post a Comment


 

 
 
Menu