
AI and Implicit Bias Policy Lab – Asking Hard Questions: Can AI provide a ‘Second Look?’

February 25, 2021

Chandra Nukala ML’21 is an accomplished, results-driven product leader with over 20 years of industry experience across engineering, program management, business development, and product management. He has worked at big companies (notably Microsoft) and at multiple tech startups and is currently at an AI stealth startup. He has an MS from USC, an MBA from the Thunderbird School of Global Management, and holds seven technology patents.

Here, Nukala shares his experiences and observations as part of Policy Lab: AI and Implicit Bias, taught by Rangita de Silva de Alwis.

Rangita de Silva de Alwis, Senior Adjunct Professor of Global Leadership at the University of Pennsylvania Carey Law School; nonresident leader in practice at Harvard’s Women and Public Policy Program (WAPPP); and Hillary Rodham Clinton Fellow on Gender Equity 2021-2022 at Georgetown’s Institute for Women, Peace and Security (GIWPS), led students and industry leaders from around the world to conduct groundbreaking research and analysis into AI and algorithmic data bias at the University of Pennsylvania Carey Law School.

As part of the ongoing inquiry in “Policy Lab: AI and Implicit Bias,” we have engaged in debate and discussion with a multidisciplinary group of important stakeholders in the field of AI. They have included:

  • Dean Sanjay Emani Sarma, Fred Fort Flowers and Daniel Fort Flowers Professor of Mechanical Engineering at MIT and the Vice President for Open Learning at MIT
  • Rob Goldman, former Vice President of Facebook
  • Dr. Mehrnoosh Sameki, Fairlearn team at Microsoft
  • Sean White, Chief R&D Officer of Mozilla
  • Chenai Chair, Mozilla Tech Policy Fellow from South Africa
  • Eric Rosenblum, Managing Partner of Tsingyuan Ventures, an early-stage U.S. fund with over $100 million under management that focuses on primarily U.S.-based science startups founded by the Chinese tech diaspora

The participants addressed whether existing models are deficient and if we need new models.

Humanize AI

Sarma explained that technology is racing ahead so fast that it is very hard for governmental agencies to keep up. The framework for new laws should be based on “core principles,” such as disclosure, that form the foundation of the regulations.

The Algorithmic Accountability Act, a bill proposed in Congress, is one such example that could provide the foundation for a new framework of AI regulation that protects us from the harms caused by secret algorithms. Once we have the foundation of disclosure, then comes the complicated part of creating new tools and solutions to humanize artificial intelligence systems.

Sarma further explained that training data introduces biased correlations that ML systems pick up during training. In a recent study, an AI system could distinguish wolves from huskies with 90% accuracy. Further analysis revealed that the system was using snow in the background to identify wolves: if an image had snow in the background it was classified as a wolf, and if it did not, as a husky.
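
The wolf/husky result is a textbook case of spurious correlation: the model latches onto a background cue that happens to co-occur with the label. Below is a minimal synthetic sketch of that failure mode in Python (not a reproduction of the study); the data and feature names, such as snow_in_background, are invented for illustration.

    # Minimal synthetic sketch of a spurious correlation (not the actual wolf/husky study).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000

    # Label: 1 = wolf, 0 = husky.
    is_wolf = rng.integers(0, 2, size=n)

    # The "animal" feature carries only a weak real signal...
    animal_feature = is_wolf + rng.normal(0, 2.0, size=n)

    # ...while snow in the background co-occurs with wolves 90% of the time.
    snow_in_background = np.where(rng.random(n) < 0.9, is_wolf, 1 - is_wolf)

    X = np.column_stack([animal_feature, snow_in_background])
    model = LogisticRegression().fit(X, is_wolf)

    print("accuracy:", model.score(X, is_wolf))   # close to 0.9
    print("coefficients:", model.coef_)           # the snow feature dominates

Because the snow cue predicts the label 90 percent of the time, the fitted model leans almost entirely on it, which is exactly the kind of shortcut that disclosure and auditing are meant to surface.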

As Sarma put it, humans can reflect and learn from their mistakes while AI cannot. Today’s AI systems are soulless machines – they make decisions without regard to the harms those decisions cause. He explained that the biggest challenge with AI is to humanize it and bring the notions of reflection, critical thinking, and feedback into decision-making. The key question is, “How can we make AI think about each decision and learn to optimize the decision not just for profit but for the betterment of all of humanity?”

Tools for data scientists and AI engineers: InterpretML and Fairlearn

Solving these complex problems requires tools like InterpretML and Fairlearn, which Sameki of Microsoft is developing to debias algorithms and mitigate the harms caused by AI. As Sameki explained, like any tool, AI has immense potential to be used for good or for profit, and companies need to use AI in a responsible and ethical way.

These are tools that can be integrated into the software development and data management lifecycle, which developers and data scientists can use to hold themselves and their algorithms accountable. She said her team is focused on building tools for every stage of data lifecycle management – creation, storage, use, sharing, archival, and destruction – that identify bias and help data scientists reduce it within their existing work processes.

Fairlearn is an open-source package that identifies the “groups of individuals that are at risk for experiencing harms.” The package uses disparity metrics to compare the AI model’s behavior across different groups. She explained that InterpretML is another tool that helps AI model developers gain a better understanding of their model’s overall behavior, understand the reasons behind individual predictions, and debias data.
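
As a rough illustration of the disparity metrics Sameki describes, the sketch below uses Fairlearn’s MetricFrame to compare a model’s accuracy and selection rate across two groups. The predictions and group labels here are invented placeholders, and the exact API details should be checked against the Fairlearn documentation.

    # Sketch: comparing a model's behavior across groups with Fairlearn.
    # The outcomes, predictions, and group labels below are invented placeholders.
    import numpy as np
    from sklearn.metrics import accuracy_score
    from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

    rng = np.random.default_rng(0)
    n = 500
    y_true = rng.integers(0, 2, size=n)             # ground-truth outcomes
    y_pred = rng.integers(0, 2, size=n)             # predictions from some trained model
    group = rng.choice(["group_a", "group_b"], n)   # a sensitive feature, e.g. gender

    # Disparity metrics: how do accuracy and selection rate differ by group?
    mf = MetricFrame(
        metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=group,
    )
    print(mf.by_group)       # per-group metrics
    print(mf.difference())   # largest gap between groups for each metric

    # A single summary number for one notion of fairness: demographic parity difference.
    print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))

InterpretML plays the complementary role of explaining why a model made individual predictions, so the two tools can be used side by side.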

She further pointed to studies showing that diverse teams produce less biased models and algorithms. Silicon Valley has a diversity problem that amplifies the issue of bias, and we need policies to address it.

AI: New frontier in competitive geopolitics

Rosenblum offered a unique insight into the problem: we are not only trying to protect consumers, we are also competing with China on the global stage in the race for AI technologies, as AI is a new frontier of competitive geopolitics. In 2017, Russian President Vladimir Putin said, “Whoever becomes the leader in this sphere will become the ruler of the world.” AI is a key pillar of China’s current five-year plan for science and technology development and is the centerpiece of its “Made in China 2025” industrial plan.

AI thrives on data, and in countries like China and Russia, companies big and small have access to massive troves of data that the government provides to them. Cutting off access to data is a death knell for many AI companies. This impact can be seen in data-intensive regulated industries like e-commerce and autonomous driving, where some experts feel that China is leading the U.S.

Rosenblum mentioned that any policy solution should be built on the foundations of disclosure and collaboration. He further stressed the importance of data ethics across fields – engineering, the sciences, law, and public policy – as well as the need for multidisciplinary programs that allow for cross-pollination of ideas. He believes in the opportunity of cross-border collaboration and cross-discipline investment that focuses primarily on U.S.-based science startups founded by the Chinese tech diaspora, helping to create a bridge between the countries.

Biases in pattern recognition

Rosenblum recounted the experience of Eric Yuan, the founder of Zoom, and discussed how difficult it was for Yuan to get venture funding.

“If his last name was mine, Eric Yuan would have been funded much faster,” said Eric Rosenblum.

Rosenblum cut one of the first checks for Yuan. He said that the VC industry is very good at identifying patterns, but when founders don’t fit the patterns, or when the patterns themselves carry biases, investors fail to recognize them. In his view, the hardest biases to see are those that have helped individuals become very successful in their careers. Rosenblum shared his process for mitigating his own biases: identify the bias and then allow for a “second look” at CVs from underrepresented minorities (or, more broadly, a second review to overcome the bias).

The biggest takeaway from the discussion is that the issue of AI harms is one of the most serious of our time, and we need a comprehensive regulatory and technical solution built on disclosure and principles of fairness – technical solutions that integrate into current data workflows.

Read more about the AI and Implicit Bias Policy Lab course offering at the Law School and about Nukala’s observations on compliance and deep change.