
AI and Implicit Bias Policy Lab: Compliance or Deep Change?

February 11, 2021

Chandra Nukala ML’21 is an accomplished, results-driven product leader with over 20 years of industry experience across Engineering, Program Management, Business Development, and Product Management. He has worked at large companies (notably Microsoft) and multiple tech startups, and is currently at a stealth AI startup. He holds an MS from USC, an MBA from Thunderbird School of Global Management, and seven technology patents.

Here, Nukala shares his experiences and observations as part of Policy Lab: AI and Implicit Bias, taught by Senior Adjunct Professor of Global Leadership Rangita de Silva de Alwis.

We are facing the existential issue of our time: the rise of big data and the use of AI. Massive sets of sensitive data are weaponized by companies using AI to determine how they do business with individuals, by financial institutions to decide whether to extend credit, and by identity thieves to commit fraud. When individuals and communities are algorithmically selected for opportunities and flagged as potential crime suspects, data is destiny and algorithms are the pre-cogs, like those that predict the future in the film “Minority Report.”

Groundbreaking analysis of AI and algorithmic data bias is being done at the University of Pennsylvania Carey Law School, led by Senior Adjunct Professor of Global Leadership Rangita de Silva de Alwis with students and industry leaders from around the world.

On our first day, we were joined by Mitchell Baker, the CEO of Mozilla; Craig Newmark, the founder of Craigslist; Steve Crown, a Vice President at Microsoft; and Mark Surman, the Executive Director of Mozilla.

One of the main challenges technology companies face today is whether to pursue compliance or to focus on deep cultural change. Mitchell Baker discussed how Silicon Valley has largely focused on being compliant: companies follow the framework for corporate compliance outlined in the United States Sentencing Guidelines, even though its individual steps bring little improvement to diversity and inclusion programs and in many cases are harmful. For example, many studies have shown that diversity training does not work, yet the Sentencing Guidelines state that “training” is the hallmark of a well-designed compliance program.

Most existing compliance programs are designed on a hub-and-spoke model: the core compliance team (the hub) uses the United States Sentencing Guidelines to build a program of policies and procedures, training, risk assessment, and resources, including a reporting hotline. What most companies have experienced is that these programs break down when the product teams (the spokes) must implement the key principles in their day-to-day decision making.

The primary reason for the breakdown is that AI bias and ethics are not top of mind for developers; they are treated as compliance checkbox items. Agile and DevOps development processes put the developer’s responsibility for delivering working code above all else (writing code, finding and fixing bugs, meeting project goals), while AI bias and other important concerns fall through the gaps. In some cases they are even deferred because the current project is a prototype or an experiment.

This brings us back to Mitchell Baker’s point: Are companies trying to be compliant or are they trying to reinvent themselves for the new age of big data and AI?

There are many initiatives looking at agile development processes to address the concerns highlighted above. These include the Data Ethics Canvas from the Open Data Institute, the Ethics Canvas from ADAPT (the Science Foundation Ireland research centre), and other work on agile ethics for AI. These processes are a good start, but the industry is at a point where it needs to add ethics and bias to the software developer’s accountabilities and deliverables.
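
To make that concrete, here is a minimal sketch, in Python, of what treating bias as a developer deliverable could look like: a fairness metric encoded as a test that gates the build the same way a failing unit test would. The metric, function names, sample data, and the 0.10 threshold are illustrative assumptions for this post, not a standard prescribed by the Sentencing Guidelines or by any of the canvases above.

```python
# Hypothetical sketch: a bias check expressed as a test, so a fairness
# regression blocks a release just like a functional bug would.
from typing import Sequence


def demographic_parity_difference(
    predictions: Sequence[int], groups: Sequence[str]
) -> float:
    """Largest gap in positive-prediction rates across groups."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    return max(rates.values()) - min(rates.values())


def test_parity_gate():
    # In a real pipeline these would come from a held-out evaluation set;
    # the toy data and the 0.10 threshold are assumptions for illustration.
    predictions = [1, 0, 1, 0, 1, 0, 0, 1]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    assert demographic_parity_difference(predictions, groups) <= 0.10


if __name__ == "__main__":
    test_parity_gate()
    print("parity gate passed")
```

A check like this can run in the same continuous-integration pipeline as the rest of the test suite, which is what moves bias from a compliance checkbox to an accountability the developer owns.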

Checking the compliance box may have been the first step, but in my view, leveraging existing data and adopting the advances of AI to execute these programs will put us on a path that makes a considerable difference in our future with ethics and AI. Until then, “Ethical AI” remains a compliance checkbox item.

Read more about the AI and Implicit Bias Policy Lab course offering at the Law School and about Nukala’s observations on whether AI can provide a “second look” regarding bias.