Importance of Open Models and Data in AI Decision-Making

Jeffrey P. Bigham

@jeffbigham

7/13/2017

A few weeks ago, I had a quick Twitter conversation with Tim O’Reilly, inspired by his tweet:

i'm pretty ok w/ blackbox AI catching a ball…

…have more questions about it making hiring decisions, sending ppl to jail, firing guns, etc

That conversation and this recent article describing how Peter Norvig may question the value of “explainable AI” led me to (eventually) write this post about the importance of open data and open models, which are fundamentally connected to bias in AI.

A lot (A LOT) has been written about the potential dangers of using AI (machine learning) in important human applications, e.g., hiring, jailing, shooting, etc. If you want a quick primer, I highly recommend Zeynep Tufekci’s TED Talk, which I had the pleasure of seeing on a recent Delta flight. Or, you can check out the article Tim was responding to about models beyond what we can understand. Or, this article about how AI is likely to interact with income inequality.

As people confront the idea of AI making decisions, there seem to be three stages of perspective -- (i) this is awesome!, (ii) we should remove bias in data, and (iii) screw it, people are biased too!

1.  The “wow this is awesome” stage!

Machines helping us make better decisions is completely awesome. I’m a scientist. I like data. We should use data to help us make better decisions. People get really excited about this, and they should (because it is awesome), but often they go a little too far and say, “Now, we can remove bias from our decisions!”

But, as has been written about extensively before, this ignores a huge problem -- models are only as good as the data they are trained on. And, humans are super biased. Training models on biased data leads to models that are really good at replicating biased human decisions.


2.  The “we just need better data” stage

People realize this and then enter the second stage, which is roughly, “Okay, our models are replicating the bias in human input data, and so we need to ensure the quality of our training data”. This is a very common response, and it seems like the solution. It probably is part of the solution. It was a slow day on Twitter, and not too many people paid attention to our conversation, but this tweet by O’Reilly seemed to get at least a bit of reaction:

“Risk of the bad data we feed into the model is greater than the risks of the model coming to a conclusion by methods we don’t understand.”

So, let’s just remove all bias from data! There’s some value to this approach, but it unfortunately replaces the original problem with a new impossible problem that we can’t hope to solve. You simply can’t remove all bias from data.

This follows directly from how machine learning works -- machine learning works by discovering patterns in data that we didn’t know were there. Bias is a kind of pattern. If we’re building sophisticated models that recognize patterns better than any human or prior model could, won’t they also uncover and replicate bias that we didn’t even know was in the data? If we didn’t know it was in the data, how could we have removed it? If we can’t understand the model, how can we understand its bias?
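Here’s a minimal sketch of that point (mine, not from the Twitter thread): even if you drop the obviously sensitive column, a correlated proxy can carry the bias right back into the model. The data and feature names are synthetic and hypothetical, and it assumes numpy and scikit-learn are available.

```python
# Synthetic, hypothetical example: bias survives via a proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)              # protected attribute (0/1), synthetic
zip_proxy = group + rng.normal(0, 0.3, n)  # proxy feature correlated with group
skill = rng.normal(0, 1, n)                # legitimate signal

# Historical (biased) labels: past decisions depended on group, not just skill.
hired = (skill + 1.5 * group + rng.normal(0, 0.5, n)) > 1.0

# Train WITHOUT the protected attribute -- only skill and the proxy.
X = np.column_stack([skill, zip_proxy])
model = LogisticRegression().fit(X, hired)

# The model still treats the two groups very differently, via the proxy.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted hire rate = {rate:.2f}")
```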

3.  The “Judge Models By Their Outputs” Stage

The third stage is, “we’ll never remove all bias in our models, but humans also have bias, so we should judge our models by their outputs.” This came up in my Twitter conversation, and is more or less what I took to be Peter Norvig’s current thinking based on his recent remarks[1].

There are multiple problems with this approach, but the primary one I’m worried about is that the bias that makes it into our models is likely to be hard to detect. Even people stuck at stage 2 above will probably (hopefully!) try to, or be forced to, remove common forms of bias from their models, e.g., overt racism or sexism. But bias will remain… both bias learned from biased human data and bias introduced as part of the machine learning process itself. What is overfitting, if not an extreme bias toward the examples that happened to be in the training data?

When people think about bias, they tend to think of the big, societal-level bias that we know happens. This is the bias applied to coarse attributes like gender, race, sexual orientation, or religion. This kind of bias is persistent in humans and affects human judgments. It’s also something we know to look for, and can arguably detect (sometimes). We might not always have the political will to do something about it, but we can often see that it happens, and we can agitate for something to be done to address it.

And, as the models encode more and more complex kinds of bias, discovering that bias from observing outputs alone will become increasingly difficult. You might be able to tell that an insurance AI is racially biased, even from a few examples, if every applicant of a certain race is declined or quoted higher rates.

But, would you be able to uncover a bias against males who live in odd-numbered zip codes and were married in 2010? That particular bias is probably unlikely, because presumably the dataset contains plenty of males in odd-numbered zip codes married in 2010 who were both good and bad insurance customers. In a way it’s a bad illustration, because it’s still human-understandable. But it’s a good illustration because, despite being human-understandable, it would still be difficult to uncover from the limited outputs we are likely to have.

Do we expect the insurance companies to provide open access to a large set of outputs and potential outputs, even if they won’t provide open access to data or models? It seems unlikely to me. And, in fact, without open data (inputs), what could we ever hope to learn from the outputs? Bias is evidenced by the connection between certain kinds of inputs and their outputs.

The bias in machines is likely to be much more complicated, and specific, and widespread, than in the example above. If bias is reflected across 10s, 100s, or 1000s of variables, and each instance only affects a few real people, how could we feasibly reconstruct complex, biased concepts from the limited data we’ll be able to scrape together? How would we even know where to look?
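Some back-of-the-envelope arithmetic (my numbers, purely illustrative, not from the post) suggests why output-only auditing doesn’t scale:

```python
# Hypothetical numbers: how many subgroups would an output-only audit
# have to examine?
from math import comb

n_features = 30                      # suppose just 30 binary attributes
print(2 ** n_features)               # 1,073,741,824 possible attribute combinations

# Even restricting to subgroups defined by exactly 3 of those attributes:
print(comb(n_features, 3) * 2 ** 3)  # 32,480 subgroups, each needing enough
                                     # observed decisions to test reliably
```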

Machines built and adopted precisely because they recognize patterns at a scale humans cannot will acquire biases that we will not know to look for in their outputs, and at a scale that will be infeasible for us to uncover.

After you’ve made it through these three stages, I think you either make peace with the potentially very negative effects of these models, or you recognize the need for open data and open models. Open data and open models will allow us to build tools that let us compete at scale to uncover the bias lurking within the complexities of the models we produce.

Okay, so, instead of open algorithms and open data, we’re stuck with “if you can collect a bunch of outputs, maybe you can figure out some high-level things it’s doing.” But that doesn’t actually help in the brave new AI world. The problem is that the model is going to learn all kinds of weird functions that might not obviously or directly map onto some high-level concept like “racism,” but as an individual you’re stuck with them, and there’s no way you can probe them. Trying to reverse engineer a function from observable outputs, especially when the inputs are severely restricted, is so inefficient as to be basically impossible in practice.

Without access to a transparent and understandable model, it will be nearly impossible to understand what biases a model might have. The computer scientist in me hesitates to say that it will be impossible. But, it will be impossible with only a handful of outputs.[2]

Maybe we’ll need to create parallel models that use the input data (if available) and the output data (if available) in an attempt to reconstruct the closed blackbox model. That’s a pretty fascinating research problem. It may even be reasonable to use such an approach to create an approximation of the model that is more understandable; that’s one of the ideas being pursued to make AI more explainable. But to do that, you need open data. Maybe models should be open, too. Even if models are open, we need techniques to effectively query them for bias.
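As a rough illustration of what such a “parallel model” might look like, here is a sketch of a global surrogate: fit a small, readable model to a blackbox model’s answers. This is my sketch, not anything from the post or the thread; the blackbox here is just a stand-in random forest, and it assumes numpy and scikit-learn.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 4))          # inputs we hope to have access to
y = X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5   # the hidden decision rule

# Stand-in for the closed model we can only query, not inspect.
blackbox = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit a shallow, readable tree to the blackbox's answers on available inputs.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, blackbox.predict(X))

# How faithful is the surrogate to the blackbox on fresh probe inputs?
probes = rng.normal(size=(2_000, 4))
fidelity = (surrogate.predict(probes) == blackbox.predict(probes)).mean()
print(f"fidelity to blackbox: {fidelity:.2f}")
print(export_text(surrogate, feature_names=["x0", "x1", "x2", "x3"]))
```

Note that the sketch only works because we can query the blackbox freely on inputs we choose; with a handful of outputs and restricted inputs, the surrogate would tell us very little.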

The data world today is composed of silos. The data silos are kept secret primarily because of their incredible value to the data companies, e.g., Google, Amazon, Facebook, Microsoft, Apple, etc. And, of course, it helps that we don’t really know how to release this data publicly without potentially opening up huge privacy concerns. And, so, they have a good excuse to keep it private. Models are also private, valuable intellectual property.

But, there are ways to open data and models for inspection and transparency without putting them out for public use and abuse. We could imagine data and model auditors. We could imagine reformed IP laws that provide some sort of protection for models. Or, we could decide that this business model is too dangerous for society and force data and models to be open despite the business consequences. Some of this is kind of happening in some places. We could make it a priority.

AI still has a long way to go before it competes with general human intelligence, but it is rapidly being deployed to assist in human decision making. Not only are we producing models that we cannot understand; those models also have biases that we don’t understand. We need to get ahead of this.

A world in which there are models that we cannot understand sounds like the plot for a dystopian sci-fi story, not a world in which we would be happy to live, or that we should accept as being inevitable.


[1] Truth be told, who knows what Peter believes, but this is how one media outlet chose to portray his views.

[2] It’s pretty obvious to me that you can’t reverse engineer a deep learning model trained on millions of examples from a few 10s or 100s of outputs, but please someone point to a proof in the comments :)


This page and contents are copyright Jeffrey P. Bigham except where noted.
Blog posts are not intended to be final products, but rather a reflection of current thinking and/or catalysts for discussion, like tweets but longer.