
How to Build an Ethical Algorithm

Algorithms determine a huge amount about the way we live and work online.

They determine what we see online and can tell us what type of healthcare we're going to get.

On this episode of Fast Forward, Michael Kearns, co-author of The Ethical Algorithm: The Science of Socially Aware Algorithm Design, explains how AI systems will change the world for the better, if we design them the right way.

Dan Costa: You are a professor of computer and information science at the University of Pennsylvania and you've written a book called The Ethical Algorithm: The Science of Socially Aware Algorithm Design with co-author Aaron Roth, also at the University of Pennsylvania.

It really presents a framework for how we can build moral machines that will actually adhere to the sort of ethical guidelines that we aspire to.

Let's start with why algorithms are important: what do we mean when we say "algorithm" in terms of AI, and what don't people realize about algorithms themselves?

Michael Kearns: First of all, algorithms of course have been around for a very, very long time, since before there were computers.

AI and machine learning are also very old fields, but I think what's really changed in the last 20 years, and especially the last 10, is that AI and machine learning used to be used in scientific applications because that's where there was sufficient data to train predictive models.

The rise of the consumer internet has now made all of us generate reams and reams of data about our activities, our locations, our preferences, our hopes, our fears, et cetera.

Now it's possible to use machine learning to personalize algorithmic decision making: sometimes decisions that we know about and want algorithms to be making for us, and sometimes decisions that we aren't even aware of.

What are some of the decisions that people may not be aware of?

Many of the examples in our book are ones where the decision has great consequences for the individual, and they may not even be aware that algorithms are being used to make, or to help make, the decision.

Examples would be things like consumer lending, whether you get a loan or a credit card when you apply for it, college admissions decisions, hiring decisions in HR departments, and even very consequential things like healthcare, and also what criminal sentence you receive or whether you get parole if you've been incarcerated.

Most people don't realize this is happening in both private businesses and in government.

Ideally these things are being introduced to make the decision process better and more informed and less biased.

Why isn't that happening?

I don't think the primary goal of most algorithmic decision making is to make things less biased; it's often to make it more efficient and to take advantage of the fact that we have massive amounts of data that can be used to build predictive models.

So rather than either human beings directly making the decisions, which can often be slow and also be biased in various ways, it's easier and expedient to take the data that you have and to essentially train a model.

It's really a form of self-programming, right? Rather than a computer programmer saying who should get a loan and who shouldn't, based on the attributes entered into a loan application, you just take a bunch of historical data about people you gave loans to, who repaid and didn't repay, and you try to learn a model separating the credit-worthy from the non-credit-worthy.

I think that often in business and elsewhere, the primary driver is efficiency and our book is really about the collateral damage that can come from chasing those efficiencies.

Let's talk about some of those examples.

A few weeks ago, there was a study about a hospital that was using an algorithm to help determine who to give medical care to and how much medical care to give.

There was some analysis, and it was determined that the algorithm was systematically under-serving African-American patients and therefore over-serving white patients.

Yes, and I think it actually wasn't one hospital, it was many hospitals that were all using some third-party algorithm that had the problem you described.

It highlights one of the several ways in which things like racial, gender, and other bias can creep into algorithms.

In that particular case, the problem wasn't really with the algorithm, which is often a source of bias or discrimination, and it also wasn't with the data itself; it was actually the objective the company used to train the model.

The purpose of this model was to try to assess patients' health to decide what level of healthcare they needed or to intervene with a treatment of some kind.

But, actually measuring somebody's health is a complicated, multidimensional thing.

In other words, it's hard to gather the right data to train for that goal.

What this company apparently did was say, "Well, let's just use healthcare cost as a proxy for health.

Let's assume that in our historical dataset, the people who had higher health expenses were the sicker ones and the people with lower health expenses were the healthier ones." The problem with this is that the model learned to discriminate against African-Americans because, in the aggregate, they systematically had lower healthcare costs, not because they were less sick, but because they had less access to healthcare.

This is a classic example of what happens when you have one goal, but it's hard to gather the data to target that goal directly, or doing so would require a more expensive data-gathering process.

So they used this proxy instead, and that proxy essentially perpetuated the bias into their model.
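
To make that mechanism concrete, here is a minimal Python sketch; all numbers, group labels, and variable names are invented for illustration and are not taken from the actual study. Two groups are equally sick, but one has less access to care and therefore lower recorded costs, so a cost-based flag for extra care systematically misses members of that group.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical underlying sickness; group 1 has less access
# to care, so the same sickness produces lower recorded spending.
group = rng.integers(0, 2, n)
sickness = rng.normal(0.0, 1.0, n)                  # true health need
access = np.where(group == 1, 0.6, 1.0)             # unequal access to care
cost = access * sickness + rng.normal(0.0, 0.2, n)  # observed spending

# A model trained on cost would flag the top spenders for extra care;
# compare who gets flagged with who is actually the sickest.
flagged_by_cost = cost > np.quantile(cost, 0.9)
truly_sickest = sickness > np.quantile(sickness, 0.9)

for g in (0, 1):
    print(f"group {g}: flagged {flagged_by_cost[group == g].mean():.3f}, "
          f"truly sickest {truly_sickest[group == g].mean():.3f}")
```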

It's interesting because when you hear about bias in the algorithm, you think that, well certainly there's some point where you're asking about racial backgrounds.

That's actually very rarely the case; it's those secondary consequences, those correlations that you may not understand when you're first programming the algorithm.

That's right.

In fact, I think one of the things we've learned in recent years is that, just because you don't include a variable like race or gender in your model is absolutely no guarantee at all that your model won't end up discriminating by race and by gender.

There are a number of reasons why this can happen, and it's interesting because for instance, in lending and credit, there are longstanding laws in the US that say, "Thou shalt not use race as an input to your predictive models." In the era that these laws were developed, I think the intent was to protect racial minorities from discrimination by models, but it happens nevertheless.

One of the many reasons it can happen is that these days, especially when so much is known about us, there are so many sources of data about us that are available.

There are just too many proxies for things like race.

I mean, you don't have to tell me what your race is for me to figure it out, at least in a statistical sense from other sources of data.

One unfortunate example is that in the United States, your ZIP Code is already a rather good indicator of your race.

So this is the kind of thing that can happen.
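
The same point can be shown with a short synthetic sketch in Python (again, every number here is invented for illustration): race is never given to the model, but because ZIP code is correlated with race and the historical labels were biased, the learned decisions track race anyway.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

# Race is never shown to the model, but ZIP code correlates with it
# (a crude stand-in for residential segregation).
race = rng.integers(0, 2, n)
zip_code = np.where(rng.random(n) < 0.85, race, 1 - race)

# True creditworthiness is identical across groups, but the historical
# approval labels were biased against group 1.
score = rng.normal(0.0, 1.0, n)
biased_denial = (race == 1) & (rng.random(n) < 0.3)
approved = ((score > 0) & ~biased_denial).astype(int)

# Train only on score and ZIP code; race is excluded as an input.
X = np.column_stack([score, zip_code])
pred = LogisticRegression().fit(X, approved).predict(X)

for r in (0, 1):
    print(f"race {r}: predicted approval rate = {pred[race == r].mean():.3f}")
```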

Let's talk about another example of a misunderstood algorithm.

You talked about criminal risk-assessment algorithms; COMPAS is one of these algorithms that has been used for almost 20 years now.

A lot of people have gone through the system, and there have been some reports of flaws and fairness problems in the algorithm, but the issue is actually pretty complicated and nuanced.

That was a relatively recent controversy that I think helped advance our understanding of the challenges of algorithmic fairness.

A company built this criminal recidivism prediction model, almost a Minority Report type of model that, based on somebody's criminal history, tries to predict whether they will recidivate, or essentially recommit a violent crime, sometime in the next two years.

These kinds of risk-assessment models are often used in different jurisdictions by judges who are deciding whether to give people parole or not.

So it's very, very consequential stuff.

The investigative nonprofit ProPublica took a hard look at this model and demonstrated that it had a systematic racial bias, that it was discriminating against African-Americans and other racial minorities.

So there was controversy and there was back and forth between ProPublica and the company that had developed the model, with ProPublica saying, "Your model is unfair." Then NorthPointe, which was the company that developed it, came back and said, "No, we were aware of these issues and we deliberately made sure our model was fair, but we used this other definition of fairness."

If you dig into the weeds on this, both of these definitions of fairness are entirely reasonable and desirable.

In fact, you'd like to have both of them. Then researchers started scratching their heads and saying, "Okay, who's right here?" Then some of the more theoretically inclined ones sat down and thought, "Is it even mathematically possible to satisfy both of these fairness definitions simultaneously?" Then they proved that it was not.

This is especially enlightening, or disturbing, depending on your viewpoint, because it shows that the algorithmic study and implementation of fairness is going to be messy: when you ask for one type of fairness, you may have to give up on another.

I think we've been pretty clear about how complicated this gets very quickly.

In your book, you offer some advice for how to build ethics into these algorithms from the start.

How do we go about doing that?

The main point of our book is that we are optimists. We are machine learning researchers, but we're also aware of the antisocial behavior that algorithms have demonstrated in the past five years and the rising popular alarm over it.

We share that alarm, and we felt like most of the books that we've read, many of which we've liked a great deal, are very good at pointing out what the problems are; but when it comes to solutions, their answers are of the form "We need better laws, we need better regulations, we need watchdog groups, we really have to keep an eye on this stuff." We agree with all of that, but while that's going on, things like regulatory or legal solutions take a long time, right?

If algorithms are misbehaving, we could think about making the algorithms better in the first place.

If we're worried about a criminal recidivism model demonstrating racial bias or another algorithm leaking private data, we could ask whether we could literally change the code in those algorithms and eradicate, or at least reduce, those problems.

The good news from our perspective is that over the last 10 years or so, a growing number of researchers in the field, ourselves included, have been working on exactly how you would do that and what it would mean.

The general kind of recipe, if you like.

I don't think we're quite to the point where we can call it a recipe, but the general process is that first you have to state what you're worried about, like privacy leakage, or fairness, or what have you.

Then, anytime you're going to explain something to a computer, anytime you're going to put something in an algorithm, you have to be exceedingly precise.

You can't wave your arms and say, "Hey, try to be more fair," right? You need to pick a definition of fairness that you could write down mathematically and you need to encode it, embed it in the algorithm itself.

To give a concrete example, many of the problems of machine learning arise from the fact that it generally has a singular, very clear objective, which is minimizing error.

So you take some historical training data: you've got loan applications represented by a vector X, and you've got some outcome that you know happened historically, like whether this person did or did not repay their loan.

What you would normally do is take a big pile of data like that and say, "Okay, I want to use some machine learning algorithm to find a model that, on this historical dataset, makes as few mistakes in predicting loan repayment as possible." Totally sensible principle.
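
As a rough illustration of that standard setup, here is a minimal Python sketch using scikit-learn on a purely synthetic loan dataset with invented feature names; the only thing the training objective cares about is predictive accuracy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical loan data: each row is an application (the
# vector X above), each label is 1 if the loan was repaid.
rng = np.random.default_rng(42)
n = 5_000
X = np.column_stack([
    rng.normal(50, 15, n),    # income (invented feature)
    rng.normal(650, 80, n),   # credit score (invented feature)
    rng.integers(0, 30, n),   # years of credit history (invented feature)
])
repaid = (0.02 * X[:, 0] + 0.01 * X[:, 1] + 0.05 * X[:, 2]
          + rng.normal(0, 2, n) > 8.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, repaid, random_state=0)

# The standard objective: fit the model that makes as few mistakes as
# possible on the historical data -- nothing here mentions fairness.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```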

The problem is that, especially when the model classes are extremely rich and complex, I didn't say anything in that statement about fairness.

So I didn't say for instance, "Make sure that the false rejection rate on black people is not too much higher than it is on white people." I just said minimize the error overall.

If, for instance, black people are in the minority in my dataset, or if there's some little corner of the model space where the overall error can be reduced even infinitesimally at the great expense of racial discrimination, standard machine learning is going to go for that corner.

So what's the fix? The fix broadly is to change the objective function and say, "Don't just minimize the error; minimize the error subject to the constraint that the difference in false rejection rates between black people and white people is no more than 1 percent, or 5 percent, or 10 percent." You can say, "I want perfect fairness, 0 percent discrepancy in false rejection rates between different racial groups." Or I can relax that a little bit; of course, if I turn that knob all the way to allowing 100 percent discrepancy in false rejection rates, it's like normal machine learning.

It's like I'm not asking for any fairness at all.
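
One simple way to sketch that knob in code, as an illustration under assumptions rather than the specific algorithms from the book: given several already-trained candidate models, keep only those whose false-rejection-rate gap between two groups is within an allowed discrepancy gamma, and return the most accurate survivor.

```python
import numpy as np

def false_rejection_rate(y_true, y_pred, group, g):
    """Among members of group g who actually repaid (y_true == 1),
    the fraction the model rejected (y_pred == 0)."""
    mask = (group == g) & (y_true == 1)
    return float(np.mean(y_pred[mask] == 0)) if mask.any() else 0.0

def pick_fairest_accurate(models, X, y, group, gamma):
    """Minimize error subject to the constraint that the false-rejection
    gap between groups 0 and 1 is at most gamma (the fairness 'knob')."""
    best_model, best_acc = None, -1.0
    for m in models:
        y_pred = m.predict(X)
        gap = abs(false_rejection_rate(y, y_pred, group, 0)
                  - false_rejection_rate(y, y_pred, group, 1))
        acc = float(np.mean(y_pred == y))
        if gap <= gamma and acc > best_acc:
            best_model, best_acc = m, acc
    return best_model, best_acc  # best_model is None if nothing qualifies
```

Setting gamma to 1.0 imposes no constraint at all and recovers ordinary error minimization, exactly the knob-turned-all-the-way case described above; setting it to 0.0 demands perfect parity in false rejection rates.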

I imagine it's just so antithetical to most engineers' thinking, because you're basically going to sacrifice accuracy in order to accommodate these other principles, which are a little more philosophical.

I don't think it's actually the scientists and engineers who have any difficulty with it.

First of all, they understand the original principle of machine learning: just minimize the error.

They're very used to solving constrained optimization problems also.

So they certainly can understand the math behind this alternative where you're taking fairness into consideration.

It's the CEO.

The hard part is the business.

Right, because it really will mean, as you say, less accuracy, right? If the most accurate model, ignoring fairness, was racially discriminatory, then getting rid of that discrimination can only make the error go up.

I think we're at the point where the science of these kinds of trade-offs is pretty well in hand.

I mean, on actual datasets like the COMPAS criminal recidivism dataset, you can literally trace out, numerically, the trade-off that you face between accuracy and fairness.
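
A sketch of how that numerical tracing might look, reusing the hypothetical pick_fairest_accurate helper from the earlier snippet; candidate_models, X_val, y_val, and group_val are assumed to come from your own training pipeline and are not defined here.

```python
# Sweep the fairness knob and record the best accuracy attainable at each
# setting; plotting allowed gap against accuracy traces the trade-off curve.
gammas = [0.0, 0.01, 0.05, 0.10, 0.25, 0.50, 1.00]
curve = []
for gamma in gammas:
    model, acc = pick_fairest_accurate(candidate_models, X_val, y_val,
                                       group_val, gamma)
    curve.append((gamma, acc if model is not None else None))

for gamma, acc in curve:
    print(f"allowed gap {gamma:.2f}: best accuracy {acc}")
```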

I think the hard part is explaining to non-quantitative people what that curve means, first of all.

Then once they understand it sort of saying to them, "Okay, you need to pick one point on this curve.

You have to decide the relative importance of accuracy and fairness."

Remember, in many applications, accuracy translates into profits, right? So, if you are Google or Facebook and you're using machine learning, as they are, to do targeted advertising to your users on...
