
Algorithms, Like People, Discriminate Too

Businessman sitting on a chair and studying math formulas on blackboard

Who will be held accountable when big data analytics discriminate in the marketplace? Nuala O’Connor, President & CEO of the Center for Democracy & Technology, explains there are no algorithms without humans.

 

As we browse the internet, we are being judged. What we read, buy and even try to keep secret in "private browsing" mode is stored and analyzed to build a profile of who we are. This profile is then used to tailor our experience in the digital world. More and more products and websites use these profiles to personalize content in an effort to capture mindshare in a competitive environment.

But what, or who, makes the decisions behind all of this sorting and personalizing? Digital impressions of our identity are captured and maintained by the large amounts of code that power the internet – code, written by humans, that can potentially lead to discriminatory outcomes.

Personalization is a huge achievement, but it also presents new problems for policymakers. There is evidence that some online sorting can result in discriminatory outcomes. Take the Wall Street Journal's report that Staples.com varied product prices based on a customer's proximity to competing stores. If the customer's IP address placed them within roughly 20 miles of a competitor's store, they saw a lower price than a customer located near only a Staples store. Even more alarmingly, the WSJ found that the areas receiving the lower prices tended to have higher average household incomes than the areas receiving the higher prices. Lower prices, higher income; higher prices, lower income.
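To make the mechanics concrete, here is a minimal sketch, in Python, of how distance-based price tiering of this kind could work. It is an illustration only, not Staples' actual logic: the radius, discount rate and function name are assumptions, and a real system would derive the shopper's location from IP geolocation rather than take it as an argument.

```python
# Hypothetical sketch of distance-based price tiering (illustrative only;
# not Staples' actual code). The radius mirrors the ~20 miles the WSJ
# reported; the discount rate is an assumed value.

COMPETITOR_RADIUS_MILES = 20.0   # assumed threshold
DISCOUNT_RATE = 0.10             # assumed discount applied near competitors


def personalized_price(base_price: float, miles_to_nearest_competitor: float) -> float:
    """Return a lower price when the shopper appears to be near a competing store."""
    if miles_to_nearest_competitor <= COMPETITOR_RADIUS_MILES:
        return round(base_price * (1 - DISCOUNT_RATE), 2)
    return base_price


# Example: the same item costs less for a shopper 5 miles from a rival store.
print(personalized_price(19.99, 5.0))    # 17.99
print(personalized_price(19.99, 45.0))   # 19.99
```

Nothing in this logic mentions income. Yet because competing stores cluster in higher-income areas, a rule like this ends up charging lower-income shoppers more, showing how a facially neutral input can still produce a discriminatory outcome.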

As a result of this and other investigatory research, there is a growing policy debate around the potential harms of personalization. One of the most difficult questions is how to build accountability into a system that seems to have a life of its own. Throughout this debate, we must keep in mind that humans remain at the heart of automation through building, testing, refining, auditing and evaluating these systems. And companies must think through the real-world implications of these systems and which types of information they use, because their potential impact is widespread. As the White House’s big data report concludes, “big data analytics have the potential to eclipse longstanding civil rights protections in how personal information is used in housing, credit, employment, health, education and the marketplace.”

So what can companies do? Ask this basic question: Can this automated system lead to discrimination?

It is difficult, if not impossible, for a computer to sort fact from stereotype. This obligation falls to humans and the companies that employ them. Currently, there is no incentive for companies to take the time to address these issues. But new research by the Center for Democracy & Technology and a team from the UC Berkeley School of Information shows that people (known to companies as “users”) have strong feelings about personalization based on some characteristics. People, when they understand how they are being profiled, find personalization based on race and household income to be highly unfair in advertising, search results and, particularly, in pricing. Characteristics that may not be considered private in an offline context, like race and gender, were nonetheless considered to be sensitive in the context of online profiling. Several of the individuals who responded to the survey noted their concern that they might miss out on something as a result of the judgments made about them.

This research demonstrates that the judgments behind personalization are not harmless, in either effect or perception. There are real material harms that people can experience as a result of automated sorting, and people don't like the possibility of missing out on something that was deemed "irrelevant." This fact creates a tremendous incentive for companies to innovate and lead the way in fair automation. And this comes down to humans.

The Center for Democracy & Technology has developed a process intended to interrogate the assumptions of the humans writing computer code. It relies on inserting questions into the process that dig deeper than statistical relationships: Where did this data come from? Is it representative of all populations? Does the processing mechanism amplify the patterns of a majority and impose them on a minority? It's not enough to pose these questions while writing the algorithm. Companies should also create a feedback loop to capture any surprising or unintended consequences of their automation.
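As a rough illustration of what asking "is this data representative?" might look like in practice, here is a hedged Python sketch that compares a training dataset's group shares against benchmark population shares and flags large gaps for human review. The group names, benchmark figures and threshold are assumptions for the example; this is not CDT's actual methodology.

```python
from collections import Counter

# Illustrative representativeness check (not CDT's actual process): flag
# groups that are badly under-represented in a dataset relative to a
# reference population, so a human can investigate before the model ships.

REFERENCE_SHARES = {        # assumed benchmark shares, e.g. census figures
    "group_a": 0.60,
    "group_b": 0.30,
    "group_c": 0.10,
}
UNDERREPRESENTATION_THRESHOLD = 0.8   # assumed: flag if under 80% of expected share


def representation_report(records: list[dict], group_key: str = "group") -> dict:
    """Compare each group's share of the data to its reference share."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in REFERENCE_SHARES.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "observed_share": round(observed, 3),
            "expected_share": expected,
            "flag": observed < expected * UNDERREPRESENTATION_THRESHOLD,
        }
    return report


if __name__ == "__main__":
    data = ([{"group": "group_a"}] * 70
            + [{"group": "group_b"}] * 28
            + [{"group": "group_c"}] * 2)
    print(representation_report(data))   # group_c is flagged as under-represented
```

A flagged group is not proof of bias, but it is exactly the kind of surprise a feedback loop should surface to a human reviewer rather than leave buried in the statistics.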

If managed correctly, automated systems will not only become less mysterious and more transparent, they will also become more accurate, all of which builds customer trust. And as the online advertising and retail spaces become even more crowded, the companies that customers trust will be the ones that thrive in the long term.

(Top image: Courtesy of Getty Images.)

Nuala O’Connor is the President & CEO of the Center for Democracy & Technology.

All views expressed are those of the author.

