October 7th, 2019

Good morning <<First Name>>,

We hope you had a wonderful weekend and are looking forward to the week ahead. We most certainly are over here at the ever-optimistic, glass-half-full Venn offices.

One person who isn't likely to share our sunny disposition this fine Monday morning is a certain President Trump. On Friday an anonymous second whistleblower with “firsthand knowledge” came forward with further information related to his dealings in Ukraine. This fresh revelation may undermine the President's primary line of defense - that the accusations are based on incorrect, second-hand information - and may well bolster the Democrats' case for impeachment.

But as much as impeachment may be fresh on our minds, there are other issues that still warrant our attention too. This week we're turning our sights to the somewhat thorny question: Is AI biased? It's a fascinating topic, as we're sure you'll concur, and offers a welcome, if momentary, break from the never-ending psychodrama playing out in Washington.

Enjoy!

How to access the podcasts:

Option 1: Click the ListenNotes link below for the full, easy-to-use playlist. To learn how to download it into your preferred podcast app using the RSS feed, follow the tutorial on our Instagram story.



Option 2: For those of you who love Spotify, we now have all of our playlists there as well! To access our full playlist, click on the link below.



Option 3: Click on any of the podcast pictures when you scroll down to go to the individual episode on the podcast host's website, or on the app links to take you directly to the episode on Apple Podcasts or Spotify.
 THE VENN DEEP DIVE: 
Overview: Machine Learning Bias
  • Eliminating human bias from decision making around criminal justice, law enforcement, hiring processes, and financial lending can only be a good, and much needed, thing. And it’s easy to think that government agencies turning increasingly to supposedly ‘dispassionate’ technologies in order to do this can similarly only be a good thing. Remove or reduce the potentially biased human input in deciding, for example, whether to identify someone as a suspect in a crime, let an algorithm decide instead, and surely we remove or reduce the prejudice in such processes?
  • But what if these technologies are perpetuating rather than tackling these very biases? A growing number of studies suggest they are. 
  • Machine-learning bias is widespread, and it’s making the sort of data-powered predictive technologies we increasingly rely on not only replicate, but also amplify, existing prejudices and inequalities. Insofar as we perceive these tools as being innately objective, we run a greater risk of failing to detect how and when they further embed discriminatory practices into our systems. In 2014 a government report warned that automated decision-making “raises difficult questions about how to ensure that discriminatory effects resulting from automated decision processes, whether intended or not, can be detected, measured, and redressed.”
OK, so what does this actually mean?
  • The best place to begin is with an understanding of machine-learning algorithms. Machine-learning algorithms use statistics to find patterns in large amounts of data (numbers, words, images, clicks, etc.). Systems like Netflix, for example, rely on machine learning to make personalized recommendations to users. The platform works by collecting as much data about you as possible and then using machine learning to guess what you might want to watch next. So, the more you watch, the more data is collected about you, the more up-to-date the algorithm becomes, and hence the more tailored its recommendations. In essence, the process is pretty simple: “find the pattern and apply the pattern”.
  • While this seems innocuous in the context of a bit of Netflix and chill, the way algorithms learn from data begins to pose a major problem when the data being fed into the system is skewed or the algorithmic logic is faulty, and the technology is being used to inform something a bit more consequential than a personalized film list. Inadequate or misrepresentative datasets, or biased logic, lead to inaccurate results.
  • Take, for instance, Google’s photo algorithm, which auto-tagged two black friends as ‘gorillas’ back in 2015 because the program hadn’t been properly trained to recognize darker skin tones. Or the British pediatrician who, that same year, was denied access to the women’s locker room at her local gym because the smart software overseeing its membership had coded ‘doctor’ as ‘male’. Or the much-reported case three years later in which Amazon’s Rekognition system incorrectly matched 28 members of Congress to criminal mug shots.
  • Bias can also show up in training data when the data collected reflects existing prejudices. For example, Amazon recently found that its internal recruiting tool was excluding female candidates. Because the system was trained on data from the company’s previous hiring practices, which had favored men over women, the recruiting tool learned to do the same thing, essentially teaching itself that male candidates always trumped female candidates. So, if the data being fed into an algorithm is biased, the algorithm will replicate those same patterns of bias (see the toy sketch just below this list).
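For those curious about the mechanics, here is a minimal sketch in Python of the “find the pattern and apply the pattern” problem described above. The data is entirely synthetic and the model is a toy logistic regression, not Amazon’s actual system; it simply shows how a model trained on historically skewed hiring decisions reproduces the skew.

```python
# Toy illustration only: synthetic "historical hiring" data in which equally
# qualified women were hired less often, fed to a simple classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
experience = rng.normal(5, 2, n)      # years of experience (the legitimate signal)
is_female = rng.integers(0, 2, n)     # 1 = female, 0 = male (synthetic attribute)

# Historical decisions: driven by experience, but with a penalty applied to women.
hired = (experience - 2.0 * is_female + rng.normal(0, 1, n)) > 4

X = np.column_stack([experience, is_female])
model = LogisticRegression().fit(X, hired)

# Two candidates identical in every respect except gender get different scores:
print("P(hire | male):  ", model.predict_proba([[6.0, 0]])[0, 1])
print("P(hire | female):", model.predict_proba([[6.0, 1]])[0, 1])
# The model has "found the pattern and applied the pattern" -- including the
# discriminatory part it inherited from the historical labels.
```

Nothing in the code itself is malicious; the bias is inherited entirely from the historical decisions it was trained on.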
What on earth is algorithmic confirmation bias…?!
  • It’s actually not as complicated as it sounds… Suresh Venkatasubramanian, a computing professor at the University of Utah who studies algorithmic fairness, explains it pretty well: “The focus on accuracy implies that the algorithm is searching for a true pattern, but we don’t really know if the algorithm is, in fact, finding a pattern that’s true of the population at large or just something it sees in its data.” Biased data can, therefore, create ‘feedback loops’ that act as a sort of ‘algorithmic confirmation bias’, whereby the system locates what it expects to find rather than what is actually there.
  • Such prejudicial feedback loops are deeply problematic since they risk reinforcing societal inequalities. This is of notable concern when it comes to using AI in the criminal justice system. Consider COMPAS, for example, an algorithm that assesses the likelihood of a defendant or convict committing a future crime. This widely used algorithm generates a risk score, which is then used across the criminal justice system to inform decisions around sentencing, bail, and parole. However, an investigation by ProPublica found that black defendants were nearly twice as likely as their white counterparts to be mislabeled as ‘likely to re-offend’, while white defendants who went on to re-offend were more likely to have been labeled ‘low-risk’ (the sketch after this list shows the kind of error-rate comparison behind that finding).
  • But even worse, some critics argue, is that some of the data being used by systems like COMPAS invariably reflects the myriad racial inequalities that pervade the criminal justice system. The fact that black people are arrested more often than white people despite committing crimes at similar rates, are more likely to be searched or arrested during a traffic stop, and typically receive harsher sentences from the courts is important context that won’t register with an algorithm, which will take those numbers at face value and make risk assessments accordingly.
  • As Ifeoma Ajunwa, a law professor who has testified before the Equal Employment Opportunity Commission on the implications of big data, notes, the number of convictions a person has “is not a neutral variable.”
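To make the ProPublica-style analysis above concrete, here is a hedged sketch of the kind of group-wise error-rate comparison involved. The numbers are invented for illustration and this is not the real COMPAS dataset; the point is simply that a respectable-looking overall accuracy can hide very different false positive rates across groups.

```python
# Illustrative only: not the real COMPAS data. Compare false positive rates
# (flagged "high risk" but did not re-offend) across two groups.
import pandas as pd

df = pd.DataFrame({
    "race":       (["black"] * 4 + ["white"] * 4) * 100,
    "high_risk":  ([1, 1, 1, 0] + [1, 0, 0, 0]) * 100,   # the algorithm's label
    "reoffended": ([0, 1, 0, 0] + [0, 1, 0, 0]) * 100,   # what actually happened
})

def false_positive_rate(group: pd.DataFrame) -> float:
    # Among people who did NOT re-offend, how many were flagged high risk?
    did_not_reoffend = group[group["reoffended"] == 0]
    return did_not_reoffend["high_risk"].mean()

print(df.groupby("race").apply(false_positive_rate))
```

In ProPublica’s analysis, it was exactly this kind of gap, defendants flagged as high risk who did not go on to re-offend, that exposed the disparity.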
But why the bias? 
  • Diversity. Or a lack thereof. It is well known that the tech industry is woefully short on diversity. So much so, in fact, that a recent report by the AI Now Institute referred to a “diversity crisis” in the AI sector across both gender and race. Women comprise only 15% of AI research staff at Facebook, and 10% at Google, for example. Meanwhile, black workers make up only 4.8% of Google’s overall workforce in the U.S., 3.8% of Facebook’s and 4.1% of Microsoft’s. And this matters. In the absence of diversity amongst programmers, companies are simply less likely to spot biases in training data (Google’s debacle with its racist photo-tagging algorithm is a case in point).
  • Melinda Gates recently warned that without a diverse workforce programming artificial intelligence, and thinking about the data sets to feed in, we will have “so much hidden bias coded into the system that we won’t even realize all the places that we have it.” 
And the response?
  • A growing backlash is beginning to emerge around the use of facial recognition software for law enforcement purposes because of the risks associated with its algorithmic biases. While such software is pretty good at identifying white, male faces, its relative uselessness at identifying people of color and women could have significant implications when the tech is used by law enforcement agencies. 
  • For example, a study into Amazon’s controversial facial recognition software ‘Rekognition’ found that it had a 0% error rate when identifying lighter-skinned males, but a 31.4% error rate when identifying darker-skinned females (the sketch after this list shows how such subgroup error rates are calculated). In a separate test by the ACLU, the software misidentified 28 members of Congress (disproportionately people of color) as people who had been arrested for a crime. Misidentifying someone could, as critics point out, lead to wrongful convictions.
  • Amid mounting pressure from civil rights groups over how the technology could be misused, the Congressional Black Caucus penned a letter to Amazon CEO Jeff Bezos voicing concern about the “profound negative unintended consequences” face surveillance could have for black people, undocumented immigrants, and protesters. As the ACLU noted: “An identification — whether accurate or not — could cost people their freedom or even their lives. People of color are already disproportionately harmed by police practices, and it’s easy to see how Rekognition could exacerbate that.”
  • As criticism of facial recognition software mounts, several cities, including Oakland, San Francisco and Somerville, Mass., have now banned government use of the technology, while the California legislature is considering a bill that would ban the tech in police body cameras.
  • Nonetheless, the global market for this tech is incredibly lucrative - forecasts predict it will grow to $7.0 billion by 2024 - so there is a strong financial incentive to continue pushing it out into broader areas of our lives.
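For readers who want to see how figures like “0% versus 31.4%” are produced, here is a small illustrative sketch of a disaggregated evaluation: instead of reporting one overall accuracy number, the error rate is computed separately for each intersectional subgroup. The data below is made up; it is not the researchers’ actual dataset or Amazon’s.

```python
# Illustrative only: made-up face-recognition results, evaluated per subgroup.
import pandas as pd

results = pd.DataFrame({
    "skin_tone": (["lighter", "lighter", "darker", "darker"] * 2) * 125,
    "gender":    (["male", "female", "male", "female"] * 2) * 125,
    "correct":   ([1, 1, 1, 0] + [1, 1, 1, 1]) * 125,   # did the system get this face right?
})

# A single overall accuracy number looks fine...
print("Overall accuracy:", results["correct"].mean())

# ...while the disaggregated view shows who actually bears the errors.
error_rates = 1 - results.groupby(["skin_tone", "gender"])["correct"].mean()
print(error_rates)
```

The overall accuracy looks respectable while the per-subgroup breakdown shows exactly who the errors fall on, which is why researchers increasingly insist on this kind of reporting.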
Is there anything else being done?
  • Yes. The tech industry is beginning to wake up to the problem of bias in its software, and some companies, like IBM, are releasing “debiasing toolkits” to address the issue. Such products work by scanning AI systems for bias, for example by analyzing the data they’re trained on, and then making adjustments to improve their fairness (see the sketch after this list for one common technique).
  • However, according to a new report by the AI Now Institute, “debiasing toolkits” may actually cause more harm than good. The report’s authors argue that we need to properly evaluate the impact AI systems are having on the real world, even after being technically ‘debiased,’ and face the fact that some of these systems ought not to be created in the first place. 
  • What’s more, the response of certain tech behemoths when issues around bias in their software have been flagged has, up until now, been pretty underwhelming. For example, when Google was found to be identifying black people as ‘gorillas’, its response was simply to erase gorillas, and certain other primates, from the service’s lexicon. At the start of 2018, Wired magazine reported that Google Photos still wouldn’t label gorillas at all; the terms had simply been blocked rather than the underlying recognition problem fixed. If we are going to properly address the issue of bias in AI, we need more than short-term workarounds like this from the monopoly companies dominating the industry.
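As a final illustration, here is a hedged sketch of one technique that toolkits like IBM’s AI Fairness 360 package up, known as ‘reweighing’: training examples are weighted so that no (group, outcome) combination dominates the fit. This is a simplified stand-in written from scratch on synthetic data, not IBM’s actual implementation, and, as the AI Now report argues, it addresses only one narrow slice of the problem.

```python
# Simplified stand-in for a "reweighing" pre-processing step, on synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000
group = rng.integers(0, 2, n)                  # protected attribute (0 or 1)
score = rng.normal(0, 1, n)                    # a legitimate feature
# Historical outcomes are skewed against group 1:
label = ((score - 0.8 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

df = pd.DataFrame({"group": group, "score": score, "label": label})

# Reweighing: weight each (group, label) cell by expected / observed frequency,
# so that over- and under-represented cells stop dominating the fit.
p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_cell = df.groupby(["group", "label"]).size() / len(df)

cells = list(zip(df["group"], df["label"]))
weights = np.array([p_group[g] * p_label[l] / p_cell[(g, l)] for g, l in cells])

# Train on the reweighted data; downstream checks (like the error-rate
# comparisons sketched earlier) would then verify whether the gap has narrowed.
model = LogisticRegression().fit(df[["score", "group"]], df["label"], sample_weight=weights)
```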
What the candidates say:
  • Bernie Sanders: In August, the Vermont senator became the first presidential candidate to call for an outright ban on the use of facial recognition software for policing. He also called for an investigation into the algorithmic risk assessment tools used to predict the likelihood of criminals reoffending.
  • Elizabeth Warren: Shortly following Sanders, Warren released her own criminal justice reform plan in which she pledged to create a task force to “establish guardrails and appropriate privacy protections” for surveillance tech, including “facial recognition technology and algorithms that exacerbate underlying bias.” She stopped short of proposing a ban.
  • Kamala Harris: Promises to work with stakeholders, including civil rights groups, technology groups, and law enforcement, to put in place regulations and protections that would ensure technology used by federal law enforcement — such as facial recognition and other surveillance — doesn’t perpetuate racial disparities or other biases.
  • Cory Booker: Has advanced legislation pushing for algorithmic accountability, including co-sponsoring the Algorithmic Accountability Act earlier this year.
***
Want to know more and dig a little deeper into this fascinating topic? Of course you do. Below are three carefully selected podcast episodes to help you do just that.

THE PODCASTS

A.I. IS THE FUTURE AND THE FUTURE IS HERE


“Sleepwalkers” is the new and much-talked-about podcast series exploring the AI revolution and the ever-growing impact technology is having on the human experience. In this episode, the creators of the show discuss why and how they chose to dig into some of the fundamental questions surrounding AI, and why it is so critical that we ask these questions now. AI is everywhere, they remind us, and it’s already changing our perception of the world and how we relate to each other. We need to understand the future we’re creating, lest we sleepwalk into a dystopia of our own making.


01:06:25

IS A.I. RACIST? 

Yes. Or at least it can be. Human bias is being embedded into the AI technologies upon which we increasingly rely. But a lack of awareness of what’s happening, combined with a lack of diversity amongst the groups creating much of this technology, means too many companies are consistently failing to pick up on it, and so are continuing to make and use AI-driven products that perpetuate societal prejudices. In this episode, Business Daily discusses what will happen if we don’t tackle this issue now, and speaks to one woman who’s creating the tools to ensure we do.

00:17:57

A PERMANENT POLICE LINE-UP

The photographs of one in two adult Americans are stored in facial recognition databases that the FBI can access, without their knowledge or consent, when searching for suspected criminals. 80% of photos in the FBI’s network are non-criminal entries, including images taken from passports and driver’s licenses. The algorithms used to identify matches are inaccurate 15% of the time… It raises the question: are we all basically in a permanent police line-up? How did we get here?
In this episode, the Tech Policy Podcast examines just that.


00:28:53


That wraps up our deep dive, but stay tuned as we will revisit this important issue in a future newsletter.

IMPEACHMENT

WHAT'S GOING ON WITH THAT?!

The impeachment inquiry is underway, and last week the House began deposing a raft of administration officials, including Trump’s former Ukraine envoy, Kurt Volker, who spent more than eight hours being questioned by lawmakers.
The proceedings are moving pretty fast, and with a new whistleblower having just stepped forward with 'firsthand knowledge' of Trump's dealings with Ukraine, there's a lot to keep up with. This episode of Left, Right and Center provides views from across the political spectrum on what on earth is going on...

00:52:16

Words by: Emma-Louise Boynton
Editing by: Jim Cowles, Stacy Perez and Emma-Louise Boynton
We hope you'll want to share our newsletter with your friends. If so, please direct them to our website where they too can subscribe. 
 
TAKE ME TO THE WEBSITE TO SUBSCRIBE

WE'RE LISTENING

Tell us the topics you'd love to hear more about.
Suggest a podcast episode we might like.
Or just say hi.
You have a direct line to our ears.
We can't wait to hear from you.
GET IN TOUCH
Copyright © The Venn Media LLC / All Rights Reserved 
Content discovered by The Venn Media and our community — copyright is owned by the publisher, not The Venn Media, and audio streamed directly from their servers.

 






This email was sent to <<Email Address>>
why did I get this?    unsubscribe from this list    update subscription preferences
The Venn Media · P.O. Box 334 · Teton Village, Wyoming 83025 · USA