[mp3-jplayer tracks="CounterSpin Ruha Benjamin Full Show @http://www.fair.org/audio/counterspin/CounterSpin190809.mp3"]
This week on CounterSpin: Listeners may have heard about the electronic soap dispensers whose light sensors can’t detect black skin, or about Google’s and Flickr’s automatic image-labeling that (oops) tagged photos of black people with “ape” and “gorilla.” An Asian-American blogger wrote about her Nikon digital camera that kept asking, “Did someone blink?” And you can, I’m afraid, imagine what turns up in search engine results for “3 black teenagers” vs. “3 white teenagers.” Some examples of discriminatory design are obvious, which doesn’t mean the reasons behind them are easy to fix. And then there are other questions around technology and bias (in policing, in housing, in banking) that require deeper questioning. That questioning is the heart of a new book called Race After Technology: Abolitionist Tools for the New Jim Code. CounterSpin spoke with author Ruha Benjamin; she’s associate professor of African-American studies at Princeton University and author, also, of People’s Science: Bodies and Rights on the Stem Cell Frontier.
Transcript: ‘Black Communities Are Already Living in a Tech Dystopia’
[mp3-jplayer tracks="CounterSpin Ruha Benjamin Interview @http://www.fair.org/audio/counterspin/CounterSpin190809Benjamin.mp3"]
Plus Janine Jackson takes a quick look at recent coverage of racist heart trackers.
[mp3-jplayer tracks="CounterSpin Banter @http://www.fair.org/audio/counterspin/CounterSpin190809Banter.mp3"]





The first four examples of discriminatory technology are almost certainly not intended to discriminate; they are simply the result of insufficient testing and/or training of the systems. The developers should have done better, no doubt, but there’s nothing sinister about it. It can cause offense, I’m sure.
Next time you’ll be crying wolf because some systems label small adults as children or girls with short haircuts as boys.
The example with the “3 teenagers” search is more interesting. Google and most other search engines reflect what’s on the web, ranking pages by popularity, frequency, etc. Should they continue to do so (and thereby give us a look into our society as it is), or should they paper over the ugliness and display an idyllic picture where everything and everybody is politically correct, blacks are just as well off as whites, and so on?
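To make concrete what I mean by “reflect what’s on the web,” here is a toy sketch in Python. Everything in it is invented for illustration (the example URLs, the tags, the “inbound_links” counts, and the crude notion of popularity); it is not how Google actually ranks. The point is only that a ranker of this kind surfaces whatever its corpus already favors, with no judgment of its own.

```python
# Toy popularity-based ranker over an invented mini-corpus.
# If the corpus is skewed, the "top results" are skewed; the ranker adds nothing.

corpus = [
    {"url": "site-a.example/1", "tags": ["3 white teenagers", "stock photo"], "inbound_links": 40},
    {"url": "site-b.example/2", "tags": ["3 black teenagers", "mugshot"],     "inbound_links": 90},
    {"url": "site-c.example/3", "tags": ["3 black teenagers", "school team"], "inbound_links": 10},
    {"url": "site-d.example/4", "tags": ["3 white teenagers", "sports team"], "inbound_links": 60},
]

def rank(query, pages):
    """Return pages matching `query`, most linked-to ("most popular") first."""
    matches = [p for p in pages if query in p["tags"]]
    return sorted(matches, key=lambda p: p["inbound_links"], reverse=True)

for query in ("3 black teenagers", "3 white teenagers"):
    print(query)
    for page in rank(query, corpus):
        print("   ", page["url"], page["tags"], page["inbound_links"], "links")
```

The ranker isn’t “trying” to show anything in particular; it just mirrors whichever pages the (made-up) web has already made popular. Whether that mirror should be adjusted is exactly the question I’m raising.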
Disclaimer: I just read your description and didn’t listen to the podcast. I love your website, but podcasts are not so convenient. Could you please add transcriptions of them?
Good points, Foo. I think it’s important to distinguish why a given technology may be discriminatory. Also, I agree there should be transcripts! Your points inspired the following ideas:
1. I don’t think it’s fair to characterize complaints about your “first category” of technology as “crying wolf.” Imagine the people who developed those first four technologies. You believe that those people made mistakes, or at least that they could have done better. How can they do better? Pay more attention? Sure. Spend more time on the technology? Of course. Be smarter? Ideally, yes. But one other way they could do a better job is to spend more time stopping and thinking about how their technologies affect people of color, and how those technologies fit into society more generally. So I think it’s important to bring race into this discussion, even if the developers were not racist.
2. I agree with you that Google should not mess with their search just to “display an idyllic picture.” But we should take the three-teenager search phenomenon as a lesson that reminds us how all kinds of little things throughout society perpetuate negative attitudes towards people of color. Why is it important to be reminded of this? Because it will help us understand how technologies we develop fit into society (as in #1 above). A simple algorithm to identify cool teenagers might do many repeated image searches to identify who looks sketchy, and it might come to the conclusion that black teenagers are more likely to be sketchy. That is a way worse algorithm than one that could look deeper and decide that teenagers in gangs are more likely to be sketchy, or that teenagers from bad school districts are more likely to be sketchy. If I asked the first algorithm how to reduce the number of sketchy teenagers, it might say “kill all urban pre-teens.” Now that might work, but it would be much worse than what the second algorithm would tell me: “improve the lives of urban pre-teens, especially teens from minority backgrounds who are likely to face institutional challenges.” I’m not asking the algorithm to exclude the black teenager photos from its data, I’m asking the algorithm to a) identify how algorithms like the Google search might be good at showing us some things (like what percent of black teenagers look sketchy) but bad at showing us other things (like why that is), and b) try to do better, which it is not at all impossible for algorithms to do (a toy sketch of what I mean follows below).
Does that make sense?
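Here is the toy sketch for point 2, in Python. All of the numbers and the “under-resourced school” flag are invented for illustration; this is not real data or anyone’s actual model. It shows that a model that can only see race will appear to find that race predicts the label, while a model that can also see the underlying factor attributes exactly the same labels to that factor instead.

```python
# Hypothetical labeled examples: (race, under_resourced_school, labeled_sketchy).
# All values are invented to illustrate a confounded correlation, nothing more.
from collections import defaultdict

data = [
    ("black", True,  True),  ("black", True,  True),  ("black", False, False),
    ("black", False, False), ("white", True,  True),  ("white", False, False),
    ("white", False, False), ("white", False, False),
]

def rate_by(feature_index):
    """Fraction labeled 'sketchy' for each value of the chosen feature."""
    counts, positives = defaultdict(int), defaultdict(int)
    for row in data:
        key = row[feature_index]
        counts[key] += 1
        positives[key] += row[2]
    return {k: positives[k] / counts[k] for k in counts}

print("By race alone:       ", rate_by(0))  # shallow view: race seems predictive
print("By school resources: ", rate_by(1))  # deeper view: the other factor explains it
```

With these invented numbers, splitting by race alone gives rates of 0.5 vs. 0.25, so the shallow model “blames” race; splitting by the deeper factor gives 1.0 vs. 0.0, and the apparent racial difference disappears entirely. That’s the difference between the first and second algorithm above.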
I’m white. Why on Earth would I object to White privilege? (Seriously. I’m not trolling here.) White privilege is a natural offshoot of the ways Whites and Blacks live their lives. You could argue that it has a resonance effect, since White privilege reinforces racial inequality, but that’s really just a dodge. It is Black behavior compared to White behavior that causes the White privilege effect, and Black behavior is the responsibility of Black people. What I’m saying here is, if you want people to like and respect you, you can behave in ways that increase the probability that they will. Complaining that people won’t like or respect you because of your race makes White people dislike and disrespect you more.