I recently caught up with the teaching assistant for an undergraduate humanities course I took years ago, one that helped inspire my subsequent studies. I told him that everything I thought I knew about academia was wrong. He wryly responded, “Did you, too, think it was about the pursuit and advancement of human knowledge?” Now that I’m a PhD student myself (the position my TA was in ten years ago), it’s clear to me that going into academia isn’t necessarily any more pure or noble than working for a corporation. Universities and corporations are intimately intertwined, and they have more in common than not: trying to bust unions[1], fueling gentrification in surrounding communities[2], and contributing to global warming[3]. These realizations haven’t disrupted my academic work, but they have gone hand-in-hand with my taking a more critical stance towards my field of research.
I’m studying health informatics, which is about building technologies that aim to support health needs. It sounds like a good thing, right? But since I started my program in 2014, my previously unchecked enthusiasm for how technology could benefit the world has been thoroughly checked. What concerns me are not the usual anxieties about lifestyle (social media is causing us to forget how to have friends!). Rather, I’ve become alarmed at how tech hubris (the conviction among tech people that we’re a disruptive force improving the world) makes us blind to the ways we’re a conservative element, reinforcing structural inequalities. Technology is not neutral. It is not apolitical, and we must grapple with that.
Tech Isn’t Neutral
One of the most often discussed ways that tech isn’t neutral is algorithmic bias[4]. Turkish, my mother’s language, uses a gender-neutral pronoun. But when Google Translate renders Turkish sentences built around that pronoun into English, it injects them with sexist stereotypes.
The same thing happens when translating other languages that use gender-neutral pronouns, such as Indonesian and Finnish. So for Google Translate, it seems that men are doctors, women are nurses, and she can only love him. When Mashable asked Google to comment, a spokesperson stated that they are “actively researching” how to deal with these “unsolved problems in computer science.” But their response obscures the unsolved political problem underpinning the technical ones.
Assuming that Google cannot feasibly alter the biased data that feed its algorithms, the company has the responsibility to design a solution. It could, for example, make use of the fact that English already has a widely accepted gender-neutral pronoun[5]. And yet singular “they” is still mocked as an example of “political correctness,” which delegitimizes non-binary gender identities. Rather than perpetuating misogyny through its translations, the software could take an active role in normalizing the use of singular “they,” or at the very least provide it as an option within the interface. Although the company may be wary of taking a political stance, its supposedly apolitical public response is itself a stance in favor of the status quo.
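To make the shape of that design choice concrete, here is a minimal sketch in Python. Everything in it is hypothetical and illustrative: the invented `biased_default` function stands in for a frequency-driven model that picks whichever gendered pronoun dominates its training text, and `render_all_options` shows the alternative of surfacing every rendering, including singular “they,” in the interface. None of this is Google Translate’s actual code or API.

```python
# Hypothetical sketch: offering every pronoun rendering of a gender-neutral
# source sentence instead of silently picking the most frequent gendered one.
# The data and function names below are invented for illustration; this is
# not Google Translate's code or API.

# English renderings of the Turkish gender-neutral pronoun "o".
CANDIDATE_PRONOUNS = ["he", "she", "they"]

def biased_default(template, corpus_counts):
    """What a purely frequency-driven system does: pick the most common pronoun."""
    most_common = max(corpus_counts, key=corpus_counts.get)
    return template.format(pronoun=most_common)

def render_all_options(template):
    """The alternative: surface every rendering, including singular 'they'."""
    return [template.format(pronoun=p) for p in CANDIDATE_PRONOUNS]

if __name__ == "__main__":
    template = "{pronoun} is a doctor"
    # Invented counts standing in for a corpus where gendered pronouns dominate.
    corpus_counts = {"he": 900, "she": 250, "they": 50}
    print("Frequency-driven default:", biased_default(template, corpus_counts))
    print("Interface that offers options:", render_all_options(template))
```

The point is not the code but the decision it encodes: returning one “most likely” sentence is itself a choice, and returning several, with singular “they” among them, is another.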
Other consequences of algorithmic bias are more directly and deeply harmful. A “risk assessment” algorithm called COMPAS predicts how likely a convicted individual is to re-offend (also known as “recidivism”)[6]. Judges across the country use this output to determine prison sentences. A recent analysis by ProPublica indicated that COMPAS was “particularly likely to flag black defendants as future criminals, labeling them as such at almost twice the rate as white defendants. In addition, white defendants were labeled as low risk more often than black defendants.”[7] The labels themselves were also uninformative: people flagged as likely to commit a violent crime actually did so only 20 percent of the time.
This practice reflects the institutional racism inherent in the United States prison system[8]. Believing in the algorithm’s supposed impartiality allows all political and ethical responsibility to evaporate: the Supreme Court refused to hear the case of Eric Loomis, a Wisconsin man who challenged the use of COMPAS in his sentencing after he was given an 11-year sentence without probation for attempting to flee an officer and operating a vehicle without the owner’s consent. Meanwhile, the COMPAS algorithm remains closed-source and proprietary; it cannot be seen or modified by anyone outside the company, which refuses even to disclose how it works. (The code and methods ProPublica used to conduct its analysis are available online.[9])
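ProPublica’s own notebooks[9] are the authoritative source for how that analysis was done. As a rough illustration of the kind of per-group audit involved, the sketch below computes false positive rates (people who did not re-offend but were still flagged high risk) separately by group. The records here are invented for illustration and are not COMPAS data, and the code is not ProPublica’s.

```python
# Illustrative sketch of a per-group false-positive-rate audit, in the spirit
# of (but not copied from) ProPublica's COMPAS analysis. All records are invented.

from collections import defaultdict

# (group, flagged_high_risk, actually_reoffended)
records = [
    ("black", True, False), ("black", True, True), ("black", True, False),
    ("black", False, False), ("white", False, False), ("white", True, True),
    ("white", False, False), ("white", False, True),
]

def false_positive_rate(rows):
    """Among people who did NOT re-offend, the share who were flagged high risk."""
    non_reoffenders = [flagged for flagged, reoffended in rows if not reoffended]
    return sum(non_reoffenders) / len(non_reoffenders) if non_reoffenders else 0.0

by_group = defaultdict(list)
for group, flagged, reoffended in records:
    by_group[group].append((flagged, reoffended))

for group, rows in sorted(by_group.items()):
    print(f"{group}: false positive rate = {false_positive_rate(rows):.2f}")
```

Even in a toy audit like this, deciding which error rate should be equal across groups is a contested, political choice, which is exactly the point.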
In another technical area, speech recognition software frequently fails to recognize accented voices, often because the datasets used to train the algorithms don’t include them[10]. White, educated, first-language English speakers, by contrast, tend to be the most represented. Researchers have noted that the groups underrepresented in these datasets tend to be groups that are marginalized in general. The consequences of these design and implementation flaws carry forward into our society: as a direct result, underrepresented groups have a harder time using the technology (and finding it useful).
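One concrete way to surface this kind of disparity before deployment is to break evaluation down by speaker group rather than reporting a single aggregate score. The sketch below uses exact-match accuracy on invented transcripts and hypothetical group labels purely for illustration; real evaluations typically use word error rate, but the pattern (the aggregate looks fine while one group fares much worse) is the same.

```python
# Sketch of a per-group accuracy audit for a speech recognizer.
# Transcripts and group labels are invented for illustration.

from collections import defaultdict

# (speaker_group, reference_transcript, recognizer_output)
results = [
    ("first_language_english", "turn on the lights", "turn on the lights"),
    ("first_language_english", "set a timer for ten minutes", "set a timer for ten minutes"),
    ("accented_english", "turn on the lights", "turn on the flights"),
    ("accented_english", "set a timer for ten minutes", "set a time for ten minutes"),
    ("accented_english", "call my mother", "call my mother"),
]

per_group = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, reference, hypothesis in results:
    per_group[group][0] += int(reference == hypothesis)
    per_group[group][1] += 1

total_correct = sum(correct for correct, _ in per_group.values())
total_count = sum(count for _, count in per_group.values())
print(f"aggregate accuracy: {total_correct / total_count:.2f}")
for group, (correct, count) in sorted(per_group.items()):
    print(f"{group}: accuracy = {correct / count:.2f}")
```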
But the story doesn’t end there. People sometimes put on fake accents, or use a “machine voice,” so that voice-responsive technologies will understand them correctly.[11] This issue can be viewed through the lenses of colonialism, assimilation, and white supremacy. The failure of the technology to recognize and act on some people’s voices mirrors the ways our society withholds access to resources on the basis of race and nationality. Framing this issue purely as one of missing data forecloses a structural analysis of the ways technologies are embedded in sociopolitical contexts. Rather than just incorporating data from marginalized people (a reformist solution based on notions of “diversity” and “inclusion”), we might also attend to our methods of software development and design. In what ways are they inherently oppressive? How might they be changed?
Anti-Political Machines
Meryl Alper did extensive research on Proloquo2Go, an app used to assist people who cannot produce oral speech[12]. On the surface, it is difficult to see how this app could ever be problematic—its function is relatively straightforward, and it aims to address a human need. Yet Alper found that more privileged families were able to benefit the most from the technology, because they are better able to navigate educational institutions and obtain the training required to use the software. Importantly, media narratives tend to hold up the technology and its creators as the salvation of people who, because of their disability, are framed as somehow deficient.
This provider-recipient relationship is not merely an invention of the media. It’s constructed by the technology design process itself, which privileges the decisions, actions, and abilities of the designer. When research projects are first mapped out, the recipient populations are not always consulted, and it’s even rarer for a member of those populations to be a decision-maker on the design team. This is not an accident: it is part and parcel of a system that reinforces social inequalities while innocently proclaiming an intention of social good.
We can find some context by taking a brief look into the history of “tech for social good.” Lilly Irani and her co-authors wrote that “the rhetoric and practice of development positions emerging nations as essentially powerless and unable to ‘develop’ without intervention.”[13] In this way, the field of developing information technologies for “developing countries” maintains a power relation in which the recipient of the technology is framed as powerless without it. The same relationship is enacted in the design of assistive technology. In these cases, oppressive power relations—colonialism and ableism, respectively—aren’t just unfortunate side-effects of technological development. They are symbiotic with it: neither can exist in its current form without the other.
We’ve known this for a long time. In “The Anti-Politics Machine,” James Ferguson argues that when technical systems are held up and promised as solutions to poverty, they mask the political reasons why people are disadvantaged in the first place[14]. This works to the advantage of those in power, because it creates “uneven relations of economic dependency” between US-based industries and people whose lives have been historically—and are presently—impacted by colonialist violence. Through this lens, we can begin to see that tech saviorism and white saviorism are two sides of the same coin.
Despite these glaring problems, many researchers in technical fields have taken up a mission to do “social good.” And they often do it while remaining completely separate from social justice work—or worse, from a position of political “neutrality.” As Joyojeet Pal has argued, however, these are serious problems that deserve to be considered in their entirety, rather than in whatever way is most convenient or self-serving for the researcher[15]. This necessitates a direct engagement in politics. Depoliticizing research harms not only the people whom that work directly affects, but also the research process itself.
Following the work of Sara Ahmed and many others, we must recognize that “all forms of power, inequality, and domination are systematic rather than individual.”[16] In other words, the biases, beliefs, and intentions of individual software developers do not fully capture the extent of racism in the tech industry. While some developers may indeed be racist, this limited view prevents us from examining the systems in place that ensure racism is never challenged (such as a company’s hiring practices, leadership structures, policies, and priorities). As Ahmed puts it, “eliminating the racist individual would preserve the racism of the institution in part by creating an illusion that we are eliminating racism.” This means that simply including more “diverse” people on a development team is not enough to combat oppression. It’s not just about getting into the room. It’s also about what we’re able to do and say while inside, whether we’re listened to—and how long it takes before we get pushed out.
Science for whom?
Human-computer interaction (HCI) researchers have argued that egalitarian approaches to the design and implementation of technology result in a design environment uniquely rich in relevant information. As Nunes and colleagues write: “it is a great challenge to know what ‘needs’ to be designed… A strength of the HCI lens is its ability to embrace this complexity, for example, through the use of qualitative methods.”[17] In contrast to quantitative methods, which are primarily concerned with measuring variables and the statistical relationships between them, qualitative research uses in-depth interviews (among other tools) to develop a holistic understanding of a topic. In software development, participatory design prioritizes the needs and values of users over the specifications of designers.
But it is not enough to base our science on recruiting “diverse” participants and calling it done. The Ejército Zapatista de Liberación Nacional (the revolutionary indigenous resistance movement, also known as the Zapatistas) recently organized a conference called “Las ConCiencias”[18]. There, scientists and activists came from around the world to explore the transformative potential of an anti-capitalist science wielded by and for indigenous communities. Zapatista Subcomandante Galeano asked profound questions: “With all of the damage that the capitalists have done to the people through their misuse of science, scientifically can you create a science that is truly human in order to avoid falling into a science that is inhuman? And if it is possible [to] create a truly human science, who can create it?”
These questions remain open, but opening them was itself important. In my work, I want to ask what transformational opportunities might look like in health informatics. Despite lofty social goals for developing “technology for health,” the field has been (rightfully) the subject of criticism. Take some of our favorite buzzwords, which are often held up uncritically as admirable: “open” systems, “disruptive” technologies, “innovative” approaches. These terms imply that anything that helps you to bypass institutions is, by default, empowering or liberating. But is this the same as equality and justice? For example, calorie-counting and exercise apps often ignore how our social and political situations are deeply intertwined with our health outcomes.
The goal of my dissertation is to build technology for transgender health. In order to do this, I have researched[19] how our medical institutions have historically pathologized trans people and perpetuated oppressions that continue to this day[20]. I’ve also used qualitative methods to examine the feelings and experiences of trans people as they relate to technology and identity[21]. It’s impossible to do this work responsibly without engaging in political questions, especially because the word “transgender,” and our existences themselves, are politically loaded.
Long before news broke that CDC researchers were cautioned[22] against using the word “transgender,” I was advised that my chances of receiving a fellowship from a major government research institute would be hurt if I used it in my application. Currently, I am trying to find funding to continue my research while also contending with discriminatory and anti-worker practices in the university[23]; these conditions are among the major motivations driving graduate employee unionization[24]. Although my immediate future is uncertain, I am sure that my development as a scientist is bound up with my development as an activist. And I want to keep learning how to be better as both.
[1]http://www.truth-out.org/news/item/40530-penn-state-university-wages-union-busting-campaign-against-its-own-graduate-students
[2]https://www.huntnewsnu.com/2017/03/as-northeastern-population-grows-so-does-impact-on-neighborhoods/
[3]https://www.theguardian.com/news/2017/nov/08/us-universities-offshore-funds-endowments-fossil-fuels-paradise-papers
[4] Check out Safiya Umoja Noble’s new book Algorithms of Oppression for more on this topic.
[5]https://qz.com/923238/even-the-staunchest-grammarians-are-now-accepting-the-singular-gender-neutral-they/
[6] https://epic.org/algorithmic-transparency/crim-justice/
[7] https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
[8] https://www.nytimes.com/2017/10/26/opinion/algorithm-compas-sentencing-bias.html
[9] https://github.com/propublica/compas-analysis
[10] https://www.wired.com/2017/03/voice-is-the-next-big-platform-unless-you-have-an-accent/
[11] https://www.theguardian.com/technology/2016/feb/10/texas-regional-accent-siri-apple-voice-recognition-technology
[12] https://merylalper.com/giving-voice/
[13] Irani, L., Vertesi, J., Dourish, P., Philip, K., & Grinter, R. E. (2010). Postcolonial Computing: A Lens on Design and Development. Proceedings of the 28th International Conference on Human Factors in Computing Systems – CHI ’10, 1311.
[14] Ferguson, J., & Lohmann, L. (1994). The Anti-Politics Machine: “Development” and Bureaucratic Power in Lesotho. The Ecologist.
[15] Pal, J. (2017). CHI4Good or Good4CHI. Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems – CHI EA ’17, 709–721.
[16] Ahmed, S. (2012). On being included: Racism and diversity in institutional life. Duke University Press, pp. 44
[17] Nunes, F., Verdezoto, N., Fitzpatrick, G., Kyng, M., Grönvall, E., & Storni, C. (2015). Self-Care Technologies in HCI: Trends, Tensions, and Opportunities. ACM Transactions on Computer-Human Interaction, 22(6).
[18] https://freerads.org/2017/04/04/zapatistas-reimagine-science-as-tool-of-resistance/
[19] https://www.researchgate.net/publication/318571065_Transgender_Health_Disparities_A_Technosocial_Epidemiological_Approach
[20] Giffort, D. M., & Underman, K. (2016). The relationship between medical education and trans health disparities: a call to research. Sociology Compass, 10(11), 999–1013.
[21] https://www.researchgate.net/publication/321276094_Trans_Competent_Interaction_Design_A_Qualitative_Study_on_Voice_Identity_and_Technology
[22] https://www.washingtonpost.com/national/health-science/cdc-gets-list-of-forbidden-words-fetus-transgender-diversity/2017/12/15/f503837a-e1cf-11e7-89e8-edec16379010_story.html?utm_term=.d59edb82282e
[23] http://www.alicolleenneff.com/blog/2017/11/8/on-academic-precarity
[24] https://thebaffler.com/the-poverty-of-theory/laboring-academia
Image: “Concept with eyes” by Krzysztof Urbanowicz, licensed under CC 2.0