By: Christopher Sirota, CPCU
Thinking of buying a bot for your business? A recent study from the Brookings Institution and the Italian Institute for International Political Studies (ISPI) highlights some concerns about how such AI-powered voice assistants, also known as chatbots, may promote and perpetuate gender bias.
What are Voice Assistant Bots?
According to a May 2020 report published by the National Institutes of Health (NIH), AI-powered voice assistants fall under a general category of software programs known as intelligent agents. The report provides the following definition:
A chatbot is a typical example of an AI system and one of the most elementary and widespread examples of intelligent Human-Computer Interaction (HCI). It is a computer program, which responds like a smart entity when conversed with through text or voice and understands one or more human languages by Natural Language Processing (NLP)
The report notes that experts distinguish two types of chatbots. One draws on a restricted source of information and is known as a closed domain chatbot; such a system may not be able to answer questions beyond its specific scope. The other draws on a broader range of topics and is known as an open domain chatbot.
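The closed-domain idea can be illustrated with a toy sketch. The snippet below (all topics and canned responses are hypothetical, and real products use far more sophisticated natural language processing) shows a bot that only answers questions within a fixed scope and refuses everything else; an open domain chatbot would instead attempt to handle arbitrary topics.

```python
# Minimal sketch of a closed-domain chatbot. The knowledge base and
# replies are hypothetical; real systems use NLP rather than simple
# keyword matching.

CLOSED_DOMAIN_KB = {
    "hours": "Our office is open 9am-5pm, Monday through Friday.",
    "claims": "You can file a claim through our online portal.",
    "billing": "Billing questions are handled at extension 100.",
}

OUT_OF_SCOPE_REPLY = (
    "Sorry, I can only answer questions about hours, claims, and billing."
)

def reply(user_message: str) -> str:
    """Match the message against known topics; refuse anything outside scope."""
    text = user_message.lower()
    for topic, answer in CLOSED_DOMAIN_KB.items():
        if topic in text:
            return answer
    # A closed-domain bot cannot answer beyond its restricted scope.
    return OUT_OF_SCOPE_REPLY

print(reply("What are your hours?"))
print(reply("Tell me a joke"))
```

Asked about its known topics, the bot returns a canned answer; asked anything else, it declines, which is the defining behavior of a closed domain system.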
Per the report, chatbots have become natural enough to encourage conversational interaction from users, as exemplified by one statistic indicating that nearly 40% of chatbot interactions via social media reportedly contain emotional statements from users.
Some Concerns Related to Gender Bias
The Brookings study lists various concerns from experts regarding the prevalence of female voices in voice assistants. The study reportedly examined, as examples, voice assistants such as Alexa, Google Assistant, Siri, and Cortana.
The study explains the concern as follows:
[…] voice assistants promote unfair gender stereotypes. Around the world, various customer-facing service robots, such as automated hotel staff, waiters, bartenders, security guards, and child care providers, feature gendered names, voices, or appearances. In the United States, Siri, Alexa, Cortana, and Google Assistant—which collectively total an estimated 92.4% of U.S. market share for smartphone assistants—have traditionally featured female-sounding voices.
These voice settings are significant because multiple academic studies have suggested that gendered voices can shape users’ attitudes or perceptions of a person or situation. Furthermore, as Nass et al. found, gendered computer voices alone are enough to elicit gender-stereotypic behaviors from users—even when isolated from all other gender cues, such as appearance. Mark West et al. concluded in a 2019 UNESCO report that the prominence of female-sounding voice assistants encourages stereotypes of women as submissive and compliant, and UCLA professor Safiya Noble said in 2018 that they can “function as powerful socialization tools, and teach people, in particular children, about the role of women, girls, and people who are gendered female to respond on demand.”
Notably, per the study, Google Assistant, Alexa, Siri, and Cortana were each first released with a female-sounding voice as the default. The study noted that some have since added male voices, but Siri, for example, still reportedly defaults to a female voice in 27 of its 34 available languages.
Another concern reportedly considers how voice assistants respond to hate speech and verbal sexual harassment, in part, because "the positive or negative responses of voice assistants can reinforce the idea that harassing comments are appropriate or inappropriate to say in the offline space. This is particularly true if people associate bots with specific genders and alter their conversation to reflect that." Per the study, researchers who tested some major voice assistants with harassing language noticed a change in replies between 2017 and 2020: some assistants reportedly used more negative language in response, and one reportedly declined to respond at all.
There is reportedly further concern about how accurately voice assistants recognize the voices of users from different ethnic and demographic groups; for example, one researcher reportedly found a 16% accuracy gap between the voices of Black participants and white participants.
Size of the Market
A pre-pandemic article in The Verge noted that one 2019 forecast estimated nearly 8 billion voice assistants would be in use by 2023, mostly via smartphones and smart speakers. The Verge noted that the forecast suggested the market would grow fastest for smart TVs. A May 2020 GlobeNewswire press release announced a market report estimating that the global voice assistant industry would grow to about $5.9 billion by 2026, and suggested the following industries would continue to leverage the technology: banking, financial services and insurance (BFSI), automotive, e-commerce and retail, and healthcare. Notably, per the press release, BFSI was the largest user in 2018, and industry experts estimate that the fastest future growth will likely be in the healthcare sector.
Call Centers During the Pandemic
A May 2020 article in MIT Technology Review highlighted the increased demand for chatbots at various call centers.
Per the article, both Google's Rapid Response Virtual Agent and IBM's Watson Assistant offer chatbots that can be customized for a call center's needs.
Some examples of public entity voice assistant implementations, from the article, include the following:
- Otsego County, New York (for COVID-19 guidance)
- City of Austin, Texas (for COVID-19 guidance)
- Czech Ministry of Health (for COVID-19 guidance)
- The Oklahoma Employment Security Commission (for unemployment claims)
- University of Arkansas for Medical Sciences (for patient triage)
- University of Pennsylvania’s medical school (for patient triage)
- Children’s Healthcare of Atlanta, Georgia (to help parents assess children's symptoms)