Beyond Hertz: Gender in Intelligent User Interfaces
Human Factors in Information Design, Bentley University
HF760: Intelligent Interfaces
Roland Hubscher
April 26, 2021
Abstract
This article examines how the treatment of gender in intelligent user interfaces (IUI) reflects, reveals, and reinforces existing social stereotypes. In order to communicate and advance notions of equitable competence for future generations, we must identify these biased patterns, examine their origins, and challenge them. A literature review of existing studies of conversational and voice assistant technologies provides a holistic view of the current status, expectations, and legacies of gendering in IUI. This article proposes a composition of gender treatment in IUI and suggests studies for future research.
Keywords: gender, gender-neutral, intelligent user interfaces, voice assistance, linguistic patterns, language perception
Introduction
This article examines the existing literature regarding the treatment of gender in voice user interfaces (VUI). Current practices and methods reflect, reveal, and reinforce existing social stereotypes. In order to communicate and advance notions of equitable competence for future generations, we must identify these biased patterns, examine their origins, and challenge them. As human factors designers and researchers, we must ask ourselves, “Is a gender-neutral conversational interface the best approach?” Further, we must consider the human in these conversational interactions: “How should these interfaces interpret our speaking?” A literature review of existing studies of conversational and voice assistant technologies provides a holistic view of the current status, expectations, and legacies of gendering in IUI. This article proposes a composition of gender treatment in IUI and suggests studies for future research.
Background
It is important to start with a foundational understanding of the treatment of gender in AI, and for that we must consider the history of AI. Early computer voices, archaic and mechanical, seem to have been predominantly male. Can you remember the voice of your AOL inbox? “You’ve got mail!”—this exclamation was most certainly male. However, as technology advanced, these voices began to reveal the implicit biases in our cultural underpinnings.
Early sci-fi examples reveal some of the initial ideation around voice assistants, perhaps most poignantly in the cult classic Star Trek. One of today’s most recognizable voices can be traced back to this iconic series set in the 23rd century: Amazon’s Alexa, a household name. Alex Spinelli, a software lead for Alexa, revealed the impetus behind the gender decision: CEO Jeff Bezos preferred a female voice. He said, “The idea was creating the Star Trek computer. The Star Trek computer was a woman” (Sydell, 2018). Like Alexa, many of today’s voice assistants have departed only so far from these initial imaginings.
Proliferation of Voice User Interfaces
Over the past few years, voice assistants have increasingly encroached into our lives. We hear and converse with these devices and systems in our homes, cars, and sometimes even in public spaces. This wide adoption of voice assistants into our American lifestyles impacts our expectations of these systems. According to survey data, approximately “51% of U.S. adults have tried using a voice assistant while driving and about one-third have converted to monthly users” (Kinsella, 2020). Most Americans have been exposed to some kind of smart voice user interface (VUI) while in their car—for navigation, climate control, or communication, to name a few. And we have seen the benefits of VUI on roads as well. Marketing teams do not shy away from displaying these benefits on our screens; we have all seen a car commercial with the diligent-yet-overwhelmed mother or the enthusiastic-yet-spacey teen conversing with their vehicle to make a call, get directions, or change the in-car entertainment, all hands-free. These voice assistants are heralded as champions to mitigate the dangers of texting and driving. The use of VUI in cars is so prolific that it is not unreasonable for a person buying a new car to expect to find some kind of VUI as part of the system. In turn, designers and manufacturers are looking to further integrate these interfaces into the functionality of their vehicles.
And it is not just in our vehicles; we see voice assistants in homes across America as well. Most of us have at least one friend who has shown off their home VUI by commanding, “Hey Google, turn lights to 50 percent.” Or perhaps we ourselves are the adopters, and excitedly parade our technology to our visitors, allowing them to see the benefits of VUI in the comfort of our own homes. However, and perhaps insidiously, we see subtle demographic distinctions among users. Adults under age 50 are more likely to have these devices in their homes than their counterparts ages 50 and above (29% vs. 19%) (Auxier, 2019). These VUIs are also more often found in households earning $75,000 or more than in those with an annual family income below $30,000 (34% vs. 15%) (Auxier, 2019). While most users of voice assistants in domestic settings are likely to be younger and higher household earners, these are not the only adopters of voice user interfaces; survey data reveal that “one-quarter of U.S. adults say they have a smart speaker in their home” (Auxier, 2019). The use of VUI in homes today is pervasive; it is no longer only the imaginings of sci-fi control rooms that bring VUI to our beck and call.
Perhaps the most interesting scenario in which we increasingly experience VUI is in our public spaces. As voice assistance has infiltrated our domestic spaces, it has also been integrated into numerous public spaces. This is a particularly unique situation for VUI, as the innately public nature of the setting means that a person conversing with a system may have an accidental audience. The voice assistant’s response must not only deliver desired information to a single person; it is now also subject to observation and therefore greater scrutiny. Researchers must ask, “how does the presence of an audience influence the social interaction with a conversational system in a physical space?” (Candello, 2019). So far, in response to this research question, studies have found that “conversational systems in physical spaces should be designed based on whether other people observe the user or not” (Candello, 2019).
Expectations of voice assistants vary by context, as the VUI of a vehicle is likely not yet capable of responding to domestic needs. The expanding adoption of VUI over the past few years has increased not only our familiarity with these devices but also our expectations in each context. The prolific use of voice assistance points to the potential for these systems to impact our own speaking patterns in return. Researchers have posited some of the potential impacts on our language based on speech alignment, “where talkers subconsciously adopt the speech and language patterns of their interlocutor” (Zellou, 2021). Studies show that speech alignment seems to have a unique impact on different sections of our population based on the “humanness and gender of the human model talkers: older adults displayed greater alignment toward the female human and device voices, while younger adults aligned to a greater extent toward the male human voice” (Zellou, 2021). As we increasingly converse with VUI, it is not surprising that their voices will compound our existing patterns of language and culture and further differentiate sections of our population. Unfortunately, this means that some of our implicit biases begin to surface.
Voice Mechanics & Implications
Numerous studies have identified what we have come to consider a typical “male” or “female” voice. In these studies, voice pitch is measured in Hertz (Hz), the number of cycles a sound wave completes per second. Studies have found that participants will identify a voice as “female” when its fundamental frequency falls within the range of 165 to 255 Hz, while most participants will attribute sounds within the range of 85 to 155 Hz to a “male” voice. It should be noted, and these studies are careful to make this point as well, that these results are based on the perceptions of the general public, and we can expect reasonable variation based on previous experience and immediate context. That is to say, these are not hard-and-fast ranges, but sounds within them are generally attributed to one of these two genders.
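To make the cited ranges concrete, the sketch below maps a fundamental frequency to the gender label listeners in such studies tended to attribute. This is only an illustration of the reported ranges, not a perception model; the boundaries blur in practice, and the function name and labels here are the author’s own invention.

```python
def perceived_gender_label(f0_hz: float) -> str:
    """Label a fundamental frequency (F0, in Hz) with the gender most
    study participants attributed to voices in that range. The ranges
    come from the studies cited above; actual perception varies with a
    listener's experience and immediate context."""
    if 85 <= f0_hz <= 155:
        return "typically heard as male"
    if 165 <= f0_hz <= 255:
        return "typically heard as female"
    return "ambiguous or outside the typical ranges"

print(perceived_gender_label(120))  # typically heard as male
print(perceived_gender_label(210))  # typically heard as female
print(perceived_gender_label(160))  # ambiguous or outside the typical ranges
```

Note that a voice at 160 Hz, between the two reported ranges, falls through to the ambiguous label, which is exactly the region genderless-voice projects discussed later aim for.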
However, this binary attribution of voice may reinforce existing biases and stereotypes of these genders. If you think of many of the most pervasive VUI devices for domestic settings—Amazon’s Alexa, Google Home, and Apple’s Siri—is it a “male” or “female” voice that you are conversing with? It is alarming to realize the number of devices that we naturally attribute as “female.” In giving the VUI a gender, we must consider the role these devices play for us and in relation to their human counterparts. When asked, Siri will promptly provide you a weather update, Google Home may tell you how many cups of flour a recipe calls for, and Alexa may ask which album you would like to hear next. In most American households, it is a female voice that offers service within the home, and some would argue “culturally, we think of them as ladies too” (Hempel, 2018). We come to know them as “female” voices, assign them the pronouns she/her/hers, and have only begun to consider the consequences of such gender attribution.
Unfortunately, designers and engineers only exacerbate this genderization: the VUI not only speaks at a typically “female” frequency, but many devices also incorporate feminine expressions and turns of phrase in their answers to our requests. Research on American language patterns from the mid-1990s by psychologist James Pennebaker revealed some distinct gender tendencies. His software, Linguistic Inquiry and Word Count (LIWC), analyzed various texts for function words such as pronouns, articles, and prepositions. He found that these “quiet” words “provide grammatical structure for language and help to create a writer's or speaker’s style” (Hannon, 2016). The more prevalent these “quiet,” non-content words are in a phrase, the more likely we are to attribute the voice to a “female” speaker. One researcher aptly expanded, “when Alexa blames herself (doubly) for not hearing my question, she is also subtly reinforcing her female persona through her use of the first person pronoun (I)” (Hannon, 2016). While this detailed manipulation of VUI may “humanize” these devices, the unspoken message of these voice decisions reveals our socio-cultural expectation that females act in positions of servitude in our domestic and even public spaces. A more recent study, Conversations with ELIZA, keenly points out how many of these feminine attributes only compound the gender stereotypes identified here (Costa, 2019). Sadly, many of the creators behind these VUI household names have yet to address this subject. And it is not only voice mechanics and language architecture that reinforce this position for “female” VUI. Take, for example, Apple’s Siri—the name Siri literally translates to “a beautiful woman who leads you to victory” in Old Norse (Hempel, 2018).
It is hard to imagine a world in which this literal translation has no connection to, or consequences for, the position of this gendered VUI within a home and within greater society.
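The kind of counting that LIWC performs on function words can be sketched in a few lines. The word list below is a tiny illustrative stand-in, not LIWC's actual dictionaries, and the example phrase is the author's own; the point is only that a phrase's density of “quiet” words is straightforward to measure.

```python
# Toy sketch of LIWC-style function-word counting; the word list is a
# small illustrative stand-in for LIWC's far richer dictionaries.
FUNCTION_WORDS = {
    "i", "you", "she", "he", "it", "we", "they",  # pronouns
    "a", "an", "the",                             # articles
    "of", "in", "on", "to", "for", "with",        # prepositions
}

def function_word_density(phrase: str) -> float:
    """Return the fraction of tokens that are function words."""
    tokens = [t.strip(".,!?'\"").lower() for t in phrase.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    return sum(t in FUNCTION_WORDS for t in tokens) / len(tokens)

# A self-blaming assistant reply is dense in these "quiet" words:
print(function_word_density("Sorry, I did not hear the question"))  # 2 of 7 tokens
```

Under Pennebaker's findings, a higher density of such non-content words nudges listeners toward hearing the speaker as “female,” which is why the phrasing of assistant replies matters as much as their pitch.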
Numerous articles in the past few years point to growing concerns that these familiar voices reinforce gender stereotypes. As a society, it is imperative that we consider the treatment and use of gender in VUI.
Figure: Most Americans expect their VUIs to come with a “female” voice by default.
A recent study (shown in the graph) by the Pew Research Center revealed that, on average, 53.3% of respondents had not thought about why their voice assistants had a “female” voice. However, perhaps offering some solace, the study also identified a correlation between the age of respondents and their tendency to consider the gender of their device. Generally, younger age groups are more aware that their assistants arrive with a “female” voice as a default setting. This observation should not surprise us—increasing awareness of gender treatment among young people is a road paved by the efforts of previous generations. It is heartening that these efforts have paid off and that future generations are increasingly aware of the implications of gender treatment in our smart speakers. And we must continue to challenge these existing perceptions.
Barriers & Moving Forward
One key issue in AI related to VUI is a device’s capability to recognize its counterpart: human user voices. It is perhaps not surprising that AI systems are more likely to detect and understand Caucasian male voices than those of women and racial minorities (Wong, 2020). Research shows that a voice assistant can recognize 92% of white American male voices, in contrast to only 69% of mixed-race American female voices. When considering this discrepancy, we should examine the demographics of the creators behind these voice assistants—the most popular assistants, including Amazon’s Alexa, Google Home, and Apple’s Siri, are predominantly designed and coded by Caucasian males based in California. In response to the exacerbated gender stereotypes of today’s VUI and AI, many organizations have called for adding women and minorities to the industry. Miriam Vogel, CEO and President of EqualAI, states, “The tech space seems foreign to many people who grew up with the art history, history, English backgrounds and the liberal arts and sciences, we don’t feel as connected to the tech space. But it’s all the more important that people with that diverse background bring that in, bring that curiosity, the big picture thinking, the historical context, into the technology space….”
Our AI appears to be a reflection of our current treatment of gender. As we generate smarter technologies that learn how to talk back to us, our voice assistants expose our own issues and biases. Fred Baker, fellow at Cisco, stated: “Communications in any medium (the internet being but one example) reflects the people communicating. If those people use profane language, are misogynistic, judge people on irrelevant factors such as race, gender, creed, or other such factors in other parts of their lives, they will do so in any medium of communication, including the internet. If that is increasing in prevalence in one medium, I expect that it is or will in any and every medium over time. The issue isn’t the internet; it is the process of breakdown in the social fabric” (Rainie, Anderson, Albright, 2017).
Studies reveal that VUI and its implicit biases, carried over from our own socio-cultural shortcomings, influence brand engagement. One study found that our continued use of AI is significantly shaped by our expectations of the AI and not significantly affected by its gender (Kim, Cho, Ahn, 2019). This finding is critical for the development of VUI. Ultimately, the study led researchers to conclude that “anthropomorphism fully mediated the relationship between both warmth and pleasure and the type of relationship with AI” (Kim, Cho, Ahn, 2019). Results revealed that gender stereotypes are more impactful at the relational, or style, level of a conversation than at the referential, or content, level. Unfortunately, in a VUI conversation, a person is more likely to attribute negative stereotypes to “female” chatterbots. Further, this attribution makes “female” chatterbots “more often the objects of implicit and explicit sexual attention and swear words” (Brahnam, 2012). Sadly, this is a tangible manifestation of the consequences of current gender biases. However, another study identified opportunities for exploratory behavior with the VUI as ultimately essential to customer satisfaction and continued use of the device: VUIs that incorporate features such as functional intelligence, sincerity, and creativity empower a human user to take control of the interaction and lead to this exploratory behavior (Poushneh, 2021). These studies have significant implications for the business of VUI development. Their findings reveal that a consumer is more likely to continue interacting with a device that acts as a competent and dynamic conversant. This pattern of customer use frees designers and engineers to focus on content and expand the attributes of VUI in numerous other ways.
As we examine the consequences of compounding stereotypes of these binary “genders” in voice assistants, we must also ask ourselves: what voices are left out of the conversation? For one, any voice that does not comply with a binary attribution of gender. It is in these questions that we start to see the more insidious implications and consequences of the composition of smart voices. Some projects have attempted to combat this dichotomy. Q is a project that seeks to create “the first genderless voice” by blending common assumptions of “male” and “female” voice frequencies. The project intends to “end gender bias in AI assistants” (Virtue Nordic, 2019). Copenhagen Pride, Virtue, EqualAI, Koalition Interactive, and thirtysoundsgood, the collaborators behind the project, explain: “Why did we make Q? Technology companies often choose to gender technology believing it will make people more comfortable adopting it. Unfortunately this reinforces a binary perception of gender, and perpetuates stereotypes that many have fought hard to progress. As society continues to break down the gender binary, recognising those who neither identify as male nor female, the technology we create should follow. Q is an example of what we hope the future holds; a future of ideas, inclusion, positions and diverse representation in technology” (Virtue Nordic, 2019).
Results of this pilot project have yet to be collected, but in the interim, we can reflect on the motivations behind it. The endeavor has highlighted an innately human issue that we must address and challenge in the future of our VUI.
Conclusion
The future of AI is at a critical point; experts and researchers alike project numerous issues moving forward. However, the issues of greater integration of AI voice assistants that they imagine are generally not issues of technology; instead, they seem to reflect issues of our society as a whole. We have looked at just a few of the implications of gender stereotypes and biases in VUI, but moving forward, we should expect to be confronted with more of our own socio-cultural inequities. Most notably, Sonia Katyal, co-director of the Berkeley Center for Law and Technology and member of the inaugural U.S. Commerce Department Digital Economy Board of Advisors, recently remarked, “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future” (Rainie, Anderson, 2018).
Increasingly, we can expect these human issues to surface in our VUI and subsequent AI interfaces. Our technologies will continue to reflect the implicit biases of their creators. Unfortunately, we can expect this to be the case until we introduce change to the creators and creative process. However, overall Americans seem hopeful for the future and potential of technology. The Pew Research Center has collected data on American expectations of life in the future in regard to AI. They reported, “despite the downsides they fear, 63% of respondents in this canvassing said they are hopeful that most individuals will be mostly better off in 2030, and 37% said people will not be better off.” And when they looked further to 2069, 72% of respondents expected change for the better (Stansberry, Anderson, Rainie, 2019). To conclude, the treatment of gender in intelligent user interfaces (IUI) reflects, reveals, and reinforces existing social stereotypes of our culture. To communicate and advance notions of equitable competence for future generations, we must identify these biased patterns, examine their origins, and challenge them.
References
An Integrated Model of Voice-User Interface Continuance Intention: The Gender Effect. Taylor & Francis. https://www.tandfonline.com/doi/abs/10.1080/10447318.2018.1525023.
AI replicating same conceptions of gender roles that are being removed in real world. Economic Times Blog. (2017, June 16). https://economictimes.indiatimes.com/blogs/et-commentary/ai-replicating-same-conceptions-of-gender-roles-that-are-being-removed-in-real-world/.
Auxier, B. (2020, August 17). 5 things to know about Americans and their smart speakers. Pew Research Center. https://www.pewresearch.org/fact-tank/2019/11/21/5-things-to-know-about-americans-and-their-smart-speakers/.
Bondy, H. (2019, December 11). Artificial Intelligence has a gender problem - why it matters for everyone. NBCNews.com. https://www.nbcnews.com/know-your-value/feature/artificial-intelligence-has-gender-problem-why-it-matters-everyone-ncna1097141.
Brahnam, S., & De Angeli, A. (2012, April 14). Gender affordances of conversational agents. OUP Academic. https://academic.oup.com/iwc/article-abstract/24/3/139/690595.
Candello, Heloisa (2019, May) The Effect of Audiences on the User Experience with Conversational Interfaces in Physical Spaces | Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. https://dl.acm.org/doi/abs/10.1145/3290605.3300320?casa_token=_dANTDBeg-8AAAAA%3AaoR0KGfcUjPd-wf9q2IAbuaHdNp6wFAovEFbM-XFLHnt5JFW3TWFCDjswg6sBVN5yGaL1Tsf1h6QI8U.
Coleman, L. deL. (2018, July 16). Inside The Tricky Business Of Gender, Voice And The $190B Artificial Intelligence Game. Forbes. https://www.forbes.com/sites/laurencoleman/2018/07/15/inside-the-tricky-business-of-gender-voice-and-the-190b-artificial-intelligence-game/?sh=5b335cac5cb2.
Conversational interfaces: advances and challenges. IEEE Xplore. (n.d.). https://ieeexplore.ieee.org/abstract/document/880078.
Costa, P., & Ribas, L. (2019, June 1). AI becomes her: Discussing gender and artificial intelligence. https://www.ingentaconnect.com/content/intellect/ta/2019/00000017/f0020001/art00014.
Creating conversational interfaces for children. IEEE Xplore. (n.d.). https://ieeexplore.ieee.org/abstract/document/985544.
EqualAI®. EqualAI. (n.d.). https://www.equalai.org/.
Ethics & AI: Computer Science, Gender and Intersectionality. BrightTALK. (2020, October 7). https://www.brighttalk.com/webcast/13819/431469/ethics-ai-computer-science-gender-and-intersectionality.
Gartner Top 10 Strategic Predictions for 2021 and Beyond. Smarter With Gartner. (n.d.). https://www.gartner.com/smarterwithgartner/gartner-top-10-strategic-predictions-for-2021-and-beyond/.
The global landscape of AI ethics guidelines. (n.d.).
Greenconnections. (2019, July 12). Is AI Biased Against Women? Miriam Vogel, Executive Director of Equal-AI. Green Connections Radio. http://greenconnectionsradio.com/is-ai-biased-against-women-miriam-vogel-executive-director-of-equal-ai/.
Hannon, Charles. (2016, April). Gender and status in voice user interfaces. Interactions. https://dl.acm.org/doi/pdf/10.1145/2897939?casa_token=AkpQeAV1Bf8AAAAA%3ASccGFa-hEFsZhCTZSycHdLWz_CDlsgEYxh_yu4P4ICpNiViuuL9ia6t2RGkG9B53AWFdPvTMEXZ7_jY.
Hempel, J. (2018, June 6). Siri and Cortana Sound Like Ladies Because of Sexism. Wired. https://www.wired.com/2015/10/why-siri-cortana-voice-interfaces-sound-female-sexism/.
How 2020 Accelerated Conversations on Diversity, Equity and Inclusion. Smarter With Gartner. (n.d.). https://www.gartner.com/smarterwithgartner/how-2020-accelerated-conversations-on-diversity-equity-and-inclusion/.
Hu, Q., Lu, Y., Pan, Z., Gong, Y., & Yang, Z. (2020, October 16). Can AI artifacts influence human cognition? The effects of artificial autonomy in intelligent personal assistants. International Journal of Information Management. https://www.sciencedirect.com/science/article/pii/S0268401220314493?casa_token=JVhjYCZc4-IAAAAA%3Aw-FYwM5dJ3PZwb_g43tj1lix2kRLNCQqGDQrbXXF5jiVTxIPyVxAd1ObF7PK5LdnKRhEaKfw_0XO.
Johnson, C., & Tyson, A. (2020, December 15). Are AI and job automation good for society? Globally, views are mixed. Pew Research Center. https://www.pewresearch.org/fact-tank/2020/12/15/people-globally-offer-mixed-views-of-the-impact-of-artificial-intelligence-job-automation-on-society/.
Kim, Ahyeon; Cho, Minha; Ahn, Jungyong. (2019) Effects of Gender and Relationship Type on the Response to Artificial Intelligence. CYBERPSYCHOLOGY, BEHAVIOR, AND SOCIAL NETWORKING. https://www.researchgate.net/profile/Jungyong-Ahn/publication/331719104_Effects_of_Gender_and_Relationship_Type_on_the_Response_to_Artificial_Intelligence/links/5d8d768f92851c33e9406f54/Effects-of-Gender-and-Relationship-Type-on-the-Response-to-Artificial-Intelligence.pdf
Kinsella, B. (2020, April 28). Nearly 90 Million U.S. Adults Have Smart Speakers, Adoption Now Exceeds One-Third of Consumers. Voicebot.ai. https://voicebot.ai/2020/04/28/nearly-90-million-u-s-adults-have-smart-speakers-adoption-now-exceeds-one-third-of-consumers/.
Kinsella, B. (2020, February 20). U.S. In-car Voice Assistant Users Rise 13.7% to Nearly 130 Million, Have Significantly Higher Consumer Reach Than Smart Speakers - New Report. Voicebot.ai. https://voicebot.ai/2020/02/20/u-s-in-car-voice-assistant-users-rise-13-7-to-nearly-130-million-have-significantly-higher-consumer-reach-than-smart-speakers/.
Lee, K., Lee, K. Y., & Sheehan, L. (2019, December 14). Hey Alexa! A Magic Spell of Social Glue?: Sharing a Smart Voice Assistant Speaker and Its Impact on Users' Perception of Group Harmony. Information Systems Frontiers. https://link.springer.com/article/10.1007%2Fs10796-019-09975-1.
McLean, G., Osei-Frimpong, K., & Barhorst, J. (2020, December 15). Alexa, do voice assistants influence consumer brand engagement? – Examining the role of AI powered voice assistants in influencing consumer brand engagement. Journal of Business Research. https://www.sciencedirect.com/science/article/pii/S0148296320307980?casa_token=gvyB3hI3oTEAAAAA%3AtaxechzDOZRlDlkYIGtgCKrzvtpB6mcP55auIOkXIUTrTaCCgDbSe7fQUw75D5YcRm4EdbfzX4GZ.
Medeiros, J. (n.d.). This is Q: The First Genderless Voice for AI. VOICE 2020. https://www.voicesummit.ai/blog/genderless-voices-are-finally-coming-to-ai.
Meet Pegg, a gender-neutral robot assistant. The World from PRX. (n.d.). https://www.pri.org/stories/2018-03-28/meet-pegg-gender-neutral-robot-assistant.
Amershi, S., et al. (2019, May 1). Guidelines for Human-AI Interaction. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. https://dl.acm.org/doi/abs/10.1145/3290605.3300233?casa_token=uORIC50lNZcAAAAA%3A0wZlZWHiYIPX8mfqioZDoQGDPR8MHHN7YNkDQvl4sNJ9X5MbeIeOY23oW7KFR5tuYQ2syvmHnujl_jQ.
Miriam Vogel quoted in NBC News article, "Artificial Intelligence Has a Gender Problem - Why It Matters for Everyone". WestExec Advisors. (2019, December 13). https://westexec.com/miriam-vogel-quoted-in-nbc-news-article-artificial-intelligence-has-a-gender-problem-why-it-matters-for-everyone/.
Mortada, D. (2019, March 21). Meet Q, The Gender-Neutral Voice Assistant. NPR. https://www.npr.org/2019/03/21/705395100/meet-q-the-gender-neutral-voice-assistant.
Virtue Nordic (2019) Meet Q. The First Genderless Voice. https://www.genderlessvoice.com/.
Poushneh, A. (2020, September 2). Humanizing voice assistant: The impact of voice assistant personality on consumers' attitudes and behaviors. Journal of Retailing and Consumer Services. https://www.sciencedirect.com/science/article/pii/S0969698920312911?casa_token=-c6KuBLRBugAAAAA%3A9Cv66Xdwb7rnBiAD4EKbS7A9k9zPF5fblCCccsgd7L43w5MOO54QcUUIwXVrCfI1NX0tNeIehH8M.
Rainie, L., & Anderson, J. (2020, August 17). Stories From Experts About the Impact of Digital Life. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2018/07/03/stories-from-experts-about-the-impact-of-digital-life/.
Rainie, L., & Anderson, J. (2020, July 22). Artificial Intelligence and the Future of Humans. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/.
Rainie, L., Anderson, J., & Vogels, E. A. (2021, April 5). Experts Say the 'New Normal' in 2025 Will Be Far More Tech-Driven, Presenting More Big Challenges. Pew Research Center: Internet, Science & Tech.
Rainie, L., & Anderson, J. (2020, August 6). Experts on the Future of Work, Jobs Training and Skills. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2017/05/03/the-future-of-jobs-and-jobs-training/.
Rainie, L., & Anderson, J. (2020, August 6). Experts on the Pros and Cons of Algorithms. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2017/02/08/code-dependent-pros-and-cons-of-the-algorithm-age/.
Rainie, L., Anderson, J., & Albright, J. (2020, August 27). The Future of Free Speech, Trolls, Anonymity and Fake News Online. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2017/03/29/the-future-of-free-speech-trolls-anonymity-and-fake-news-online/.
Shabat, B. (2021, January 7). Women-Owned Business: Statistics & Trends [2020]. Become. https://www.become.co/blog/women-owned-businesses-statistics/.
Sharon Oviatt (2004, September 1). Toward adaptive conversational interfaces: Modeling speech convergence with animated personas. ACM Transactions on Computer-Human Interaction (TOCHI). https://dl.acm.org/doi/abs/10.1145/1017494.1017498?casa_token=9-HQNZTEbNMAAAAA%3ApzYXAfVk_xIoq-Ac6QIPFcN2NGdAH9H8fmOxtmGqZ3GVKFbxCf50kIDaRXBgtHaR8Y-l3HLdZmnqicM.
Simon, M. (n.d.). The Genderless Digital Voice the World Needs Right Now. Wired. https://www.wired.com/story/the-genderless-digital-voice-the-world-needs-right-now/.
"Speaking and Listening: Mismatched Human-like Conversation Qualities U" by Peng Hu, Kun Wang et al. (n.d.). https://aisel.aisnet.org/pacis2019/81/.
Stansberry, K., Anderson, J., & Rainie, L. (2020, August 14). Experts Optimistic About the Next 50 Years of Digital Life. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2019/10/28/experts-optimistic-about-the-next-50-years-of-digital-life/.
Sydell, L. (2018, July 9). The Push For A Gender-Neutral Siri. NPR. https://www.npr.org/2018/07/09/627266501/the-push-for-a-gender-neutral-siri.
Gender Ambiguous, not Genderless: Designing Gender in Voice User Interfaces (VUIs) with Sensitivity. Gender Ambiguous, not Genderless | Proceedings of the 2nd Conference on Conversational User Interfaces. https://dl.acm.org/doi/pdf/10.1145/3405755.3406123?casa_token=aZTb9PLj7DUAAAAA%3Ajp2XsVUj0ngBOqCSP8638_QPxXZ-R-ITNt8YwaN-eVLVlHqtqjXt3Rarggwdr3MKQTxKg1Mpt2LWjq8.
Vogels, E. A., Rainie, L., & Anderson, J. (2020, October 23). Experts Predict More Digital Innovation by 2030 Aimed at Enhancing Democracy. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2020/06/30/experts-predict-more-digital-innovation-by-2030-aimed-at-enhancing-democracy/.
Wiggers, K. (2020, July 16). Artie releases tool to measure bias in speech recognition models. VentureBeat. https://venturebeat.com/2020/07/15/artie-releases-tool-to-measure-bias-in-speech-recognition-models/.
Wong, H. (2020, January 17). Siri, Alexa and unconscious bias: the case for designing fairer AI assistants. Design Week. https://www.designweek.co.uk/issues/13-19-january-2020/unconscious-bias-ai-voice-assistants/.
Zellou, Georgia. (2021, January 20) Age-and Gender-Related Differences in Speech Alignment Toward Humans and Voice-AI. https://www.researchgate.net/profile/Georgia-Zellou/publication/348629322_Age-_and_Gender-Related_Differences_in_Speech_Alignment_Toward_Humans_and_Voice-AI/links/60083dc5299bf14088aacfe8/Age-and-Gender-Related-Differences-in-Speech-Alignment-Toward-Humans-and-Voice-AI.pdf