1 Introduction

In recent years, Artificial Intelligence (AI) has experienced a notable resurgence in both academic and commercial circles. This renewed interest has positioned AI as an integral element in the landscape of digital transformation, with the potential to revolutionise industries moving forward. However, the capabilities of AI are frequently overstated [1], and communications and general discourse around AI are often plagued by hype [2].

AI hype has been a concern of scholars, activists, and practitioners since the inception of artificial intelligence, as the exaggerated promises and inflated expectations of AI technologies can perpetuate harmful stereotypes and exclusionary practices against marginalised communities [3]. However, the advent of machine learning technologies and the advancements in generative AI have fuelled a new wave of hype around AI [4] and, with that, increased risks of harm for marginalised groups.

There are many ways in which AI hype can be harmful to marginalised communities. Harm is particularly likely where the capabilities of AI systems are exaggerated, where they are misleadingly presented as highly accurate or infallible solutions to social problems, or where their potential biases are downplayed. Marginalised communities may then have negative experiences with AI through the reinforcement of stereotypes and biases [5], or through harms arising when marginalised people are not represented in the data on which AI systems are trained [6].

Hype is also an influencing factor in policy and decision-makers’ understanding of AI capabilities, and in subsequent AI implementations. The role of hype in policymakers’ and key decision-makers’ understandings of the capabilities, risks, and limitations of artificial intelligence has been addressed by key figures in the AI ecosystem, such as Zachary Lipton, who stated in 2018 that “policymakers don’t read the scientific literature but they do read the clickbait that goes around” [2]. According to Lipton (2018), the media industry is partly responsible for this issue, as it fails to effectively distinguish between genuine advancements in the field and promotional material [2].

Hype is also a means to obfuscate real issues of bias, harm, and exploitation felt most sharply by marginalised communities when AI is implemented. This raises the question of power imbalances as a feature of AI technologies as we currently know them. This paper will study the relationship between AI hype and marginalised communities, particularly the LGBTQ+ community, and the role which marketing communications plays in amplifying this hype and its impacts on the LGBTQ+ community.

Section 2 of this paper will discuss marginalisation and its origins. Section 3 will examine some of the key theoretical underpinnings of this paper. Section 4 looks specifically at power and AI, and some of the real-world impacts experienced by marginalised communities. Section 5 examines how hype shapes the LGBTQ+ community’s experience of AI. Section 6 then moves on to look exclusively at the queer experience of AI. The paper then discusses these elements in Section 7, before offering recommendations for future research in Section 8 and concluding in Section 9.

This paper will pose two key questions: does hype affect marginalised communities, particularly hype around new technologies such as AI; and what impacts does the LGBTQ+ community experience as a result of that hype. This paper will then move on to discuss areas that provide a focus for the discourse on AI hype and its impact on the LGBTQ+ community: policy and decision-making, the maintenance of the cisgender heteronormative (cishet) baseline, the ubiquity of a mythology of AI, and the role of market expansion.

2 Marginalisation and its origins

2.1 Marginalisation and marginalised communities

The concept of marginalised communities refers to groups of people who experience social, economic, and/or political disadvantage or exclusion due to factors such as their race, ethnicity, gender identity, sexual orientation, disability, or socioeconomic status. More expansive definitions extend to communities considered to be outwith mainstream society. The concept of marginality can be traced back to Robert Park of the Chicago School of Sociology, who defined it as a position of individuals or groups in society characterised by a lack of power and limited access to resources [7].

While social marginalisation can be experienced on an individual level, such as in the cases of single parents [8], unhoused people [9], or disabled people facing barriers in the workforce [10], marginalisation also occurs on a larger societal scale, where entire communities are at risk of marginalisation on account of systemic discrimination and prejudice based on their identities [10, 11].

Contributing factors to marginalisation are numerous; however, certain key elements arguably contribute to marginalisation on a global scale, such as gender [12], race and ethnicity [13], globalisation [14], and socioeconomic inequality [15, 16]. These factors intersect and interact, giving rise to intricate systems of marginalisation that impact individuals within the LGBTQ+ community. The intersectionality of different marginalising factors is also a key contributor to further marginalisation [17].

2.2 Marginalisation and the LGBTQ+ community

The LGBTQ+ community refers to individuals who identify as lesbian, gay, bisexual, transgender, queer, and other sexual and gender minorities. People within the LGBTQ+ community often face systemic discrimination and prejudice based on their sexual orientation and gender identity, leading to various socioeconomic and healthcare disparities compared to the general population [18]. People within the LGBTQ+ community may also experience marginalisation from other groups within the LGBTQ+ community. For example, trans individuals or people of colour within the LGBTQ+ community may experience transphobia or racism from other group members [19]. It is therefore important to consider the ways in which the LGBTQ+ community is marginalised not only as a whole, but also how specific subgroups within the LGBTQ+ community may face intersecting forms of marginalisation.

A major concern regarding the expanded use of AI is its potential to unintentionally reinforce stereotypes around gender, which can hinder progress toward gender equality [20]. As AI algorithms primarily learn from vast amounts of data, the biases locked within these systems can be perpetuated and reinforced through unmonitored implementation. However, research specifically focusing on the LGBTQ+ experience of marketing-driven early adoption is limited.

The LGBTQ+ community has encountered numerous challenges when it comes to the integration of their queer identities with artificial intelligence. These issues stem from the fact that AI models often learn from data that reflect social biases, leading to instances of discrimination against transgender individuals on dating websites and misgendering by generative AI systems. Furthermore, algorithmic bias within healthcare systems perpetuates negative impacts on the LGBTQ+ community, undermining progress made in addressing bias for other marginalised groups. Over the past two decades, as AI technology has advanced, its interactions with the LGBTQ+ community have exhibited various harmful or unfavourable aspects.

3 Theoretical background and considerations

3.1 Queer theory

Much of the marginalisation of the LGBTQ+ community comes from the acceptance of heterosexuality, and the characteristics and values it entails, as the dominant paradigm for understanding gender and sexuality. Queer theory challenges the dominant paradigm of using heterosexuality as the standard for comparison [21]. The term was coined by Teresa de Lauretis in 1991 [22], and at least three interconnected concepts can be identified in queer theory [23]:

  • Disregarding heterosexuality as the default state for sexuality.

  • Challenging the notion that lesbian and gay studies comprise a singular area of study.

  • Acknowledgement and subsequent focus on the interconnection of racism and sexism.

Queer theory offers the potential to encompass these various critiques and enable a reevaluation of sexuality and gender beyond traditional norms. In the context of AI hype and the experience of marginalised communities, queer and feminist theoretical perspectives can offer valuable insights into the experiences of marginalised communities and the potential biases embedded within AI.

However, it is important to note that Queer theory did not emerge in isolation; it has its roots in critical and feminist theories. These theories aimed to deconstruct dominant narratives and examine power imbalances within society. One influential figure in this field is Michel Foucault, whose analysis of power dynamics and exploration of the history of sexuality remain highly relevant [24, 25]. In examining the relationship between key concepts like heteronormativity and cisnormativity in Queer theory and mechanisms employed by AI systems, we can draw upon Foucault’s insights regarding taxonomies, classifications, and power structures that contribute to the oppression faced by LGBTQ+ communities.

According to Foucault, subjectivity is not an isolated concept in philosophy; rather, it is shaped and influenced by knowledge and power. In his later work, he delved into examining the interplay between knowledge and power.

His work represents a significant departure from previous conceptions of power and cannot be easily assimilated into existing frameworks. Power is depicted as diffuse rather than concentrated, embodied, and enacted instead of possessed, discursive rather than solely coercive, and forming agents rather than being wielded by them [26].

3.2 The cishet baseline

The key concept in queer theory is that of heteronormativity: “the institutions, structures of understanding, and practical orientations that make heterosexuality seem not only coherent—that is, organised as a sexuality—but also privileged” [27].

Heteronormativity refers to the assumption that heterosexuality is the norm and all other forms of sexuality are deviations or abnormal. This assumption permeates societal institutions, cultural norms, and individual beliefs, creating a hierarchical system that marginalises and oppresses non-heterosexual identities.

Similarly, cisnormativity is the assumption that cisgenderism is the norm, and that everyone is (or ought to be) cisgender [28]. The term originated in 2009 as “the expectation that all people are cissexual” [29], and it has been noted that “Cisnormative assumptions are so prevalent that they are difficult at first to even recognize”, and that “cisnormativity shapes social activity such as child rearing, the policies and practices of individuals and institutions, and the organisation of the broader social world” [29].

This “cishet baseline” has numerous implications for the LGBTQ+ community, both in terms of othering the lived experiences of LGBTQ+ people, and in terms of actively discriminating against the LGBTQ+ community or specific member groups within the community.

In Western Europe and North America, there has been a notable increase in the normalising of non-heterosexuality within institutional arrangements. Same-sex marriage and adoption have become legally recognised in countries such as Canada, the USA, the UK, and many parts of Western Europe. Furthermore, equalities legislation prohibits employers from discriminating against individuals on the basis of their sexual orientation or gender identity, to a certain extent (although this is narrowly defined in UK legislation). While these advancements may appear positive at first glance, it is important to recognise that the rights enjoyed by queer individuals across these regions still face ongoing challenges and potential threats, similar to those observed with abortion rights.

The recent overturning of Roe v Wade in the United States has created opportunities to challenge rulings on marriage equality and laws concerning private sexual activities. This shift in legal precedent is reflected in the introduction of bills like the “Don’t Say Gay” bill, reminiscent of the UK’s s.28, which prohibits any discussion or promotion of gender and sexuality (currently enacted only in Florida but with plans for implementation in 11 other states). In December 2023, the UK government’s Department for Education released its guidance for a “parent first approach” to transgender and non-binary children in schools [30], guidance which has been widely criticised as transphobic and in violation of children’s privacy [31]. Furthermore, the Eastern European queer community faces increasing discrimination and social isolation from the anti-LGBTQ+ political ideologies currently asserting dominance in nations such as Poland and Hungary [32, 33], in stark juxtaposition to the more permissive political landscape of the 1960s and 70s and the social values of many citizens [34].

4 Power and AI

4.1 Defining power

Power itself can be understood as either structural or poststructural. Structural power refers to systemic bodies of power which are encoded in social and institutional structures, shaping relationships, norms, and values within society; however, defining structural power has been a subject of debate and varies among different scholars [35,36,37]. Poststructural power is concerned with how power functions through discourse and language, shaping subjectivities and identities [38].

Power is, according to Weedon (1987):

a dynamic of control and lack of control between discourses and the subjects, constituted by discourses, who are their agents. Power is exercised within discourses in the ways in which they constitute and govern individual subjects [39].

Furthermore, systems of power are not solely determined by individual actions but rather by the existence of power itself. Power permeates society and is present in countless everyday situations that involve various issues. The combined impact of these situations leads to the establishment of specific power structures [40]. Additionally, individuals themselves are shaped by both external and internal constraints imposed by these power structures. External controls restrict certain identities, particularly through labelling numerous bodily desires as unacceptable.

Power is not limited to specific individuals or rigid structures, but is present in every aspect of society without any singular source or fixed form [41]. Every society has its system of truth, known as the “general politics” of truth [42]. This refers to the specific kinds of discourse that are accepted and treated as true within a particular society. It also includes the mechanisms and institutions that enable individuals to distinguish between true and false statements, as well as how these statements are validated.

4.2 Real-world implications for marginalised communities

The hopeful optimism that AI will assist in overcoming biases in human decision-making has been challenged by instances of bias and unfairness against marginalised communities [43]. As asserted by Abeba Birhane (2022):

“Let’s ditch the common narrative that AI is a tool that promotes and enhances human ‘prosperity’ (whatever that means) & start with the assumption that AI is a tool that exacerbates inequality & injustice & harms the most marginalised unless people actively make sure it doesn’t.” [44]

Women and people of colour (and particularly women of colour) experience real-world implications of inherent power imbalances encoded in AI systems trained on biased data and created in biased and unbalanced conditions. For example, facial recognition technology has demonstrated racial and gender bias in data sets, leading to the misclassification of women and people of colour [45,46,47]. Disparities in facial recognition classification accuracy are also pronounced between light-skinned and dark-skinned people, with inaccuracies rising sharply for dark-skinned people [45]. The existence of such biases in areas like law enforcement, where facial recognition technology is employed for identification purposes, can result in significant negative outcomes with potentially devastating consequences [48,49,50].
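
The mechanism by which such disparities are surfaced is disaggregated evaluation: reporting error rates per demographic subgroup rather than a single aggregate accuracy figure. The following minimal Python sketch illustrates the idea; the groups, labels, and records are entirely hypothetical and are not drawn from the studies cited above.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute per-group misclassification rates.

    Each record is (group, true_label, predicted_label).
    Hypothetical data; illustrates disaggregated evaluation only.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Invented predictions from a hypothetical face classifier.
records = [
    ("lighter-skinned men", "M", "M"), ("lighter-skinned men", "M", "M"),
    ("darker-skinned women", "F", "M"), ("darker-skinned women", "F", "F"),
]

print(error_rates_by_group(records))
# A single aggregate accuracy figure would mask the fact that errors
# concentrate in particular subgroups.
```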

Structural inequalities in access to healthcare are often replicated in AI-enabled healthcare systems [51,52,53]. These systems have been found to exhibit biases in diagnosis and treatment recommendations, leading to disparities in healthcare outcomes for marginalised communities.

The technocentric discourse which centres fast and ubiquitous implementation of new technologies yields numerous demonstrations of systemic power leveraged against marginalised communities [54]. This can be seen across key aspects of social administration, as demonstrated in Table 1.

Table 1 Technology implementation impact on marginalised communities across social categories

5 How hype impacts the LGBTQ+ AI experience

5.1 Definitions and understanding of hype

Hype as a concept has a variety of different definitions, ranging from deception or fraud to excitability [71]. However, for the purposes of this paper, the definition of hype that will be utilised is the use of media, marketing, and promotional channels to elicit interest in a product or service [71]. Where this intersects with new technologies such as AI, this is often achieved on the basis of overinflated claims of capabilities, although this is not always the case [72].

Hype is often a catalyst in the implementation and adoption of emergent technologies [73]. This has been seen in previously emergent technologies of the modern era, such as the Internet [74,75,76], Big Data [77,78,79], the Internet of Things [80,81,82], and Blockchain [83,84,85].

Perhaps the most prevalent model of emergent technology hype is the Gartner Hype Cycle, a graphical representation of the hype surrounding various technologies over time, their adoption rates, and their social impact [86]. It consists of five distinct phases that reflect stages in the technology adoption cycle: the Innovation Trigger, the Peak of Inflated Expectations, the Trough of Disillusionment, the Slope of Enlightenment, and the Plateau of Productivity [87].

The concept of the Gartner Hype Cycle has been the subject of numerous criticisms [88,89,90], mainly due to its subjective nature and lack of scientific rigour. Few technologies have been shown to travel through an identifiable hype cycle, and the model has been described as more of a conceptual framework than a precise predictive tool [91].

However, despite these criticisms, the hype cycle framework has been widely used to understand the adoption and maturity of emerging technologies, including in the field of artificial intelligence.

As a specific marketing approach, hype involves utilising exaggerated measures of publicity to generate excitement and anticipation for a product or service [92]. This modern practice is closely linked with social media marketing, particularly through influencer and viral marketing [93]. Consumers targeted by hype marketing may engage in hype-generating activities around the product in question as a demonstrator of conspicuous consumption and as a means to signal their affiliation with a particular brand or trend, and the personal characteristics or social capital that this signifies [93].

Hype in itself is not necessarily a bad thing. As noted by Milne (2020):

“Hype, like any tool, isn’t inherently good or bad. It can be the tool with which we gather communities around positive change, and it can be the tool that misleads to satisfy the ill-conceived wants of a few immoral actors. Sometimes people don’t even know they are propagating it. But when hype starts to grow unchecked, it doesn’t really matter who started it or why; what matters is that it is spotted before any damage happens” [94].

5.2 The evolution of AI hype

Throughout the history of artificial intelligence, there has been a persistent pattern of inflated expectations and grandiose claims regarding its transformative potential. The field has consistently experienced periods of hype, with exaggerated promises surrounding AI’s capabilities.

Artificial Intelligence encompasses various technologies, such as machine learning, natural language processing, natural language generation, deep learning, and neural networks. While the term “Artificial Intelligence” serves as a broad concept that simplifies complex technological processes for lay audiences, computer scientists and developers are starting to view it as a marketing hype term [95,96,97]. This is due to its continuous use to mask the true capabilities of different technologies behind an illusion of a singular magical technology.

Hype has been a significant aspect of artificial intelligence research and development since the 1950s. While there have been notable advancements in the field of artificial intelligence in recent years, much of this progress can be attributed to the availability of Big Data and increased computing power rather than substantial strides in what is commonly understood as “intelligence” by the general public [98,99,100].

The effect of such hype has led to a perception from some commentators that “‘AI-powered’ is tech’s meaningless equivalent of ‘all natural’” [101].

According to a report by Slate, an analysis of press releases and technology articles dating back to the 1990s reveals a recurring pattern: predictions about technological advancements, especially those related to artificial intelligence, consistently project developments that are 5–10 years away [102].

The report compiled a list of 81 such predictions to illustrate this common cliché. These inflated expectations and overpromises contribute to the hype surrounding AI, creating a sense of anticipation and excitement among both industry professionals and the general public.

5.3 Elements of AI hype

There are several elements of AI as a concept that are simultaneously hyped up while driving further hype towards AI. This can be seen in anthropomorphisation and perceived objectivity of AI systems.

In the case of anthropomorphism, it has been observed that to facilitate customer–robot interactions, humanlike service robots may be preferred to increase customers’ perceptions of social presence [103]. Equally, this can be seen in the human mimicry of chatbots, which can often convince users that their interactions have been with another human actor [104]. There is a growing consensus, noted by Novak and Hoffman (2019), that anthropomorphism is an important tool in understanding how customers experience interactions with inanimate objects [105]. According to Epley et al., this perception results from “the attribution of human characteristics or traits to nonhuman agents” [106].

Anthropomorphism has been found to increase product and brand liking [107], although whether anthropomorphism in service robots enhances customers’ experiences is unclear. It has been argued that humanlike design “incorporates the underlying principles and expectations people use in social settings in order to fine-tune the social robot’s interaction with humans” [107]. However, there is also the argument that anthropomorphism is less positive: “consumers will experience discomfort—specifically, feelings of eeriness and a threat to their human identity” [107]. This is also known as the “uncanny valley” effect.

However, Troshani et al. have posited that enhancing the humanness of an AI application is likely to amplify the human user’s perception of the goodness of that application, and consequently the extent to which it can be trusted. They further posit that humanity in AI applications in service settings can improve consumers’ trust in these applications, which can, in turn, facilitate relationship building between consumers and service providers [108].

Hype also drives and reinforces the idea of perceived objectivity that underpins AI technologies. Datasets on which AI systems are trained, and subsequently analysed, often reflect inequities that occur in the world at large [109, 110]. However, the highly technical nature of data-driven AI systems often provides a rhetoric of objectivity which veils the complicated and much more fallible systems underneath [111].
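
This dynamic can be made concrete with a small sketch. In the hypothetical example below, a classifier is fit, without any technical error, to outcome labels that encode a historical skew between two groups; the model is "mathematically sound" yet simply reproduces the inequity present in its training data. The dataset, group encoding, and resulting probabilities are invented for illustration only.

```python
# Minimal sketch: a technically sound model trained on historically
# skewed outcomes reproduces that skew. All data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Single feature: group membership (0 or 1).
# Label: whether a positive outcome was historically recorded.
X = np.array([[0]] * 50 + [[1]] * 50)
y = np.array([1] * 40 + [0] * 10 + [1] * 10 + [0] * 40)

model = LogisticRegression().fit(X, y)

for group in (0, 1):
    prob = model.predict_proba([[group]])[0, 1]
    print(f"group {group}: predicted probability of positive outcome = {prob:.2f}")

# The model is statistically faithful to its training data, but appeals
# to its "objectivity" obscure the fact that the data encode past inequity.
```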

These “appeals to objectivity” are embedded in technological discourses and practices [112]. This notion of objectivity makes it more difficult to challenge this fundamentally misleading dichotomy and to demand accountability [113].

When defining Big Data, on which artificially intelligent systems are developed, Boyd & Crawford (2012) define it as a cultural, technological, and scholarly phenomenon that intersects with technology, analysis, and mythology [114]. The concept of mythology offers a foundation for appeals to objectivity in perpetuating the belief that data offers a “higher form of intelligence and knowledge that can generate insights that were previously impossible with the aura of truth, objectivity and accuracy” [114]. These claims to objectivity in Big Data, the information on which artificial intelligence is fed, are fundamentally misleading.

It is argued by Gitelman (2013) that an interpretative process of the imagination is shaped by the norms and standards of every discipline and disciplinary institution, and their own perception or “imagination” of data [115]. Boyd & Crawford note: “As computational scientists have started engaging in acts of social science, there is a tendency to claim their work as the business of facts and not interpretation. A model may be mathematically sound, an experiment may seem valid, but as soon as a researcher seeks to understand what it means, the process of interpretation has begun. This is not to say that all interpretations are created equal, but rather that not all numbers are neutral” [114].

Closely related to perceived objectivity is the fallacy of inscrutability [116]. This fallacy is a category error: when critics argue that the actions of a system cannot be comprehended, they are attributing values to mechanical technologies rather than to the humans who created and implemented them [116]. The fallacy of inscrutability is highlighted as one of 18 key issues with AI journalism which contribute to AI hype, set out by Kapoor and Narayanan [117]. This can be seen in media coverage claiming that it is impossible to understand how models work and that, as such, they cannot be used in a non-discriminatory way.

The remainder of the 18 common pitfalls most often seen in AI journalism include flawed human–AI comparisons; hyperbolic, incorrect, or non-falsifiable claims about AI; uncritically platforming those with self-interest; and failure to address limitations [117]. These common issues with media representation of AI contribute to the perpetuation of unrealistic expectations and the culture of hype surrounding AI technologies.

This is compounded by the role of technology developers and other private interests in driving AI hype. Technology firms have a strong motivation to keep information concealed. Some may aim to protect the confidentiality of their intellectual property, while others seek to capitalise on the allure of “AI” without truly engaging in AI itself [118]. Many software products may employ quite ordinary statistical methods that do not reflect true artificial intelligence. Consequently, it is not advantageous for a company to disclose how basic its technology actually is.

The role of tech companies is further compounded by the media. A key pitfall of AI journalism according to Kapoor and Narayanan includes the platforming of self-interested parties without critique. This can be seen by the media treating company spokespeople or sources as though they are neutral sources, repeating PR terms rather than describing how an AI tool works [117]. This uncritical platforming allows corporate interests to control the narrative and perpetuate the hype surrounding their AI technologies, without providing a balanced and factual representation of their limitations or potential risks [117]. This lack of critical analysis in AI journalism contributes to the formation of false impressions and unrealistic expectations about AI capabilities.

All of this hype has real-world consequences which both directly and indirectly harm marginalised communities such as the LGBTQ+ community.

Hype drives the early adoption of new tech, even when there is little evidence to support its effectiveness or usefulness [119], and when there are potentially negative impacts for marginalised communities [120]. The rush to adopt AI technologies without fully understanding their capabilities and limitations can lead to the creation and perpetuation of biased and discriminatory algorithms [121].

The fear of being left behind and the rush to early adoption also drive the traffic in “fake” AI [122]. For example, in 2019, venture capital firm MMC found that out of 2,830 startups classified as AI companies, only 1,580 (approximately 56%) actually met the criteria [123].

Related to this is the way in which hype impacts AI implementation decisions [124]. Where executions of AI have resulted in adverse social impacts, implementation decision-makers ultimately perceive the technology to be impartial, and the results generated to be fair and correct, even in the presence of biased or poorly structured data [124].

There is also the notion that the failure of previous hyped technologies such as blockchain or cryptocurrency has led to a desire for AI to be the “golden solution” that will solve all economic, social, and political woes, encouraging an attitude of “all bets are on AI” [125]. This plays into a discourse of inevitability, whereby AI implementation is presented as a necessity. In a horizon scan of discourses on artificial intelligence and development in education, Nemorin (2021) states:

“This view of AI in education rests on the assumption that no space in the human body is sacred enough to be protected from the creep of AI’s attention. This social imaginary suggests that every aspect of bare life is and should be thrown open for measurement and behavioural management...” [126].

6 The queer AI hype experience

6.1 The LGBTQ+ historical perspective: engagement with technology and media

The LGBTQ+ community has had a complex relationship with AI since its inception, as seen in the transgender politics of Alan Turing’s original Turing Test [127]. The presence of other LGBTQ+ individuals among prominent AI pioneers further emphasises this intersection between AI and the LGBTQ+ community on a human level. However, it is also important to consider the ethical implications and potential biases that can arise when integrating AI algorithms into societal frameworks. Christopher Strachey, a designer of the CPL programming language and considered one of the pioneers of computer-generated art, faced personal challenges regarding his sexuality while working within the restrictive environment of British academia during the 1960s [128]. Peter Landin, a prominent figure in computer science who recognised that programming languages could be given a precise mathematical expression, later expressed regret for his involvement with the field due to its growing utilisation in state surveillance activities [129].

The LGBTQ+ community faces various forms of harm in the digital realm, even outside the scope of AI technologies. In the United Kingdom, legislation such as the Protection of Freedoms Act 2012 [130] and the Policing and Crime Act 2017 (known as the ‘Turing Law’) [131] aimed to address this issue by allowing gay men with historic cautions or convictions for certain offences to have them disregarded or pardoned. However, due to inadequate consideration of the wide range of historical offences stored digitally, many individuals ended up having these convictions included in Disclosure and Barring Service checks alongside serious sexual offences, terrorism offences, and crimes such as murder. As a result, their careers suffered significant damage [132]. In 2023, the programme was revised to cover a broader range of offences eligible for pardon. For the first time, this now includes pardons for women convicted of any past same-sex activity offences that have since been repealed or abolished [133].

6.2 The modern experience: a history of the present

With the advancement of AI technology, its impact on the LGBTQ+ community has evolved. Unfortunately, this evolution has often resulted in harmful effects. A review of some key harmful impacts of the implementation of AI systems on the LGBTQ+ community can be seen across a variety of social factors in Table 2.

Table 2 A review of AI implementation impacts on the LGBTQ+ community

One notable instance occurred in 2017 when a Stanford study claimed that facial images could be used by artificial intelligence to determine sexual orientation [134]. This claim was swiftly criticised by advocacy groups as “junk science” [135] and sparked debates regarding privacy, ethical boundaries, and potential misuse of AI technology. Metcalf (2017) argues that part of the issue with this paper lies at the intersection of research ethics and research hype [136]. Where academic ethics boards come up against data science research, they are often ill-equipped to deal with the outcome, leaving people at risk of harm in the rush to be the next “scientific gaydar” claim [136]. The questionable and centuries-old search for positive physical identifiers of sexuality is in itself a product of perceived objectivity hype (albeit for science more generally rather than AI specifically) [137].

Automated gender recognition (AGR) refers to a specific application of facial recognition technology that uses AI algorithms to identify the gender of individuals from photographs and videos. However, it is important to acknowledge that AGR models hold outdated and potentially harmful assumptions about gender presentation, particularly for transgender and non-binary communities [138]. Studies indicate that AGR technologies reinforce existing biases against marginalised groups, including trans, non-binary, and gender non-conforming individuals [139], as well as people with darker skin tones who belong to racial minority groups [140]. These examples highlight the discriminatory outcomes and representational harms that can arise from data-intensive practices and AI systems of surveillance and social sorting [141]. The development and proliferation of AGR technologies can be seen as a direct reaction to hype. As with the aforementioned appeals to scientific objectivity found in “scientific gaydar” claims [136, 137], the means by which these systems purport to classify gender vary widely [142]. AGR systems also fall prey to the fallacy of inscrutability [116], as the commercial computer services are almost always proprietary “black box” systems [142].
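
The structural nature of this critique can be illustrated directly. The sketch below does not reproduce any specific commercial system; it simply shows the conventional design of a two-class output head, in which the label set is fixed when the system is built, so that every input, regardless of the person depicted, is forced into one of two categories.

```python
import numpy as np

# Illustrative only: a conventional AGR classifier ends in a two-way
# softmax, so its output space is fixed at design time.
LABELS = ["female", "male"]  # no category exists for non-binary people

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(face_embedding, weights, bias):
    """Map an image embedding to exactly one of the two labels."""
    logits = weights @ face_embedding + bias
    probs = softmax(logits)
    return LABELS[int(np.argmax(probs))], probs

# Hypothetical embedding and parameters standing in for a trained model.
rng = np.random.default_rng(0)
embedding = rng.normal(size=128)
weights = rng.normal(size=(2, 128))
bias = np.zeros(2)

print(classify(embedding, weights, bias)[0])
# Every input is mapped to one of two boxes; misclassification of trans
# and non-binary people is a property of the design, not a data problem
# that more training examples can fix.
```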

Utilisation of AI content moderation has led to the LGBTQ+ community experiencing several issues. Social media platforms like YouTube and TikTok have come under scrutiny for their alleged discriminatory practices towards the LGBTQ+ community. In 2019, YouTube faced a class action lawsuit that accused its content moderation algorithm of falsely identifying videos with keywords related to “lesbian,” “transgender,” and “gay” as adult content and restricting access to them through the use of “restricted mode” [143]. Similarly, TikTok has been accused of engaging in various anti-LGBTQ+ activities through its algorithms, including limiting exposure to LGBTQ+ hashtags and suppressing disabled, plus-sized, and LGBTQ+ creators’ content [144, 145]. Despite frequent and regular reports about the failures of automated content moderation, whether it “is neither reliable nor effective” [146], “it might not work” [147], or it is still reliant on human labour to the extent that a former content moderator filed suit against TikTok alleging failure to provide adequate safeguards for moderator mental health after she developed PTSD [148], AI continues to be hyped as a panacea for content moderation by industry players such as OpenAI [149] and the UK Government [150].
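
The kind of over-restriction alleged in these cases is easy to reproduce with even the simplest moderation logic. The sketch below is a deliberately naive, hypothetical filter rather than any platform's actual system: once identity terms are treated as signals of "adult" content, benign LGBTQ+ content is restricted wholesale.

```python
# Naive, hypothetical moderation filter for illustration only.
# Treating identity terms as adult-content signals produces exactly
# the kind of false positives described above.
ADULT_SIGNAL_TERMS = {"lesbian", "gay", "transgender"}  # the flawed premise

def restrict(video_title: str) -> bool:
    words = {w.strip(".,!?'\"").lower() for w in video_title.split()}
    return bool(words & ADULT_SIGNAL_TERMS)

titles = [
    "Coming out to my parents as transgender",
    "Lesbian couple answers your questions",
    "Weekend baking tutorial",
]

for title in titles:
    print(f"restricted={restrict(title)}  {title}")
# The first two benign videos are restricted; the filter encodes the
# assumption that queer identity itself is not suitable for all viewers.
```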

This is in spite of evidence that even highly efficient moderation systems could exacerbate, rather than improve, numerous current content policy issues on platforms: potentially further enhancing opacity; introducing complexity to existing problems related to fairness and justice in large-scale sociotechnical systems; and concealing the inherently political nature of speech decisions being made at scale [151].

These examples represent just a small fraction of the reported cases of algorithmic bias targeting the LGBTQ+ community on various social media platforms. It is evident that marginalised groups face stricter content moderation regulations and are subject to disproportionate account suspensions, especially when their content challenges the dominant group [152].

The LGBTQ+ community has also faced the algorithmic promotion of content that is harmful and discriminatory. TikTok, for instance, has been accused of actively promoting homophobic and anti-LGBTQ+ content to its users [153]. The cyclical nature of misinformation and the viral spread of content on social media platforms, combined with intentional promotion to boost user engagement, have played a significant role in amplifying narratives such as the “groomer” accusation against LGBTQ+ individuals. This type of narrative has garnered substantial attention on social media platforms, with the top 500 influential tweets containing hateful ‘grooming’ allegations being viewed over 72 million times [154].

7 Discussion

7.1 Introduction

This paper seeks to investigate the ways in which AI hype impacts the LGBTQ+ community. As a means of exploring this proposition, hype was examined as a concept both in AI and in marketing communications more generally, along with the various ways in which AI itself can cause harm for marginalised communities, in particular the LGBTQ+ community. This paper also looked into the ways in which marketing and communication strategies for AI contribute to and mirror systemic power dynamics, specifically concerning the LGBTQ+ community. The findings of this paper suggest that AI hype can indeed mirror and perpetuate the existing power structures related to LGBTQ+ identities.

The LGBTQ+ community’s experience of AI is fundamentally shaped by the biases inherent in AI systems and by the preconceived means of interacting with LGBTQ+ individuals that these systems can deliver. AI technologies are thus able to inflict a variety of harms on the LGBTQ+ community. The lack of diverse representation and inclusivity in developing and applying AI technologies further perpetuates these biases.

Problematic issues that arise from the intersection of AI and the LGBTQ+ community include the use of controversial and discriminatory AI technologies like facial recognition, deception detection, and predictive policing. Not only are these technologies shown to cause harm to the LGBTQ+ community as a whole, but those most at risk from the sharpest harms are the most disenfranchised of the community, namely trans people, refugees and asylum seekers, and people of colour. It remains imperative to approach AI and the people impacted by its implementation through an intersectional lens.

This paper has also examined the way in which AI hype drives early adoption of AI technologies, often without sufficient consideration of their potential impact on marginalised communities.

This is particularly the case for AI systems that purport to predict social outcomes, such as crime prediction, child protection, and welfare benefits administration. The context in which these technologies are implemented can have significant consequences for the LGBTQ+ community. Given that many of these technologies are implemented without sufficient transparency, oversight, or accountability mechanisms, the potential risks to marginalised communities are amplified. The consequences of the AI hype cycle on the LGBTQ+ community therefore cannot be overlooked.

The findings of this paper point to a variety of different factors at play in driving hype and the adoption of AI technologies towards an ultimately harmful experience for the LGBTQ+ community. This discussion will now consider these factors and their implications in more detail, before making recommendations as to how to address these issues and create a better environment in which LGBTQ+ people can interact with AI.

7.2 How AI hype actively harms the LGBTQ+ community

7.2.1 Influence on policy and decision-making

Artificial intelligence hype, particularly claims pertaining to perceived objectivity, infallibility, and techno-solutionism, is pervasive in many areas of decision-making. In particular, AI hype exerts an increasingly significant influence on key decision-makers and AI implementers, subsequently increasing the impact on marginalised communities. On the subject of navigating claims of artificial superintelligence, The Carnegie Endowment for International Peace commented that “leaders are far less equipped to evaluate claims made in a media and investment environment that incentivizes hype over level-headed assessment” [168].

The Observatory of Public Sector Innovation noted the impact that AI would have on government policymaking and made recommendations as to policy frameworks to overcome AI hype in government decision-making [169]. However, policy decisions to implement AI systems with questionable and arguably overstated claims have seen AI implemented in European Union border control policy [170], the United Kingdom Defence Strategy [171], and Dutch welfare benefits policy [172], as well as across a host of other public sector implementations [173]. Sadly, but somewhat unsurprisingly, the key drivers of AI implementation in the aforementioned policies centre on claims of efficiency and objectivity, be that in tracing fraud [172], in warfare [171], or in border security [170]. It is also with a somewhat bitter irony that protection of vulnerable people from harm is mentioned frequently as a benefit of AI implementation in these policies, although without any clarification as to how or why this might be achieved.

One does not need to look far to find key elements of hype, such as techno-solutionism and simplistic appeals to infallibility, in public policies. The introduction to the UK Government’s Defence Artificial Intelligence Strategy states boldly:

“We also recognise that the use of AI in many contexts, and especially by the military, raises profound issues. We take these very seriously – but think for a moment about the number of AI-enabled devices you have at home and ask yourself whether we shouldn’t make use of the same technology to defend ourselves and our values” [171].

Overstatement of the effects of AI systems may also be seen as a means to secure funding or set funding agendas in the public sector, in research, and in commercial enterprises [174]. Indeed, this is not only seen as a particular phase in the Gartner Hype Cycle as depicted in Appendix 1, but is also a defining factor in the concept of “AI Summers” and “AI Winters”, whereby government funding of AI projects, along with commercial investment, goes through periods of boom and bust, predicated largely on overstatement of AI capabilities and thus leading to a reduced capacity for exploration and innovation in the field [175].

AI hype also drives the pervasive notion that it operates outwith human weaknesses such as bias [176]. This claim has actively impacted policy decisions that disproportionately affect marginalised people.

This claim is often used to promote and implement AI systems in various social contexts, such as recruitment, criminal justice, child protection, and welfare administration. However, these AI tools for social outcomes are frequently applied in situations where marginalised individuals and communities are excluded from the decision-making process yet are most likely to be affected by them [177].

AI has been used to predict outcomes in fields where marginalised communities are at the greatest likelihood of intersecting with these technologies, in areas such as crime prevention [178], domestic violence [179], child welfare [180], and welfare benefit administration [172]. In 2022, British innovation agency Nesta noted that a key issue with utilising AI to predict social outcomes was a lack of robustness in AI models, leaving them generally poor at generalising outwith the narrow confines within which they had been trained, and favouring targeted interventions that could achieve unfair outcomes [181].

In the recruitment and human resources sectors, AI is often touted to business leaders as an effective way to reduce bias in the hiring process [182,183,184]. This is a popular misconception around AI that lingers in decision-making circles to this day, despite evidence that AI at best reproduces race and gender bias in a similar way to humans [185], largely on account of limited and biased datasets [186]. Despite these concerns, AI continues to be adopted in the recruitment sector, with recruiters focused largely on the efficiencies that AI brings over concerns around bias [187].

The impact that AI hype has on policy and decision-making has harmed, and continues to harm, marginalised people, of whom the LGBTQ+ community is a particularly vulnerable constituent.

7.2.2 Obfuscation and diverted priorities

Artificial intelligence hype also actively harms marginalised communities, particularly the LGBTQ+ community, through obfuscating or otherwise distracting from real issues of bias, harm, and exploitation felt most sharply by marginalised communities when AI is implemented. The persistent focus on superintelligence, or the existential threat of AI [188], effectively diverts public attention (and capital investment) away from real and routine matters of discrimination in housing [189], healthcare [12, 43, 51,52,53, 58], security, policing and criminal justice [48,49,50, 64], and the spread of hate speech and misinformation in non-English languages [190], all of which drastically impact marginalised communities and where the LGBTQ+ community experiences some of the sharper harms.

In the instance of a social media platform such as TikTok, where the LGBTQ+ community is impacted by discriminatory content moderation policies as well as the increasing promotion of anti-LGBTQ+ content, the hype around the use of the platform (or the subsequent US ban) serves to overshadow the obscurity around the use of AI on the platform itself. In a case study examining the use of AI in TikTok and Facebook, Grandinetti (2021) states that “the discursive promotional strategies of TikTok represent a hype cycle that obfuscates as much as it clarifies” [191].

7.2.3 Supporting the cishet baseline as the dominant discourse

Heteronormativity and cisnormativity continue to prevail in environments where the rights of queer individuals are undermined or taken away, as well as in spaces where their very existence is challenged by those who hold power. The politicisation and marginalisation of queer communities further reinforce cishet ideologies and uphold the belief that heterosexuality and cisgenderism are the norm, while any other sexual or gender identity is viewed as a deviation influenced by external factors.

This “cishet baseline” is arguably the main driver for the exclusion of the LGBTQ+ community in AI systems and their underlying data sets. This is particularly the case where models are trained on historical data, such as health records, but we also see its effects in other areas, such as social media algorithms and targeted advertising. These AI models are often built on data sets that reflect and reinforce heteronormative and cisnormative biases and assumptions, leading to inaccurate or discriminatory outcomes for the LGBTQ+ community.

To combat this issue, it is crucial to actively question and dismantle heteronormative and cisnormative prejudices in AI systems. It is vital to diversify data sources by integrating LGBTQ+ experiences to guarantee equitable and inclusive AI algorithms. This involves incorporating diverse perspectives and experiences, specifically those of LGBTQ+ individuals, into data collection processes for the development of more accurate and inclusive AI systems.

7.2.4 Market expansion above human impact

Exaggerated claims have been a common occurrence in the field of artificial intelligence research and development. In the 1950s, Alan Turing envisioned a future where computers would reach such an advanced level of intelligence that a machine interlocutor would become indistinguishable from a human one. While recent advancements like the GPT-4 algorithm demonstrate progress towards this goal, Turing initially anticipated achieving this technological breakthrough by the end of the twentieth century. In 1970, Life Magazine quoted multiple computer scientists who predicted that within 3–15 years we would have machines with general intelligence comparable to that of humans. But with the advent of generative AI, we are seeing a rush to market for AI that values getting ahead of the AI trend above the real consequences that marginalised people face from its implementation.

Despite the regular highlighting of the potential and realised issues with AI for marginalised communities, the demands of the market, of commerce and of innovation mean that AI implementation continues barely abated. This has been seen over the past decade of increased AI implementation, but has seen an upswing with the easy accessibility of generative AI tools. This is despite warnings that generative AI is perpetuating harmful gender stereotypes [192], data colonialism and exacerbation of poverty in the Global South [193], and monopolistic business practices [194] to name just a few. Whether these warnings will come to pass is of little consequence. The hype, the rush to market and the fear of missing out or being left behind is pervasive. As discussed earlier, decisions to implement AI in situations where limitations of the data or biases in the algorithmic outcomes are a known fact are often made on the basis of cost cutting, efficiency, and competitive advantage. While this rush to market continues, or indeed intensifies, we will continue to see AI systems implemented that harm the LGBTQ+ community.

7.2.5 A mythology of AI and AI hype

The burgeoning ubiquity of AI and the discourse of inevitability around it, coupled with the omnipresent nature of AI hype, arguably give way to a pervasive mythology of AI. Technological myths take hold through their ability to enter the collective imagination [195], something AI can be seen to have achieved within its socio-technological system [196].

This mythology of AI perpetuates unrealistic expectations and exaggerations about the capabilities and potential impact of AI technologies, encouraged by a variety of actors and influences, becoming pervasive throughout politics, society, and culture [195].

The mythology of AI is disseminated in the form of AI hype, arising from a combination of factors including media sensationalism, marketing strategies by tech companies, and the desire to attract investment and gain competitive advantage [197].

The ideology at the core of the AI myth has been suggested as “a political and social ideology rather than as a basket of algorithms. The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but much of humanity” [198].

Operating under the auspices of such a mythology can drive implementations of AI technology that do not adequately consider the specific needs, experiences, and challenges faced by marginalised communities. Even where the mythology is not the activating reason for implementation, media coverage, public perceptions, corporate and financial interests, and a general environment of heightened hype all contribute to the mythologisation of AI technology.

Technology myths have a direct impact on the capabilities of policy-makers to make decisions, whereby industry generated and media hyped technology myths ultimately degrade the quality of decision-making [199]. Existing in a mythology of AI, driven by AI hype, where AI is ubiquitous, inevitable, inscrutable, and infallible ultimately frames all AI implementations and outcomes within the context of the AI myth and AI hype.

8 Areas for further research

8.1 AI, regimes of truth, and power dynamics

The AI ecosystem can be seen as a prevailing dominant discourse that functions as a regime of truth. This regime is established by those who create artificial intelligence and influence decision-makers within organisations, ultimately supporting the existing power structure. Hype is also a key feature in the construction of a regime of truth.

Individuals who are not fluent in the dominant discourse often face marginalisation, ridicule, or exclusion. However, it is worth considering that AI has only been widely recognised for less than half a century. This raises the question of whether there is potential to alter the prevailing discourse surrounding AI. Can regimes of truth be reconstructed and modified by challenging perspectives that may seem unconventional or irrational? Such an inquiry would highlight the inherent contradictions between knowledge about AI, how this knowledge is acquired, and the decision-making processes within the field.

8.2 Accountability

The domains of ethics, technology, and society have become increasingly cognisant of the direct repercussions that AI technologies wield on their end users, as well as how the creation and utilisation of AI systems can reinforce the existing power structures and systemic biases against affected populations. As a result, there has been a widespread demand to establish mechanisms for AI accountability in a manner that is both transparent and effective. Framing the question around who or what is accountable for AI hype, AI implementation, and the impacts of these implementations is an area of investigation which would add to these discussions.

8.3 Influencing factors in AI design, investment, and implementation

In the course of developing this paper, a significant number of cases of AI design, implementation, and socio-political decision-making have been analysed, examined, and, in many cases, critiqued. However, the influencing factors in decision-making processes pertaining to AI, particularly as experienced directly by the decision-makers themselves, are rarely to be found in the literature. This paper lays the foundations for further exploration and novel data collection from decision-makers themselves, to explore the influencing factors that drive implementations of AI, steer AI design and development, and underpin AI policy.

9 Conclusion

This paper aimed to address two key questions: does hype affect marginalised communities, particularly hype around new technologies such as AI; and how do AI marketing and communication strategies that leverage hype reflect systemic power dynamics, particularly as they pertain to the LGBTQ+ community.

This paper explored the connection between AI hype and its impact on the LGBTQ+ community, as well as the influence of marketing in amplifying this hype. The findings of this paper suggest that AI hype can reflect and perpetuate the existing power dynamics surrounding LGBTQ+ identities, leading to a reinforcement of heteronormative and cisnormative ideologies, and subsequently compounding the marginalisation of queer communities.

The LGBTQ+ community is fundamentally impacted by the biases and preconceptions ingrained in AI systems and algorithms. These biases can appear through misgendering or inaccurately depicting LGBTQ+ individuals in tailored ads, omitting LGBTQ+ subjects from AI-generated content, or reinforcing stereotypes about the LGBTQ+ community. Moreover, the absence of varied representation and inclusiveness in the creation and application of AI technologies contributes to perpetuating these biases. For example, AI development in societies with a history of discrimination may reinforce and worsen these biases and oppressions [202].

The findings of this paper point to several driving factors at play in encouraging hype and the adoption of AI technologies towards an ultimately harmful experience for the LGBTQ+ community. This can be seen predominantly in the impact that AI hype has on decision-making powers with respect to AI implementation, and in the fundamental societal state of heteronormativity underpinning issues around data sets, model development, and general diversity and inclusion in the AI ecosystem. Overall, this paper sheds new light on the ways in which the LGBTQ+ community is impacted by AI implementation across a number of different areas, and the ways in which hype drives the adoption and implementation of these technologies irrespective of the harms felt by the LGBTQ+ community.