Open access
Research article
First published online January 29, 2024

Who will govern the metaverse? Examining governance initiatives for extended reality (XR) technologies

Abstract

It is increasingly common to see policymakers, industry and regulators calling for governance of the ‘metaverse’ – often envisioned as a technological stack supported by extended reality (XR) technology. This article reports on a content analysis of calls to govern XR, published between March 2020 and May 2023 (n = 181), aiming to understand who is calling for XR governance, what is being governed and why. Findings reveal that XR governance was advanced by government, civil society and industrial stakeholders, and was scoped around four major issues (‘privacy’, ‘safety, equity and inclusion’, ‘competition’ and ‘commercialisation’). Governance solutions for emerging XR technologies included reliance on existing laws and regulations and, predominantly, a more anticipatory set of approaches (largely scoped around industrial self-governance, education and best practice) focused on the media-specific harms and affordances of XR.

Introduction

Over the last several years, the notion of the ‘metaverse’ has emerged as an area of intense focus for the technology sector. A capacious term – taken to mean different things by different constituencies – the metaverse can generally be understood as a large-scale, three-dimensional computing network (Ball, 2022), one that many boosters suggest will come to operate as a new infrastructure for both computing and life more broadly. Though the metaverse has yet to materialise, its proponents tend to envision it as a convergence of various technologies. These include blockchains, artificial intelligence and spatial computing technologies such as wearable augmented reality (AR) and virtual reality (VR) – what is commonly marshalled under the label of mixed or extended reality (XR; see, for example, McStay, 2023).
This article focuses on this latter category of XR, a class of technology with a long history across experimental, military and consumer contexts (Pesce, 2020). There is consensus that XR has the potential to affect society in ways that are both empowering (e.g. benefits to workplace productivity and education) and harmful (e.g. threats to privacy, new forms of harassment). On this basis, it is common to hear academics, policymakers, civil society groups (CSGs) and industry call for proactive governance of this emerging class of technology, imposing limits on its development and use before it is widely adopted.
Yet, we lack a comprehensive understanding of who is calling for governance, what is being governed, how governance will be enacted and why particular governance initiatives are being implemented (i.e. what material logics and interests guide governance?). As such, in this article, we ask the following:
RQ1. What stakeholders are governing – or calling for the governance of – emerging XR technologies?
RQ2. What is the scope of XR governance initiatives?
Some of the aspects we discuss are XR specific, while others overlap with existing dialogues about governing digital technology.
Of course, technology policy and governance are not value neutral; governance bears ideological marks. As such, it is not enough to ask how XR is governed and by whom. We also need to address the diverse logics guiding calls for governance – allowing us to understand points of cohesion and tension across governance initiatives, and who the beneficiaries of these initiatives might be. Thus, we also ask:
RQ3. What kinds of values and logics underlie XR-related governance initiatives?
To answer these questions, we conducted a content analysis of 181 documents relating to XR governance (with a worldwide scope) – governance which is both actually existing (e.g. through laws and regulation, as well as other non-binding policy) and aspirational. As we show, these initiatives have differing aims and scope, and are driven by varied (and sometimes competing) logics and interests.
This article first overviews related work on the governance of media and technology and the emerging literature on XR governance issues and initiatives. From there, we outline our study methodology. Subsequently, we present our findings – charting the dominant governance stakeholders, the scope of governance initiatives and their enforcement mechanisms. We follow this with a discussion of findings, focusing on some of the pitfalls of existing governance initiatives, and what a productive agenda for XR governance might look like. In our conclusion, we synthesise key findings and insights, and suggest possible directions for future study.

Related work

Governance and technology

In this article, we follow definitions of media and technology governance understood broadly as the set of legal, political and economic relationships shaping the interactions between users, technology companies, governments and other stakeholders. Governance can be both restrictive (limiting individuals’ and entities’ capacities for action) and enabling (providing means for individuals or entities to act). A key reference point is the work of technology policy scholar Robert Gorwa (2019a, 2019b). Building on Abbott and Snidal’s (2009) model, Gorwa (2019a) conceptualises the governance of technology (in his case, focusing specifically on platform technologies) as a ‘triangle’ – a three-sided relation between technology firms, nation-state agencies and non-governmental organisations (NGOs). Crucially for Gorwa, these points of the triangle are not bracketed off from one another, but rather interrelated in ways that are dyadic (e.g. state-NGO co-governance) and triadic (e.g. state-firm-NGO co-governance).
From the perspective of the governance triangle, there are a range of possible governance frameworks. Technologies can be self-governed in a way that is voluntary and non-binding – such as industry standards bodies or self-imposed initiatives (e.g. Meta’s ‘Oversight Board’, see Wong and Floridi, 2023). Technology is also subject to external governance – such as through the increasing application of laws and public policy to the conduct of platform businesses and end use practices (Flew, 2021). Technology is also commonly subject to co-governance – collaborative rulemaking between governments, industry and CSGs (such as creating standards, or through consultation with industry in the law-making process).
Yet, governance is not value neutral. For researchers studying technology governance, it is imperative to understand not only what the approaches to governance are, but why they are implemented and to whom they deliver benefit. An interdisciplinary literature has demonstrated how technology governance is heavily influenced by market and state ideologies (Jasanoff and Kim, 2015; Mansell, 2012) and geopolitical contexts (Gray, 2021; Siu and Chun, 2020). Governance shapes what technologies get made and how they get used, in ways that benefit some stakeholders (and not others). For example, technology governance often elevates and legitimates the commercial interests of large industrial stakeholders. We might think of how corporate self-governance is deployed self-servingly by tech firms in response to public and regulatory controversies (see, for example, Gillett et al., 2022), or how industrial calls for governance – particularly in emerging technology sectors like artificial intelligence (AI) – might shape emerging legal and regulatory measures in ways aligned with industrial interests (Veale et al., 2023: 17–18). In other words, techno-politics – the way that human actors assemble institutions, procedures and objects in pursuit of particular ends (Winner, 1980) – is not just about how technologies are made or what they do, but how they are governed.

Governing XR

While XR technologies have not seen widespread adoption, a small (but growing) literature has focused on their governance. This literature concentrates on two broad areas – governance issues and governance initiatives. The former identifies governance challenges, often proposing potential resolutions through policy or regulation. The latter, a smaller literature, identifies XR’s governance mechanisms (and often interrogates the material and ideological interests underlying governance).
Early work on XR governance issues focused on high-profile technologies like Google Glass – Google’s AR device, which never reached a mass consumer market – examining how emerging AR computing interfaces conflict with privacy laws (Meese, 2014; Wassom, 2014). More recent research has focused on the resurgence of XR over the last decade, and how the new technologies, business models and practices coming with it have legal and regulatory implications. Echoing tenets of feminist critiques of VR and notions of virtuality (e.g. Green, 1999), recent work in media studies (Carter and Egliston, 2024; McStay, 2023) has argued that thinking about governance in XR requires taking seriously the social, political and economic structures surrounding XR’s development and use, interrogating how it might benefit some and not others.
Embodying this perspective, a nascent scholarship has identified policy and governance challenges facing contemporary XR – for instance, the intersection of XR use in work contexts with anti-discrimination regulation (Egliston and Carter, 2021: 11–14), harassment and social harm (Blackwell et al., 2019), and market competition (Egliston and Carter, 2022c). More prescriptive academic work, such as that published by industry-adjacent venues like the Institute of Electrical and Electronics Engineers, has addressed issues such as trolling, harassment and e-safety, with aims to effect changes in design and policy (Cortese and Outlaw, 2021). Perhaps unsurprisingly, given the data-rich nature of contemporary XR devices (Miller et al., 2020), there has been much attention to potentials for surveillance and implications for privacy – such as a focus on gaps between XR technology and existing privacy law (Heller, 2020), or the regulatory implications of how XR data might be operationalised (namely, for new frontiers in targeted advertising; see Heller and Bar-Zeev, 2021).
A smaller literature has shifted from studying governance issues to studying governance initiatives. Particular attention in this literature has been paid to corporate policy. For instance, research has analysed XR ‘best practice’ initiatives advanced by industry (particularly by Meta), many of which have been critiqued as a cynical effort to drum up goodwill (Applin and Flick, 2021; Harley, 2023). Other work has explored how terms of service agreements govern user or complementor behaviour (Egliston and Carter, 2021; Trimananda et al., 2021), and has set out speculative design guidelines for how VR software developers might implement policies that more meaningfully attain user consent (of particular importance given the technology’s data-rich nature; see Selinger et al., 2023).
Building on the current literature, and its focus on both issues and initiatives, this article offers a comprehensive account of XR governance – particularly that developed by industrial, civil society and government stakeholders. We aim to understand what is being governed (identifying its points of cohesion and tension), by whom and why. But beyond its explanatory potential, taking stock of the XR governance landscape allows us to account for its efficacy, its limitations and to think about what future (and perhaps more productive) initiatives might look like.

Approach

To conduct this study, we undertook a content analysis (Schreier, 2020) of calls to govern XR. These ranged from currently existing governance initiatives – from legal regulation to corporate policy – to more speculative calls for future governance. Our data set comprised 181 documents pertaining to XR governance with a global scope. Locating our material largely involved keyword searches through Google. In some instances, we used the search functionality of stakeholders’ websites (e.g. the search function on Meta’s website). Our search keywords began with broader terms like ‘metaverse data privacy’, and we moved to more specific and targeted phrasing as we grew familiar with the material, for example, ‘VR’ ‘data’ OR ‘privacy’ OR ‘GDPR’.1 While we believe that this approach provided a comprehensive view of proposed and existing regulatory and policy initiatives, a sampling limitation was that we only gathered results published in English (and that were located largely using a search engine that privileges English-language results). While we were able to account for some results from outside the Anglosphere – such as proposals for government regulation throughout Asia (e.g. India, Korea, Japan and China) – these were limited to instances where sources were published in English or covered in secondary sources. Despite these constraints, we sought as globally representative a sample as possible.2
Material was segmented into the three main stakeholder categories common in our data set – government, CSGs and industry. While other forms of governance certainly exist (e.g. industry unions pushing for more ethical corporate research and development; see Lecher, 2019), these are relatively minor and not representative of the governance landscape. Most material was from a primary source – published on the web domains or social media pages of governance stakeholders. Secondary sources were used where primary source documentation was not available (e.g. trade press coverage). We sampled these documents for maximum variation (within the parameters of XR governance) and within the date range of March 2020 to May 2023. Documents ranged from relatively short (e.g. a 500-word blog post) to longer reports (e.g. a 45-page policy agenda).
We coded our material using an iterative system – common in qualitative approaches to the analysis of textual data (Creswell, 2007). Prior to coding, each author independently familiarised themselves with the material. We then approached the material with three main questions in mind, drawn from existing work in studies of technology governance (see Table 1): who leads the governance initiative? What is the scope of the governance initiative? What are the enforcement mechanisms? This was followed by an inductive analysis, in which we organised the data according to recurrent patterns or concepts (see Table 1, column 2) through an iterative process (e.g. ‘data’ was a common topic consolidated into the broader theme of privacy). Notably, stakeholders were not limited to a single theme (perhaps due to the aspirational nature of most governance initiatives), with most focusing on at least two. The sample of 181 documents yielded references to 290 individual themes across four main categories (see Figure 1). Our final step in data analysis involved going back over the material to ensure that nothing was missed, and to confirm the reliability of our final categories.
Table 1. Coding schema and main categories in data.
Questions applied deductively to the data | Main categories (inductive) emerging from data
Who leads the governance initiative? | State; Industry; CSGs (NGOs, advocacy groups, etc.)
What is the scope of the governance initiative? | Privacy; Safety, equity and inclusion; Competition; Commercialisation
What are the enforcement mechanisms? | Law or direct regulation; Self-governance; Co-governance
CSGs: civil society groups; NGOs: non-governmental organisations.
Figure 1. Frequency of governance themes across different stakeholder types.
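To make the tallying procedure concrete, the sketch below shows how a Figure 1-style cross-tabulation of theme references by stakeholder type could be computed once documents have been coded. This is a minimal illustration of the counting logic only, not the authors’ actual tooling: the record structure and the example entries are hypothetical, and only the stakeholder types and final themes from Table 1 are assumed.

from collections import Counter
from itertools import product

# Hypothetical coded records: each document carries a stakeholder type and
# one or more inductively derived themes (most documents carried at least two).
documents = [
    {"id": "G20", "stakeholder": "Government", "themes": ["Privacy", "Safety, equity and inclusion"]},
    {"id": "CSG74", "stakeholder": "CSG", "themes": ["Privacy"]},
    {"id": "I10", "stakeholder": "Industry", "themes": ["Competition", "Commercialisation"]},
]

# Tally theme references per stakeholder type. Because documents can carry
# several themes, theme references (n = 290) exceed documents (n = 181).
crosstab = Counter(
    (doc["stakeholder"], theme)
    for doc in documents
    for theme in doc["themes"]
)

stakeholders = ["Government", "CSG", "Industry"]
themes = ["Privacy", "Safety, equity and inclusion", "Competition", "Commercialisation"]

# Print a simple frequency table, one row per stakeholder-theme pair.
for s, t in product(stakeholders, themes):
    print(f"{s:>10} | {t:<30} | {crosstab.get((s, t), 0)}")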

Who wants to govern XR?

Three main stakeholder types advanced XR governance initiatives: governments and government agencies (n = 43), CSGs (n = 89) and industry (n = 49). The CSG skew is at least in part attributable to the fact that many (over 30%) of the CSGs in the sample were funded by a 2021 Meta scheme to develop principles and policy for XR, drawing in a range of CSGs that would likely not otherwise have focused on XR (see I10).
For governments, most initiatives in the sample came from North America, Europe and Australia – with far less representation of regions beyond this group (although we acknowledge this skew may be a product of the English-language limitation of our sampling). Government responses ranged in their degree of specificity in engaging with XR governance. India’s response, for instance, rather vaguely notes the need for timely regulation to both harness opportunity and mitigate ‘threats’ (G2). In contrast, responses by government agencies like the Australian eSafety Commissioner and by governments such as the New South Wales state government (Australia) tapped into medium-specific affordances and risks (e.g. privacy risks associated with head and body tracking, G20). Beyond nation-specific governmental responses to XR, there were responses from actors with broader jurisdictional scope. The supranational European Commission (EC) is a notable example, citing an XR-supported metaverse as something that ‘brings both opportunities and risks’ (G8) for the European Union – covering areas including consumer safety, competition and commercialisation (with the EC responding with further calls for public, industry and CSG consultation; see G21, G23). As we have seen with other areas of European digital policy, a supranational approach to governing XR could well have global influence and prove productive in setting obligations for corporations and digital spaces accessed by EU citizens. While some governmental bodies, such as national competition regulators in the United States (G17) and Germany (G33), view existing laws as directly applicable to XR, others note the inadequacy of current legal approaches. For example, EC regulators (G34) acknowledge the fragmented nature of US state privacy legislation, which makes compliance challenging (particularly problematic in the context of data-intensive devices like XR). Similarly, Australian regulators (G39) note that legal definitions of ‘personal information’ do not fully safeguard individuals’ personal digital data generated by XR devices.
CSGs – such as advocacy groups and NGOs – included organisations which developed best practices and standards for using and making XR technology. These groups were either (1) scoped specifically around XR or (2) occupied with a broader societal issue (e.g. privacy, gender equality, child safety, disability/inclusion, etc) and were responding to the implications of XR on these wider domains. The former included groups like the Extended Reality Safety Initiative (XRSI) – a US-based organisation that describes itself as ‘proactively anticipating and addressing the cybersecurity challenges’ (CSG1) associated with XR. The XRSI produces reports, research and standards on XR technology (such as a ‘taxonomy’ of XR, CSG71), and has been in consultation with both industry and government.3 The latter includes major, international NGOs such as the United Nations (specifically the Office of Counterterrorism and Preventing Violent Extremism, CSG4) and the World Economic Forum (CSG5-7).
Third, industry – specifically, firms producing XR software or hardware – was a major voice in calling for governance. As we examine in further detail below, industry largely called for self-governance – self-imposed standards and control mechanisms (such as design practices, technical standards or terms of service) that shape the conduct and agency of the various actors involved with a company’s technology (e.g. content creators, end users, the company itself). Some industry players, by contrast, advocated for regulatory oversight, a move which accrues benefit to early entrants to the XR market, allowing them to shape the regulatory landscape proactively (e.g. I10). Perhaps unsurprisingly, given its outsized role in the XR industry, most initiatives advanced by industrial stakeholders came from Meta (comprising over 20% of the total industry sample). Within the industry category, and parallel to XR firms, were professional services firms – such as law firms and consultancies – calling for a need to think about governance. These firms offer knowledge as a service for technology companies and governments. For instance, professional services firms have published reports identifying where companies could implement policy to build consumer trust (I7, I3), and law firms are increasingly providing legal services tailored to businesses seeking to minimise regulatory fines or litigation losses in a new technology market (e.g. advising on how XR might intersect with existing areas of the law, such as tax, IP and cybersecurity; see I23, I24, I25).
In this section, we have identified three major stakeholder types who advance XR governance initiatives – governments, CSGs and industry. As illustrated in Figure 1, the interests guiding these stakeholders were diverse (although not mutually exclusive) – and were spread across the following four broad categories: privacy; safety, equity and inclusion; competition; and commercialisation. In the following section, we examine the scope of governance initiatives and how the interests and concerns of each stakeholder group intersect and diverge.

What is the scope of governance initiatives?

Privacy

A common governance issue was privacy (n = 85). Specifically, the most common privacy-related issue was data-related privacy harm (n = 54), consistent with the focus on data and surveillance in the academic literature on XR. In the context of government initiatives, the Australian Attorney-General’s Department – in a review of the Australian Privacy Act 1988 – recognises XR as an emerging threat to user privacy due to the sensitive, biometric nature of device data (G7). While the report is not scoped around XR specifically, XR is cited as an example of where Australia’s national privacy law falls short, failing to protect against how digital data traces might be used to identify individuals.4 Issues of data and privacy have been echoed in the wider European context. As EC President von der Leyen put it in a 2023 consultation on digital connectivity, ‘the amount of data exchanged – and harvested – is larger than ever and will increase . . . The metaverse and virtual worlds . . . are making this even more evident’ (G38). Elsewhere, in a report presented in the European Parliament, European regulators note the potential for privacy concerns due to the sensitive nature of XR data – ‘including biometric data and data on the emotional and physiological responses of users, representing sensitive personal data under the GDPR and thus requiring special attention and explicit user consent for each purpose for which data is used’ (G34). As we return to later, a challenge for the EC (exemplified in the current case of XR) is balancing the tension between protecting ‘data rights’ and advancing the commercialisation of technological research and ‘innovation’ (a tenet of EC policy; see Guay and Birch, 2022).
Government attention to data dovetails with the efforts of CSGs like the XRSI, perhaps the most vocal advocate of data protections in XR, which places significant emphasis on data-related harms. For example, the XRSI notes forms of physical or physiological biometric identification techniques that could theoretically extend from XR – including gaze analysis, voice recognition and facial recognition (e.g. CSG1) – highlighting the implications for existing regulatory frameworks such as the GDPR and the Children’s Online Privacy Protection Act. Elsewhere, the Future of Privacy Forum (FPF) – an American data privacy advocacy group – published a resource on XR data governance, identifying the data inputs derived from VR sensors (e.g. microphones, outward/inward-facing cameras and inertial measurement units; CSG73, CSG74). As FPF policy analyst Jameson Spivack notes, there is uncertainty about the application of US state-based privacy laws (namely, the Illinois Biometric Information Privacy Act [BIPA]) to XR data collection: ‘BIPA applies to information based on “scans” of hand or face geometry, retinas or irises, and voiceprints, and does not explicitly cover the collection of behavioural characteristics or eye tracking’ (CSG74).
Data collection and privacy were also concerns for industry stakeholders. For some, this was as simple as highlighting existing laws under which XR data would fall (e.g. I30). Larger industrial stakeholders like Meta and Qualcomm, by contrast, responded to the issue of data privacy by attempting to cultivate a community of software developers (and other interest groups) aligned with privacy-centred values (among other kinds of pro-social ideals; I10, I46). To do so, Meta and Qualcomm sought partnerships with researchers, software designers and CSGs through funding schemes valued at US$50 million and US$100 million, respectively.
Beyond data, industrial attention to privacy included the threat of interpersonal surveillance – how users may be seen by other users in XR. Interpersonal surveillance was a major component of Meta’s October 2021 guidelines for user best practice following the release of its ‘Stories’ smartglasses (I11, I42). These principles, Meta suggest, were developed in partnership with ‘third-party experts’ (all of whom received Meta funding, I42) – namely, the non-profit groups the Future of Privacy Forum, the National Network to End Domestic Violence, the National Consumers League, the Information Technology and Innovation Foundation, Access Now and the LGBT Technology Partnership. It is notable, but not entirely surprising, that here (and indeed, in its wider treatment of data-surveillance) Meta centres protecting users against ‘bad actors’ (e.g. users who might record and upload images of other unknowing users to social media) rather than addressing the corporate imperative to surveil (and its associated harms).
Privacy, particularly data-related privacy harm, emerges as a key concern in XR governance initiatives globally. Governance initiatives focus largely on the data generated by XR technology (although also extend to forms of interpersonal surveillance), indicating gaps in and alignments with existing privacy laws and raising issues around user consent and data protection.

Safety, equity and inclusion

Safety, equity and inclusion was another common focus of XR governance initiatives (n = 86). For some government agencies and CSGs, user safety focused on the prevention of interpersonal harm precipitated by user conduct (e.g. avatar movements and gestures) and content (e.g. materials and imagery created in XR applications) – echoing wider debate among courts and regulators as to whether platforms are liable for user content and conduct (e.g. the UK Online Safety Bill, the Australian Online Safety Act). For some stakeholders, the notion that technologies create environments enabling harm was rendered particularly acute in XR through its interactive and embodied affordances.
For example, Connect Safely (CSG72), a non-profit scoped around technology safety and security, develops resources focused on user education. This initiative, notably, was funded by Meta and would subsequently feed into Meta’s ‘Parent Education Hub’ (I1) – an educational resource for parents of children using Oculus devices, published on Meta’s website, and one that Meta cites as an example of its own best practice in self-governing user behaviour in a response to an EC inquiry (see I9). Indeed, for Meta – and the large number of CSG initiatives funded by Meta’s US$50 million XR Programs and Research Fund (XRPRF) (I10) – we found a focus on user safety initiatives, chiming with accounts of Meta’s wider self-governance and its focus on preventing harm by enforcing ‘safety’ – a way to reinforce ‘a myth that platforms are neutral conduits’ (Gillett et al., 2022: 9), and that harm is the outcome of ‘bad actors’ rather than the technologies and business models of technology firms. Also falling under the umbrella of user safety, NGOs – such as the UN’s Office of Counterterrorism and Preventing Violent Extremism (CSG4) – have focused on the governance of conspiracist and extreme content (content which, as recent investigative reporting suggests, has already appeared in Meta’s Horizon Worlds software; see Baker-White, 2022). As one European Parliament document notes, this is especially problematic given the immersive affordances of XR – with the potential that this content may ‘feel more real’ (G34). For governments, the call to implement guardrails for user safety was covered in more expansive overviews of the regulatory implications of XR (e.g. G20, G34). Government agencies with user safety as their sole remit – such as the Australian eSafety Commissioner – have begun developing education-focused initiatives, drawing on consultation with industry and users, to develop a ‘proactive harm prevention approach’ (G18).
Another common focus of equity and inclusion approaches was accessibility – particularly regarding disability. In industry, firms have implemented their own voluntary standards – such as Meta’s ‘Virtual Reality Check’, a list of best practices for third-party Oculus software developers to implement accessibility features (I13). It bears mentioning that such an approach pushes responsibility from XR providers to app developers (with the former – in the case of Meta – much better resourced to offer device- and operating system-level accessibility features). For government and CSGs, there was a push to make XR technologies accessible for the diverse requirements of disabled users. The European Parliament flags ‘Accessibility and Inclusiveness’ as a major policy issue. As the Parliamentary report reads, ‘Although in principle, the metaverse is open to all, in practice many might have trouble accessing it’ (G34). Other groups, such as the XR Access Network (a disability-focused network of industry practitioners and academics), aim to ‘modernize, innovate, and expand XR technologies, products, content and assistive technologies by promoting inclusive design in a diverse community that connects stakeholders, catalyses shared and sustained action, and provides valuable, informative resources’ (CSG78).
XR governance initiatives thus emphasise safety, equity and inclusion. User safety in XR is a significant concern, and resonates with debates about platform accountability for user content and conduct. While legal jurisdictions around the world increasingly favour approaches that establish a ‘duty of care’ for technological platforms in moderating user content and conduct, there was little regulatory focus on harmful user conduct and content (much industry and CSG work instead focused on education and best practices to encourage pro-social behaviour). And while policymakers recognise disability as a major area of potential inequity, governance through standards and best practices has largely been advanced by industry and CSGs.

Competition

The governance of market competition was another area of focus (n = 50). Competition has been the purview of recent regulator engagement with the XR sector, of particular importance considering the outsized influence of big tech firms on XR (namely Meta, following its acquisition of Oculus in 2014 and its rapid consolidation of power in the XR industry over the last half-decade). For example, in July 2022, the US Federal Trade Commission (FTC) sought to block Meta’s US$400 million acquisition of VR fitness software developer Within (the latest in the company’s long history of acquisitions). As the FTC’s Bureau of Competition Deputy Director, John Newman, put it: ‘Meta chose to buy market position instead of earning it on merits. This is an illegal acquisition, and we will pursue all appropriate relief’ (G17). In 2023, a federal court declined to block the acquisition (with the FTC subsequently withdrawing its administrative adjudication).
In the European context, Germany’s national competition regulator – the Bundeskartellamt – has taken aim at Meta and Oculus. Distinct from the FTC, the Bundeskartellamt placed emphasis on Oculus providing Meta a data advantage (rather than a strict focus on competitive advantage), suggesting that the company breached German data coupling laws in requiring an Oculus account to be connected with a Facebook account (G33) – feeding into Meta’s already-significant glut of first-party data accumulated through its social media ‘Family of Apps’. This move followed almost immediately from antitrust charges filed against Meta by the FTC, and from a 2019 investigation by the Bundeskartellamt into Meta’s internal data-sharing practices (Bundeskartellamt, 2019). As the Bundeskartellamt note in a 2022 document, while Meta has suggested ‘there will be no flow of data between the virtual reality services and other services without users’ consent’, it still ‘remains to be clarified to which extent data processing across services can be permissible with . . . or without the users’ consent’ (G6).
The notion that the metaverse will be interoperable – that is, that one product or service can be made to work with another existing product or service – while technically implausible (Egliston and Carter, 2023), is nonetheless a view widely held by many of its proponents. For some policymakers, this imagined affordance presents considerable challenges in applying competition law. As it is put in a European Parliament inquiry into the metaverse’s policy implications:
The fact that the metaverse environment requires competitors to communicate, collaborate and ensure that platforms are interoperable could also potentially lead to a series of anti-trust challenges, for instance concerning the sharing of sensitive information, such as pricing, or agreements between competitors subject to competition law scrutiny. (G34)
Where governments were interested in ensuring fair competition, many CSGs saw competition as a means to create economic opportunity for market participants (often focusing on marginalised developer cohorts – such as ethnic or racial minorities within global production hubs, or small companies in regions outside the global north; see, for example, CSG39, CSG57). Much of this emphasis was attributable to Meta’s XRPRF (I10), which framed ‘economic opportunity’ as a major area of focus. For such groups, there is a sentiment that the benefits of competition and innovation should be widely distributed among market participants (and that a failure to do so can have negative implications for economic equity).
In sum, XR emerges as a high-profile site for broader regulatory efforts to prevent big tech incumbents from further entrenching dominant market positions (as well as growing their economic and data advantages through consolidating complementors or competitors). Yet elsewhere, as some CSGs show, competition in the XR market is not just an object of regulation but something that can deliver a wider social and cultural benefit (for instance, through emphasis on diverse market competition from individuals and organisations outside the global north).

Commercialisation

Governance initiatives have sought to harness the economic benefits of the metaverse for industry and regional software sectors through approaches to XR commercialisation (n = 69). For example, the 2021 US Innovation and Competition Act – a Biden Administration bill which would later become law in 2022 as the CHIPS and Science Act, allocating roughly US$280 billion in funding to boost US semiconductor and technology manufacturing – identifies ‘Immersive Technology’ as one of its 10 key technology focus areas (G30). European policy has more stridently advanced an agenda for commercialising XR research and development, in line with a wider EC agenda for commercialising technological ‘innovation’ (G9–G15, G35–G36). A major focus in the EU has been on addressing the industrial and economic impacts of XR through policy initiatives enhancing regional software production. For example, VR and AR are a component of the EC’s Media and Audio-visual Action Plan (MAAP) – a funding initiative to ‘help maintain European cultural and technological autonomy’ (G35). Under the MAAP is the European Virtual and Augmented Reality Industrial Coalition – an initiative aiming to ‘inform policy making, encourage investment, facilitate dialogue with stakeholders and identify key challenges and opportunities for the European VR/AR sector’ (G36). Elsewhere, the Horizon Europe program – a 7-year research and innovation funding initiative – has placed emphasis on VR, soliciting proposals for a ‘VR Media Lab’ that will ‘develop and prototype advanced solutions for the creation, distribution and consumption of new immersive VR/AR media products, and bring together skills from a variety of disciplines, including the creative sector’ (G15). It is notable that some government efforts to commercialise national or regional XR development tended to buy into technological boosterism and highly speculative future visions. For instance, the EC’s MAAP (G35) justifies its focus on VR and AR with the claim that ‘by 2030, Virtual Reality (VR) and Augmented Reality (AR) have the potential to add about 1.3 trillion euros to the global economy’.
Industry groups, such as standards groups, have likewise sought to realise the economic and industrial benefits of the metaverse. For example, the Metaverse Standards Forum is a group of largely industry stakeholders that ‘aims to encourage and enable the timely development of open interoperability standards essential to an open and inclusive metaverse’ (CSG36; see also CSG80). Notably, the forum will not itself create these standards, but rather will facilitate dialogue among industry stakeholders to do so. Standards are viewed as a critical technical element of industrial growth, preventing XR from being siloed in proprietary formats – part of a need for ‘wider industry standardization, coordination and cooperation’ (CSG36). The aim of the standards forum as a governing body is not just to encourage ‘good’ practices, but also to encourage the construction of ‘good’ (interoperable) infrastructure.
Taken together, the commercialisation data present a point of contrast to the previous governance issues we have identified. Rather than focusing on how governance can prevent harm, these initiatives position governance structures as necessary to support and strengthen national and regional technology industries in harnessing the commercial benefits of an emerging class of technology. While there were a number of concrete commercialisation initiatives, commercialisation efforts were primarily premised on popular industry claims about the economically transformative potential of XR and the metaverse.

What are the enforcement mechanisms?

Most regulatory responses to XR are speculative – a gesture towards a future in which, at some point, laws (both existing and new) will need to meet XR. Where existing laws – such as competition laws – have been applied to XR, calls for new forms of regulation have also emerged. It was in the European context that regulators sought to develop domain-specific laws most proactively. For instance, in 2023, the EC held a ‘Citizens panel’ on ‘Virtual worlds’ – a public forum of 150 randomly selected participants, representing the EU’s diversity of geography, gender, age and socioeconomic background – with the intention of creating recommendations to support the EC’s legislative work, in line with the EC’s commitment to commercialising technoscientific research and innovation ‘responsibly’ (G21). While such panels are intended to draw lay perceptions of technological benefit and risk into the policymaking process, the efficacy of responsible innovation has been critiqued – particularly when it seeks to anticipate (or ‘forecast’) the harms of an emerging (or yet to be widely adopted) form of technology (Owen et al., 2013).5
Calls for new forms of regulation have also been raised by stakeholders in the tech industry. Through its XRPRF, Meta, for instance, seeks to mediate dialogue between industry, CSGs and regulators – expressing an ambition to play an active role in ‘ensur[ing] that industry standards or regulations are inclusive of the concerns of the civil rights and human rights communities, so these technologies are built in a way that is empowering for everyone’ (I10). Of course, it would be remiss to take such claims at face value. Consistent with previous research on tech industry-funded regulation (e.g. Goldenfein and Mann, 2023), the XRPRF is closely aligned with Meta’s business interests, allowing the company to broadly define the scope of governance (and to cherry-pick desirable outcomes, as we saw with the aforementioned example of the Connect Safely partnership being cited in response to a government inquiry).
Stakeholders also recognise the challenges of regulating an emerging space like XR. For European policymakers, one challenge is of jurisdiction – of how to regulate a new infrastructure of global computing (G34). This reflects broader concerns with regulation, where many countries are now considering the efficacy of domestic regulations in dealing with multi-national firms (Flew, 2021). Elsewhere, in the Australian context, the New South Wales State Government note the risk of applying laws too speculatively, suggesting that policymakers and industry first focus on ‘Developing responsible metaverse principles’ (G39). As they go on to note,
The NSW Government took this same approach with artificial intelligence: beginning with high-level statements about what ideal deployments of AI look like, before moving into more concrete governance measures to manage practical considerations. (G39)
This initiative recognises the precarity of trying to influence technological change in the face of indeterminacy, moving beyond a ‘predictivist’ paradigm towards one that treats the future as ‘ontologically open and deeply indeterminate’ (Urueña, 2022: 284).
The regulation of digital technology is, of course, a highly contested field. This is clear in discussions about the regulation of technologies such as XR. For some industry stakeholders, regulation was seen as necessary – yet not the only aspect of governance. For example, in a 2021 call for consultation for the review of the Australian Privacy Act, Meta note that XR regulation will not be the only solution, citing the need for a ‘principles led’ approach to governance (I27; see also I9), reflected in the company’s own approach to the self- and co-governance of internal practices and user activity. Other organisations – such as the FPF – suggest a need for caution in jumping head-first into XR-specific regulation, at the risk of creating regulatory silos that have little applicability should the technology evolve in unforeseen ways (instead suggesting a ‘tech agnostic’ approach; see CSG73).
Beyond legal regulation, another common enforcement mechanism was self-governance. An example that attracted much coverage in the tech press was Meta’s 2021 ‘Responsible Innovation’ guidelines for the internal development of XR smartglasses (I28). Responsible innovation is a general approach to technological research and development that seeks to embed social benefit and moral responsibility, centred on areas like ‘anticipation, reflexivity, inclusion and responsiveness’ (de Hoop et al., 2016: 118). For Meta, this responsible innovation framework (which the company suggests was developed in consultation with academia, CSGs and policymakers) was centred on four principles – ‘Consider everyone’, ‘Never surprise people’, ‘Provide controls that matter’ and ‘Put people first’ – principles critiqued as ‘vague and unimplementable’ (Applin and Flick, 2021: 13) and as an opportunistic mechanism to offset negative perceptions of societal harm.
Finally, we note co-governance initiatives. For the most part, co-governance involved dyads of industry and CSGs (and often, specifically, dyads of Meta and CSGs, due to Meta’s XRPRF scheme). To a lesser extent, co-governance under this scheme involved partnership with government (e.g. an industry commercialisation programme for XR software development with India’s IT Ministry, G27). As with industry-funded multistakeholder governance in other sectors (e.g. AI; see Veale et al., 2023), the degree to which such initiatives truly provide these stakeholders a voice is questionable. A much smaller portion of co-governance was triadic – that is, involving stakeholders across industry, government and CSGs. Triadic governance has been advanced by CSGs like the XRSI, in initiatives such as the XR Safety Week – a conference bringing together industry stakeholders (e.g. Meta and Magic Leap) and government agencies (e.g. the Australian eSafety Commissioner). Triads were more commonly framed in aspirational terms, such as in repeated claims that future governance would need to involve collaboration between government, CSGs and industry (e.g. I10, G18).
In summary, governance initiatives consisted of existing laws, but also included proposals for new legal interventions. In addition, there was a focus on non-binding initiatives such as technological standards, educational efforts and best-practice principles. Some of these initiatives would be developed or enforced by individual stakeholders, while others required the collaboration of various stakeholder types.

Discussion

Findings show that stakeholders – from governments, to CSGs, to industry – have different ways of understanding the techno-politics of XR and how it might be governed. For some stakeholders, XR is unexceptional in the sense that it faces similar issues to other technologies: data privacy, anti-competitive business practices and user safety. While XR may intensify some of these issues, the solution does not look radically different to broader efforts to govern technology. Currently existing governance frameworks – such as general-purpose regulatory frameworks for data protection, competition or AI – apply to XR, even if they do not mention the technology by name. Put differently, technology-agnostic governance could offer broad guardrails in a space that is rapidly changing (even within the timeframe of our sample). While we found that some governance initiatives focused on the relevance of existing laws and regulations (e.g. privacy, competition), it bears noting that issues with XR governance span areas beyond those identified in our sample. For example, an emerging issue in the current literature is the use of XR technology to augment decision making (for instance, in workplace contexts – an area that the current literature suggests intersects with laws and issues concerning anti-discrimination in workplace hiring and evaluation; see Egliston and Carter, 2021).
By contrast, other stakeholders suggest there is a need for novel – and media-specific – governance initiatives scoped around XR. Given the emerging nature of XR technology, most of these governance initiatives are aspirational. Across the range of anticipatory approaches advanced by industry and CSGs, there was overwhelmingly a focus on the future trajectory of XR’s adoption in society. There is a diachronic logic here – the metaverse will come to fruition, and thus governance initiatives must be put in place proactively, pre-empting its societal impacts and its widespread entrenchment (at which point it may be difficult to challenge or redress).
It is worth noting some of the limitations of such speculative approaches. The first concerns the epistemically precarious nature of anticipatory technology governance. According to Nordmann (2007, 2014), such exercises of ‘speculative ethics’ entail articulating ethical concerns about emerging technologies manifesting in largely unknowable futures. Simply put, it is difficult to know – and thus govern – the future trajectory of a technology in advance of its socialisation. This futural orientation was particularly problematic in more general calls to govern the metaverse (rather than approaches focused on discrete technical and social issues in XR) – the metaverse having been critiqued as an ‘empty’ signifier (McStay, 2023), something lacking definitional consistency.
Another problem with speculative governance – and with approaches that tend to reproduce linear and deterministic visions of XR – is where well-meaning stakeholders buy into what boosters of XR say the technology will do, where ‘an imagined future overwhelms the present’ (Nordmann, 2007: 32). This is particularly notable, for instance, where government regulators extol the commercial potentials of wide XR adoption or rehearse industrial visions of technological interoperability. But even more critical perspectives – such as those on data governance – tend to focus on imagined futures in which the technology is widely adopted. Comparatively, less governance work was focused on how we can prevent future harms from materialising today (such as the FTC’s effort to prevent Meta’s consolidation of the nascent XR industry – and the inevitable economic and data advantages that would accrue). Rather than trying to pre-emptively address hypothetical outcomes of rapidly changing systems, might we not also try to address root causes and conditions that exist today?
Anticipatory governance also accrues benefit to industrial stakeholders with a vested interest in the growth and adoption of XR. Creating ‘value’ from technology is as much social and cultural as it is technical. Indeed, governance is one such mechanism for shaping social perceptions of technology (cf. Gillett et al., 2022). For some industry stakeholders, the promise of governance provides legitimation and credibility to speculative, far-off visions of XR futures. For investors, it is increasingly difficult to reconcile narratives of XR as a new infrastructure for life (Egliston and Carter, 2022a, 2022c) with the on-the-ground realities of the technology – legless avatars, applications amounting to low-fidelity videoconferencing software, and headsets that cause severe nausea. To speak of governing XR is a discursive move, one that makes a technology of the distant future feel more present and pressing today.
But how might we move from critique to action? We end with some reflections on what effective XR governance might involve. First, we believe that there is a need to balance technological specificity and technological agnosticism. Technological specificity can be helpful: understanding the affordances of the technology as it actually exists helps combat the tendency to buy into imagination and hype (as outlined above). But specificity need not silo conversations about XR off from those about technology more widely. Understanding XR’s medium-specific challenges can also focus attention in broader dialogues about technology governance. For example, the significant attention to XR data and privacy (as an especially sensitive form of biometric and spatial data) lends new urgency to introducing stronger safeguards on how technology companies harvest and operationalise user data.
A second issue that future governance must confront is that of independence. As we have identified, today’s XR governance landscape is characterised by a high degree of industrial influence (much of it from a single stakeholder, Meta). Initiatives like the XRPRF – by design – serve industrial logics and interests, a mechanism through which a company mired in controversy might endear itself to the public and regulators (cf. Bietti, 2020). While we are not suggesting that receiving industry funding compromises an organisation’s integrity, industrially funded initiatives to build a ‘better’ metaverse shape the contours of conversations and critiques about where the harms in this emerging suite of technology are concentrated. In other words, such schemes exert a form of ‘structuring power’ (Phan et al., 2022) – a form of boundary work demarcating what counts (and what does not) as harm, and thus where governance should be focused. To be sure, these schemes can identify very real problems and generate productive governance and policy work (e.g. work focused on growing regional development hubs in the global south). Yet, at the same time, if industry can pre-define what constitutes ‘harm’, a means for more radical or emancipatory politics is foreclosed (for instance, governance initiatives addressing structural issues with industrial business models or practices are unlikely ever to be funded by Meta). In the current political economy of technology, achieving independence is certainly difficult – especially considering that many CSGs are financially reliant on big tech benefactors (cf. Goldenfein and Mann, 2023). Yet, it is not impossible. While beyond the scope of our study, we note recent cases of independent governance happening from ‘below’. For example, we might think of technical interventions by ‘ethical hackers’ (Egliston and Carter, 2022b: 11–12) to wrest the technological infrastructures of XR from industry, or the recalcitrant voices within industry who push back against troubling forms of XR development (Lecher, 2019).

Conclusion

Initiatives to govern XR have been advanced by governments, industry and CSGs. While framings of governance differed markedly across the sample, common to them was a fatalistic sense that an XR-based metaverse is inevitable, and thus that it needs to be governed in one way or another. For some, such as government regulators, this could be done through existing legal mechanisms, since XR presented many of the same challenges as the wider tech sector – privacy, safety and equity, competition, and commercialisation. While many government agencies were aware of the potential for acute harm – largely in the context of data security and privacy – the metaverse was also projected to deliver considerable economic benefits for regional and national software sectors. For others, such as industry and CSGs, XR required new guardrails to be put in place (something commonly seen as achievable through collaboration between CSGs and industrial stakeholders). As we note, industry stakeholders like Meta – as first-movers (and now dominant players) in the XR market – stand to benefit considerably from having an active hand in steering governance (or, alternatively, from creating an illusion of being interested in governance).
The debate about the future of XR governance is a complex one. While we have offered what we believe to be a comprehensive account, the work here is generative rather than conclusive. Future work could, for instance, further map out and explore relationships between stakeholders (such as dyads and triads), and perhaps propose what an idealised arrangement of governance stakeholders might look like. We are also mindful that the XR industry (and related policy and governance) is rapidly changing. Future work might account for Apple’s inevitable contribution to governance dialogues following the mid-2023 announcement of its XR device. Furthermore, a limitation of this study is the scope of its sample – namely, the focus on English-language data. Future scholarship on XR governance would benefit from incorporating a broader range of more granular, non-English-language materials from across the world, recognising that governance varies with local and regional cultures, politics and economies.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

Footnotes

1. For our full sample along with in-text citations for materials referenced, see https://osf.io/8zerd?view_only=3e9ed526d8cb42a184573655de07e686.
2. Omitted from our sample were individual terms of service of specific XR applications or devices governing the behaviour of users and complementors (such as developers), an area examined in previous work (e.g. Egliston and Carter, 2023a; Qingxiao et al., 2023).
3. Sources from the Extended Reality Safety Initiative (XRSI) – notably – comprised over 20% of the CSG sample. This is perhaps attributable to the fact that the XRSI is a well-established and well-resourced voice within the XR space, with ties to organisations like Meta and Microsoft, as well as to government networks (e.g. the Australian eSafety Commissioner).
4. Virtual reality (VR) was similarly invoked as an example in a report by the Australian Human Rights Commission (G4). In a similar vein, European regulators note that competition law does not go far enough in recognising the data advantage accrued to incumbents – intensified in the moment of XR, with its potentially rich biometric and spatial data. See G34.
5. In a previous study (Egliston and Carter, 2022b), we examined lay perceptions of Meta’s Oculus technology (across 2018 and 2019). While nonexperts were able to accurately identify emerging issues (e.g. potential harms associated with data capture), we found that responses were highly speculative and technologically deterministic (e.g. VR’s capacity for data-driven ‘mind control’, with comparisons made to dystopian science fiction films like ‘The Matrix’). While such responses highlight a broad concern about surveillance, data and privacy, we remain doubtful as to how such speculative visions, rooted in cultural imaginaries of technology (or technological anxiety), might productively inform policy.

References

Abbott KW, Snidal D (2009) The governance triangle: regulatory standards institutions and the shadow of the state. In: Mattli W, Woods N (eds) The Politics of Global Regulation. Princeton, NJ: Princeton University Press, pp. 44–88.
Applin S, Flick C (2021) Facebook's Project Aria indicates problems for responsible innovation when broadly deploying AR and other pervasive technology in the commons. Journal of Responsible Technology 5: 100010.
Baker-White E (2022) Meta wouldn't tell us how it enforces its rules in VR, so we ran a test to find out. BuzzFeed News. Available at: https://www.buzzfeednews.com/article/emilybakerwhite/meta-facebook-horizon-vr-content-rules-test
Ball M (2022) The Metaverse: And How It Will Revolutionize Everything. New York: Liveright Publishing.
Bietti E (2020) From ethics washing to ethics bashing: a view on tech ethics from within moral philosophy. In: FAT* ’20: Proceedings of the 2020 conference on fairness, accountability, and transparency, Barcelona, 27–30 January, pp. 210–219. New York: ACM.
Blackwell L, Ellison N, Elliott-Deflo N, et al. (2019) Harassment in social virtual reality: challenges for platform governance. Proceedings of the ACM on Human-Computer Interaction 3(CSCW): 1–25.
Bundeskartellamt (2019) Bundeskartellamt prohibits Facebook from combining user data from different sources. Available at: https://www.bundeskartellamt.de/SharedDocs/Meldung/EN/Pressemitteilungen/2019/07_02_2019_Facebook.html
Carter M, Egliston B (2024) Fantasies of Virtual Reality: Untangling Fiction, Fact, and Threat. Cambridge, MA: MIT Press.
Creswell J (2007) Qualitative Inquiry and Research Design: Choosing Among Five Approaches. Thousand Oaks, CA: Sage.
de Hoop E, Pols A, Romijn H (2016) Limits to responsible innovation. Journal of Responsible Innovation 3(2): 110–134.
Egliston B, Carter M (2021) Critical questions for Facebook’s virtual reality: data, power, and the metaverse. Internet Policy Review 10(4): 1–23.
Egliston B, Carter M (2022a) Oculus imaginaries: the promises and perils of Facebook’s virtual reality. New Media & Society 24(1): 70–89.
Egliston B, Carter M (2022b) The material politics of mobile virtual reality: Oculus, data, and the technics of sensemaking. Convergence 28(2): 595–610.
Egliston B, Carter M (2022c) 'The metaverse and how we'll build it': the political economy of Meta's Reality Labs. New Media & Society.
Egliston B, Carter M (2023) Cryptogames: the promises of blockchain for the future of the videogame industry. New Media & Society. Epub ahead of print.
Egliston B, Carter M (2023a) Examining visions of surveillance in Oculus’ data and privacy policies, 2014–2020. Media International Australia 188: 52–66.
Flew T (2021) Regulating Platforms. Cambridge: Polity Press.
Gillett R, Stardust Z, Burgess J (2022) Safety for whom? Investigating how platforms frame and perform safety and harm interventions. Social Media + Society 8(4): 1–12.
Goldenfein J, Mann M (2023) Tech money in civil society: whose interests do digital rights organisations represent? Cultural Studies 37(1): 88–122.
Gorwa R (2019a) The platform governance triangle: conceptualising the informal regulation of online content. Internet Policy Review 8: 1–22.
Gorwa R (2019b) What is platform governance? Information, Communication & Society 22(6): 854–871.
Gray JE (2021) The geopolitics of ‘platforms’: the TikTok challenge. Internet Policy Review 10(2).
Green N (1999) Disrupting the field: virtual reality technologies and ‘multisited’ ethnographic methods. American Behavioral Scientist 43(3): 409–421.
Guay R, Birch K (2022) A comparative analysis of data governance: socio-technical imaginaries of digital personal data in the USA and EU (2008–2016). Big Data & Society 9(2): 1–13.
Harley D (2023) The promise of beginnings: unpacking 'diversity' at Oculus VR. Convergence 29(2): 417–431.
Heller B (2020) Watching androids dream of electric sheep: immersive technology, biometric psychography, and the law. Vanderbilt Journal of Entertainment and Technology Law 23(1): 1.
Heller B, Bar-Zeev A (2021) The problems with immersive advertising: in AR/VR, nobody knows you are an ad. Journal of Online Trust and Safety 1(1): 1–14.
Jasanoff S, Kim S (2015) Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. Chicago, IL: University of Chicago Press.
Lecher C (2019) Microsoft workers' letter demands company drop army HoloLens contract. The Verge. Available at: https://www.theverge.com/2019/2/22/18236116/microsoft-hololens-army-contract-workers-letter
Cortese M, Outlaw J (2021) The IEEE global initiative on ethics of extended reality (XR) report – social and multi-user spaces in VR: trolling, harassment, and online safety. Available at: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9650825
Mansell R (2012) Imagining the Internet: Communication, Innovation, and Governance. Oxford: Oxford University Press.
McStay A (2023) The metaverse: surveillant physics, virtual realist governance, and the missing commons. Philosophy & Technology 36: 13.
Meese J (2014) Google glass and Australian privacy law: regulating the future of locative media. In: Wilken R, Goggin G (eds) Locative Media. New York: Routledge, pp. 136–147.
Miller MR, Herrera F, Jun H, et al. (2020) Personal identifiability of user tracking data during observation of 360-degree VR video. Scientific Reports 10(1): 17404.
Nordmann A (2007) If and then: a critique of speculative nanoethics. NanoEthics 1(1): 31–46.
Nordmann A (2014) Responsible innovation, the art and craft of anticipation. Journal of Responsible Innovation 1(1): 87–98.
Owen R, Stilgoe J, Macnaghten P, et al. (2013) A framework for responsible innovation. In: Owen R, Bessant J, Heintz M (eds) Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society. Hoboken, NJ: John Wiley & Sons, pp. 27–50.
Pesce M (2020) Augmented Reality. Cambridge: Polity Press.
Phan T, Goldenfein J, Mann M, et al. (2022) Economies of virtue: the circulation of ‘ethics’ in big tech. Science as Culture 31(1): 121–135.
Qingxiao Z, Xu S, Wang L, et al. (2023) Understanding safety risks and safety design in social VR environments. Proceedings of the ACM on Human-Computer Interaction 7: 154.
Schreier M (2020) Content Analysis, Qualitative. Thousand Oaks, CA: Sage.
Selinger E, Altman E, Foster S (2023) Eye-tracking in virtual reality: a visceral notice approach for protecting privacy. Privacy Studies Journal 2: 1–34.
Siu L, Chun C (2020) Yellow peril and techno-orientalism in the time of Covid-19: racialized contagion, scientific espionage, and techno-economic warfare. Journal of Asian American Studies 23(3): 421–440.
Trimananda R, Le H, Cui H, et al. (2022) OVRseen: auditing network traffic and privacy policies in Oculus VR. Available at: https://arxiv.org/abs/2106.05407
Urueña S (2022) Responsibility through anticipation? The ‘future talk’ and the quest for plausibility in the governance of emerging technologies. NanoEthics 15: 271–302.
Veale M, Matus K, Gorwa R (2023) AI and global governance: modalities, rationalities, tensions. Annual Review of Law and Social Science 19: 1–30.
Wassom B (2014) Augmented Reality Law, Privacy, and Ethics: Law, Society, and Emerging AR Technologies. Waltham, MA: Syngress.
Winner L (1980) Do artifacts have politics? Daedalus 109(1): 121–136.
Wong D, Floridi L (2023) Meta’s oversight board: a review and critical assessment. Minds and Machines 33: 261–284.

Biographies

Ben Egliston is an Australian Research Council DECRA Fellow and Lecturer in Digital Cultures at the University of Sydney. He researches the political economy of videogames and immersive media.
Marcus Carter is an Australian Research Council Future Fellow and Associate Professor in Digital Cultures at The University of Sydney. He researches games, play and emerging technologies such as virtual reality.
Kate Euphemia Clark is a PhD candidate at Monash University and a research fellow at the University of Sydney. Her research explores the role of the body in VR; in particular, she examines how to make immersive technologies more accessible to people with disabilities.

Keywords

Extended reality, governance, mixed reality, policy, regulation, virtual reality

Rights and permissions

© The Author(s) 2024.
Creative Commons License (CC BY 4.0)
This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).

Authors

Ben Egliston, The University of Sydney, Australia
Marcus Carter, The University of Sydney, Australia
Kate Euphemia Clark, Monash University, Australia; The University of Sydney, Australia

Corresponding author

Ben Egliston, The University of Sydney, Camperdown, Sydney, NSW 2050, Australia. Email: [email protected]
