Volume 62, Issue 4 p. 1280-1300
Research Article
Open Access

Citizen conceptions of democracy and support for artificial intelligence in government and politics

PASCAL D. KÖNIG

Corresponding Author

Minda de Gunzburg Center for European Studies, Harvard University, USA

Department of Social Sciences, TU Kaiserslautern, Germany

Address for correspondence: Pascal D. König, Minda de Gunzburg Center for European Studies, Harvard University, Cambridge, Massachusetts, United States of America, and Department of Social Sciences, TU Kaiserslautern, Kaiserslautern, Germany. Email: [email protected]

First published: 19 November 2022

Abstract

How much do citizens support artificial intelligence (AI) in government and politics at different levels of decision-making authority, and to what extent is this AI support associated with citizens’ conceptions of democracy? Using original survey data from Germany, the analysis shows that people are, overall, sceptical about using AI in the political realm. The findings suggest that how much citizens endorse democracy as liberal democracy, as opposed to several of its disfigurations, matters for AI support, but only in high-level politics. While a stronger commitment to liberal democracy is linked to lower support for AI, the findings contradict the idea that a technocratic notion of democracy lies behind greater acceptance of political AI uses. Acceptance is higher only among those holding reductionist conceptions of democracy which embody the idea that whatever works to accommodate people's views and preferences is fine. Populists, in turn, appear to be against AI in political decision making.

Introduction

Artificial intelligence (AI) is gradually finding its way into government and political decision making. It is therefore increasingly important to study the link between AI and democracy – a relationship that has hardly been examined empirically so far and which this paper addresses by looking at citizens’ support for AI in government and politics. Existing evidence on citizens’ perceptions of AI in this area is sparse and disparate. Findings range from considerable scepticism about using AI in political decision making (Starke & Lünich, 2020), through signs of acceptance for AI dealing with simpler tasks in public administration (Chatterjee et al., 2021; Miller & Keiser, 2021), to even remarkably favourable evaluations of AI informing decisions in high-level politics (IE University, 2019, 2021). However, several core questions remain unanswered. First, we lack systematic knowledge about the level of public acceptance of AI used in the political realm at different levels of decision-making authority given to AI, that is, from the low-level politics of performing routine tasks in public administration to the high-level politics of taking decisions for politicians. Second, we know little about which citizens are more open to such AI uses than others.

Studying these questions is important for at least two reasons. First, governments around the globe are increasingly implementing AI systems (Chiusi et al., 2020; Coglianese & Ben Dor, 2020). These process data to learn decision models which produce optimal outputs or decisions according to predefined objectives.1 As such, they can inform or even replace human decision making in areas such as medical diagnosis and even at the management level of businesses (Mayer-Schoenberger & Ramge, 2019), and these same capabilities can also be used to inform decision making in government and politics. In public administration, AI serves, for example, to forecast health care needs, predict domestic violence or identify cases of fraudulent social benefit claims. On higher levels of policy making, AI tools may assist with decisions and help to achieve greater efficiency and effectiveness (Höchtl et al., 2016), as is the case with algorithmic systems used by finance ministries for policy simulations to help with policy planning (Kolkman, 2020). In a similar vein, the US Food and Drug Administration piloted an application that processes reports of adverse events after a drug has been launched on the market to detect undesirable effects and to adaptively inform its rulemaking (Coglianese & Ben Dor, 2020, p. 24). Going even further, some AI uses could alter the political game at the heart of democracy. A comment published by the World Economic Forum on the question of whether societies might replace politicians with AI concedes: ‘One day, we may’ (van der Wal & Yan, 2017). Already today, there is at least one documented case of an AI running as a candidate for mayor in a Tokyo district (Cole, 2018).

A second reason why studying public perceptions of AI matters is the tension between an increasing adoption of AI and liberal democracy. Advancing AI to ‘improve’ political decision making risks misconstruing the nature of democratic politics, which is not merely about optimizing outputs, but also about process and about regulating and containing unavoidable pluralism and conflicting views in society (Hildebrandt, 2016). Also, if unaccountable uses of AI find acceptance among citizens based on outputs, such as more convenience and efficiency, this could gradually engender a kind of democratic paternalism (see e.g., Loi & Spielkamp, 2021). Studying citizens’ attitudes toward AI is therefore important for understanding how much public approval there is for a slow and silent shift that has already begun and that could change the face of the democratic state.

This paper adds to the literature by analyzing whether citizens’ conceptions of democracy are associated with citizens’ views on AI in government and politics. It studies these potential correlates of support for AI in (1) public administration, (2) assisting political decisions and (3) replacing political decision-making with a survey specifically designed for this purpose and based on original data from a representative German online panel. In a nutshell, the analysis probes the role of demand for democracy as liberal democracy versus four of its disfigurations: technocracy, populism, post-democracy and majoritarian relativism. Based on the idea that the use of AI, in part, sits uneasily with core principles of liberal democracy, it is particularly instructive to study whether the kind of democracy citizens want affects their acceptance of AI in government and politics.

Since governance by AI has been described as technocratic in nature (Janssen & Kuk, 2016; Sætra, 2020), technocratically minded citizens in particular may embrace the use of AI in political decision making. Indeed, there might be considerable support for AI, even in high-level politics, given that citizens hold, on average, rather positive views of expert decisionmakers – more so for some tasks and decisions than for others: evidence for the European Union (EU) member states of 2008 shows that while there are notable country differences, with fairly and very positive views of technocratic decision making ranging from 20 to 90 per cent, 19 of 27 countries lie above the 50-per-cent threshold (Bertsou & Pastorella, 2017).2 More recent evidence also suggests that citizens perceive expertise in some policy areas as more important than in others (Lavezzolo et al., 2021; see also Bertsou, 2021) and that citizens favour experts in policy design and implementation but not in decision making (Bertsou, 2021).

Besides technocracy, however, other conceptions of democracy might be relevant too and translate into higher, or instead lower, support for AI in political decision making. By leveraging these political attitudes as predictors of support for AI in political decision making, the analysis is the first to establish a link between the literature on democratic preferences (e.g., Bengtsson & Mattila, 2009; Font et al., 2015; Hibbing & Theiss-Morse, 2002; Kriesi et al., 2016; Webb, 2013) and public perceptions of AI. In fact, support for such AI uses could itself be seen as expressing a certain understanding of politics, akin to democratic attitudes.

The paper is structured as follows. Section two briefly describes existing research on AI perceptions. The third section theorizes the relationship between AI and democracy and presents the theoretical expectations. Section four describes the data and measures used in the analysis. The fifth section presents the empirical findings before the paper concludes with a summary and outlook.

Public perceptions of AI

A quickly growing literature spanning different disciplines is concerned with perceptions of AI systems, mostly commercial applications such as in recruiting or online content filtering. This research is interested in the role that specific features of AI systems, like performance-related aspects and transparency, play in trust in AI (for an overview, see Glikson & Woolley, 2020). In a given domain of application, these features can matter for people's evaluations of an AI system. Yet research has also looked at how much people generally support the use of AI systems depending on the domain and the tasks they perform. Several studies that have examined citizens’ AI perceptions across different areas – covering all EU members (European Commission, 2017, 2020), the eight most populated EU countries (Grzymek & Puntschuh, 2019), the United States (Smith, 2018) and the Netherlands (Araujo et al., 2020) – illustrate that there is no trust in AI per se: citizens are generally concerned about potential risks, such as unfair discrimination, and also see possible benefits of AI, but the overall level of support for the adoption of AI depends on where it is used. It is a recurring finding in the literature that trust in AI is higher in areas involving technical tasks – and may even lie above trust in human experts (e.g., Araujo et al., 2020; Juravle et al., 2020; Lee, 2018; Logg et al., 2019; Schepman & Rodway, 2020).

Existing evidence on AI perceptions mainly covers private sector applications, though. Only a few studies have examined citizens’ views on AI in the political sphere, and extant findings remain disparate. One study fielded in Germany has examined how the acceptance of AI in EU decision making depends on the division of labour with human decisionmakers (Starke & Lünich, 2020) and concludes that AI may be seen as unacceptable even if it delivers better outputs. Regarding less complex tasks in public administration, however, citizens may show greater acceptance of AI, as indicated by evidence from the United States (Miller & Keiser, 2021; but see Ingrams et al., 2021), India (Chatterjee et al., 2021) and Japan (Aoki, 2020). According to Eurobarometer data (European Commission, 2017, p. 59), citizens seem to be rather positive about AI in general: 61 per cent of EU citizens state fairly or very positive views, with country scores ranging from just below 50 per cent (Greece) to 83 per cent (Denmark). More recent data suggest that citizens also see merit in using AI in public policy areas, such as health care and traffic (European Commission, 2020, p. 13). Surveys by the IE University (2019, 2021) fielded in various EU countries as well as in the United Kingdom, the United States and China even suggest that about half of Europeans approve of replacing parliamentarians with AI: scores for 2021 start at about 45 per cent (Germany and the Netherlands) and reach up to 65 per cent (Spain) approval. While this is a remarkable finding, it should be taken with a grain of salt because it is based on a survey question asking whether citizens were in favour of reducing the number of national parliamentarians in their country and giving those seats to an AI. Agreement may thus also reflect support for a smaller parliament, possibly out of discontent with politics.

Altogether, existing research indicates that there may be substantial support for AI in political decision making, but major gaps remain in what is known about citizens’ support for AI in government and politics. There is no systematic evidence on support for AI used at different levels of decision-making authority, from administrative decisions to high-level politics, nor do we know whether political attitudes are associated with AI support. As the next section will argue, two kinds of political attitudes are particularly relevant for making inferences about whether people will accept or demand AI in government and politics. Since AI use means interfering with political decision making in democratic political systems, likely candidates for shaping AI support are citizens’ expectations of democracy and their satisfaction with the state of politics – which can be conceptualized broadly under Easton's (1975) notion of political support (Bellucci & Memoli, 2012, p. 11). Political attitudes may thus determine whether and how much resistance there will be to the adoption of AI in government and politics.

AI and democracy

Theoretical assumptions

AI has the potential to help with solving cognitive tasks in many different domains. This capability can equally be brought to bear in the political realm, as AI may automate routine tasks, help to better allocate resources, manage risks and accommodate citizens’ preferences (Engin & Treleaven, 2019; Margetts & Dorobantu, 2019; Wirtz et al., 2019). Moreover, there exist initiatives for harnessing data and AI as tools in the hands of citizens to empower them by achieving greater transparency and control over political decision making (see e.g. Savaget et al., 2018). There are, however, also fundamental tensions between the operating mode of AI systems and democratic politics. Although the interplay between human decisionmakers and AI can vary, adopting AI to inform decisions means basing solutions on information, evidence and knowledge. Thus, AI helps with cognitive tasks as it serves to produce optimal outputs or decisions through processing information based on predefined objectives.

Bringing such technical problem-solving to political decision making to guide or even replace it amounts to prioritizing effectiveness and efficiency, and thus means placing output-based legitimacy over input-based legitimacy, that is, the principle that political decisions reflect the preferences of citizens (Scharpf, 1999). Greater reliance on AI could mean that citizens’ preferences as inputs to the democratic process become less relevant and that this is justified with results. Even more important consequences of AI for democracy concern the dimension of throughput legitimacy (Schmidt, 2013), because a greater reliance on AI, especially if conceded greater authority on higher levels of decision making, can change the form of the democratic political process and the way decisions are brought about.

Following Hildebrandt (2016), there is an inherent incompatibility between the decision optimization achieved with AI and the democratic process of arriving at decisions and establishing generally binding rules. AI serves to optimize predefined goals through instrumental information processing. It therefore does not accommodate the nature of democratic politics and specifically the task of figuring out what those goals should be in the first place – which entails a hermeneutic process of reinterpreting values and meanings that shape a society and polity (Hildebrandt, 2016). In this view, liberal democracy is a cybernetic system too. However, it differs from mere information processing as it demands the possibility of competing interpretations, contestability and an ongoing feedback loop that enables an iterative reflection on the rules and the values guiding the polity (Hildebrandt, 2016, pp. 23−25). Distinguishing the ‘grammar’ (Hildebrandt, 2016, p. 2) of liberal democracy from that of AI in this way highlights precisely those aspects that also lie at the core of a procedural understanding of liberal democracy (Urbinati, 2014). This view foregrounds a reflexive process that allows for contesting decisions and renegotiating the rules of the polity under conditions of pluralism. A proceduralist view on democracy that stresses pluralism and contestation as pillars of democratic politics is therefore particularly useful for highlighting tensions with AI.

On a basic level, bringing AI to social and political problems conflicts with these liberal-democratic principles. Reducing social and political problems to an optimization task, first, tends to undermine the idea of an irreducible pluralism of political views and perspectives marked by at least some degree of public deliberation. The idea that one can base decisions on the processing of data by AI systems that find optimal solutions makes any pluralistic exchange and integration of different views unnecessary. AI in political decision making, second, also conflicts with a process of ongoing contestation that occurs vertically between society and decisionmakers: If one assumes that optimal solutions can be derived from actionable insights obtained from processing information, this removes the occasion to contest decision making.

These tensions make two kinds of political attitudes particularly relevant. As the use of AI means altering to some degree the existing political system, acceptance of AI may well depend on citizens’ political support. This attitude reflects whether citizens orient themselves favourably or unfavourably toward different parts of the present political system (Easton, 1975, p. 436), such as political elites or institutions. Besides how citizens think about the status quo of politics, the kind of democratic politics that they want could be equally or even more important. Citizens’ conceptions of democracy – particularly how much they demand a reflexive democratic process based on pluralism and contestation – are therefore also relevant when it comes to their acceptance of AI in government and politics. Although citizens will not generally have a deeper knowledge of AI, they will have a basic understanding based on their own experiences and, particularly, on popular media coverage. As noted further above, people generally perceive AI to be more suitable where it deals with technical tasks. One can thus presume that they know AI as a technology that processes information to find optimal solutions – be it in translation, image recognition or playing games like chess and Go. Thus, if a person finds AI acceptable in political decision making, it seems very likely that she has a reductionist view of politics as technical problem solving and sees AI as a means to obtain better solutions. Hence, AI may conflict with basic principles of liberal-democratic politics, but if citizens do not value these principles as a part of democracy, they may accept or embrace AI in political decision making, even in high-level politics, for example, to take decisions for politicians.

However, there is more than only one notion of democracy that diverges from liberal-democratic principles and that citizens may sympathize with. As research on conceptions of democracy and process preferences (e.g., Bengtsson & Mattila, 2009; Bertsou & Caramani, 2020; Font et al., 2015; Hibbing & Theiss-Morse, 2002; König, 2022; Kriesi et al., 2016; Webb, 2013) has shown, citizens have different views regarding what form democracy should take. They place variable importance, for example, on expert decisionmakers, on more direct responsiveness to the will of the people, or on representative-democratic arrangements that leave politics to professional party politicians. Such views of democracy can vary with regard to whether they are compatible with AI in political decision making.

Based on the preceding considerations, the following section will formulate hypotheses about the role of political support and conceptions of democracy for the acceptance of AI in government and politics. Taking liberal democracy as the starting point, it will discuss four further conceptions of democracy (see also König, 2022) which, following Urbinati (2014), can be understood as disfigurations of liberal democracy: while they are compatible with the general idea of the rule of the people, they subvert liberal democracy by undermining the pluralism and ongoing contestation needed to sustain a reflexive political process. The set of four disfigurations described in the next section – a technocratic, a populist, a post-democratic and a majoritarian-relativist conception – has been chosen for two reasons. First, they are, with the exception of majoritarian relativism, prominently treated in the literature. Second, they can be analytically anchored in contrasts regarding how they deviate from realizing pluralism and ongoing contestation. In this way, the analysis can make sure that they are conceptualized and measured on the same level, rooted in a proceduralist understanding of democracy.

Hypotheses

It follows from the theoretical assumptions in the preceding section that citizens’ views on how democracy should be realized shape their support for AI in government and politics. We would correspondingly expect that the more citizens want to see democracy realized as liberal democracy – based on pluralism and ongoing contestation – the more sceptical they will be about using AI in government and politics. The following hypothesis will therefore be tested:
  • Hypothesis 1: The more citizens support a liberal-democratic conception of democracy, the less they support the use of AI in government and politics.
Other conceptions of democracy, however, which do not adhere to pluralism and ongoing contestation, might well translate into greater acceptance of AI. One such conception that has been studied in previous work on citizens’ support for different forms of democracy is a technocratic, expert-led form of politics (e.g., Bertsou & Caramani, 2020; Bertsou & Pastorella, 2017; Rapeli, 2016). Technocracy is characterized by detached expert decisionmakers that pursue the best political solutions based on knowledge (Caramani, 2017). Deriving a single best political course from knowledge makes pluralism and exchanging views in the public sphere much less relevant, and expert decisionmakers that remain detached from citizens leave little room for ongoing contestation. Both governance by AI and a technocratic mode of governing imply taking optimal decisions based on information and knowledge, and several contributions have accordingly pointed to a strong affinity between them (e.g. Janssen & Kuk, 2016; Sætra, 2020). One would therefore expect that support for technocracy is positively associated with support for AI:
  • Hypothesis 2: The more citizens support a technocratic conception of democracy, the more they support the use of AI in government and politics.
A different association is to be expected with a populist conception of democracy. Populism too has an anti-pluralist thrust as it clings to the idea of a true and uniform popular will (Abts & Rummens, 2007; Müller, 2016). It thus shares with technocracy that it presupposes one single correct political course, but unlike technocracy, it rejects detached political elites and instead demands immediacy regarding how citizens’ views are translated into political decisions (Abts & Rummens, 2007; Albertazzi & McDonnell, 2008; Caramani, 2017). Populism therefore similarly evades a reflexive political process based on ongoing contestation. The negative sentiment toward supposedly self-serving political elites that marks a populist attitude (Geurkink et al., 2020) could mean that people have a strong desire to see these elites replaced, possibly even with AI. However, regarding populism understood more narrowly as a preference for a certain democratic process, a negative association with support for AI in government and politics seems more likely: The idea that politics should realize the true will of the people is hard to reconcile with letting a detached AI system arrive at optimal decisions based on information-processing. The following hypothesis will thus be tested:
  • Hypothesis 3: The more citizens support a populist conception of democracy, the less they support the use of AI in government and politics.
Technocracy and populism both undermine pluralism through clinging to a presumed single best or true political course (Caramani, 2017). However, democracy can also be subverted from the opposite direction, through relativism, that is, the idea that all views are equally valid, whether they are the result of public discourse or not. Relativism, too, eliminates any need for a pluralist will formation (Novak, 1997). What matters is merely what citizens think or feel is right at a given time. Such a relativism marks a consumerist understanding of politics according to which citizens’ preferences are to be realized as much and as directly as possible. Thinking of democratic politics in these terms amounts to a form of majoritarian relativism (Tsatsanis et al., 2018) that combines relativism with demand for immediacy and direct responsiveness of political decision making. Unlike with populism, immediacy in that case does not entail that some true will of the people be realized. Rather, politics is about accommodating citizens’ demands, including by closely tracing changes in public opinion. This notion of democracy thus corresponds to Bobbitt's (2002) account of government as a ‘market state’ that is highly responsive and adapts to the demands in society. Majoritarian relativism, understood in this way, is therefore very much compatible with the idea that AI can be employed in political decision making to provide better solutions – particularly the promise of AI to better give people what they want, which informs certain visions of algorithmic government delivering services in a more targeted, personalized fashion (Williamson, 2014). The following hypothesis will thus be tested:
  • Hypothesis 4: The more citizens support a majoritarian-relativist conception of democracy, the more they support the use of AI in government and politics.
Another conception of democracy that can equally be expected to be perceived as compatible with AI in government and politics is post-democracy (Crouch, 2004). Like technocracy, post-democracy is characterized by detached political decisionmakers. These are professional politicians who are left to go about their business while citizens are not bothered with political matters. Post-democracy thus does not count on ongoing contestation. It is also relativist in the sense that democratic politics does not involve firm commitments to political views that enter into conflict with each other (see also Mouffe, 2009). Rather, political positions are advocated less as ends and instead more as a means to compete for electoral support and political power, similar to how party competition has been described with a view to cartel parties (Katz & Mair, 1995). A post-democratic conception of democracy thus expresses the will not to be bothered by politics and a general readiness to delegate decision making to agents who are trusted to sort out a suitable political course for a polity among themselves. This passivism makes post-democracy compatible with AI in government and politics.
  • Hypothesis 5: The more citizens support a post-democratic conception of democracy, the more they support the use of AI in government and politics.

In sum, the four described disfigurations of liberal democracy all undermine pluralism and ongoing contestation, but whereas the relativism that marks post-democracy and majoritarian relativism – ‘anything goes’ – and the knowledge-based monism of technocracy are compatible with AI in political decision making, this cannot be said of the monism that marks populism, which clings to a true popular will. The differing directionality of the expected associations with AI support can be tied to an underlying contrast between the democracy conceptions: technocracy, post-democracy and even majoritarian relativism are largely about performance and politics realizing satisfying outputs, without inputs to the political process having to take a specific shape. Liberal democracy and populism, in turn, are more about input and throughput – about how citizens’ interests are represented and about a particular political will being realized respectively. Further, the preceding hypotheses are most likely to hold true for more far-reaching uses of AI which imply greater decision making authority and substantially change the way political decisions are produced. Where AI intervenes less heavily in democratic politics, citizens’ conceptions of democracy may be less relevant.

Besides their preferred realization of democracy, it could be citizens’ political support (Easton, 1975) that shapes AI acceptance. Implementing AI can amount to a substantial change to political decision making and institutions. Citizens might therefore show greater acceptance for making such changes if they are dissatisfied with the status quo of politics, whereas they will hardly see a need for tinkering with the political process if they are highly satisfied with politics. Testing the role of political support together with conceptions of democracy is also important, as already hinted at above, because these conceptions can go along with higher or lower political support – which can lead to spurious associations when not including both as predictors. For instance, populism has been shown to be associated with lower, and technocracy with higher, institutional trust (Bertsou & Caramani, 2020).

Political support can take different forms, however. Specific political support refers to satisfaction with outputs, the working of politics and the political authorities, whereas diffuse support is a deep-seated attitude that concerns the more basic structure and the institutions of a political regime (Easton, 1975, pp. 437, 444). It is important to consider such different aspects of political support. It has been shown that dissatisfaction with the working of democracy can still go along with solid trust in political institutions (Norris, 1999; Webb, 2013) as a more fundamental expression of political support, and such dissatisfaction need not amount to disaffection or alienation, which are accompanied by political passivism and disinterest (Magalhães, 2005). These distinctions regarding political support are considered when testing the following hypothesis, as dissatisfaction with the working of democracy may not be enough to lead people to accept substantial changes in decision making:
  • Hypothesis 6: The higher the political support of a person, the lower her support for AI in government and politics.

Data and measures

Sample

The data used in the analysis stem from a survey among 1115 participants recruited by respondi AG from an online panel representative of the German population aged 18 to 74 (see Supporting Information Appendix A1 for a description of the sample composition).3 Although the data come from a single country, their relevance extends beyond the German case. First, evaluations of AI in general in other European countries (European Commission, 2017) are mostly as positive as, or more positive than, in Germany, and support for AI in a political context (IE University, 2019, 2021) seems to be largely comparable between Germany and at least several other European countries, like France, Poland and the Netherlands. Beyond the level of acceptance of AI in political decision making, the individual-level correlates of AI acceptance are likely to be similar across contexts as well, especially considering that these correlates concern rather basic political attitudes that are likely to play a similar role in different settings.

Second, Germany's societal values and political culture are similar to those of other central and Northern European countries (Inglehart & Welzel, 2005; Schwartz, 2006). Furthermore, as in many other European states, one can expect to find substantial variation in citizens’ conceptions of democracy in German society, which is suitable for testing the hypotheses. Since World War II, Germany has developed a democratic political culture marked by pronounced self-expression values (Inglehart & Welzel, 2005). Yet there is also historical experience with authoritarian regimes characterized by technocratic traits, and German post-war political culture has been described as marked by a subject orientation, with strong passivism and pragmatism (Almond & Verba, 1963). Still today, at least some parts of society show weak democratic values and an acceptance of unaccountable government structures (Decker et al., 2016). Further, the success of a right-wing populist party in more recent history and survey evidence on a populist attitude (Vehrkamp & Merkel, 2018) attest to the presence of a populist notion of democracy. In sum, there are several reasons why it is unlikely that the findings are specific only to the German case.

Measures

Dependent variable: Support for AI in government and politics. The dependent variable is based on six five-point rating items. In the survey, these items followed an introductory text that briefly described AI and listed several real applications, indicating that AI can assist, if not replace, decision making by professionals in certain non-political domains (see Supporting Information Appendices A2 and A3). To capture whether citizens show a general acceptance of AI in certain areas, the question asked how respondents generally evaluate the use of AI, considering that this technology will likely become more advanced in the future. The six items have been formulated with specific scaling properties in mind: they refer to the use of AI at different levels of decision-making authority, such that agreement with some of the items can be expected to be lower than with others. While people may easily agree with using AI for (1) routine or (2) complex administrative tasks, this does not mean that they will also support AI (3) informing political decisions, for example in ministries, or (4) assisting parliamentarians in their decision making, and even less that AI (5) takes decisions that politicians would otherwise take or (6) may compete in elections, as has occurred in Tokyo (Cole, 2018).

Conceptions of democracy. While the literature offers various measures expressing demand for different conceptions of democracy, such as for populism and technocracy, these have been developed independently. As the analysis covers five conceptions of democracy overall, directly drawing on existing measures would lead to an eclectic mix. Also, some existing measures incorporate aspects of political support as one of their elements. Specifically, a populist attitude (for an overview, see Wuttke et al., 2020) and stealth democracy as a form of expert-led government in the interest of the people (Bengtsson & Mattila, 2009; Hibbing & Theiss-Morse, 2002; VanderMolen, 2017; Webb, 2013), but also certain measures of technocracy (Bertsou, 2021), include discontent with political elites. Yet other measures, such as for liberal democracy, do not (see e.g., Kriesi et al., 2016). To deal with these issues, the measures used in the analysis are still closely linked to the literature on democratic preferences but are designed to capture the different notions of democracy on the same conceptual level while also keeping them separate from political support: Based on the proceduralist perspective formulated above, the measures focus narrowly on preferences regarding the shape of the democratic political process. They thus reflect support for different ways of organizing the political process, particularly how it translates citizens’ views into political decisions.

The items have been formulated to reflect the conceptualized similarities and differences described in the preceding section. The populist, technocratic, post-democratic and majoritarian-relativist conceptions of democracy formulated above are all measured with three rating items each. The items for populism capture the idea that politics should realize some true and unitary will of the people as directly as possible; majoritarian relativism combines immediacy with a relativism that does not care about a true popular will and sees all political positions and citizen preferences as equally valid; technocracy reflects the idea that experts should find the best political decisions for the people based on knowledge; and the post-democracy items capture the notion that political elites do not need a firm commitment to any political positions and should simply be left to go about the business of governing for the people. Finally, the liberal-democratic conception is based on four items on the importance of pluralism and four on the importance of ongoing contestation (see Appendix A4).

The items for each conception of democracy have been combined into variables. A principal component analysis with oblique rotation recovers six components (see Appendices A5 to A8 for details on scaling analyses), with dynamic contestation and pluralism, the two elements of liberal democracy, emerging as separable dimensions. Since these are strongly correlated (r = 0.6) and are posited as elements of a liberal-democratic conception, they have been combined into one variable. By including the five conceptions of democracy together, the analysis can account for mixed attitudes, since people may favour the realization of different conceptions of democracy at the same time and to varying degrees (see e.g., Font et al., 2015; Webb, 2013).
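The step of merging two strongly correlated subscales into one composite can be sketched as follows. This is a minimal illustration with simulated data, not the study's data: the subscale names, sample size and distributional parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical respondent scores on two liberal-democracy subscales
# (pluralism and dynamic contestation), each the mean of four 5-point items.
n = 1000
pluralism = rng.normal(3.5, 0.8, n)
contestation = 0.6 * pluralism + rng.normal(1.3, 0.6, n)

# Check that the two dimensions are strongly correlated before merging,
# as the study reports (r = 0.6).
r = np.corrcoef(pluralism, contestation)[0, 1]

# Combine into a single liberal-democracy scale by averaging z-scores,
# so that neither subscale dominates due to a larger variance.
z = lambda x: (x - x.mean()) / x.std()
liberal_democracy = (z(pluralism) + z(contestation)) / 2

print(round(r, 2), liberal_democracy.shape)
```

Averaging standardized rather than raw scores is one common choice here; averaging raw scores would weight the subscale with the larger spread more heavily.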

Political support. The analysis includes measures that have commonly been used as indicators of political support. Specific political support, which refers to the performance of the political authorities, is captured with a standard item on satisfaction with the working of democracy in the country and additionally, a variable for political disaffection based on five items taken from a question used in the German Longitudinal Election study 2017 (Roßteutscher et al., 2018). These express discontent with how politicians represent citizens and a certain alienation from politics (see Appendix A9). To measure a central form of diffuse support, the survey contains standard items on institutional trust asking about trust in five political institutions.4

Control variables. A central control variable in the analysis is citizens’ general attitude toward AI, that is, AI used beyond political domains. Two measures will be used in the analysis: a simple evaluative question about people's positive versus negative view of AI and the seven highest-loading items of a validated AI optimism scale (Schepman & Rodway, 2020). Only seven items have been chosen in order to avoid respondent fatigue in a survey that features several longer item batteries (see Appendices A10 and A11). Besides AI evaluations, the analysis will also include a variable for self-assessed knowledge about AI (see Appendices A12 and A13).

As a further control, the analysis employs a seven-point item asking about left-right ideology. An authoritarian attitude, which is more commonly found on the political right, is likely to translate into greater acceptance of political arrangements characterized by weaker accountability (Bengtsson & Mattila, 2009; Webb, 2013). For a more fine-grained measurement of ideology, seven-point items about economic policy preferences (more welfare and higher taxes versus lower taxes and less welfare) and immigration policy preferences are also included. Ambiguity intolerance has been included as a further control variable because it may underlie demand for a kind of politics that reduces complexity and uncertainty (Gründl & Aichholzer, 2020). It might lead citizens to be more favourable towards AI delivering seemingly clear-cut solutions for given problems, thus avoiding the conflict and ambiguity that usually marks politics. The common opacity of AI, however, could also counter any such relationship. To measure ambiguity intolerance, the survey features an item battery used in the Austrian National Election Study 2017 (Wagner et al., 2018). The analysis also includes political self-efficacy, which is based on two items taken from the German Longitudinal Election Study 2017.

Finally, further control variables are political interest, education, age and gender (female, male, diverse; female coded as 1).5 The survey also featured an attention check and a control question on the last page of the survey asking whether responses were honest and can be used for the analysis. These questions are applied as a filter together with a speeding check (at least five minutes for a survey designed to take around 10–15 minutes).

Empirical analysis

Description of the dependent variable

Figure 1 shows the average support for the six items of the dependent variable. Citizens are overall reluctant to support the use of AI in government and politics. In line with the idea that citizens are more accepting of less intrusive AI applications, some AI uses are found more acceptable than others. Similar to what has been shown for human expert decision makers, the level or stage of decision making can make an important difference (Bertsou, 2021). Figure 1 suggests that the higher the decision-making authority, the less acceptable the use of AI. Only the score for AI taking on routine administrative tasks is above the mid-point of the scale. For AI taking decisions that politicians would otherwise make, and for AI competing at elections, support is very low. Given how drastically these AI uses would interfere with democratic politics, it is to be expected that many see them as unacceptable. Still, some citizens are willing to entertain the idea of partially delegating high-level politics to machines: the low mean value hides that a respectable 10 per cent of respondents have scores of three or higher.

Support for AI with different levels of decision-making authority. Mean and 95% confidence intervals of the means.
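The descriptive quantities behind a figure like this can be computed in a few lines. The sketch below uses entirely hypothetical 5-point ratings and illustrative item labels; only the general pattern (declining support with rising decision-making authority) mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 5-point ratings for six AI-support items (1 = low, 5 = high);
# labels and response data are illustrative, not the study's data.
items = ["routine admin", "complex admin", "inform ministries",
         "assist parliament", "replace politicians", "run in elections"]
responses = np.clip(rng.normal([3.2, 2.8, 2.5, 2.4, 1.8, 1.6], 1.0,
                               (500, 6)).round(), 1, 5)

# Mean and normal-approximation 95% confidence interval per item.
means = responses.mean(axis=0)
ses = responses.std(axis=0, ddof=1) / np.sqrt(responses.shape[0])
for label, m, se in zip(items, means, ses):
    print(f"{label:>20}: {m:.2f} [{m - 1.96 * se:.2f}, {m + 1.96 * se:.2f}]")
```

With several hundred respondents per item, the normal approximation for the confidence interval is a reasonable simplification even for bounded rating scales.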

A principal component analysis (see Appendix A15) yields three dimensions behind the six items: three item pairs in the order shown in Figure 1. The analysis in the next section will therefore examine three dependent variables measuring AI (1) in an administrative context, (2) for assisting political decision making and (3) for replacing politicians. While these three dimensions lie behind the six items, these items also yield a strong Mokken scale when taken together (see Appendices A16 and A17). The regression analysis below will therefore additionally be performed with a dependent variable combining all six items.

Regression analyses

Figure 2 visualizes the ordinary least squares regression results for the three subscales of support for AI in government and politics as dependent variables (for regression tables and bivariate correlations, see Appendices A18 and A19). Turning first to the role of conceptions of democracy, there are striking differences between the models. For uses of AI concerning the first two levels of decision-making authority – in public administration and for assisting political decision making – citizens’ ideas of what democracy should look like are largely irrelevant.6 Citizens appear to see different notions of democracy as equally compatible even with AI uses that imply a notable change from the status quo.

OLS regression results. Unstandardized coefficients with 95% confidence intervals. The intercept is omitted from the graph. Adjusted R2 is 0.41, 0.35 and 0.16, respectively.

Only where AI uses would mean the strongest change to democratic politics, that is, at the highest level of decision-making authority (third model in Figure 2), are conceptions of democracy linked to AI support. The results largely support the formulated hypotheses. Notably, the most pronounced association is that for a liberal-democratic conception. In line with H1, the results imply that the more people endorse democracy as liberal democracy based on pluralism and ongoing contestation, the lower their support for AI taking decisions for political actors. A negative association also emerges for populism, which supports H3. The majoritarian-relativist and post-democratic conceptions, in contrast, are positively associated with the third subscale in Figure 2, thus conforming to H4 and H5. Remarkably, a technocratic conception of democracy (H2) is not associated with the dependent variables, challenging common wisdom about the affinity between technocracy and governance by AI.

It is instructive to look at the results in combination with one of the control variables: citizens’ general AI evaluations and especially AI optimism, which is by far the most important predictor in the first two models shown in Figure 2. In other words, whether citizens accept AI used for administrative tasks and for assisting political decision making is not a question of their conceptions of democracy but is heavily shaped simply by how they perceive AI in general, that is, in non-political contexts. For the more far-reaching uses of AI (subscale 3), the coefficient of AI optimism is still significant but more than halved in comparison. Citizens seem to understand that there is a tension between the idea of finding optimal solutions based on information processing and the nature of democratic politics: general AI optimism does not translate as directly into support for AI in high-level politics, whereas citizens’ conceptions of democracy are more important predictors.

Turning to the question of whether political support is linked to people's acceptance of AI in government and politics, the evidence is weak. Only satisfaction with democracy reaches conventional levels of statistical significance, with a negative coefficient in the third model. Overall, the findings contradict the idea that AI support is attributable to political discontent.7 Only the kind of democracy that people want emerges as relevant, together with AI optimism.8

The explained variance (adjusted R2) for the first two subscales in Figure 2 is clearly higher than for the third subscale: 0.41 and 0.35 as opposed to 0.16. AI optimism contributes heavily to the explanatory power in the first two models, but several other control variables are also relevant, particularly in the first model (AI in an administrative context). Women appear to have less favourable perceptions of AI used in this setting. One reason for this could be that women are more likely to be employed in the public sector and therefore perceive a higher risk of being replaced by AI. A similar reason might lie behind the significant positive coefficients of higher formal education and age: since higher education means people are more likely to perform non-routine tasks, they are less exposed to automation risks (Arntz et al., 2016), and higher age means a lower likelihood of being replaced by AI in one's future career.
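For readers less familiar with these quantities, OLS coefficients and the adjusted R2 reported above can be sketched as follows. The data are synthetic stand-ins: the predictor names, effect sizes and sample size are hypothetical and chosen only to illustrate the computation, not to reproduce the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for predictors (e.g., "AI optimism" and one conception
# of democracy) and an AI-support subscale; all values are made up.
n = 800
X = np.column_stack([
    np.ones(n),           # intercept
    rng.normal(0, 1, n),  # hypothetical "AI optimism"
    rng.normal(0, 1, n),  # hypothetical "liberal-democratic conception"
])
y = 0.9 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 1, n)

# OLS coefficients via least squares: beta = argmin ||y - X beta||^2
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Adjusted R^2 penalizes for the number of predictors k (excluding intercept):
# adj R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1)
resid = y - X @ beta
k = X.shape[1] - 1
r2 = 1 - resid.var() / y.var()
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(np.round(beta, 2), round(adj_r2, 2))
```

The adjustment matters little with two predictors and 800 observations, but it becomes relevant in models like those above, which include many conceptions-of-democracy and control variables.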

The reported results stem from regression models using the three subscales of AI support. When taking all six underlying items together into a single scale, the results largely mirror those for the third subscale in Figure 2, but AI optimism emerges as more important, while the liberal-democratic conception loses statistical significance (see Appendix A23). Also, when performing the regression analyses with the individual variables capturing citizens’ conceptions of democracy, the findings are overall corroborated (see Appendices A24 to A26). When using different variants (individual items and combinations of items) for the technocratic conception of democracy, only the first item, and the first and third items taken together, show a significant but weak positive coefficient in line with H2 (see Appendices A27 to A30).

The fact that the analysis covers conceptions of democracy and political support rather comprehensively makes it unlikely that there are missing conceptually related variables that would markedly change the findings. A sensitivity analysis also shows that among the highly significant conceptions of democracy variables, the smallest existing bias due to omitted confounding variables would have to be about 45 per cent (for ‘post-democratic’) to invalidate the significant coefficient (Appendix A31). Finally, one should note that conclusions about any possible causal interpretation of the registered relationships rest on theoretical assumptions. A causal relation, opposite to what has been modelled above, would mean that support for AI in government and politics comes first and influences how citizens think about the shape that democratic politics should take and their political support. This seems less plausible, however, given that these political attitudes are more general and arguably the result of a longer socialization, whereas ideas about AI in government and politics are more specific and have become more relevant only through more recent developments.

Conclusion

The increasing use of AI-based solutions in government has been noted to potentially challenge existing institutions and principles of democratic rule and could lead to a more detached, managerial and technocratic style of government. From the perspective of democracy research, it is therefore important to understand whether, and which, citizens are likely to support the adoption of AI in government and politics. The analysis above has shown, first, that citizens are overall sceptical towards the use of AI in government and politics, especially in high-level politics. This finding differs from survey results that indicate considerable support for AI (IE University, 2019, 2021). Yet while this previous evidence is based on a survey question about AI entering parliament in exchange for reducing the seats of parliamentarians, the analysis above has used questions on whether AI should generally be able to take decisions for politicians. Under this wording, the idea that citizens willingly accept being governed by AI seems exaggerated. Further, even support for AI assisting in the preparation of political decisions is rather low according to the findings. Only AI in public administration receives moderate support.

Second, citizens’ views regarding what shape democratic politics should take are associated with support for AI in government and politics, but their relevance depends on where AI is implemented. Regarding AI used in public administration and for informing the preparation of political decisions, acceptance is primarily driven by people's general AI optimism. This association is remarkably strong and there seems to be a considerable potential of popular support for AI in government to the extent that citizens have positive experiences and develop a positive general view of AI in non-political contexts. In this sense, AI in commercial settings may ultimately have political implications.

Citizens’ conceptions of democracy, in turn, are relevant predictors of AI support with respect to the more far-reaching applications. Strikingly, the evidence hardly supports the idea that a technocratic notion of democracy is associated with greater AI support – even though theoretical contributions have pointed to a clear affinity between technocracy and a social ordering via AI (e.g., Janssen & Kuk, 2016; Sætra, 2020). A possible explanation for this is that when people express support for expert-led politics, they think of human experts, not AI. The finding on technocracy is even more notable when considering that for the other examined conceptions of democracy, the registered relationships are theoretically consistent. Greater support for a liberal-democratic conception is associated with lower support of AI taking political decisions. Citizens thus seem to perceive a tension between the nature of democratic politics and the optimizing, information-based problem-solving of AI systems. Besides a liberal-democratic conception, a populist conception, too, is negatively related to support for AI. Finally, both a post-democratic conception, and a majoritarian-relativist conception are positively associated with support for the most far-reaching uses of AI. These findings therefore altogether suggest that acceptance of AI in high-level political decisions comes from citizens with a reductionist understanding of democracy embodying the idea that whatever works is fine if only it accommodates citizens’ demands.

Third, variation regarding how citizens view AI in political decision making is barely tied to their political support. Greater acceptance of AI informing decision making in government and politics does not seem to be an expression of political discontent. This is notable since a lack of political support is linked to demand for more radical political change (see e.g., Geurkink et al., 2020; Lavezzolo & Ramiro, 2018). Yet, there is only very weak evidence that citizens perceive AI as the answer to perceived deficits regarding the working of democracy. They might instead demand other solutions, such as more opportunities for citizen participation. This also means that it is very unlikely that citizens will call for AI as a kind of deus ex machina should future political developments erode political support.

The present study also has several limitations and points to further research desiderata. First, the data at hand cannot tell why exactly support for technocracy does not clearly translate into support for using AI in government and politics. Further research linking technocratic attitudes and AI perceptions could shed more light on this and generate insights of interest for both literatures. Second, the findings for the German case may well travel to various other European democracies with a similar political culture. Yet further research is needed to corroborate the registered relationships in other contexts. Third, it would be interesting to examine further why women are clearly less, and those with higher formal education clearly more, supportive of AI specifically in public administration decision making. Finally, while the analysis has distinguished between different levels of decision-making authority regarding the use of AI, there could be further differences regarding more concrete applications and the specific conditions under which these are adopted. These limitations notwithstanding, the above findings contribute important empirical insights to current academic and non-academic debates about the relationship between AI and democracy, showing which applications are likely to find support and by whom.

Acknowledgements

I would like to thank the anonymous reviewers for their comments and suggestions. The manuscript has also gained from feedback by Charlotte Bartels, Alexander Reisenbichler, Markus B. Siewert, and Georg Wenzelburger. Last but not least, thanks go to Louisa Prien for assisting with the preparation of the survey.

Open access funding enabled and organized by Projekt DEAL.

Notes

1. In the following, artificial intelligence (AI) systems are understood as technological tools designed to interact with a given environment and to process data in ways that allow the system to modify its behaviour and to choose optimal actions or produce optimal outputs according to predefined goals (Russell & Norvig, 2016).
2. Evidence presented by Bertsou and Caramani (2020) for nine European democracies also shows that citizens on average score above the mid-point on a scale measuring support for expert decision makers. The share of citizens belonging to a more sharply delineated technocratic type ranges from 6.5 per cent (Germany) to 19 per cent (Greece). Related evidence on a stealth-democratic attitude shows that it is fairly widespread, with shares ranging from about a quarter to a third of citizens in Finland (Bengtsson & Mattila, 2009; see also Rapeli, 2016) to more than a third of citizens in Spain (Font et al., 2015) and the United Kingdom (Webb, 2013) being categorized as stealth democrats.
3. The response rate was 30.2 per cent.
4. These are the parliament, the justice system, the federal government, parties and public administration.
5. For a description of the variables, see Appendix A14. As Cronbach's alpha for technocracy is only at an acceptable level of reliability (Hair et al., 2019, p. 262), the analysis has been run with different measures.
6. Further analyses (see Appendix A19) show that the significant positive coefficients of technocracy and post-democracy for the second subscale (AI serves to prepare and inform political decision making) disappear after controlling for general AI evaluations.
7. Additional analyses including the variables for political support individually show that institutional trust has a similar significant negative coefficient (see Appendix A20). While correlations between the main independent variables show that these are partly interrelated, and in theoretically consistent ways, a multicollinearity analysis indicates that the extent of their shared variance is not problematic and does not unduly reduce the accuracy of the coefficient estimates (see Appendices A21 and A22).
8. A closer inspection of AI optimism shows that respondents with higher scores are younger, more highly educated and have higher self-efficacy, but are less politically interested. They are also more likely to be male and have clearly higher self-assessed knowledge about AI (r = 0.36). These associations mirror findings by Araujo et al. (2020, p. 616). Further, when removing AI optimism and general AI evaluation from the main models, self-assessed knowledge about AI becomes highly significant.