To say the least, artificial intelligence (AI) is developing with extraordinary speed. ChatGPT, an AI chatbot developed by OpenAI, is the fastest-growing online service in history (Ahuja, 2023). The implications of AI for behavioural science may be particularly significant, extending far beyond the historic connection between the two fields (Simon, 1981). For consumers, the most important point is that modern AI excels at pattern detection, from identifying animals within images to predicting text from an initial prompt. Modern behavioural science, particularly over the past 15 years, has focused on identifying and operationalising bias and noise in consumer and investor decision-making and on providing correctives to reduce the effects of each (Halpern, 2015; Kahneman et al., 2021; Thaler & Sunstein, 2008). Bias and noise are, essentially, behavioural patterns. Thus, AI is likely to be valuable within behavioural science for modelling and examining consumer behaviour, and perhaps for improving it or improving on it (Ludwig & Mullainathan, 2022). For that reason, the use of AI alongside behavioural science is likely to become widespread in many applicable domains, such as consumer research and consumer policy (Sunstein, 2023).

This article outlines some opportunities and costs of AI-based behavioural science, including algorithmic behavioural science, in the coming years. We emphasise the benefits and costs for consumers, and some implications for consumer policy, though occasionally we venture more broadly.

We highlight important work already done to identify discriminatory biases, such as racist and sexist word associations (d-biases), within natural language text via AI methods (Bolukbasi et al., 2016; Brunet et al., 2019; Caliskan et al., 2017). At the same time, we note that relatively little work (Horton, 2023; Jones & Steinhardt, 2022) to date has used AI to identify cognitive biases (c-biases), which are the focus of modern behavioural science. Both d-biases and c-biases matter to consumers and in many domains. This is a clear, immediate opportunity for AI in behavioural science research (Ludwig & Mullainathan, 2021, 2022; Sunstein, 2023).

Modern behavioural science has also received significant criticism in recent years (Chater & Loewenstein, 2022; Maier et al., 2022), some of it highlighting the need for more contextualised behavioural approaches that incorporate heterogeneity (Mills, 2022a; Szaszi et al., 2022). For consumers, this “heterogeneity revolution” (Bryan et al., 2021) is likely to be promoted and accelerated by AI technologies (Michie et al., 2017; Rauthmann, 2020), both as a new tool for behavioural science and in conjunction with existing strategies, such as mega studies (Buyalskaya et al., 2023; Duckworth & Milkman, 2022).

Finally, from a complex systems perspective, AI has the potential to help behavioural scientists to “see the system” (Hallsworth, 2023). This may be through predicting the optimal timing and context for delivery of interventions designed to improve consumer welfare (Mills, 2022b; Yeung, 2017). It may also take the form of probing consumer behaviour as a complex system to identify optimal leverage points for affecting behaviour change (Park et al., 2023; Schmidt & Stenger, 2021).

AI also creates new costs for practitioners and consumers. We briefly address the environmental effects of AI in behavioural science (Crawford, 2021; Dhar, 2020). Where behavioural science uses AI in behavioural interventions to promote pro-environmental consumer behaviours, these energy-intensive methods must factor into the final evaluation of the intervention. However, environmental costs will affect any and all disciplines that use AI. As such, we focus more on costs specific to behavioural science practitioners and consumers.

AI-behavioural models may impose substantial social costs, as by endangering consumer privacy through data collection (Hagendorff, 2022; Sætra, 2020; Saheb, 2022) and interfering with the formation of consumer preferences (Bommasani et al., 2022; Russell, 2019). The latter risk is particularly important when considering vulnerable individuals, such as children and teenagers (Akgun & Greenhow, 2022; Smith & de Villiers-Botha, 2021). Even with regulation of various kinds, AI may be limited in its ability to accommodate important individual and societal values, and that limitation may undermine public trust and produce welfare costs from interventions otherwise forgone. All of this assumes that AI and behavioural science are used to promote consumer welfare. AI-behavioural models may instead be manipulative (Hacker, 2021; Sunstein, 2015) and induce harms by exploiting consumer biases (Bar-Gill et al., 2023; de Marcellis-Warin et al., 2022), contributing to the ongoing challenge of dark patterns in online consumer spaces (Helberger et al., 2022; Mathur et al., 2019). Finally, AI-behavioural approaches may not be economically viable in some domains where existing behavioural science methods are appropriate (Sunstein, 2012, 2023). Skill premiums are also likely to be high for professionals who command effective knowledge of both behavioural science and AI, meaning that—at least in the near term—established methods may prove more economically viable (Hallsworth, 2023; Lipton & Steinhardt, 2018).

Understanding the opportunities of behavioural science and AI, as well as these costs, will be crucial for determining best-practice applications and consumer policy to protect consumers (and citizens more broadly).

Opportunity 1: Identifying Biases

Identifying bias and noise with AI is a clear opportunity for behavioural science. Behavioural biases can be understood as predictable patterns or errors in human behaviour, including consumption choices (Kahneman, 2011; Thaler & Sunstein, 2003, 2008), and the pattern-detecting capabilities of modern AI are likely to be well-suited to the task of identifying consumer biases from behavioural data (Kleinberg et al., 2015, 2018; Ludwig & Mullainathan, 2021, 2022). In fact, AI may identify biases that have never been identified before (Ludwig & Mullainathan, 2022). Equally, noise may hide patterns in behaviour that humans fail to spot but that AI can identify and quantify (Aonghusa & Michie, 2020). There are profit-making opportunities here for companies seeking to increase business; there are also opportunities, profit-making or not, to improve consumer welfare.

AI has been used to identify discriminatory biases within human behaviour. For instance, Word2Vec is a natural language processing AI developed by Google (Mikolov et al., 2013). Like many natural language AI systems, Word2Vec identifies statistical relationships between words, expressed in terms of probabilities, and uses these relationships to identify word associations (Wolfram, 2023). A user can then explore these associations by posing questions to the AI. Through such questioning, Word2Vec has often been found to produce gender-biased word associations (Bolukbasi et al., 2016; Brunet et al., 2019). “Word embedding” models such as Word2Vec have also been used as “Word Embedding Association Tests” (WEATs) to replicate the results of the Implicit Association Test (IAT) using only (big) text data (Caliskan et al., 2017; Evenepoel, 2022). In both instances, only natural language is used to identify various discriminatory biases; the point is not that the AI systems themselves are biased, but that AI can be used to surface implicit biases in natural language that were previously hidden (Brunet et al., 2019).
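To make the mechanics concrete, here is a minimal sketch of a WEAT-style effect size in the spirit of Caliskan et al. (2017). The toy word vectors are invented for illustration; a real analysis would use embeddings from a pre-trained model such as Word2Vec.

```python
# A minimal WEAT sketch following Caliskan et al. (2017): the effect size
# compares how strongly two target word sets (X, Y) associate with two
# attribute word sets (A, B) in embedding space.
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, emb):
    # s(w, A, B): mean similarity of word w to set A minus its mean similarity to set B.
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    # Cohen's-d-style effect size over the two target word sets.
    sx = [association(x, A, B, emb) for x in X]
    sy = [association(y, A, B, emb) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy 2-d embeddings, deliberately constructed so "career" words sit nearer
# the male word: purely hypothetical values for illustration.
emb = {
    "he": np.array([1.0, 0.1]), "she": np.array([0.1, 1.0]),
    "career": np.array([0.9, 0.2]), "salary": np.array([0.8, 0.3]),
    "home": np.array([0.2, 0.9]), "family": np.array([0.3, 0.8]),
}
d = weat_effect_size(["career", "salary"], ["home", "family"],
                     ["he"], ["she"], emb)
print(f"WEAT effect size: {d:.2f}")  # positive => career words closer to "he"
```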

These results are evidently relevant to consumer behaviour, and they suggest several opportunities. Such methods offer alternatives to, say, the IAT for investigating human behaviour; the IAT can be challenging to implement and time-consuming (and raises questions about external validity). Furthermore, AI approaches can unlock new avenues for behavioural research that bear directly or indirectly on consumption. For instance, the WEAT can be applied to any corpus of natural language data and can thus be used to explore implicit biases across different cultural groups and time periods (Evenepoel, 2022). One need not focus on language; the potential is much broader. AI pattern detection has been used to investigate the decision-making processes of judges and doctors, with practices such as “mugshot bias” (the tendency to rely heavily on a defendant’s mugshot) identified through AI analysis (Kleinberg et al., 2018, 2019; Ludwig & Mullainathan, 2022). Similar biases might well be observed in consumers, though research is now at a very preliminary stage.

We are speaking here of discriminatory biases, or d-biases. While such biases have a long association with behavioural science, they are distinct from the cognitive biases (Wilke & Mata, 2012)—or c-biases—which generally concern modern behavioural science, especially in the domain of consumer and investor behaviour (Sunstein, 2022b). This is important to note so as to distinguish discussions of AI for detecting biases in behavioural science from the extensive literature on algorithmic bias (which generally focuses on d-biases). Relatively little work to date has explored the use of AI to identify c-biases (Horton, 2023; Jones & Steinhardt, 2022), though importantly, some AI-based analyses have shown judges (Kleinberg et al., 2018; Ludwig & Mullainathan, 2022) and doctors (Mullainathan & Obermeyer, 2022) to rely on more prominent information in a manner indicative of availability bias and representativeness bias (Tversky & Kahneman, 1974). AI techniques have also been used to study habit formation within especially large datasets, identifying important factors that influence the formation of consumption habits and that may have been difficult to determine via traditional statistical techniques (Buyalskaya et al., 2023). The potential of AI to identify the conditions under which individuals form consumption habits has immediate and obvious implications for consumers and scholars of consumption behaviour.

The relative paucity of such work should be seen as a compelling opportunity for research within behavioural science. Indeed, it is hardly premature to speculate about the possibilities such a research programme might hold. For instance, real-time data on the behaviour of a financial stock trader—such as the status of their portfolio, the speed of their mouse clicks, and the frequency of their email communications—might be used to predict whether the trader is in a “hot” state and automatically trigger risk management procedures, ranging from nudge-like interventions (e.g., “you should take a break from the desk”) to more coercive interventions (e.g., imposition of temporary trading limits). The behaviour of consumers might similarly be tracked at relevant times and over short or long periods.
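A purely hypothetical sketch of such a trigger follows. The features, labels, and thresholds are invented for illustration; any real deployment would require validated measures and a properly evaluated model.

```python
# Hypothetical "hot state" trigger: a classifier trained on retrospectively
# labelled episodes scores current behaviour and selects a tiered response.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: portfolio drawdown, clicks/min, emails/hour (standardised).
X = rng.normal(size=(500, 3))
# Assume past "hot" episodes were labelled retrospectively (e.g., by risk officers).
y = (X @ np.array([1.2, 0.8, 0.5]) + rng.normal(size=500) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

def risk_response(features, nudge_at=0.5, limit_at=0.9):
    """Tiered response: nudge at moderate risk, trading limits at high risk."""
    p_hot = model.predict_proba(np.asarray(features).reshape(1, -1))[0, 1]
    if p_hot >= limit_at:
        return "impose temporary trading limits"
    if p_hot >= nudge_at:
        return "nudge: 'you should take a break from the desk'"
    return "no intervention"

print(risk_response([2.0, 1.5, 1.0]))  # a high-risk profile in this toy setup
```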

Opportunity 2: Integrating Heterogeneity

Beyond expanding the toolkit by which researchers investigate consumer behaviour, AI presents a unique opportunity for behavioural science to progress in a way that meets various concerns about the field as a whole.

Recent high-profile results have sparked considerable debate (Hallsworth, 2023). In particular, questions have been raised about the effectiveness of some behavioural interventions (Maier et al., 2022), given what are often small effect sizes (Beshears & Kosowsky, 2020; DellaVigna & Linos, 2022). Concern has also been raised about the value of behavioural interventions that are focused on individual behaviour (Chater & Loewenstein, 2022), given current policy challenges that involve consumers, in domains such as health, safety, and the environment (Nisa et al., 2020). These concerns supplement earlier concerns about certain uses of behavioural insights in consumer policy, which have been challenged for potentially undermining individual autonomy and freedom of choice (e.g., Gigerenzer, 2015; Rebonato, 2014).

These different concerns—of being insufficiently effective and of disrespecting individuals—may or may not have force and may be addressed by better integrating individual heterogeneity and context into theory and practice (Bryan et al., 2021; Hecht et al., 2022; Szaszi et al., 2022). For consumers, the effectiveness of behavioural interventions is likely to depend on a multitude of factors, from the precise tool chosen (a default rule, a warning, a reminder, a tax, a subsidy, or a mandate) to individual traits (Peer et al., 2020; Thunström et al., 2018), to strength of preferences (de Ridder et al., 2022) and cultural factors (Schimmelpfennig & Muthukrishna, 2023).

In recent years, behavioural studies have increasingly used moderation and mediation approaches to probe behavioural results and identify heterogeneous effects within a sample (Dolgopolova et al., 2021; Hecht et al., 2022)—for instance, when evaluating calorie labels (Thunström, 2019) or COVID-19 interventions (Kantorowicz-Reznichenko et al., 2022; Krpan et al., 2021). This can lead to a deeper understanding of the factors influencing an intervention and thus creates opportunities for interventions to be tailored to specific environments, individuals, or policy objectives (Agrawal et al., 2022; Mills, 2022a; Sunstein, 2022a). More tailored interventions may also empower consumers to “self-nudge,” reassured that such interventions are attuned to their preferences and objectives (Krpan & Urbaník, 2021).
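As a minimal sketch of such a moderation analysis, the following fits a standard interaction-term regression on synthetic data; the choice of extraversion as the moderator is an illustrative assumption.

```python
# Moderation analysis via an interaction term: the coefficient on
# treated:extraversion estimates how the intervention effect varies with the trait.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),        # 1 = received the nudge
    "extraversion": rng.normal(0, 1, n),     # standardised trait score
})
# Simulated outcome: the nudge works better for more extraverted consumers.
df["outcome"] = (0.2 * df["treated"]
                 + 0.3 * df["treated"] * df["extraversion"]
                 + rng.normal(0, 1, n))

fit = smf.ols("outcome ~ treated * extraversion", data=df).fit()
print(fit.summary().tables[1])
```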

While such approaches are promising and inject much-needed nuance into the evaluation of behavioural results (Bryan et al., 2021; Szaszi et al., 2022), moderator analyses of behavioural interventions are limited by the potentially subjective choices in how a sample is stratified to investigate the effect of, say, gender or personality (Mills & Whittle, 2023). Furthermore, examining all possible combinations of heterogeneous factors on an identified effect may be too resource-intensive given current research practices, as moderators may themselves be moderated by additional factors. Indeed, for n variables being examined, an approximate estimate of the number of potential models—without prior theory—would be n!, or n-factorial (Hayes, 2013). The question of resource intensity is particularly pertinent as behavioural science research, some of it involving consumer behaviour, increasingly uses “mega studies” to investigate interventions (Duckworth & Milkman, 2022). These studies represent a very different route to understanding heterogeneous effects by embracing the power of scale. But in doing so, they are also burdened by huge amounts of data, creating an opportunity for AI to assist in the analysis (Buyalskaya et al., 2023; Matz et al., 2017).
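A quick illustration of this factorial growth, under the stated assumption that no prior theory is available to prune the candidate models:

```python
# Factorial growth of candidate moderation models: even a modest set of
# variables implies an unmanageable model space without prior theory.
import math

for n in (3, 5, 8, 10):
    print(f"{n} variables -> up to {math.factorial(n):,} candidate models")
# 3 -> 6; 5 -> 120; 8 -> 40,320; 10 -> 3,628,800
```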

AI may reduce or resolve many of the challenges brought by the added complexity of heterogeneity analysis (Lazer et al., 2009). Deep learning AI systems, which dominate current AI modelling, may accommodate an essentially unlimited number of input variables in an n-length input vector. For instance, rather than examining the effect of extraversion on a consumer behaviour, and separately examining the effect of openness on that same behaviour, an AI approach would allow each consumer’s unique personality profile to be examined holistically, leading to a predictive AI model that integrates far more heterogeneity than moderation approaches can accommodate (Kosinski et al., 2013; Matz et al., 2017). These individual-level variables are likely to be accompanied by various other contextual variables, such as time of day or location (Buyalskaya et al., 2023; Hauser et al., 2009, 2014), to further integrate heterogeneous factors, as many “autonomous choice architects” already do (Hermann, 2023; Mills & Sætra, 2022; Morozovaite, 2021; Yeung, 2017).
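A minimal sketch of this holistic approach follows, on synthetic data; the intervention variants (a reminder versus a default rule) and the simulated trait interactions are invented for illustration.

```python
# Holistic heterogeneity: the full personality profile plus context enters one
# predictive model, which is then used to pick the better variant per consumer.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 2000

# Inputs: five personality dimensions plus two context features (hour, weekend).
profiles = rng.normal(size=(n, 5))
context = np.column_stack([rng.integers(0, 24, n), rng.integers(0, 2, n)])
variant = rng.integers(0, 2, n)  # 0 = reminder, 1 = default rule

X = np.column_stack([profiles, context, variant])
# Simulated response with interactions the model must discover on its own.
y = (0.3 * variant * profiles[:, 0]     # the default helps for one trait...
     - 0.2 * variant * profiles[:, 3]   # ...but backfires for another
     + 0.1 * context[:, 1] + rng.normal(0, 0.5, n))

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def best_variant(profile, ctx):
    """Predict each consumer's response under both variants; pick the better one."""
    preds = [model.predict(np.r_[profile, ctx, v].reshape(1, -1))[0]
             for v in (0, 1)]
    return int(np.argmax(preds))

print(best_variant(rng.normal(size=5), [9, 0]))
```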

Heterogeneity-respecting behavioural interventions, developed through AI, may lead to more effective (Agrawal et al., 2022) and more equitable (Sunstein, 2022a) interventions that simultaneously address concerns about the effect sizes of interventions, given the scale of some consumer policy challenges (Chater & Loewenstein, 2022; Nisa et al., 2020). At the same time, a new-found emphasis on context and heterogeneity may turn out to be a sufficient response to the concern that, for consumers and others, behavioural interventions are homogeneous, one-size-fits-all strategies. Interesting results are already emerging. For instance, AI recommendation algorithms that personalise reading recommendations for children, accounting for their abilities and tastes, have been found to produce higher levels of reading (Agrawal et al., 2022).

Opportunity 3: Handling Complexity

AI invites applied behavioural science to embrace, where relevant, the complexity inherent in real human behaviour, and points towards an understanding of behaviour as part of a complex adaptive system (Hallsworth, 2023). At least in some of its forms, behavioural science overlaps with complexity economics (Sanbonmatsu & Johnston, 2019; Sanbonmatsu et al., 2021; Spencer, 2018), which uses computational techniques to model the behaviour of many artificial agents within economic systems (Arthur, 2021), and with cybernetics (DeYoung, 2015; Forrester, 1971), which examines how information and feedback drive the evolution of simple and complex systems (Beer, 1970).

Behavioural interventions do not exist outside of the environment in which consumer behaviour occurs (Banerjee & Mitra, 2023), and furthermore, such behaviour is typically not a static exercise, but a continuous one, with behaviours occurring before and after any intervention (Dolan & Galizzi, 2015; Krpan et al., 2019). An opportunity for AI within behavioural science is therefore predicting the optimal environments for consumers, including time of intervention delivery and before/after spillover effects of interventions (Michie et al., 2017). For instance, generative AI may be used to model many artificial agents within an “artificial society,” to investigate behavioural responses to an intervention within a computer “sandbox,” prior to real-world implementation (Aher et al., 2023; Argyle et al., 2023; Park et al., 2023). This perspective requires behaviour to be viewed not as a homogeneous, individual state, but as a dynamic, adaptive response to environmental factors (Hallsworth, 2023; Sapolsky, 2017). Recent studies have begun to investigate the suitability of these methods in consumer and marketing research (Brand et al., 2023).
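As a toy sketch of such a sandbox: the llm_persona_response function below is a hypothetical stand-in for a call to a generative model prompted with a persona and a choice scenario; a simple rule plays that role here so the example runs end-to-end.

```python
# Toy "artificial society": simulate heterogeneous agents' responses to an
# intervention (a green-tariff default) before any real-world rollout.
import random

random.seed(3)

def llm_persona_response(persona, default_on):
    # Hypothetical stand-in for a generative-model call: probability of
    # choosing the green tariff rises with environmental concern and a default.
    p = 0.2 + 0.5 * persona["env_concern"] + (0.2 if default_on else 0.0)
    return random.random() < min(p, 1.0)

def simulate(default_on, n_agents=1000):
    agents = [{"env_concern": random.random()} for _ in range(n_agents)]
    uptake = sum(llm_persona_response(a, default_on) for a in agents)
    return uptake / n_agents

# Pre-test the intervention in the sandbox prior to implementation.
print(f"uptake without default: {simulate(False):.0%}")
print(f"uptake with default:    {simulate(True):.0%}")
```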

Complexity and cybernetic perspectives encourage one to understand behaviour as part of a wider system, where different “variables” within the system all represent potential opportunities to intervene and effect behaviour change (Beer, 1993; Forrester, 1971). Particularly important variables within systems have been dubbed “leverage points” (Abson et al., 2017; Leventon et al., 2021; Schmidt & Stenger, 2021). Within a complex system model of consumers, these variables have an outsized effect on the system as a whole and, from a behavioural perspective, have been offered as a valuable direction for future research into how behavioural interventions can be targeted to produce substantial behaviour change (Abson et al., 2017; Hallsworth, 2023; West et al., 2020) and influence consumption behaviours.

Identifying such points for consumers may be difficult owing to the complexity of the system. Large amounts of data are required to appropriately model a sufficiently complex system (Komaki et al., 2021; Meadows, 1997; Simon, 1981). Furthermore, these systems—by their nature—tend to be difficult to reduce to effective, useable models for sustained periods of time, leading to what systems theorists have dubbed the “dancing with systems” problem (Meadows, 2001).

AI represents a promising approach for mapping behavioural systems and identifying leverage points (Ng, 2016), which in turn may enhance the effectiveness of behavioural interventions (Hallsworth, 2023; Schmidt & Stenger, 2021). Again, this is due to the dual technological advantages of AI: analysing large amounts of data and dynamically detecting patterns within those data. As behavioural science develops to tackle more complex behavioural challenges, there will be a growing need for strategies to understand complexity and design interventions capable of responding to and leveraging such complexity effectively. AI may facilitate the integration of more complexity into this ever more interdisciplinary field. The result may be far more clarity, in the domain of consumer behaviour, about what works and what does not, and about what lasts and what does not.
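A minimal sketch of this idea uses sensitivity analysis on a toy habit-formation model; the model structure and parameter values are invented for illustration.

```python
# Leverage-point identification sketch: simulate a toy dynamic model of a
# consumption habit, perturb each parameter by 10%, and rank parameters by
# their effect on the long-run outcome. Parameters with outsized effects
# are candidate "leverage points".

def simulate(params, steps=200):
    # Toy stock-and-flow model: a habit "stock" fed by cues, eroded by decay.
    habit = 0.0
    for _ in range(steps):
        habit += params["cue_strength"] * (1.0 - habit) - params["decay"] * habit
    return habit

base = {"cue_strength": 0.05, "decay": 0.10}
baseline = simulate(base)

sensitivity = {}
for name in base:
    perturbed = dict(base, **{name: base[name] * 1.1})
    sensitivity[name] = (simulate(perturbed) - baseline) / baseline

# Largest absolute relative change identifies the candidate leverage point.
for name, s in sorted(sensitivity.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {s:+.1%}")
```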

Costs

AI will create several costs for behavioural science practitioners and consumers. Some costs, such as the environmental cost of building, using, and maintaining massive AI systems, are costs that all disciplines that embrace AI technologies must address (Crawford, 2021; Dhar, 2020). For instance, the carbon cost of training an AI model for a study of publication quality has been estimated to be the equivalent of the carbon consumption of approximately two average American lifetimes, or seven average global lifetimes (Hao, 2019; Strubell et al., 2018). Where, say, AI-behavioural models are used to design and implement behavioural interventions to promote pro-environmental consumption decisions, the energy cost of such models must be a factor in the overall assessment of the intervention, changing the required effectiveness of the behavioural intervention to compensate for the deleterious effects of developing and delivering it (Mills & Whittle, 2023).

Consumers might also face costs of diverse kinds; some of them are difficult to quantify. These include costs that arise from data collection, in terms of privacy costs (Hagendorff, 2022; Sætra, 2020; Saheb, 2022), and from implementation, in terms of experiential costs (Russell, 2019; Sunstein, 2023) such as outcome homogenisation (Bommasani et al., 2022). For instance, where sensitive data are required for an AI-behavioural model to effectively function, but the rationale for using such data cannot be explained to the data subject—perhaps due to a lack of theoretical underpinning (Forde & Paganini, 2019; Gibney, 2018)—there is an ever-present risk that data are being misused and privacy unjustifiably violated. Even if justifiable, the potential benefits of AI-behavioural models, in terms of predictive capacity and welfare-enhancing behavioural interventions, should not be taken as sufficient to assume consent for data collection (Sætra, 2020). Such social costs are particularly pronounced when considering vulnerable consumers, such as children and teenagers, and the potential harms that AI-behavioural models may induce through intervening to change behaviour at times of critical cognitive and personal development (Akgun & Greenhow, 2022; Russell, 2019; Smith & de Villiers-Botha, 2021).

For consumers, there is also a pervasive risk of manipulation (Hacker, 2021; Sunstein, 2015). AI might be used to lead consumers in directions that are not in their interest, perhaps by exploiting a lack of information or behavioural biases (Bar-Gill et al., 2023; de Marcellis-Warin et al., 2022). Indeed, pattern detection abilities could enable AI not only to personalise in a way that promotes consumers’ welfare but also to exploit consumers’ biases to their detriment. This could exacerbate ongoing challenges surrounding dark patterns—online, deceptive behavioural practices designed to exploit consumers (Helberger et al., 2022; Mathur et al., 2019). The costs along these dimensions could be high. At the same time, AI technologies coupled with pro-consumer behavioural science could, in fact, emerge as a substantial bulwark against such abuses, providing consumers with fast, personalised information and advice (Micklitz & Pałka, 2017; Thorun & Diels, 2020).

It is important, from a consumer policy perspective, to retain human oversight and accountability for any costs that are incurred (Mills & Sætra, 2022). Having some “human in the loop” is recognised in emerging AI position papers, such as in the UK (UK Centre for Data Ethics & Innovation, 2020), and is supported by research into public attitudes concerning algorithmic influence (Aoki, 2021; Ingrams et al., 2021; Kozyreva et al., 2021).

While one may wish to balance social costs against the estimated welfare outcomes of more accurate or personalised interventions (Sunstein, 2012), poor theoretical underpinnings of AI-behavioural models may lead to a reliance on large datasets containing potentially sensitive behavioural details, lest the accuracy of the models be undermined. Broadly, the costs of AI-behavioural models and the enhanced accuracy such approaches might bring (Mills, 2022a; Sunstein, 2022a) should be weighed against the social and welfare costs of more generic, but less data-invasive, approaches to behaviour change.

For the foregoing reasons, AI-driven approaches may be less economical than established behavioural science approaches. While contextualising interventions and using heterogeneity analysis to respect individual autonomy are substantial opportunities, it is important to recognise that behavioural science has already contributed much to public life and consumer protection without using such technologies (Beshears & Kosowsky, 2020; Jachimowicz et al., 2019; Sanders et al., 2018). Where existing behavioural science competencies can deliver adequate benefits, an AI-behavioural approach may ultimately be more costly in terms of both time and money.

The cost of skills may also be a factor. As some have argued in computer science (Lipton & Steinhardt, 2018), the lack of skilled AI researcher capacity has led to limited critical oversight in AI development, with the costs of resolving this issue tied to the economic cost of enhancing skills. While emerging fields, such as behavioural data science, appear promising, there is likely to be a persistent skill premium that keeps the costs of AI-behavioural approaches high compared to established techniques, at least in the near term.

This highlights an important additional risk: rapid deployment of AI-behavioural models is likely to demand more in terms of skills than present capacity within behavioural science can meet (Hallsworth, 2023), which in turn creates the possibility of mis-deployment and misuse. Mistakes and harms to consumers are a possible consequence. Patience in the development of this space, coupled with efforts to build capacity and understand the necessary safeguards for AI-behavioural models—given the potential costs involved—is likely critical to the successful implementation of AI within behavioural science and to the development of appropriate consumer policy guidance and consumer protections.

Conclusion

The opportunities AI presents for behavioural science are significant. For consumers, AI has promise as a means of probing human behavioural data to identify new cognitive biases or to identify known cognitive biases in novel contexts. AI may also promote the “heterogeneity revolution” in behavioural science by allowing significantly more data to be used in the design and implementation of behavioural interventions designed to improve consumer welfare. From a complex systems perspective, AI may be well-suited for optimising the timing and context of intervention delivery, again enhancing effectiveness, as well as probing behavioural systems as a whole to predict optimal leverage points for promoting relevant goals for consumers.

AI usage in behavioural science will also create costs. As with all disciplines, behavioural science must incorporate the environmental costs of energy-intensive AI technologies into its practice. For behavioural interventions that seek to promote pro-environmental behaviours, such a cost is particularly pertinent. AI will also create social costs of diverse kinds for consumers, which behavioural science must confront. These include privacy costs from collecting potentially sensitive data on individual behaviour and the risk of AI-behavioural models interfering with vulnerable individuals. There are also several economic costs. AI-behavioural models are likely to raise the skill requirements of behavioural science practitioners, making these approaches more expensive. Where such skills are scarce, there is also the risk that such methods will be used without adequate understanding or oversight, leading to misuses and welfare costs suffered by consumers. Furthermore, behavioural science can already do much without AI methods, and existing competencies should always be considered in comparison to potentially more costly alternatives.

As AI technologies develop, their potential will inevitably grow. The most productive paths forward focus on the distinctive opportunities and costs of an AI-driven behavioural science, with particular emphasis on the opportunity to learn more than ever before about both bias and noise and to use what is learned to increase consumer welfare.