Key points:
- Responsible AI mentions in job descriptions have risen as a share of all AI-related postings, from nearly zero in 2019 to 0.9% in 2025 (on average among the 22 countries in our sample).
- In the UK, occupations including legal, mathematics and research & development have the highest shares of Responsible AI postings relative to AI overall. In contrast, tech occupations are generally focused on AI more broadly rather than Responsible AI in particular.
- AI regulation alone does not account for cross-country differences in the share of Responsible AI mentions among AI postings. Other factors, including company reputation and/or a given firm’s international communication strategy, might better explain these differences.
Responsible AI mentions in job descriptions have risen as a share of all AI-related postings, from close to zero in 2019 to 0.9% in 2025 (on average among the 22 countries in our sample), suggesting a rising focus on the ethical integration of AI into society.
Countries including the Netherlands, Switzerland and Luxembourg feature the highest share of Responsible AI mentions, while Japan, Mexico and Brazil lag below the global average, according to a Hiring Lab analysis of AI- and Responsible AI-related keywords in job postings. Local regulation efforts alone do not seem to account for these differences. And the occupations where Responsible AI is most commonly mentioned tend to be human-centred ones, such as legal and education & instruction.
As AI technology continues to advance rapidly, so do concerns about its risks, including inaccuracy, cybersecurity, and intellectual property infringement. Amid these concerns, a gap persists between companies that acknowledge AI risks and are taking meaningful action, and those that are not (at least not outwardly).
This analysis follows a similar methodology to Hiring Lab’s AI tracker, which measures the volume of mentions of a basket of select AI-related keywords in job descriptions. While the standard AI tracker looks for keywords such as “artificial intelligence” and “natural language processing”, the Responsible AI-specific keywords for the purposes of this analysis include terms such as “Responsible AI” and “ethical AI”. The keywords are language-specific, but given the global nature of many of these roles, we also included the English keywords in our analyses of non-English-speaking countries.
Postings related to Responsible AI have grown rapidly
Responsible AI postings have been growing steadily, from practically non-existent in 2019 to almost 1% of all AI postings by 2025 (AI postings more broadly have also grown in this period, although with some ups and downs). Although job postings explicitly referencing Responsible AI emerged after the more general AI-related postings, their growth has accelerated notably, particularly from 2024 onward. Looking across some selected large markets, the Netherlands stands out with the highest Responsible AI mention share (1.7%), followed by the UK (1.2%) and Canada (1.16%).
Interestingly, the rise in mentions of Responsible AI has been fairly uniform across countries. One might assume that heightened regulatory focus in the European Union, including the EU Artificial Intelligence Act and the earlier General Data Protection Regulation (GDPR), would lead European countries to emphasise Responsible AI more strongly than others, including the U.S. However, mentions of Responsible AI have also grown rapidly in the U.S., standing at 1.0% in March 2025, slightly above the global average. Half of all global AI-related postings used for this analysis originated in the U.S.
AI job postings in Luxembourg, the Netherlands, Switzerland and Belgium reference Responsible AI more frequently than those in other nations. This may be partly influenced by the fact that many international organisations and regulatory bodies are based in these countries. In contrast, Singapore, India, Spain and Poland show a relatively low level of Responsible AI mentions, despite having a large share of AI postings more broadly. In other words, even countries with strong demand for AI-related jobs do not necessarily exhibit a comparable focus on Responsible AI, a potential sign of varying national attitudes towards AI itself and/or that Responsible AI practices are still emerging.
Some human-centred occupations emphasise Responsible AI more than tech occupations do
Looking specifically at UK-based AI jobs, the occupations with the highest shares of Responsible AI mentions between April 2024 and March 2025 included legal (6.5%), mathematics (3.3%), research & development (3.2%), social science (3.1%), education & instruction (1.9%) and insurance (1.7%).
While some of these are generally not the most technically intensive AI jobs, Responsible and Ethical AI use may be especially important in these occupations to minimise the potential for harm and/or to comply with existing laws. For example, while AI can help an attorney summarise and produce dense and/or highly technical documents, that work must adhere to certain legal and ethical guidelines. In social science, policy advisory roles leverage Responsible AI to ensure ethical decisions, transparent policymaking, and unbiased outcomes. Insurance functions increasingly rely on AI for tasks including risk assessment, underwriting, and claim processing, which require fairness, transparency, and accountability.
The data also show that Responsible AI mentions are more limited in the occupational segments that tend to have the highest demand for workers, including retail and food preparation & service. This likely reflects limited AI adoption in these roles overall, and the concentration of Responsible AI language in smaller, more specialised fields.
Drivers of Responsible AI: Regulation or reputation?
Using data from Stanford’s Global AI Vibrancy Tool, we also assessed whether stronger national regulatory environments are associated with higher rates of Responsible AI mentions in AI postings. Interestingly, regulation alone was not found to correlate with Responsible AI mentions. Despite limited regulation, countries including the Netherlands, Switzerland and Sweden exhibit relatively high Responsible AI shares. Conversely, while AI regulations in the UK are more stringent, its share of Responsible AI postings is somewhat middling.
This lack of correlation suggests other factors may be at play. One possibility is cross-country differences in political institutions, with some nations being more apt to consider and/or pass more legislation than others. Company-level dynamics also likely play a role. A public embrace of Responsible AI practices may be part of a broader brand or reputational strategy rather than any overt response to regulation. And multinational companies, in particular, often publish the same or very similar job postings across several nations and may take a “one size fits all” approach to satisfying different national requirements, regardless of specific local regulations.
Policy implications and conclusion
AI risks can be viewed as negative externalities, where companies impose costs on society without bearing them fully, and our research can complement other studies that attempt to answer such AI dilemmas. We don’t know what the optimal level of Responsible AI is, but it’s clear that awareness of its importance is rising. If the optimal level is above the current 1%, then companies are still under-investing in risk mitigation.
To date, countries with relatively strict AI regulation have rates of Responsible AI mentions similar to those in less-regulated markets. This suggests that other factors, including reputational concerns or international business strategies, might be driving Responsible AI adoption as much as, or more than, regulatory requirements. Companies appear to be trying to internalise AI risks and address them based on market incentives or corporate and social responsibility, rather than regulatory mandates alone.
Methodology
We identify Responsible AI in job postings based on the presence of related keywords. These include frequently used terms such as “Responsible AI”, “ethical AI”, “AI ethics”, “AI governance” and “AI safety”, among others. The keyword list was developed using references from major public sources (e.g., UNESCO, OECD) and terminology frequently found in AI-related job descriptions. We capture keywords and phrases in English, French, German and the many other languages used in our sample.
Our analysis only covers countries where Indeed has operations, and thus does not include China, an acknowledged global AI powerhouse.
We analysed job titles in addition to job descriptions, which revealed similar patterns in the use of Responsible AI-related language.
To ensure robustness, we also applied fixed occupational weights to account for shifts in occupational composition over time; results were broadly consistent with the unweighted findings reported in the main text.
We further tested many alternative regulation indicators from Stanford University’s Global AI Vibrancy Tool. Regardless of the specific measure used, the relationship between regulation and the Responsible AI share remained broadly stable. For instance, neither legislative proceedings nor legislation passed was statistically significant when regressed individually.