The Stanford AI Index 2025 reveals a rapidly evolving AI landscape

The Stanford Institute for Human-Centered Artificial Intelligence (HAI) has released its annual AI Index Report, offering a global overview of the current state of AI and its evolution over the past year.

This comprehensive 8th edition, running to more than 450 pages, highlights AI’s rapid evolution as it reshapes industries, economies, and our daily lives. To provide an objective and thorough analysis, the report aggregates data from diverse sources and was compiled by an interdisciplinary team of experts from academia, industry, and government agencies.

Their findings are structured in eight core chapters: Research and Development, Technical Performance, Responsible AI, Economy, Science and Medicine, Education, Policy and Governance, and Public Opinion.

Contents

  1. Research and Development
  2. Technical Performance
  3. Responsible AI
  4. Economy
  5. Science and Medicine
  6. Education
  7. Policy and Governance
  8. Public Opinion
  9. Beyond the Numbers
  10. Conclusion
  11. Links

Research and Development

The report highlights the growing global competition in AI research and development. In 2023, a total of 149 foundation models were released, which is more than double the number launched in 2022. Among these models, 65.7% were open-source, showing a significant increase from 44.4% in 2022 and 33.3% in 2021.

The United States continues to lead in AI innovation, producing 40 significant models in 2024, compared to China’s 15 and Europe’s three. At the same time, Chinese models such as DeepSeek R1 have achieved performance comparable to their U.S. counterparts on key benchmarks (MMLU, HumanEval). This progress is particularly remarkable given China’s relatively limited access to advanced computing resources.

Throughout 2024 the industry made substantial investments in AI (source)

China also leads in AI publications and patents. In 2024, Chinese researchers published more AI papers than any other nation, and China filed the most AI-related patents globally. The report stresses, however, that quantity does not necessarily equate to quality: U.S. models maintain their dominance in real-world impact and deployment, and the United States holds the top position in highly cited publications.

Highly cited publications, 2021–2023 (source)

Emerging regions such as the Middle East, Latin America, and Southeast Asia are actively developing AI models that address their unique local needs. This trend reflects the globalization of AI research and development, moving away from a predominantly U.S.-centric focus toward a more distributed and inclusive ecosystem.

AI models are becoming larger, more computationally demanding, and energy-intensive. The training compute for notable AI models is doubling approximately every 5 months, while dataset sizes for training LLMs are doubling every 8 months.

Dataset sizes for training LLMs are doubling every eight months (source)
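To put these doubling times in perspective, here is a minimal back-of-the-envelope sketch (not from the report) that converts them into approximate annual growth factors, assuming smooth exponential scaling; the function name and derived figures are illustrative only.

```python
# Back-of-the-envelope sketch (not from the report): convert the quoted
# doubling times into approximate yearly growth factors, assuming smooth
# exponential scaling.

def growth_factor(months: float, doubling_time_months: float) -> float:
    """Multiplicative growth after `months`, given a fixed doubling time."""
    return 2 ** (months / doubling_time_months)

# Doubling times cited by the AI Index: training compute ~every 5 months,
# LLM training-dataset size ~every 8 months.
compute_growth_per_year = growth_factor(12, 5)  # ~5.3x per year
dataset_growth_per_year = growth_factor(12, 8)  # ~2.8x per year

print(f"Training compute grows ~{compute_growth_per_year:.1f}x per year")
print(f"Dataset size grows ~{dataset_growth_per_year:.1f}x per year")
```

At these rates, training compute grows roughly fivefold and training datasets roughly threefold each year, which helps explain the rising hardware and energy demands the report describes.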

Technical Performance

In 2023, new benchmarks such as MMMU (multimodal understanding), GPQA (graduate-level, Google-proof question answering), and SWE-bench (software engineering tasks) were introduced to evaluate the capabilities of advanced AI systems. By 2024, AI performance on these benchmarks had improved markedly, with scores rising by 18.8 percentage points on MMMU, 48.9 percentage points on GPQA, and from 4.4% to 71.7% of coding problems solved on SWE-bench.

A significant concern is benchmark saturation: many standard tests, such as those for general knowledge and coding, are no longer effective as AI systems consistently achieve near-perfect scores. This requires the development of new, more challenging evaluation frameworks.

The report also reveals that the performance gap between open-weight models and their closed-source counterparts narrowed considerably, from 8% to 1.7%, in a single year.

By 2024, the performance gap between the open-weight models and their closed-source counterparts had nearly disappeared (source)

Notable advances have been made in multimodal AI, particularly in video generation, where systems can now produce longer, more coherent, and more realistic content. However, significant challenges persist in areas such as competition-level mathematics and visual commonsense reasoning, where AI still lags behind human performance.

Responsible AI

Addressing growing concerns, the 2025 AI Index Report dedicates considerable attention to responsible AI. The AI Incident Database recorded a significant rise in harmful incidents, with 233 cases in 2024, a 56.4% increase over the 149 incidents of 2023. These incidents, such as deepfake misuse and chatbots encouraging harmful behavior, illustrate the critical need for standardized responsible AI frameworks. The current lack of uniformity in benchmark selection among leading developers, including OpenAI, Google, and Anthropic, makes it harder to compare the associated risks.

Economy

AI is exerting a strong influence on the global economy. In 2024, private investment in AI reached $252.3B, marking a 26% increase from the previous year. The United States accounted for the largest share of this investment, with $109.1B, far exceeding the EU’s $19.4B and China’s $9.3B. Notably, $33.9B of the global investment went to generative AI, a growth of 18.7% compared to 2023.

The adoption of AI by businesses also saw a considerable increase, with 78% of organizations integrating AI into their operations in 2024, up from 55% in 2023.

AI integration reached 78% of organizations in 2024 (source)

AI’s effect on the workforce presents a complex scenario. While studies show that AI enhances productivity – particularly in areas such as service operations, supply chain management, and software engineering – the reported cost savings are often below 10%. AI also helps less-experienced workers perform at higher levels, helping to bridge skill gaps. At the same time, concerns about job displacement remain significant due to ongoing automation in manufacturing and customer service jobs.

Science and Medicine

AI’s role in scientific and medical discovery is expanding rapidly. Systems like DeepMind’s GNoME identified new materials, while AI-driven weather forecasting improved prediction accuracy. In medicine, the FDA approved 950 AI-enabled devices by August 2024, a sharp rise from 223 in 2023, reflecting AI’s integration into diagnostics and treatment planning.

Education

AI is transforming education by supporting personalized learning and expanding access. Adaptive platforms create content for individual students, while virtual tutors provide real-time feedback. In 2024, AI tool usage in classrooms increased, with teachers using systems like ChatGPT for better lesson planning.

The report also notes a global increase in AI-related education programs, particularly in computer science. Yet disparities in access remain, with low-income regions struggling to adopt AI-driven tools due to infrastructure limitations.

Policy and Governance

In 2024, U.S. states passed 131 AI-related laws, up from 49 in 2023, while federal progress lagged. Globally, regulatory frameworks vary, with the European Union focusing on ethical standards and China focusing on state-controlled deployment. The report highlights a lack of global coordination, which complicates efforts to address cross-border issues like data privacy and AI misuse.

While international coalitions like the AI Governance Alliance are forming to encourage collaboration, achieving widespread agreement remains a significant challenge. The report suggests standardized reporting and transparent development practices as a way to build trust and ensure accountability.

Public Opinion

Public sentiment toward AI is deeply divided. While countries like China (83%), Indonesia (80%), and Thailand (77%) view AI as a net positive, skepticism prevails in Western nations. Optimism levels are significantly lower in Canada (40%), the United States (39%), and the Netherlands (36%). The report attributes this divergence to cultural attitudes and varying exposure to AI’s tangible benefits.

Social media platforms, including X, amplify these debates. Posts highlight both AI’s promise – such as breakthroughs in science – and its pitfalls, including transparency concerns and potential biases. There are also concerns about privacy, job loss, and ethical risks. These discussions underscore the need for public engagement to shape AI’s development responsibly.

Beyond the Numbers

In addition to its extensive data, the AI Index Report also encourages critical reflection by all involved parties. The narrowing performance gap between open and closed models raises questions about accessibility versus control. Should AI remain a field dominated by a few tech giants, or should it be a global commons?

Conclusion

The Stanford AI Index Report 2025 highlights AI’s rapid growth and its profound impact on technology, the economy, and society, while stressing that development should remain aligned with human values and needs. Russell Wald, the Executive Director of Stanford HAI, remarks that “The AI Index equips policymakers, researchers, and the public with the data they need to make informed decisions – and to ensure AI is developed with human-centered values at its core”.

Links

“The 2025 AI Index Report” (Stanford)
