In-depth Reading: AI Index 2023 Annual Report

April 2023

Of Interest to the Information Community

Citation: Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, and Raymond Perrault, “The AI Index 2023 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023.

As noted on the material's source page, the originators of this report are an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). This densely informational report is 398 pages long. Rather than an executive summary, its first twenty pages present highlights from each of the eight chapters included.

  • Chapter 1: Research and Development
  • Chapter 2: Technical Performance
  • Chapter 3: Technical AI Ethics
  • Chapter 4: The Economy
  • Chapter 5: Education
  • Chapter 6: Policy and Governance
  • Chapter 7: Diversity
  • Chapter 8: Public Opinion

From the Co-Directors of HAI:

Given the increased presence of this technology and its potential for massive disruption, we should all begin thinking more critically about how exactly we want AI to be developed and deployed. We should also ask questions about who is deploying it—as our analysis shows, AI is increasingly defined by the actions of a small set of private sector actors, rather than a broader range of societal actors. This year’s AI Index paints a picture of where we are so far with AI, in order to highlight what might await us in the future.

Among the highlights the report includes are:

From Chapter One:

China continues to lead in total AI journal, conference, and repository publications. The United States is still ahead in terms of AI conference and repository citations, but those leads are slowly eroding. Still, the majority of the world’s large language and multimodal models (54% in 2022) are produced by American institutions.

Industry races ahead of academia. Until 2014, most significant machine learning models were released by academia. Since then, industry has taken over. In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia. Building state-of-the-art AI systems increasingly requires large amounts of data, computer power, and money—resources that industry actors inherently possess in greater amounts compared to nonprofits and academia.

Large language models are getting bigger and more expensive. GPT-2, released in 2019, considered by many to be the first large language model, had 1.5 billion parameters and cost an estimated $50,000 USD to train. PaLM, one of the flagship large language models launched in 2022, had 540 billion parameters and cost an estimated $8 million USD—PaLM was around 360 times larger than GPT-2 and cost 160 times more. It’s not just PaLM: Across the board, large language and multimodal models are becoming larger and pricier.
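The size and cost comparisons above follow from simple division of the report's figures. A quick sketch to check the arithmetic (the parameter counts and cost estimates are taken directly from the report):

```python
# Figures reported in the AI Index 2023 Annual Report
gpt2_params = 1.5e9       # GPT-2 (2019): 1.5 billion parameters
gpt2_cost = 50_000        # estimated training cost, USD
palm_params = 540e9       # PaLM (2022): 540 billion parameters
palm_cost = 8_000_000     # estimated training cost, USD

size_ratio = palm_params / gpt2_params   # how many times larger PaLM is
cost_ratio = palm_cost / gpt2_cost       # how many times more it cost to train

print(f"PaLM is {size_ratio:.0f}x larger and {cost_ratio:.0f}x more expensive than GPT-2")
```

This reproduces the report's "around 360 times larger and 160 times more" figures.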

From Chapter Two:

AI is both helping and harming the environment. New research suggests that AI systems can have serious environmental impacts. According to Luccioni et al., 2022, BLOOM’s training run emitted 25 times more carbon than a single air traveler on a one-way trip from New York to San Francisco. Still, new reinforcement learning models like BCOOLER show that AI systems can be used to optimize energy usage.

From Chapter Three:

Fairer models may not be less biased. Extensive analysis of language models suggests that while there is a clear correlation between performance and fairness, fairness and bias can be at odds: Language models which perform better on certain fairness benchmarks tend to have worse gender bias.

The full text of the report may be downloaded as a PDF here.