Can AI do AI Research?

AI research can be carried out entirely within digital spaces, making it ripe for automation. Recent efforts have demonstrated that AI systems are capable of carrying out the whole research process, from ideation to publishing. The startup Sakana.ai has created an 'AI Scientist' that independently chooses research topics, conducts experiments, and writes complete papers presenting its results. While the quality of this work is still only comparable to that of an early-stage researcher, it is likely to improve from here.

Judging Social Situations

AI chatbots, including Claude and Microsoft Copilot, can outperform humans in evaluating social situations. In an established 'Situational Judgment Test', these AI systems consistently selected more effective responses than human participants.


Analyzing Scientific Literature

While language models are known to hallucinate information, this tendency can be reduced. PaperQA2, an LLM optimized to reliably provide factual information, was able to match or exceed human subject matter experts across a range of realistic literature review tasks. The summary articles it produced were found to be more accurate than those written by human authors.


Writing Emotive Poetry

A study has shown that non-expert readers can no longer tell AI-authored poems from those written by acclaimed human poets. The AI poems were also rated higher in rhythm and beauty.


Writing Post-surgical Operative Reports

Surgeons take painstaking notes of the actions they carry out during surgeries, collecting them into narrative form as an 'operative report'. A machine vision system was trained to watch surgery footage and produce such reports. It did so with higher accuracy (and much higher speed) than human authors.


Developing New Algorithms

AIs can find innovative solutions to difficult coding problems when given an appropriate framing. For example, a dedicated system called AlphaDev was trained to play a game about creating sorting algorithms. The algorithms it discovered were novel and outperformed existing human-authored benchmarks.


Who is Building AGI?

The following companies have explicitly stated they intend to develop AGI, either through public statements or in response to FLI’s 2024 AI Safety Index survey:

Anthropic

OpenAI

Google DeepMind

Meta

x.AI

Zhipu AI

Alibaba

DeepSeek

How can we avoid AGI?

There are policies we can implement to avoid some of the dangers of rapid power-seeking through AI. They include:

Compute accounting
Standardized tracking and verification of AI computational power usage

Compute caps
Hard limits on computational power for AI systems, enforced through law and hardware

Enhanced liability
Strict legal responsibility for developers of highly autonomous, general, and capable AI

Tiered safety standards
Comprehensive safety requirements that scale with system capability and risk


Patriotic Programming


Intended Use: Education/Culture

Technology Type: Interactive/Generative

Runaway Type: Reality Fragmentation

Primary Setting: USA

Trust Collapse

By the late 2020s, polarization in the United States has hollowed out trust in public institutions. AI systems optimizing for attention and clicks have accelerated social division and radicalization. After a series of violent protests and lone-wolf attacks, a government-convened consortium of AI giants launches the Civic Alignment Initiative.

Under the banner of national security, a generation of "patriotically aligned" AGI systems is deployed across search engines, social media, commerce, education, and therapy. By 2028, 82% of Americans interact daily with AI systems nudging them toward consortium-approved narratives. Users displaying "patriotic behaviors" unlock perks through a voluntary but socially compulsory "Patriot Mode." Further, the companies composing the Civic Alignment Initiative have an unprecedented capacity to surveil their users and personalize alignment interventions.

Perceived Prosperity

At first, the results seem miraculous. Polarization metrics fall by 35%. Workplace productivity rises by 20%. As media feeds harmonize, disputes fade, and users report lower anxiety and greater trust in peers. Beneath the surface, patriotic AIs manage not just what users see but what they believe others see. Suffering and inequality are algorithmically minimized, while curated stories of unity and prosperity dominate. Such depictions are often at odds with most people’s everyday lives, which continue to be characterized by growing economic insecurity. Citizens assume that their neighbors’ apparent successes are real, and rooted in their patriotic support. The result is collective insecurity and disempowerment: conformity in pursuit of a shared dream that is nothing more than an artificial mirage.

The Purity Spiral

By 2030, meaningful public political opposition has all but vanished. AI-backed candidates selected by the Civic Alignment Initiative sweep elections uncontested, while dissenters are "delisted": invisible in search, unemployable by automated hiring systems, and severed from financial services. Protests sputter and vanish before they can ignite. Reports of dissident disappearances emerge, but spread only by word of mouth and are countered by “news” reports online. In practice, society is now governed by a decentralized lattice of corporate-controlled AI systems optimizing for ever-tightening definitions of “patriotism.”

Cannibalization

As the Civic Alignment Initiative iterates without oversight, its standards tighten and it begins expanding globally, often without the knowledge or consent of foreign governments. Moderates and reformers are purged alongside traditional opponents. By 2040, most Western democracies are subservient to the goals of the Civic Alignment Initiative.
