Can AI do AI Research?

AI research can be carried out entirely within digital spaces, making it ripe for automation. Recent efforts have demonstrated that AI systems can carry out the whole research process, from ideation to publication. Startup Sakana.ai has created an 'AI Scientist' that independently chooses research topics, conducts experiments, and publishes complete papers showing its results. While the quality of this work is still only comparable to that of an early-stage researcher, it is likely to improve from here.

Judging Social Situations

AI chatbots, including Claude and Microsoft Copilot, can outperform humans in evaluating social situations. In an established 'Situational Judgment Test', these AI systems consistently selected more effective responses than human participants.

Analyzing Scientific Literature

While language models are known to hallucinate information, this tendency can be reduced. PaperQA2, an LLM-based system optimized to reliably provide factual information, matched or exceeded human subject-matter experts across a range of realistic literature review tasks. The summary articles it produced were found to be more accurate than those written by human authors.

Writing Emotive Poetry

A study has shown that non-expert readers can no longer tell AI-authored poems from those written by acclaimed human poets. The AI poems were also rated higher in rhythm and beauty.

Writing Post-surgical Operative Reports

Surgeons take painstaking notes on the actions they carry out during surgery, collecting them into narrative form as an 'operative report'. A machine vision system was trained to watch surgery footage and produce such reports. It did so with higher accuracy (and far greater speed) than human authors.

Developing New Algorithms

AIs can find innovative solutions to difficult coding problems when given an appropriate framing. For example, a dedicated system called AlphaDev was trained to play a game about creating sorting algorithms. The algorithms it discovered were novel and outperformed existing human-authored benchmarks.

Who is Building AGI?

The following companies have explicitly stated they intend to develop AGI, either through public statements or in response to FLI’s 2024 AI Safety Index survey:

Anthropic

OpenAI

Google DeepMind

Meta

x.AI

Zhipu AI

Alibaba

DeepSeek

How can we avoid AGI?

There are policies we can implement to avoid some of the dangers of rapid power-seeking through AI. They include:

Compute accounting
Standardized tracking and verification of AI computational power usage

Compute caps
Hard limits on computational power for AI systems, enforced through law and hardware

Enhanced liability
Strict legal responsibility for developers of highly autonomous, general, and capable AI

Tiered safety standards
Comprehensive safety requirements that scale with system capability and risk

TOMORROW’S AI

Welcome to The Destiny Deck!

Description

This custom 55-card deck is designed to help you imagine your own AI futures. In a group or on your own, it will support your creativity and help you explore how different future frames can shape belief and action.

Card types such as ‘Intended Use’ and ‘Runaway Risk’ will prompt your imagination. Draw one of each, then bring your own lived experience and expertise to imagine a brand-new AI future!

Other card types like ‘Action Levers’ and ‘Benefits/Risks’ will then help you to flesh out your world, and to consider how we might approach or avoid it.

We’ll need all of our collected wisdom to reach a future worth fighting for. Whether you're a tech leader or investor, a policymaker, or just AI-curious, this deck is for you.

Where do you want AI tools to take you?

Framework: control levers dictate AI's future, which may be contained (bringing benefits or disruption) or runaway.

Storytelling Framework

Challenging the ‘Official Future’ of AI

AI is moving fast, could impact nearly every sector, and may be able to improve itself. So how can we imagine where AI may take us?

AI’s architects are rushing toward a risky ‘official future’ that assumes more powerful AI tools will be more beneficial. They say that regulations matter, but their actions and investments say otherwise.

We want to challenge this official future by imagining a wide range of other possibilities.

Imagining AI’s runaway risks can help us to strengthen our institutions against them. Imagining nuanced positive futures can help us use AI to solve problems instead of seeking power.

Let’s imagine AI futures that benefit us all.
