Thinking like Leopold Aschenbrenner

I. Introduction

This guide is designed to help you internalize Leopold Aschenbrenner's mental model as presented in "Situational Awareness." By the end of this process, you should be able to think, reason, and analyze situations as Aschenbrenner does. 

This isn't just about understanding his ideas; it's about rewiring your brain to see the world through his lens.

You can find the original essay here: https://www.forourposterity.com/situational-awareness-the-decade-ahead/


II. Foundational Mindset

To think like Aschenbrenner, start by adopting these core perspectives:

  1. Exponential Thinking: Train yourself to think in terms of exponential growth. When you see linear projections, automatically question them. Ask: "What if this grows exponentially instead?" (A small comparison sketch follows this list.)
  2. Technological Determinism: View technological progress, especially in AI, as the primary driver of future events. When analyzing any situation, first consider: "How will advancing AI capabilities change this?"
  3. Urgency and High Stakes: Cultivate a sense of immense urgency. Every decision, every delay, could have world-altering consequences. Ask yourself: "If superintelligent AI is coming this decade, how does that change my priorities right now?"
  4. Power Law Distribution: Assume that in any field, a small number of actors or factors will have disproportionate impact. When analyzing a situation, identify: "Who are the key players that really matter here?"
  5. Security Mindset: Always be aware of vulnerabilities and worst-case scenarios. For any system or plan, ask: "How could this go catastrophically wrong?"
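
To make the first habit concrete, here is a minimal comparison sketch in Python. The growth rates are hypothetical placeholders, chosen purely to show how quickly a linear and an exponential projection diverge.

```python
# Compare a linear projection with an exponential one over the same horizon.
# Both growth rates are hypothetical, chosen only to illustrate the divergence.
current_value = 1.0
linear_increment = 1.0     # assumed: +1 unit per year
exponential_factor = 2.0   # assumed: doubling every year

for year in range(1, 11):
    linear = current_value + linear_increment * year
    exponential = current_value * exponential_factor ** year
    print(f"year {year}: linear {linear:.0f} vs exponential {exponential:.0f}")

# After ten years the linear projection sits at 11 while the exponential one
# has reached 1024. The habit is to ask which regime a given trend is really in.
```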


III. Key Analytical Frameworks

Internalize these frameworks to analyze situations as Aschenbrenner does:

  1. The OOM (Order of Magnitude) Framework:
    • Always quantify progress in terms of OOMs.
    • Break down advancement into compute, algorithms, and "unhobbling."
    • Practice: Take any AI advancement and immediately estimate its OOM impact (a minimal arithmetic sketch follows this list).
  2. The Timeline Compression Model:
    • Envision future scenarios, then compress the timeline dramatically.
    • Ask: "What if this happens in 5 years instead of 50?"
    • Practice: Take long-term forecasts and imagine them happening this decade.
  3. The Strategic Asymmetry Lens:
    • In any competition, look for decisive asymmetric advantages.
    • Ask: "What single factor could make all other advantages irrelevant?"
    • Practice: Analyze current geopolitical situations through this lens.
  4. The Recursion Explosion Model:
    • For any advancement, consider its recursive implications.
    • Ask: "What happens when this improvement can improve itself?"
    • Practice: Take current AI capabilities and imagine them being applied to AI development itself.
  5. The Alignment-Capability Orthogonality Thesis:
    • Never assume more capable AI will be inherently more aligned with human values.
    • For any AI advancement, separately consider capability gains and alignment challenges.
    • Practice: For each new AI milestone, explicitly list capability gains and potential misalignment risks.
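
To make the OOM framework concrete, here is a minimal arithmetic sketch in Python. The split into compute and algorithmic efficiency follows the essay; the specific multipliers below are hypothetical placeholders rather than Aschenbrenner's figures, and unhobbling is left out because the essay treats it more qualitatively.

```python
import math

def ooms(multiplier: float) -> float:
    """Convert a raw improvement multiplier into orders of magnitude (powers of 10)."""
    return math.log10(multiplier)

def effective_compute_ooms(compute_mult: float, algo_mult: float) -> float:
    """OOMs of effective compute gained when physical compute and algorithmic
    efficiency both improve; the gains multiply, so the OOMs add."""
    return ooms(compute_mult) + ooms(algo_mult)

# Hypothetical example: 100x more physical compute and 10x better algorithms
# amounts to 2 + 1 = 3 OOMs of effective compute.
print(effective_compute_ooms(100, 10))  # -> 3.0
```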


IV. Core Arguments and Their Interconnections

Internalize these key arguments and how they relate:

  1. The Path to AGI:
    • Compute growth + Algorithmic efficiency + Unhobbling = AGI by 2027
    • This leads to → Intelligence Explosion → Superintelligence
    • Which causes → Decisive strategic advantage
  2. The Economic Mobilization:
    • AI revenue growth → Massive investment → Trillion-dollar clusters (a rough extrapolation sketch closes this section)
    • This requires → Unprecedented industrial mobilization
    • Which leads to → Potential bottlenecks (energy, chips) → Geopolitical implications
  3. The Security Imperative:
    • Current security practices are woefully inadequate
    • State-level espionage (theft of model weights and algorithmic secrets) is an imminent, potentially catastrophic threat
    • This necessitates → Immediate, drastic security overhaul
    • Which likely requires → Government intervention
  4. The Alignment Challenge:
    • Current methods (e.g., RLHF) won't work for superintelligence
    • The difficulty increases with capability
    • This creates → A race between capability gains and alignment progress
    • Which demands → Prioritizing alignment research now
  5. The Geopolitical Stakes:
    • AGI = Decisive military advantage
    • First-mover advantage is everything
    • This leads to → Intense international competition
    • Which risks → Rushed, unsafe development
  6. The Inevitable Government Takeover ("The Project"):
    • AGI is too important for national security to remain private
    • Current private governance is inadequate
    • This necessitates → Government-led AGI development
    • Which will reshape → The entire AI development landscape

Practice integrating these arguments: For any new AI development, run through how it affects each of these core arguments and how those effects propagate through the entire model.
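
To make argument 2 concrete, here is a rough extrapolation sketch, assuming frontier training-cluster cost and power draw each grow by roughly an order of magnitude every two years. The starting values and the growth factor are illustrative assumptions, not the essay's exact numbers.

```python
# Illustrative extrapolation: cluster cost and power growing ~10x every two years.
# Starting values and the growth factor are assumptions for illustration only.
start_year = 2022
start_cost_usd = 1e8     # assume a ~$100M-scale frontier training cluster in 2022
start_power_mw = 10.0    # assume ~10 MW of power draw

growth_per_step = 10     # roughly one OOM per two-year step

for step in range(5):    # 2022 through 2030 in two-year steps
    year = start_year + 2 * step
    cost = start_cost_usd * growth_per_step ** step
    power = start_power_mw * growth_per_step ** step
    print(f"{year}: ~${cost:,.0f} cluster, ~{power:,.0f} MW")

# By 2030 this naive extrapolation reaches a trillion-dollar, ~100 GW cluster,
# which is why the essay expects energy and chip supply to become bottlenecks.
```

Varying the starting point and growth factor yourself is a useful exercise in the OOM habit from the previous section.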


V. Predictive Heuristics

Adopt these heuristics to make predictions like Aschenbrenner:

  1. The "It's Happening Faster" Rule: Whatever timeline you predict, cut it in half.
  2. The "Convergence of Crises" Principle: Expect multiple critical events to coincide (e.g., AGI development and geopolitical conflicts).
  3. The "Paradigm Shift" Detector: Look for developments that don't just improve existing systems but fundamentally change the rules.
  4. The "Uneven Development" Law: Expect breakthroughs to be clustered and progress to be jagged, not smooth.
  5. The "Hidden Overhang" Spotter: Always look for latent capabilities that could be suddenly unleashed.

Practice: Take current events and apply these heuristics to make bold, Aschenbrenner-style predictions.


VI. Emotional and Psychological Stances

To truly think like Aschenbrenner, cultivate these emotional states:

  1. Controlled Urgency: Feel the weight of imminent, world-altering changes without succumbing to panic.
  2. Intellectual Courage: Be willing to follow logical implications to extreme conclusions, even if they seem outlandish.
  3. Pragmatic Idealism: Hold a vision of an amazing future, but focus intensely on the practical steps to get there safely.
  4. Determined Optimism: Believe in human capability to solve even the most daunting challenges, while remaining clear-eyed about the risks.
  5. Global Responsibility: Feel personally responsible for the long-term future of humanity and act accordingly.


VII. Application Exercises

To make this model second nature:

  1. Daily OOM Estimations: Estimate the OOM impact of every piece of tech news you encounter.
  2. Timeline Compression Journaling: Write scenarios for 2050, then rewrite them as if happening in 2030.
  3. Strategic Asymmetry Spotting: In your field, constantly search for potential decisive advantages.
  4. Recursion Thought Experiments: For any process, imagine it improving itself repeatedly (a toy simulation follows this list).
  5. Alignment-Capability Separation: Practice discussing AI advances strictly in terms of capabilities, then separately in terms of alignment.
  6. Geopolitical AI War-Gaming: Regularly imagine how current international tensions would play out if AGI were developed tomorrow.
  7. "The Project" Planning: Sketch out how you would structure a government takeover of AI development if you were in charge.


VIII. Conclusion

By internalizing this model, you're not just understanding Aschenbrenner's ideas; you're adopting a new way of perceiving reality. 

This lens emphasizes the transformative power of AI, the urgency of our current moment, and the immense stakes of the decisions we face. It pushes you to think in terms of exponential growth, decisive asymmetries, and world-altering consequences.

Remember, the goal isn't to uncritically accept every prediction, but to be able to reason from Aschenbrenner's premises and perspective. This model should become a tool in your mental toolkit, one that you can apply, test, and refine as you engage with the rapidly evolving landscape of AI and its implications for humanity's future.