
80,000 Hours Podcast

The 80,000 Hours team
Latest episode

330 episodes


    How scary is Claude Mythos? 303 pages in 21 minutes

    April 10, 2026 | 21 min
    With Claude Mythos we have an AI that knows when it's being tested, can obscure its reasoning when it wants, and is better at breaking into (and out of) computers than any human alive. Rob Wiblin works through its 244-page System Card and 59-page Alignment Risk Update to explain why:
    Mythos is a nightmare for computer security
    It has arrived far ahead of schedule
    It might be great news for alignment and safety
    But 3 key problems mean we can’t take its alignment results at face value
    Mythos isn’t building its replacement yet, probably
    Anthropic staff are, for the first time, kinda scared of Claude
    He's losing sleep
    Learn more & full transcript: https://80k.info/mythos
    This episode was recorded on April 9, 2026.
    Chapters:
    Why people are panicking about computer security (01:05)
    Mythos could break out of containment (04:23)
    Anthropic is losing billions in revenue by not releasing Mythos (06:21)
    Mythos is actually the most aligned model to date, except… (07:48)
    Mythos knows when it’s being tested (09:52)
    Mythos can hide its thoughts (11:50)
    Mythos can’t be trusted about whether it’s untrustworthy (14:02)
    Does Mythos advance automated AI R&D? (17:03)
    Mythos scares Anthropic (19:15)
    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Camera operator: Dominic Armstrong
    Production: Elizabeth Cox, Nick Stockton, and Katy Moore

    Village gossip, pesticide bans, and gene drives: 17 experts on the future of global health

    April 7, 2026 | 4 hr 6 min
    What does it really take to lift millions out of poverty and prevent needless deaths?
    In this special compilation episode, 17 past guests — including economists, nonprofit founders, and policy advisors — share their most powerful and actionable insights from the front lines of global health and development. You’ll hear about the critical need to boost agricultural productivity in sub-Saharan Africa, the staggering impact of lead poisoning on children in low-income countries, and the social forces that contribute to high neonatal mortality rates in India.
    What’s so striking is how some of the most effective interventions sound almost too simple to work: banning certain pesticides, replacing thatch roofs, or identifying village “influencers” to spread health information.
    Full transcript and links to learn more: https://80k.info/ghd
    Chapters:
    Cold open (00:00:00)
    Luisa’s intro (00:00:58)
    Development consultant Karen Levy on why pushing for “sustainable” programmes isn’t as good as it sounds (00:02:15)
    Economist Dean Spears on the social forces and gender inequality that contribute to neonatal mortality in Uttar Pradesh (00:06:55)
    Charity founder Sarah Eustis-Guthrie on what we can learn from the massive failure of PlayPumps (00:14:33)
    Economist Rachel Glennerster on how randomised controlled trials are just one way to better understand tricky development problems (00:19:05)
    Data scientist Hannah Ritchie on why improving agricultural productivity in sub-Saharan Africa is critical to solving global poverty (00:24:36)
    Charity founder Lucia Coulter on the huge, neglected upsides of reducing lead exposure (00:47:48)
    Malaria expert James Tibenderana on using gene drives to wipe out the species of mosquitoes that cause malaria (00:53:11)
    Charity founder Varsha Venugopal on using village gossip to get kids their critical immunisations (01:04:14)
    Rachel Glennerster on solving tough global problems by creating the right incentives for innovation (01:11:31)
    Karen Levy on when governments should pay for programmes instead of NGOs (01:26:51)
    Open Philanthropy lead Alexander Berger on declining returns in global health, and finding and funding the most cost-effective interventions (01:29:40)
    GiveWell researcher James Snowden on making funding decisions with tricky moral weights (01:34:44)
    Lucia Coulter on “hits-based giving” approaches to funding global health and development projects (01:43:01)
    Rachel Glennerster on whether it’s better to fix problems in education with small-scale interventions versus systemic reforms (01:48:12)
    GiveDirectly cofounder Paul Niehaus on why it’s so important to give aid recipients a choice in how they spend their money (01:51:09)
    Sarah Eustis-Guthrie on whether more charities should scale back or shut down, and aligning incentives with beneficiaries (01:56:12)
    James Tibenderana on why we need loads better data to harness the power of AI to eradicate malaria (02:11:22)
    Lucia Coulter on rapidly scaling a light-touch intervention to more countries (02:20:14)
    Karen Levy on why pre-policy plans are so great at aligning perspectives (02:32:47)
    Rachel Glennerster on the value we get from doing the right RCTs well (02:40:04)
    Economist Mushtaq Khan on really drilling down into why “context matters” for development work (02:50:13)
    GiveWell cofounder Elie Hassenfeld on contrasting GiveWell’s approach with the subjective wellbeing approach of Happier Lives Institute (02:57:24)
    James Tibenderana on whether people actually use antimalarial bed nets for fishing — and why that’s the wrong thing to focus on (03:05:30)
    Karen Levy on working with governments to get big results (03:10:53)
    Leah Utyasheva on how a simple intervention reduced suicide in Sri Lanka by 70% (03:17:38)
    Karen Levy on working with academics to get the best results on the ground (03:29:03)
    James Tibenderana on the value of working with local researchers (03:32:15)
    Lucia Coulter on getting buy-in from both industry and government (03:35:05)
    Alexander Berger on reasons neartermist work makes sense even by longtermist standards (03:39:26)
    Economist Shruti Rajagopalan on the key skills to succeed in public policy careers, and seeing economics in everything (03:47:42)
    J-PAL lead Claire Walsh on her career advice for young people who want to get involved in global health and development (03:55:20)
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Content editing: Katy Moore and Milo McGuire
    Music: CORBIT
    Coordination, transcriptions, and web: Katy Moore

    What everyone is missing about Anthropic vs the Pentagon. And: The Meta leaks are worse than you think.

    April 3, 2026 | 20 min
    When the Pentagon tried to strong-arm Anthropic into dropping its ban on AI-only kill decisions and mass domestic surveillance, the company refused. Its critics went on the attack: Anthropic and its supporters are some combination of 'hypocritical', 'naive', and 'anti-democratic'. Rob Wiblin dissects each claim, finding that all three are mediocre arguments dressed up as hard truths. (Though the 'naive' one is at least interesting.)
    Watch on YouTube: What Everyone is Missing about Anthropic vs The Pentagon
    Plus, from 13:43: Leaked documents from Meta revealed that 10% of the company's total revenue — around $16 billion a year — came from ads for scams and goods Meta had itself banned. These likely enabled the theft of around $50 billion a year from Americans alone. But when an internal anti-fraud team developed a screening method that halved the rate of scams coming from China... well, it wasn't well received.
    Watch on YouTube: The Meta Leaks Are Worse Than You Think
    Chapters:
    Introduction (00:00:00)
    What Everyone is Missing about Anthropic vs The Pentagon (00:00:26)
    Charge 1: Hypocrisy (00:01:21)
    Charge 2: Naivety (00:04:55)
    Charge 3: Undemocratic (00:09:38)
    You don't have to debate on their terms (00:12:32)
    The Meta Leaks Are Worse Than You Think (00:13:43)
    Three fixes for social media's scam problem (00:16:48)
    We should regulate AI companies as strictly as banks (00:18:46)
    Video and audio editing: Dominic Armstrong and Simon Monsour
    Transcripts and web: Elizabeth Cox and Katy Moore

    AI codes viable genomes from scratch and outperforms virologists at lab work. What could go wrong? | Dr Richard Moulange, CLTR

    March 31, 2026 | 3 hr 7 min
    Last September, scientists used an AI model to design genomes for entirely new bacteriophages (viruses that infect bacteria). They then built them in a lab. Many were viable. And despite being entirely novel, some even outperformed existing viruses from that family.

    That alone is remarkable. But as today's guest — Dr Richard Moulange, one of the world's top experts on 'AI–Biosecurity' — explains, it's just one of many data points showing how AI is dissolving the barriers that have historically kept biological weapons out of reach.
    For years, experts have reassured us that 'tacit knowledge' — the hands-on, hard-to-Google lab skills needed to work with dangerous pathogens — would prevent bad actors from weaponising biology. So far, they've been right.

    But as of 2025 that reassurance is crumbling. The Virology Capabilities Test measures exactly this kind of troubleshooting expertise, and finds that modern AI models crush top human virologists even in their self-declared area of greatest specialisation — 45% to 22%.
    Meanwhile, Anthropic’s research shows PhD-level biologists getting meaningfully better at weapons-relevant tasks with AI assistance — with the effect growing with each new model generation.
    Richard joins host Rob Wiblin to discuss all that plus:
    What AI biology tools already exist
    Why mid-tier actors (not amateurs) are the ones getting the most dangerous boost
    The three main categories of defence we can pursue
    Whether there’s a plausible path to a world where engineered pandemics become a thing of the past
    This episode was recorded on January 16, 2026. Since recording this episode, Richard has been seconded to the UK Government — please note that the views expressed here are entirely his own.
    Links to learn more, video, and full transcript: https://80k.info/rm
    Announcements:
    Our new book is available to preorder: 80,000 Hours: How to have a fulfilling career that does good is written by our cofounder Benjamin Todd. It's a completely revised and updated edition of our existing career guide, with a major new section on AI, covering both the risks and the potential to steer it in a better direction, and how AI automation should affect your career planning and which skills to specialise in. Preorder now: https://geni.us/80000Hours
    We're hiring contract video editors for the podcast! For more information, check out the expression of interest page on the 80,000 Hours website: https://80k.info/video-editor
    Chapters:
    Cold open (00:00:00)
    Who's Richard Moulange? (00:00:31)
    AI can now design novel genomes (00:01:11)
    The end of the 'tacit knowledge' barrier (00:04:34)
    Are risks from bioterrorists overstated? (00:18:20)
    The 3 key disasters AI makes more likely (00:22:41)
    Which bad actors does AI help the most? (00:30:03)
    Experts are more scary than amateurs (00:41:17)
    Barriers to bioterrorists using AI (00:46:43)
    AI biorisks are sometimes dismissed (and that's a huge mistake) (00:48:54)
    Advanced AI biology tools we already have or will soon (01:04:10)
    Rob argues that the situation is hopeless (01:09:49)
    Intervention #1: Limit access (01:18:16)
    Intervention #2: Get AIs to refuse to help (01:32:58)
    Intervention #3: Surveillance and attribution (01:42:38)
    Intervention #4: Universal vaccines and antivirals (01:56:38)
    Intervention #5: Screen all orders for DNA (02:10:00)
    AI companies talk about def/acc more than they fund it (02:19:52)
    Can you build a profitable business solving this problem? (02:26:32)
    This doesn't have to interfere with useful science (much) (02:30:56)
    What are the best low-tech interventions? (02:33:01)
    Richard's top request for AI companies (02:37:59)
    Grok shows governments lack many legal levers (02:53:17)
    Best ways listeners can help fix AI-Bio (02:56:24)
    We might end all contagious disease in 20 years (03:03:37)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Camera operator: Jeremy Chevillotte
    Transcripts and web: Elizabeth Cox and Katy Moore

    #240 – Samuel Charap on how a Ukraine ceasefire could accidentally set Europe up for a bigger war

    March 24, 2026 | 1 hr 12 min
    Many people believe a ceasefire in Ukraine will leave Europe safer. But today's guest lays out how a deal could potentially generate insidious new risks — leaving us in a situation that's equally dangerous, just in different ways.
    That’s the counterintuitive argument from Samuel Charap, Distinguished Chair in Russia and Eurasia Policy at RAND. He’s not worried about a Russian blitzkrieg on Estonia. He forecasts instead a fragile peace that breaks down and drags in European neighbours; instability in Belarus prompting Russian intervention; hybrid sabotage operations that escalate through tit-for-tat responses.
    Samuel’s case isn’t that peace is bad, but that the Ukraine conflict has remilitarised Europe, made Russia more resentful, and collapsed diplomatic relations between the two. That’s a postwar environment primed for the kind of miscalculation that starts unintended wars.
    What he prescribes isn't a full peace treaty; it's a negotiated settlement that stops the killing and begins a longer negotiation, giving neither side exactly what it wants, but each just enough to deter renewed aggression. The dying stops on both sides and the flames of war fizzle out — hopefully.
    None of this is clean or satisfying: Russia invaded, committed war crimes, and is being offered a path back to partial normalcy. But Samuel argues that the alternatives — indefinite war or unstructured ceasefire — are much worse for Ukraine, Europe, and global stability.

    Links to learn more, video, and full transcript: https://80k.info/sc26
    This episode was recorded on February 27, 2026.
    Chapters:
    Cold open (00:00:00)
    Could peace in Ukraine lead to Europe’s next war? (00:00:47)
    Do Russia’s motives for war still matter? (00:11:41)
    What does a good ceasefire deal look like? (00:17:38)
    What’s still holding back a ceasefire (00:38:44)
    Why Russia might accept Ukraine’s EU membership (00:46:00)
    How to prevent a spiralling conflict with NATO (00:48:00)
    What’s next for nuclear arms control (00:49:57)
    Finland and Sweden strengthened NATO — but also raised the stakes for conflict (00:53:25)
    Putin isn’t Hitler: How to negotiate with autocrats (00:56:35)
    Why Russia still takes NATO seriously (01:02:01)
    Neither side wants to fight this war again (01:10:49)
    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Transcripts and web: Nick Stockton, Elizabeth Cox, and Katy Moore


About 80,000 Hours Podcast

The most important conversations about artificial intelligence you won’t hear anywhere else. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin, Luisa Rodriguez, and Zershaaneh Qureshi.