Really enjoyed chatting with Michael Nielsen about how we recognize scientific progress. It's especially relevant for closing the RL verification loop for scientific discovery. But it's also a surprisingly mysterious and elusive question when you look at the history of human science. We approach this question through stories like Einstein (who claimed that he hadn't even heard of the famous Michelson-Morley experiment, which is supposed to have motivated special relativity, until after he had come up with the theory), Darwin (why did it take till 1859 to lay out an idea whose essence every farmer since antiquity must have observed?), Prout (how do you recognize that isotopes exist if you cannot chemically separate them?), and many others. The verification loop on scientific ideas is often extremely long and weirdly hostile. Ancient Athenians dismissed Aristarchus's heliocentrism in the 3rd century BC because it would imply that the stars should shift in the sky as the Earth orbits the sun. The first successful measurement of stellar parallax was in 1838. That's a 2,000-year verification loop. But clearly human science is able to make progress faster than raw experimental falsification/verification would imply, and in cases where experiments are very ambiguous. How? Michael has some very deep and provocative hypotheses about the nature of progress. One I found especially thought-provoking is that aliens will likely have a VERY different science + tech stack than us. Which contradicts the common sense picture of a linear tech tree that I was assuming. And has some interesting implications about how future civilizations might trade and cooperate with each other. Watch on YouTube; read the transcript. Sponsors * Labelbox researchers built a new safety benchmark. Why? Well, current safety benchmarks claim that attacks on top models are successful only a few percent of the time, but the prompts in those benchmarks don’t reflect how real bad actors actually write. 
You can read Labelbox’s research here. If this could be useful for your work, reach out at labelbox.com/dwarkesh * Mercury has an MCP that lets you give an LLM access to your full transaction history, including things like attached receipts and internal notes. I just used it to categorize my 2025 transactions, and it worked shockingly well. Modern functionality like this is exactly why I use Mercury. Learn more at mercury.com * Jane Street’s ML engineers presented some of their GPU optimization workflows at GTC, showing how they use CUDA graphs, streams, and custom kernels to shave real time off their training runs. You can watch the full talk here. And they open-sourced all the relevant code here. If this kind of stuff excites you, Jane Street is hiring — learn more at janestreet.com/dwarkesh Timestamps (00:00:00) – How scientific progress outpaces its verification loops (00:17:51) – Newton was the last of the magicians (00:23:26) – Why wasn’t natural selection obvious much earlier? (00:29:52) – Could gradient descent have discovered general relativity? (00:50:54) – Why aliens will have a different tech stack than us (01:15:26) – Are there infinitely many deep scientific principles left to discover? (01:26:25) – What drew Michael to quantum computing so early? (01:35:29) – Does science need a new way to assign credit? (01:43:57) – Prolificness versus depth (01:49:17) – What it takes to actually internalize what you learn Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
We begin the episode with the absolutely ingenious and surprising way in which Kepler discovered the laws of planetary motion. People sometimes say that AI will make especially fast progress at scientific discovery because of tight verification loops. But the story of how we discovered the shape of our solar system shows how the verification loop for correct ideas can be decades (or even millennia) long. During this time, what we know today as the better theory can actually make worse predictions. And the reason it survives this epistemic hell is some mixture of judgment and heuristics that we don’t even understand well enough to actually articulate, much less codify into an RL loop. Hope you enjoy! Watch on YouTube; read the transcript. Sponsors - Jane Street loves challenging my audience with different creative puzzles. One of my listeners, Shawn, solved Jane Street’s ResNet challenge and posted a great walk-through on X. If you want to try one of these puzzles yourself, there’s one live now at janestreet.com/dwarkesh. - Labelbox can get you rubric-based evals, no matter your domain. These rubrics allow you to give your model feedback on all the dimensions you care about, so you can train how it thinks, not just what it thinks. Whatever you’re focused on—math, physics, finance, psychology or something else—Labelbox can help. Learn more at labelbox.com/dwarkesh. - Mercury just released a new feature called Insights. Insights summarizes your money in and out, showing you your biggest transactions and calling out anything worth paying attention to. It’s a super low-friction way to stay on top of your business. Learn more at mercury.com/insights. Timestamps (00:00:00) – Kepler was a high temperature LLM (00:11:44) – How would we know if there’s a new unifying concept within heaps of AI slop? 
(00:26:10) – The deductive overhang (00:30:31) – Selection bias in reported AI discoveries (00:46:43) – AI makes papers richer and broader, but not deeper (00:53:00) – If AI solves a problem, can humans get understanding out of it? (00:59:20) – We need a semi-formal language for the way that scientists actually talk to each other (01:09:48) – How Terry uses his time (01:17:05) – Human-AI hybrids will dominate math for a lot longer Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Dylan Patel, founder of SemiAnalysis, provides a deep dive into the 3 big bottlenecks to scaling AI compute: logic, memory, and power. And walks through the economics of labs, hyperscalers, foundries, and fab equipment manufacturers. Learned a ton about every single level of the stack. Enjoy! Watch on YouTube; read the transcript. Sponsors * Mercury has already saved me a bunch of time this tax season. Last year, I used Mercury to request W-9s from all the contractors I worked with. Then, when it came time to issue 1099s this year, I literally just clicked a button and Mercury sent them out. Learn more at mercury.com. * Labelbox noticed that even when voice models appear to take interruptions in stride, their performance degrades. To figure out why, they built a new evaluation pipeline called EchoChain. EchoChain diagnoses voice models’ specific failure modes, letting you understand what your model needs to truly handle interruptions. Check it out at labelbox.com/dwarkesh. * Jane Street is basically a research lab with a trading desk attached – and their infrastructure backs this up. They’ve got tens of thousands of GPUs, hundreds of thousands of CPU cores, and exabytes of storage. This is what it takes to find subtle signals hidden deep within noisy market data. If this sounds interesting, you can explore open positions at janestreet.com/dwarkesh. Timestamps (00:00:00) – Why an H100 is worth more today than 3 years ago (00:24:52) – Nvidia secured TSMC allocation early; Google is getting squeezed (00:34:34) – ASML will be the #1 constraint for AI compute scaling by 2030 (00:55:47) – Can't we just use TSMC's older fabs? (01:05:37) – When will China outscale the West in semis? (01:16:01) – The enormous incoming memory crunch (01:42:34) – Scaling power in the US will not be a problem (01:54:44) – Space GPUs aren't happening this decade (02:14:07) – Why aren't more hedge funds making the AGI trade? (02:18:30) – Will TSMC kick Apple out from N2? 
(02:24:16) – Robots and Taiwan risk Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Read the full essay here: https://www.dwarkesh.com/p/dow-anthropic Timestamps (00:00:00) - Anthropic vs The Pentagon (00:04:16) - The overhangs of tyranny (00:05:54) - AI structurally favors mass surveillance (00:08:25) - Alignment...to whom? (00:13:55) - Coordination not worth the costs Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Renaissance history is so much wilder and weirder than you would have expected. Very fun chatting with Ada Palmer (historian, novelist, and composer based at the University of Chicago). Some especially fascinating things I learned from the conversation and her excellent book, Inventing the Renaissance: Not only did Gutenberg go bankrupt in the 1450s (after inventing the printing press), but so did the bank that foreclosed on him, and so did his apprentices. This is because paper was still very expensive, and so you had to make this big upfront CAPEX decision to print a batch of 300 copies of a book - say the Bible. But he’s in a small landlocked German town where only priests are allowed to read the Bible - so he sells maybe 7 copies. It’s only when this technology ends up in Venice, where you can hand 10 copies to each of 30 ship captains going to 30 different cities, that it starts taking off. Speaking of which, the printing revolution wasn’t just one single discrete event, just as the computer revolution has been a decades-long progression from mainframes -> personal computers -> phones -> social media, each stage with different and accelerating social impact. Books came first, but they’re slow to print, and made in small batches. The real revolution is pamphlets - much faster, much harder to censor. Pamphlet runners are how you can have Luther’s 95 Theses go from Wittenberg to London in 17 days. So much other wild stuff from this episode. For example, did you know that the largest and best-funded experimental laboratory in 17th century Europe was very likely the Roman one run by inquisitors? Ada jokes that the Inquisition accidentally invented peer review. The focus of the Inquisition is really misunderstood - it was obsessed with catching dangerous new heretics like Lutherans and Calvinists - it only executed one person for doing science. 
And this leads Ada to make an observation that I think is really wise: the authorities and censors are always worried about the exact wrong things given 20/20 hindsight. When the Inquisition raids an underground bookshop during the French Enlightenment, they don’t mind the Rousseau, Voltaire, and Encyclopédie, but they lose their minds about some Jansenist treatises about the technical nature of the Trinity. More broadly, a lesson for me from this episode is that it’s just really hard to shape history in the specific way that you want. One of the most famous medieval scholars is this guy Petrarch. He survives the Black Death in the 1340s, watches his friends die to plague and bandits, and says: our leaders are selfish and terrible, we need to raise them on the Roman classics so they’ll act like Cicero. So Europe pours money into finding ancient manuscripts, building libraries, and educating princes on classical virtues. Those princes grow up and fight bigger, nastier wars than ever before with new, deadlier technology. And this, combined with greater urbanization and endemic plague, results in European life expectancy decreasing from 35 in the medieval period to 18 during the Renaissance (the period which we in retrospect think of as a golden age but which many people living through it thought of as the continuation of the dark ages that had persisted since the fall of Rome). Anyways, the libraries Petrarch inspires stick around, the printing press makes them accessible to everyone, and 200 years later a generation of medical students is reading Lucretius and asking “what if there are atoms and that’s how diseases work?” which eventually leads to germ theory, vaccines, and a cure for the Black Death (Ada has a longer, more involved explanation of how cosplaying the Romans leads, through a series of many steps, to the scientific revolution). Petrarch wanted to produce philosopher-kings that shared his values. 
Instead he created a world that doesn’t share his values at all but can cure the disease that destroyed his. Watch on YouTube; read the transcript. Sponsors * Jane Street is still waiting on someone to solve their backdoor puzzle… They’re accepting submissions until April 1st and have set aside $50,000 for the best attempts. Separately, applications are live for Jane Street’s summer ML internships in NY, London, and Hong Kong. Go check all of this out at janestreet.com/dwarkesh. * Labelbox can help ensure your agents don’t need to rely on overspecified prompts. They tailor real-world scenarios to whatever domain you’re focused on, and they make sure the data you train on rewards real understanding, not just instruction-following. Learn more at labelbox.com/dwarkesh * Mercury’s personal accounts let you add users, issue cards, and customize permissions. This is super useful for sharing finances with a partner, a roommate… or even an OpenClaw agent. And, if you’re already a Mercury Business user, your personal account is free! See terms and conditions below, and learn more at mercury.com/personal-banking Eligible Mercury Business users who apply for and maintain a Mercury Personal account may have their Mercury Personal subscription fee waived provided they remain a user on an active Mercury Business account in good standing. Standard Mercury Platform Subscription fees will apply if they no longer meet eligibility requirements, including but not limited to no longer being associated with an eligible Mercury Business account, or if the program is modified or terminated. Mercury may modify or discontinue this offering at any time and will provide notice as required by law. See Subscription Terms for full details. * To sponsor a future episode, visit dwarkesh.com/advertise. 
Timestamps (00:00:00) - How cosplaying Ancient Rome led to the Renaissance (00:28:49) - How Florence’s weird republic worked (00:38:13) - How the Medicis took over Florence (00:58:12) - Why it was so hard for Gutenberg to make any money off the printing press (01:17:34) - Why the industrial revolution didn’t happen in Italy (01:23:02) - The Library of Alexandria isn’t where most ancient books were lost (01:41:21) - The Inquisition accidentally invented peer review Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Dario Amodei thinks we are just a few years away from AGI — or as he puts it, from having “a country of geniuses in a data center”. In this episode, we discuss what to make of the scaling hypothesis in the current RL regime, why task-specific RL might lead to generalization, and how AI will diffuse throughout the economy. We also dive into Anthropic’s revenue projections, compute commitments, path to profitability, and more. Watch on YouTube; read the transcript. Sponsors * Labelbox can get you the RL tasks and environments you need. Their massive network of subject-matter experts ensures realism across domains, and their in-house tooling lets them continuously tweak task difficulty to optimize learning. Reach out at labelbox.com/dwarkesh. * Jane Street sent me another puzzle… this time, they’ve trained backdoors into 3 different language models — they want you to find the triggers. Jane Street isn’t even sure this is possible, but they’ve set aside $50,000 for the best attempts and write-ups. They’re accepting submissions until April 1st at janestreet.com/dwarkesh. * Mercury’s personal accounts make it easy to share finances with a partner, a roommate… or OpenClaw. Last week, I wanted to try OpenClaw for myself, so I used Mercury to spin up a virtual debit card with a small spend limit, and then I let my agent loose. No matter your use case, apply at mercury.com/personal-banking. Timestamps (00:00:00) - What exactly are we scaling? (00:12:36) - Is diffusion cope? (00:29:42) - Is continual learning necessary? (00:46:20) - If AGI is imminent, why not buy more compute? (00:58:49) - How will AI labs actually make profit? (01:31:19) - Will regulations destroy the boons of AGI? (01:47:41) - Why can’t China and America both have a country of geniuses in a datacenter? Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
In this episode, John and I got to do a real deep-dive with Elon. We discuss the economics of orbital data centers, the difficulties of scaling power on Earth, what it would take to manufacture humanoids at high-volume in America, xAI’s business and alignment plans, DOGE, and much more. Watch on YouTube; read the transcript. Sponsors * Mercury just started offering personal banking! I’m already banking with Mercury for business purposes, so getting to bank with them for my personal life makes everything so much simpler. Apply now at mercury.com/personal-banking * Jane Street sent me a new puzzle last week: they trained a neural net, shuffled all 96 layers, and asked me to put them back in order. I tried but… I didn’t quite nail it. If you’re curious, or if you think you can do better, you should take a stab at janestreet.com/dwarkesh * Labelbox can get you robotics and RL data at scale. Labelbox starts by helping you define your ideal data distribution, and then their massive Alignerr network collects frontier-grade data that you can use to train your models. Learn more at labelbox.com/dwarkesh Timestamps (00:00:00) - Orbital data centers (00:36:46) - Grok and alignment (00:59:56) - xAI’s business plan (01:17:21) - Optimus and humanoid manufacturing (01:30:22) - Does China win by default? (01:44:16) - Lessons from running SpaceX (02:20:08) - DOGE (02:38:28) - TeraFab Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Adam Marblestone is CEO of Convergent Research. He’s had a very interesting past life: he was a research scientist at Google Deepmind on their neuroscience team and has worked on everything from brain-computer interfaces to quantum computing to nanotech and even formal mathematics. In this episode, we discuss how the brain learns so much from so little, what the AI field can learn from neuroscience, and the answer to Ilya’s question: how does the genome encode abstract reward functions? Turns out, they’re all the same question. Watch on YouTube; read the transcript. Sponsors * Gemini 3 Pro recently helped me run an experiment to test multi-agent scaling: basically, if you have a fixed budget of compute, what is the optimal way to split it up across agents? Gemini was my colleague throughout the process — honestly, I couldn’t have investigated this question without it. Try Gemini 3 Pro today gemini.google.com * Labelbox helps you train agents to do economically-valuable, real-world tasks. Labelbox’s network of subject-matter experts ensures you get hyper-realistic RL environments, and their custom tooling lets you generate the highest-quality training data possible from those environments. Learn more at labelbox.com/dwarkesh To sponsor a future episode, visit dwarkesh.com/advertise. Timestamps (00:00:00) – The brain’s secret sauce is the reward functions, not the architecture (00:22:20) – Amortized inference and what the genome actually stores (00:42:42) – Model-based vs model-free RL in the brain (00:50:31) – Is biological hardware a limitation or an advantage? (01:03:59) – Why a map of the human brain is important (01:23:28) – What value will automating math have? (01:38:18) – Architecture of the brain Further reading Intro to Brain-Like-AGI Safety - Steven Byrnes’s theory of the learning vs steering subsystem; referenced throughout the episode. 
A Brief History of Intelligence - Great book by Max Bennett on connections between neuroscience and AI Adam’s blog, and Convergent Research’s blog on essential technologies. A Tutorial on Energy-Based Learning by Yann LeCun What Does It Mean to Understand a Neural Network? - Kording & Lillicrap E11 Bio and their brain connectomics approach Sam Gershman on what dopamine is doing in the brain Gwern’s proposal on training models on the brain’s hidden states Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Read the essay here. Timestamps 00:00:00 What are we scaling? 00:03:11 The value of human labor 00:05:04 Economic diffusion lag is cope 00:06:34 Goal-post shifting is justified 00:08:23 RL scaling 00:09:18 Broadly deployed intelligence explosion Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
This is the final episode of the Sarah Paine lecture series, and it’s probably my favorite one. Sarah gives a “tour of the arguments” on what ultimately led to the Soviet Union’s collapse, diving into the role of the US, the Sino-Soviet border conflict, the oil bust, ethnic rebellions and even the Roman Catholic Church. As she points out, this is all particularly interesting as we find ourselves potentially at the beginning of another Cold War. As we wrap up this lecture series, I want to take a moment to thank Sarah for doing this with me. It has been such a pleasure. If you want more of her scholarship, I highly recommend checking out the books she’s written. You can find them here. Watch on YouTube; read the transcript. Sponsors * Labelbox can get you the training data you need, no matter the domain. Their Alignerr network includes the STEM PhDs and coding experts you’d expect, but it also has experienced cinematographers and talented voice actors to help train frontier video and audio models. Learn more at labelbox.com/dwarkesh. * Sardine doesn’t just assess customer risk for banking & retail. Their AI risk management platform is also extremely good at detecting fraudulent job applications, which I’ve found useful for my own hiring process. If you need help with hiring risk—or any other type of fraud prevention—go to sardine.ai/dwarkesh. * Gemini’s Nano Banana Pro helped us make many of the visuals in this episode. For example, we used it to turn dense tables into clear charts so that it’d be easier to quickly understand the trends that Sarah discusses. You can try Nano Banana Pro now in the Gemini app. Go to gemini.google.com. Timestamps (00:00:00) – Did Reagan single-handedly win the Cold War? 
(00:15:53) – Eastern Bloc uprisings & oil crisis (00:30:37) – Gorbachev’s mistakes (00:37:33) – German unification and NATO expansion (00:48:31) – The Gulf War and the Cold War endgame (00:56:10) – How central planning survived so long (01:14:46) – Sarah’s life in the USSR in 1988 Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Ilya & I discuss SSI’s strategy, the problems with pre-training, how to improve the generalization of AI models, and how to ensure AGI goes well. Watch on YouTube; read the transcript. Sponsors * Gemini 3 is the first model I’ve used that can find connections I haven’t anticipated. I recently wrote a blog post on RL’s information efficiency, and Gemini 3 helped me think it all through. It also generated the relevant charts and ran toy ML experiments for me with zero bugs. Try Gemini 3 today at gemini.google * Labelbox helped me create a tool to transcribe our episodes! I’ve struggled with transcription in the past because I don’t just want verbatim transcripts, I want transcripts reworded to read like essays. Labelbox helped me generate the exact data I needed for this. If you want to learn how Labelbox can help you (or if you want to try out the transcriber tool yourself), go to labelbox.com/dwarkesh * Sardine is an AI risk management platform that brings together thousands of device, behavior, and identity signals to help you assess a user’s risk of fraud & abuse. Sardine also offers a suite of agents to automate investigations so that as fraudsters use AI to scale their attacks, you can use AI to scale your defenses. Learn more at sardine.ai/dwarkesh To sponsor a future episode, visit dwarkesh.com/advertise. Timestamps (00:00:00) – Explaining model jaggedness (00:09:39) - Emotions and value functions (00:18:49) – What are we scaling? (00:25:13) – Why humans generalize better than models (00:35:45) – SSI’s plan to straight-shot superintelligence (00:46:47) – SSI’s model will learn from deployment (00:55:07) – How to think about powerful AGIs (01:18:13) – “We are squarely an age of research company” (01:20:23) – Self-play and multi-agent (01:32:42) – Research taste Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
As part of this interview, Satya Nadella gave Dylan Patel (founder of SemiAnalysis) and me an exclusive first-look at their brand-new Fairwater 2 datacenter. Microsoft is building multiple Fairwaters, each of which has hundreds of thousands of GB200s & GB300s. Between all these interconnected buildings, they’ll have over 2 GW of total capacity. Just to give a frame of reference, even a single one of these Fairwater buildings is more powerful than any other AI datacenter that currently exists. Satya then answered a bunch of questions about how Microsoft is preparing for AGI across all layers of the stack. Watch on YouTube; read the transcript. Sponsors * Labelbox produces high-quality data at massive scale, powering any capability you want your model to have. Whether you’re building a voice agent, a coding assistant, or a robotics model, Labelbox gets you the exact data you need, fast. Reach out at labelbox.com/dwarkesh * CodeRabbit automatically reviews and summarizes PRs so you can understand changes and catch bugs in half the time. This is helpful whether you’re coding solo, collaborating with agents, or leading a full team. To learn how CodeRabbit integrates directly into your workflow, go to coderabbit.ai To sponsor a future episode, visit dwarkesh.com/advertise. Timestamps (00:00:00) - Fairwater 2 (00:03:20) - Business models for AGI (00:12:48) - Copilot (00:20:02) - Whose margins will expand most? (00:36:17) - MAI (00:47:47) - The hyperscale business (01:02:44) - In-house chip & OpenAI partnership (01:09:35) - The CAPEX explosion (01:15:07) - Will the world trust US companies to lead AI? Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
In this lecture, military historian Sarah Paine explains how Russia—and specifically Stalin—completely derailed China’s rise, slowing them down for over a century. This lecture was particularly interesting to me because, in my opinion, the Chinese Civil War is 1 of the top 3 most important events of the 20th century. And to understand why it transpired as it did, you need to understand Stalin’s role in the whole thing. Watch on YouTube; read the transcript. Sponsors Mercury helps you run your business better. It’s the banking platform we use for the podcast — we love that we can see our cash balance, AR, and AP all in one place. Join us (and over 200,000 other entrepreneurs) at mercury.com Labelbox scrutinizes public benchmarks at the single data-row level to probe what’s really being evaluated. Using this knowledge, they can generate custom training data for hill climbing existing benchmarks, or design new benchmarks from scratch. Learn more at labelbox.com/dwarkesh To sponsor a future episode, visit dwarkesh.com/advertise. Timestamps (00:00:00) – How Russia took advantage of China’s weakness (00:22:58) – After Stalin, China’s rise (00:33:52) – Russian imperialism (00:45:23) – China’s and Russia’s existential problems (01:04:55) – Q&A: Sino-Soviet Split (01:22:44) – Stalin’s lessons from WW2 Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
The Andrej Karpathy episode. During this interview, Andrej explains why reinforcement learning is terrible (but everything else is much worse), why AGI will just blend into the previous ~2.5 centuries of 2% GDP growth, why self driving took so long to crack, and what he sees as the future of education. It was a pleasure chatting with him. Watch on YouTube; read the transcript. Sponsors * Labelbox helps you get data that is more detailed, more accurate, and higher signal than you could get by default, no matter your domain or training paradigm. Reach out today at labelbox.com/dwarkesh * Mercury helps you run your business better. It’s the banking platform we use for the podcast — we love that we can see our accounts, cash flows, AR, and AP all in one place. Apply online in minutes at mercury.com * Google’s Veo 3.1 update is a notable improvement to an already great model. Veo 3.1’s generations are more coherent and the audio is even higher-quality. If you have a Google AI Pro or Ultra plan, you can try it in Gemini today by visiting https://gemini.google Timestamps (00:00:00) – AGI is still a decade away (00:29:45) – LLM cognitive deficits (00:40:05) – RL is terrible (00:49:38) – How do humans learn? (01:06:25) – AGI will blend into 2% GDP growth (01:17:36) – ASI (01:32:50) – Evolution of intelligence & culture (01:42:55) - Why self driving took so long (01:56:20) - Future of education Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Nick Lane has some pretty wild ideas about the evolution of life. He thinks early life was continuous with the spontaneous chemistry of undersea hydrothermal vents. Nick’s story may be wrong, but I find it remarkable that with just that starting point, you can explain so much about why life is the way that it is — the things you’re supposed to just take as givens in biology class: * Why are there two sexes? Why sex at all? * Why are bacteria so simple despite being around for 4 billion years? Why is there so much shared structure between all eukaryotic cells despite the enormous morphological variety between animals, plants, fungi, and protists? * Why did the endosymbiosis event that led to eukaryotes happen only once, and in the particular way that it did? * Why is all life powered by proton gradients? Why does all life on Earth share not only the Krebs Cycle, but even the intermediate molecules like Acetyl-CoA? His theory implies that early life is almost chemically inevitable (potentially blooming on hundreds of millions of planets in the Milky Way alone), and that the real bottleneck is the complex eukaryotic cell. Watch on YouTube; listen on Apple Podcasts or Spotify. Sponsors * Gemini in Sheets lets you turn messy text into structured data. We used it to classify all our episodes by type and topic, no manual tagging required. If you’re a Google Workspace user, you can get started today at docs.google.com/spreadsheets/ * Labelbox has a massive network of domain experts (called Alignerrs) who help train AI models in a way that ensures they understand the world deeply, not superficially. These Alignerrs are true experts — one even tutored me in chemistry as I prepped for this episode. Learn more at labelbox.com/dwarkesh * Lighthouse helps frontier technology companies like Cursor and Physical Intelligence navigate the U.S. immigration system and hire top talent from around the world. 
Lighthouse handles everything, maximizing the probability of visa approval while minimizing the work you have to do. Learn more at lighthousehq.com/employers To sponsor a future episode, visit dwarkesh.com/advertise. Timestamps (00:00:00) – The singularity that unlocked complex life (00:08:26) – Early life continuous with Earth's geochemistry (00:23:36) – Eukaryotes are the great filter for intelligent life (00:42:16) – Mitochondria are the reason we have sex (01:08:12) – Are bioelectric fields linked to consciousness? Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I have a much better understanding of Sutton’s perspective now. I wanted to reflect on it a bit. (00:00:00) - The steelman (00:02:42) - TLDR of my current thoughts (00:03:22) - Imitation learning is continuous with and complementary to RL (00:08:26) - Continual learning (00:10:31) - Concluding thoughts Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Richard Sutton is the father of reinforcement learning, winner of the 2024 Turing Award, and author of The Bitter Lesson. And he thinks LLMs are a dead end. After interviewing him, my steelman of Richard’s position is this: LLMs aren’t capable of learning on-the-job, so no matter how much we scale, we’ll need some new architecture to enable continual learning. And once we have it, we won’t need a special training phase — the agent will just learn on-the-fly — like all humans, and indeed, like all animals. This new paradigm will render our current approach with LLMs obsolete. In our interview, I did my best to represent the view that LLMs might function as the foundation on which experiential learning can happen… Some sparks flew. A big thanks to the Alberta Machine Intelligence Institute for inviting me up to Edmonton and for letting me use their studio and equipment. Enjoy! Watch on YouTube; listen on Apple Podcasts or Spotify. Sponsors * Labelbox makes it possible to train AI agents in hyperrealistic RL environments. With an experienced team of applied researchers and a massive network of subject-matter experts, Labelbox ensures your training reflects important, real-world nuance. Turn your demo projects into working systems at labelbox.com/dwarkesh * Gemini Deep Research is designed for thorough exploration of hard topics. For this episode, it helped me trace reinforcement learning from early policy gradients up to current-day methods, combining clear explanations with curated examples. Try it out yourself at gemini.google.com * Hudson River Trading doesn’t silo their teams. Instead, HRT researchers openly trade ideas and share strategy code in a mono-repo. This means you’re able to learn at incredible speed and your contributions have impact across the entire firm. Find open roles at hudsonrivertrading.com/dwarkesh Timestamps (00:00:00) – Are LLMs a dead-end? (00:13:04) – Do humans do imitation learning? 
(00:23:10) – The Era of Experience (00:33:39) – Current architectures generalize poorly out of distribution (00:41:29) – Surprises in the AI field (00:46:41) – Will The Bitter Lesson still apply post AGI? (00:53:48) – Succession to AIs Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Sergey Levine, one of the world’s top robotics researchers and co-founder of Physical Intelligence, thinks we’re on the cusp of a “self-improvement flywheel” for general-purpose robots. His median estimate for when robots will be able to run households entirely autonomously? 2030. If Sergey’s right, the world 5 years from now will be an insanely different place than it is today. This conversation focuses on understanding how we get there: we dive into foundation models for robotics, and how we scale both the data and the hardware necessary to enable a full-blown robotics explosion. Watch on YouTube; listen on Apple Podcasts or Spotify. Sponsors * Labelbox provides high-quality robotics training data across a wide range of platforms and tasks. From simple object handling to complex workflows, Labelbox can get you the data you need to scale your robotics research. Learn more at labelbox.com/dwarkesh * Hudson River Trading uses cutting-edge ML and terabytes of historical market data to predict future prices. I got to try my hand at this fascinating prediction problem with help from one of HRT’s senior researchers. If you’re curious about how it all works, go to hudson-trading.com/dwarkesh * Gemini 2.5 Flash Image (aka nano banana) isn’t just for generating fun images — it’s also a powerful tool for restoring old photos and digitizing documents. Test it yourself in the Gemini App or in Google’s AI Studio: ai.studio/banana To sponsor a future episode, visit dwarkesh.com/advertise. Timestamps (00:00:00) – Timeline to widely deployed autonomous robots (00:22:12) – Why robotics will scale faster than self-driving cars (00:32:15) – How vision-language-action models work (00:50:26) – Improvements needed for brainlike efficiency (01:02:48) – Learning from simulation (01:14:08) – How much will robots speed up AI buildouts? (01:22:54) – If hardware’s the bottleneck, does China win by default? Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
In this lecture, military historian Sarah Paine explains how Britain used sea control, peripheral campaigns, and alliances to defeat Nazi Germany during WWII. She then applies this framework to today, arguing that Russia and China are similarly constrained by their geography, making them vulnerable in any conflict with maritime powers (like the U.S. and its allies). Watch on YouTube; listen on Apple Podcasts or Spotify. Sponsors * Labelbox partners with researchers to scope, generate, and deliver the exact data frontier models need, no matter the domain. Whether that’s multi-turn audio, SOTA robotics data, advanced STEM problem sets, or even novel RL environments, Labelbox delivers high-quality data, fast. Learn more at labelbox.com/dwarkesh * Warp is the best interface I’ve found for coding with agents. It makes building custom tools easy: Warp’s UI helps you understand agent behavior and its in-line text editor is great for making tweaks. You can try Warp for free, or, for a limited time, use code DWARKESH to get Warp’s Pro Plan for only $5. Go to warp.dev/dwarkesh To sponsor a future episode, visit dwarkesh.com/advertise. Timestamps 00:00:00 – How WW1 shaped WW2 00:15:10 – Hitler and Churchill’s battle to command the Atlantic 00:30:10 – Peripheral theaters leading up to Normandy 00:37:13 – The Eastern front 00:48:04 – Russia’s & China’s geographic prisons 01:00:28 – Hitler’s blunders & America’s industrial might 01:15:03 – Bismarck’s limited wars vs Hitler’s total war Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Jacob Kimmel thinks he can find the transcription factors to reverse aging. We do a deep dive on why this might be plausible and why evolution hasn’t optimized for longevity. We also talk about why drug discovery has been getting exponentially harder, and what a new platform for biological understanding that could speed up progress would look like. As a bonus, we get into the nitty-gritty of gene delivery and Jacob’s controversial takes on CAR-T cells. For full disclosure, I am an angel investor in NewLimit. This did not impact my decision to interview Jacob, nor the questions I asked him. Watch on YouTube; listen on Apple Podcasts or Spotify. SPONSORS * Hudson River Trading uses deep learning to tackle one of the world's most complex systems: global capital allocation. They have a massive in-house GPU cluster, and they’re constantly adding new racks of B200s to ensure their researchers are never constrained by compute. Explore opportunities at hudsonrivertrading.com/dwarkesh * Google’s Gemini CLI turns ideas into working applications FAST, no coding required. It built a complete podcast post-production tool in 10 minutes, including fully functional backend logic, and the entire build used less than 10% of Gemini’s session context. Check it out on Github now! * To sponsor a future episode, visit dwarkesh.com/advertise. TIMESTAMPS (00:00:00) – Three reasons evolution didn’t optimize for longevity (00:12:07) – Why didn't humans evolve their own antibiotics? (00:25:26) – De-aging cells via epigenetic reprogramming (00:44:43) – Viral vectors and other delivery mechanisms (01:06:22) – Synthetic transcription factors (01:09:31) – Can virtual cells break Eroom’s Law? (01:31:32) – Economic models for pharma Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
How will we meet the 100s of GWs of extra energy demand that AI will create over the coming decade? On this episode, Casey Handmer (Caltech PhD, former NASA JPL, founder & CEO of Terraform Industries) walks me through how we can pull it off, and why he thinks a major part of this energy singularity will be powered by solar. His views are contrarian, but he came armed to defend them. Watch on YouTube; listen on Apple Podcasts or Spotify. SPONSORS - Lighthouse helps frontier technology companies like Cursor and Physical Intelligence navigate the U.S. immigration system and hire top talent from around the world. Lighthouse handles everything for you, maximizing the probability of visa approval while minimizing the work you have to do. Learn more at lighthousehq.com/employers - To sponsor a future episode, visit dwarkesh.com/advertise. TIMESTAMPS (00:00:00) – Why doesn’t China win by default? (00:08:28) – Why hyperscalers choose natural gas over solar (00:18:01) – Solar's astonishing learning rates (00:27:02) – How to build 50,000 acre solar-powered data centers (00:40:24) – Environmental regulations blocking clean energy (00:44:04) – Batteries replacing the grid (00:49:14) – GDP is broken, AGI's true value must be measured in total energy use (00:58:45) – Silicon wafers in space with one mind each Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
A deep dive with Lewis Bollard, who leads Open Philanthropy’s strategy for Farmed Animal Welfare, on the surprising economics of the meat industry. Why is factory farming so efficient? How can we make the lives of the 23+ billion animals living on factory farms more bearable? How far off are the moonshots (e.g., brainless chickens, cultivated meats, etc.) to end this mass suffering? And why does the meat industry have such a surprising amount of political influence? For decades, innovation in the meat industry has actually made the conditions for animals worse. Can the next few decades of tech reverse this pattern? Watch on YouTube; listen on Apple Podcasts or Spotify. Donation match fundraiser The welfare of animals on factory farms is so systemically neglected that just $1 can help avert 10 years of animal suffering. After learning more about the outsized opportunities to help, I decided to give $250,000 as a donation match to farmkind.giving/dwarkesh. FarmKind directs your contributions to the most effective charities in this area. Please consider contributing, even if it’s a small amount. Together, we can double each other's impact and give a total of $500,000. Bluntly, there are some listeners who are in a position to give much more. Given how neglected this topic is, one such person could singlehandedly change the game for 10s of billions of animals. If you’re considering donating $50k or more, please reach out directly to Lewis and his team by emailing [email protected]. 
Timestamps (00:00:00) – The astonishing efficiency of factory farming (00:07:18) – It was a mistake making this about diet (00:09:54) – Tech that’s sparing 100s of millions of animals/year (00:16:16) – Brainless chickens and higher welfare breeds (00:28:21) – $1 can prevent 10 years of animal suffering (00:37:26) – Situation in China and the developing world (00:41:41) – How the meat lobby got a lock on Congress (00:53:23) – Business structure of the meat industry (00:57:42) – Corporate campaigns are underrated Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
After my last lecture series with Sarah Paine ended, I still had so many questions. I knew we’d only scratched the surface of Sarah’s scholarship, so I immediately invited her back for another series: she graciously agreed, and we’ll be releasing the results online over the coming weeks and months! This first lecture is focused on the balance of power in East Asia at the turn of the 20th century. Specifically, how did Japan (population 47M) defeat China (400M) and Russia (130M) to become Asia's dominant power? For me, the most interesting thing was that Japan's surprise attack on Port Arthur at the beginning of the Russo-Japanese War (1904) helps us understand why Japan might have thought Pearl Harbor would work. Watch on YouTube; listen on Apple Podcasts or Spotify. Sponsors * Google’s Veo 3 helps us visualize the hypothetical scenarios that often come up during our interviews. Veo’s ability to generate both video and audio—all with incredible realism—makes it perfect for bringing our content to life. If you have a Google AI Pro or Ultra plan, you can try it in Gemini today by visiting gemini.google. * Hudson River Trading is one of the world's top quantitative trading firms. Responsible for around 15% of all U.S. equities trading volume, HRT powers their trades with cutting-edge deep learning models. Their in-house AI team does fundamental ML research and then applies it to some of the most competitive markets in the world. If you’re interested in joining them, you can learn more at hudsonrivertrading.com/dwarkesh. To sponsor a future episode, visit dwarkesh.com/advertise. Timestamps (00:00:00) – Japan’s Meiji reforms (00:14:44) – Trans-Siberian railway & Japan’s 3-year window for empire (00:29:58) – The most important battle in the Russo-Japanese war (00:48:38) – China’s implosion: imperialism, civil wars, and opium (00:59:31) – Was Russia on track to dominate Asia? 
(01:14:20) – Pearl Harbor (1941) vs surprise attack of Port Arthur (1904) (01:34:03) – Why big countries still lose wars (01:46:56) – Grand strategy for small countries Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
The Stephen Kotkin episode. Kotkin is arguably the world’s foremost expert on Joseph Stalin and has written a massive 2-volume biography on him (with a 3rd volume in the works). No other individual had a more profound impact on the 20th century than Stalin. He held the power of life and death over every single person across 11 time zones, and he killed tens of millions of people, utterly consumed by an ideology aimed at building paradise on Earth. And, he was one half of the biggest and most consequential military confrontation in history (even if Hitler didn’t prove to be his match). Watch on YouTube; listen on Apple Podcasts or Spotify. Sponsors * Lighthouse is THE fastest immigration solution for the technology industry. All they need is your resume or LinkedIn profile to tell you which visas you’re most eligible for, and they’ll send you this eligibility document for free, no commitment required. Get started today at https://www.lighthousehq.com/ref/Dwarkesh. To sponsor a future episode, visit dwarkesh.com/advertise. Timestamps (00:00:00) – Was the tsarist regime the lesser of 2 evils? (00:23:45) – The peasants brought Lenin to power, then he enslaved them (00:37:38) – Why did so many go along with enforced famine and the Great Terror? (01:02:26) – Today’s leftist civil war (01:13:01) – Doesn’t CCP deserve credit for China's growth? (01:35:13) – Why didn't somebody just kill Stalin? (01:52:45) – Overcoming the pathologies of communism with tech: USSR vs China Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I’ve had a lot of discussions on my podcast where we haggle out timelines to AGI. Some guests think it’s 20 years away; others, 2 years. Here’s an audio version of where my thoughts stand as of June 2025. If you want to read the original post, you can check it out here. Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
George Church is the godfather of modern synthetic biology and has been involved with basically every major biotech breakthrough in the last few decades. Professor Church thinks that these improvements (e.g., orders of magnitude decrease in sequencing & synthesis costs, precise gene editing tools like CRISPR, AlphaFold-type AIs, & the ability to conduct massively parallel multiplex experiments) have put us on the verge of some massive payoffs: de-aging, de-extinction, biobots that combine the best of human and natural engineering, and (unfortunately) weaponized mirror life. Watch on YouTube; listen on Apple Podcasts or Spotify. Sponsors * WorkOS Radar ensures your product is ready for AI agents. Radar is an anti-fraud solution that categorizes different types of automated traffic, blocking harmful bots while allowing helpful agents. Future-proof your roadmap today at workos.com/radar. * Scale is building the infrastructure for smarter, safer AI. In addition to their Data Foundry, they recently released Scale Evaluation, a tool that diagnoses model limitations. Learn how Scale can help you push the frontier at scale.com/dwarkesh. * Gemini 2.5 Pro was invaluable during our prep for this episode: it perfectly explained complex biology and helped us understand the most important papers. Gemini’s recently improved structure and style also made using it surprisingly enjoyable. Start building with it today at https://aistudio.google.com To sponsor a future episode, visit dwarkesh.com/advertise. Timestamps (0:00:00) – Aging solved by 2050 (0:07:37) – Finding the master switch for any trait (0:19:50) – Weaponized mirror life (0:30:40) – Why hasn’t sequencing/synthesis led to biotech revolution? (0:50:26) – Impact of AGI on biology research progress (1:00:35) – Biobots that use the best of biological and human engineering (1:05:09) – Odds of life in universe (1:09:57) – Is DNA the ultimate data storage? 
(1:13:55) – Curing rare diseases with genetic counseling (1:22:23) – NIH & NSF budget cuts (1:25:26) – How one lab spawned 100 biotech companies Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Arthur Kroeber is a leading researcher on Chinese tech and macro, a founding partner at Gavekal Dragonomics, and author of "China's Economy: What Everyone Needs to Know." It's the most useful, detailed resource I've found on how China actually works. On this episode, we discuss how China achieved high-tech manufacturing dominance, and where they'll go from here. By Arthur’s account, the Chinese government is like a giant VC fund: they decide on key priorities and then spend hundreds of billions of dollars subsidizing ruthless competition at the local level. They are willing to lose huge amounts of money for a few of their bets to pay off: at China’s scale, effectiveness matters more than efficiency. There's also a growing bipartisan consensus that we need to combat China's rise. This doesn’t make much sense to me. China is a big, powerful country at the frontier in many fields, and its economy is intricately tied in with our own. Being instinctively adversarial is both unsustainable and risky. Arthur and I discuss how we can create a productive, mutually beneficial version of this relationship. Watch on YouTube; listen on Apple Podcasts or Spotify. Sponsors * Scale is building the infrastructure for smarter, safer AI. In addition to their Data Foundry, they recently released Scale Evaluation, a tool that diagnoses model limitations. Learn how Scale can help you push the frontier at scale.com/dwarkesh. * WorkOS Radar ensures your product’s free trials go to actual users. Radar uses 80+ signals to distinguish malicious bots from real people, eliminating costly free-tier abuse. See why companies like Cursor, Perplexity, and OpenAI use Radar by visiting workos.com/radar. * Lighthouse is THE fastest immigration solution for the technology industry. They help you understand your options and navigate applications for expert visas like the O-1A and EB-1A. Explore which visa is right for you at https://www.lighthousehq.com/ref/Dwarkesh. 
To sponsor a future episode, visit dwarkesh.com/advertise. Timestamps (00:00:00) – We should reconcile with China (00:21:21) – BYD, Tesla, & Chinese EV industry (00:36:05) – Will China have a Japan-style financial crisis? (00:44:39) – Local debt situation is manageable (00:57:28) – If CCP is so competent, why isn’t China richer? (01:05:08) – How China keeps tech under control (01:33:45) – Does China win AI? (01:43:34) – Communication with China key for AI safety (02:10:08) – What foreigners get wrong about China (02:17:32) – China-US relationship future Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Ken Rogoff is the former chief economist of the IMF, a professor of Economics at Harvard, and author of This Time Is Different and the newly released Our Dollar, Your Problem. On this episode, Ken predicts that, within the next decade, the US will have a debt-induced inflation crisis, but not a Japan-type financial crisis (the latter is much worse, and can make a country poorer for generations). Ken also explains how China is trapped: in order to solve their current problems, they’ll keep leaning on financial repression and state-directed investment, which only makes their situation worse. We also discuss the erosion of dollar dominance, why there will be a rebalancing toward foreign equities, how AGI will impact the deficit and interest rates, and much more! Watch on YouTube; listen on Apple Podcasts or Spotify. Sponsors * WorkOS gives your product all the features that enterprise customers need, without derailing your roadmap. Skip months of engineering effort and start selling to enterprises today at workos.com. * Scale is building the infrastructure for smarter, safer AI. In addition to their Data Foundry, they recently released Scale Evaluation, a tool that diagnoses model limitations. Learn how Scale can help you push the frontier at scale.com/dwarkesh. * Gemini Live API lets you have natural, real-time interactions with Gemini. You can talk to it like you were talking to another person, stream video to show it your surroundings, and share your screen to give it context. Try it now by clicking the “Stream” tab on ai.dev. To sponsor a future episode, visit dwarkesh.com/advertise. Timestamps (00:00:00) – China is stagnating (00:25:46) – How the US broke Japan's economy (00:37:06) – America's inflation crisis is coming (01:02:20) – Will AGI solve the US deficit? (01:07:11) – Why interest rates will go up (01:10:55) – US equities will underperform (01:22:24) – The erosion of dollar dominance Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
On this episode, I chat with Victor Shih about all things China. We discuss China’s massive local debt crisis, the CCP’s views on AI, what happens after Xi, and more. Victor Shih is an expert on the Chinese political system, as well as their banking and fiscal policies, and he has amassed more biographical data on the Chinese elite than anyone else in the world. He teaches at UC San Diego, where he also directs the 21st Century China Center. Watch on YouTube; listen on Apple Podcasts or Spotify. Sponsors * Scale is building the infrastructure for smarter, safer AI. In addition to their Data Foundry, they just released Scale Evaluation, a tool that diagnoses model limitations. Learn how Scale can help you push the frontier at scale.com/dwarkesh. * WorkOS is how top AI companies ship critical enterprise features without burning months of engineering time. If you need features like SSO, audit logs, or user provisioning, head to workos.com. To sponsor a future episode, visit dwarkesh.com/advertise. Timestamps (00:00:00) – Is China more decentralized than the US? (00:03:16) – How the Politburo Standing Committee makes decisions (00:21:07) – Xi’s right hand man in charge of AGI (00:35:37) – DeepSeek was trained to track CCP policy (00:45:35) – Local government debt crisis (00:50:00) – BYD, CATL, & financial repression (00:58:12) – How corruption leads to overbuilding (01:10:46) – Probability of Taiwan invasion (01:18:56) – Succession after Xi (01:25:10) – Future growth forecasts Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
New episode with my good friends Sholto Douglas & Trenton Bricken. Sholto focuses on scaling RL and Trenton researches mechanistic interpretability, both at Anthropic. We talk through what’s changed in the last year of AI research; the new RL regime and how far it can scale; how to trace a model’s thoughts; and how countries, workers, and students should prepare for AGI. See you next year for v3. Here’s last year’s episode, btw. Enjoy! Watch on YouTube; listen on Apple Podcasts or Spotify. ---------- SPONSORS * WorkOS ensures that AI companies like OpenAI and Anthropic don't have to spend engineering time building enterprise features like access controls or SSO. It’s not that they don't need these features; it's just that WorkOS gives them battle-tested APIs that they can use for auth, provisioning, and more. Start building today at workos.com. * Scale is building the infrastructure for safer, smarter AI. Scale’s Data Foundry gives major AI labs access to high-quality data to fuel post-training, while their public leaderboards help assess model capabilities. They also just released Scale Evaluation, a new tool that diagnoses model limitations. If you’re an AI researcher or engineer, learn how Scale can help you push the frontier at scale.com/dwarkesh. * Lighthouse is THE fastest immigration solution for the technology industry. They specialize in expert visas like the O-1A and EB-1A, and they’ve already helped companies like Cursor, Notion, and Replit navigate U.S. immigration. Explore which visa is right for you at lighthousehq.com/ref/Dwarkesh. To sponsor a future episode, visit dwarkesh.com/advertise. ---------- TIMESTAMPS (00:00:00) – How far can RL scale? (00:16:27) – Is continual learning a key bottleneck? (00:31:59) – Model self-awareness (00:50:32) – Taste and slop (01:00:51) – How soon to fully autonomous agents? 
(01:15:17) – Neuralese (01:18:55) – Inference compute will bottleneck AGI (01:23:01) – DeepSeek algorithmic improvements (01:37:42) – Why are LLMs ‘baby AGI’ but not AlphaZero? (01:45:38) – Mech interp (01:56:15) – How countries should prepare for AGI (02:10:26) – Automating white collar work (02:15:35) – Advice for students Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Based on my essay about AI firms. Huge thanks to Petr and his team for bringing this to life! Watch on YouTube. Thanks to Google for sponsoring. We used their Veo 2 model to make this entire video—it generated everything from the photorealistic humans to the claymation octopuses. If you’re a Gemini Advanced user, you can try Veo 2 now in the Gemini app. Just select Veo 2 in the dropdown, and type your video idea in the prompt bar. Get started today by going to gemini.google.com. To sponsor a future episode, visit dwarkesh.com/advertise. Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Zuck on: * Llama 4, benchmark gaming * Intelligence explosion, business models for AGI * DeepSeek/China, export controls, & Trump * Orion glasses, AI relationships, and preventing reward-hacking from our tech. Watch on Youtube; listen on Apple Podcasts and Spotify. ---------- SPONSORS * Scale is building the infrastructure for safer, smarter AI. Scale’s Data Foundry gives major AI labs access to high-quality data to fuel post-training, while their public leaderboards help assess model capabilities. They also just released Scale Evaluation, a new tool that diagnoses model limitations. If you’re an AI researcher or engineer, learn how Scale can help you push the frontier at scale.com/dwarkesh. * WorkOS Radar protects your product against bots, fraud, and abuse. Radar uses 80+ signals to identify and block common threats and harmful behavior. Join companies like Cursor, Perplexity, and OpenAI that have eliminated costly free-tier abuse by visiting workos.com/radar. * Lambda is THE cloud for AI developers, with over 50,000 NVIDIA GPUs ready to go for startups, enterprises, and hyperscalers. By focusing exclusively on AI, Lambda provides cost-effective compute supported by true experts, including a serverless API serving top open-source models like Llama 4 or DeepSeek V3-0324 without rate limits, and available for a free trial at lambda.ai/dwarkesh. To sponsor a future episode, visit dwarkesh.com/p/advertise. ---------- TIMESTAMPS (00:00:00) – How Llama 4 compares to other models (00:11:34) – Intelligence explosion (00:26:36) – AI friends, therapists & girlfriends (00:35:10) – DeepSeek & China (00:39:49) – Open source AI (00:54:15) – Monetizing AGI (00:58:32) – The role of a CEO (01:02:04) – Is big tech aligning with Trump? (01:07:10) – 100x productivity Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
800 years before the Black Death, the very same bacteria ravaged Rome, killing 60%+ of the population in many areas. Also, back-to-back volcanic eruptions caused a mini Ice Age, leaving Rome devastated by famine and disease. I chatted with historian Kyle Harper about this and much else: * Rome as a massive slave society * Why humans are more disease-prone than other animals * How agriculture made us physically smaller (Caesar at 5'5" was considered tall) Watch on Youtube; listen on Apple Podcasts or Spotify. ---------- SPONSORS * WorkOS makes it easy to become enterprise-ready. They have APIs for all the most common enterprise requirements—things like authentication, permissions, and encryption—so you can quickly plug them in and get back to building your core product. If you want to make your product enterprise-ready, join companies like Cursor, Perplexity and OpenAI, and head to workos.com. * Scale’s Data Foundry gives major AI labs access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you’re an AI researcher or engineer, learn how Scale’s Data Foundry and research lab, SEAL, can help you go beyond the current frontier of capabilities at scale.com/dwarkesh To sponsor a future episode, visit dwarkesh.com/advertise. ---------- KYLE'S BOOKS * The Fate of Rome: Climate, Disease, and the End of an Empire * Plagues upon the Earth: Disease and the Course of Human History * Slavery in the Late Roman World, AD 275-425 ---------- TIMESTAMPS (00:00:00) - Plague's impact on Rome's collapse (00:06:24) - Rome's little Ice Age (00:11:51) - Why did progress stall in Rome's Golden Age? (00:23:55) - Slavery in Rome (00:36:22) - Was agriculture a mistake? (00:47:42) - Disease's impact on cognitive function (00:59:46) - Plague in India and Central Asia (01:05:16) - The next pandemic (01:16:48) - How Kyle uses LLMs (01:18:51) - De-extinction of lost species Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Ege Erdil and Tamay Besiroglu have 2045+ timelines, think the whole "alignment" framing is wrong, don't think an intelligence explosion is plausible, but are convinced we'll see explosive economic growth (economy literally doubling every year or two). This discussion offers a totally different scenario than my recent interview with Scott and Daniel. Ege and Tamay are the co-founders of Mechanize (disclosure - I’m an angel investor), a startup dedicated to fully automating work. Before founding Mechanize, Ege and Tamay worked on AI forecasts at Epoch AI. Watch on Youtube; listen on Apple Podcasts or Spotify. ---------- Sponsors * WorkOS makes it easy to become enterprise-ready. With simple APIs for essential enterprise features like SSO and SCIM, WorkOS helps companies like Vercel, Plaid, and OpenAI meet the requirements of their biggest customers. To learn more about how they can help you do the same, visit workos.com * Scale’s Data Foundry gives major AI labs access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you’re an AI researcher or engineer, learn about how Scale’s Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh * Google's Gemini Pro 2.5 is the model we use the most at Dwarkesh Podcast: it helps us generate transcripts, identify interesting clips, and code up new tools. If you want to try it for yourself, it's now available in Preview with higher rate limits! Start building with it today at aistudio.google.com. ---------- Timestamps (00:00:00) - AGI will take another 3 decades (00:22:27) - Even reasoning models lack animal intelligence (00:45:04) - Intelligence explosion (01:00:57) - Ege & Tamay’s story (01:06:24) - Explosive economic growth (01:33:00) - Will there be a separate AI economy? (01:47:08) - Can we predictably influence the future? (02:19:48) - Arms race dynamic (02:29:48) - Is superintelligence a real thing? 
(02:35:45) - Reasons not to expect explosive growth (02:49:00) - Fully automated firms (02:54:43) - Will central planning work after AGI? (02:58:20) - Career advice Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Scott and Daniel break down every month from now until the 2027 intelligence explosion. Scott Alexander is author of the highly influential blogs Slate Star Codex and Astral Codex Ten. Daniel Kokotajlo resigned from OpenAI in 2024, rejecting a non-disparagement clause and risking millions in equity to speak out about AI safety. We discuss misaligned hive minds, Xi and Trump waking up, and automated Ilyas researching AI progress. I came in skeptical, but I learned a tremendous amount by bouncing my objections off of them. I highly recommend checking out their new scenario planning document, AI 2027 Watch on Youtube; listen on Apple Podcasts or Spotify. ---------- Sponsors * WorkOS helps today’s top AI companies get enterprise-ready. OpenAI, Cursor, Perplexity, Anthropic and hundreds more use WorkOS to quickly integrate features required by enterprise buyers. To learn more about how you can make the leap to enterprise, visit workos.com * Jane Street likes to know what's going on inside the neural nets they use. They just released a black-box challenge for Dwarkesh listeners, and I had a blast trying it out. See if you have the skills to crack it at janestreet.com/dwarkesh * Scale’s Data Foundry gives major AI labs access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you’re an AI researcher or engineer, learn about how Scale’s Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh To sponsor a future episode, visit dwarkesh.com/advertise. ---------- Timestamps (00:00:00) - AI 2027 (00:06:56) - Forecasting 2025 and 2026 (00:14:41) - Why LLMs aren't making discoveries (00:24:33) - Debating intelligence explosion (00:49:45) - Can superintelligence actually transform science? 
(01:16:54) - Cultural evolution vs superintelligence (01:24:05) - Mid-2027 branch point (01:32:30) - Race with China (01:44:47) - Nationalization vs private anarchy (02:03:22) - Misalignment (02:14:52) - UBI, AI advisors, & human future (02:23:00) - Factory farming for digital minds (02:26:52) - Daniel leaving OpenAI (02:35:15) - Scott's blogging advice Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I recorded an AMA! I had a blast chatting with my friends Trenton Bricken and Sholto Douglas. We discussed my new book, career advice given AGI, how I pick guests, how I research for the show, and some other nonsense. My book, “The Scaling Era: An Oral History of AI, 2019-2025” is available in digital format now. Preorders for the print version are also open! Watch on YouTube; listen on Apple Podcasts or Spotify. Timestamps (0:00:00) - Book launch announcement (0:04:57) - AI models not making connections across fields (0:10:52) - Career advice given AGI (0:15:20) - Guest selection criteria (0:17:19) - Choosing to pursue the podcast long-term (0:25:12) - Reading habits (0:31:10) - Beard deepdive (0:33:02) - Who is best suited for running an AI lab? (0:35:16) - Preparing for fast AGI timelines (0:40:50) - Growing the podcast Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Humans have not succeeded because of our raw intelligence. Marooned European explorers regularly starved to death in areas where foragers thrived for thousands of years. I’ve always found cultural evolution deeply mysterious. How do you discover the 10 steps for processing cassava so it won’t give you cyanide poisoning simply by trial and error? Has the human brain declined in size over the last 10,000 years because we outsourced cultural evolution to a larger collective brain? The most interesting part of the podcast is Henrich’s explanation of how the Catholic Church unintentionally instigated the Industrial Revolution through the dismantling of intensive kinship systems in medieval Europe. Watch on Youtube; listen on Apple Podcasts or Spotify. ---------- Sponsors Scale partners with major AI labs like Meta, Google Deepmind, and OpenAI. Through Scale’s Data Foundry, labs get access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you’re an AI researcher or engineer, learn about how Scale’s Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh. To sponsor a future episode, visit dwarkesh.com/p/advertise. ---------- Joseph’s books The WEIRDest People in the World The Secret of Our Success ---------- Timestamps (0:00:00) - Humans didn’t succeed because of raw IQ (0:09:27) - How cultural evolution works (0:20:48) - Why is human brain size declining? (0:32:00) - Will AGI have superhuman cultural learning? (0:42:34) - Why Industrial Revolution happened in Europe (0:55:30) - Why China, Rome, India got left behind (1:21:09) - Loss of cultural variance in modern world (1:31:20) - Is individual genius real? (1:43:49) - IQ and collective brains Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I’m so excited with how this visualization of Notes on China turned out. Petr, thank you for such beautiful watercolor artwork. More to come! Watch on YouTube. ---------- Timestamps (0:00:00) - Intro (0:00:32) - Scale (0:05:50) - Vibes (0:11:14) - Youngsters (0:14:27) - Tech & AI (0:15:47) - Hearts & Minds (0:17:07) - On Travel Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Satya Nadella on: Why he doesn’t believe in AGI but does believe in 10% economic growth; Microsoft’s new topological qubit breakthrough and gaming world models; Whether Office commoditizes LLMs or the other way around. Watch on Youtube; listen on Apple Podcasts or Spotify. ---------- Sponsors Scale partners with major AI labs like Meta, Google Deepmind, and OpenAI. Through Scale’s Data Foundry, labs get access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you’re an AI researcher or engineer, learn about how Scale’s Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh Linear's project management tools have become the default choice for product teams at companies like Ramp, CashApp, OpenAI, and Scale. These teams use Linear so they can stay close to their products and move fast. If you’re curious why so many companies are making the switch, visit linear.app/dwarkesh To sponsor a future episode, visit dwarkeshpatel.com/p/advertise. ---------- Timestamps (0:00:00) - Intro (0:05:04) - AI won't be winner-take-all (0:15:18) - World economy growing by 10% (0:21:39) - Decreasing price of intelligence (0:30:19) - Quantum breakthrough (0:42:51) - How Muse will change gaming (0:49:51) - Legal barriers to AI (0:55:46) - Getting AGI safety right (1:04:59) - 34 years at Microsoft (1:10:46) - Does Satya Nadella believe in AGI? Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
This week I welcome on the show two of the most important technologists ever, in any field. Jeff Dean is Google's Chief Scientist, and through 25 years at the company, has worked on basically the most transformative systems in modern computing: from MapReduce and BigTable to TensorFlow, AlphaChip, and Gemini. Noam Shazeer invented or co-invented all the main architectures and techniques that are used for modern LLMs: from the Transformer itself, to Mixture of Experts, to Mesh TensorFlow, to Gemini and many other things. We talk about their 25 years at Google, going from PageRank to MapReduce to the Transformer to MoEs to AlphaChip – and maybe soon to ASI. My favorite part was Jeff's vision for Pathways, Google’s grand plan for a mutually-reinforcing loop of hardware and algorithmic design and for going past autoregression. That culminates in us imagining *all* of Google-the-company going through one huge MoE model. And Noam just bites every bullet: 100x world GDP soon; let’s get a million automated researchers running in the Google datacenter; living to see the year 3000. Watch on Youtube; listen on Apple Podcasts or Spotify. Sponsors Scale partners with major AI labs like Meta, Google Deepmind, and OpenAI. Through Scale’s Data Foundry, labs get access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you’re an AI researcher or engineer, learn about how Scale’s Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh Curious how Jane Street teaches their new traders? They use Figgie, a rapid-fire card game that simulates the most exciting parts of markets and trading. It’s become so popular that Jane Street hosts an inter-office Figgie championship every year. Download from the app store or play on your desktop at figgie.com Meter wants to radically improve the digital world we take for granted. They’re developing a foundation model that automates network management end-to-end.
To do this, they just announced a long-term partnership with Microsoft for tens of thousands of GPUs, and they’re recruiting a world class AI research team. To learn more, go to meter.com/dwarkesh To sponsor a future episode, visit dwarkeshpatel.com/p/advertise Timestamps 00:00:00 - Intro 00:02:44 - Joining Google in 1999 00:05:36 - Future of Moore's Law 00:10:21 - Future TPUs 00:13:13 - Jeff’s undergrad thesis: parallel backprop 00:15:10 - LLMs in 2007 00:23:07 - “Holy s**t” moments 00:29:46 - AI fulfills Google’s original mission 00:34:19 - Doing Search in-context 00:38:32 - The internal coding model 00:39:49 - What will 2027 models do? 00:46:00 - A new architecture every day? 00:49:21 - Automated chip design and intelligence explosion 00:57:31 - Future of inference scaling 01:03:56 - Already doing multi-datacenter runs 01:22:33 - Debugging at scale 01:26:05 - Fast takeoff and superalignment 01:34:40 - A million evil Jeff Deans 01:38:16 - Fun times at Google 01:41:50 - World compute demand in 2030 01:48:21 - Getting back to modularity 01:59:13 - Keeping a giga-MoE in-memory 02:04:09 - All of Google in one model 02:12:43 - What’s missing from distillation 02:18:03 - Open research, pros and cons 02:24:54 - Going the distance Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Third and final episode in the Paine trilogy! Chinese history is full of warlords constantly challenging the capital. How could Mao not only stay in power for decades, but never even face an insurgency? And how did Mao go from military genius to peacetime disaster - the patriotic hero who inflicted history’s worst human catastrophe on China? How can someone shrewd enough to win a civil war outnumbered 5 to 1 decide "let's have peasants make iron in their backyards" and "let's kill all the birds"? In her lecture and our Q&A, we cover the first nationwide famine in Chinese history; Mao's lasting influence on other insurgents; broken promises to minorities and peasantry; and what Taiwan means. Thanks so much to @Substack for running this in-person event! Note that Sarah is doing an AMA over the next couple of days on Youtube; see the pinned comment. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Sponsor Today’s episode is brought to you by Scale AI. Scale partners with the U.S. government to fuel America’s AI advantage through their data foundry. Scale recently introduced Defense Llama, Scale's latest solution available for military personnel. With Defense Llama, military personnel can harness the power of AI to plan military or intelligence operations and understand adversary vulnerabilities. If you’re interested in learning more about how Scale powers frontier AI capabilities, go to https://scale.com/dwarkesh. Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
This is the second episode in a trilogy of lectures by Professor Sarah Paine of the Naval War College. In it, Prof. Paine dissects the ideas and economics behind Japanese imperialism before and during WWII. We get into the oil shortage that caused the war; the unique culture of honor and death; the surprisingly chaotic chain of command. This is followed by a Q&A with me. Huge thanks to Substack for hosting this event! Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Sponsor Today’s episode is brought to you by Scale AI. Scale partners with the U.S. government to fuel America’s AI advantage through their data foundry. Scale recently introduced Defense Llama, Scale's latest solution available for military personnel. With Defense Llama, military personnel can harness the power of AI to plan military or intelligence operations and understand adversary vulnerabilities. If you’re interested in learning more about how Scale powers frontier AI capabilities, go to scale.com/dwarkesh. Buy Sarah's Books! I highly, highly recommend both "The Wars for Asia, 1911–1949" and "The Japanese Empire: Grand Strategy from the Meiji Restoration to the Pacific War". Timestamps (0:00:00) - Lecture begins (0:06:58) - The code of the samurai (0:10:45) - Buddhism, Shinto, Confucianism (0:16:52) - Bushido as bad strategy (0:23:34) - Military theorists (0:33:42) - Strategic sins of omission (0:38:10) - Crippled logistics (0:40:58) - The Kwantung Army (0:43:31) - Inter-service communication (0:51:15) - Shattering Japanese morale (0:57:35) - Q&A begins (01:05:02) - Unusual brutality of WWII (01:11:30) - Embargo caused the war (01:16:48) - The liberation of China (01:22:02) - Could US have prevented war? (01:25:30) - Counterfactuals in history (01:27:46) - Japanese optimism (01:30:46) - Tech change and social change (01:38:22) - Hamming questions (01:44:31) - Do sanctions work? 
(01:50:07) - Backloaded mass death (01:54:09) - Demilitarizing Japan (01:57:30) - Post-war alliances (02:03:46) - Inter-service rivalry Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I’m thrilled to launch a new trilogy of double episodes: a lecture series by Professor Sarah Paine of the Naval War College, each followed by a deep Q&A. In this first episode, Prof. Paine talks about key decisions by Khrushchev, Mao, Nehru, Bhutto, & Lyndon Johnson that shaped the whole dynamic of South Asia today. This is followed by a Q&A. Come for the spy bases, shoestring nukes, and insight into how great power politics impacts every region. Huge thanks to Substack for hosting this! Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Sponsors Today’s episode is brought to you by Scale AI. Scale partners with the U.S. government to fuel America’s AI advantage through their data foundry. The Air Force, Army, Defense Innovation Unit, and Chief Digital and Artificial Intelligence Office all trust Scale to equip their teams with AI-ready data and the technology to build powerful applications. Scale recently introduced Defense Llama, Scale's latest solution available for military personnel. With Defense Llama, military personnel can harness the power of AI to plan military or intelligence operations and understand adversary vulnerabilities. If you’re interested in learning more about how Scale powers frontier AI capabilities, go to scale.com/dwarkesh. Timestamps (00:00) - Intro (02:11) - Mao at war, 1949-51 (05:40) - Pactomania and Sino-Soviet conflicts (14:42) - The Sino-Indian War (20:00) - Soviet peace in India-Pakistan (22:00) - US Aid and Alliances (26:14) - The difference with WWII (30:09) - The geopolitical map in 1904 (35:10) - The US alienates Indira Gandhi (42:58) - Instruments of US power (53:41) - Carrier battle groups (1:02:41) - Q&A begins (1:04:31) - The appeal of the USSR (1:09:36) - The last communist premier (1:15:42) - India and China's lost opportunity (1:58:04) - Bismarck's cunning (2:03:05) - Training US officers (2:07:03) - Cruelty in Russian history Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I interviewed Tyler Cowen at the Progress Conference 2024. As always, I had a blast. This is my fourth interview with him – and yet I’m always hearing new stuff. We talked about why he thinks AI won't drive explosive economic growth, the real bottlenecks on world progress, him now writing for AIs instead of humans, and the difficult relationship between being cultured and fostering growth – among many other things in the full episode. Thanks to the Roots of Progress Institute (with special thanks to Jason Crawford and Heike Larson) for such a wonderful conference, and to FreeThink for the videography. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Sponsors I’m grateful to Tyler for volunteering to say a few words about Jane Street. It's the first time that a guest has participated in the sponsorship. I hope you can see why Tyler and I think so highly of Jane Street. To learn more about their open roles, go to janestreet.com/dwarkesh. Timestamps (00:00:00) Economic Growth and AI (00:14:57) Founder Mode and increasing variance (00:29:31) Effective Altruism and Progress Studies (00:33:05) What AI changes for Tyler (00:44:57) The slow diffusion of innovation (00:49:53) Stalin's library (00:52:19) DC vs SF vs EU Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Adam Brown is a founder and lead of BlueShift, which is cracking maths and reasoning at Google DeepMind, and a theoretical physicist at Stanford. We discuss: destroying the light cone with vacuum decay, the holographic principle, mining black holes, & what it would take to train LLMs that can make Einstein-level conceptual breakthroughs. Stupefying, entertaining, & terrifying. Enjoy! Watch on YouTube, read the transcript, listen on Apple Podcasts, Spotify, or your favorite platform. Sponsors - DeepMind, Meta, Anthropic, and OpenAI partner with Scale for high-quality data to fuel post-training. Publicly available data is running out - to keep developing smarter and smarter models, labs will need to rely on Scale’s data foundry, which combines subject matter experts with AI models to generate fresh data and break through the data wall. Learn more at scale.ai/dwarkesh. - Jane Street is looking to hire their next generation of leaders. Their deep learning team is looking for ML researchers, FPGA programmers, and CUDA programmers. Summer internships are open for just a few more weeks. If you want to stand out, take a crack at their new Kaggle competition. To learn more, go to janestreet.com/dwarkesh. - This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue. Timestamps (00:00:00) - Changing the laws of physics (00:26:05) - Why is our universe the way it is (00:37:30) - Making Einstein level AGI (01:00:31) - Physics stagnation and particle colliders (01:11:10) - Hitchhiking (01:29:00) - Nagasaki (01:36:19) - Adam’s career (01:43:25) - Mining black holes (01:59:42) - The holographic principle (02:23:25) - Philosophy of infinities (02:31:42) - Engineering constraints for future civilizations Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Gwern is a pseudonymous researcher and writer. He was one of the first people to see LLM scaling coming. If you've read his blog, you know he's one of the most interesting polymathic thinkers alive. In order to protect Gwern's anonymity, I proposed interviewing him in person, and having my friend Chris Painter voice over his words after. This amused him enough that he agreed. After the episode, I convinced Gwern to create a donation page where people can help sustain what he's up to. Please go here to contribute. Read the full transcript here. Sponsors: * Jane Street is looking to hire their next generation of leaders. Their deep learning team is looking for ML researchers, FPGA programmers, and CUDA programmers. Summer internships are open - if you want to stand out, take a crack at their new Kaggle competition. To learn more, go to janestreet.com/dwarkesh. * Turing provides complete post-training services for leading AI labs like OpenAI, Anthropic, Meta, and Gemini. They specialize in model evaluation, SFT, RLHF, and DPO to enhance models’ reasoning, coding, and multimodal capabilities. Learn more at turing.com/dwarkesh. * This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue. If you’re interested in advertising on the podcast, check out this page. 
Timestamps 00:00:00 - Anonymity 00:01:09 - Automating Steve Jobs 00:04:38 - Isaac Newton's theory of progress 00:06:36 - Grand theory of intelligence 00:10:39 - Seeing scaling early 00:21:04 - AGI Timelines 00:22:54 - What to do in remaining 3 years until AGI 00:26:29 - Influencing the shoggoth with writing 00:30:50 - Human vs artificial intelligence 00:33:52 - Rabbit holes 00:38:48 - Hearing impairment 00:43:00 - Wikipedia editing 00:47:43 - Gwern.net 00:50:20 - Counterfactual careers 00:54:30 - Borges & literature 01:01:32 - Gwern's intelligence and process 01:11:03 - A day in the life of Gwern 01:19:16 - Gwern's finances 01:25:05 - The diversity of AI minds 01:27:24 - GLP drugs and obesity 01:31:08 - Drug experimentation 01:33:40 - Parasocial relationships 01:35:23 - Open rabbit holes Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
A bonanza on the semiconductor industry and hardware scaling to AGI by the end of the decade. Dylan Patel runs SemiAnalysis, the leading publication and research firm on AI hardware. Jon Y runs Asianometry, the world’s best YouTube channel on semiconductors and business history. * What Xi would do if he became scaling-pilled * $1T+ in datacenter buildout by end of decade Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Sponsors: * Jane Street is looking to hire their next generation of leaders. Their deep learning team is looking for FPGA programmers, CUDA programmers, and ML researchers. To learn more about their full time roles, internship, tech podcast, and upcoming Kaggle competition, go here. * This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue. If you’re interested in advertising on the podcast, check out this page. Timestamps 00:00:00 – Xi's path to AGI 00:04:20 – Liang Mong Song 00:08:25 – How semiconductors get better 00:11:16 – China can centralize compute 00:18:50 – Export controls & sanctions 00:32:51 – Huawei's intense culture 00:38:51 – Why the semiconductor industry is so stratified 00:40:58 – N2 should not exist 00:45:53 – Taiwan invasion hypothetical 00:49:21 – Mind-boggling complexity of semiconductors 00:59:13 – Chip architecture design 01:04:36 – Architectures lead to different AI models? China vs. US 01:10:12 – Being head of compute at an AI lab 01:16:24 – Scaling costs and power demand 01:37:05 – Are we financing an AI bubble? 01:50:20 – Starting Asianometry and SemiAnalysis 02:06:10 – Opportunities in the semiconductor stack Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Unless you understand the history of oil, you cannot understand the rise of America, WW1, WW2, secular stagnation, the Middle East, Ukraine, how Xi and Putin think, and basically anything else that's happened since 1860. It was a great honor to interview Daniel Yergin, the Pulitzer Prize-winning author of The Prize - the best history of oil ever written (which makes it the best history of the 20th century ever written). Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Sponsors: This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue. This episode is brought to you by Suno, pioneers in AI-generated music. Suno's technology allows artists to experiment with melodic forms and structures in unprecedented ways. From chart-toppers to avant-garde compositions, Suno is redefining musical creativity. If you're an ML researcher passionate about shaping the future of music, email your resume to [email protected]. If you’re interested in advertising on the podcast, check out this page. Timestamps (00:00:00) – Beginning of the oil industry (00:13:37) – World War I & II (00:25:06) – The Middle East (00:47:04) – Yergin’s conversations with Putin & Modi (01:04:36) – Writing through stories (01:10:26) – The renewable energy transition Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I had no idea how wild human history was before chatting with the geneticist of ancient DNA David Reich. Human history has been again and again a story of one group figuring ‘something’ out, and then basically wiping everyone else out. From the tribe of 1k-10k modern humans who killed off all the other human species 70,000 years ago; to the Yamnaya horse nomads 5,000 years ago who killed off 90+% of (then) Europeans and also destroyed the Indus Valley. So much of what we thought we knew about human history is turning out to be wrong, from the ‘Out of Africa’ theory to the evolution of language, and this is all thanks to the research from David Reich’s lab. Buy David Reich’s fascinating book, Who We Are and How We Got Here. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Sponsor This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue. If you’re interested in advertising on the podcast, check out this page. Timestamps (00:00:00) – Archaic and modern humans gene flow (00:20:24) – How early modern humans dominated the world (00:39:59) – How bubonic plague rewrote history (00:50:03) – Was agriculture terrible for humans? (00:59:28) – Yamnaya expansion and how populations collide (01:15:39) – “Lost civilizations” and our Neanderthal ancestry (01:31:32) – The DNA Challenge (01:41:38) – David’s career: the genetic vocation Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Chatted with Joe Carlsmith about whether we can trust power/techno-capital, how to not end up like Stalin in our urge to control the future, gentleness towards the artificial Other, and much more. Check out Joe's sequence on Otherness and Control in the Age of AGI here. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Sponsors: - Bland.ai is an AI agent that automates phone calls in any language, 24/7. Their technology uses "conversational pathways" for accurate, versatile communication across sales, operations, and customer support. You can try Bland yourself by calling 415-549-9654. Enterprises can get exclusive access to their advanced model at bland.ai/dwarkesh. - Stripe is financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue. If you’re interested in advertising on the podcast, check out this page. Timestamps: (00:00:00) - Understanding the Basic Alignment Story (00:44:04) - Monkeys Inventing Humans (00:46:43) - Nietzsche, C.S. Lewis, and AI (1:22:51) - How should we treat AIs (1:52:33) - Balancing Being a Humanist and a Scholar (2:05:02) - Explore exploit tradeoffs and AI Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I talked with Patrick McKenzie (known online as patio11) about how a small team he ran over a Discord server got vaccines into Americans' arms: A story of broken incentives, outrageous incompetence, and how a few individuals with high agency saved thousands of lives. Enjoy! Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Sponsor This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue. Timestamps (00:00:00) – Why hackers on Discord had to save thousands of lives (00:17:26) – How politics crippled vaccine distribution (00:38:19) – Fundraising for VaccinateCA (00:51:09) – Why tech needs to understand how government works (00:58:58) – What is crypto good for? (01:13:07) – How the US government leverages big tech to violate rights (01:24:36) – Can the US have nice things like Japan? (01:26:41) – Financial plumbing & money laundering: a how-not-to guide (01:37:42) – Maximizing your value: why some people negotiate better (01:42:14) – Are young people too busy playing Factorio to found startups? (01:57:30) – The need for a post-mortem Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I chatted with Tony Blair about: - What he learned from Lee Kuan Yew - Intelligence agencies’ track record on Iraq & Ukraine - What he tells the dozens of world leaders who come seek advice from him - How much of a PM’s time is actually spent governing - What will AI’s July 1914 moment look like from inside the Cabinet? Enjoy! Watch the video on YouTube. Read the full transcript here. Follow me on Twitter for updates on future episodes. Sponsors - Prelude Security is the world’s leading cyber threat management automation platform. Prelude Detect quickly transforms threat intelligence into validated protections so organizations can know with certainty that their defenses will protect them against the latest threats. Prelude is backed by Sequoia Capital, Insight Partners, The MITRE Corporation, CrowdStrike, and other leading investors. Learn more here. - This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue. If you’re interested in advertising on the podcast, check out this page. Timestamps (00:00:00) – A prime minister’s constraints (00:04:12) – CEOs vs. politicians (00:10:31) – COVID, AI, & how government deals with crisis (00:21:24) – Learning from Lee Kuan Yew (00:27:37) – Foreign policy & intelligence (00:31:12) – How much leadership actually matters (00:35:34) – Private vs. public tech (00:39:14) – Advising global leaders (00:46:45) – The unipolar moment in the 90s Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Here is my conversation with Francois Chollet and Mike Knoop on the $1 million ARC-AGI Prize they're launching today. I did a bunch of Socratic grilling throughout, but Francois’s arguments about why LLMs won’t lead to AGI are very interesting and worth thinking through. It was really fun discussing/debating the cruxes. Enjoy! Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Timestamps (00:00:00) – The ARC benchmark (00:11:10) – Why LLMs struggle with ARC (00:19:00) – Skill vs intelligence (00:27:55) - Do we need “AGI” to automate most jobs? (00:48:28) – Future of AI progress: deep learning + program synthesis (01:00:40) – How Mike Knoop got nerd-sniped by ARC (01:08:37) – Million $ ARC Prize (01:10:33) – Resisting benchmark saturation (01:18:08) – ARC scores on frontier vs open source models (01:26:19) – Possible solutions to ARC Prize Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Chatted with my friend Leopold Aschenbrenner on the trillion dollar nationalized cluster, CCP espionage at AI labs, how unhobblings and scaling can lead to 2027 AGI, dangers of outsourcing clusters to Middle East, leaving OpenAI, and situational awareness. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Follow Leopold on Twitter. Timestamps (00:00:00) – The trillion-dollar cluster and unhobbling (00:20:31) – AI 2028: The return of history (00:40:26) – Espionage & American AI superiority (01:08:20) – Geopolitical implications of AI (01:31:23) – State-led vs. private-led AI (02:12:23) – Becoming Valedictorian of Columbia at 19 (02:30:35) – What happened at OpenAI (02:45:11) – Accelerating AI research progress (03:25:58) – Alignment (03:41:26) – On Germany, and understanding foreign perspectives (03:57:04) – Dwarkesh’s immigration story and path to the podcast (04:07:58) – Launching an AGI hedge fund (04:19:14) – Lessons from WWII (04:29:08) – Coda: Frederick the Great Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Chatted with John Schulman (cofounded OpenAI and led ChatGPT creation) on how posttraining tames the shoggoth, and the nature of the progress to come... Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps (00:00:00) - Pre-training, post-training, and future capabilities (00:16:55) - Plan for AGI 2025 (00:29:18) - Teaching models to reason (00:39:45) - The Road to ChatGPT (00:51:07) - What makes for a good RL researcher? (00:59:53) - Keeping humans in the loop (01:14:11) - State of research, plateaus, and moats Sponsors If you’re interested in advertising on the podcast, fill out this form. * CommandBar is an AI user assistant that any software product can embed to non-annoyingly assist, support, and unleash their users. Used by forward-thinking CX, product, growth, and marketing teams. Learn more at commandbar.com. Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Mark Zuckerberg on: - Llama 3 - open sourcing towards AGI - custom silicon, synthetic data, & energy constraints on scaling - Caesar Augustus, intelligence explosion, bioweapons, $10b models, & much more Enjoy! Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Human edited transcript with helpful links here. Timestamps (00:00:00) - Llama 3 (00:08:32) - Coding on path to AGI (00:25:24) - Energy bottlenecks (00:33:20) - Is AI the most important technology ever? (00:37:21) - Dangers of open source (00:53:57) - Caesar Augustus and metaverse (01:04:53) - Open sourcing the $10b model & custom silicon (01:15:19) - Zuck as CEO of Google+ Sponsors If you’re interested in advertising on the podcast, fill out this form. * This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue. Learn more at stripe.com. * V7 Go is a tool to automate multimodal tasks using GenAI, reliably and at scale. Use code DWARKESH20 for 20% off on the pro plan. Learn more here. * CommandBar is an AI user assistant that any software product can embed to non-annoyingly assist, support, and unleash their users. Used by forward-thinking CX, product, growth, and marketing teams. Learn more at commandbar.com. Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Had so much fun chatting with my good friends Trenton Bricken and Sholto Douglas on the podcast. No way to summarize it, except: This is the best context dump out there on how LLMs are trained, what capabilities they're likely to soon have, and what exactly is going on inside them. You would be shocked how much of what I know about this field, I've learned just from talking with them. To the extent that you've enjoyed my other AI interviews, now you know why. So excited to put this out. Enjoy! I certainly did :) Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. There's a transcript with links to all the papers the boys were throwing down - may help you follow along. Follow Trenton and Sholto on Twitter. Timestamps (00:00:00) - Long contexts (00:16:12) - Intelligence is just associations (00:32:35) - Intelligence explosion & great researchers (01:06:52) - Superposition & secret communication (01:22:34) - Agents & true reasoning (01:34:40) - How Sholto & Trenton got into AI research (02:07:16) - Are feature spaces the wrong way to think about intelligence? (02:21:12) - Will interp actually work on superhuman models (02:45:05) - Sholto’s technical challenge for the audience (03:03:57) - Rapid fire Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Here is my episode with Demis Hassabis, CEO of Google DeepMind We discuss: * Why scaling is an artform * Adding search, planning, & AlphaZero type training atop LLMs * Making sure rogue nations can't steal weights * The right way to align superhuman AIs and do an intelligence explosion Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Timestamps (0:00:00) - Nature of intelligence (0:05:56) - RL atop LLMs (0:16:31) - Scaling and alignment (0:24:13) - Timelines and intelligence explosion (0:28:42) - Gemini training (0:35:30) - Governance of superhuman AIs (0:40:42) - Safety, open source, and security of weights (0:47:00) - Multimodal and further progress (0:54:18) - Inside Google DeepMind Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
We discuss: * what it takes to process $1 trillion/year * how to build multi-decade APIs, companies, and relationships * what's next for Stripe (increasing the GDP of the internet is quite an open-ended prompt, and the Collison brothers are just getting started). Plus the amazing stuff they're doing at Arc Institute, the financial infrastructure for AI agents, playing devil's advocate against progress studies, and much more. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps (00:00:00) - Advice for 20-30 year olds (00:12:12) - Progress studies (00:22:21) - Arc Institute (00:34:27) - AI & Fast Grants (00:43:46) - Stripe history (00:55:44) - Stripe Climate (01:01:39) - Beauty & APIs (01:11:51) - Financial innards (01:28:16) - Stripe culture & future (01:41:56) - Virtues of big businesses (01:51:41) - John Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
It was a great pleasure speaking with Tyler Cowen for the 3rd time. We discussed GOAT: Who is the Greatest Economist of all Time and Why Does it Matter?, especially in the context of how the insights of Hayek, Keynes, Smith, and other great economists help us make sense of AI, growth, animal spirits, prediction markets, alignment, central planning, and much more. The topics covered in this episode are too many to summarize. Hope you enjoy! Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps (0:00:00) - John Maynard Keynes (00:17:16) - Controversy (00:25:02) - Friedrich von Hayek (00:47:41) - John Stuart Mill (00:52:41) - Adam Smith (00:58:31) - Coase, Schelling, & George (01:08:07) - Anarchy (01:13:16) - Cheap WMDs (01:23:18) - Technocracy & political philosophy (01:34:16) - AI & Scaling Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
This is a narration of my blog post, Lessons from The Years of Lyndon Johnson by Robert Caro. You can read the full post here: https://www.dwarkeshpatel.com/p/lyndon-johnson Listen on Apple Podcasts, Spotify, or any other podcast platform. Follow me on Twitter for updates on future posts and episodes. Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
This is a narration of my blog post, Will scaling work?. You can read the full post here: https://www.dwarkeshpatel.com/p/will-scaling-work Listen on Apple Podcasts, Spotify, or any other podcast platform. Follow me on Twitter for updates on future posts and episodes. Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
A true honor to speak with Jung Chang. She is the author of Wild Swans: Three Daughters of China (sold 15+ million copies worldwide) and Mao: The Unknown Story. We discuss: - what it was like growing up during the Cultural Revolution as the daughter of a denounced official - why the CCP continues to worship the biggest mass murderer in human history. - how exactly Communist totalitarianism was able to subjugate a billion people - why Chinese leaders like Xi and Deng who suffered from the Cultural Revolution don't condemn Mao - how Mao starved and killed 40 million people during The Great Leap Forward in order to exchange food for Soviet weapons Wild Swans is the most moving book I've ever read. It was a real privilege to speak with its author. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps (00:00:00) - Growing up during Cultural Revolution (00:15:58) - Could officials have overthrown Mao? (00:34:09) - Great Leap Forward (00:48:12) - Modern support of Mao (01:03:24) - Life as peasant (01:21:30) - Psychology of communist society Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Andrew Roberts is the world's best biographer and one of the leading historians of our time. We discussed: * Churchill the applied historian, * Napoleon the startup founder, * why Nazi ideology cost Hitler WW2, * drones, reconnaissance, and other aspects of the future of war, * Iraq, Afghanistan, Korea, Ukraine, & Taiwan. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps (00:00:00) - Post WW2 conflicts (00:10:57) - Ukraine (00:16:33) - How Truman Prevented Nuclear War (00:22:49) - Taiwan (00:27:15) - Churchill (00:35:11) - Gaza & future wars (00:39:05) - Could Hitler have won WW2? (00:48:00) - Surprise attacks (00:59:33) - Napoleon and startup founders (01:14:06) - Roberts’ insane productivity Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Here is my interview with Dominic Cummings on why Western governments are so dangerously broken, and how to fix them before an even more catastrophic crisis. Dominic was Chief Advisor to the Prime Minister during COVID, and before that, director of Vote Leave (which masterminded the 2016 Brexit referendum). Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps (00:00:00) - One day in COVID… (00:08:26) - Why is government broken? (00:29:10) - Civil service (00:38:27) - Opportunity wasted? (00:49:35) - Rishi Sunak and Number 10 vs 11 (00:55:13) - Cyber, nuclear, bio risks (01:02:04) - Intelligence & defense agencies (01:23:32) - Bismarck & Lee Kuan Yew (01:37:46) - How to fix the government? (01:56:43) - Taiwan (02:00:10) - Russia (02:07:12) - Bismarck’s career as an example of AI (mis)alignment (02:17:37) - Odyssean education Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Paul Christiano is the world’s leading AI safety researcher. My full episode with him is out! We discuss: - Does he regret inventing RLHF, and is alignment necessarily dual-use? - Why he has relatively modest timelines (40% by 2040, 15% by 2030), - What do we want the post-AGI world to look like (do we want to keep gods enslaved forever)? - Why he’s leading the push to get labs to develop responsible scaling policies, and what it would take to prevent an AI coup or bioweapon, - His current research into a new proof system, and how this could solve alignment by explaining models’ behavior - and much more. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Open Philanthropy Open Philanthropy is currently hiring for twenty-two different roles to reduce catastrophic risks from fast-moving advances in AI and biotechnology, including grantmaking, research, and operations. For more information and to apply, please see the application: https://www.openphilanthropy.org/research/new-roles-on-our-gcr-team/ The deadline to apply is November 9th; make sure to check out those roles before they close. Timestamps (00:00:00) - What do we want the post-AGI world to look like? (00:24:25) - Timelines (00:45:28) - Evolution vs gradient descent (00:54:53) - Misalignment and takeover (01:17:23) - Is alignment dual-use? (01:31:38) - Responsible scaling policies (01:58:25) - Paul’s alignment research (02:35:01) - Will this revolutionize theoretical CS and math? (02:46:11) - How Paul invented RLHF (02:55:10) - Disagreements with Carl Shulman (03:01:53) - Long TSMC but not NVIDIA Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I had a lot of fun chatting with Shane Legg - Founder and Chief AGI Scientist, Google DeepMind! We discuss: * Why he expects AGI around 2028 * How to align superhuman models * What new architectures are needed for AGI * Has DeepMind sped up capabilities or safety more? * Why multimodality will be the next big landmark * and much more Watch full episode on YouTube, Apple Podcasts, Spotify, or any other podcast platform. Read full transcript here. Timestamps (0:00:00) - Measuring AGI (0:11:41) - Do we need new architectures? (0:16:26) - Is search needed for creativity? (0:19:19) - Superhuman alignment (0:29:58) - Impact of DeepMind on safety vs capabilities (0:34:03) - Timelines (0:41:24) - Multimodality Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I had a lot of fun chatting with Grant Sanderson (who runs the excellent 3Blue1Brown YouTube channel) about: - Whether advanced math requires AGI - What careers should mathematically talented students pursue - Why Grant plans on doing a stint as a high school teacher - Tips for self-teaching - Does Gödel’s incompleteness theorem actually matter - Why are good explanations so hard to find? - And much more Watch on YouTube. Listen on Spotify, Apple Podcasts, or any other podcast platform. Full transcript here. Timestamps (0:00:00) - Does winning math competitions require AGI? (0:08:24) - Where to allocate mathematical talent? (0:17:34) - Grant’s miracle year (0:26:44) - Prehistoric humans and math (0:33:33) - Why is a lot of math so new? (0:44:44) - Future of education (0:56:28) - Math helped me realize I wasn’t that smart (0:59:25) - Does Gödel’s incompleteness theorem matter? (1:05:12) - How Grant makes videos (1:10:13) - Grant’s math exposition competition (1:20:44) - Self-teaching Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I learned so much from Sarah Paine, Professor of History and Strategy at the Naval War College. We discuss: - how continental vs maritime powers think and how this explains Xi & Putin's decisions - how a war with China over Taiwan would shake out and whether it could go nuclear - why the British Empire fell apart, why China went communist, how Hitler and Japan could have coordinated to win WW2, and whether Japanese occupation was good for Korea, Taiwan and Manchuria - plus other lessons from WW2, Cold War, and Sino-Japanese War - how to study history properly, and why leaders keep making the same mistakes If you want to learn more, check out her books - they’re some of the best military history I’ve ever read. Watch on YouTube, listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript. Timestamps (0:00:00) - Grand strategy (0:11:59) - Death ground (0:23:19) - WW1 (0:39:23) - Writing history (0:50:25) - Japan in WW2 (0:59:58) - Ukraine (1:10:50) - Japan/Germany vs Iraq/Afghanistan occupation (1:21:25) - Chinese invasion of Taiwan (1:51:26) - Communists & Axis (2:08:34) - Continental vs maritime powers Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Here is my conversation with Dario Amodei, CEO of Anthropic. Dario is hilarious and has fascinating takes on what these models are doing, why they scale so well, and what it will take to align them. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps (00:00:00) - Introduction (00:01:00) - Scaling (00:15:46) - Language (00:22:58) - Economic Usefulness (00:38:05) - Bioterrorism (00:43:35) - Cybersecurity (00:47:19) - Alignment & mechanistic interpretability (00:57:43) - Does alignment research require scale? (01:05:30) - Misuse vs misalignment (01:09:06) - What if AI goes well? (01:11:05) - China (01:15:11) - How to think about alignment (01:31:31) - Is modern security good enough? (01:36:09) - Inefficiencies in training (01:45:53) - Anthropic’s Long Term Benefit Trust (01:51:18) - Is Claude conscious? (01:56:14) - Keeping a low profile Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
A few weeks ago, I sat beside Andy Matuschak to record how he reads a textbook. Even though my own job is to learn things, I was shocked by how much more intense, painstaking, and effective his learning process was. So I asked if we could record a conversation about how he learns and a bunch of other topics: * How he identifies and interrogates his confusion (much harder than it seems, and requires an extremely effortful and slow pace) * Why memorization is essential to understanding and decision-making * How come some people (like Tyler Cowen) can integrate so much information without an explicit note-taking or spaced repetition system. * How LLMs and video games will change education * How independent researchers and writers can make money * The balance of freedom and discipline in education * Why we produce fewer von Neumann-like prodigies nowadays * How multi-trillion dollar companies like Apple (where he was previously responsible for bedrock iOS features) manage to coordinate millions of different considerations (from the cost of different components to the needs of users, etc) into new products designed by 10s of 1000s of people. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. To see Andy’s process in action, check out the video where we record him studying a quantum physics textbook, talking aloud about his thought process, and using his memory system prototype to internalize the material. You can check out his website and personal notes, and follow him on Twitter. Cometeer Visit cometeer.com/lunar for $20 off your first order on the best coffee of your life! If you want to sponsor an episode, contact me at [email protected]. Timestamps (00:00:52) - Skillful reading (00:02:30) - Do people care about understanding?
(00:06:52) - Structuring effective self-teaching (00:16:37) - Memory and forgetting (00:33:10) - Andy’s memory practice (00:40:07) - Intellectual stamina (00:44:27) - New media for learning (video, games, streaming) (00:58:51) - Schools are designed for the median student (01:05:12) - Is learning inherently miserable? (01:11:57) - How Andy would structure his kids’ education (01:30:00) - The usefulness of hypertext (01:41:22) - How computer tools enable iteration (01:50:44) - Monetizing public work (02:08:36) - Spaced repetition (02:10:16) - Andy’s personal website and notes (02:12:44) - Working at Apple (02:19:25) - Spaced repetition 2 Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
The second half of my 7 hour conversation with Carl Shulman is out! My favorite part! And the one that had the biggest impact on my worldview. Here, Carl lays out how an AI takeover might happen: * AI can threaten mutually assured destruction from bioweapons, * use cyber attacks to take over physical infrastructure, * build mechanical armies, * spread seed AIs we can never exterminate, * offer tech and other advantages to collaborating countries, etc Plus we talk about a whole bunch of weird and interesting topics which Carl has thought about: * what is the far future best case scenario for humanity * what it would look like to have AI make thousands of years of intellectual progress in a month * how do we detect deception in superhuman models * does space warfare favor defense or offense * is a Malthusian state inevitable in the long run * why markets haven't priced in explosive economic growth * & much more Carl also explains how he developed such a rigorous, thoughtful, and interdisciplinary model of the biggest problems in the world. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Catch part 1 here Timestamps (0:00:00) - Intro (0:00:47) - AI takeover via cyber or bio (0:32:27) - Can we coordinate against AI? (0:53:49) - Human vs AI colonizers (1:04:55) - Probability of AI takeover (1:21:56) - Can we detect deception? (1:47:25) - Using AI to solve coordination problems (1:56:01) - Partial alignment (2:11:41) - AI far future (2:23:04) - Markets & other evidence (2:33:26) - Day in the life of Carl Shulman (2:47:05) - Space warfare, Malthusian long run, & other rapid fire Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
In terms of the depth and range of topics, this episode is the best I’ve done. No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of. We ended up talking for 8 hours, so I'm splitting this episode into 2 parts. This part is about Carl’s model of an intelligence explosion, which integrates everything from: * how fast algorithmic progress & hardware improvements in AI are happening, * what primate evolution suggests about the scaling hypothesis, * how soon before AIs could do large parts of AI research themselves, and whether there would be faster and faster doublings of AI researchers, * how quickly robots produced from existing factories could take over the economy. We also discuss the odds of a takeover based on whether the AI is aligned before the intelligence explosion happens, and Carl explains why he’s more optimistic than Eliezer. The next part, which I’ll release next week, is about all the specific mechanisms of an AI takeover, plus a whole bunch of other galaxy brain stuff. Maybe 3 people in the world have thought as rigorously as Carl about so many interesting topics. This was a huge pleasure. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps (00:00:00) - Intro (00:01:32) - Intelligence Explosion (00:18:03) - Can AIs do AI research? (00:39:00) - Primate evolution (01:03:30) - Forecasting AI progress (01:34:20) - After human-level AGI (02:08:39) - AI takeover scenarios Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
It was a tremendous honor & pleasure to interview Richard Rhodes, Pulitzer Prize-winning author of The Making of the Atomic Bomb. We discuss - similarities between AI progress & Manhattan Project (developing a powerful, unprecedented, & potentially apocalyptic technology within an uncertain arms-race situation) - visiting starving former Soviet scientists during fall of Soviet Union - whether Oppenheimer was a spy, & consulting on the Nolan movie - living through WW2 as a child - odds of nuclear war in Ukraine, Taiwan, Pakistan, & North Korea - how the US pulled off such a massive secret wartime scientific & industrial project Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps (0:00:00) - Oppenheimer movie (0:06:22) - Was the bomb inevitable? (0:29:10) - Firebombing vs nuclear vs hydrogen bombs (0:49:44) - Stalin & the Soviet program (1:08:24) - Deterrence, disarmament, North Korea, Taiwan (1:33:12) - Oppenheimer as lab director (1:53:40) - AI progress vs Manhattan Project (1:59:50) - Living through WW2 (2:16:45) - Secrecy (2:26:34) - Wisdom & war Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong. We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more. If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps (0:00:00) - TIME article (0:09:06) - Are humans aligned? (0:37:35) - Large language models (1:07:15) - Can AIs help with alignment? (1:30:17) - Society’s response to AI (1:44:42) - Predictions (or lack thereof) (1:56:55) - Being Eliezer (2:13:06) - Orthogonality (2:35:00) - Could alignment be easier than we think? (3:02:15) - What will AIs want? (3:43:54) - Writing fiction & whether rationality helps you win Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I went over to the OpenAI offices in San Francisco to ask the Chief Scientist and cofounder of OpenAI, Ilya Sutskever, about: * time to AGI * leaks and spies * what's after generative models * post AGI futures * working with Microsoft and competing with Google * difficulty of aligning superhuman AI Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps (00:00) - Time to AGI (05:57) - What’s after generative models? (10:57) - Data, models, and research (15:27) - Alignment (20:53) - Post AGI Future (26:56) - New ideas are overrated (36:22) - Is progress inevitable? (41:27) - Future Breakthroughs Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
It is said that the two greatest problems of history are: how to account for the rise of Rome, and how to account for her fall. If so, then the volcanic ashes spewed by Mount Vesuvius in 79 AD - which entombed the cities of Pompeii and Herculaneum in southern Italy - hold history’s greatest prize. For beneath those ashes lies the only salvageable library from the classical world. Nat Friedman was the CEO of GitHub from 2018 to 2021. Before that, he started and sold two companies - Ximian and Xamarin. He is also the founder of AI Grant and California YIMBY. And most recently, he has created and funded the Vesuvius Challenge - a million-dollar prize for reading an unopened Herculaneum scroll for the very first time. If we can decipher these scrolls, we may be able to recover lost gospels, forgotten epics, and even missing works of Aristotle. We also discuss the future of open source and AI, running GitHub and building Copilot, and why EMH is a lie. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps (0:00:00) - Vesuvius Challenge (0:30:00) - Finding points of leverage (0:37:39) - Open Source in AI (0:40:32) - GitHub Acquisition (0:50:18) - Copilot origin story (1:11:47) - Nat.org (1:32:56) - Questions from Twitter Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I flew out to Chicago to interview Brett Harrison, the former President of FTX US and founder of Architect. In his first longform interview since the fall of FTX, he speaks in great detail about his entire tenure there and about SBF’s dysfunctional leadership. He talks about how the inner circle of Gary Wang, Nishad Singh, and SBF mismanaged the company, controlled the codebase, got distracted by media, and even threatened him for his letter of resignation. In what was my favorite part of the interview, we also discuss his insights about the financial system from his decades of experience in the world's largest HFT firms. And we talk about Brett's new startup, Architect, as well as the general state of crypto post-FTX. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps (0:00:00) - Passive investing & HFT hacks (0:08:30) - Is Finance Zero-Sum? (0:18:38) - Interstellar Markets & Periodic Auctions (0:23:10) - Hiring & Programming at Jane Street (0:32:09) - Quant Culture (0:42:10) - FTX - Meeting Sam, Joining FTX US (0:58:20) - FTX - Accomplishments, Beginnings of Trouble (1:08:11) - FTX - SBF's Dysfunctional Leadership (1:26:53) - FTX - Alameda (1:33:50) - FTX - Leaving FTX, SBF’s Threats (1:45:45) - FTX - Collapse (1:53:10) - FTX - Lessons (2:04:34) - FTX - Regulators, & FTX Mafia (2:15:42) - Architect.xyz (2:30:10) - Institutional Interest & Uses of Crypto Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
My podcast with the brilliant Marc Andreessen is out! We discuss: * how AI will revolutionize software * whether NFTs are useless, & whether he should be funding flying cars instead * a16z's biggest vulnerabilities * the future of fusion, education, Twitter, venture, managerialism, & big tech Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps (0:00:17) - Chewing glass (0:04:21) - AI (0:06:42) - Regrets (0:08:51) - Managerial capitalism (0:18:43) - 100 year fund (0:22:15) - Basic research (0:27:07) - $100b fund? (0:30:32) - Crypto debate (0:43:29) - Future of VC (0:50:20) - Founders (0:56:42) - a16z vulnerabilities (1:01:28) - Monetizing Twitter (1:07:09) - Future of big tech (1:14:07) - Is VC Overstaffed? Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Garett Jones is an economist at George Mason University and the author of The Cultural Transplant, Hive Mind, and 10% Less Democracy. This episode was fun and interesting throughout! He explains: * Why national IQ matters * How migrants bring their values to their new countries * Why we should have less democracy * How the Chinese are an unstoppable global force for free markets Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Timestamps (00:00:00) - Intro (00:01:08) - Migrants Change Countries with Culture or Votes? (00:09:15) - Impact of Immigrants on Markets & Corruption (00:12:02) - 50% Open Borders? (00:16:54) - Chinese are Unstoppable Capitalists (00:21:39) - Innovation & Immigrants (00:24:53) - Open Borders for Migrants Equivalent to Americans? (00:28:54) - Let's Ignore Side Effects? (00:30:25) - Are Poor Countries Stuck? (00:32:26) - How Can Effective Altruists Increase National IQ (00:39:13) - Clone a million John von Neumann? (00:44:39) - Genetic Selection for IQ (00:47:02) - Democracy, Fed, FDA, & Presidential Power (00:49:42) - EU is a force for good? (00:55:12) - Why is America More Libertarian Than Median Voter? (00:56:19) - Is Ethnic Conflict a Short Run Problem? (00:59:38) - Bond Holder Democracy (01:04:57) - Mormonism (01:08:52) - Garett Jones's Immigration System (01:10:12) - Interviewing SBF Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
One of my best episodes ever. Lars Doucet is the author of Land is a Big Deal, a book about Georgism which has been praised by Vitalik Buterin, Scott Alexander, and Noah Smith. Sam Altman is the lead investor in his new startup, ValueBase. Talking with Lars completely changed how I think about who creates value in the world and who leeches off it. We go deep into the weeds on Georgism: * Why do even the wealthiest places in the world have poverty and homelessness, and why do rents increase as fast as wages? * Why are land-owners able to extract the profits that rightly belong to labor and capital? * How would taxing the value of land alleviate speculation, NIMBYism, and income and sales taxes? Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow Lars on Twitter. Follow me on Twitter. Timestamps (00:00:00) - Intro (00:01:11) - Georgism (00:03:16) - Metaverse Housing Crises (00:07:10) - Tax Leisure? (00:13:53) - Speculation & Frontiers (00:24:33) - Social Value of Search (00:33:13) - Will Georgism Destroy The Economy? (00:38:51) - The Economics of San Francisco (00:43:31) - Transfer from Landowners to Google? (00:46:47) - Asian Tigers and Land Reform (00:51:19) - Libertarian Georgism (00:55:42) - Crypto (00:57:16) - Transitioning to Georgism (01:02:56) - Lars's Startup & Land Assessment (01:15:12) - Big Tech (01:20:50) - Space (01:23:05) - Copyright (01:25:02) - Politics of Georgism (01:33:10) - Someone Is Always Collecting Rents Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Holden Karnofsky is the co-CEO of Open Philanthropy and co-founder of GiveWell. He is also the author of one of the most interesting blogs on the internet, Cold Takes. We discuss: * Are we living in the most important century? * Does he regret OpenPhil’s 30 million dollar grant to OpenAI in 2016? * How does he think about AI, progress, digital people, & ethics? Highly recommend! Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Timestamps (0:00:00) - Intro (0:00:58) - The Most Important Century (0:06:44) - The Weirdness of Our Time (0:21:20) - The Industrial Revolution (0:35:40) - AI Success Scenario (0:52:36) - Competition, Innovation, & AGI Bottlenecks (1:00:14) - Lock-in & Weak Points (1:06:04) - Predicting the Future (1:20:40) - Choosing Which Problem To Solve (1:26:56) - $30M OpenAI Investment (1:30:22) - Future Proof Ethics (1:37:28) - Integrity vs Utilitarianism (1:40:46) - Bayesian Mindset & Governance (1:46:56) - Career Advice Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
This was one of my favorite episodes ever. Bethany McLean was the first reporter to question Enron’s earnings, and she has written some of the best finance books out there. We discuss: * The astounding similarities between Enron & FTX, * How visionaries are just frauds who succeed (and which category describes Elon Musk), * What caused 2008, and whether we are headed for a new crisis, * Why there are too many venture capitalists and not enough short sellers, * And why history keeps repeating itself. McLean is a contributing editor at Vanity Fair (see her articles here) and the author of The Smartest Guys in the Room, All the Devils Are Here, Saudi America, and Shaky Ground. Watch on YouTube. Listen on Spotify, Apple Podcasts, or your favorite podcast platform. Follow McLean on Twitter. Follow me on Twitter for updates on future episodes. Timestamps (0:04:37) - Is Fraud Over? (0:11:22) - Shortage of Shortsellers (0:19:03) - Elon Musk - Fraud or Visionary? (0:23:00) - Intelligence, Fake Deals, & Culture (0:33:40) - Rewarding Leaders for Long Term Thinking (0:37:00) - FTX Mafia? (0:40:17) - Is Finance Too Big? (0:44:09) - 2008 Collapse, Fannie & Freddie (0:49:25) - The Big Picture (1:00:12) - Frackers Vindicated? (1:03:40) - Rating Agencies (1:07:05) - Lawyers Getting Rich Off Fraud (1:15:09) - Are Some People Fundamentally Deceptive? (1:19:25) - Advice for Big Picture Thinkers Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Nadia Asparouhova is currently researching what the new tech elite will look like at nadia.xyz. She is also the author of Working in Public: The Making and Maintenance of Open Source Software. We talk about how: * American philanthropy has changed from Rockefeller to Effective Altruism * SBF represented the Davos elite rather than the Silicon Valley elite, * Open source software reveals the limitations of democratic participation, * & much more. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Timestamps (0:00:00) - Intro (0:00:26) - SBF was Davos elite (0:09:38) - Gender sociology of philanthropy (0:16:30) - Was Shakespeare an open source project? (0:22:00) - Need for charismatic leaders (0:33:55) - Political reform (0:40:30) - Why didn’t previous wealth booms lead to new philanthropic movements? (0:53:35) - Creating a 10,000 year endowment (0:57:27) - Why do institutions become left wing? (1:02:27) - Impact of billionaire intellectual funding (1:04:12) - Value of intellectuals (1:08:53) - Climate, AI, & Doomerism (1:18:04) - Religious philanthropy Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Perhaps the most interesting episode so far. Byrne Hobart writes at thediff.co, analyzing inflections in finance and tech. He explains: * What happened at FTX * How drugs have induced past financial bubbles * How to be long AI while hedging Taiwan invasion * Whether Musk’s Twitter takeover will succeed * Where to find the next Napoleon and LBJ * & ultimately how society can deal with those who seek domination and recognition Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps: (0:00:50) - What the hell happened at FTX? (0:07:03) - How SBF Faked Being a Genius (0:12:23) - Drugs Explain Financial Bubbles (0:17:12) - On Founder Physiognomy (0:21:02) - Indexing Parental Involvement in Raising Talented Kids (0:30:35) - Where are all the Caro-level Biographers? (0:39:03) - Where are today's Great Founders? (0:48:29) - Micro Writing -> Macro Understanding (0:51:48) - Elon's Twitter Takeover (1:00:50) - Does Big Tech & West Have Great People? (1:11:34) - Philosophical Fanatics and Effective Altruism (1:17:17) - What Great Founders Have In Common (1:19:56) - Thinkers vs. Analyzers (1:25:40) - Taiwan Invasion bets & AI Timelines Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Edward Glaeser is the chair of the Harvard department of economics, and the author of the best books and papers about cities (including Survival of the City and Triumph of the City). He explains why: * Cities are resilient to terrorism, remote work, & pandemics, * Silicon Valley may collapse but the Sunbelt will prosper, * Opioids show UBI is not a solution to AI * & much more! Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps (0:00:00) - Mars, Terrorism, & Capitals (0:06:32) - Decline, Population Collapse, & Young Men (0:14:44) - Urban Education (0:18:35) - Georgism, Robert Moses, & Too Much Democracy? (0:25:29) - Opioids, Automation, & UBI (0:29:57) - Remote Work, Taxation, & Metaverse (0:42:29) - Past & Future of Silicon Valley (0:48:56) - Housing Reform (0:52:32) - Europe’s Stagnation, Mumbai’s Safety, & Climate Change Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I had a fascinating discussion about Robert Moses and The Power Broker with Professor Kenneth T. Jackson. He's the pre-eminent historian on NYC and author of Robert Moses and The Modern City: The Transformation of New York. He answers: * Why are we so much worse at building things today? * Would NYC be like Detroit without the master builder? * Does it take a tyrant to stop NIMBY? Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Timestamps (0:00:00) Preview + Intro (0:11:13) How Moses Gained Power (0:18:22) Moses Saved NYC? (0:27:31) Moses the Startup Founder? (0:32:34) The Case Against Moses Highways (0:50:30) NIMBYism (1:02:44) Is Progress Cyclical (1:11:13) Friendship with Caro (1:19:50) Moses the Longtermist? Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
It was a pleasure to welcome Brian Potter on the podcast! Brian is the author of the excellent Construction Physics blog, where he discusses why the construction industry has been slow to industrialize and innovate. He explains why: * Construction isn’t getting cheaper and faster, * “Ugly” modern buildings are simply the result of better architecture, * China is so great at building things, * Saudi Arabia’s Line is a waste of resources, * Environmental review makes new construction expensive and delayed, * and much much more! Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. You may also enjoy my interviews with Tyler Cowen (about talent, collapse, & pessimism of sex), Charles Mann (about the Americas before Columbus & scientific wizardry), and Austin Vernon (about Energy Superabundance, Starship Missiles, & Finding Alpha). Timestamps (0:00) - Why Saudi Arabia’s Line is Insane, Unrealistic, and Never going to Exist (06:54) - Designer Clothes & eBay Arbitrage Adventures (10:10) - Unique Woes of The Construction Industry (19:28) - The Problems of Prefabrication (26:27) - If Building Regulations didn’t exist… (32:20) - China’s Real Estate Bubble, Unbound Technocrats, & Japan (44:45) - Automation and Revolutionary Future Technologies (1:00:51) - 3D Printer Pessimism & The Rising Cost of Labour (1:08:02) - AI’s Impact on Construction Productivity (1:17:53) - Brian Dreams of Building a Mile High Skyscraper (1:23:43) - Deep Dive into Environmentalism and NEPA (1:42:04) - Software is Stealing Talent from Physical Engineering (1:47:13) - Gaps in the Blog Marketplace of Ideas (1:50:56) - Why is Modern Architecture So Ugly? (2:19:58) - Advice for Aspiring Architects and Young Construction Physicists Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
It was a fantastic pleasure to welcome Bryan Caplan back for a third time on the podcast! His most recent book is Don't Be a Feminist: Essays on Genuine Justice. He explains why he thinks: - Feminists are mostly wrong, - We shouldn’t overtax our centi-billionaires, - Decolonization should have emphasized human rights over democracy, - Eastern Europe shows that we could accept millions of refugees. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. More really cool guests coming up; subscribe to find out about future episodes! You may also enjoy my interviews with Tyler Cowen (about talent, collapse, & pessimism of sex), Charles Mann (about the Americas before Columbus & scientific wizardry), and Steve Hsu (about intelligence and embryo selection). Timestamps (00:12) - Don’t Be a Feminist (16:53) - Western Feminism Ignores Infanticide (19:59) - Why The Universe Hates Women (32:02) - Women's Tears Have Too Much Power (45:40) - Bryan Performs Standup Comedy! (51:02) - Affirmative Action is Philanthropic Propaganda (54:13) - Peer-effects as the Only Real Education (58:24) - The Idiocy of Student Loan Forgiveness (1:07:57) - Why Society is Becoming Mentally Ill (1:10:50) - Open Borders & the Ultra-long Term (1:14:37) - Why Cowen’s Talent Scouting Strategy is Ludicrous (1:22:06) - Surprising Immigration Victories (1:36:06) - The Most Successful Revolutions (1:54:20) - Anarcho-Capitalism is the Ultimate Government (1:55:40) - Billionaires Deserve their Wealth Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
It was my great pleasure to speak once again to Tyler Cowen. His most recent book is Talent: How to Identify Energizers, Creatives, and Winners Around the World. We discuss: - how sex is more pessimistic than he is, - why he expects society to collapse permanently, - why humility, stimulants, & intelligence are overrated, - how he identifies talent, deceit, & ambition, - & much much much more! Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. You may also enjoy my interviews with Bryan Caplan (about mental illness, discrimination, and poverty), David Deutsch (about AI and the problems with America’s constitution), and Steve Hsu (about intelligence and embryo selection). Timestamps (0:00) - Did Caplan Change On Education? (1:17) - Travel vs. History (3:10) - Do Institutions Become Left Wing Over Time? (6:02) - What Does Talent Correlate With? (13:00) - Humility, Mental Illness, Caffeine, and Suits (19:20) - How does Education affect Talent? (24:34) - Scouting Talent (33:39) - Money, Deceit, and Emergent Ventures (37:16) - Building Writing Stamina (39:41) - When Does Intelligence Start to Matter? (43:51) - Spotting Talent (Counter)signals (53:30) - Will Reading Cowen’s Book Help You Win Emergent Ventures? (1:02:15) - Existential risks and the Longterm (1:10:41) - Cultivating Young Talent (1:16:58) - The Lifespans of Public Intellectuals (1:24:36) - Is Stagnation Inevitable? (1:30:30) - What are Podcasts for? Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Charles C. Mann is the author of three of my favorite history books: 1491, 1493, and The Wizard and the Prophet. We discuss: * why Native American civilizations collapsed and why they failed to make more technological progress * why he disagrees with Will MacAskill about longtermism * why there aren’t any successful slave revolts * how geoengineering can help us solve climate change * why Bitcoin is like the Chinese Silver Trade * and much much more! Timestamps (0:00:00) - Epidemically Alternate Realities (0:00:25) - Weak Points in Empires (0:03:28) - Slave Revolts (0:08:43) - Slavery Ban (0:12:46) - Contingency & The Pyramids (0:18:13) - Teotihuacan (0:20:02) - New Book Thesis (0:25:20) - Gender Ratios and Silicon Valley (0:31:15) - Technological Stupidity in the New World (0:41:24) - Religious Demoralization (0:43:24) - Critiques of Civilization Collapse Theories (0:48:29) - Virginia Company + Hubris (0:52:48) - China’s Silver Trade (1:02:27) - Wizards vs. Prophets (1:07:19) - In Defense of Regulatory Delays (1:11:50) - Geoengineering (1:16:15) - Finding New Wizards (1:18:10) - Agroforestry is Underrated (1:27:00) - Longtermism & Free Markets Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Austin Vernon is an engineer working on a new method for carbon capture, and he has one of the most interesting blogs on the internet, where he writes about engineering, software, economics, and investing. We discuss how energy superabundance will change the world, how Starship can be turned into a kinetic weapon, why nuclear is overrated, blockchains, batteries, flying cars, finding alpha, & much more! Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow Austin on Twitter. Follow me on Twitter for updates on future episodes. Timestamps (0:00:00) - Intro (0:01:53) - Starship as a Weapon (0:19:24) - Software Productivity (0:41:40) - Car Manufacturing (0:57:39) - Carbon Capture (1:16:53) - Energy Superabundance (1:25:09) - Storage for Cheap Energy (1:31:25) - Travel in Future (1:33:27) - Future Cities (1:39:58) - Flying Cars (1:43:26) - Carbon Shortage (1:48:03) - Nuclear (2:12:44) - Solar (2:14:44) - Alpha & Efficient Markets (2:22:51) - Conclusion Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Steve Hsu is a Professor of Theoretical Physics at Michigan State University and cofounder of the company Genomic Prediction. We go deep into the weeds on how embryo selection can make babies healthier and smarter. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow Steve on Twitter. Follow me on Twitter for updates on future episodes. Timestamps (0:00:14) - Feynman’s advice on picking up women (0:11:46) - Embryo selection (0:24:19) - Why hasn't natural selection already optimized humans? (0:34:13) - Aging (0:43:18) - First Mover Advantage (0:53:38) - Genomics in dating (0:59:20) - Ancestral populations (1:07:07) - Is this eugenics? (1:15:08) - Tradeoffs to intelligence (1:24:25) - Consumer preferences (1:29:34) - Gwern (1:33:55) - Will parents matter? (1:44:45) - Wordcels and shape rotators (1:56:45) - Bezos and brilliant physicists (2:09:35) - Elite education Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Will MacAskill is one of the founders of the Effective Altruist movement and the author of the upcoming book, What We Owe The Future. We talk about improving the future, risk of extinction & collapse, technological & moral change, problems of academia, who changes history, and much more. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Episode website + Transcript here. Follow Will on Twitter. Follow me on Twitter for updates on future episodes. Subscribe to find out about future episodes! Timestamps (00:23) - Effective Altruism and Western values (07:47) - The contingency of technology (12:02) - Who changes history? (18:00) - Longtermist institutional reform (25:56) - Are companies longtermist? (28:57) - Living in an era of plasticity (34:52) - How good can the future be? (39:18) - Contra Tyler Cowen on what’s most important (45:36) - AI and the centralization of power (51:34) - The problems with academia Please share if you enjoyed this episode! Helps out a ton! Transcript Dwarkesh Patel 0:06 Okay, today I have the pleasure of interviewing William MacAskill. Will is one of the founders of the Effective Altruism movement, and most recently, the author of the upcoming book, What We Owe The Future. Will, thanks for coming on the podcast. Will MacAskill 0:20 Thanks so much for having me on. Effective Altruism and Western values Dwarkesh Patel 0:23 My first question is: What is the high-level explanation for the success of the Effective Altruism movement? Is it itself an example of the contingencies you talk about in the book? Will MacAskill 0:32 Yeah, I think it is contingent. Maybe not on the order of, “this would never have happened,” but at least on the order of decades. Evidence that Effective Altruism is somewhat contingent is that similar ideas have been promoted many times during history, and not taken on. 
We can go back to ancient China, where the Mohists defended an impartial view of morality and took very strategic actions to help all people, in particular providing defensive assistance to cities under siege. Then, there were early utilitarians. Effective Altruism is broader than utilitarianism, but has some similarities. Even Peter Singer in the 70s had been promoting the idea that we should be giving most of our income to help the very poor — and didn’t get a lot of traction until the early 2010s, after GiveWell and Giving What We Can launched. What explains the rise of it? I think it was a good idea waiting to happen. At some point, the internet helped to gather together a lot of like-minded people, which wasn’t possible otherwise. There were some particularly lucky events, like Elie meeting Holden and me meeting Toby, that helped catalyze it at the particular time it did. Dwarkesh Patel 1:49 If it's true, as you say in the book, that moral values are very contingent, then shouldn't that make us suspect that modern Western values aren't that good? They're mediocre, or worse, because ex ante, you would expect to end up with a median of all the values we could have had at this point. Obviously, we'd be biased in favor of whatever values we were brought up in. Will MacAskill 2:09 Absolutely. Taking history seriously and appreciating the contingency of values means appreciating that if the Nazis had won the World War, we would all be thinking, “wow, I'm so glad that moral progress happened the way it did, and we don't have Jewish people around anymore. What huge moral progress we had then!” That's a terrifying thought. I think it should make us take seriously the fact that we're very far away from the moral truth. One of the lessons I draw in the book is that we should not think we're at the end of moral progress. 
We should not think, “Oh, we should lock in the Western values we have.” Instead, we should spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than whichever happened to win out. Dwarkesh Patel 2:56 So that makes a lot of sense. But I'm asking a slightly separate question. Not only are there possible values that could be better than ours - we also have the sense that we've made moral progress, that things are better than they were before, or better than in most possible other worlds in 2100 or 2200. Should we not expect that to be the case? Should our priors be that these are ‘meh’ values? Will MacAskill 3:19 Our priors should be that our values are as good as expected on average. Then you can make an assessment like, “Are the values of today going particularly well?” There are some arguments you could make for saying no. Perhaps if the Industrial Revolution had happened in India, rather than in Western Europe, we wouldn't have wide-scale factory farming—which I think is a moral atrocity. Having said that, my view is that we're doing better than average. If civilization were just a redraw, then things would look worse in terms of our moral beliefs and attitudes. The abolition of slavery, the feminist movement, liberalism itself, democracy—these are all things that we could have lost and are huge gains. Dwarkesh Patel 4:14 If that's true, does that make the prospect of a long reflection dangerous? If moral progress is a random walk, and we've ended up with a lucky lottery, then you're possibly reversing it. Maybe you're risking regression to the mean if you just have 1,000 years of reflection. Will MacAskill 4:30 Moral progress isn't a random walk in general. There are many forces that act on culture and on what people believe. One of them is, “What’s right, morally speaking? What do the best arguments support?” I think it's a weak force, unfortunately. 
The idea of the long reflection is getting society into a state where, before we take any drastic actions that might lock in a particular set of values, we allow this force of reason and empathy and debate and good-hearted moral inquiry to guide which values we end up with. Are we unwise? Dwarkesh Patel 5:05 In the book, you make this interesting analogy where humans at this point in history are like teenagers. But another common impression that people have of teenagers is that they disregard wisdom and tradition and the opinions of adults too early and too often. And so, do you think it makes sense to extend the analogy this way, and suggest that we should be Burkean Longtermists and reject these inside-view esoteric threats? Will MacAskill 5:32 My view is the opposite of the Burkean view. We are cultural creatures by nature, and are very inclined to agree with what other people think even if we don't understand the underlying mechanisms. That works well in a low-change environment. The environment we evolved in didn't change very much. We were hunter-gatherers for hundreds of thousands of years. Now, we're in this period of enormous change, where the economy is doubling every 20 years and new technologies arrive every single year. That's unprecedented. It means that we should be trying to figure things out from first principles. Dwarkesh Patel 6:34 But at current margins, do you think that's still the case? If a lot of EA and longtermist thought is first principles, do you think that more history would be better than the marginal first-principles thinker? Will MacAskill 6:47 Two things. If it's about an understanding of history, then I'd love EA to have a better historical understanding. The most important subject if you want to do good in the world is philosophy of economics. But we've got that in abundance compared to there being very little historical knowledge in the EA community. Should there be even more first-principles thinking? 
First-principles thinking paid off pretty well in the course of the Coronavirus pandemic. From January 2020, my Facebook wall was completely saturated with people freaking out, or taking it very seriously in a way that the existing institutions weren't. The existing institutions weren't properly updating to a new environment and new evidence. The contingency of technology Dwarkesh Patel 7:47 In your book, you point out several examples of societies that went through hardship. Hiroshima after the bombings, Europe after the Black Death—they seem to have rebounded relatively quickly. Does this make you think that perhaps the role of contingency in history, especially economic history, is not that large? And does it imply a Solow model of growth? That even if bad things happen, you can rebound and it really didn't matter? Will MacAskill 8:17 In economic terms, that's the big difference between economic or technological progress and moral progress. In the long run, economic or technological progress is very non-contingent. The Egyptians had an early version of the steam engine; semaphore was only developed very late yet could have been invented thousands of years earlier. But in the long run, the instrumental benefits of tech progress, and the incentives towards tech progress and economic growth, are so strong that we get there in a wide array of circumstances. Imagine there are thousands of different societies, and none are growing except for one. In the long run, that one becomes the whole economy. Dwarkesh Patel 9:10 It seems that particular example you gave of the Egyptians having some ancient form of a steam engine points towards there being more contingency? Perhaps because the steam engine comes up in many societies, but it only gets turned into an industrial revolution in one? Will MacAskill 9:22 In that particular case, there's a big debate about whether the quality of metalwork made it actually possible to build a proper steam engine at that time. 
I mentioned those as some amazing examples of contingency prior to the Industrial Revolution. It's still contingency on the order of centuries to thousands of years. In the post-Industrial Revolution world, there's much less contingency. It's much harder to see technologies that wouldn't have happened within
Joseph Carlsmith is a senior research analyst at Open Philanthropy and a doctoral student in philosophy at the University of Oxford. We discuss utopia, artificial intelligence, computational power of the brain, infinite ethics, learning from the fact that you exist, perils of futurism, and blogging. Watch on YouTube. Listen on Spotify, Apple Podcasts, etc. Episode website + Transcript here. Follow Joseph on Twitter. Follow me on Twitter. Subscribe to find out about future episodes! Timestamps (0:00:06) - Introduction (0:02:53) - How to Define a Better Future? (0:09:19) - Utopia (0:25:12) - Robin Hanson’s EMs (0:27:35) - Human Computational Capacity (0:34:15) - FLOPS to Emulate Human Cognition? (0:40:15) - Infinite Ethics (1:00:51) - SIA vs SSA (1:17:53) - Futurism & Unreality (1:23:36) - Blogging & Productivity (1:28:43) - Book Recommendations (1:30:04) - Conclusion Please share if you enjoyed this episode! Helps out a ton! Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Fin Moorhouse is a Research Scholar and assistant to Toby Ord at Oxford University's Future of Humanity Institute. He co-hosts the Hear This Idea podcast, which showcases new thinking in philosophy, the social sciences, and effective altruism. We discuss for-profit entrepreneurship for altruism, space governance, morality in the multiverse, podcasting, the long reflection, and the Effective Ideas & EA criticism blog prize. Watch on YouTube. Listen on Spotify, Apple Podcasts, etc. Episode website + Transcript here. Follow Fin on Twitter. Follow me on Twitter. Subscribe to find out about future episodes! Timestamps (0:00:10) - Introduction (0:02:45) - EA Prizes & Criticism (0:09:47) - Longtermism (0:12:52) - Improving Mental Models (0:20:50) - EA & Profit vs Nonprofit Entrepreneurship (0:30:46) - Backtesting EA (0:35:54) - EA Billionaires (0:38:32) - EA Decisions & Many Worlds Interpretation (0:50:46) - EA Talent Search (0:52:38) - EA & Encouraging Youth (0:59:17) - Long Reflection (1:03:56) - Long Term Coordination (1:21:06) - On Podcasting (1:23:40) - Audiobooks Imitating Conversation (1:27:04) - Underappreciated Podcasting Skills (1:38:08) - Space Governance (1:42:09) - Space Safety & 1st Principles (1:46:44) - Von Neumann Probes (1:50:12) - Space Race & First Strike (1:51:45) - Space Colonization & AI (1:56:36) - Building a Startup (1:59:08) - What is EA Underrating? (2:10:07) - EA Career Steps (2:15:16) - Closing Remarks Please share if you enjoyed this episode! Helps out a ton! Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Alexander Mikaberidze is Professor of History at Louisiana State University and the author of The Napoleonic Wars: A Global History. He explains the global ramifications of the Napoleonic Wars - from India to Egypt to America. He also talks about how Napoleon was the last of the enlightened despots, whether he would have made a good startup founder, how the Napoleonic Wars accelerated the industrial revolution, the roots of the war in Ukraine, and much more! Watch on YouTube, or listen on Spotify, Apple Podcasts, or any other podcast platform. Episode website + Transcript here. Follow Professor Mikaberidze on Twitter. Follow me on Twitter for updates on future episodes. Subscribe to find out about future episodes! Timestamps: (0:00:00) - Alexander Mikaberidze, Professor of History and author of “The Napoleonic Wars” (0:01:19) - The allure of Napoleon (0:13:48) - The advantages of multiple colonies (0:27:33) - The Continental System and the industrial revolution (0:34:49) - Napoleon’s legacy (0:50:38) - The impact of Napoleonic Wars (1:01:23) - Napoleon as a startup founder (1:14:02) - The advantages of war and how it shaped international government and, to some extent, political structures Please share if you enjoyed this episode! Helps out a ton! Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I flew to the Bahamas to interview Sam Bankman-Fried, the CEO of FTX! He talks about FTX’s plan to infiltrate traditional finance, giving $100m this year to AI + pandemic risk, scaling slowly + hiring A-players, and much more. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Episode website + Transcript here. Follow me on Twitter for updates on future episodes Subscribe to find out about future episodes! Timestamps (00:18) - How inefficient is the world? (01:11) - Choosing a career (04:15) - The difficulty of being a founder (06:21) - Is effective altruism too narrowminded? (09:57) - Political giving (12:55) - FTX Future Fund (16:41) - Adverse selection in philanthropy (18:06) - Correlation between different causes (22:15) - Great founders do difficult things (25:51) - Pitcher fatigue and the importance of focus (28:30) - How SBF identifies talent (31:09) - Why scaling too fast kills companies (33:51) - The future of crypto (35:46) - Risk, efficiency, and human discretion in derivatives (41:00) - Jane Street vs FTX (41:56) - Conflict of interest between broker and exchange (42:59) - Bahamas and Charter Cities (43:47) - SBF’s RAM-skewed mind Unfortunately, audio quality abruptly drops from 17:50-19:15 Transcript Dwarkesh Patel 0:09 Today on The Lunar Science Society Podcast, I have the pleasure of interviewing Sam Bankman-Fried, CEO of FTX. Thanks for coming on The Lunar Society. Sam Bankman-Fried 0:17 Thanks for having me. How inefficient is the world? Dwarkesh Patel 0:18 Alright, first question. Does the consecutive success of FTX and Alameda suggest to you that the world has all kinds of low-hanging opportunities? Or was that a property of the inefficiencies of crypto markets at one particular point in history? Sam Bankman-Fried 0:31 I think it's more of the former, there are just a lot of inefficiencies. 
Dwarkesh Patel 0:35 So then another part of the question is: if you had to restart earning to give again, what are the odds you become a billionaire, but you can't do it in crypto? Sam Bankman-Fried 0:42 I think they're pretty decent. A lot of it depends on what I ended up choosing and how aggressive I end up deciding to be. There were a lot of safe and secure career paths before me that definitely would not have ended there. But if I dedicated myself to starting up some businesses, there would have been a pretty decent chance of it. Choosing a career Dwarkesh Patel 1:11 So that leads to the next question—which is that you've cited Will MacAskill's lunch with you while you were at MIT as being very important in deciding your career. He suggested you earn-to-give by going to a quant firm like Jane Street. In retrospect, given the success you've had as a founder, was that maybe bad advice? And maybe you should’ve been advised to start a startup or nonprofit? Sam Bankman-Fried 1:31 I don't think it was literally the best possible advice because this was in 2012. Starting a crypto exchange then would have been…. I think it was definitely helpful advice. Relative to not having gotten advice at all, I think it helps quite a bit. Dwarkesh Patel 1:50 Right. But then there's a broader question: are people like you who could become founders advised to take lower-variance, lower-risk careers that, in expected value, are less valuable? Sam Bankman-Fried 2:02 Yeah, I think that's probably true. I think people are advised too strongly to go down safe career paths. But I think it's worth noting that there's a big difference between what makes sense altruistically and personally for this. To the extent you're just thinking of personal criteria, that's going to argue heavily in favor of a safer career path because you have much more quickly declining marginal utility of money than the world does. So, this kind of path is specifically for altruistically-minded people. 
The other thing is that when you think about advising people, I think people will often try and reference career advice that others got. “What were some of these outward-facing factors of success that you can see?” But often the answer has something to do with them and their family, friends, or something much more personal. When we talk with people about their careers, personal considerations and the advice of people close to them weigh very heavily on the decisions they end up making. Dwarkesh Patel 3:17 I didn't realize that the personal considerations were as important in your case as the advice you got. Sam Bankman-Fried 3:24 Oh, I don’t think in my case. But, it is true with many people that I talked to. Dwarkesh Patel 3:29 Speaking of declining marginal consumption, I'm wondering if you think the implication of this is that over the long term, all the richest people in the world will be utilitarian philanthropists because they don't have diminishing returns of consumption. They’re risk-neutral. Sam Bankman-Fried 3:40 I wouldn't say all will, but I think there probably is something in that direction. People who are looking at how they can help the world are going to end up being disproportionately represented amongst the most and maybe least successful. The difficulty of being a founder Dwarkesh Patel 3:54 Alright, let’s talk about Effective Altruism. So in your interview with Tyler Cowen, you were asked, “What constrains the number of altruistically minded projects?” And you answered, “Probably someone who can start something.” Now, is this a property of the world in general? Or is this a property of EAs? And if it's about EAs, then is there something about the movement that drives away people who could take leadership roles? Sam Bankman-Fried 4:15 Oh, I think it's just the world in general.
Even if you ignore altruistic projects and just look at profit-minded ones, we have lots of ideas for businesses that we think would probably do well, if they were run well, that we'd be excited to fund. And the missing ingredient quite frequently for them is the right person or team to take the lead on it. In general, starting something is brutal. It's brutal being a founder, and it requires a somewhat specific but extensive list of skills. Those things end up making it in high demand. Dwarkesh Patel 4:56 What would it take to get more of those kinds of people to go into EA? Sam Bankman-Fried 4:59 Part of it is probably just talking with them about, “Have you thought about what you can do for the world? Have you thought about how you can have an impact on the world? Have you thought about how you can maximize your impact on the world?” Many people would be excited about thinking critically and ambitiously about how they can help the world. So I think honestly, just engagement is one piece of this. And then even within people who are altruistically minded and thinking about what it would take for them to be founders, there are still things that you can do. Some of this is about empowering people and some of this is about normalizing the fact that when you start something, it might fail—and that's okay. Most startups, and especially very early-stage startups, should not be trying to maximize the chances of having at least a little bit of success. But that means you have to be okay with the personal fallout of failing and that we have to build a community that is okay with that. I don't think we have that right now, I think very few communities do. Is effective altruism too narrow-minded? Dwarkesh Patel 6:21 Now, there are many good objections to utilitarianism, as you know. You said yourself that we don't have a good account of infinite ethics—should we attribute substantial weight to the probability that utilitarianism is wrong?
And how do you hedge for this moral uncertainty in your giving? Sam Bankman-Fried 6:35 So I don't think it has a super large impact on my giving. Partially, because you'd need to have a concrete proposal for what else you would do that would be different actions-wise—and I don't know that I've been compelled by many of those. I do think that there are a lot of things we don't understand right now. And one thing that you pointed to is infinite ethics. Another thing is that (I'm not sure this is moral uncertainty, this might be physical uncertainty) there are a lot of sort of chains of reasoning people will go down that are somewhat contingent on our current understanding of the universe—which might not be right. And if you look at expected-value outcomes, might not be right. Say what you will about the size of the universe and what that implies, but some of the same people make arguments based on how big the universe is and also think the simulation hypothesis has decent probability. Very few people chain through, “What would that imply?” I don't think it's clear what any of this implies. If I had to say, “How have these considerations changed my thoughts on what to do?” The honest answer is that they have changed it a little bit. And the direction that they pointed me in is things with moderately more robust impact. And what I mean by that is, sure, one way that you can calculate the expected value of an action is, “Here's what's going to happen. Here are the two outcomes, and here are the probabilities of them.” Another thing you can do is say - it's a little bit more hand-wavy - but, “How much better is this going to make the world? How much does it matter if the world is better in generic diffuse ways?” Typically, EA has been pretty skeptical of that second line of reasoning—and I think correctly. When you see that deployed, it's nonsense.
Usually, when people are pretty hard to nail down on the specific reasoning of why they think that something might be good, it’s because they haven't thought that hard about it or don't want to think that hard about it. The much better analyzed and vetted pathways are the ones we should be paying attention to. That being said, I do think that sometimes EA gets too narrow-minded and specific about
Agustin Lebron began his career as a trader and researcher at Jane Street Capital, one of the largest market-making firms in the world. He currently runs the consulting firm Essilen Research, where he is dedicated to helping clients integrate modern decision-making approaches in their business. We discuss how AI will change finance, why adverse selection makes trading and hiring so difficult, & what the future of crypto holds. Watch on YouTube, or listen on Spotify, Apple Podcasts, or any other podcast platform. Episode website here. Buy The Laws of Trading. Follow Agustin on Twitter. Follow me on Twitter for updates on future episodes. Subscribe to find out about future episodes! Timestamps: (00:00) - Introduction (04:18) - What happens in adverse selection? (09:22) - Why is having domain expertise in trading not important? (15:09) - How do you deal when you're on the other side of the adverse selection? (21:16) - Why you should invest in training your people? (25:37) - Is finance too big at 9% of GDP? (31:06) - Trading is very labor intensive (36:16) - Overlap of rationality community and trading (48:00) - The age of startup founders (50:43) - The role of market makers in crypto (57:31) - Three books that you recommend (58:47) - Life is long, not short (1:03:01) - Short history of Lunar Society Please share if you enjoyed this episode! Helps out a ton! Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Ananyo Bhattacharya is the author of The Man from the Future: The Visionary Life of John von Neumann. He is a science writer who has worked at the Economist and Nature. Before journalism, he was a medical researcher at the Burnham Institute in San Diego, California. He holds a degree in physics from the University of Oxford and a PhD in protein crystallography from Imperial College London. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Episode website here. Follow Ananyo on Twitter. Follow me on Twitter for updates on future episodes. Timestamps: (0:00:30) - John Von Neumann - The Man From The Future (0:02:29) - The Forgotten Father of Game Theory (0:16:04) - The last representative of the great mathematicians (0:19:45) - Did John Von Neumann have a Miracle year? (0:26:31) - The fundamental theorem of John von Neumann’s game theory (0:29:34) - The strong supporter of “Preventive War” (0:50:51) - We can't all be superhuman Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Stephen Grugett is a cofounder of Manifold Markets, where anyone can create a prediction market. We discuss how prediction markets can change how countries and companies make important decisions. Manifold Markets: https://manifold.markets/ Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Episode website here. Follow me on Twitter for updates on future episodes. Timestamps: (0:00:00) - Introduction (0:02:29) - Predicting the future (0:05:16) - Getting Accurate Information (0:06:20) - Potentials (0:09:29) - Not using internal prediction markets (0:11:04) - Doing the painful thing (0:13:31) - Decision Making Process (0:14:52) - Grugett’s opinion about insider trading (0:16:23) - The Role of prediction market (0:18:17) - Dealing with the Speculators (0:20:33) - Criticism of Prediction Markets (0:22:24) - The world when people cared about prediction markets (0:26:10) - Grugett’s Profile Background/Experience (0:28:49) - User Result Market (0:30:17) - The most important mechanism (0:32:59) - The 1000 manifold dollars (0:40:30) - Efficient financial markets (0:46:28) - Manifold Markets Job/Career Openings (0:48:02) - Objectives of Manifold Markets Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Today I talk to Pradyu Prasad (blogger and podcaster) about the book "Hirohito and the Making of Modern Japan" by Herbert P. Bix. We also discuss militarization, industrial capacity, current events, and blogging. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Podcast website here. Follow Pradyu on Twitter. Follow me on Twitter for updates on future episodes. Follow Pradyu's Blog: https://brettongoods.substack.com/ Timestamps: (0:00:00) - Intro (0:01:59) - Hirohito and Introduction to the Book (0:05:39) - Meiji Restoration and Japan's Rapid Industrialization (0:11:11) - Industrialization and Traditional Military Norms (0:14:50) - Alternate Causes for Japanese Atrocities (0:17:03) - Richard Hanania's Public Choice Theory in Imperial Japan (0:21:34) - Hirohito's Relationship with the Military (0:24:33) - Rant of Japanese Strategy (0:33:10) - Modern Parallel to Russia/Ukraine (0:38:22) - Economics of War and Western War Capacity (0:48:14) - Elements of Effective Occupation (0:55:53) - Ideological Fervor in WW2 Japan (0:59:25) - Cynicism on Elites (1:00:29) - The Legend of Godlike Hirohito (1:06:47) - Postwar Japanese Economy (1:13:23) - Blogging and Podcasting (1:20:31) - Spooky (1:38:00) - Outro Please share if you enjoyed this episode! Helps out a ton! Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Razib Khan is a writer, geneticist, and blogger with an interest in history, genetics, culture, and evolutionary psychology. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Podcast website here. Follow Razib on Twitter. Follow me on Twitter for updates on future episodes. Thanks for reading The Lunar Society! Subscribe to find out about future episodes! Time Stamps (0:00:05) Razib's Background (0:01:34) Dysgenics of Intelligence (0:04:23) Endogamy and Genetic traits in India (0:08:58) Similar Examples of Endogamy (0:14:28) Why So Many Brahmin CEOs (0:19:55) Razib the Globe Trotter, Geography Expert (0:25:04) Male/Female Genetic Variance (0:30:04) Agricultural Man and Our Tiny Brains (0:34:40) The Church of Science (0:42:33) Professorship, a family business (0:44:23) Long History (0:52:42) Future of Human-Computer Interfacing (0:56:30) Near Future of Gene Editing (0:59:19) Meta Questions and Closing Please share if you enjoyed this episode! Helps out a ton! Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Jimmy Soni is the author of The Founders: The Story of Paypal and the Entrepreneurs Who Shaped Silicon Valley. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Episode website here. Follow Jimmy on Twitter. Follow me on Twitter for updates on future episodes! Timestamps: (0:00:00) - Bell Labs vs PayPal (0:05:12) - Scenius in Ancient Rome and America's Founding (0:07:02) - Girard at PayPal (0:15:17) - Thiel almost shorts the Dot com bubble (0:19:49) - Does Zero to One contradict PayPal's story? (0:27:57) - Hilarious Russian hacker story (0:29:06) - Why is Thiel so good at spotting talent? (0:34:50) - Did PayPal make talent or discover it? (0:40:40) - Japanese mafia invests in PayPal?! (0:44:42) - Upcoming TV show on PayPal (0:48:11) - Musk in ancient Rome (0:52:12) - Why didn't Musk keep pursuing finance? (0:56:32) - Why didn't the mafia get back together? (1:00:06) - Jimmy's writing process Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I interview the economist Bryan Caplan about his new book, Labor Econ Versus the World, and many other related topics. Bryan Caplan is a Professor of Economics at George Mason University and a New York Times Bestselling author. His most famous works include: The Myth of the Rational Voter, Selfish Reasons to Have More Kids, The Case Against Education, and Open Borders: The Science and Ethics of Immigration. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Podcast website here. Follow Bryan on Twitter. Follow me on Twitter for updates on future episodes. Timestamps: (0:00:00) - Intro (0:00:33) - How many workers are useless, and why is labor force participation so low? (0:03:47) - Is getting out of poverty harder than we think? (0:10:43) - Are elites to blame for poverty? (0:14:56) - Is human nature to blame for poverty? (0:19:11) - Remote work and foreign wages (0:24:43) - The future of the education system? (0:29:31) - Do employers care about the difficulty of a curriculum? (0:33:13) - Why do companies and colleges discriminate against Asians? (0:42:01) - Applying Hanania's unitary actor model to mental health (0:50:38) - Why are multinationals so effective? (0:53:37) - Open borders and cultural norms (0:58:13) - Is Tyler Cowen right about automation? Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Richard Hanania is the President of the Center for the Study of Partisanship and Ideology and the author of Public Choice Theory and the Illusion of Grand Strategy: How Generals, Weapons Manufacturers, and Foreign Governments Shape American Foreign Policy. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Episode website here. Follow Richard on Twitter. Follow me on Twitter for updates on future episodes. Read Richard's Substack: https://richardhanania.substack.com/ Timestamps: (0:00:00) - Intro (0:04:35) - Did war prevent sclerosis? (0:06:05) - China vs America's grand strategy (0:10:00) - Does the president have more power over foreign policy? (0:11:30) - How to deter bad actors? (0:15:39) - Do some countries have a coherent foreign policy? (0:16:55) - Why does self-interest matter in foreign but not domestic policy? (0:21:05) - Should we limit money in politics? (0:23:47) - Should we credit expertise for nuclear détente and global prosperity? (0:28:45) - Have international alliances made us safer? (0:31:57) - Why does academic bureaucracy work in some fields? (0:36:26) - Did academia suck even before diversity? (0:39:34) - How do we get expertise in social sciences? (0:42:19) - Why are things more liberal? (0:43:55) - Why is big tech so liberal? (0:47:53) - Authoritarian populism vs libertarianism (0:51:40) - Can authoritarian governments increase fertility? (0:54:54) - Will increasing fertility be dysgenic? (0:56:43) - Will not having kids become cool? (0:59:22) - Advice for libertarians? Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
David Deutsch is the founder of the field of quantum computing and the author of The Beginning of Infinity and The Fabric of Reality. Read me contra David on AI. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript with helpful links here. Follow David on Twitter. Follow me on Twitter for updates on future podcasts. Timestamps (0:00:00) - Will AIs be smarter than humans? (0:06:34) - Are intelligence differences immutable / heritable? (0:20:13) - IQ correlation of twins separated at birth (0:27:12) - Do animals have bounded creativity? (0:33:32) - How powerful can narrow AIs be? (0:36:59) - Could you implant thoughts in VR? (0:38:49) - Can you simulate the whole universe? (0:41:23) - Are some interesting problems insoluble? (0:44:59) - Does America fail Popper's Criterion? (0:50:01) - Does finite matter mean there's no beginning of infinity? (0:53:16) - The Great Stagnation (0:55:34) - Changes in epistemic status in Popperianism (0:59:29) - Open ended science vs gain of function (1:02:54) - Contra Tyler Cowen on civilizational lifespan (1:07:20) - Fun criterion (1:14:16) - Does AGI through evolution require suffering? (1:18:01) - Would David enter the Experience Machine? (1:20:09) - (Against) Advice for young people Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Byrne Hobart writes The Diff, a newsletter about inflections in finance and technology with 24,000+ subscribers. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Episode website here. The Diff newsletter: https://diff.substack.com/ Follow Byrne on Twitter. Follow me on Twitter for updates on future episodes! Thanks for reading The Lunar Society! Subscribe for free to receive new posts and support my work. Timestamps: (0:00:00) - Byrne's one big idea: stagnation (0:05:50) - Has regulation caused stagnation? (0:14:00) - FDA retribution (0:15:15) - Embryo selection (0:17:32) - Patient longtermism (0:21:02) - Are there secret societies? (0:26:53) - College, optionality, and conformity (0:34:40) - Differentiated credentials underrated? (0:39:15) - Will conscientiousness increase in value? (0:44:26) - Why aren't rationalists more into finance? (0:48:04) - Rationalists are bad at changing the world. (0:52:20) - Why read more? (0:57:10) - Does knowledge have increasing returns? (1:01:30) - How to escape the middle career trap? (1:04:48) - Advice for young people (1:08:40) - How to learn about a subject? Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
David Friedman is a famous anarcho-capitalist economist and legal scholar. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Episode website + transcript here. David Friedman's website: http://www.daviddfriedman.com/ Follow me on Twitter for updates on future episodes. Timestamps: (0:00:00) - Dating market (0:12:15) - The future of reputation (0:27:30) - How Friedman predicted bitcoin (0:35:35) - Prediction markets (0:40:00) - Can regulation stop progress globally? (0:45:50) - Lack of diversity in modern legal systems (0:54:20) - Friedman's theory of property rights (1:01:50) - Charles Murray's scheme to fight regulations (1:06:25) - Property rights of the poor (1:09:07) - Automation (1:16:00) - Economics of medieval reenactment (1:19:00) - Advice for futurist young people Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Sarah Fitz-Claridge is a writer, coach, and speaker with a fallibilist worldview. She started the journal that became Taking Children Seriously in the early 1990s after being surprised by the heated audience reactions she was getting when talking about children. She has spoken all over the world about her educational philosophy, and you can find transcripts of some of her talks on her website. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Episode website here. Sarah's Website: https://www.fitz-claridge.com/ Follow Sarah on Twitter. Follow me on Twitter for updates. Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Michael Huemer is a professor of philosophy at the University of Colorado. He is the author of more than sixty academic articles in epistemology, ethics, metaethics, metaphysics, and political philosophy, as well as eight amazing books. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Podcast website here. Buy Knowledge, Reality, and Value and The Problem of Political Authority. Read Michael’s awesome blog and follow me on Twitter for new episodes. Timestamps: (0:00:00) - Intro (0:01:07) - The Problem of Political Authority (0:03:25) - Common sense ethics (0:09:39) - Stockholm syndrome and the charisma of power (0:18:14) - Moral progress (0:26:55) - Growth of libertarian ideas (0:33:37) - Does anarchy increase violence? (0:44:37) - Transitioning to anarchy (0:47:20) - Is Huemer attacking our society?! (0:51:40) - Huemer's writing process (0:53:18) - Is it okay to work for the government (0:56:39) - Burkean argument against anarchy (1:02:07) - The case for tyranny (1:11:58) - Underrated/overrated (1:25:55) - Huemer production function (1:30:41) - Favorite books (1:33:04) - Advice for young people Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Robert Martin (aka Uncle Bob) is a programming pioneer and bestselling author of Clean Code. We discuss the prospect of automating programming, spotting and developing coding talent, occupational licensing, quotas, and the elusive sense of style. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Listen to his fascinating talk on the future of programming: https://youtu.be/ecIWPzGEbFc Read his blog about programming: http://blog.cleancoder.com/ Buy his books on Amazon: https://www.amazon.com/kindle-dbs/ent... Thanks for reading The Lunar Society! Subscribe to find out about future episodes! Timestamps (0:00) - Automating programming (8:40) - Educating programmers (expertise, talent, university) (21:45) - Spotting talent (26:10) - Teaching kids (29:31) - Prose and music sense in coding (32:22) - Occupational licensing for programmers (35:49) - Why is tech political (39:28) - Quotas (42:29) - Advice to 20 yr old Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Scott Aaronson is a Professor of Computer Science at The University of Texas at Austin, and director of its Quantum Information Center. He's the author of one of the most interesting blogs on the internet: https://www.scottaaronson.com/blog/ and the book “Quantum Computing since Democritus”. He was also my professor for a class on quantum computing. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Episode website here. Follow me on Twitter to get updates on future episodes and guests. Timestamps (0:00) - Intro (0:33) - Journey through high school and college (12:37) - Early work (19:15) - Why quantum computing took so long (33:30) - Contributions from outside academia (38:18) - Busy beaver function (53:50) - New quantum algorithms (1:03:30) - Clusters (1:06:23) - Complexity and economics (1:13:26) - Creativity (1:24:07) - Advice to young people Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Scott is the author of Ultralearning and famous for the MIT Challenge, where he taught himself MIT's 4 year Computer Science curriculum in 1 year. I had a blast chatting with Scott Young about aggressive self-directed learning. Scott has some of the best advice out there about learning hard things. It has helped yours truly prepare to interview experts and dig into interesting subjects. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Podcast website here. Check out Scott’s website. Follow me on Twitter for updates on future episodes. Buy Scott’s book on Ultralearning: https://amzn.to/3TuPEbf Timestamps (00:00) - Intro (01:00) - Einstein (13:20) - Age (18:00) - Transfer (24:40) - Compounding (34:00) - Depth vs context (40:50) - MIT challenge (1:00:50) - Focus (1:10:00) - Role models (1:20:30) - Progress studies (1:24:25) - Early work and ambition (1:28:18) - Advice for 20 yr old (1:35:00) - Raising a genius baby? Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
I ask Charles Murray about Human Accomplishment, By The People, and The Curmudgeon's Guide to Getting Ahead. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow Charles on Twitter. Follow me on Twitter for updates on future episodes. Timestamps (00:00) - Intro (01:00) - Writing Human Accomplishment (06:30) - The Lotka curve, age, and miracle years (10:38) - Habits of the greats (hard work) (15:22) - Focus and explore in your 20s (19:57) - Living in Thailand (23:02) - Peace, wealth, and golden ages (26:02) - East, west, and religion (30:38) - Christianity and the Enlightenment (34:44) - Institutional sclerosis (37:43) - Antonine Rome, decadence, and declining accomplishment (42:13) - Crisis in social science (45:40) - Can secular humanism win? (55:00) - Future of Christianity (1:03:30) - Liberty and accomplishment (1:06:08) - By the People (1:11:17) - American exceptionalism (1:14:49) - Pessimism about reform (1:18:43) - Can libertarianism be resuscitated? (1:25:18) - Trump's deregulation and judicial nominations (1:28:11) - Beating the federal government (1:32:05) - Why don't big companies have a litigation fund? (1:34:05) - Getting around the Halo effect (1:36:07) - What happened to the Madison fund? (1:37:00) - Future of liberty (1:41:00) - Public sector unions (1:43:43) - Andrew Yang and UBI (1:44:36) - Groundhog Day (1:47:05) - Getting noticed as a young person (1:50:48) - Passage from Human Accomplishment Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Alex Tabarrok is a professor of economics at George Mason University and, with Tyler Cowen, a founder of the online education platform http://MRU.org. I ask Alex Tabarrok about the Grand Innovation Prize, the Baumol effect, and Dominant Assurance Contracts. Watch on YouTube, or listen on Spotify, Apple Podcasts, or any other podcast platform. Episode website here. Follow Alex on Twitter. Follow me on Twitter for updates on future episodes. Alex Tabarrok's and Tyler Cowen's excellent blog: https://marginalrevolution.com/ Thanks for reading The Lunar Society! Subscribe to find out about future episodes! Timestamps: (00:00) - Intro (00:34) - Grand Innovation Prize (08:45) - Prizes vs grants (14:10) - Baumol effect (27:50) - On Bryan Caplan's case against education (31:35) - Scaling education online (48:50) - Declining research productivity (52:15) - Dominant Assurance Contracts (58:40) - Future of governance (1:04:05) - On Robin Hanson's Futarchy (1:06:02) - Beating Adam Smith (1:08:35) - Our Warfare-Welfare State (1:19:30) - The Great Stagnation vs The Innovation Renaissance (1:21:40) - Advice to 20 year olds Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Caleb Watney is the director of innovation policy at the Progressive Policy Institute. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Episode website here. Follow Caleb on Twitter. Follow me on Twitter for updates on future episodes. Caleb's new blog: https://www.agglomerations.tech/ Timestamps (00:00) - Intro (00:20) - America's innovation engine is slowing (01:02) - Remote work / agglomeration effects (08:45) - Chinese vs American innovation (16:23) - Reforming institutions (19:00) - Tom Cotton's critique of high-skilled immigration (22:26) - Eric Weinstein's critique of high-skilled immigration (26:02) - Reforming H-1B (30:30) - Immigration during recession (32:55) - Big tech / AI (38:20) - EU regulation (40:07) - Biden vs Trump (42:30) - Federal R&D (47:20) - Climate megaprojects (49:35) - Falling fertility rates (52:20) - Advice to 20 year olds Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Robin Hanson is a professor of economics at George Mason University. He is the author of The Elephant in the Brain and The Age of Em. Robin's Twitter: https://twitter.com/robinhanson Robin's blog: https://www.overcomingbias.com/ Robin's website: http://mason.gmu.edu/~rhanson/home.html My blog: https://dwarkeshpatel.com/ My Twitter: https://twitter.com/dwarkesh_sp 00:05 The long view 15:07 Subconscious vs conscious intelligence 20:28 Meditators 26:50 Signaling, norms, and motives 36:50 Conversation 42:54 2020 election nominees 49:25 Nerds in startups and social science 54:50 Academia and Robin 58:20 Dominance explains paternalism 1:09:32 Remote work 1:21:26 Advice for 20 yr old 1:28:05 Idea futures 1:32:13 Reforming institutions Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Jason Crawford writes at The Roots of Progress about the history of technology and industry and the philosophy of progress. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Podcast website here. Follow Jason on Twitter. Follow me on Twitter for updates on future episodes. Jason's website: https://jasoncrawford.org/ The Roots of Progress: https://rootsofprogress.org/ Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Matjaž Leonardis has co-written a paper with David Deutsch about the Popper-Miller Theorem. In this episode, we talk about that as well as the dangers of the scientific identity, the nature of scientific progress, and advice for young people who want to be polymaths. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Podcast website here. Follow Matjaž's excellent Twitter. Follow me on Twitter for updates on future episodes! Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Tyler Cowen is Holbert L. Harris Professor of Economics at George Mason University and also Director of the Mercatus Center. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Transcript + Podcast website here. Follow Tyler Cowen on Twitter. Follow me on Twitter for updates on future episodes. Timestamps (0:00) - The Great Reset (2:58) - Growth and the cyclical view of history (4:00) - Time horizons, growth, and sustainability (5:30) - Space travel (8:11) - WMDs and end of humanity (10:57) - Common sense morality (12:20) - China and authoritarianism (13:45) - Are big businesses complacent? (17:15) - Online education vs university (20:45) - Aesthetic decline in West Virginia (23:20) - Advice for young people (25:18) - Mentors (27:15) - Identifying talent (29:50) - Can adults change? (31:45) - Capacity to change men vs women (33:10) - Are effeminate societies better? (35:15) - Conservatives and progress (36:50) - Biggest mistake in history (39:05) - Nuke in my lifetime (40:35) - Age and learning (42:45) - Pessimistic future (43:50) - Optimistic future (46:28) - Closing Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Bryan Caplan is a Professor of Economics at George Mason University and a New York Times Bestselling author. His most famous works include: The Myth of the Rational Voter, Selfish Reasons to Have More Kids, The Case Against Education, and Open Borders: The Science and Ethics of Immigration. I talk to Bryan about open borders, the idea trap, UBI, appeasement, China, the education system, and Bryan Caplan's next two books on poverty and housing regulation. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Follow Bryan on Twitter. Follow me on Twitter for updates on future episodes. Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe