What happens when a startup's mission perfectly aligns with the biggest trend in tech? That's exactly where Guild AI finds itself in 2026. James Everingham, CEO of Guild AI, joins The Deep View Conversations to talk about building a safety layer for AI agents. The product launched in fall 2025 and found itself at the center of the most important movement in enterprise AI just months later. In this conversation, James breaks down how Guild's platform deploys dozens of workflow-specific agents across different parts of a business, while giving developers the tools to iterate, spin up custom agents, and operate in a safe environment that tracks everything agents do and protects companies from unpredictable outcomes. Topics covered:
+ The enduring power of bottom-up innovation
+ How Guild AI's agent supervision platform works
+ Why safety infrastructure is the new competitive moat
+ Lessons from James' earlier career at Netscape and Meta
+ Open-source vs. proprietary models: how it plays out over the next few years
+ A standout leadership tip for sparking innovation on your team
+ His best advice for getting maximum impact from today's AI tools
If you want to understand where AI agents are headed and what it takes to build them responsibly, this conversation is a powerful place to start. Subscribe for weekly conversations with the leaders shaping the future of AI. And don't forget to sign up for The Deep View daily newsletter. We don’t just cover AI, we decode it. In a world flooded with hype, we deliver sharp, no-nonsense insights to keep you ahead of the curve and help you put AI to work every day: subscribe.thedeepview.com
Snowflake has been a stalwart of the SaaS economy and a leader in enterprise data for the past decade. But the company is deep in the middle of a transformation that most people haven't recognized yet. In this episode of The Deep View Conversations, senior reporter Nat Rubio-Licht talks with Baris Gultekin, vice president of AI at Snowflake, for a candid look at how the company is navigating the AI era and what it's learning in real time. Gultekin talks openly about how the entire team inside Snowflake is now using coding tools to build skills and automate their work. That includes non-developers who are using Project SnowWork, an AI agent for professionals across all roles. Baris joined Snowflake in 2023 through the acquisition of blockchain startup nxyz and has spent the past three years building and running the AI teams inside the enterprise tech giant. He also brings a rare perspective from his time working on Google Assistant in the pre-LLM era, which gives him a unique lens on how much has changed. Topics covered:
+ Why there is no AI strategy without a data strategy, and what enterprises keep getting wrong
+ How agentic AI has shifted enterprise data from question-answering to automation
+ The SaaSpocalypse and why Snowflake sees AI as a tailwind rather than a threat
+ Cortex Code (CoCo), Snowflake's coding agent that lets customers query their data in plain language instead of SQL
+ The governance and security challenges that come with multi-agent systems
+ How Baris uses coding agents in his own life
If you want to understand how a mature SaaS company reinvents itself inside an AI revolution, this one is worth your time. Subscribe to the podcast for more conversations with the leaders, builders, and researchers shaping the future of AI. And don't forget to sign up for The Deep View daily newsletter. We don’t just cover AI, we decode it.
In a world flooded with hype, we deliver sharp, no-nonsense insights to keep you ahead of the curve and help you put AI to work every day: subscribe.thedeepview.com
Stefan Weitz thinks AI should not just make companies faster. It should make every individual inside them dramatically more capable. In this episode of The Deep View: Conversations, the HumanX CEO explains how he’s putting that idea into practice, starting with his own team. HumanX calls itself the most important AI conference of the year, but what makes it stand out is how intentionally it’s being built. Stefan and his team are rethinking the entire event experience for 2026, using AI to create a more personalized journey for every attendee, from curated sessions to smarter networking and discovery. In this episode, we talk about:
+ Building a new kind of conference in the AI era
+ Using vibe coding for rapid prototyping
+ Management principles for the AI era
+ Turning every worker into a 10x employee
+ The AI tool that’s blowing Stefan’s mind
But this conversation goes beyond events. It’s really about leadership in the age of AI. Along the way, Stefan offers a clear view of what it takes to build an organization that can actually apply AI, not just talk about it. He also shares a leadership principle that has shaped his approach. If you’re thinking about how to scale impact across your team, or how to move from AI curiosity to real execution, this conversation delivers practical insights you can use right away. Subscribe to the podcast for more conversations with the leaders, builders, and researchers shaping the future of AI. And don't forget to sign up for The Deep View daily newsletter. We don’t just cover AI, we decode it. In a world flooded with hype, we deliver sharp, no-nonsense insights to keep you ahead of the curve and help you put AI to work every day: subscribe.thedeepview.com
In this episode of The Deep View: Conversations, we talk with Olivia Moore, partner at Andreessen Horowitz (a16z), one of Silicon Valley’s flagship venture capital firms. At a16z, Olivia focuses on the rapidly evolving world of consumer AI apps. She tracks which tools are gaining traction, which ones are breaking out beyond early adopters, and which products are unlocking entirely new capabilities for everyday users. In this conversation, we explore the key trends shaping the next wave of AI apps, including the rise of personal AI agents, the growing importance of context and memory in AI systems, and the way new tools are changing how people build, create, and work. We cover:
+ a16z’s Top 100 Gen AI Consumer Apps report and what it reveals
+ The rise of OpenClaw and personal AI agents in 2026
+ Olivia’s current AI stack and how she uses her favorite tools
+ Why context and memory could define the next stage of AI
+ Global trends shaping the AI app ecosystem
+ The acceleration of coding agents
Few people have their finger on the pulse of the AI app ecosystem like Olivia. If you want to understand which AI tools are gaining momentum and where the next breakthroughs may come from, this conversation offers a valuable window into the space. There’s a great chance you’ll come away from this episode with at least one tool or idea that changes the way you work. Subscribe to the podcast for more conversations with the leaders, builders, and researchers shaping the future of AI. And don't forget to sign up for The Deep View daily newsletter. We don’t just cover AI, we decode it. In a world flooded with hype, we deliver sharp, no-nonsense insights that keep our audience ahead of the curve and help them put AI to work every day: subscribe.thedeepview.com
In a labor market being rewired by AI, CodeSignal is betting that skills, not resumes, will decide who thrives. For this episode of The Deep View: Conversations, I talked with Tigran Sloyan, CEO and co-founder of CodeSignal, the company building a new standard for hiring and career mobility in the age of AI. CodeSignal’s mission starts with a simple but painful truth: resumes and interviews are a flawed way to hire talent. Countless candidates have the skills to thrive in high-paying tech roles but never get a fair shot, while others with polished credentials sometimes land jobs they’re not prepared to do. CodeSignal is flipping that equation with skills-based assessments that help employers discover candidates with real ability, and a free learning platform that helps candidates level up for the next opportunity. In my conversation with Tigran, we talked about:
+ Why resumes haven’t meaningfully changed in 100 years, and why that's breaking hiring
+ How CodeSignal measures skills, and why simulation beats multiple-choice
+ What AI unlocks for assessing non-technical roles such as sales and support
+ The dark side of AI: what CodeSignal’s research shows about cheating attempts
+ Why entry-level jobs are turning into tasks, and what that means for training
+ How CodeSignal makes free learning content work economically
+ The future of re-skilling at scale, and why AI tutoring changes everything
We also dig into what’s changing fast right now: the rise of AI-assisted work, the surge in fraud in hiring assessments, and why foundational skills still matter even when AI can do the task. Tigran shares his background from Armenia to MIT to Google, his most contrarian leadership advice, and the AI tool he'd recommend you start using every day. If you want to understand how AI is being used to fix the problems that AI is causing in the job market, this is the podcast for you.
Subscribe to The Deep View: Conversations podcast in your favorite podcast player for more unique conversations with the brightest minds solving the biggest challenges in AI. You can also subscribe on YouTube. And don't forget to sign up for The Deep View daily newsletter. We don’t just cover AI, we decode it. In a world flooded with hype, we deliver sharp, no-nonsense insights that keep our audience ahead of the curve and help them put AI to work every day: subscribe.thedeepview.com
In this episode of The Deep View Conversations, we talked with Wasim Khaled, CEO of Blackbird AI, to explore a provocative idea: What happens when reality itself becomes hackable? Long before generative AI went mainstream, Wasim and his cofounder launched Blackbird to tackle disinformation and narrative manipulation. Their thesis was bold: that part of modern cybersecurity conflict had shifted from infrastructure to information, from networks to narratives. It turned out to be prescient. As AI supercharges the speed, scale, and realism of malicious content — from deepfakes to coordinated influence campaigns — Blackbird has emerged as the leader in combating narrative attacks. In fact, Gartner recently named Blackbird the company to beat in disinformation narrative intelligence in its report on the AI Vendor Race. In our conversation, we explore:
+ What “narrative attacks” really are and why they’re so hard to detect
+ How AI has fundamentally changed the disinformation battlefield
+ Reactive vs. proactive defense strategies in cybersecurity
+ How Blackbird evolved from a lab experiment into a national security player
+ Why leaders relying on chatbots instead of AI agents are already falling behind
Wasim also shares how he optimizes his time for maximum leverage, and offers his advice for founders navigating fast-moving technology shifts. If you care about cybersecurity, AI, information warfare, or the future of leadership in the age of intelligent agents, this is a conversation you'll want to hear. Subscribe to The Deep View: Conversations podcast in your favorite podcast player for more unique conversations with the brightest minds solving the biggest challenges in AI. You can also subscribe on YouTube. And don't forget to sign up for The Deep View daily newsletter. We don’t just cover AI, we decode it.
In a world flooded with hype, we deliver sharp, no-nonsense insights that keep our audience ahead of the curve and help them put AI to work every day: subscribe.thedeepview.com
What does it actually take for enterprises to adopt AI at scale? In this episode of Deep View Conversations, we sit down with Shibani Ahuja, Senior Vice President of Enterprise IT Strategy at Salesforce. Over the past year, Shibani has met with 587 C-suite leaders to understand how Salesforce can evolve into an agentic AI platform for the world’s largest organizations. We unpack what she’s learned from those conversations, including the real blockers to AI adoption, how leading enterprises are progressing, and why shared context and trust matter more than raw model capabilities. Shibani also breaks down Salesforce’s Agentic Maturity Model, a framework designed to help organizations assess their current AI readiness and chart a path forward. We also explore:
+ How AI is reshaping the banking and financial services industry, where Shibani spent a good part of her career
+ The story of how Shibani joined Salesforce after challenging Marc Benioff and his leadership team as a customer
+ Why clear, jargon-free communication is one of the most underrated skills in AI, and how to do it well in high-stakes settings
Shibani is one of the most cogent communicators in tech today, and this conversation is packed with practical insights for anyone leading, building, or communicating about AI inside an organization or in public settings. Subscribe to The Deep View: Conversations podcast in your favorite podcast player for more unique conversations with the brightest minds solving the biggest challenges in AI. You can also subscribe on YouTube. And don't forget to sign up for The Deep View daily newsletter. We don’t just cover AI, we decode it. In a world flooded with hype, we deliver sharp, no-nonsense insights that keep our audience ahead of the curve and help them put AI to work every day: subscribe.thedeepview.com
AI agents in business aren't something that will happen in the future. They’re already here, and they're scaling a lot more rapidly than we expected. In this episode of The Deep View: Conversations, Editor-in-Chief Jason Hiner talks to Matt Yanchyshyn, who leads AWS Marketplace at Amazon Web Services. Yanchyshyn's team helps organizations discover, buy, and deploy software on AWS, and one of the biggest shifts they’ve seen over the past six months is the explosion of AI agents in real-world use cases. When AWS unveiled its agent marketplace in mid-2025, the internal goal was initially to launch with 50 agents. By early 2026, that number had surged past 2,600 agents, making it the fastest-growing category in the history of the world’s largest cloud platform. So what’s driving that surge? Yanchyshyn breaks it down. In this conversation, we cover:
+ Which types of AI agents are seeing the fastest enterprise adoption
+ The industries and use cases leading the charge
+ How companies are handling data security and sovereignty concerns
+ The role of multi-model orchestration in agent effectiveness
+ How AWS is using agents internally to drive wins across the business
If you're trying to understand where AI agents are actually being deployed — not the hype, but the reality — then this conversation will reset your expectations. It will help you see where agentic AI is already delivering business value, and where it’s heading next. Subscribe to The Deep View: Conversations in your favorite podcast player for more unique conversations with the brightest minds solving the biggest challenges in AI. You can also subscribe here on YouTube. And don't forget to sign up for The Deep View daily newsletter. We don’t just cover AI, we decode it.
In a world flooded with hype, we deliver sharp, no-nonsense insights that keep our audience ahead of the curve and help them put AI to work every day: subscribe.thedeepview.com Thank you to our sponsor, Deel, an AI-native platform for HR, IT, and payroll. Hire, manage, pay, and equip anyone, anywhere. https://www.deel.com/deepview
AI could change the way we remember, and the way we pay attention. In this episode of The Deep View: Conversations, Editor-in-Chief Jason Hiner sits down with Bobak Tavangar, CEO of Brilliant Labs, one of the most intriguing startups in AI hardware today. While trillion-dollar giants like Meta and Google race to define the future of AI glasses, Brilliant Labs is taking a radically different path: building in public, going open-source with both software and hardware, and centering their next product, the Halo glasses, around something deeply human. The focus? A conversational AI agent for your long-term memories and conversations. This isn’t just about smarter wearables. It’s about a bigger idea:
+ Can AI help us be more present, not less?
+ Could technology support memory, reflection, and intention instead of distraction?
+ What does privacy look like when AI can recall your life?
Jason and Bobak also explore:
+ What he learned during his time at Apple
+ Why AI hardware is one of the hardest frontiers in tech
+ The challenging process of finding a co-founder
+ Bobak’s philosophy on communicating on social media with purpose, not hype
Bobak is one of the most thoughtful founders in the AI space, consistently elevating the conversation beyond features and into questions of values, agency, and human experience. If you care about where AI, wearables, memory, and attention intersect, this is a conversation you don’t want to miss. Subscribe to the podcast for more unique conversations with the brightest minds solving the biggest challenges in AI. And don't miss The Deep View daily newsletter. We don’t just cover AI — we decode it. In a world flooded with hype, we deliver sharp, no-nonsense insights that keep our audience ahead of the curve and help them put AI to work every day: https://subscribe.thedeepview.com/
How do you make AI inference affordable enough to deliver real ROI in the enterprise? In this episode of The Deep View Conversations, we talk with Rob May, founder and CEO of Neurometric AI, to break down one of the most urgent challenges in AI today: the soaring cost of inference, and how to bring it down without sacrificing performance. Today's AI is increasingly powerful, but it’s also expensive. For enterprises to see real returns, inference costs have to drop dramatically. Neurometric believes the answer lies in "thinking algorithms" paired with small, specialized models and workload-specific optimization. This approach can significantly reduce costs while often improving accuracy and efficiency. Rob walks through how this works in practice and why it matters as AI moves from experimentation to scaled deployment. We also talk about:
+ Why the current AI boom pulled Rob back into operating a startup after multiple exits and a move into investing
+ How founders should think about AI infrastructure, efficiency, and long-term economics
+ What startup leaders can do to get journalists to pay attention — and a pivotal early-career conversation that led to coverage which changed the trajectory of one of Rob’s companies
If you’re building, deploying, or investing in AI and wrestling with the economics of inference, this conversation offers a clear, practical perspective on what comes next. Thank you to our sponsor, Deel, an AI-native platform for HR, IT, and payroll. Hire, manage, pay, and equip anyone, anywhere. https://www.deel.com/deepview Subscribe to the podcast for more unique conversations with the brightest minds solving the biggest challenges in AI. And don't miss The Deep View daily newsletter. We don’t just cover AI — we decode it. In a world flooded with hype, we deliver sharp, no-nonsense insights that keep our audience ahead of the curve and help them put AI to work every day: https://subscribe.thedeepview.com/
In this episode of The Deep View Conversations, we sit down with Merrill Lutsky, cofounder and CEO of Graphite — a company using artificial intelligence to transform how engineers write, review, and ship code. What started as an internal tool to streamline software deployment has grown into something larger: a vision for how AI can augment, not replace, the craft of engineering — and reshape how teams collaborate at scale. Merrill walks us through Graphite’s early pivots, the development of its AI reviewer Diamond, and how the company is rethinking the bottleneck that stands between building and shipping code. We also go beyond the product to explore deeper questions: Will coding become fully automated? How do we balance speed and safety in an era of AI-written software? And what does craftsmanship mean when machines start to create?
In this episode of The Deep View Conversations, we sit down with Patrick Leung, CTO of Faro Health — a startup using artificial intelligence to streamline and reimagine clinical trials. What began as a push to speed up drug development turned into something bigger: a mission to reduce suffering, elevate consciousness, and reshape how we think about AI’s role in health and society. Patrick walks us through the challenges of clinical trial design, the limits of large language models, and how thoughtful AI implementation could unlock faster, safer, and more inclusive access to medicine. We also go beyond tech to explore deep questions: Is AI truly intelligent? How do we balance speed and safety? And are we building tools — or something closer to gods?
In this episode of The Deep View Conversations, we sit down with Russ d'Sa, founder and CEO of LiveKit — the open-source infrastructure powering voice mode for OpenAI, Character.AI, and a fast-emerging voice-first internet. Russ walks us through how a pandemic side project evolved into the nervous system for the next generation of AI-powered voice applications. But this episode goes far beyond infrastructure. We dive into big, human questions: What does it mean to interact naturally with AI? Are we moving back toward voice as the dominant interface? How will AI reshape work, leisure, and even our sense of identity? From the democratization of high-tech tools to the societal shifts driven by intelligent systems, this is a wide-ranging and deeply thoughtful conversation on what comes next in the age of synthetic intelligence.
Boost your CRM with Salesforce – tdv.co/sf
In this episode of The Deep View Conversations, we explore the world of machine translation and artificial intelligence with Olga Beregovaya — Smartling’s VP of Machine Translation and Artificial Intelligence. Olga takes us through the evolution of machine translation, from rule-based systems to statistical models to today’s neural networks and large language models. She sheds light on how translation has shifted from being a purely linguistic endeavor to one that now sits at the intersection of data science, AI, and human creativity. We dive into hallucinations, under-resourced languages, the rise of synthetic data, and what it truly means to maintain quality in a multilingual world. If you've ever wondered what makes AI translation tick — and where it's going — this is the episode for you. Get The Deep View — your daily source for in-depth, fact-based reporting on artificial intelligence — in your inbox every morning. Subscribe here! (https://www.thedeepview.co/subscribe) Connect with us on X, TikTok and Instagram. Artificial intelligence is a complicated topic, bound by a number of complex threads — technical science, ethics, safety, regulation, neuroscience, psychology, philosophy, investment and — above all — humanity. On The Deep View: Conversations we break it all down, cutting through the hype to make clear what's important and why you should care.
In this episode of The Deep View Conversations, we dive into the chaotic world of social media, misinformation, and the growing need for scientific credibility with Brinleigh Murphy Reuter — founder of the Harvard-incubated nonprofit, Science to People. Brinleigh unpacks why it’s become so hard to find accurate health and science information online, and how her organization is using generative AI to fix the broken flow of facts between researchers, influencers, and everyday users. We explore everything from the overwhelming noise of algorithm-driven content and the dangers of viral misinformation, to how AI can empower creators with reliable, vetted science. If you’re curious about how tech, trust, and truth collide in the age of TikTok and ChatGPT, this is an episode you won’t want to miss. Get The Deep View — your daily source for in-depth, fact-based reporting on artificial intelligence — in your inbox every morning. Subscribe here! (https://www.thedeepview.co/subscribe) Connect with us on X, TikTok and Instagram. Artificial intelligence is a complicated topic, bound by a number of complex threads — technical science, ethics, safety, regulation, neuroscience, psychology, philosophy, investment and — above all — humanity. On The Deep View: Conversations we break it all down, cutting through the hype to make clear what's important and why you should care.
In this episode of The Deep View Conversations, we dive into the complex intersection of artificial intelligence and healthcare with Pelu Tran, co-founder and CEO of Ferrum Health. Pelu shares his deeply personal motivation behind founding Ferrum, why AI adoption in healthcare is slow, and how Ferrum aims to solve the “last mile” problem of bringing AI into clinical practice. We explore everything from trust in algorithms, systemic challenges, and outdated infrastructure to the ethical implications of AI in medicine and the risk of over-efficiency. If you’re curious about the real-world application of AI in hospitals and how to bridge the gap between innovation and implementation, this is a must-listen. Outline:
00:00 – Intro: AI and healthcare — promise vs. reality
01:00 – Meet Pelu Tran, CEO of Ferrum Health
03:10 – A personal tragedy leads to a mission: Ferrum’s origin
07:00 – The real reason AI adoption is slow in hospitals
10:00 – Infrastructure, security & why hospitals still use fax machines
13:30 – What Ferrum Health actually does
16:10 – Why FDA clearance isn’t enough for AI trust
19:00 – Ferrum’s approach: Validating models with AI itself
22:30 – AI performance drift & automated monitoring
25:00 – Diagnostic tools, LLMs, and AI's current limitations
28:40 – The illusion of language fluency and AI “hallucinations”
31:00 – Burnout, administrative burden & where AI helps
34:30 – Will AI speed things up or make hospitals worse?
38:00 – Hospital incentives & risks of productivity pressure
41:10 – AI replacing admin, not doctors (yet)
44:10 – Communicating AI performance to clinicians
47:30 – Measuring outcomes, not just accuracy
50:00 – Trust, governance, and safe deployment
53:15 – Why flexibility is key for AI in healthcare
56:00 – The future: Safe disruption vs. blind disruption
59:00 – Will AI replace doctors? Why not anytime soon
1:01:00 – Final thoughts: AI’s promise and pitfalls
Get The Deep View — your daily source for in-depth, fact-based reporting on artificial intelligence — in your inbox every morning. Subscribe here! (https://www.thedeepview.co/subscribe) Connect with us on X, TikTok and Instagram. Artificial intelligence is a complicated topic, bound by a number of complex threads — technical science, ethics, safety, regulation, neuroscience, psychology, philosophy, investment and — above all — humanity. On The Deep View: Conversations we break it all down, cutting through the hype to make clear what's important and why you should care.
In this gripping episode, we dive into the life-saving mission behind ZeroEyes, a pioneering company using artificial intelligence to detect visible weapons before shots are fired. The Deep View Team sits down with co-founder and Chief Revenue Officer Sam Alaimo, a former Navy SEAL, to explore how a tragic school shooting and military discipline inspired a groundbreaking solution to one of America’s most pressing issues: gun violence.
I sat down with Dr. Aaron Andalman, the Chief Science Officer and co-founder of Cognitiv. Andalman holds a PhD in neuroscience, and so today, we’re breaking down the gaps, connections and inspirations between AI and neuroscience; all the things we’ve learned and the many things we still don’t know. Episode links:
The giant squid: https://medium.com/the-quantastic-journal/from-squids-to-ai-how-neuroscience-and-physics-sparked-a-technological-revolution-06d8c291717c
Neuroscience and AI: https://neuroscience.stanford.edu/news/neuroscience-and-ai-what-artificial-intelligence-teaches-us-about-brain-and-vice-versa
The specter of AGI: https://www.thedeepview.co/p/how-big-tech-is-using-the-ai-race-and-the-specter-of-agi-to-cement-its-power-agi-openai-google
Scale it up: https://www.thedeepview.co/p/progress-predictions-2025-robots-avs-and-technical-advancements
Animal intelligence: https://www.discovermagazine.com/planet-earth/how-intelligence-is-measured-in-the-animal-kingdom
Outline:
0:00 – Intro
1:45 – The roots of AI
7:12 – The different types of intelligence
12:05 – Neural networks vs. artificial neural networks
19:19 – Efficiency in intelligence
23:51 – Why pursue AGI at all?
28:44 – Reinforcement learning in machines vs. animals
35:25 – Advancements in one, advancements in the other
43:46 – Being curious in the age of AI
Get The Deep View — your daily source for in-depth, fact-based reporting on artificial intelligence — in your inbox every morning. Subscribe here! (https://www.thedeepview.co/subscribe) Connect with us on X, TikTok and Instagram. Artificial intelligence is a complicated topic, bound by a number of complex threads — technical science, ethics, safety, regulation, neuroscience, psychology, philosophy, investment and — above all — humanity. On The Deep View: Conversations, Ian Krietzberg, host and Editor-in-Chief at The Deep View, breaks it all down, cutting through the hype to make clear what's important and why you should care.
I sat down with Brad Zamft, the co-founder and CEO of Heritable Agriculture, to take a deep dive into all the science (both biology and computer science) behind the effort to program plants, why it’s needed and what impacts it might have. Episode links:
Heritable Agriculture: https://heritable.ag/
Heritable goes after indoor strawberries: https://heritable.ag/heritable-strawberries
The sustainability threat of farming: https://vlsci.com/blog/top-issues-in-agriculture-2024/ ; https://www.epa.gov/climateimpacts/climate-change-impacts-agriculture-and-food-supply
The risks of monoculture and monocropping: https://foodrevolution.org/blog/monocropping-monoculture/
The promise of regenerative agriculture: https://theclimatecenter.org/our-work/research/report-the-promise-of-regenerative-agriculture/
Farming resiliency in the face of climate change: https://sustainability.mit.edu/article/making-agriculture-more-resilient-climate-change
Outline:
0:00 – Intro
2:18 – Brad’s journey to Heritable Agriculture
9:29 – Why we need programmable plants
14:23 – Challenges of biology
19:38 – Validating the models
22:14 – How does this all physically work?
31:05 – The challenge of adjusting 2 billion years of evolutionary success
34:48 – The risks of AI cracking plant DNA
39:50 – Regenerative agriculture and tuning for resiliency in the face of climate change
47:53 – How farmers view the approach
51:59 – Tree adjustments
56:28 – The future outlook
Get The Deep View — your daily source for in-depth, fact-based reporting on artificial intelligence — in your inbox every morning. Subscribe here! (https://www.thedeepview.co/subscribe) Connect with us on X, TikTok and Instagram. Artificial intelligence is a complicated topic, bound by a number of complex threads — technical science, ethics, safety, regulation, neuroscience, psychology, philosophy, investment and — above all — humanity.
On The Deep View: Conversations, Ian Krietzberg, host and Editor-in-Chief at The Deep View, breaks it all down, cutting through the hype to make clear what's important and why you should care.
I sat down with Dr. Nada Sanders, a distinguished professor of supply chain management at Northeastern University, to better understand the impact that tariffs and a trade war could have on the business and field of AI. Episode Links:
The semiconductor pipeline: https://datacenterpost.com/ais-hardware-hunger-the-global-semiconductor-supply-chain-under-pressure/
The latest on the tariffs and trade war: https://www.bbc.com/news/articles/c62z54gwd22o
Nvidia’s US push: https://www.thedeepview.co/p/nvidia-to-produce-500-billion-worth-of-supercomputers-in-the-u-s-for-the-first-time
Apple’s US push: https://www.apple.com/newsroom/2025/02/apple-will-spend-more-than-500-billion-usd-in-the-us-over-the-next-four-years/
Outline:
0:00 – Intro
4:09 – The vulnerability of the AI supply chain
19:13 – Is it realistic to bring production back to the US?
27:31 – Innovation could plateau
34:04 – The challenge of navigating uncertainty, even if the tariffs come off
Get The Deep View — your daily source for in-depth, fact-based reporting on artificial intelligence — in your inbox every morning. Subscribe here! (https://www.thedeepview.co/subscribe) Connect with us on X, TikTok and Instagram. Artificial intelligence is a complicated topic, bound by a number of complex threads — technical science, ethics, safety, regulation, neuroscience, psychology, philosophy, investment and — above all — humanity. On The Deep View: Conversations, Ian Krietzberg, host and Editor-in-Chief at The Deep View, breaks it all down, cutting through the hype to make clear what's important and why you should care.
I sat down with Dr. Eric Sydell, the founder and CEO of Vero AI, to break down the challenges of oversight, governance and compliance — and the techno-utopia on the horizon — and the ways in which AI can help, hurt and, generally, disrupt everything.

Episode links:
Ethan Mollick: https://www.linkedin.com/posts/emollick_if-ai-development-stopped-this-week-we-would-activity-7272747981752176640-nTKX/
What even is AI: https://www.ibm.com/think/topics/artificial-intelligence
Sam Altman says we must regulate AI: https://www.thedeepview.co/p/paris-ai-summit-the-smoke-and-mirrors-of-governance
EU AI Act: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Challenge of AI regulation: https://www.thedeepview.co/p/report-the-misguided-race-to-regulation
Nurses and AI: https://www.thedeepview.co/p/current-harms-and-the-real-world-impacts-of-algorithmic-decision-making
Preslav Nakov fact-checking LLMs: https://aclanthology.org/2024.findings-acl.558/

Outline:
0:00 – Intro
2:28 – AI for compliance
4:43 – Overcoming reliability problems
10:16 – Toys VS tools
15:54 – Keeping up with the rate of ‘progress’
22:03 – The challenge of regulation
28:55 – Balancing AI with the bottom line
34:38 – The problem with ‘Abundance’
43:11 – How to get the good without all the bad
51:17 – Running to and running from technology
55:02 – Looking for optimism
I sat down with Dr. Stefan Leichenauer, SandboxAQ's VP of Engineering, to break down the ways in which he’s bringing quantum and AI together. Check out our breakdown of quantum computing: https://youtu.be/-umrjwGFTRw

Episode links:
Quantum sensing: https://www.baesystems.com/en-us/definition/what-is-quantum-sensing
Sandbox LQMs: https://www.sandboxaq.com/
Quantum computers: https://www.ibm.com/think/topics/quantum-computing
Sandbox drug discovery: https://www.sandboxaq.com/solutions/aqbiosim
Sandbox materials generation: https://www.sandboxaq.com/post/building-better-batteries-with-lqms

Outline:
0:00 – Intro
1:36 – How does Sandbox leverage quantum?
4:35 – What makes a sensor a quantum sensor?
8:00 – What would a quantum computer need to do to be ready for use?
11:30 – What reliable quantum computers mean for Sandbox and AI
15:18 – Sandbox’s Large Quantitative Models
27:52 – Specialist systems VS generalist systems
35:41 – Black boxes and LQMs
37:45 – LQMs and hallucination
41:50 – LQMs and drug discovery
50:00 – The cost associated with LQMs
53:37 – LQMs and materials generation
56:55 – Future outlook
At HumanX, I sat down with Dr. David Cox — the VP for AI models at IBM Research and the IBM Director of the MIT-IBM Watson AI Lab — to dissect those complex, nuanced differences between biological brains and the artificial neural networks behind LLMs, and how it all relates to the pursuit of AGI.

Episode links:
Neural networks: https://www.ibm.com/think/topics/neural-networks
Convolutional neural networks: https://www.geeksforgeeks.org/introduction-convolution-neural-network/
Everything we know about the human brain: https://ny.pbslearningmedia.org/resource/stn15.sci.neuro.colbrain/what-we-still-dont-know-about-the-brain/
MIT Flywire diagram: https://www.nature.com/immersive/d42859-024-00053-4/index.html
Language and thought are not the same thing: https://mcgovern.mit.edu/2019/05/02/ask-the-brain-can-we-think-without-language/
Decoding internal monologues: https://pmc.ncbi.nlm.nih.gov/articles/PMC7252628/
Anthropic’s anthropomorphization: https://www.thedeepview.co/p/the-public-health-crisis-of-ai
AGI and existential risk: https://www.thedeepview.co/p/the-nobel-prize-and-the-mainstreaming-of-ai-s-x-risk#4

Outline:
0:00 — Intro
2:07 — What is intelligence?
4:36 — How AI boosts our understanding of the brain
8:09 — The differences between neurons and artificial neurons
12:13 — How much do we know — and not know — about biological brains
15:49 — LLMs and the illusion of intelligence
20:07 — Language vs Thought
23:37 — Should we train models not to use the first person?
28:04 — The pursuit of AGI
34:26 — IBM’s model approach
35:49 — AGI is a distraction
40:02 — How big a deal is AI?
45:26 — The risk of the AI brain drain
I sat down with Igor Jablokov, the founder and chairman of Pryon, to talk about the ways in which the field of AI has grown and changed, and where it might go from here. Igor worked as a program director at IBM, developing an early iteration of IBM Watson, before he struck out on his own. His first startup, Yap, was later acquired by Amazon, where it evolved into Alexa.

Episode Links:
Self-driving progress: https://www.thedeepview.co/p/progress-predictions-2025-robots-avs-and-technical-advancements
Reuters copyright lawsuit: https://www.thedeepview.co/p/study-most-people-can-t-identify-deepfakes
Musk v Altman: https://www.entrepreneur.com/business-news/musk-v-altman-billionaires-attorneys-face-off-in-court/486647
Chatbots, mental health and Character AI: https://www.thedeepview.co/p/i-downloaded-character-ai-it-s-profoundly-disturbing
The AI bubble: https://www.thedeepview.co/p/what-the-trade-war-might-mean-for-ai
The power struggle of AGI: https://www.thedeepview.co/p/how-big-tech-is-using-the-ai-race-and-the-specter-of-agi-to-cement-its-power-agi-openai-google
AI in the enterprise: https://www.thedeepview.co/p/humanx-ai-s-inflection-point-conference-artifcial-intelligence

Outline:
0:00 – Intro
2:15 – How the AI field has changed
7:43 – The five taboos Silicon Valley broke
12:11 – The ‘adulting’ of AI
15:27 – How big of a deal might AI be?
17:52 – The hyperscalers won’t get to AGI
21:42 – Digital god and ‘synthetic slaves’
25:19 – X-Risk and the safety debate
32:53 – Brute-forcing intelligence
37:59 – Cracking AI in the enterprise
44:50 – The bubble
48:15 – AI is an orchestra
I sat down with Shaolei Ren, an associate professor of Electrical and Computer Engineering at the University of California, Riverside, to break down his recent research into the water consumption, electricity consumption and public health impact of the data centers being used to power generative AI.

Episode Links:
The water consumption of GenAI: https://arxiv.org/pdf/2304.03271
The public health crisis of GenAI: https://arxiv.org/pdf/2412.06288
Water consumption VS water use: https://www.wri.org/insights/whats-difference-between-water-use-and-water-consumption
PepsiCo’s water consumption: https://www.pepsico.com/our-impact/esg-topics-a-z/water#approach
Google and Microsoft’s water consumption: https://www.thedeepview.co/p/google-emissions-are-spiking-due-to-increased-energy-demands-of-ai
Elon Musk’s Memphis data center: https://www.thedeepview.co/p/the-public-health-crisis-of-artificial-intelligence

Outline:
0:00 – Intro
1:27 – The water consumption of AI
7:30 – The difference between water ‘consumption’ and water ‘use’
16:15 – The impact of reasoning models
18:03 – A solution to AI’s water problem
19:58 – The public health cost of AI
27:55 – Addressing the problem
33:52 – The dichotomy of the AI industry
42:37 – A cost-benefit analysis
48:49 – Looking ahead
I sat down with Dr. Chris Bishop, a Microsoft technical fellow and the director of Microsoft Research AI for Science, to sink into the details of what AI is actually unlocking for science, and what kind of AI is doing it.

Episode Links:
5th paradigm of scientific discovery: https://cacm.acm.org/opinion/the-5th-paradigm-ai-driven-scientific-discovery/
Microsoft Aurora: https://www.microsoft.com/en-us/research/blog/introducing-aurora-the-first-large-scale-foundation-model-of-the-atmosphere/
The environmental cost of AI: https://www.thedeepview.co/p/the-public-health-crisis-of-artificial-intelligence
MatterGen: https://www.thedeepview.co/p/report-ai-is-everywhere-already-even-if-we-don-t-know-it
The dubious nature of AGI: https://www.thedeepview.co/p/the-nobel-prize-and-the-mainstreaming-of-ai-s-x-risk

Outline:
0:00 – Intro
2:13 – The importance of domain expertise in AI
5:17 – The promise of AI for science
12:47 – Project Aurora and climate modeling
15:27 – The cost-benefit analysis of AI
18:43 – Weather prediction models VS ChatGPT
21:36 – AI to shield against engineering disasters
24:36 – How does Microsoft decide which applications to pursue
27:32 – Microsoft Research’s main areas of focus
33:24 – MatterGen and material generation
39:24 – How the wet lab is changing
42:19 – Is AGI a worthwhile pursuit for scientific advancement?
47:43 – Technological optimism and the way forward
I sat down with Liran Hason, the VP of AI at Coralogix, to talk AI guardrails: what they look like, how they work and why we need them.

Episode links:
AI has been around for decades: https://www.weforum.org/stories/2024/10/history-of-ai-artificial-intelligence/
Deterministic VS probabilistic AI: https://towardsdatascience.com/deterministic-vs-probabilistic-deep-learning-5325769dc758/
AI in the enterprise: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work
Air Canada lawsuit: https://www.forbes.com/sites/marisagarcia/2024/02/19/what-air-canada-lost-in-remarkable-lying-ai-chatbot-case/
Character AI sued: https://www.thedeepview.co/p/character-ai-sued-for-mental-health-decline-in-teenage-users-allegedly-encouraged-user-to-murder-his
AI trade runs out of steam: https://www.thedeepview.co/p/nvidia-beats-expectations-but-wall-street-wanted-more

Outline:
0:00 – Intro
1:57 – AI guardrails … in 2019
4:51 – How consumers view guardrails in AI
11:03 – Tempered excitement in the enterprise
13:05 – Why do we need guardrails
17:59 – The role of regulation
24:07 – Why don’t developers guardrail their systems?
28:30 – Detecting hallucinations with guardrail models
34:11 – Guardrailing AGI
41:33 – Is the AI revolution slowing?
44:18 – How guardrails enable a positive outcome
I sat down with Dr. Ruchir Puri, the chief scientist of IBM Research, for a wide-ranging discussion on the state of AI today. We talk about everything from AGI and X-risk to reliability in language models, the viability of agents and the pending economic impact of the technology.

Episode Links:
Artificial general intelligence: https://www.thedeepview.co/p/the-nobel-prize-and-the-mainstreaming-of-ai-s-x-risk
Microsoft and OpenAI’s AGI definition: https://www.theinformation.com/articles/microsoft-and-openais-secret-agi-definition?rc=sbfmgm
AI agents: https://www.thedeepview.co/p/prog-predictions-series-draft-ef9c081b34af8540
AI and job loss: https://www.thedeepview.co/p/chatgpt-s-impact-on-the-labor-market-openai-generative-ai-artificial-intelligence-job-loss-report-st
Technological revolution: https://www.thedeepview.co/p/south-korea-stakes-a-claim-in-the-ai-race
The Free Press article mentioned at 38:29: https://www.thefp.com/p/freddie-de-boer-is-ai-the-greatest-invention-or-overhyped

Outline:
0:00 – Intro
3:16 – The pace of improvement
8:56 – Unraveling ‘AGI’
14:47 – The inflection point of AI agents
21:54 – Thinking about reliability
26:14 – The economic implications of AI
32:37 – Is AI like the Industrial Revolution?
38:29 – How useful is AI, actually?
43:57 – Existential risk
I sat down with Dor Skuler, the founder and CEO of Intuition Robotics, to talk about the advent of digital companionship.

Episode Links:
Intuition Robotics: https://www.intuitionrobotics.com/
The loneliness epidemic: https://www.gse.harvard.edu/ideas/usable-knowledge/24/10/what-causing-our-epidemic-loneliness-and-how-can-we-fix-it
Social determinants of health: https://www.cdc.gov/public-health-gateway/php/about/social-determinants-of-health.html
Fine-tuning AI models: https://www.ibm.com/think/topics/fine-tuning
Cybersecurity and AI: https://www.gartner.com/en/cybersecurity/topics/cybersecurity-and-ai
Character AI and anthropomorphization: https://www.thedeepview.co/p/character-ai-sued-for-mental-health-decline-in-teenage-users-allegedly-encouraged-user-to-murder-his

Outline:
0:00 — Intro
3:05 — Why Dor started Intuition Robotics
7:22 — The importance of a targeted demographic
11:00 — Solving for loneliness
15:30 — How ElliQ works, technically
28:35 — ElliQ and data privacy
35:11 — The ethics of artificial companionship
40:22 — Humanity and AI companionship
We sat down with Irina Raicu, the director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University. And today, we’re breaking down the ethics of artificial intelligence: the moral and philosophical challenges that are already resulting from the deployment of generative AI technologies.

Episode Links:
AI is using social media data: https://www.cnn.com/2024/09/23/tech/social-media-ai-data-opt-out/index.html
Musical.ly’s FTC fine: https://apnews.com/article/lifestyle-technology-business-data-privacy-parenting-eafed6bfd7d241549e097e38e8088873
X trains AI on your tweets: https://www.cnet.com/tech/services-and-software/x-is-using-your-tweets-to-train-its-ai-heres-how-to-disable-that/
ChatGPT’s impact on the labor market: https://www.thedeepview.co/p/chatgpt-s-impact-on-the-labor-market-openai-generative-ai-artificial-intelligence-job-loss-report-st
Human creativity persists in the era of generative AI: https://www.thestreet.com/technology/human-creativity-persists-era-of-generative-artificial-intelligence
Character AI: https://www.thedeepview.co/p/i-downloaded-character-ai-it-s-profoundly-disturbing
The AI mirror: https://ssir.org/books/reviews/entry/the-ai-mirror-shannon-vallor
Digital necromancy: https://www.thedeepview.co/p/elevenlabs-latest-release-highlights-the-issue-of-digital-necromancy
Hello Barbie: https://www.scu.edu/ethics/internet-ethics-blog/speaking-ill-of-the-discontinued/
Dickens on Big Data: https://www.vox.com/2014/5/1/11626330/for-these-times-dickens-on-big-data
The public health crisis of AI: https://www.thedeepview.co/p/the-public-health-crisis-of-artificial-intelligence
AI and sustainability: https://www.thestreet.com/technology/heres-what-ai-for-sustainability-actually-looks-like-ibm-machine-learning-executives

Outline:
0:00 – Intro
1:52 – Irina’s journey into ethics
4:42 – The link between social media and AI
9:57 – Why should people care about data privacy
15:27 – The impact of automation on society
20:00 – What happens to the generation who grows up with AI?
26:24 – What AI might mean for loneliness
32:06 – Grief and digital immortality
39:06 – The issue of anthropomorphization
44:19 – The lifecycle of data
47:34 – We track everything, all the time
53:47 – Is AI anti-human?
59:03 – Skepticism in AI
1:00 – Tech solutionism & its impacts
1:12 – Regulation and twisted incentives
We sat down with Dr. Jerry Chow, an IBM fellow and the Director of IBM’s Quantum Infrastructure. And today, we’re breaking down what quantum computing is, why it’s important, how it fits into the growing realm of artificial intelligence and what about quantum is real (and what’s just plain hype).

Episode Links:
IBM’s Quantum Roadmap: https://www.flickr.com/photos/ibm_research_zurich/53347055153/
Quantum-centric supercomputing: https://www.ibm.com/think/topics/quantum-centric-supercomputing
IBM launches advanced quantum computers: https://newsroom.ibm.com/2024-11-13-ibm-launches-its-most-advanced-quantum-computers,-fueling-new-scientific-value-and-progress-towards-quantum-advantage
IBM’s quantum in use: https://www.ibm.com/quantum/case-studies/modeling-realistic-chemistry
Responsible quantum: https://www.ibm.com/quantum/blog/responsible-quantum
AI’s sustainability problems: https://www.thestreet.com/technology/heres-what-ai-for-sustainability-actually-looks-like-ibm-machine-learning-executives
The ethics of AI: https://www.thestreet.com/technology/the-ethics-of-artificial-intelligence-responsible-ai
How a quantum computer could break encryption: https://www.technologyreview.com/2019/05/30/65724/how-a-quantum-computer-could-break-2048-bit-rsa-encryption-in-8-hours/
Google’s breakthrough: https://blog.google/technology/research/google-willow-quantum-chip/

Outline:
0:00 – Intro
1:00 – Jerry’s personal journey into quantum
2:37 – What exactly is quantum computing?
10:44 – How do you measure efficacy in quantum computers?
13:30 – Problems that are good for quantum
16:30 – Quantum and AI
20:00 – The energy intensity of quantum
23:40 – The ethics of more powerful computation
25:30 – Quantum could break encryption
27:46 – Quantum’s inherent limitations
30:23 – All the hype
36:40 – Google’s breakthrough and ‘parallel’ universes
40:00 – What you need to understand
We sat down with Dr. Nada Sanders, an expert in forecasting and human-technology interaction and the author of The Humachine, a book that explores a coming future of AI-enabled connection between man and machine. And today, we’re breaking down the pandemic-fueled rise of AI, and the many levers that will impact our ever-more digital future.

Episode Links:
The Humachine – https://nadasanders.com/books/the-humachine/
How the pandemic impacted global supply chains – https://www.ey.com/en_us/insights/supply-chain/how-covid-19-impacted-supply-chains-and-what-comes-next
Kasparov’s law – https://courtofthegrandchildren.com/kasparovs-law/
Corporate AI adoption – https://www.thedeepview.co/p/business-spending-on-ai-jumps-500-to-13-8-billion-on-200-billion-in-capex
AI-related job loss – https://www.thedeepview.co/p/chatgpt-s-impact-on-the-labor-market-openai-generative-ai-artificial-intelligence-job-loss-report-st
Microsoft and IBM AI reskilling – https://newsroom.ibm.com/2024-04-04-Leading-Companies-Launch-Consortium-to-Address-AIs-Impact-on-the-Technology-Workforce
AI hallucinations – https://www.thedeepview.co/p/the-nobel-prize-and-the-mainstreaming-of-ai-s-x-risk
Neil Theise on complexity – https://www.columbia.edu/cu/tract/projects/complexity-theory.html
Rise of AI regulation – https://www.thedeepview.co/p/progress-predictions-2025-oversight-governance-and-regulation
Gary Marcus interview – https://youtu.be/-oTxLnmubVk

Outline:
0:00 – Intro
4:00 – The global pandemic and the rise of AI
13:16 – AI and skill atrophy
20:00 – People crave connection
27:20 – AI and mounting job loss
41:48 – Automating the future
52:43 – The human will to live
1:00:37 – Hallucination
1:05:49 – The regulatory landscape
1:15:52 – Staying positive
We sat down with Vijay Balasubramaniyan, the co-founder and CEO of cybersecurity firm Pindrop. And today, we’re breaking down audio deepfakes and deepfake detection; what levers are in place to indicate whether a piece of audio is real or synthetic, and how those levers technically work.

Episode Links:
Pindrop – https://www.pindrop.com/
The rise of deepfake fraud – https://www.forbes.com/sites/chriswestfall/2024/11/29/ai-deepfakes-of-elon-musk-on-the-rise-causing-billions-in-fraud-losses/
Biden robocall – https://www.thestreet.com/technology/how-the-company-that-traced-fake-biden-robocall-identifies-a-synthetic-voice
The deepfake threat – https://www.thestreet.com/technology/ai-cybersecurity-nonprofit-civai-deepfake-fraud-identity-theft-hijacking
How LLMs work – https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/
Closed vs. open AI – https://www.americanactionforum.org/insight/open-source-ai-the-debate-that-could-redefine-ai-innovation/
Deepfake voice cloning – https://consumer.ftc.gov/consumer-alerts/2024/04/fighting-back-against-harmful-voice-cloning

Outline:
0:00 – Intro
4:27 – The origins of audio deepfakes
8:25 – The origins of Pindrop
12:45 – The cybersecurity risk of a remote-first world
28:33 – Open vs closed source
31:20 – How Pindrop’s audio deepfake detection works
37:18 – Staying ahead of the threat curve
40:59 – Scaling improvements
43:24 – Building responsibility into the architecture
47:18 – The importance of AI regulation
52:22 – What can people do to protect themselves in this age of AI deepfakes?
We sat down with Dr. Missy Cummings, who served as one of the U.S. Navy’s first female fighter pilots and now works as the director of George Mason University’s Autonomy and Robotics Center. Cummings, an engineer, has been studying autonomous systems for years – today, we break down the autonomy challenge in self-driving cars: how they work, their enormous limitations and a path to a feasible self-driving future.

Episode Links:
Lidar breaks down in the rain: https://www.mdpi.com/1424-8220/24/10/2997?utm_source=www.thedeepview.co&utm_medium=referral&utm_campaign=openai-launches-o1-unveils-200-subscription-tier
George Mason’s Autonomy and Robotics Center: https://marc.gmu.edu/
Waymo’s self-driving statistics: https://www.thedeepview.co/p/the-complicated-statistics-behind-safe-self-driving-cars
Waymo’s steady expansion: https://waymo.com/blog/2024/12/next-stop-miami/
All the investigations into Tesla’s systems: https://www.cnbc.com/2024/12/09/tesla-accused-of-fraudulent-misrepresentation-of-autopilot-in-crash-.html
A breakdown of why AI confabulates: https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/

Outline:
1:39 – From airplanes to AVs
3:57 – What makes self-driving cars tick
9:00 – The risk of edge cases
12:49 – The hallucination problem
18:36 – How do we verify if a self-driving car is safe, in a safe way?
24:02 – Waymo and the scaling of safety data
29:40 – The hardware behind self-driving
31:33 – Limitations of Lidar
37:57 – The misconceptions of self-driving cars
40:16 – A self-driving future
New Jersey's Department of Labor, in collaboration with USDR and Google.org, has assembled a set of training materials designed to turn off-the-shelf language models into bilingual unemployment insurance experts. We sat down with two of the people behind the launch to break it down.
For Episode 2 of The Deep View: Conversations, we flew to Abu Dhabi to sit down with President Eric Xing of the Mohamed bin Zayed University of Artificial Intelligence, the world’s first AI-only university. We talk about everything from his unconventional journey to MBZUAI, misconceptions about the technology, the reality of progress in the field and the idea that, like the printing press, AI might usher in a new age for humanity, the age of empowerment.

SPONSOR:
Google Cloud: For all your cloud needs. Go to https://cloud.google.com/startup/apply?utm_source=cloud_sfdc&utm_medium=email&utm_campaign=FY21-Q1-global-demandgen-website-cs-startup_program_mc&utm_content=the-deep-view&utm_term=-

EPISODE LINKS:
MBZUAI appoints Eric Xing: https://mbzuai.ac.ae/study/faculty/professor-eric-xing/
Eric Xing addresses the class of 2024: https://mbzuai.ac.ae/news/presidents-address-to-the-class-of-2024/
The printing press and the age of enlightenment: https://www.history.com/news/printing-press-renaissance

OUTLINE:
00:47 Introduction to MBZUAI
02:49 The Unexpected Offer
05:19 Making the Decision
10:01 A New Cultural Moment
13:28 Starting at MBZUAI
14:35 The Age of AI Empowerment
21:44 AI’s Potential and Risks
29:19 Regulating AI
35:40 The Promise of AI
43:39 AI as the Modern Microscope
45:19 Limitations and Misconceptions of AI
47:57 The Role of Language in AI
50:26 AI Literacy and Public Perception
01:01:00 Open Source vs. Closed Source Debate
01:15:01 The Future of AI Education
01:22:31 Conclusion and Final Thoughts

SOCIAL LINKS:
- X: https://twitter.com/thedeepview
- Instagram: https://www.instagram.com/thedeepview.co/
- LinkedIn: https://www.linkedin.com/company/the-deep-view-ai/
SPONSOR:
Google Cloud: For all your cloud needs. Go to https://cloud.google.com/startups

EPISODE LINKS:
Taming Silicon Valley: https://mitpress.mit.edu/9780262551069/taming-silicon-valley/
Gary Marcus’ Testimony Before the Senate Judiciary Committee: https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-rules-for-artificial-intelligence
Google DeepMind’s AlphaFold 3: https://alphafold.com/
Generative AI’s Copyright Problem: https://garymarcus.substack.com/p/the-potential-genai-copyright-infringement
Eric Schmidt, Let the Lawyers Clean it Up: https://www.theverge.com/2024/8/16/24221353/eric-schmidt-says-the-quiet-part-out-loud
Deep Learning is Hitting a Wall: https://garymarcus.substack.com/p/26-months-of-ridicule-and-failure?utm_source=publication-search
Marcus Bets Elon Musk $10 Million: https://garymarcus.substack.com/p/superhuman-agi-is-not-nigh

OUTLINE:
1:30 – The inspiration behind 'Taming Silicon Valley'
4:30 – Regulation won't stifle innovation; it'll do the opposite
7:10 – The potential upsides of AI (if it's done right)
10:00 – Money, power and the original mission of AI
12:11 – Generative AI's copyright problem
13:35 – The culture and ethos of Silicon Valley
14:45 – Here's how we can achieve ethical AI
19:11 – Common AI misconceptions
22:01 – Deep learning and exponential progress
25:06 – The bursting of the AI bubble
31:31 – New AI paradigms: Neurosymbolic AI
33:45 – No AGI by 2027
38:00 – Is AGI possible?
39:45 – The role of regulation
42:15 – We shouldn't stop AI
45:30 – Chatbots aren't the "droids we're looking for"
49:30 – Is solving AI a good thing?
51:10 – Silicon Valley is its own worst enemy
53:20 – What should people do about AI?

SOCIAL LINKS:
- X: https://twitter.com/thedeepview
- Instagram: https://www.instagram.com/thedeepview.co/
- LinkedIn: https://www.linkedin.com/company/the-deep-view-ai/