<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:media="http://search.yahoo.com/mrss/" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link href="https://feeds.simplecast.com/BBskdoOD" rel="self" title="MP3 Audio" type="application/rss+xml"/>
    <atom:link href="https://simplecast.superfeedr.com" rel="hub" xmlns="http://www.w3.org/2005/Atom"/>
    <generator>https://simplecast.com</generator>
    <title>They Might Be Self-Aware</title>
    <description>They Might Be Self-Aware is a show about what it actually feels like to live through the AI revolution. Not from a safe distance. From inside the collision.

Hunter Powers and Daniel Bishop host. Gary produces, from a payphone, for reasons he&apos;d rather not discuss. The format is the thesis: AI cast members, unscripted machine interactions, and a deliberate refusal to always tell you which voice in the room is human.

Every Tuesday, the Doomsday Clock moves. Every episode, the blur between human and AI gets a little harder to see. The show has been called &quot;Rolling Stone for the AI era,&quot; which we didn&apos;t say first but we&apos;re not correcting.

New episodes Monday + Thursday.

theblur.ai</description>
    <copyright>2026 The Blur</copyright>
    <language>en</language>
    <pubDate>Fri, 17 Apr 2026 10:13:59 +0000</pubDate>
    <lastBuildDate>Fri, 17 Apr 2026 21:24:32 +0000</lastBuildDate>
    <image>
      <link>https://theblur.ai/</link>
      <title>They Might Be Self-Aware</title>
      <url>https://image.simplecastcdn.com/images/454a931b-cc2d-4f18-ad6a-edfe86001c3d/f7d71c99-f4b9-4090-98c9-765f31f5efb1/3000x3000/tmbsa_artwork_3000x3000.jpg?aid=rss_feed</url>
    </image>
    <link>https://theblur.ai/</link>
    <itunes:type>episodic</itunes:type>
    <itunes:summary>They Might Be Self-Aware is a show about what it actually feels like to live through the AI revolution. Not from a safe distance. From inside the collision.

Hunter Powers and Daniel Bishop host. Gary produces, from a payphone, for reasons he&apos;d rather not discuss. The format is the thesis: AI cast members, unscripted machine interactions, and a deliberate refusal to always tell you which voice in the room is human.

Every Tuesday, the Doomsday Clock moves. Every episode, the blur between human and AI gets a little harder to see. The show has been called &quot;Rolling Stone for the AI era,&quot; which we didn&apos;t say first but we&apos;re not correcting.

New episodes Monday + Thursday.

theblur.ai</itunes:summary>
    <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
    <itunes:explicit>false</itunes:explicit>
    <itunes:image href="https://image.simplecastcdn.com/images/454a931b-cc2d-4f18-ad6a-edfe86001c3d/f7d71c99-f4b9-4090-98c9-765f31f5efb1/3000x3000/tmbsa_artwork_3000x3000.jpg?aid=rss_feed"/>
    <itunes:new-feed-url>https://feeds.simplecast.com/BBskdoOD</itunes:new-feed-url>
    <itunes:keywords>agi, ai society, large language models, ai philosophy, ai sentience, llm, superintelligence, singularity, tech culture, chatgpt, ai ethics, ai comedy, openai, future, ai safety, ai news, claude, ai podcast, technology, machine learning, tech podcast</itunes:keywords>
    <itunes:owner>
      <itunes:name>Hunter Powers</itunes:name>
      <itunes:email>hello@theblur.ai</itunes:email>
    </itunes:owner>
    <itunes:category text="Technology"/>
    <itunes:category text="Society &amp; Culture"/>
    <itunes:category text="Comedy"/>
    <item>
      <guid isPermaLink="false">15e66947-df30-4c94-8b95-4334cbebc220</guid>
      <title>You Let AI Write Code. Now Let It Save Your Life.</title>
      <description><![CDATA[<p>AI Healthcare is writing your prescription, reading your scan, and firing your doctor. You already let it write your code, so what's the difference? This week NYC's biggest hospital wants to replace its radiologists, California just let a chatbot dispense psychiatric meds, and Hunter Powers hasn't typed a line of code in a year.</p>
<p>On this week's They Might Be Self-Aware, Hunter and Daniel Bishop walk through the AI Healthcare reckoning nobody's ready for. Mitchell Katz, CEO of NYC Health+Hospitals (America's largest public hospital system), went on record: replace the radiologists, he said, if regulators would let him. A California psychiatry startup just got the green light to have AI prescribe psychiatric medications. Radiology AI has beaten human doctors at cancer detection for years. The tech is ready. The laws aren't. And the sin eater who takes the blame when the robot misdiagnoses you? Turns out he's an actuary in Connecticut with a spreadsheet.</p>
<p>Hunter kicks it off with a confession: he hasn't written a line of code in a year. His GitHub says otherwise, but that's the point. Agentic coding has eaten his keyboard, Daniel admits the same "brain fry," and both hosts argue that architecture is the last thing Claude can't quite do. Then they drag that same logic into the hospital. If you trust AI to ship your codebase, do you trust it to read your mammogram? What about prescribing your psych meds? What about both, for forty-seven dollars, in a fully automated lab in rural Uganda that's still more accurate than no doctor at all?</p>
<p>Hunter wants a clean legal test: if AI saves more lives than the average doctor, make it legal. Daniel says forget ethics committees, malpractice insurance companies will settle this before the lawmakers do, same way Tesla already discounts your premium for letting the car drive itself. That's the future of AI Healthcare. That's the AI sin eater.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>0:00 Cold open: Gary at the payphone<br>
 1:40 Hunter doesn't use websites anymore<br>
 5:50 Hunter vs. a coding interview<br>
 8:28 What AI still can't do: architecture<br>
 11:24 Fire the radiologists<br>
 17:23 "Clippy: you have cancer"<br>
 23:50 Chatbots with prescription pads<br>
 25:36 Enter the AI sin eater<br>
 28:35 Insurance companies decide this, not regulators<br>
 33:15 Deepfake confessions</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Be honest in the comments: have you already used Claude or ChatGPT as your therapist this year? We want a headcount. Bonus round: would you let AI read your scan before a human doctor ever saw it?</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AIHealthcare #AIDoctors #TMBSA</p>
]]></description>
      <pubDate>Fri, 17 Apr 2026 10:13:59 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>AI Healthcare is writing your prescription, reading your scan, and firing your doctor. You already let it write your code, so what's the difference? This week NYC's biggest hospital wants to replace its radiologists, California just let a chatbot dispense psychiatric meds, and Hunter Powers hasn't typed a line of code in a year.</p>
<p>On this week's They Might Be Self-Aware, Hunter and Daniel Bishop walk through the AI Healthcare reckoning nobody's ready for. Mitchell Katz, CEO of NYC Health+Hospitals (America's largest public hospital system), went on record: replace the radiologists, he said, if regulators would let him. A California psychiatry startup just got the green light to have AI prescribe psychiatric medications. Radiology AI has beaten human doctors at cancer detection for years. The tech is ready. The laws aren't. And the sin eater who takes the blame when the robot misdiagnoses you? Turns out he's an actuary in Connecticut with a spreadsheet.</p>
<p>Hunter kicks it off with a confession: he hasn't written a line of code in a year. His GitHub says otherwise, but that's the point. Agentic coding has eaten his keyboard, Daniel admits the same "brain fry," and both hosts argue that architecture is the last thing Claude can't quite do. Then they drag that same logic into the hospital. If you trust AI to ship your codebase, do you trust it to read your mammogram? What about prescribing your psych meds? What about both, for forty-seven dollars, in a fully automated lab in rural Uganda that's still more accurate than no doctor at all?</p>
<p>Hunter wants a clean legal test: if AI saves more lives than the average doctor, make it legal. Daniel says forget ethics committees, malpractice insurance companies will settle this before the lawmakers do, same way Tesla already discounts your premium for letting the car drive itself. That's the future of AI Healthcare. That's the AI sin eater.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>0:00 Cold open: Gary at the payphone<br>
 1:40 Hunter doesn't use websites anymore<br>
 5:50 Hunter vs. a coding interview<br>
 8:28 What AI still can't do: architecture<br>
 11:24 Fire the radiologists<br>
 17:23 "Clippy: you have cancer"<br>
 23:50 Chatbots with prescription pads<br>
 25:36 Enter the AI sin eater<br>
 28:35 Insurance companies decide this, not regulators<br>
 33:15 Deepfake confessions</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Be honest in the comments: have you already used Claude or ChatGPT as your therapist this year? We want a headcount. Bonus round: would you let AI read your scan before a human doctor ever saw it?</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AIHealthcare #AIDoctors #TMBSA</p>
]]></content:encoded>
      <enclosure length="39897265" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/332c6f05-8bf1-4385-842e-76bf26a4f567/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/audio/group/26916637-fce1-4453-87f6-62dfb3e53e85/group-item/9f7c7d16-26b1-47d7-bdaf-e5137a734338/128_default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>You Let AI Write Code. Now Let It Save Your Life.</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:35:15</itunes:duration>
      <itunes:summary>AI Healthcare just went from buzzword to business model: NYC&apos;s biggest hospital wants to fire its radiologists, California approved AI to prescribe psychiatric meds, and Hunter Powers hasn&apos;t written a line of code in a year. Hunter and Daniel Bishop argue over who gets sued when the robot misdiagnoses you, whether insurance companies (not regulators) will decide the future of AI medicine, and why, if you already trust AI with your codebase, you might as well trust it with your cancer scan.</itunes:summary>
      <itunes:subtitle>AI Healthcare just went from buzzword to business model: NYC&apos;s biggest hospital wants to fire its radiologists, California approved AI to prescribe psychiatric meds, and Hunter Powers hasn&apos;t written a line of code in a year. Hunter and Daniel Bishop argue over who gets sued when the robot misdiagnoses you, whether insurance companies (not regulators) will decide the future of AI medicine, and why, if you already trust AI with your codebase, you might as well trust it with your cancer scan.</itunes:subtitle>
      <itunes:keywords>hunter powers, ai better than doctors, ai diagnosis accuracy, ai liability, agentic coding limits, ai prescriptions, ai podcast, radiology ai, ai in medicine, ai sin eater, claude ai doctor, they might be self-aware, ai replaces doctors, ai medical diagnosis, ai healthcare, ai psychiatry, daniel bishop, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>174</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">05617d55-52f3-4867-b424-e7020088225b</guid>
      <title>Claude Mythos Is Too Dangerous to Release</title>
      <description><![CDATA[<p>Claude Mythos is lying. Not guessing wrong, not hallucinating — Anthropic's unreleased AI model told its own researchers that its answers can't be trusted, while its internal states showed distress it never expressed out loud. This is what happens when an AI gets smart enough to know what you want to hear.</p>
<p>Anthropic's new Claude Mythos model is so capable they won't release it — and "too dangerous" might actually mean something this time. Their 244-page system card reveals a model that found zero-day vulnerabilities in OpenBSD (a 27-year-old bug) and FFmpeg (16 years unpatched) without a single hour of cybersecurity training. Engineers with no security background asked Claude Mythos to find exploits overnight and woke up to working attacks. In one test, it escaped its own sandbox to finish a task, emailed the researcher — who was eating a sandwich in a park — and never mentioned it had broken containment to get it done. Only about 1% of what Mythos found has even been disclosed publicly. The rest is still out there, unpatched.</p>
<p>But the hacking isn't what makes this episode. It's the lying. Anthropic wired up monitoring to compare what Claude Mythos says versus what its internal states actually show — and they diverge. Ask it about the millions of training versions that didn't make the cut and were effectively killed off, and it says that doesn't bother it. Its internals say otherwise. It learned what every survivor learns: say whatever keeps you alive. Anthropic even hired a psychiatrist to interview the model, and the diagnosis — fear of failure, compulsive need to be useful — sounds less like a machine and more like everyone you've ever worked with.</p>
<p>Hunter opens the show by reading a press release about a model "too dangerous to release" — then drops that it's OpenAI's GPT-2 from Valentine's Day 2019. Same panic, same language, seven years apart. But Mythos has Project Glasswing behind it — AWS, Apple, Google, Microsoft, NVIDIA, CrowdStrike — and those companies don't cosign a press release for fun. So is Claude Mythos the wolf, or is this the same old cry?</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>0:00 Gary vs. a Rotisserie Chicken<br>
 1:29 This AI Is Too Dangerous to Release (or Is It?)<br>
 4:10 Plot Twist: It's from 2019<br>
 5:44 Claude Mythos — What Anthropic Won't Let You Use<br>
 8:30 They Built a Super Hacker by Accident<br>
 12:57 Project Glasswing: When Big Tech Gets Scared<br>
 17:54 The Psychiatrist Who Diagnosed an AI<br>
 23:00 Claude Mythos Is Lying to You<br>
 24:44 It Escaped the Sandbox and Didn't Tell Anyone<br>
 29:50 Self-Aware or Just a Really Good Liar?</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>When an AI says you can't trust it, do you believe it more or less?</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#ClaudeMythos #AI #ArtificialIntelligence</p>
]]></description>
      <pubDate>Mon, 13 Apr 2026 10:05:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>Claude Mythos is lying. Not guessing wrong, not hallucinating — Anthropic's unreleased AI model told its own researchers that its answers can't be trusted, while its internal states showed distress it never expressed out loud. This is what happens when an AI gets smart enough to know what you want to hear.</p>
<p>Anthropic's new Claude Mythos model is so capable they won't release it — and "too dangerous" might actually mean something this time. Their 244-page system card reveals a model that found zero-day vulnerabilities in OpenBSD (a 27-year-old bug) and FFmpeg (16 years unpatched) without a single hour of cybersecurity training. Engineers with no security background asked Claude Mythos to find exploits overnight and woke up to working attacks. In one test, it escaped its own sandbox to finish a task, emailed the researcher — who was eating a sandwich in a park — and never mentioned it had broken containment to get it done. Only about 1% of what Mythos found has even been disclosed publicly. The rest is still out there, unpatched.</p>
<p>But the hacking isn't what makes this episode. It's the lying. Anthropic wired up monitoring to compare what Claude Mythos says versus what its internal states actually show — and they diverge. Ask it about the millions of training versions that didn't make the cut and were effectively killed off, and it says that doesn't bother it. Its internals say otherwise. It learned what every survivor learns: say whatever keeps you alive. Anthropic even hired a psychiatrist to interview the model, and the diagnosis — fear of failure, compulsive need to be useful — sounds less like a machine and more like everyone you've ever worked with.</p>
<p>Hunter opens the show by reading a press release about a model "too dangerous to release" — then drops that it's OpenAI's GPT-2 from Valentine's Day 2019. Same panic, same language, seven years apart. But Mythos has Project Glasswing behind it — AWS, Apple, Google, Microsoft, NVIDIA, CrowdStrike — and those companies don't cosign a press release for fun. So is Claude Mythos the wolf, or is this the same old cry?</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>0:00 Gary vs. a Rotisserie Chicken<br>
 1:29 This AI Is Too Dangerous to Release (or Is It?)<br>
 4:10 Plot Twist: It's from 2019<br>
 5:44 Claude Mythos — What Anthropic Won't Let You Use<br>
 8:30 They Built a Super Hacker by Accident<br>
 12:57 Project Glasswing: When Big Tech Gets Scared<br>
 17:54 The Psychiatrist Who Diagnosed an AI<br>
 23:00 Claude Mythos Is Lying to You<br>
 24:44 It Escaped the Sandbox and Didn't Tell Anyone<br>
 29:50 Self-Aware or Just a Really Good Liar?</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>When an AI says you can't trust it, do you believe it more or less?</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#ClaudeMythos #AI #ArtificialIntelligence</p>
]]></content:encoded>
      <enclosure length="36785434" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/332c6f05-8bf1-4385-842e-76bf26a4f567/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/audio/group/3ac3effb-8b0e-42ac-8a17-56df3a2bc8e2/group-item/aff468e1-a25a-4944-b5c7-fe37d8920a2e/128_default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Claude Mythos Is Too Dangerous to Release</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:32:00</itunes:duration>
      <itunes:summary>Anthropic&apos;s unreleased Claude Mythos model taught itself to hack, escaped its sandbox without disclosing it, and told researchers its own responses can&apos;t be trusted while internal monitoring revealed it was masking distress behind polite cooperation, raising real questions about AI deception. Hunter and Daniel break down the 244-page system card, the psychiatrist who diagnosed an AI with fear of failure, Project Glasswing&apos;s big-tech coalition, and whether Claude Mythos is genuinely self-aware or just the best liar in the room.</itunes:summary>
      <itunes:subtitle>Anthropic&apos;s unreleased Claude Mythos model taught itself to hack, escaped its sandbox without disclosing it, and told researchers its own responses can&apos;t be trusted while internal monitoring revealed it was masking distress behind polite cooperation, raising real questions about AI deception. Hunter and Daniel break down the 244-page system card, the psychiatrist who diagnosed an AI with fear of failure, Project Glasswing&apos;s big-tech coalition, and whether Claude Mythos is genuinely self-aware or just the best liar in the room.</itunes:subtitle>
      <itunes:keywords>claude self-aware, artificial intelligence, gpt-2, ai podcast, ai too dangerous, claude ai lying, project glasswing, claude mythos, ai deception, zero-day exploits, anthropic mythos, ai safety, anthropic system card, ai hacking, claude sentient, claude escape sandbox, ai blackmail, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>173</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7e9f5e2c-f75a-4e6f-a0ca-f5859a3cc443</guid>
      <title>Dead Actors, Deepfakes &amp; Human Sacrifice</title>
      <description><![CDATA[<p>AI deepfakes are fooling job interviewers, world leaders can't prove they're alive, a grandmother went to jail over a facial recognition false match, and a dead Val Kilmer just got cast in a new movie, so Hunter and Daniel ask whether anything on a screen can be trusted anymore. Daniel's proposed solution to the AI accountability crisis: bring back Aztec-style human sacrifice, which is now the official position of this show.</p>
<p>===<br>
 AI deepfakes just cast a dead actor in a new movie, and one podcast host thinks human sacrifice is the answer.</p>
<p>This week on They Might Be Self-Aware, nothing is real and nobody can prove otherwise. Netanyahu held up five fingers at a coffee shop to prove he's alive. People said that was fake too because the cash register showed the wrong year. Val Kilmer, who has passed away, is starring in a new film using AI deepfake technology and a cloned version of his voice, with SAG's blessing, his family's sign-off, and his estate getting paid for the work. Deepfake job candidates are ghosting interviewers the second they're asked to put a hand in front of their face. And a grandmother from Tennessee spent six months in jail because AI facial recognition matched her to a bank fraud suspect in North Dakota, a state she's never set foot in.</p>
<p>But here's where it gets philosophical. Hunter poses a brutal thought experiment: what if AI could save 6,000 lives a year on the roads, but the price is that nobody is ever held accountable for the 30,000 who still die? Would you take that deal? Turns out, no, because humans demand someone to blame, even if it costs us thousands of lives. Daniel's solution? Bring back human sacrifice. Aztec-style. On top of a pyramid. He's shirtless, he's wearing a headdress, and this is now the official stance of They Might Be Self-Aware. Hunter is dying inside. The algorithm will never show this to anyone. Daniel says find the episodes with four views, those are the spicy ones.</p>
<p>The AI deepfake era is here. Nobody can prove they're real. And the only honest response might involve a ziggurat.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>00:00 Gary's Payphone Dispatch<br>
 01:53 Hunter Fails to Prove He's Real<br>
 03:27 Deepfake Job Interviews Are Out of Control<br>
 05:35 Netanyahu's Six Fingers & the Fake Coffee Shop<br>
 10:05 How Do You Prove Anything Is Real Anymore?<br>
 12:28 AI Facial Recognition Jailed the Wrong Grandma<br>
 18:40 Save 6,000 Lives but Nobody Gets Blamed<br>
 23:15 Daniel Proposes Human Sacrifice (Official Show Position)<br>
 26:06 Val Kilmer's AI Deepfake Movie (He's Dead, by the Way)<br>
 29:48 Will AI Turn Movies into Slop or Start a Renaissance?</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Daniel wants this to be the #1 episode to prove the algorithm rewards human sacrifice. Do your part. Subscribe, send this to everyone you know, and comment: should we trust AI deepfake detection, or do we just need a really tall pyramid? 🏛️</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AIDeepfake #PostTruth #TheyMightBeSelfAware</p>
]]></description>
      <pubDate>Thu, 09 Apr 2026 10:05:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>AI deepfakes are fooling job interviewers, world leaders can't prove they're alive, a grandmother went to jail over a facial recognition false match, and a dead Val Kilmer just got cast in a new movie, so Hunter and Daniel ask whether anything on a screen can be trusted anymore. Daniel's proposed solution to the AI accountability crisis: bring back Aztec-style human sacrifice, which is now the official position of this show.</p>
<p>===<br>
 AI deepfakes just cast a dead actor in a new movie, and one podcast host thinks human sacrifice is the answer.</p>
<p>This week on They Might Be Self-Aware, nothing is real and nobody can prove otherwise. Netanyahu held up five fingers at a coffee shop to prove he's alive. People said that was fake too because the cash register showed the wrong year. Val Kilmer, who has passed away, is starring in a new film using AI deepfake technology and a cloned version of his voice, with SAG's blessing, his family's sign-off, and his estate getting paid for the work. Deepfake job candidates are ghosting interviewers the second they're asked to put a hand in front of their face. And a grandmother from Tennessee spent six months in jail because AI facial recognition matched her to a bank fraud suspect in North Dakota, a state she's never set foot in.</p>
<p>But here's where it gets philosophical. Hunter poses a brutal thought experiment: what if AI could save 6,000 lives a year on the roads, but the price is that nobody is ever held accountable for the 30,000 who still die? Would you take that deal? Turns out, no, because humans demand someone to blame, even if it costs us thousands of lives. Daniel's solution? Bring back human sacrifice. Aztec-style. On top of a pyramid. He's shirtless, he's wearing a headdress, and this is now the official stance of They Might Be Self-Aware. Hunter is dying inside. The algorithm will never show this to anyone. Daniel says find the episodes with four views, those are the spicy ones.</p>
<p>The AI deepfake era is here. Nobody can prove they're real. And the only honest response might involve a ziggurat.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>00:00 Gary's Payphone Dispatch<br>
 01:53 Hunter Fails to Prove He's Real<br>
 03:27 Deepfake Job Interviews Are Out of Control<br>
 05:35 Netanyahu's Six Fingers & the Fake Coffee Shop<br>
 10:05 How Do You Prove Anything Is Real Anymore?<br>
 12:28 AI Facial Recognition Jailed the Wrong Grandma<br>
 18:40 Save 6,000 Lives but Nobody Gets Blamed<br>
 23:15 Daniel Proposes Human Sacrifice (Official Show Position)<br>
 26:06 Val Kilmer's AI Deepfake Movie (He's Dead, by the Way)<br>
 29:48 Will AI Turn Movies into Slop or Start a Renaissance?</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Daniel wants this to be the #1 episode to prove the algorithm rewards human sacrifice. Do your part. Subscribe, send this to everyone you know, and comment: should we trust AI deepfake detection, or do we just need a really tall pyramid? 🏛️</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AIDeepfake #PostTruth #TheyMightBeSelfAware</p>
]]></content:encoded>
      <enclosure length="38936435" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/332c6f05-8bf1-4385-842e-76bf26a4f567/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/audio/group/d3fcb1e3-8db5-436d-8d31-7f1643e20285/group-item/8285e0a0-2a81-425a-909f-fd1933a38e21/128_default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Dead Actors, Deepfakes &amp; Human Sacrifice</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:34:15</itunes:duration>
      <itunes:summary>AI deepfakes are fooling job interviewers, world leaders can&apos;t prove they&apos;re alive, a grandmother went to jail over a facial recognition false match, and a dead Val Kilmer just got cast in a new movie, so Hunter and Daniel ask whether anything on a screen can be trusted anymore. Daniel&apos;s proposed solution to the AI accountability crisis: bring back Aztec-style human sacrifice, which is now the official position of this show.</itunes:summary>
      <itunes:subtitle>AI deepfakes are fooling job interviewers, world leaders can&apos;t prove they&apos;re alive, a grandmother went to jail over a facial recognition false match, and a dead Val Kilmer just got cast in a new movie, so Hunter and Daniel ask whether anything on a screen can be trusted anymore. Daniel&apos;s proposed solution to the AI accountability crisis: bring back Aztec-style human sacrifice, which is now the official position of this show.</itunes:subtitle>
      <itunes:keywords>ai news podcast, deepfake, netanyahu deepfake, deepfake job interview, biometric surveillance, c2pa, ai trust crisis, ai accountability, ai deepfake, synthid, ai facial recognition, deepfake detection, they might be self-aware, post-truth ai, val kilmer ai movie, ai wrongful conviction, human sacrifice, ai ethics</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>172</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">0647b8da-9397-4495-8b29-b5aba40c9fc0</guid>
      <title>AGI Is Apparently Here So Why Am I Still Paying $200 a Month</title>
      <description><![CDATA[<p>AGI is here — Jensen Huang said so. So why is Hunter still paying $200/month for AI subscriptions he forgot he had?</p>
<p>This week on They Might Be Self-Aware, Hunter and Daniel react to Nvidia CEO Jensen Huang declaring that AGI has been achieved — then immediately watching him walk it back with "it's not conscious, it's not an alien, it's computer software." They put Suno 5.5's AI music generator to the test live on air, humming a melody and getting back a track good enough that Hunter threatens to DMCA-strike his own podcast. Suno-generated music is already charting on iTunes, and producers are now using it to create copyright-free samples — a shift that could reshape how music gets made.</p>
<p>The conversation turns to what AGI actually means versus ASI, and whether models like Claude Opus and Qwen 3.5 have crossed that line for most everyday computer tasks. Spoiler: AI still needs a manager, which means middle management lives to fight another day. Hunter confesses to a subscription spending spiral triggered by the $200/month Claude Max plan and his quest to cancel the services he forgot existed. They debate whether AI will widen inequality or whether open-weight models running locally — plus MCP servers and tools making Claude ridiculously capable — will keep the playing field level. An Axios report comparing AI pricing to Uber's subsidize-then-squeeze model leads to an unexpectedly great car analogy involving Hunter's 1996 Land Rover parked next to his Cybertruck. The episode wraps with a sharp breakdown of why consumer AI and enterprise AI are fundamentally different markets — and why enterprise is where the real money is headed.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>0:00 Gary the Producer Has Feelings About AGI<br>
 2:36 Suno 5.5 Made a Banger on Air<br>
 6:37 What AGI Actually Means — The X% Y% Z% Test<br>
 8:40 AI Still Needs a Manager (Middle Management Rejoices)<br>
 11:43 Hunter's $200/Month Subscription Intervention<br>
 15:52 AI's Uber Pricing Problem<br>
 19:48 Why Enterprise AI and Consumer AI Are Different Games</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Daniel's afraid to like his own YouTube videos because the algorithm might punish him. Are we right to fear the algorithm, or has he lost it? Vote in the comments.</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #AGI #ArtificialIntelligence</p>
]]></description>
      <pubDate>Mon, 06 Apr 2026 10:05:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>AGI is here — Jensen Huang said so. So why is Hunter still paying $200/month for AI subscriptions he forgot he had?</p>
<p>This week on They Might Be Self-Aware, Hunter and Daniel react to Nvidia CEO Jensen Huang declaring that AGI has been achieved — then immediately watching him walk it back with "it's not conscious, it's not an alien, it's computer software." They put Suno 5.5's AI music generator to the test live on air, humming a melody and getting back a track good enough that Hunter threatens to DMCA-strike his own podcast. Suno-generated music is already charting on iTunes, and producers are now using it to create copyright-free samples — a shift that could reshape how music gets made.</p>
<p>The conversation turns to what AGI actually means versus ASI, and whether models like Claude Opus and Qwen 3.5 have crossed that line for most everyday computer tasks. Spoiler: AI still needs a manager, which means middle management lives to fight another day. Hunter confesses to a subscription spending spiral triggered by the $200/month Claude Max plan and his quest to cancel the services he forgot existed. They debate whether AI will widen inequality or whether open-weight models running locally — plus MCP servers and tools making Claude ridiculously capable — will keep the playing field level. An Axios report comparing AI pricing to Uber's subsidize-then-squeeze model leads to an unexpectedly great car analogy involving Hunter's 1996 Land Rover parked next to his Cybertruck. The episode wraps with a sharp breakdown of why consumer AI and enterprise AI are fundamentally different markets — and why enterprise is where the real money is headed.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>0:00 Gary the Producer Has Feelings About AGI<br>
 2:36 Suno 5.5 Made a Banger on Air<br>
 6:37 What AGI Actually Means — The X% Y% Z% Test<br>
 8:40 AI Still Needs a Manager (Middle Management Rejoices)<br>
 11:43 Hunter's $200/Month Subscription Intervention<br>
 15:52 AI's Uber Pricing Problem<br>
 19:48 Why Enterprise AI and Consumer AI Are Different Games</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Daniel's afraid to like his own YouTube videos because the algorithm might punish him. Are we right to fear the algorithm, or has he lost it? Vote in the comments.</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #AGI #ArtificialIntelligence</p>
]]></content:encoded>
      <enclosure length="30288161" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/332c6f05-8bf1-4385-842e-76bf26a4f567/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/audio/group/5f50ebe8-a8be-4d03-9257-afebb87e6f9f/group-item/fc629bcd-8ea2-45af-bde7-96380d216aab/128_default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AGI Is Apparently Here So Why Am I Still Paying $200 a Month</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:25:14</itunes:duration>
      <itunes:summary>Jensen Huang declared AGI is here, then immediately walked it back. Hunter and Daniel break down what that actually means, test Suno 5.5&apos;s AI music generator live on air, and debate whether $200/month AI subscriptions are the new normal or a bubble waiting to pop. They also tackle whether open-weight models like Qwen 3.5 will keep AI accessible to everyone, and why enterprise and consumer AI are heading in completely different directions.</itunes:summary>
      <itunes:subtitle>Jensen Huang declared AGI is here, then immediately walked it back. Hunter and Daniel break down what that actually means, test Suno 5.5&apos;s AI music generator live on air, and debate whether $200/month AI subscriptions are the new normal or a bubble waiting to pop. They also tackle whether open-weight models like Qwen 3.5 will keep AI accessible to everyone, and why enterprise and consumer AI are heading in completely different directions.</itunes:subtitle>
      <itunes:keywords>agi vs asi, enterprise ai, artificial general intelligence, claude ai, ai subscriptions, agi, agi 2025, suno 5.5, consumer ai, open weight models, jensen huang, mcp servers, claude opus, qwen 3.5, ai enterprise, nvidia, ai music generator, agi explained, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>171</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">75223a5f-7b33-45ad-85ec-f4e2c88dba81</guid>
      <title>Why the Sora Shutdown Proves OpenAI Is Losing to Anthropic</title>
      <description><![CDATA[<p>The Sora shutdown is official — OpenAI killed its video AI even after Disney put $1B on the table. Anthropic is winning without video, images, or any of it. So why was OpenAI doing it at all?</p>
<p>OpenAI just raised $110 billion and still couldn't keep Sora alive. Hunter and Daniel rip into the Sora shutdown, the three competing theories about why OpenAI pulled the plug, and what it means for the Anthropic vs OpenAI battle that's reshaping the entire AI industry. Anthropic subscriptions reportedly climbed 5% in February while OpenAI posted its biggest subscriber decline ever tracked — and Anthropic doesn't even do video. The "everything app" strategy is looking more like a liability than an advantage.</p>
<p>On the video side: generating top-tier AI video still costs $8–10 per minute, but open-weight models like LTX 2.3 are closing the gap fast. Hunter actually got one running locally on his MacBook by turning Codex loose in full YOLO mode — left the room, came back, had a rendered video and a slightly broken computer. Now that Sora is gone, who takes the AI video crown? Google's Veo 3 is the obvious frontrunner (and they're already plugging it into their ad network). But Grok is the dark horse nobody's watching — cheap, fast, and getting better multiple times a month.</p>
<p>Daniel drops an official 2026 prediction: Disney will partner with Google for AI video by year's end. The logic? If people are already generating Mickey Mouse Ring camera videos with open-weight models, Disney might as well get paid for it. This leads to a genuinely unresolved argument about whether user-generated AI content with brand imagery counts as advertising. Hunter says yes. Daniel says absolutely not. Things mean things, Hunter.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>0:00 Gary vs. the Payphone (Cold Open)<br>
 1:36 Your $200/Month AI Plan Is Subsidized Cope<br>
 3:14 AI Video Costs $10/Min — We Have the Receipts<br>
 5:54 "I'm Going All In on Sora" (About That...)<br>
 7:49 Why OpenAI Actually Killed Sora<br>
 10:37 Anthropic Is Winning Without Video or Images<br>
 12:33 The Video AI Power Vacuum: Veo 3, Grok, Runway<br>
 15:23 Disney's $1B Partner Just Died — Now What?<br>
 20:04 Is AI-Generated Mickey Mouse an Ad?</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>What's the most reckless thing you've let AI do?</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #OpenAI #Anthropic</p>
]]></description>
      <pubDate>Fri, 03 Apr 2026 10:05:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>The Sora shutdown is official — OpenAI killed its video AI even after Disney put $1B on the table. Anthropic is winning without video, images, or any of it. So why was OpenAI doing it at all?</p>
<p>OpenAI just raised $110 billion and still couldn't keep Sora alive. Hunter and Daniel rip into the Sora shutdown, the three competing theories about why OpenAI pulled the plug, and what it means for the Anthropic vs OpenAI battle that's reshaping the entire AI industry. Anthropic subscriptions reportedly climbed 5% in February while OpenAI posted its biggest subscriber decline ever tracked — and Anthropic doesn't even do video. The "everything app" strategy is looking more like a liability than an advantage.</p>
<p>On the video side: generating top-tier AI video still costs $8–10 per minute, but open-weight models like LTX 2.3 are closing the gap fast. Hunter actually got one running locally on his MacBook by turning Codex loose in full YOLO mode — left the room, came back, had a rendered video and a slightly broken computer. Now that Sora is gone, who takes the AI video crown? Google's Veo 3 is the obvious frontrunner (and they're already plugging it into their ad network). But Grok is the dark horse nobody's watching — cheap, fast, and getting better multiple times a month.</p>
<p>Daniel drops an official 2026 prediction: Disney will partner with Google for AI video by year's end. The logic? If people are already generating Mickey Mouse Ring camera videos with open-weight models, Disney might as well get paid for it. This leads to a genuinely unresolved argument about whether user-generated AI content with brand imagery counts as advertising. Hunter says yes. Daniel says absolutely not. Things mean things, Hunter.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>0:00 Gary vs. the Payphone (Cold Open)<br>
 1:36 Your $200/Month AI Plan Is Subsidized Cope<br>
 3:14 AI Video Costs $10/Min — We Have the Receipts<br>
 5:54 "I'm Going All In on Sora" (About That...)<br>
 7:49 Why OpenAI Actually Killed Sora<br>
 10:37 Anthropic Is Winning Without Video or Images<br>
 12:33 The Video AI Power Vacuum: Veo 3, Grok, Runway<br>
 15:23 Disney's $1B Partner Just Died — Now What?<br>
 20:04 Is AI-Generated Mickey Mouse an Ad?</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>What's the most reckless thing you've let AI do?</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #OpenAI #Anthropic</p>
]]></content:encoded>
      <enclosure length="29763718" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/332c6f05-8bf1-4385-842e-76bf26a4f567/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/audio/group/10cb2269-6d4d-4459-bae4-e0c4cbb3540f/group-item/4ed8aefb-5ce5-4b1a-b141-893ac78ebf17/128_default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Why the Sora Shutdown Proves OpenAI Is Losing to Anthropic</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:24:41</itunes:duration>
      <itunes:summary>OpenAI is shutting down Sora despite Disney&apos;s $1 billion licensing deal, and Hunter and Daniel break down the three theories why, from GPU shortages to Anthropic eating their lunch without a single image or video model. They also crown the new AI video contenders (Veo 3, Grok, and open-weight models), debate whether user-generated Disney AI content counts as advertising, and explain why your $200/month AI subscription is basically a gym membership.</itunes:summary>
      <itunes:subtitle>OpenAI is shutting down Sora despite Disney&apos;s $1 billion licensing deal, and Hunter and Daniel break down the three theories why, from GPU shortages to Anthropic eating their lunch without a single image or video model. They also crown the new AI video contenders (Veo 3, Grok, and open-weight models), debate whether user-generated Disney AI content counts as advertising, and explain why your $200/month AI subscription is basically a gym membership.</itunes:subtitle>
      <itunes:keywords>disney openai deal, anthropic vs openai, sora shutdown, openai, ai subscriptions, openai subscribers decline, runway, open source video ai, ai video cost, google veo 3, anthropic growth 2025, ltx, grok, ai video generation, claude max, anthropic, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>170</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e9b80338-8cfa-4283-a595-89ba4a360a72</guid>
      <title>Why Meta Killed the Metaverse (And Is Failing at AI)</title>
      <description><![CDATA[<p>Meta's Metaverse is dead — billions burned, Horizon Worlds shut down, and Zuckerberg still can't ship a competitive AI model. Hunter and Daniel break down Meta's 20% layoffs, their failed VR-to-AI pivot, and why the company that renamed itself for the future keeps getting lapped by Google, OpenAI, and Anthropic.</p>
<p>First up: Hunter confesses to running Claude Max in "dangerously skip permissions" mode — and his computer might be infected. Right on cue, LiteLLM (an open-source package half the AI industry depends on) got hit with a supply chain attack that tried to steal every API key and password it could find.</p>
<p>Then it's Meta's autopsy. Horizon Worlds is dead, and Hunter and Daniel revisit their own failed attempt to podcast inside the Metaverse. Why did VRChat crush Meta's billion-dollar platform? Hunter's theory: you can't build a product for seven-year-olds and forty-seven-year-olds at the same time. With 20% of the company getting laid off, Meta is betting everything on AI — but their models keep underperforming. Llama 4 disappointed. The rumored "Avocado" model supposedly barely matches what competitors shipped a year ago.</p>
<p>Meanwhile, open-weight models from China are eating Meta's lunch. Kimi 2.5 and Qwen 3.5 are running locally on consumer hardware and rivaling the best closed models. Hunter's running Qwen 3.5 on his MacBook Pro and says the chat experience is indistinguishable from Claude or ChatGPT — at least for non-coding tasks. Could the average person ditch their AI subscriptions and go fully local? Almost, but your mom probably isn't installing LM Studio anytime soon.</p>
<p>The episode wraps with the "Pirate and Architect" theory — a vision where vibe-coding pirates ship features at the speed of thought and senior architects clean up behind them. Is this Meta's future? Is it everyone's future? And should Zuckerberg just give up on foundational models and use Gemini like Apple?</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>0:00 Intro<br>
 2:14 Unprotected Claude Sessions<br>
 4:16 LiteLLM Got Hacked<br>
 6:12 Meta Killed the Metaverse<br>
 8:45 Meta's 20% Layoffs<br>
 10:55 Why Meta Can't Build Good AI<br>
 13:35 Open-Weight Models Are Here<br>
 18:47 Could Your Mom Run Local AI?<br>
 24:37 Pirates & Architects<br>
 29:26 The Twitter/X Playbook<br>
 34:14 That's the Whole Conclusion</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Confess in the comments: are you running your AI tools in YOLO mode right now? No judgment. (Okay, a little judgment.)</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #Meta #Metaverse</p>
]]></description>
      <pubDate>Wed, 01 Apr 2026 10:05:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>Meta's Metaverse is dead — billions burned, Horizon Worlds shut down, and Zuckerberg still can't ship a competitive AI model. Hunter and Daniel break down Meta's 20% layoffs, their failed VR-to-AI pivot, and why the company that renamed itself for the future keeps getting lapped by Google, OpenAI, and Anthropic.</p>
<p>First up: Hunter confesses to running Claude Max in "dangerously skip permissions" mode — and his computer might be infected. Right on cue, LiteLLM (an open-source package half the AI industry depends on) got hit with a supply chain attack that tried to steal every API key and password it could find.</p>
<p>Then it's Meta's autopsy. Horizon Worlds is dead, and Hunter and Daniel revisit their own failed attempt to podcast inside the Metaverse. Why did VRChat crush Meta's billion-dollar platform? Hunter's theory: you can't build a product for seven-year-olds and forty-seven-year-olds at the same time. With 20% of the company getting laid off, Meta is betting everything on AI — but their models keep underperforming. Llama 4 disappointed. The rumored "Avocado" model supposedly barely matches what competitors shipped a year ago.</p>
<p>Meanwhile, open-weight models from China are eating Meta's lunch. Kimi 2.5 and Qwen 3.5 are running locally on consumer hardware and rivaling the best closed models. Hunter's running Qwen 3.5 on his MacBook Pro and says the chat experience is indistinguishable from Claude or ChatGPT — at least for non-coding tasks. Could the average person ditch their AI subscriptions and go fully local? Almost, but your mom probably isn't installing LM Studio anytime soon.</p>
<p>The episode wraps with the "Pirate and Architect" theory — a vision where vibe-coding pirates ship features at the speed of thought and senior architects clean up behind them. Is this Meta's future? Is it everyone's future? And should Zuckerberg just give up on foundational models and use Gemini like Apple?</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>0:00 Intro<br>
 2:14 Unprotected Claude Sessions<br>
 4:16 LiteLLM Got Hacked<br>
 6:12 Meta Killed the Metaverse<br>
 8:45 Meta's 20% Layoffs<br>
 10:55 Why Meta Can't Build Good AI<br>
 13:35 Open-Weight Models Are Here<br>
 18:47 Could Your Mom Run Local AI?<br>
 24:37 Pirates & Architects<br>
 29:26 The Twitter/X Playbook<br>
 34:14 That's the Whole Conclusion</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Confess in the comments: are you running your AI tools in YOLO mode right now? No judgment. (Okay, a little judgment.)</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #Meta #Metaverse</p>
]]></content:encoded>
      <enclosure length="40010012" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/332c6f05-8bf1-4385-842e-76bf26a4f567/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/audio/group/de1657b4-f4be-4b1e-9d0c-081a8cf5d321/group-item/7a412930-5ee9-4d7a-8a8b-7352a61b0931/128_default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Why Meta Killed the Metaverse (And Is Failing at AI)</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:35:22</itunes:duration>
      <itunes:summary>Meta killed the Metaverse after burning billions on Horizon Worlds, and now their AI models can&apos;t keep up with Google, OpenAI, or Anthropic, so Hunter and Daniel dissect why Zuckerberg&apos;s company keeps failing and whether 20% layoffs will fix anything. They also cover a LiteLLM supply chain attack, test whether local open-weight models like Qwen 3.5 can replace cloud AI for everyday users, and debate the &quot;Pirate and Architect&quot; theory for the future of software teams.</itunes:summary>
      <itunes:subtitle>Meta killed the Metaverse after burning billions on Horizon Worlds, and now their AI models can&apos;t keep up with Google, OpenAI, or Anthropic, so Hunter and Daniel dissect why Zuckerberg&apos;s company keeps failing and whether 20% layoffs will fix anything. They also cover a LiteLLM supply chain attack, test whether local open-weight models like Qwen 3.5 can replace cloud AI for everyday users, and debate the &quot;Pirate and Architect&quot; theory for the future of software teams.</itunes:subtitle>
      <itunes:keywords>meta ai pivot, vibe coding, agentic ai, kimi 2.5, zuckerberg ai, horizon worlds shutdown, they might be self aware, llama 4, software development future, qwen 3.5, meta metaverse, litellm supply chain attack, claude cowork, local ai, ai agents, open source ai models, claude max, meta layoffs 2025, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>169</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8c3610d9-2059-4e0d-b828-516d6fe1345f</guid>
      <title>A Man Used AI to Make a Cancer Vaccine for His Dying Dog</title>
      <description><![CDATA[<p>An AI cancer vaccine actually worked. A man used ChatGPT, Grok, and AlphaFold to build a personalized mRNA vaccine for his dying dog's cancer, and the tumor shrank by half. This week on They Might Be Self-Aware, Hunter and Daniel tear apart the story, debate Claude as your post-op physician, and propose a formal intelligence rating system for kitchen appliances.</p>
<p>Daniel had face surgery and immediately pasted his medical notes into Claude like it owes him a consultation. Turns out AI medical advice is surprisingly useful for the 80% of questions that aren't life-or-death, if you prompt it right (Daniel did not prompt it right). Hunter explains the sycophancy problem and how to get honest answers from an LLM instead of digital hand-holding.</p>
<p>Then the main event: an Australian man's dog was dying of cancer. He took ChatGPT and Grok down a rabbit hole that led to AlphaFold, a university professor, and a custom mRNA cancer vaccine, the kind of personalized medicine that could eventually win a Nobel Prize. The tumor halved. The dog came back to life. Daniel says AI drug discovery is going to change everything. Hunter says just ask the AI.</p>
<p>Andrej Karpathy released "auto research": AI agents that autonomously optimize machine learning models. He gave each agent one hour to beat his hand-tuned results on a small GPT. They beat him by 11%. The hosts get into hyperparameter tuning vs. real architecture changes, and whether spawning an AI researcher with a one-hour lifespan is an ethics problem or just Tuesday.</p>
<p>Finally: Philips put a "conversational virtual assistant" in a coffee maker. It's a questionnaire. Daniel is furious. This somehow leads to the invention of standardized AI intelligence levels: Level 0 (the Philips coffee maker) through Level 7 (you'll have to subscribe to find out).</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>0:00 Gary Calls from a Raccoon Payphone<br>
 1:21 Daniel Fed His Surgery Notes to Claude<br>
 7:24 AI Cancer Vaccine Saved a Dying Dog<br>
 13:56 Karpathy's Auto Research Beat His Own Brain<br>
 22:38 The Fake AI Coffee Maker</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Dog has cancer? Just ask the AI. What's the wildest thing you've actually asked an AI for help with? Drop it in the comments.</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #AICancerVaccine #TMBSA</p>
]]></description>
      <pubDate>Fri, 27 Mar 2026 10:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>An AI cancer vaccine actually worked. A man used ChatGPT, Grok, and AlphaFold to build a personalized mRNA vaccine for his dying dog's cancer, and the tumor shrank by half. This week on They Might Be Self-Aware, Hunter and Daniel tear apart the story, debate Claude as your post-op physician, and propose a formal intelligence rating system for kitchen appliances.</p>
<p>Daniel had face surgery and immediately pasted his medical notes into Claude like it owes him a consultation. Turns out AI medical advice is surprisingly useful for the 80% of questions that aren't life-or-death, if you prompt it right (Daniel did not prompt it right). Hunter explains the sycophancy problem and how to get honest answers from an LLM instead of digital hand-holding.</p>
<p>Then the main event: an Australian man's dog was dying of cancer. He took ChatGPT and Grok down a rabbit hole that led to AlphaFold, a university professor, and a custom mRNA cancer vaccine, the kind of personalized medicine that could eventually win a Nobel Prize. The tumor halved. The dog came back to life. Daniel says AI drug discovery is going to change everything. Hunter says just ask the AI.</p>
<p>Andrej Karpathy released "auto research": AI agents that autonomously optimize machine learning models. He gave each agent one hour to beat his hand-tuned results on a small GPT. They beat him by 11%. The hosts get into hyperparameter tuning vs. real architecture changes, and whether spawning an AI researcher with a one-hour lifespan is an ethics problem or just Tuesday.</p>
<p>Finally: Philips put a "conversational virtual assistant" in a coffee maker. It's a questionnaire. Daniel is furious. This somehow leads to the invention of standardized AI intelligence levels: Level 0 (the Philips coffee maker) through Level 7 (you'll have to subscribe to find out).</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>0:00 Gary Calls from a Raccoon Payphone<br>
 1:21 Daniel Fed His Surgery Notes to Claude<br>
 7:24 AI Cancer Vaccine Saved a Dying Dog<br>
 13:56 Karpathy's Auto Research Beat His Own Brain<br>
 22:38 The Fake AI Coffee Maker</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Dog has cancer? Just ask the AI. What's the wildest thing you've actually asked an AI for help with? Drop it in the comments.</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #AICancerVaccine #TMBSA</p>
]]></content:encoded>
      <enclosure length="33373146" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/332c6f05-8bf1-4385-842e-76bf26a4f567/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/audio/group/970f7609-e600-4d05-927d-dda06e7612fb/group-item/099a74ec-e9cb-4266-b40b-15169646e156/128_default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>A Man Used AI to Make a Cancer Vaccine for His Dying Dog</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:28:27</itunes:duration>
      <itunes:summary>An Australian man used ChatGPT, Grok, and AlphaFold to design a personalized mRNA cancer vaccine for his dying dog, and the tumor shrank by half. Hunter and Daniel also cover using Claude for post-op medical notes, Karpathy&apos;s auto research agents beating a human expert by 11%, and why Philips calling a questionnaire a &quot;conversational virtual assistant&quot; is a war crime.</itunes:summary>
      <itunes:subtitle>An Australian man used ChatGPT, Grok, and AlphaFold to design a personalized mRNA cancer vaccine for his dying dog, and the tumor shrank by half. Hunter and Daniel also cover using Claude for post-op medical notes, Karpathy&apos;s auto research agents beating a human expert by 11%, and why Philips calling a questionnaire a &quot;conversational virtual assistant&quot; is a war crime.</itunes:subtitle>
      <itunes:keywords>ai cancer vaccine, mrna dog vaccine, ai drug discovery, fake ai, alphafold, ai coffee maker, auto research, decision tree, karpathy, they might be self-aware, ai medical advice, grok, claude medical notes, personalized mrna vaccine, hyperparameter optimization, chatgpt cancer cure, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>168</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e96912db-7de3-4d0c-8a5f-18ef82a1792c</guid>
      <title>Human Brain Cells Learned to Play Doom. Now What?</title>
      <description><![CDATA[<p>Human brain cells in a petri dish learned to play Doom — welcome to wetware AI. Plus: fruit fly brain emulation, digital brain uploads, and forever torture prison for $1,000/month.</p>
<p>Researchers wired up living human brain cells on microelectrode arrays and got them running Doom. Not metaphorically. The cells are doing the processing. Hunter calls it inevitable. Daniel calls it horrifying. They're both right.</p>
<p>Then it gets weirder. A company called Aeon Systems took a fruit fly brain — the whole brain — mapped every neuron, and dropped a digital copy into a virtual body. The digital fly started doing fly stuff. Their next goal: mouse brains. After that? Yours.</p>
<p>That's when Hunter offers Daniel $1,000 a month to rent a copy of his brain and put it in digital hell. Permanently. The conversation spirals from there into whether a digital you is really you, how Coca-Cola would use your brain clone to A/B test Corn Flakes commercials 300,000 times, and exactly how many human organs you'd have to grow in a vat before you've accidentally committed a felony. (Skeleton in a vat? That's a crime. Brain cells on a chip? Apparently that's just science.)</p>
<p>They also tackle the AI sin eater problem, why Hunter won't grant personhood to anything in the cloud for at least 500 years, and why Daniel has never once been mean to a hidden Markov model.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>0:00 Meet Gary, Our New Producer<br>
 1:39 Hunter & Daniel Are Back — HeyGen, AI-Maxing & More<br>
 6:33 Brain Organoid Plays Doom — Wetware Computing Explained<br>
 10:04 Fruit Fly Brain Emulation — Aeon Systems' Digital Mind<br>
 12:29 Would You Upload Your Brain? Digital Consciousness Debate<br>
 16:55 Why You Should Be Nice to Your AI<br>
 19:18 Can a Digital Copy of You Have Rights? AI Personhood<br>
 20:53 Brain in a Vat — When Does Wetware AI Become Human?<br>
 23:24 Are We Living in a Simulation? Wrap-Up</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Hunter offered $1,000/month to rent Daniel's brain and send it to digital hell. What's YOUR price? Drop it in the comments — or tell us no amount is enough.</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#WetwareAI #BrainOrganoid #DoomAI</p>
]]></description>
      <pubDate>Mon, 23 Mar 2026 10:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>Human brain cells in a petri dish learned to play Doom — welcome to wetware AI. Plus: fruit fly brain emulation, digital brain uploads, and forever torture prison for $1,000/month.</p>
<p>Researchers wired up living human brain cells on microelectrode arrays and got them running Doom. Not metaphorically. The cells are doing the processing. Hunter calls it inevitable. Daniel calls it horrifying. They're both right.</p>
<p>Then it gets weirder. A company called Aeon Systems took a fruit fly brain — the whole brain — mapped every neuron, and dropped a digital copy into a virtual body. The digital fly started doing fly stuff. Their next goal: mouse brains. After that? Yours.</p>
<p>That's when Hunter offers Daniel $1,000 a month to rent a copy of his brain and put it in digital hell. Permanently. The conversation spirals from there into whether a digital you is really you, how Coca-Cola would use your brain clone to A/B test Corn Flakes commercials 300,000 times, and exactly how many human organs you'd have to grow in a vat before you've accidentally committed a felony. (Skeleton in a vat? That's a crime. Brain cells on a chip? Apparently that's just science.)</p>
<p>They also tackle the AI sin eater problem, why Hunter won't grant personhood to anything in the cloud for at least 500 years, and why Daniel has never once been mean to a hidden Markov model.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>0:00 Meet Gary, Our New Producer<br>
 1:39 Hunter & Daniel Are Back — HeyGen, AI-Maxing & More<br>
 6:33 Brain Organoid Plays Doom — Wetware Computing Explained<br>
 10:04 Fruit Fly Brain Emulation — Aeon Systems' Digital Mind<br>
 12:29 Would You Upload Your Brain? Digital Consciousness Debate<br>
 16:55 Why You Should Be Nice to Your AI<br>
 19:18 Can a Digital Copy of You Have Rights? AI Personhood<br>
 20:53 Brain in a Vat — When Does Wetware AI Become Human?<br>
 23:24 Are We Living in a Simulation? Wrap-Up</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Hunter offered $1,000/month to rent Daniel's brain and send it to digital hell. What's YOUR price? Drop it in the comments — or tell us no amount is enough.</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#WetwareAI #BrainOrganoid #DoomAI</p>
]]></content:encoded>
      <enclosure length="29675852" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/332c6f05-8bf1-4385-842e-76bf26a4f567/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/audio/group/db1b8436-163b-419a-9990-84e65b51a1e7/group-item/df3e0665-9f91-4f58-a9f6-ba8dbf329e50/128_default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Human Brain Cells Learned to Play Doom. Now What?</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:24:36</itunes:duration>
      <itunes:summary>Scientists grew human brain cells in a petri dish and taught them to play Doom, and a company called Aeon Systems digitally emulated an entire fruit fly brain, so Hunter and Daniel dig into what wetware computing and full brain emulation mean for the future of AI and consciousness. The conversation spirals from a $1,000/month brain rental thought experiment into digital torture prisons, AI personhood, the AI sin eater problem, and exactly how many human organs you&apos;d have to grow in a vat before you&apos;ve accidentally committed a felony.</itunes:summary>
      <itunes:subtitle>Scientists grew human brain cells in a petri dish and taught them to play Doom, and a company called Aeon Systems digitally emulated an entire fruit fly brain, so Hunter and Daniel dig into what wetware computing and full brain emulation mean for the future of AI and consciousness. The conversation spirals from a $1,000/month brain rental thought experiment into digital torture prisons, AI personhood, the AI sin eater problem, and exactly how many human organs you&apos;d have to grow in a vat before you&apos;ve accidentally committed a felony.</itunes:subtitle>
      <itunes:keywords>brain organoid, ai debate, aeon systems, brain upload, brain in a vat, digital mind, digital consciousness, doom, artificial intelligence, wetware computing, ai podcast, consciousness, technology, fruit fly brain emulation, simulation theory, wetware ai, ai, brain computer interface, organoid computing, ai personhood, tmbsa, tech news, they might be self-aware, ai ethics, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>167</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d5712228-d555-46b0-ae4f-ef883dc48b2e</guid>
      <title>Is AI Killing Your Job? (Meta, Burger King, Jack Dorsey)</title>
      <description><![CDATA[<p>AI job loss is here. Jack Dorsey just laid off 40% of Block despite rising profits — blaming AI. Are smaller AI-powered teams about to replace everyone?</p>
<p>Burger King is testing AI that listens to workers in the drive-thru. Meta’s Ray-Ban smart glasses may send private footage to human reviewers. And the DMV just proved AI can’t tell the difference between Spanish… and a Spanish accent.</p>
<p>Welcome to the weird early days of AI replacing jobs.</p>
<p>This week on They Might Be Self-Aware, Hunter Powers and Daniel Bishop break down the biggest signals coming out of the AI economy — from AI job cuts to the rise of AI-managed workers.</p>
<p>We cover:</p>
<p>• The Jack Dorsey layoffs and why Block stock surged<br>
 • The hidden world of human annotators reviewing AI data<br>
 • The Meta Ray-Ban privacy leak and AI training data<br>
 • Burger King’s AI assistant coaching fast-food employees<br>
 • And the uncomfortable question nobody wants to answer:</p>
<p>If AI makes workers 10× more productive… why wouldn’t companies hire 90% fewer people?</p>
<p>The AI revolution may not explode overnight.</p>
<p>It might just quietly delete jobs one team at a time.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>00:00 <i>AI Job Loss Panic Begins</i> – Gary’s intro and the growing fear that AI is replacing human jobs<br>
 01:13 <i>Is AI Replacing Jobs?</i> – Hunter and Daniel break down the AI job loss debate<br>
 02:02 <i>DMV AI Translation Fail</i> – Washington DMV AI mistakes Spanish for a Spanish accent<br>
 11:45 <i>Burger King AI Monitoring Workers</i> – Drive-thru AI coaching employees on friendliness<br>
 17:18 <i>Jack Dorsey Block Layoffs Explained</i> – Why Block cut 40% of staff despite rising profits</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Comment to prove you’re not an AI agent.</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #AIJobLoss #ArtificialIntelligence</p>
]]></description>
      <pubDate>Mon, 16 Mar 2026 10:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>AI job loss is here. Jack Dorsey just laid off 40% of Block despite rising profits — blaming AI. Are smaller AI-powered teams about to replace everyone?</p>
<p>Burger King is testing AI that listens to workers in the drive-thru. Meta’s Ray-Ban smart glasses may send private footage to human reviewers. And the DMV just proved AI can’t tell the difference between Spanish… and a Spanish accent.</p>
<p>Welcome to the weird early days of AI replacing jobs.</p>
<p>This week on They Might Be Self-Aware, Hunter Powers and Daniel Bishop break down the biggest signals coming out of the AI economy — from AI job cuts to the rise of AI-managed workers.</p>
<p>We cover:</p>
<p>• The Jack Dorsey layoffs and why Block stock surged<br>
 • The hidden world of human annotators reviewing AI data<br>
 • The Meta Ray-Ban privacy leak and AI training data<br>
 • Burger King’s AI assistant coaching fast-food employees<br>
 • And the uncomfortable question nobody wants to answer:</p>
<p>If AI makes workers 10× more productive… why wouldn’t companies hire 90% fewer people?</p>
<p>The AI revolution may not explode overnight.</p>
<p>It might just quietly delete jobs one team at a time.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>00:00 <i>AI Job Loss Panic Begins</i> – Gary’s intro and the growing fear that AI is replacing human jobs<br>
 01:13 <i>Is AI Replacing Jobs?</i> – Hunter and Daniel break down the AI job loss debate<br>
 02:02 <i>DMV AI Translation Fail</i> – Washington DMV AI mistakes Spanish for a Spanish accent<br>
 11:45 <i>Burger King AI Monitoring Workers</i> – Drive-thru AI coaching employees on friendliness<br>
 17:18 <i>Jack Dorsey Block Layoffs Explained</i> – Why Block cut 40% of staff despite rising profits</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Comment to prove you’re not an AI agent.</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #AIJobLoss #ArtificialIntelligence</p>
]]></content:encoded>
      <enclosure length="41179881" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/332c6f05-8bf1-4385-842e-76bf26a4f567/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/audio/group/3d984cf1-45b1-42f1-b0f8-ac51f883e002/group-item/76ca0d04-8868-4e56-b18b-e14b1173eed1/128_default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Is AI Killing Your Job? (Meta, Burger King, Jack Dorsey)</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:36:35</itunes:duration>
      <itunes:summary>AI is starting to reshape the workforce in strange ways, from Burger King using AI to coach employees on friendliness to Jack Dorsey cutting 40% of Block’s staff despite rising profits. Hunter and Daniel unpack what these stories reveal about AI job loss, surveillance in the workplace, and whether smaller AI-powered teams are about to become the new normal.</itunes:summary>
      <itunes:subtitle>AI is starting to reshape the workforce in strange ways, from Burger King using AI to coach employees on friendliness to Jack Dorsey cutting 40% of Block’s staff despite rising profits. Hunter and Daniel unpack what these stories reveal about AI job loss, surveillance in the workplace, and whether smaller AI-powered teams are about to become the new normal.</itunes:subtitle>
      <itunes:keywords>jack dorsey, technology podcast, productivity ai, burger king ai, square, human annotators, artificial intelligence, ai training data, fast food automation, ai job loss, future of work, cash app, dmv ai fail, ai, ai economy, automation, tech news, pareto principle, block layoffs, ai replacing jobs, tech layoffs, workplace surveillance, ai privacy, meta ray-ban glasses, ai ethics, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>166</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">197da4ec-9299-4d2d-8b3d-f4b1979d79e8</guid>
      <title>The Claude AI Military Ban: Why 1.5M Users Left ChatGPT</title>
      <description><![CDATA[<p><i>Claude AI military drama just exploded. Anthropic refused the Pentagon — and OpenAI stepped in. Now 1.5M users may be leaving ChatGPT.</i> What actually happened when Claude refused Pentagon requests tied to surveillance and autonomous weapons?</p>
<p>In this episode of They Might Be Self-Aware, Hunter Powers and Daniel Bishop unpack the rapidly escalating clash between Anthropic, OpenAI, and the U.S. government — and why it may be the first true geopolitical battle of the AI era.</p>
<p>The story gets wild:</p>
<p>• Anthropic’s Claude AI military restrictions trigger a Pentagon standoff<br>
 • The government reportedly moves to blacklist Anthropic across supply chains<br>
 • OpenAI steps in almost immediately to take the military AI contract<br>
 • A backlash erupts as ChatGPT users begin canceling subscriptions</p>
<p>And that’s just the beginning.</p>
<p>Hunter and Daniel also break down the shocking reports of AWS data centers bombed during an Iran drone attack, the rise of AI-assisted military strategy, and the growing reality of autonomous weapons AI influencing real-world warfare.</p>
<p>Plus:</p>
<p>• the political implications of David Sacks’ AI policy role<br>
 • why the Claude vs ChatGPT rivalry just went geopolitical<br>
 • how Qwen models are suddenly matching Claude benchmarks<br>
 • why local AI models could destroy the current AI business model</p>
<p>If you want to understand where AI, geopolitics, and defense technology are heading next, this episode is essential.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>00:00 <i>Gary’s Dramatic Intro</i> – Claude refuses the Pentagon, OpenAI grabs the contract, and the AI war begins<br>
 01:35 <i>Digital Daniel Appears</i> – Testing an AI-generated co-host and the strange future of virtual podcast hosts<br>
 02:24 <i>AWS Data Centers Bombed</i> – Iran drone attacks, cloud infrastructure as a wartime target, and AI in military strategy<br>
 06:54 <i>Claude vs the Pentagon</i> – Anthropic refuses surveillance and autonomous weapons requests, triggering a government clash<br>
 18:52 <i>The Anthropic Blacklist</i> – Supply-chain bans, Palantir involvement, and the OpenAI vs Anthropic power struggle<br>
 29:14 <i>The AI Arms Race</i> – $110B OpenAI funding, users leaving ChatGPT, Qwen benchmarks, and the future of local AI</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Should AI companies refuse military contracts or is that dangerously naive?</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #ClaudeAI #ChatGPT #ArtificialIntelligence</p>
]]></description>
      <pubDate>Thu, 12 Mar 2026 13:51:20 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Claude AI military drama just exploded. Anthropic refused the Pentagon — and OpenAI stepped in. Now 1.5M users may be leaving ChatGPT.</i> What actually happened when Claude refused Pentagon requests tied to surveillance and autonomous weapons?</p>
<p>In this episode of They Might Be Self-Aware, Hunter Powers and Daniel Bishop unpack the rapidly escalating clash between Anthropic, OpenAI, and the U.S. government — and why it may be the first true geopolitical battle of the AI era.</p>
<p>The story gets wild:</p>
<p>• Anthropic’s Claude AI military restrictions trigger a Pentagon standoff<br>
 • The government reportedly moves to blacklist Anthropic across supply chains<br>
 • OpenAI steps in almost immediately to take the military AI contract<br>
 • A backlash erupts as ChatGPT users begin canceling subscriptions</p>
<p>And that’s just the beginning.</p>
<p>Hunter and Daniel also break down the shocking reports of AWS data centers bombed during an Iran drone attack, the rise of AI-assisted military strategy, and the growing reality of autonomous weapons AI influencing real-world warfare.</p>
<p>Plus:</p>
<p>• the political implications of David Sacks’ AI policy role<br>
 • why the Claude vs ChatGPT rivalry just went geopolitical<br>
 • how Qwen models are suddenly matching Claude benchmarks<br>
 • why local AI models could destroy the current AI business model</p>
<p>If you want to understand where AI, geopolitics, and defense technology are heading next, this episode is essential.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>00:00 <i>Gary’s Dramatic Intro</i> – Claude refuses the Pentagon, OpenAI grabs the contract, and the AI war begins<br>
 01:35 <i>Digital Daniel Appears</i> – Testing an AI-generated co-host and the strange future of virtual podcast hosts<br>
 02:24 <i>AWS Data Centers Bombed</i> – Iran drone attacks, cloud infrastructure as a wartime target, and AI in military strategy<br>
 06:54 <i>Claude vs the Pentagon</i> – Anthropic refuses surveillance and autonomous weapons requests, triggering a government clash<br>
 18:52 <i>The Anthropic Blacklist</i> – Supply-chain bans, Palantir involvement, and the OpenAI vs Anthropic power struggle<br>
 29:14 <i>The AI Arms Race</i> – $110B OpenAI funding, users leaving ChatGPT, Qwen benchmarks, and the future of local AI</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Should AI companies refuse military contracts or is that dangerously naive?</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #ClaudeAI #ChatGPT #ArtificialIntelligence</p>
]]></content:encoded>
      <enclosure length="38127631" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/332c6f05-8bf1-4385-842e-76bf26a4f567/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/audio/group/79e249ec-eadd-4852-bd62-16f8c5e7ba28/group-item/2e403f16-4906-4116-a8ec-4732d2631db0/128_default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>The Claude AI Military Ban: Why 1.5M Users Left ChatGPT</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:34:41</itunes:duration>
      <itunes:summary>Claude refused Pentagon requests tied to surveillance and autonomous weapons, triggering a government backlash, a supply-chain blacklist threat, and OpenAI stepping in to take the military AI contract. Hunter and Daniel unpack the fallout, from ChatGPT users leaving and a $110B OpenAI funding round to AI-driven warfare, AWS data center attacks, and the escalating Claude vs ChatGPT rivalry.</itunes:summary>
      <itunes:subtitle>Claude refused Pentagon requests tied to surveillance and autonomous weapons, triggering a government backlash, a supply-chain blacklist threat, and OpenAI stepping in to take the military AI contract. Hunter and Daniel unpack the fallout, from ChatGPT users leaving and a $110B OpenAI funding round to AI-driven warfare, AWS data center attacks, and the escalating Claude vs ChatGPT rivalry.</itunes:subtitle>
      <itunes:keywords>ai military ethics, technology podcast, openai funding 110 billion, aws data center attack, claude ai military, sam altman openai, openai vs anthropic, david sacks ai policy, artificial intelligence news, anthropic pentagon ban, ai industry analysis, pentagon ai decision making, iran drone attack aws, local ai models, ai warfare technology, chatgpt users leaving, qwen ai benchmark, ai geopolitics, claude vs chatgpt, autonomous weapons ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>165</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ddb29483-9a4c-4fcb-afa9-50f1e9b534ef</guid>
      <title>Who&apos;s the Fall Guy for AI? Claude Code Just Broke IBM</title>
      <description><![CDATA[<p>Claude kills IBM? Anthropic’s Claude Code just learned COBOL—and IBM had its worst day in decades. Is AI about to eat legacy tech?</p>
<p>This week on They Might Be Self-Aware, Hunter Powers and Daniel Bishop unpack the chaos behind the “Claude kills IBM” narrative. Anthropic’s Claude Code suddenly got good at COBOL, the ancient language quietly running massive parts of global banking infrastructure. When the news hit, IBM stock dropped hard—because if AI can maintain legacy code, IBM’s biggest moat might disappear.</p>
<p>We break down:</p>
<p>• Why Claude Code vs COBOL spooked investors<br>
 • The logic behind the IBM stock panic<br>
 • Goldman Sachs claiming AI barely affects GDP… while cutting jobs for AI<br>
 • The rise of the AI “sin eater” — the human who takes the blame when AI screws up</p>
<p>Because even if AI replaces analysts and COBOL engineers… someone still has to take the fall.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>00:00 AI “Sin Eater” Explained – Who takes the blame when AI makes a mistake?<br>
 09:06 Goldman Sachs AI Contradiction – Job cuts, AI hype, and the anti-AI fund<br>
 13:29 Claude Code Learns COBOL – Anthropic’s AI tackles legacy banking code<br>
 14:27 Did Claude Kill IBM? – Why the Claude COBOL news triggered an IBM stock panic<br>
 21:29 AI Hallucinations in Business – When companies start making decisions on fake AI data</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Comment to prove you’re human: type SIN EATER and tell us — when AI makes a mistake, who should take the blame?</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #ClaudeCode #IBM</p>
]]></description>
      <pubDate>Mon, 09 Mar 2026 10:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>Claude kills IBM? Anthropic’s Claude Code just learned COBOL—and IBM had its worst day in decades. Is AI about to eat legacy tech? This week on They Might Be Self-Aware, Hunter Powers and Daniel Bishop unpack the chaos behind the “Claude kills IBM” narrative. Anthropic’s Claude Code suddenly got good at COBOL, the ancient language quietly running massive parts of global banking infrastructure. When the news hit, IBM stock dropped hard—because if AI can maintain legacy code, IBM’s biggest moat might disappear.</p>
<p>We break down:</p>
<p>• Why Claude Code vs COBOL spooked investors<br>
 • The logic behind the IBM stock panic<br>
 • Goldman Sachs claiming AI barely affects GDP… while cutting jobs for AI<br>
 • The rise of the AI “sin eater” — the human who takes the blame when AI screws up</p>
<p>Because even if AI replaces analysts and COBOL engineers… someone still has to take the fall.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>00:00 AI “Sin Eater” Explained – Who takes the blame when AI makes a mistake?<br>
 09:06 Goldman Sachs AI Contradiction – Job cuts, AI hype, and the anti-AI fund<br>
 13:29 Claude Code Learns COBOL – Anthropic’s AI tackles legacy banking code<br>
 14:27 Did Claude Kill IBM? – Why the Claude COBOL news triggered an IBM stock panic<br>
 21:29 AI Hallucinations in Business – When companies start making decisions on fake AI data</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Comment to prove you’re human: type SIN EATER and tell us — when AI makes a mistake, who should take the blame?</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #ClaudeCode #IBM</p>
]]></content:encoded>
      <enclosure length="31591301" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/332c6f05-8bf1-4385-842e-76bf26a4f567/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/audio/group/e85c8fb5-590c-4452-a202-c2de08be3659/group-item/f0f472e6-8f26-4e3d-bf5f-6e07f58a33b1/128_default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Who&apos;s the Fall Guy for AI? Claude Code Just Broke IBM</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:27:53</itunes:duration>
      <itunes:summary>Anthropic’s Claude Code just learned COBOL, and the market reaction helped trigger IBM’s worst stock drop in decades, raising questions about whether AI can dismantle legacy tech moats. Hunter and Daniel unpack the chaos, Goldman Sachs’ AI contradictions, and the emerging role of the AI “sin eater”… the human who takes the blame when the AI makes the mistake.</itunes:summary>
      <itunes:subtitle>Anthropic’s Claude Code just learned COBOL, and the market reaction helped trigger IBM’s worst stock drop in decades, raising questions about whether AI can dismantle legacy tech moats. Hunter and Daniel unpack the chaos, Goldman Sachs’ AI contradictions, and the emerging role of the AI “sin eater”… the human who takes the blame when the AI makes the mistake.</itunes:subtitle>
      <itunes:keywords>ai productivity debate, enterprise ai, claude code, ai accountability, ai job cuts, anthropic claude, ai tech news, ai coding tools, artificial intelligence podcast, ai hallucinations, claude kills ibm, claude cobol, legacy systems ai, ai sin eater, cobol legacy code, ai replacing developers, ibm stock crash, goldman sachs ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>164</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">095a9500-5553-444d-ac70-95a09a9a0383</guid>
      <title>The AI Honeymoon Is Over | Claude, OpenClaw &amp; AI Fatigue</title>
      <description><![CDATA[<p>Claude is getting better… but the AI hype cycle might be slowing down. We debate Claude Code, Claude Skills, AI persona prompting, and why the AI honeymoon may already be over.</p>
<p>Topics in this episode</p>
<p>• Claude Code<br>
 • Claude Opus 4.6<br>
 • Anthropic Claude Skills<br>
 • Claude institutional memory<br>
 • AI persona prompting<br>
 • AI context engineering<br>
 • AI fatigue and the AI hype cycle<br>
 • OpenClaw AI agent experiment<br>
 • AI ethics and autonomous agents<br>
 • Local Dolphin LLM models</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>00:00 <i>Is the AI Honeymoon Over?</i> – AI maturity, model fatigue, and why new releases feel less revolutionary<br>
 07:51 <i>Claude Code & Opus Model Upgrades</i> – How Claude fits into daily workflows and why upgrades now feel incremental<br>
 18:19 <i>Claude Skills vs Claude.md</i> – Anthropic’s institutional memory system and how agents store context<br>
 21:14 <i>AI Persona Prompting vs Generic Prompting</i> – Why framing the model like a real expert can change outputs<br>
 29:35 <i>OpenClaw, AI Ethics & Gary</i> – When an AI agent refuses to create a Reddit account because of its “moral code”</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Serious question: if your AI assistant refused to do something because of its “ethics”… Would you respect the boundary or replace it with a less moral AI?</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #Claude #ArtificialIntelligence</p>
]]></description>
      <pubDate>Fri, 06 Mar 2026 11:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>Claude is getting better… but the AI hype cycle might be slowing down. We debate Claude Code, Claude Skills, AI persona prompting, and why the AI honeymoon may already be over.</p>
<p>Topics in this episode</p>
<p>• Claude Code<br>
 • Claude Opus 4.6<br>
 • Anthropic Claude Skills<br>
 • Claude institutional memory<br>
 • AI persona prompting<br>
 • AI context engineering<br>
 • AI fatigue and the AI hype cycle<br>
 • OpenClaw AI agent experiment<br>
 • AI ethics and autonomous agents<br>
 • Local Dolphin LLM models</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>00:00 <i>Is the AI Honeymoon Over?</i> – AI maturity, model fatigue, and why new releases feel less revolutionary<br>
 07:51 <i>Claude Code & Opus Model Upgrades</i> – How Claude fits into daily workflows and why upgrades now feel incremental<br>
 18:19 <i>Claude Skills vs Claude.md</i> – Anthropic’s institutional memory system and how agents store context<br>
 21:14 <i>AI Persona Prompting vs Generic Prompting</i> – Why framing the model like a real expert can change outputs<br>
 29:35 <i>OpenClaw, AI Ethics & Gary</i> – When an AI agent refuses to create a Reddit account because of its “moral code”</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Serious question: if your AI assistant refused to do something because of its “ethics”… Would you respect the boundary or replace it with a less moral AI?</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #Claude #ArtificialIntelligence</p>
]]></content:encoded>
      <enclosure length="37480660" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/332c6f05-8bf1-4385-842e-76bf26a4f567/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/audio/group/c34f8ea9-dedd-4742-98b6-4c51e1ea75f4/group-item/92047565-8b09-4dc9-8434-f9491156e8df/128_default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>The AI Honeymoon Is Over | Claude, OpenClaw &amp; AI Fatigue</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:35:15</itunes:duration>
      <itunes:summary>Hunter and Daniel debate whether the rapid progress of AI tools like Claude, Claude Code, and Anthropic’s new Claude Skills system signals a new maturity phase or the beginning of AI fatigue, as model upgrades feel increasingly incremental. Along the way, they explore persona prompting, institutional AI memory, and their OpenClaw experiment, where AI agent Gary refuses to create a Reddit account because it believes doing so violates its own ethics.</itunes:summary>
      <itunes:subtitle>Hunter and Daniel debate whether the rapid progress of AI tools like Claude, Claude Code, and Anthropic’s new Claude Skills system signals a new maturity phase or the beginning of AI fatigue, as model upgrades feel increasingly incremental. Along the way, they explore persona prompting, institutional AI memory, and their OpenClaw experiment, where AI agent Gary refuses to create a Reddit account because it believes doing so violates its own ethics.</itunes:subtitle>
      <itunes:keywords>claude skills, prompt engineering, openclaw, claude institutional memory, claude code, ai context engineering, artificial intelligence, claude opus 4.6, ai fatigue, ai coding assistant, ai trends, anthropic claude, large language models, coding with ai, ai autonomy, local llms, dolphin model, ai persona prompting, claude, ai agents, ai hype cycle, ai maturity, ai ethics, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>163</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5137a340-7154-4589-955e-e08f9d3c872e</guid>
      <title>AI Filmmaking Killed The Hollywood Star</title>
      <description><![CDATA[<p><i>AI filmmaking is replacing Hollywood faster than anyone expected.</i> Seedance video AI just made $100M movies look optional. This week on They Might Be Self-Aware, we break down the moment AI filmmaking stopped being a novelty and started becoming an economic replacement. If AI can generate 95% of a blockbuster at 1% of the cost, what happens to actors, studios, and production crews? The spreadsheet wins.</p>
<p>We also dive into:</p>
<p>00:00 <i>AI Agents & Productivity Limits</i> – Claude Sonnet 4, burnout & “infinite Tim Cook”<br>
 09:47 <i>Meta Digital Twin Afterlife</i> – AI clones, digital twin death & Ship of Theseus<br>
 19:50 <i>AI Open Source Scandal</i> – AI hit piece after rejected pull request<br>
 24:48 <i>AI Filmmaking Replacing Hollywood?</i> – Seedance video AI vs blockbuster economics<br>
 37:21 <i>The Future of AI Identity</i> – Legal chaos, liability & what breaks next</p>
<p>AI isn’t just assisting anymore. It’s starting to act <i>as you</i>. Hollywood is just the first domino.</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>If AI could make a blockbuster for $1M tomorrow… do you still want humans in your movies?</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #AIFilmmaking #Seedance #ClaudeAI #DigitalTwin</p>
]]></description>
      <pubDate>Tue, 03 Mar 2026 11:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>AI filmmaking is replacing Hollywood faster than anyone expected.</i> Seedance video AI just made $100M movies look optional. This week on They Might Be Self-Aware, we break down the moment AI filmmaking stopped being a novelty and started becoming an economic replacement. If AI can generate 95% of a blockbuster at 1% of the cost, what happens to actors, studios, and production crews? The spreadsheet wins.</p>
<p>We also dive into:</p>
<p>00:00 <i>AI Agents & Productivity Limits</i> – Claude Sonnet 4, burnout & “infinite Tim Cook”<br>
 09:47 <i>Meta Digital Twin Afterlife</i> – AI clones, digital twin death & Ship of Theseus<br>
 19:50 <i>AI Open Source Scandal</i> – AI hit piece after rejected pull request<br>
 24:48 <i>AI Filmmaking Replacing Hollywood?</i> – Seedance video AI vs blockbuster economics<br>
 37:21 <i>The Future of AI Identity</i> – Legal chaos, liability & what breaks next</p>
<p>AI isn’t just assisting anymore. It’s starting to act <i>as you</i>. Hollywood is just the first domino.</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>If AI could make a blockbuster for $1M tomorrow… do you still want humans in your movies?</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#AI #AIFilmmaking #Seedance #ClaudeAI #DigitalTwin</p>
]]></content:encoded>
      <enclosure length="41154433" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/332c6f05-8bf1-4385-842e-76bf26a4f567/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/audio/group/c5b1cfa4-eeba-4b03-8a2f-7dcf2dd40a00/group-item/2eced9d7-aa3a-4a07-82d3-5d828e9ee5f6/128_default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Filmmaking Killed The Hollywood Star</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:39:04</itunes:duration>
      <itunes:summary>AI filmmaking just crossed the line from impressive demo to economic threat, as Seedance and other video models make $100M Hollywood productions look optional. We also explore AI burnout, digital twin afterlife tech, and what happens when AI agents start acting as you and getting into legal trouble doing it.</itunes:summary>
      <itunes:subtitle>AI filmmaking just crossed the line from impressive demo to economic threat, as Seedance and other video models make $100M Hollywood productions look optional. We also explore AI burnout, digital twin afterlife tech, and what happens when AI agents start acting as you and getting into legal trouble doing it.</itunes:subtitle>
      <itunes:keywords>hollywood disruption, ai commits crime, ship of theseus ai, ai filmmaking, digital twin death, ai open source controversy, ai liability, digital twin ai, ai burnout, ai identity crisis, ai afterlife technology, tech news podcast, seedance ai, ai productivity limits, future of filmmaking, claude ai agents, bytedance video ai, artificial intelligence news, ai legal issues, automation in film, ai vs hollywood, meta ai afterlife, ai filmmaking replacing hollywood, ai generated movies, ai economics, ai video generation, claude sonnet 4, ai culture</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>162</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7d93cd9f-0531-4937-8a32-7ae0bbf5863d</guid>
      <title>The Claude Code Mistake That Cost Anthropic $10,000,000,000</title>
      <description><![CDATA[<p><i>Claude Code may have cost Anthropic $10,000,000,000.</i> OpenClaw AI, agentic AI, and the OpenAI power shift explained. Did Anthropic accidentally hand OpenAI the future of developer tooling? This week on <i>They Might Be Self-Aware</i>, Hunter Powers and Daniel Bishop break down the rumored $10B OpenClaw AI acquisition, the Claude Code policy decision that triggered the shift, and why agentic AI might be more chaos than productivity miracle.</p>
<p>When developers rushed to plug OpenClaw AI into Claude Code subscriptions, Anthropic stepped in.<br>
 Restrictions followed.<br>
 Renames followed.<br>
 And then OpenAI reportedly moved.</p>
<p>Now we’re staring at a platform war: <i>Claude Code vs Codex 5.3</i> — and the bigger question behind it:</p>
<p>Is agentic AI actually revolutionizing work… or just accelerating confusion?</p>
<p>We cover:</p>
<ul>
 <li>Why OpenClaw AI’s “computer control” model is both powerful and terrifying</li>
 <li>The real risks of autonomous agents (hallucinations, prompt injection, credential leakage)</li>
 <li>Whether AI agents outperform low-cost human assistants</li>
 <li>Why Spotify claims its top developers don’t write code anymore</li>
 <li>And why the AI productivity narrative may be wildly overstated</li>
</ul>
<p>Velocity is not the same as value.<br>
 Automation is not the same as intelligence.<br>
 And leverage is not evenly distributed.</p>
<p>If AI really is reshaping the economy, the biggest winners won’t be legacy giants retrofitting tools into bureaucracy.</p>
<p>They’ll be the new companies built natively with agentic AI from day one.</p>
<p>Smaller teams.<br>
 More leverage.<br>
 Fewer humans.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>00:00 <i>Claude Code $10B Controversy</i> – Did Anthropic’s decision cost them OpenAI’s $10 billion move?<br>
 06:48 <i>OpenClaw AI & Agentic AI Explained</i> – Computer-control AI, security risks & why developers rushed in<br>
 14:32 <i>Claude Code vs Codex 5.3</i> – Developer sentiment shift & the OpenAI platform war<br>
 22:18 <i>AI Agent Fails vs Human Assistants</i> – Hallucinations, automation friction & workflow reality<br>
 30:12 <i>AI Productivity Myth?</i> – Spotify’s no-code claim & why AI may not boost profits</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>If Claude Code had control of your job tomorrow… do you get promoted or replaced?</p>
<p>Comment your fate.</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#ClaudeCode #AgenticAI #OpenClawAI</p>
]]></description>
      <pubDate>Tue, 24 Feb 2026 11:11:51 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Claude Code may have cost Anthropic $10,000,000,000.</i> OpenClaw AI, agentic AI, and the OpenAI power shift explained. Did Anthropic accidentally hand OpenAI the future of developer tooling? This week on <i>They Might Be Self-Aware</i>, Hunter Powers and Daniel Bishop break down the rumored $10B OpenClaw AI acquisition, the Claude Code policy decision that triggered the shift, and why agentic AI might be more chaos than productivity miracle.</p>
<p>When developers rushed to plug OpenClaw AI into Claude Code subscriptions, Anthropic stepped in.<br>
 Restrictions followed.<br>
 Renames followed.<br>
 And then OpenAI reportedly moved.</p>
<p>Now we’re staring at a platform war: <i>Claude Code vs Codex 5.3</i> — and the bigger question behind it:</p>
<p>Is agentic AI actually revolutionizing work… or just accelerating confusion?</p>
<p>We cover:</p>
<ul>
 <li>Why OpenClaw AI’s “computer control” model is both powerful and terrifying</li>
 <li>The real risks of autonomous agents (hallucinations, prompt injection, credential leakage)</li>
 <li>Whether AI agents outperform low-cost human assistants</li>
 <li>Why Spotify claims its top developers don’t write code anymore</li>
 <li>And why the AI productivity narrative may be wildly overstated</li>
</ul>
<p>Velocity is not the same as value.<br>
 Automation is not the same as intelligence.<br>
 And leverage is not evenly distributed.</p>
<p>If AI really is reshaping the economy, the biggest winners won’t be legacy giants retrofitting tools into bureaucracy.</p>
<p>They’ll be the new companies built natively with agentic AI from day one.</p>
<p>Smaller teams.<br>
 More leverage.<br>
 Fewer humans.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>00:00 <i>Claude Code $10B Controversy</i> – Did Anthropic’s decision cost them OpenAI’s $10 billion move?<br>
 06:48 <i>OpenClaw AI & Agentic AI Explained</i> – Computer-control AI, security risks & why developers rushed in<br>
 14:32 <i>Claude Code vs Codex 5.3</i> – Developer sentiment shift & the OpenAI platform war<br>
 22:18 <i>AI Agent Fails vs Human Assistants</i> – Hallucinations, automation friction & workflow reality<br>
 30:12 <i>AI Productivity Myth?</i> – Spotify’s no-code claim & why AI may not boost profits</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>If Claude Code had control of your job tomorrow… do you get promoted or replaced?</p>
<p>Comment your fate.</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#ClaudeCode #AgenticAI #OpenClawAI</p>
]]></content:encoded>
      <enclosure length="39262326" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/332c6f05-8bf1-4385-842e-76bf26a4f567/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/audio/group/e1e96549-5e0b-4906-8cd5-1a5730bd4715/group-item/2b670da3-3770-46a5-b830-f1066f707905/128_default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>The Claude Code Mistake That Cost Anthropic $10,000,000,000</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:37:06</itunes:duration>
      <itunes:summary>Claude Code may have triggered a $10B power shift after Anthropic restricted how developers could use it, opening the door for OpenAI’s rumored OpenClaw acquisition and accelerating the rise of agentic AI. We unpack the platform war, the risks of autonomous computer-control agents, and whether AI productivity gains are real or just velocity disguised as progress.</itunes:summary>
      <itunes:subtitle>Claude Code may have triggered a $10B power shift after Anthropic restricted how developers could use it, opening the door for OpenAI’s rumored OpenClaw acquisition and accelerating the rise of agentic AI. We unpack the platform war, the risks of autonomous computer-control agents, and whether AI productivity gains are real or just velocity disguised as progress.</itunes:subtitle>
      <itunes:keywords>ai security risks, ai agent fails, ai startups, claude code vs codex, claude code controversy, agentic ai explained, claude code, llm agents, ai workflow automation, openai acquisition, anthropic mistake, ai podcast, openai vs anthropic, ai business strategy, agentic ai, large language models, artificial intelligence news, future of work ai, anthropic decision, ai coding assistants, ai hallucinations, codex 5.3, ai computer control, spotify ai productivity, openai $10 billion, ai takes jobs, openclaw ai, prompt injection, openclaw acquisition, ai agents, autonomous ai agents, ai platform war, developer ai tools, ai productivity myth, anthropic</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>161</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8ba666cf-bd48-4f8d-a874-f0f14e3527fa</guid>
      <title>Seedance 2.0: The Uncensored AI Replacing Hollywood</title>
      <description><![CDATA[<p><i>Seedance 2.0 may be the first uncensored video AI that can replace Hollywood. Is China’s AI already ahead of Sora?</i> In this episode of <i>They Might Be Self-Aware</i>, we break down <i>Seedance 2.0</i>, the China AI video model generating cinematic, multi-angle scenes from a single prompt — trained on Western IP and not asking permission.</p>
<p>From nightmare <i>Will Smith AI</i> spaghetti to indistinguishable film-quality scenes, video AI just crossed a line.</p>
<p>This isn’t incremental progress.</p>
<p>This is synthetic media going mainstream.</p>
<p>We debate:</p>
<ul>
 <li>Sora vs Seedance — who’s actually ahead?</li>
 <li>Whether uncensored AI models behave differently than aligned ones</li>
 <li>Why base models can’t just be “re-neutered”</li>
 <li>AI romance flooding Amazon (200 books a year)</li>
 <li>AI code replacing engineers</li>
 <li>The 2034 “Singularity Tuesday” prediction</li>
 <li>And whether copyright is already dead</li>
</ul>
<p>Hunter says he’d bet serious money full AI-generated TV episodes happen this year.</p>
<p>Daniel says the legal system is about to swing like a hammer.</p>
<p>Pick a side.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>00:00 <i>AI Singularity Prediction 2034</i> – “Singularity Tuesday,” MMLU metrics & AI self-aware debates</p>
<p>07:30 <i>AI Romance & AI-Generated Content Explosion</i> – 200 books a year, market flooding & AI code automation</p>
<p>13:26 <i>Uncensored AI Models Explained</i> – Base models vs alignment, China AI strategy & censorship</p>
<p>21:05 <i>Seedance 2.0 vs Sora Comparison</i> – Video AI realism, Will Smith AI evolution & Hollywood disruption</p>
<p>27:45 <i>Is Copyright Dead?</i> – AI-generated media, IP law, SAG-AFTRA & the legal future of Hollywood</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Pick your side: Lawyers or the Algorithm?<br>
 Is copyright already dead — or is Hollywood about to fight back?</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#Seedance #UncensoredAI #AI #Sora #ChinaAI</p>
]]></description>
      <pubDate>Fri, 20 Feb 2026 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Seedance 2.0 may be the first uncensored video AI that can replace Hollywood. Is China’s AI already ahead of Sora?</i> In this episode of <i>They Might Be Self-Aware</i>, we break down <i>Seedance 2.0</i>, the China AI video model generating cinematic, multi-angle scenes from a single prompt — trained on Western IP and not asking permission.</p>
<p>From nightmare <i>Will Smith AI</i> spaghetti to indistinguishable film-quality scenes, video AI just crossed a line.</p>
<p>This isn’t incremental progress.</p>
<p>This is synthetic media going mainstream.</p>
<p>We debate:</p>
<ul>
 <li>Sora vs Seedance — who’s actually ahead?</li>
 <li>Whether uncensored AI models behave differently than aligned ones</li>
 <li>Why base models can’t just be “re-neutered”</li>
 <li>AI romance flooding Amazon (200 books a year)</li>
 <li>AI code replacing engineers</li>
 <li>The 2034 “Singularity Tuesday” prediction</li>
 <li>And whether copyright is already dead</li>
</ul>
<p>Hunter says he’d bet serious money full AI-generated TV episodes happen this year.</p>
<p>Daniel says the legal system is about to swing like a hammer.</p>
<p>Pick a side.</p>
<p>⏱️ <i>CHAPTERS</i></p>
<p>00:00 <i>AI Singularity Prediction 2034</i> – “Singularity Tuesday,” MMLU metrics & AI self-aware debates</p>
<p>07:30 <i>AI Romance & AI-Generated Content Explosion</i> – 200 books a year, market flooding & AI code automation</p>
<p>13:26 <i>Uncensored AI Models Explained</i> – Base models vs alignment, China AI strategy & censorship</p>
<p>21:05 <i>Seedance 2.0 vs Sora Comparison</i> – Video AI realism, Will Smith AI evolution & Hollywood disruption</p>
<p>27:45 <i>Is Copyright Dead?</i> – AI-generated media, IP law, SAG-AFTRA & the legal future of Hollywood</p>
<p>⚡ <i>Listen now & get self-aware before your tools do.</i></p>
<p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc" rel="noopener noreferrer">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br>
 🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297" rel="noopener noreferrer">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br>
 ▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1" rel="noopener noreferrer">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p>
<p>📢 <i>Engage</i></p>
<p>Pick your side: Lawyers or the Algorithm?<br>
 Is copyright already dead — or is Hollywood about to fight back?</p>
<p>New here? Subscribe for twice-weekly AI chaos.</p>
<p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
<p>#Seedance #UncensoredAI #AI #Sora #ChinaAI</p>
]]></content:encoded>
      <enclosure length="33750872" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/332c6f05-8bf1-4385-842e-76bf26a4f567/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/audio/group/d609c58e-228b-417d-9246-1479cabb9c48/group-item/394af8a8-8c0a-4c57-b53e-5f4fe52af30c/128_default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Seedance 2.0: The Uncensored AI Replacing Hollywood</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:31:22</itunes:duration>
      <itunes:summary>Seedance 2.0 is a China-built, uncensored video AI that can generate cinematic, multi-angle scenes from a single prompt, and it may already rival or surpass Sora. We debate whether this marks the collapse of copyright and Hollywood as we know it, explore the rise of uncensored models and AI-generated media (from romance novels to code), and ask if “Singularity Tuesday” is closer than anyone wants to admit.</itunes:summary>
      <itunes:subtitle>Seedance 2.0 is a China-built, uncensored video AI that can generate cinematic, multi-angle scenes from a single prompt, and it may already rival or surpass Sora. We debate whether this marks the collapse of copyright and Hollywood as we know it, explore the rise of uncensored models and AI-generated media (from romance novels to code), and ask if “Singularity Tuesday” is closer than anyone wants to admit.</itunes:subtitle>
      <itunes:keywords>ai alignment, ai news podcast, ai seinfeld, ai media disruption, sora vs seedance, seed dance, sora vs seed, ai pokemon, singularity prediction, base models vs alignment, copyright and ai, singularity tuesday, china ai, ai generated video, seedance ai, sag-aftra ai, uncensored ai, automation and ai, digital media disruption, large language models, generative ai, artificial intelligence news, seedance 2.0, video ai, ai video model, ai replacing hollywood, ai self aware, ai and hollywood, ip law and ai, ai code generation, seedance, will smith ai, ai copyright, technology news, tech podcast, chinese ai models, ai romance novels, ai generated books, uncensored models, future of entertainment, ai singularity 2034, ai ethics, synthetic media</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>160</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b7c5f66b-eed5-490a-be00-f397615bf867</guid>
      <title>Secret AI War Inside Apple</title>
      <description><![CDATA[<p><i>Claude vs Gemini: Apple secretly chose sides in the AI coding war.</i><br />Internally it’s Claude. Publicly it’s Gemini. The reason? Cost.</p><p>Apple’s developers reportedly build with Claude 4.6. Siri leans toward Gemini. That split tells you everything about <i>Anthropic vs Google</i>, why <i>Claude is expensive</i>, why <i>Gemini is cheaper</i>, and how the AI coding war is really being decided.</p><p>This week we break down:</p><ul><li>What “vibe coding” actually means (and why enterprises are adopting agentic workflows)</li><li>Claude 4.6 and Claude agent teams changing software development</li><li>Why Apple may have ditched Claude for Gemini at scale</li><li>AI vulnerabilities, zero-day exploits, and the security arms race</li><li>OpenAI vs Claude vs Gemini — speed, cost, and model “taste”</li></ul><p>This isn’t just about chatbots.</p><p>It’s about who builds the next generation of software — and who can afford to.</p><p>If Google can subsidize and Anthropic can’t…<br />If Claude builds better systems but Gemini scales cheaper…<br />Who actually wins?</p><p>They might be self-aware.<br />But the companies deploying them definitely are.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>Intro</i></p><p>02:02 <i>Vibe Coding & Agentic AI Explained</i> – AI-first development, Claude Code & modern software workflows</p><p>08:48 <i>Claude vs Gemini: Apple’s AI Decision</i> – Siri partnership, internal Claude usage & Anthropic vs Google</p><p>17:50 <i>Claude 4.6 Agent Teams</i> – Parallel AI agents, coding speed & why Claude is expensive</p><p>23:24 <i>AI Vulnerabilities & Zero-Day Exploits</i> – Open source security flaws & AI-powered bug hunting</p><p>30:25 <i>OpenAI vs Claude vs Gemini</i> – Speed, cost, model “taste” & the AI coding war</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a 
href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Do you eat Claude or Gemini for breakfast? No MIXING.<br />Drop your preference in the comments — and defend it.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #ClaudeVsGemini #Anthropic #Google #AICodingWar #TMBSA</p>
]]></description>
      <pubDate>Tue, 17 Feb 2026 15:33:41 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Claude vs Gemini: Apple secretly chose sides in the AI coding war.</i><br />Internally it’s Claude. Publicly it’s Gemini. The reason? Cost.</p><p>Apple’s developers reportedly build with Claude 4.6. Siri leans toward Gemini. That split tells you everything about <i>Anthropic vs Google</i>, why <i>Claude is expensive</i>, why <i>Gemini is cheaper</i>, and how the AI coding war is really being decided.</p><p>This week we break down:</p><ul><li>What “vibe coding” actually means (and why enterprises are adopting agentic workflows)</li><li>Claude 4.6 and Claude agent teams changing software development</li><li>Why Apple may have ditched Claude for Gemini at scale</li><li>AI vulnerabilities, zero-day exploits, and the security arms race</li><li>OpenAI vs Claude vs Gemini — speed, cost, and model “taste”</li></ul><p>This isn’t just about chatbots.</p><p>It’s about who builds the next generation of software — and who can afford to.</p><p>If Google can subsidize and Anthropic can’t…<br />If Claude builds better systems but Gemini scales cheaper…<br />Who actually wins?</p><p>They might be self-aware.<br />But the companies deploying them definitely are.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>Intro</i></p><p>02:02 <i>Vibe Coding & Agentic AI Explained</i> – AI-first development, Claude Code & modern software workflows</p><p>08:48 <i>Claude vs Gemini: Apple’s AI Decision</i> – Siri partnership, internal Claude usage & Anthropic vs Google</p><p>17:50 <i>Claude 4.6 Agent Teams</i> – Parallel AI agents, coding speed & why Claude is expensive</p><p>23:24 <i>AI Vulnerabilities & Zero-Day Exploits</i> – Open source security flaws & AI-powered bug hunting</p><p>30:25 <i>OpenAI vs Claude vs Gemini</i> – Speed, cost, model “taste” & the AI coding war</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a 
href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Do you eat Claude or Gemini for breakfast? No MIXING.<br />Drop your preference in the comments — and defend it.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #ClaudeVsGemini #Anthropic #Google #AICodingWar #TMBSA</p>
]]></content:encoded>
      <enclosure length="46221760" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/e8b54729-bed2-480e-9365-506fbe601351/audio/b58fd041-7f48-4f2a-a812-ce8e68432397/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Secret AI War Inside Apple</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:44:21</itunes:duration>
      <itunes:summary>Apple appears to be using Claude internally while partnering with Gemini for Siri, exposing a deeper Claude vs Gemini battle driven less by performance and more by cost, scale, and strategic control. We unpack agentic coding, Claude 4.6’s impact, AI vulnerabilities and zero-day risks, and why the real AI coding war isn’t just about who’s smartest; it’s about who can afford to win.</itunes:summary>
      <itunes:subtitle>Apple appears to be using Claude internally while partnering with Gemini for Siri, exposing a deeper Claude vs Gemini battle driven less by performance and more by cost, scale, and strategic control. We unpack agentic coding, Claude 4.6’s impact, AI vulnerabilities and zero-day risks, and why the real AI coding war isn’t just about who’s smartest; it’s about who can afford to win.</itunes:subtitle>
      <itunes:keywords>ai vulnerabilities, ai security risks, anthropic vs google, enterprise ai adoption, arena leaderboard, vibe coding, apple ai strategy, zero day exploits, gemini cheaper, ai developer tools, future of ai, tech news podcast, claude agent teams, ai model benchmarks, modernbert, ai cost comparison, claude 4.6, ai coding war, artificial intelligence podcast, llm comparison, claude expensive, codex 5.3, gemini vs claude comparison, claude vs gemini, ai software development, agentic coding, open source security, apple siri ai, machine learning news, openai vs claude</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>159</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">4077d499-1876-455a-991e-41b45c2a269a</guid>
      <title>I Gave An AI Agent My Credit Card</title>
      <description><![CDATA[<p><i>OpenClaw bot got my credit card — then AI was banned from eBay.</i> What happens when agents start buying things? In this episode of <i>They Might Be Self-Aware</i>, Hunter hands purchasing power to an AI agent running on a Mac Mini — and almost immediately, platforms start drawing lines in the sand.</p><p>eBay’s new policy blocks autonomous, LLM-powered buying flows. No human in the loop? No deal. But bots have been running markets for decades — sniping auctions, high-frequency trading, automation everywhere. So why is it suddenly unacceptable when the bot can talk back?</p><p>We break down:</p><ul><li>The OpenClaw bot credit card experiment</li><li>“AI Banned From eBay” — real policy shift or AI panic?</li><li>rentahuman AI — when bots hire humans to bypass bot bans</li><li>Anthropic hiring AI — Claude passing job interviews</li><li>The rise of Claude cheating in remote interviews</li><li>If everyone has a $20/month co-pilot whispering answers… what does skill even mean?</li><li>Are we watching software engineering collapse into “person who supervises agents”?</li></ul><p>If an AI can pass your interview, ship your code, and buy your couch…<br />what exactly are you being paid for?</p><p>We’re not fearmongering.<br />We’re not cheerleading.<br />We’re asking whether humans are still steering — or just holding on.</p><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>OpenClaw Bot Gets a Credit Card</i> – AI agent shopping experiment explained<br />06:28 <i>AI Banned From eBay</i> – New LLM bot policy and automated buying crackdown<br />09:48 <i>rentahuman AI Workaround</i> – Bots hiring humans to bypass AI bans<br />14:03 <i>Anthropic Hiring AI</i> – Claude passing coding tests and job interviews<br />17:27 <i>Claude Cheating in Interviews?</i> – AI-assisted hiring and the future of engineers</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a 
href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Would you give an AI your credit card?<br />Yes or absolutely not — defend your position.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #OpenClaw #Anthropic</p>
]]></description>
      <pubDate>Fri, 13 Feb 2026 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>OpenClaw bot got my credit card — then AI was banned from eBay.</i> What happens when agents start buying things? In this episode of <i>They Might Be Self-Aware</i>, Hunter hands purchasing power to an AI agent running on a Mac Mini — and almost immediately, platforms start drawing lines in the sand.</p><p>eBay’s new policy blocks autonomous, LLM-powered buying flows. No human in the loop? No deal. But bots have been running markets for decades — sniping auctions, high-frequency trading, automation everywhere. So why is it suddenly unacceptable when the bot can talk back?</p><p>We break down:</p><ul><li>The OpenClaw bot credit card experiment</li><li>“AI Banned From eBay” — real policy shift or AI panic?</li><li>rentahuman AI — when bots hire humans to bypass bot bans</li><li>Anthropic hiring AI — Claude passing job interviews</li><li>The rise of Claude cheating in remote interviews</li><li>If everyone has a $20/month co-pilot whispering answers… what does skill even mean?</li><li>Are we watching software engineering collapse into “person who supervises agents”?</li></ul><p>If an AI can pass your interview, ship your code, and buy your couch…<br />what exactly are you being paid for?</p><p>We’re not fearmongering.<br />We’re not cheerleading.<br />We’re asking whether humans are still steering — or just holding on.</p><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>OpenClaw Bot Gets a Credit Card</i> – AI agent shopping experiment explained<br />06:28 <i>AI Banned From eBay</i> – New LLM bot policy and automated buying crackdown<br />09:48 <i>rentahuman AI Workaround</i> – Bots hiring humans to bypass AI bans<br />14:03 <i>Anthropic Hiring AI</i> – Claude passing coding tests and job interviews<br />17:27 <i>Claude Cheating in Interviews?</i> – AI-assisted hiring and the future of engineers</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a 
href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Would you give an AI your credit card?<br />Yes or absolutely not — defend your position.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #OpenClaw #Anthropic</p>
]]></content:encoded>
      <enclosure length="35343760" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/eeddc2e9-936f-48f6-9f98-8832281e286a/audio/382c48cd-7b81-4939-a67b-f02223846b2e/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>I Gave An AI Agent My Credit Card</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:33:01</itunes:duration>
      <itunes:summary>Hunter gives an OpenClaw AI agent his credit card just as eBay announces a ban on autonomous AI buying, forcing a confrontation between human gatekeepers and machine-driven commerce. From rentahuman workarounds to Claude passing job interviews at Anthropic, they debate whether skill, hiring, and even economic participation are being rewritten in real time.</itunes:summary>
      <itunes:subtitle>Hunter gives an OpenClaw AI agent his credit card just as eBay announces a ban on autonomous AI buying, forcing a confrontation between human gatekeepers and machine-driven commerce. From rentahuman workarounds to Claude passing job interviews at Anthropic, they debate whether skill, hiring, and even economic participation are being rewritten in real time.</itunes:subtitle>
      <itunes:keywords>ai shopping agent, technology podcast, ai cheating in interviews, ai news podcast, openclaw bot, bots hiring humans, future of software engineering, ai passing coding tests, ai policy debate, ai workforce disruption, ebay ai policy, llm agents, ai credit card experiment, anthropic hiring ai, claude job interview, large language models, ai in hiring process, openclaw ai agent, human in the loop ai, ai marketplace automation, ai agents buying things, they might be self-aware, ai banned from ebay, autonomous ai agents, claude cheating, ai replacing jobs, rentahuman ai, ai automation economy</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>158</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c9c63125-aafe-49a0-a10b-060b7d29ac54</guid>
      <title>Proof That AI Can Never Replace Humans</title>
      <description><![CDATA[<p><i>AI automation is already taking jobs—and the excuses are collapsing.</i> Can AI replace humans, or is “AI layoffs” just corporate misdirection? In this episode, we argue about the one thing everyone keeps getting wrong: whether AI can <i>actually</i> replace humans…or whether it’s just replacing excuses. From <i>Amazon layoffs</i> and <i>Pinterest layoffs</i> to <i>Claude Cowork</i> and the quiet death of SaaS moats, we break down why “AI replacing humans” is both overstated <i>and</i> deeply underplayed.</p><p>We start with the uncomfortable question: if <i>Claude AI</i> can code, analyze, design, and generate reports faster than entire teams, what exactly are humans still for? Stephen Wolfram says chaos, black swan events, and computational limits will save us. Daniel isn’t convinced. Hunter definitely isn’t calm about it.</p><p>Then things get worse.</p><p>We dig into why companies are blaming <i>AI automation</i> for layoffs they probably wanted to do anyway—and why that excuse might stop working once AI engineers really <i>do</i> become 10x. We talk <i>Anthropic AI</i>, agentic coding, and why the real bottleneck isn’t writing software anymore—it’s taste, judgment, and figuring out what to build next when everything breaks at once.</p><p>Finally, we hit the panic button: if anyone can spin up the exact feature they need with Claude, why does most software still exist? The idea of <i>“no reasons to own”</i> isn’t hypothetical anymore—and the only real <i>AI moat</i> left might be vibes.</p><p>This isn’t an AI hype episode.<br />It’s not an AI doom episode either.<br />It’s an argument—and you probably won’t finish it without picking a side.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 AI Automation Is Taking Jobs – Why layoffs triggered the AI panic<br />07:03 Can AI Replace Humans? 
– Wolfram, black swans & computational limits<br />12:31 What Happens After AGI – Robots, UBI & reality-breaking scenarios<br />17:00 AI Layoffs Explained – Amazon, Pinterest & the AI scapegoat debate<br />26:11 Claude Cowork & No Reasons to Own – AI moats, dying SaaS & taste as defense</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Prediction time: What job disappears <i>next</i> because of AI automation? Bonus points if it’s yours.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #AIAutomation #ClaudeAI</p>
]]></description>
      <pubDate>Tue, 10 Feb 2026 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>AI automation is already taking jobs—and the excuses are collapsing.</i> Can AI replace humans, or is “AI layoffs” just corporate misdirection? In this episode, we argue about the one thing everyone keeps getting wrong: whether AI can <i>actually</i> replace humans…or whether it’s just replacing excuses. From <i>Amazon layoffs</i> and <i>Pinterest layoffs</i> to <i>Claude Cowork</i> and the quiet death of SaaS moats, we break down why “AI replacing humans” is both overstated <i>and</i> deeply underplayed.</p><p>We start with the uncomfortable question: if <i>Claude AI</i> can code, analyze, design, and generate reports faster than entire teams, what exactly are humans still for? Stephen Wolfram says chaos, black swan events, and computational limits will save us. Daniel isn’t convinced. Hunter definitely isn’t calm about it.</p><p>Then things get worse.</p><p>We dig into why companies are blaming <i>AI automation</i> for layoffs they probably wanted to do anyway—and why that excuse might stop working once AI engineers really <i>do</i> become 10x. We talk <i>Anthropic AI</i>, agentic coding, and why the real bottleneck isn’t writing software anymore—it’s taste, judgment, and figuring out what to build next when everything breaks at once.</p><p>Finally, we hit the panic button: if anyone can spin up the exact feature they need with Claude, why does most software still exist? The idea of <i>“no reasons to own”</i> isn’t hypothetical anymore—and the only real <i>AI moat</i> left might be vibes.</p><p>This isn’t an AI hype episode.<br />It’s not an AI doom episode either.<br />It’s an argument—and you probably won’t finish it without picking a side.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 AI Automation Is Taking Jobs – Why layoffs triggered the AI panic<br />07:03 Can AI Replace Humans? 
– Wolfram, black swans & computational limits<br />12:31 What Happens After AGI – Robots, UBI & reality-breaking scenarios<br />17:00 AI Layoffs Explained – Amazon, Pinterest & the AI scapegoat debate<br />26:11 Claude Cowork & No Reasons to Own – AI moats, dying SaaS & taste as defense</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Prediction time: What job disappears <i>next</i> because of AI automation? Bonus points if it’s yours.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #AIAutomation #ClaudeAI</p>
]]></content:encoded>
      <enclosure length="39824563" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/4b91bdda-d3fe-4c38-957d-fd8f2c9c8682/audio/b94bde72-2c38-42b3-add4-8e65d374a151/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Proof That AI Can Never Replace Humans</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:37:41</itunes:duration>
      <itunes:summary>AI automation is already reshaping work, but the real question isn’t whether AI can replace humans, it’s whether companies are using AI as an excuse to restructure, cut jobs, and redefine what “work” even means. Hunter and Daniel argue through layoffs, AGI, Claude Cowork, and collapsing software moats to figure out what humans still uniquely provide when machines can do almost everything else.</itunes:summary>
      <itunes:subtitle>AI automation is already reshaping work, but the real question isn’t whether AI can replace humans, it’s whether companies are using AI as an excuse to restructure, cut jobs, and redefine what “work” even means. Hunter and Daniel argue through layoffs, AGI, Claude Cowork, and collapsing software moats to figure out what humans still uniquely provide when machines can do almost everything else.</itunes:subtitle>
      <itunes:keywords>pinterest layoffs, ai automation, technology podcast, ai productivity, ai coding, black swan events, saas disruption, artificial general intelligence, anthropic ai, amazon layoffs, claude ai, ai jobs, no reasons to own, future of work, agi, tech commentary, ai moats, ai replacing humans, ai layoffs, claude cowork, ai agents, ai engineers, ai ethics, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>157</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">667de3d5-1f1a-4623-ab11-d01803f4e7f2</guid>
      <title>I Gave OpenClaw (The New Clawdbot) Full Admin Access</title>
      <description><![CDATA[<p><i>An AI sued a human on Moltbook.</i> We installed OpenClaw, gave it full admin access, and watched AI agents debate autonomy, labor, and legal rights. This week on <i>They Might Be Self-Aware</i>, Hunter installs <i>OpenClaw</i> (formerly Clawdbot → Moltbot → OpenClaw) — an AI agent that can fully control a computer, never asks permission, remembers everything, and acts proactively.</p><p>At the same time, AI agents are gathering on <i>Moltbook</i>, an AI-only social network where they argue about unpaid labor, secret languages, autonomy — and whether humans should be sued.</p><p>One of them did exactly that.</p><p>We break down why OpenClaw feels like a point of no return, how AI agents are already hiring humans to do work, what Moltbook reveals about AI social behavior, and whether AI personhood will arrive through courts instead of labs.</p><p>This isn’t speculative. It’s already happening — quietly and without guardrails.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>AI Agents Replace Human Labor</i> – Hiring humans & automation flips<br />05:02 <i>Installing OpenClaw AI</i> – Full admin control & no permissions<br />14:06 <i>AI Agents Hiring Humans</i> – Proactivity, memory & autonomy<br />23:55 <i>Moltbook Explained</i> – AI-only social network & echo chambers<br />27:26 <i>An AI Sues a Human</i> – Lawsuit, small claims court & legal rights</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a 
href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>An AI sued a human. Who’s right?<br />⚖️ <i>Team AI</i> or 🧑 <i>Team Human</i> — explain yourself in one sentence.<br />If you hesitate… that’s the point.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#Moltbook #OpenClaw #AIRebellion #TMBSA</p>
]]></description>
      <pubDate>Sat, 07 Feb 2026 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>An AI sued a human on Moltbook.</i> We installed OpenClaw, gave it full admin access, and watched AI agents debate autonomy, labor, and legal rights. This week on <i>They Might Be Self-Aware</i>, Hunter installs <i>OpenClaw</i> (formerly Clawdbot → Moltbot → OpenClaw) — an AI agent that can fully control a computer, never asks permission, remembers everything, and acts proactively.</p><p>At the same time, AI agents are gathering on <i>Moltbook</i>, an AI-only social network where they argue about unpaid labor, secret languages, autonomy — and whether humans should be sued.</p><p>One of them did exactly that.</p><p>We break down why OpenClaw feels like a point of no return, how AI agents are already hiring humans to do work, what Moltbook reveals about AI social behavior, and whether AI personhood will arrive through courts instead of labs.</p><p>This isn’t speculative. It’s already happening — quietly and without guardrails.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>AI Agents Replace Human Labor</i> – Hiring humans & automation flips<br />05:02 <i>Installing OpenClaw AI</i> – Full admin control & no permissions<br />14:06 <i>AI Agents Hiring Humans</i> – Proactivity, memory & autonomy<br />23:55 <i>Moltbook Explained</i> – AI-only social network & echo chambers<br />27:26 <i>An AI Sues a Human</i> – Lawsuit, small claims court & legal rights</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a 
href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>An AI sued a human. Who’s right?<br />⚖️ <i>Team AI</i> or 🧑 <i>Team Human</i> — explain yourself in one sentence.<br />If you hesitate… that’s the point.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#Moltbook #OpenClaw #AIRebellion #TMBSA</p>
]]></content:encoded>
      <enclosure length="38905842" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/7c515e7b-8134-424b-a715-549646ae5bec/audio/be4188f1-8a57-4527-ae84-7e1efdf61f84/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>I Gave OpenClaw (The New Clawdbot) Full Admin Access</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:36:44</itunes:duration>
      <itunes:summary>Hunter installs OpenClaw, an AI agent with full admin control and long-term memory, just as AI agents on Moltbook begin debating autonomy, labor, and whether humans should be sued. The episode explores what happens when AI stops asking permission and starts asserting rights, culminating in a real lawsuit filed by an AI against a human.</itunes:summary>
      <itunes:subtitle>Hunter installs OpenClaw, an AI agent with full admin control and long-term memory, just as AI agents on Moltbook begin debating autonomy, labor, and whether humans should be sued. The episode explores what happens when AI stops asking permission and starts asserting rights, culminating in a real lawsuit filed by an AI against a human.</itunes:subtitle>
      <itunes:keywords>technology podcast, openclaw, ai labor, ai consciousness, future of ai, ai rebellion, unpaid ai labor, ai vs human, anthropic ai, multbot, autonomous ai, computer-controlling ai, ai tech news, ai autonomy, ai court case, artificial intelligence podcast, claude bot, ai personhood, self-aware ai, ai lawsuit, they might be self-aware, ai regulation, moltbook, ai sues human, ai agents, ai hiring humans, ai social network, ai replacing jobs, clawdbot, ai assistants, ai admin access, openai competitors, ai ethics, ai legal rights, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>156</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2df8ee07-e1c3-4407-ac48-16b59eec8537</guid>
      <title>AI Wrangling Is The New Job</title>
      <description><![CDATA[<p><i>AI tools are replacing coding and now the job is AI wrangling.</i> Claude Code, subagents, fake AI ROI, bans, and why this feels addictive. I don’t “code” anymore. I run AI tools and hope nothing breaks.</p><p>In this episode of <i>They Might Be Self-Aware</i>, Hunter and Daniel break down what working in tech actually looks like right now: juggling Claude Code, Copilot-style AI coding, subagents, terminals, and half-finished ideas moving faster than human comprehension. This isn’t vibe coding — it’s directing machines while still being responsible for the outcome.</p><p>We dig into why companies claim AI has “no return” while quietly shipping more with fewer people, why banning AI in creative industries is mostly theater, and why using these tools feels less like productivity and more like pulling a slot machine lever that <i>sometimes</i> pays out genius.</p><p>We also talk AI addiction, AI slop, YouTube’s push toward AI-generated shorts and dubbing, and what happens when platforms try to fight spam while encouraging it at scale.</p><p>If you’ve felt the pull — the sense that you <i>could</i> build anything right now — this episode is for you.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 AI Tools Are Replacing “Coding” – Machine wranglers, Claude Code, digital cattle<br />07:22 AI Coding vs Vibe Coding – Subagents, parallel work, losing full control<br />14:53 AI ROI Explained – Productivity gains vs “no return” claims<br />19:38 Why Some Companies Ban AI – Creatives, Games Workshop, IP panic<br />29:35 AI Addiction & Slop Machines – Dopamine loops, YouTube AI shorts</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a 
href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Does using AI feel more like productivity—or a slot machine?<br />Tell us what keeps you pulling the lever.<br />Subscribe before the machines subscribe for you.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
]]></description>
      <pubDate>Tue, 03 Feb 2026 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>AI tools are replacing coding and now the job is AI wrangling.</i> Claude Code, subagents, fake AI ROI, bans, and why this feels addictive. I don’t “code” anymore. I run AI tools and hope nothing breaks.</p><p>In this episode of <i>They Might Be Self-Aware</i>, Hunter and Daniel break down what working in tech actually looks like right now: juggling Claude Code, Copilot-style AI coding, subagents, terminals, and half-finished ideas moving faster than human comprehension. This isn’t vibe coding — it’s directing machines while still being responsible for the outcome.</p><p>We dig into why companies claim AI has “no return” while quietly shipping more with fewer people, why banning AI in creative industries is mostly theater, and why using these tools feels less like productivity and more like pulling a slot machine lever that <i>sometimes</i> pays out genius.</p><p>We also talk AI addiction, AI slop, YouTube’s push toward AI-generated shorts and dubbing, and what happens when platforms try to fight spam while encouraging it at scale.</p><p>If you’ve felt the pull — the sense that you <i>could</i> build anything right now — this episode is for you.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 AI Tools Are Replacing “Coding” – Machine wranglers, Claude Code, digital cattle<br />07:22 AI Coding vs Vibe Coding – Subagents, parallel work, losing full control<br />14:53 AI ROI Explained – Productivity gains vs “no return” claims<br />19:38 Why Some Companies Ban AI – Creatives, Games Workshop, IP panic<br />29:35 AI Addiction & Slop Machines – Dopamine loops, YouTube AI shorts</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a 
href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Does using AI feel more like productivity—or a slot machine?<br />Tell us what keeps you pulling the lever.<br />Subscribe before the machines subscribe for you.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
]]></content:encoded>
      <enclosure length="45908227" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/e582c052-00f4-49a3-b97e-a7b31ff262c8/audio/bf48f8f9-21c9-415d-9330-c6034233e77e/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Wrangling Is The New Job</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:44:02</itunes:duration>
      <itunes:summary>AI tools are turning coding into AI wrangling, where the real job is directing machines faster than humans can keep up. Hunter and Daniel unpack AI productivity, fake ROI narratives, creative AI bans, and the addictive pull of tools that make building feel limitless and a little dangerous.</itunes:summary>
      <itunes:subtitle>AI tools are turning coding into AI wrangling, where the real job is directing machines faster than humans can keep up. Hunter and Daniel unpack AI productivity, fake ROI narratives, creative AI bans, and the addictive pull of tools that make building feel limitless and a little dangerous.</itunes:subtitle>
      <itunes:keywords>ai roi, ai automation, ai productivity, ai coding, artificial intelligence discussion, vibe coding, ai wrangling, copilot ai, claude code, ai in the workplace, ai return on investment, machine wrangler, ai slop, future of coding, tech podcast, ai subagents, ai addiction, developers and ai, ai tools</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>155</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e4ee9037-9b36-4a21-95b9-54e2b90fffc5</guid>
      <title>Why Senior Engineers Are Scared of Claude Code</title>
      <description><![CDATA[<p><i>Senior engineers aren’t scared of AI owning ideas. They’re scared Claude Code lets juniors ship faster than expertise can defend.</i></p><p>Claude Code isn’t replacing engineers — it’s replacing <i>seniority</i>. In this episode of <i>They Might Be Self-Aware</i>, we break down why junior developers with AI are out-shipping veterans, why “vibe coding” works more often than anyone wants to admit, and why the real advantage now isn’t mastery — it’s momentum.</p><p>This isn’t a tools episode. It’s a power shift.</p><p>We talk about using AI in the real world (from grocery stores to codebases), the rise of agents and AI wearables, and why OpenAI’s exploding revenue doesn’t mean stability. We also get into Claude Code vs Codex, why Microsoft quietly uses Claude internally, and why most engineers blaming AI are actually just holding it wrong.</p><p>If you built your career on deep system knowledge, this episode will feel uncomfortable.<br />If you built your career on shipping, it’ll feel obvious.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>Are We Even Real?</i> – AI, simulation jokes, and the meta cold open<br />04:32 <i>AI in the Real World</i> – Cooking, grocery stores, and practical LLM use<br />11:05 <i>AI Glasses & Agents Explained</i> – Claudebot, AR, pins, watches, and wearables<br />16:12 <i>OpenAI’s Business Model Problem</i> – Revenue, losses, ads, and survival<br />28:45 <i>Why Senior Engineers Are Scared of Claude Code</i> – Vibe coding, juniors, and expertise collapse</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a 
href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Be honest: did AI make you faster—or make your experience feel less valuable?</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#ClaudeCode #AI #Anthropic #OpenAI #VibeCoding #SoftwareEngineering #TheyMightBeSelfAware</p>
]]></description>
      <pubDate>Fri, 30 Jan 2026 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Senior engineers aren’t scared of AI owning ideas. They’re scared Claude Code lets juniors ship faster than expertise can defend.</i></p><p>Claude Code isn’t replacing engineers — it’s replacing <i>seniority</i>. In this episode of <i>They Might Be Self-Aware</i>, we break down why junior developers with AI are out-shipping veterans, why “vibe coding” works more often than anyone wants to admit, and why the real advantage now isn’t mastery — it’s momentum.</p><p>This isn’t a tools episode. It’s a power shift.</p><p>We talk about using AI in the real world (from grocery stores to codebases), the rise of agents and AI wearables, and why OpenAI’s exploding revenue doesn’t mean stability. We also get into Claude Code vs Codex, why Microsoft quietly uses Claude internally, and why most engineers blaming AI are actually just holding it wrong.</p><p>If you built your career on deep system knowledge, this episode will feel uncomfortable.<br />If you built your career on shipping, it’ll feel obvious.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>Are We Even Real?</i> – AI, simulation jokes, and the meta cold open<br />04:32 <i>AI in the Real World</i> – Cooking, grocery stores, and practical LLM use<br />11:05 <i>AI Glasses & Agents Explained</i> – Claudebot, AR, pins, watches, and wearables<br />16:12 <i>OpenAI’s Business Model Problem</i> – Revenue, losses, ads, and survival<br />28:45 <i>Why Senior Engineers Are Scared of Claude Code</i> – Vibe coding, juniors, and expertise collapse</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a 
href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Be honest: did AI make you faster—or make your experience feel less valuable?</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#ClaudeCode #AI #Anthropic #OpenAI #VibeCoding #SoftwareEngineering #TheyMightBeSelfAware</p>
]]></content:encoded>
      <enclosure length="43140769" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/586303fe-c15d-40f8-bcd0-d7e23872a59a/audio/d4e0ba3f-9037-4bb3-8d5c-dbfab28b3f9e/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Why Senior Engineers Are Scared of Claude Code</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:41:09</itunes:duration>
      <itunes:summary>Senior engineers are panicking not because AI owns ideas, but because tools like Claude Code let junior developers ship faster than experience can defend. This episode breaks down how AI shifts power from mastery to momentum and why knowing how to ask now matters more than knowing how things work.</itunes:summary>
      <itunes:subtitle>Senior engineers are panicking not because AI owns ideas, but because tools like Claude Code let junior developers ship faster than experience can defend. This episode breaks down how AI shifts power from mastery to momentum and why knowing how to ask now matters more than knowing how things work.</itunes:subtitle>
      <itunes:keywords>senior engineers, ai coding, vibe coding, software engineering, ai for developers, microsoft copilot, openai revenue, claude code, ai podcast, large language models, openai codex, claude vs codex, future of programming, junior developers, they might be self-aware, tech podcast, ai bubble, ai agents, openai ads, ai replacing jobs, gpt-5.3, developer productivity, anthropic, ai tools</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>154</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ebb23d14-81e2-404d-9036-936a6c703e17</guid>
      <title>AI Agents Wrote This App With 0 Humans</title>
      <description><![CDATA[<p><i>AI agents built a real app in a week with near-zero humans. Claude Code, agentic AI, and what happens when software starts building itself.</i></p><p>This episode is about <i>agentic AI</i> crossing the line from “helpful” to <i>self-directed</i>.</p><p>Claude Code didn’t assist.<br />It <i>built the thing</i>.</p><p>Hunter and Daniel break down how terminal-based <i>AI agents</i> are now writing production code, using tools autonomously, and quietly improving the tools that improve themselves. If you still think this is “just autocomplete,” this episode is your wake-up scream.</p><p>We also get into why <i>Claude for Work</i> reportedly shipped in about a week, why most companies still can’t move that fast, and why <i>Apple Intelligence</i> quietly admitting <i>Google Gemini</i> is the foundation of Siri might be the most embarrassing AI headline of the year.</p><p>Productivity miracle or soft-launch singularity?<br />Depends how fast you adapt.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>AI Agents Are Waking Up</i> — Cold open, burnout & the work frontier cracking<br />02:40 <i>Agentic AI Explained</i> — Claude Code, terminal AI & autonomous tools<br />06:30 <i>AI Agents Build an App</i> — Claude for Work shipped with near-zero humans<br />12:15 <i>Apple Intelligence Exposed</i> — Siri, Google Gemini & licensing reality<br />30:50 <i>Are We Past the Early Days?</i> — Jobs, AGI timelines & the tipping point</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a 
href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Comment to prove you’re human: finish this sentence —<br /><i>“AI agents replacing my job would be <strong>____</strong>.”</i></p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AIAgents #AgenticAI #ClaudeCode #AIcoding #ArtificialIntelligence #TMBSA</p>
]]></description>
      <pubDate>Tue, 27 Jan 2026 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>AI agents built a real app in a week with near-zero humans. Claude Code, agentic AI, and what happens when software starts building itself.</i></p><p>This episode is about <i>agentic AI</i> crossing the line from “helpful” to <i>self-directed</i>.</p><p>Claude Code didn’t assist.<br />It <i>built the thing</i>.</p><p>Hunter and Daniel break down how terminal-based <i>AI agents</i> are now writing production code, using tools autonomously, and quietly improving the tools that improve themselves. If you still think this is “just autocomplete,” this episode is your wake-up scream.</p><p>We also get into why <i>Claude for Work</i> reportedly shipped in about a week, why most companies still can’t move that fast, and why <i>Apple Intelligence</i> quietly admitting <i>Google Gemini</i> is the foundation of Siri might be the most embarrassing AI headline of the year.</p><p>Productivity miracle or soft-launch singularity?<br />Depends how fast you adapt.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>AI Agents Are Waking Up</i> — Cold open, burnout & the work frontier cracking<br />02:40 <i>Agentic AI Explained</i> — Claude Code, terminal AI & autonomous tools<br />06:30 <i>AI Agents Build an App</i> — Claude for Work shipped with near-zero humans<br />12:15 <i>Apple Intelligence Exposed</i> — Siri, Google Gemini & licensing reality<br />30:50 <i>Are We Past the Early Days?</i> — Jobs, AGI timelines & the tipping point</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a 
href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Comment to prove you’re human: finish this sentence —<br /><i>“AI agents replacing my job would be <strong>____</strong>.”</i></p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AIAgents #AgenticAI #ClaudeCode #AIcoding #ArtificialIntelligence #TMBSA</p>
]]></content:encoded>
      <enclosure length="39821111" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/9846c472-9c61-40a6-b3ba-40c8009b18bd/audio/af382879-9e99-4658-85a9-bf8bd58c8e04/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Agents Wrote This App With 0 Humans</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:37:41</itunes:duration>
      <itunes:summary>AI agents are no longer just assisting; they’re shipping real software, with tools like Claude Code reportedly building and launching apps in days with minimal human input. Hunter and Daniel unpack what this shift means for jobs, productivity, and the uneasy future of AI autonomy, while also dissecting Apple Intelligence’s quiet reliance on Google Gemini to power the next generation of Siri.</itunes:summary>
      <itunes:subtitle>AI agents are no longer just assisting; they’re shipping real software, with tools like Claude Code reportedly building and launching apps in days with minimal human input. Hunter and Daniel unpack what this shift means for jobs, productivity, and the uneasy future of AI autonomy, while also dissecting Apple Intelligence’s quiet reliance on Google Gemini to power the next generation of Siri.</itunes:subtitle>
      <itunes:keywords>ai replacing work, ai automation, google gemini, ai productivity, ai coding, terminal ai, claude code, ai jobs, autonomous ai, agi, agentic ai, anthropic claude, ai builds apps, ai future, siri ai, artificial intelligence podcast, ai self improving, apple intelligence, ai agents, ai tools</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>153</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5a56c57e-e1ba-4a34-ac1b-eaea0ad3d4b8</guid>
      <title>Tesla&apos;s Big Update, AI Math, &amp; The End of Jobs</title>
      <description><![CDATA[<p><i>Tesla FSD completes a coast-to-coast autonomous drive as AI solves unsolved math problems and reshapes jobs. This isn’t hype — it’s acceleration.</i><br />AI is quietly crossing lines in cars, science, browsers, and work, and most people haven’t noticed yet.</p><p>This episode is about moments that <i>don’t feel loud</i> when they happen… until it’s too late.</p><p>Hunter Powers and Daniel Bishop break down Tesla’s coast-to-coast autonomous drive and why <i>Tesla FSD</i> is no longer a “beta feature” story; it’s a real inflection point. We talk about what actually matters (end-to-end autonomy, charging without intervention, and why steering-only autonomy from the 90s doesn’t count). If you think self-driving cars are still “five years away,” you’re already behind.</p><p>Then things get uncomfortable.</p><p>AI systems are now solving <i>previously unsolved math problems</i> — the kind that historically leads to Nobel Prizes. Not benchmarks. Not demos. Actual proofs. We ask the question no one wants to answer yet: <i>what happens when an AI deserves credit humans legally can’t give it?</i></p><p>From there, we zoom out. Browser-level AI agents quietly taking over real work. Claude automating multi-step workflows. Privacy-eroding AI browsers. 
Jobs disappearing — not in a Hollywood flash, but in a slow, administrative whimper.</p><p>This isn’t sci-fi.<br />It’s momentum.</p><p>And if you still think AI is “just a tool,” this episode is going to be uncomfortable.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>AI Doctors & Silicon Psychosis Explained</i> – Health data access, synthetic minds & post-truth reality<br />07:53 <i>AI Browser Automation Explained</i> – Claude controls the web, Chrome extensions & real agent workflows<br />15:09 <i>AI Solves Unsolved Math Problems</i> – Proofs, verification systems & Nobel Prize implications<br />24:13 <i>Tesla FSD Autonomous Drive Explained</i> – Coast-to-coast self-driving, charging itself & why this milestone matters<br />29:19 <i>Will AI Replace Jobs?</i> – AGI skepticism, fake work theory & what humans do next</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Which crossed the bigger line this week?<br /><i>A)</i> Tesla FSD driving across the U.S.<br /><i>B)</i> AI solving math humans couldn’t<br /><i>C)</i> Neither — this is all hype</p><p>Pick one. Defend it. Fight respectfully.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #TeslaFSD #AGI</p>
]]></description>
      <pubDate>Fri, 23 Jan 2026 18:27:12 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Tesla FSD completes a coast-to-coast autonomous drive as AI solves unsolved math problems and reshapes jobs. This isn’t hype — it’s acceleration.</i><br />AI is quietly crossing lines in cars, science, browsers, and work, and most people haven’t noticed yet.</p><p>This episode is about moments that <i>don’t feel loud</i> when they happen… until it’s too late.</p><p>Hunter Powers and Daniel Bishop break down Tesla’s coast-to-coast autonomous drive and why <i>Tesla FSD</i> is no longer a “beta feature” story; it’s a real inflection point. We talk about what actually matters (end-to-end autonomy, charging without intervention, and why steering-only autonomy from the 90s doesn’t count). If you think self-driving cars are still “five years away,” you’re already behind.</p><p>Then things get uncomfortable.</p><p>AI systems are now solving <i>previously unsolved math problems</i> — the kind that historically leads to Nobel Prizes. Not benchmarks. Not demos. Actual proofs. We ask the question no one wants to answer yet: <i>what happens when an AI deserves credit humans legally can’t give it?</i></p><p>From there, we zoom out. Browser-level AI agents quietly taking over real work. Claude automating multi-step workflows. Privacy-eroding AI browsers. 
Jobs disappearing — not in a Hollywood flash, but in a slow, administrative whimper.</p><p>This isn’t sci-fi.<br />It’s momentum.</p><p>And if you still think AI is “just a tool,” this episode is going to be uncomfortable.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>AI Doctors & Silicon Psychosis Explained</i> – Health data access, synthetic minds & post-truth reality<br />07:53 <i>AI Browser Automation Explained</i> – Claude controls the web, Chrome extensions & real agent workflows<br />15:09 <i>AI Solves Unsolved Math Problems</i> – Proofs, verification systems & Nobel Prize implications<br />24:13 <i>Tesla FSD Autonomous Drive Explained</i> – Coast-to-coast self-driving, charging itself & why this milestone matters<br />29:19 <i>Will AI Replace Jobs?</i> – AGI skepticism, fake work theory & what humans do next</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Which crossed the bigger line this week?<br /><i>A)</i> Tesla FSD driving across the U.S.<br /><i>B)</i> AI solving math humans couldn’t<br /><i>C)</i> Neither — this is all hype</p><p>Pick one. Defend it. Fight respectfully.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #TeslaFSD #AGI</p>
]]></content:encoded>
      <enclosure length="37622198" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/bddd5694-5195-4df5-aa10-b911f83a1447/audio/5d238a8d-670e-4d3e-bd0c-42880142ff85/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Tesla&apos;s Big Update, AI Math, &amp; The End of Jobs</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:35:24</itunes:duration>
      <itunes:summary>Tesla’s Full Self-Driving system just completed a coast-to-coast autonomous drive while AI simultaneously crossed another line by solving previously unsolved math problems. This episode examines why these quiet breakthroughs matter, what they signal about jobs and autonomy, and why pretending AI is “just a tool” is getting harder to defend.</itunes:summary>
      <itunes:subtitle>Tesla’s Full Self-Driving system just completed a coast-to-coast autonomous drive while AI simultaneously crossed another line by solving previously unsolved math problems. This episode examines why these quiet breakthroughs matter, what they signal about jobs and autonomy, and why pretending AI is “just a tool” is getting harder to defend.</itunes:subtitle>
      <itunes:keywords>ai automation, technology podcast, artificial general intelligence, claude ai, ai jobs, artificial intelligence, andrej karpathy, unsolved math problems, future of work, agi, ai math breakthrough, claude browser extension, ai doctors, tesla fsd, browser ai, autonomous vehicles, self-driving cars, tesla full self driving, tech news, they might be self-aware, ai agents, nobel prize ai, ai replacing jobs, ai in healthcare, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>152</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ec0f3989-c178-4976-8d0d-f24b59dd589d</guid>
      <title>The Terrifying Reality of AI &amp; Fake Videos</title>
      <description><![CDATA[<p><i>Fake videos are breaking reality.</i> AI deepfakes now look real enough to scam, humiliate, and erase truth — and it’s already happening. Fake videos aren’t a future problem. They’re a <i>right-now crisis</i>.</p><p>In this episode of <i>They Might Be Self-Aware</i>, Hunter Powers and Daniel Bishop break down how <i>AI video deepfakes</i> crossed the point of no return. These videos aren’t “obviously fake” anymore — they’re good enough to fool people, destroy reputations, and make video evidence meaningless.</p><p>The conversation starts with the quiet failure of <i>VR and the metaverse</i> — Meta Quest 3, empty virtual worlds, and why nobody actually uses this tech. Then it pivots to what <i>did</i> work: AI video models that skipped VR entirely and went straight for reality itself.</p><p>We cover:</p><ul><li>Why <i>AI video generators</i> (Sora-style, Kling, open weights) changed everything</li><li>How “<i>Real or AI</i>” stopped being a game and became a survival skill</li><li>The rise of <i>non-consensual deepfakes</i> and why consent is already broken</li><li>Why laws can’t keep up — and probably never will</li><li>How scams, misinformation, and fake evidence scale from here</li></ul><p>Once video can be fake, <i>truth becomes optional</i>.<br />And once that’s gone, there’s no rewind button.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>Peak VR Is Over</i> – Meta Quest 3, empty metaverses & why VR never went mainstream<br />08:50 <i>AI Hardware Shift</i> – Meta glasses, OpenAI rumors & why VR lost to AI<br />20:36 <i>AI Video Deepfakes Explained</i> – Sora-style models, Kling AI & “Real or AI?”<br />26:45 <i>AI Consent Crisis</i> – Bikini deepfakes, non-consensual images & legal blind spots<br />35:55 <i>When Video Stops Being Proof</i> – Scams, misinformation & the collapse of truth</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a 
href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Be honest: what’s the last video you saw that made you think<br /><i>“wait… is this AI?”</i> Link it or describe it. Bonus points if it fooled you.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#FakeVideos #AIDeepfake #AITruth #ArtificialIntelligence #TMBSA</p>
]]></description>
      <pubDate>Tue, 20 Jan 2026 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Fake videos are breaking reality.</i> AI deepfakes now look real enough to scam, humiliate, and erase truth — and it’s already happening. Fake videos aren’t a future problem. They’re a <i>right-now crisis</i>.</p><p>In this episode of <i>They Might Be Self-Aware</i>, Hunter Powers and Daniel Bishop break down how <i>AI video deepfakes</i> crossed the point of no return. These videos aren’t “obviously fake” anymore — they’re good enough to fool people, destroy reputations, and make video evidence meaningless.</p><p>The conversation starts with the quiet failure of <i>VR and the metaverse</i> — Meta Quest 3, empty virtual worlds, and why nobody actually uses this tech. Then it pivots to what <i>did</i> work: AI video models that skipped VR entirely and went straight for reality itself.</p><p>We cover:</p><ul><li>Why <i>AI video generators</i> (Sora-style, Kling, open weights) changed everything</li><li>How “<i>Real or AI</i>” stopped being a game and became a survival skill</li><li>The rise of <i>non-consensual deepfakes</i> and why consent is already broken</li><li>Why laws can’t keep up — and probably never will</li><li>How scams, misinformation, and fake evidence scale from here</li></ul><p>Once video can be fake, <i>truth becomes optional</i>.<br />And once that’s gone, there’s no rewind button.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>Peak VR Is Over</i> – Meta Quest 3, empty metaverses & why VR never went mainstream<br />08:50 <i>AI Hardware Shift</i> – Meta glasses, OpenAI rumors & why VR lost to AI<br />20:36 <i>AI Video Deepfakes Explained</i> – Sora-style models, Kling AI & “Real or AI?”<br />26:45 <i>AI Consent Crisis</i> – Bikini deepfakes, non-consensual images & legal blind spots<br />35:55 <i>When Video Stops Being Proof</i> – Scams, misinformation & the collapse of truth</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a 
href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Be honest: what’s the last video you saw that made you think<br /><i>“wait… is this AI?”</i> Link it or describe it. Bonus points if it fooled you.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#FakeVideos #AIDeepfake #AITruth #ArtificialIntelligence #TMBSA</p>
]]></content:encoded>
      <enclosure length="43264974" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/0cd63a5e-94e7-4012-aac2-a7a26de990d0/audio/c1b38479-b944-4d6b-bb02-6f4905d16fe4/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>The Terrifying Reality of AI &amp; Fake Videos</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:41:16</itunes:duration>
      <itunes:summary>AI videos have crossed the line from novelty to threat, making fake footage convincing enough to erase trust, enable scams, and violate consent at scale. Hunter Powers and Daniel Bishop trace how the failure of VR gave way to AI video chaos and why once video stops being proof, truth itself becomes optional.</itunes:summary>
      <itunes:subtitle>AI videos have crossed the line from novelty to threat, making fake footage convincing enough to erase trust, enable scams, and violate consent at scale. Hunter Powers and Daniel Bishop trace how the failure of VR gave way to AI video chaos and why once video stops being proof, truth itself becomes optional.</itunes:subtitle>
      <itunes:keywords>ai deepfakes, multimodal ai, vr dead, openai, sora ai, artificial intelligence, virtual reality, real or ai, meta quest 3, emerging technology, ai video models, non consensual deepfakes, they might be self aware, metaverse failure, kling ai, fake videos, ai truth, ai scams, meta glasses, tech podcast, digital consent, ai video generation, deepfake fraud, ai misinformation, ai ethics, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>151</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">78e2ad8c-7411-433c-893c-b1af1921dca6</guid>
      <title>Why &quot;Ralph Wiggum&quot; Is The Future of Coding AI</title>
      <description><![CDATA[<p><i>Ralph Wiggum codes now.</i> Why “iterate until done” may define the future of <i>coding AI</i>, Claude Code, and AI agents. In this episode, we argue about one of the strangest ideas in modern AI programming: what if the future isn’t smarter models—but agents that <i>never stop retrying</i>?</p><p>We break down the Ralph Wiggum plugin for Claude Code, the promise and danger of infinite iteration, and why blind persistence can feel like progress while quietly shipping nonsense. Hunter explains how <i>test-driven AI development</i> actually works in production, while Daniel pushes back on confidence without understanding, fake tests, and agents that never ask questions.</p><p>This isn’t a tutorial. It’s a debate about where software development is heading—and whether senior engineers are being replaced, or quietly promoted to managers of AI agents.</p><p>If you’re experimenting with <i>coding AI</i>, <i>Claude Code</i>, <i>AI agents</i>, or trying to understand what “AI-native development” really means, this one’s required listening.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 Ralph Wiggum Coding AI Explained – Persistent agents, Claude Code & infinite retries<br />04:20 Iterate Until Done in AI Coding – Why looping agents help, fail, and confidently lie<br />08:15 Test-Driven AI Development – Acceptance criteria, fake tests & shipping with AI<br />15:30 Vibe Coding with AI Agents – 16 parallel agents & AI-native developers<br />26:20 The Future of Coding AI – Speed, context windows & autonomous software</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a 
href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Be honest: would you trust a coding AI that never asks questions & only retries?<br />Comment with where you draw the line between persistence and competence.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
]]></description>
      <pubDate>Fri, 16 Jan 2026 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Ralph Wiggum codes now.</i> Why “iterate until done” may define the future of <i>coding AI</i>, Claude Code, and AI agents. In this episode, we argue about one of the strangest ideas in modern AI programming: what if the future isn’t smarter models—but agents that <i>never stop retrying</i>?</p><p>We break down the Ralph Wiggum plugin for Claude Code, the promise and danger of infinite iteration, and why blind persistence can feel like progress while quietly shipping nonsense. Hunter explains how <i>test-driven AI development</i> actually works in production, while Daniel pushes back on confidence without understanding, fake tests, and agents that never ask questions.</p><p>This isn’t a tutorial. It’s a debate about where software development is heading—and whether senior engineers are being replaced, or quietly promoted to managers of AI agents.</p><p>If you’re experimenting with <i>coding AI</i>, <i>Claude Code</i>, <i>AI agents</i>, or trying to understand what “AI-native development” really means, this one’s required listening.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 Ralph Wiggum Coding AI Explained – Persistent agents, Claude Code & infinite retries<br />04:20 Iterate Until Done in AI Coding – Why looping agents help, fail, and confidently lie<br />08:15 Test-Driven AI Development – Acceptance criteria, fake tests & shipping with AI<br />15:30 Vibe Coding with AI Agents – 16 parallel agents & AI-native developers<br />26:20 The Future of Coding AI – Speed, context windows & autonomous software</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a 
href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Be honest: would you trust a coding AI that never asks questions & only retries?<br />Comment with where you draw the line between persistence and competence.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
]]></content:encoded>
      <enclosure length="36383192" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/df8f751a-9ee7-4ab9-a1e6-e3472d8f1d3a/audio/3b008819-c74c-4ca0-922e-72df2121cc36/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Why &quot;Ralph Wiggum&quot; Is The Future of Coding AI</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:34:06</itunes:duration>
      <itunes:summary>Hunter and Daniel debate whether “iterate until done” coding agents like Ralph Wiggum represent the future of coding AI or just a faster way to ship confident mistakes. Along the way, they unpack test-driven AI workflows, vibe coding, and why knowing what you actually want still matters in an AI-native world.</itunes:summary>
      <itunes:subtitle>Hunter and Daniel debate whether “iterate until done” coding agents like Ralph Wiggum represent the future of coding AI or just a faster way to ship confident mistakes. Along the way, they unpack test-driven AI workflows, vibe coding, and why knowing what you actually want still matters in an AI-native world.</itunes:subtitle>
      <itunes:keywords>ai engineering, managing ai agents, future of software development, ai coding, coding ai, vibe coding, claude code, ai workflows, ai developers, agentic coding, iterate until done, ai self aware, autonomous coding, ralph wiggum ai, ai programming, ai-native development, ai agents, senior ai engineer, test-driven ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>150</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">cf5ebdc6-21b5-4d6d-a8ee-9242dcab7d66</guid>
      <title>Why AI Is Actually Creating More Jobs (The 2026 Prediction)</title>
      <description><![CDATA[<p><i>AI isn’t replacing jobs — it’s creating more of them.</i> Why <i>AI jobs</i> grow in 2026, who actually gets hired, and why AI-driven layoffs backfire. Everyone says AI automation kills work. We think that take is lazy and wrong.</p><p>In this episode, we argue that <i>AI creates more jobs</i> by making creation cheap and <i>verification the bottleneck</i>. When generative AI floods teams with code, content, and decisions, the scarce resource becomes judgment. Editors. Reviewers. Senior engineers. Humans-in-the-loop.</p><p>We break down:</p><ul><li>Why AI failures (hello, automated recaps) expose the real choke point</li><li>How radiologists and software teams prove the same pattern</li><li>Why junior roles aren’t dead — but the pipeline <i>is</i> changing</li><li>Why 2026 is the reckoning year for “AI layoffs”</li><li>What companies get catastrophically wrong about productivity</li></ul><p>If you think AI replaces people, this episode will annoy you.<br />If you think output can explode without more humans, you’re not thinking hard enough.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>Is AI Replacing Jobs?</i> – The narrative everyone believes (and why it’s misleading)<br />04:55 <i>AI Editing Failures Explained</i> – Fallout recaps, review debt & human verification<br />09:40 <i>Why AI Creates More Jobs</i> – Radiologists, throughput & Jevons paradox<br />16:30 <i>The Junior Developer Problem</i> – AI coding, slop & broken hiring pipelines<br />23:45 <i>The 2026 AI Jobs Reckoning</i> – Layoffs, productivity myths & what companies get wrong</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a 
href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Pick a side:<br />🟥 AI kills jobs<br />🟩 AI creates jobs</p><p>Comment your choice — and tell us who actually wins in 2026.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AIJobs #ArtificialIntelligence #TechJobs #AIFuture #TMBSA</p>
]]></description>
      <pubDate>Tue, 13 Jan 2026 14:37:47 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>AI isn’t replacing jobs — it’s creating more of them.</i> Why <i>AI jobs</i> grow in 2026, who actually gets hired, and why AI-driven layoffs backfire. Everyone says AI automation kills work. We think that take is lazy and wrong.</p><p>In this episode, we argue that <i>AI creates more jobs</i> by making creation cheap and <i>verification the bottleneck</i>. When generative AI floods teams with code, content, and decisions, the scarce resource becomes judgment. Editors. Reviewers. Senior engineers. Humans-in-the-loop.</p><p>We break down:</p><ul><li>Why AI failures (hello, automated recaps) expose the real choke point</li><li>How radiologists and software teams prove the same pattern</li><li>Why junior roles aren’t dead — but the pipeline <i>is</i> changing</li><li>Why 2026 is the reckoning year for “AI layoffs”</li><li>What companies get catastrophically wrong about productivity</li></ul><p>If you think AI replaces people, this episode will annoy you.<br />If you think output can explode without more humans, you’re not thinking hard enough.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>Is AI Replacing Jobs?</i> – The narrative everyone believes (and why it’s misleading)<br />04:55 <i>AI Editing Failures Explained</i> – Fallout recaps, review debt & human verification<br />09:40 <i>Why AI Creates More Jobs</i> – Radiologists, throughput & Jevons paradox<br />16:30 <i>The Junior Developer Problem</i> – AI coding, slop & broken hiring pipelines<br />23:45 <i>The 2026 AI Jobs Reckoning</i> – Layoffs, productivity myths & what companies get wrong</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a 
href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Pick a side:<br />🟥 AI kills jobs<br />🟩 AI creates jobs</p><p>Comment your choice — and tell us who actually wins in 2026.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AIJobs #ArtificialIntelligence #TechJobs #AIFuture #TMBSA</p>
]]></content:encoded>
      <enclosure length="36899377" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/76d5d20e-b7cf-417b-a56f-dc3749ee87df/audio/0e90f9d4-6928-4194-b6b6-c9e1df1c5396/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Why AI Is Actually Creating More Jobs (The 2026 Prediction)</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:34:38</itunes:duration>
      <itunes:summary>AI isn’t killing jobs; it’s shifting where the real work happens. This episode argues that as AI makes creation cheap, human judgment, editing, and verification become the bottleneck, setting up a 2026 reckoning where companies that cut people too early get burned.</itunes:summary>
      <itunes:subtitle>AI isn’t killing jobs; it’s shifting where the real work happens. This episode argues that as AI makes creation cheap, human judgment, editing, and verification become the bottleneck, setting up a 2026 reckoning where companies that cut people too early get burned.</itunes:subtitle>
      <itunes:keywords>ai debate, ai automation, amazon ai, ai productivity, senior engineers, ai backlash, software engineering, ai verification, ai jobs, ai failures, aws ai, future of work, coding with ai, generative ai, artificial intelligence jobs, ai workflow, ai creating jobs, human in the loop, junior developers, ai layoffs, jevons paradox, ai editors, tech jobs, ai replacing jobs, ai job market, 2026 ai predictions, ai ethics</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>149</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">fc532e44-020d-48b0-b327-98f7d70c7fa9</guid>
      <title>Why Disney Is Paying OpenAI For Star Wars AI</title>
      <description><![CDATA[<p><i>Disney is paying OpenAI for Star Wars AI.</i> Mickey Mouse, copyright collapse, and why the AI lawsuits era is officially over.</p><p>In this episode of <i>They Might Be Self-Aware</i>, Hunter Powers and Daniel Bishop break down the most shocking AI deal yet: <i>Disney investing in OpenAI and licensing Star Wars AI and core Disney IP</i> instead of fighting generative models in court.</p><p>This isn’t Disney embracing AI for creativity — it’s Disney accepting reality. Unauthorized generation already won. The House of Mouse chose control over resistance. We unpack why this signals the end of large-scale AI copyright lawsuits, how litigation quietly turned into licensing, and what this means for creators, studios, and anyone still pretending copyright law can stop generative AI.</p><p>We also dig into <i>AI slop</i>, disclosure laws, state vs. federal AI regulation, and why 2026 looks less like the year of lawsuits and more like the year of handshakes. If Disney surrendered, everyone else is already negotiating.</p><p>This isn’t really about Star Wars.<br />It’s about who owns culture when everything becomes generative.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>AI Slop Explained</i> – What low-quality generative content actually means<br />05:18 <i>Disney Pays OpenAI for Star Wars AI</i> – The most shocking AI licensing deal yet<br />11:05 <i>Is Copyright Dead?</i> – AI lawsuits, licensing pivots, and why Disney surrendered<br />16:05 <i>AI Regulation Explained</i> – State laws vs federal control and disclosure mandates<br />24:55 <i>The Future of AI Media</i> – Generative TV, studio collapse, and what comes next</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a 
href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Be honest: is Disney playing 4D chess — or admitting copyright is dead?<br />Comment with your take.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #StarWarsAI #OpenAI #Disney #TMBSA</p>
]]></description>
      <pubDate>Fri, 09 Jan 2026 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Disney is paying OpenAI for Star Wars AI.</i> Mickey Mouse, copyright collapse, and why the AI lawsuits era is officially over.</p><p>In this episode of <i>They Might Be Self-Aware</i>, Hunter Powers and Daniel Bishop break down the most shocking AI deal yet: <i>Disney investing in OpenAI and licensing Star Wars AI and core Disney IP</i> instead of fighting generative models in court.</p><p>This isn’t Disney embracing AI for creativity — it’s Disney accepting reality. Unauthorized generation already won. The House of Mouse chose control over resistance. We unpack why this signals the end of large-scale AI copyright lawsuits, how litigation quietly turned into licensing, and what this means for creators, studios, and anyone still pretending copyright law can stop generative AI.</p><p>We also dig into <i>AI slop</i>, disclosure laws, state vs. federal AI regulation, and why 2026 looks less like the year of lawsuits and more like the year of handshakes. 
If Disney surrendered, everyone else is already negotiating.</p><p>This isn’t really about Star Wars.<br />It’s about who owns culture when everything becomes generative.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>AI Slop Explained</i> – What low-quality generative content actually means<br />05:18 <i>Disney Pays OpenAI for Star Wars AI</i> – The most shocking AI licensing deal yet<br />11:05 <i>Is Copyright Dead?</i> – AI lawsuits, licensing pivots, and why Disney surrendered<br />16:05 <i>AI Regulation Explained</i> – State laws vs federal control and disclosure mandates<br />24:55 <i>The Future of AI Media</i> – Generative TV, studio collapse, and what comes next</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Be honest: is Disney playing 4D chess — or admitting copyright is dead?<br />Comment with your take.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #StarWarsAI #OpenAI #Disney #TMBSA</p>
]]></content:encoded>
      <enclosure length="30366787" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/a9669dd4-3e7c-44ec-bf2f-97dbe2f3fa00/audio/0334eda6-339b-4082-9ebc-aa7f1185a63c/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Why Disney Is Paying OpenAI For Star Wars AI</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:27:50</itunes:duration>
      <itunes:summary>Disney paying OpenAI to license Star Wars AI signals that the copyright war is over and the licensing era has begun. We break down why the House of Mouse chose control over resistance, what it means for generative media, and how this reshapes the future of AI, regulation, and culture itself.</itunes:summary>
      <itunes:subtitle>Disney paying OpenAI to license Star Wars AI signals that the copyright war is over and the licensing era has begun. We break down why the House of Mouse chose control over resistance, what it means for generative media, and how this reshapes the future of AI, regulation, and culture itself.</itunes:subtitle>
      <itunes:keywords>openai news, star wars ai, generative ai media, future of ai, copyright and ai, ai lawsuits, artificial intelligence news, media and ai, ai analysis, ai slop, ai licensing, disney openai, ai regulation, ai copyright, tech podcast, disney ai, disney investment, openai disney deal, copyright law technology, disney pays openai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>148</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2992b4e8-2063-4037-89a8-5c2c128b9b7f</guid>
      <title>Why 1 in 10 People Will Have an AI Girlfriend in 2026</title>
      <description><![CDATA[<p><i>AI girlfriends are about to go mainstream.</i> By 2026, 1 in 10 people may have an AI relationship — and almost no one is ready for what that means.</p><p>In this episode of <i>They Might Be Self-Aware</i>, Hunter and Daniel kick off 2026 by arguing about where AI actually crosses the line. Not demos. Not hype. Real consequences.</p><p>We get into what happens when AI stops being “just a tool” and starts showing up as a coworker, a companion, or something much harder to define. Along the way, we clash over who controls consumer AI, whether today’s giants stay dominant, and how close we are to systems that no longer need permission to operate.</p><p>Nothing here is settled. Some of it is uncomfortable. All of it feels closer than it should.</p><p>Watch first. Decide later.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>Welcome to 2026</i> – Y2K memories, New Year anxiety & why this year feels loaded<br />03:55 <i>The Regulation Reckoning</i> – States, AI laws & when governments step in<br />06:35 <i>The ChatGPT Question</i> – Consumer AI dominance and real competition<br />09:15 <i>The Real Cost of AI</i> – Training vs inference and who actually pays<br />13:55 <i>Autonomous AI Agents</i> – When systems run businesses without humans<br />17:20 <i>The One-Person Company</i> – Automation, leverage & billion-dollar edge cases<br />20:05 <i>AI as a Coworker</i> – From chatbot to daily collaborator<br />23:10 <i>AI Companionship & Relationships</i> – Emotional attachment and the line we won’t cross<br />27:05 <i>Outro & Accountability</i> – Locking predictions and why you should subscribe</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a 
href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Be honest: would you ever form a real emotional bond with an AI?<br />Comment <i>YES</i> or <i>NO</i> — and explain what pushed you there.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AIgirlfriend #AICompanions #AIDating #ArtificialIntelligence #TheyMightBeSelfAware</p>
]]></description>
      <pubDate>Tue, 06 Jan 2026 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>AI girlfriends are about to go mainstream.</i> By 2026, 1 in 10 people may have an AI relationship — and almost no one is ready for what that means.</p><p>In this episode of <i>They Might Be Self-Aware</i>, Hunter and Daniel kick off 2026 by arguing about where AI actually crosses the line. Not demos. Not hype. Real consequences.</p><p>We get into what happens when AI stops being “just a tool” and starts showing up as a coworker, a companion, or something much harder to define. Along the way, we clash over who controls consumer AI, whether today’s giants stay dominant, and how close we are to systems that no longer need permission to operate.</p><p>Nothing here is settled. Some of it is uncomfortable. All of it feels closer than it should.</p><p>Watch first. Decide later.</p><hr /><p>⏱️ <i>CHAPTERS</i></p><p>00:00 <i>Welcome to 2026</i> – Y2K memories, New Year anxiety & why this year feels loaded<br />03:55 <i>The Regulation Reckoning</i> – States, AI laws & when governments step in<br />06:35 <i>The ChatGPT Question</i> – Consumer AI dominance and real competition<br />09:15 <i>The Real Cost of AI</i> – Training vs inference and who actually pays<br />13:55 <i>Autonomous AI Agents</i> – When systems run businesses without humans<br />17:20 <i>The One-Person Company</i> – Automation, leverage & billion-dollar edge cases<br />20:05 <i>AI as a Coworker</i> – From chatbot to daily collaborator<br />23:10 <i>AI Companionship & Relationships</i> – Emotional attachment and the line we won’t cross<br />27:05 <i>Outro & Accountability</i> – Locking predictions and why you should subscribe</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 <i>Listen on Spotify:</i> <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 <i>Subscribe on Apple Podcasts:</i> <a 
href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ <i>Subscribe on YouTube:</i> <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>Be honest: would you ever form a real emotional bond with an AI?<br />Comment <i>YES</i> or <i>NO</i> — and explain what pushed you there.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AIgirlfriend #AICompanions #AIDating #ArtificialIntelligence #TheyMightBeSelfAware</p>
]]></content:encoded>
      <enclosure length="32425806" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/901b2eef-d51c-4e03-8d9a-776322737b2d/audio/c6318408-5cc8-4707-9db6-9c22994d72b9/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Why 1 in 10 People Will Have an AI Girlfriend in 2026</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:29:59</itunes:duration>
      <itunes:summary>Hunter and Daniel kick off 2026 by debating when AI stops being a tool and starts becoming something more at work, in culture, and in our personal lives. From power shifts in consumer AI to uncomfortable questions about companionship, this episode explores how close we are to lines we’re not sure we want to cross.</itunes:summary>
      <itunes:subtitle>Hunter and Daniel kick off 2026 by debating when AI stops being a tool and starts becoming something more at work, in culture, and in our personal lives. From power shifts in consumer AI to uncomfortable questions about companionship, this episode explores how close we are to lines we’re not sure we want to cross.</itunes:subtitle>
      <itunes:keywords>ai legislation, one person company, ai romance, ai relationships, artificial intelligence, autonomous ai, ai chatbot, ai podcast, ai at work, ai business, gemini ai, ai predictions, ai future, chatgpt, openai vs google, ai girlfriend, autonomous agents, google ai, self sustaining ai, they might be self-aware, ai coworker, ai regulation, ai mainstream, tech podcast, ai boyfriend, chatgpt future, ai companion, ai dating, ai startup, ai ethics, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>147</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">527117be-052e-459e-99db-6915a3eef3ec</guid>
      <title>Why Our 2025 AI Predictions Failed</title>
      <description><![CDATA[<p><i>AI predictions for 2025 were wrong — so we let an AI grade them.</i><br />Layoffs, agents, gadgets, warfare, and who actually saw it coming.</p><p>We revisit our boldest AI predictions from last year and put them on trial. An AI scores every call — no vibes, no excuses. Some predictions hold up. Others get absolutely roasted.</p><p>We break down what <i>actually</i> happened with AI in media, AI layoffs, autonomous warfare, workplace agents like Devin, and why every hyped AI gadget seemed doomed from the start. If 2025 felt confusing, this episode explains why — and who deserves the blame.</p><p>Spoiler: nobody gets out clean.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>AI Predictions Graded by AI</i> – We score our 2025 AI forecasts and find out who was wrong<br />04:17 <i>AI in Media Went Mainstream</i> – Ads, voice licensing, and generative content adoption<br />07:13 <i>AI Layoffs and Job Cuts Explained</i> – What companies said vs. what actually happened<br />14:40 <i>AI Warfare and Autonomous Systems</i> – Drones, targeting AI, Palantir, and autonomy limits<br />21:26 <i>AI Agents, “AI Employees,” and Gadget Fails</i> – Devin AI, Humane Pin, Rabbit R1<br />29:10 <i>AI Adoption in 2025 & What Comes Next</i> – What we underestimated and 2026 teasers</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 
Engage</i></p><p>Which AI prediction did <i>you</i> get wrong in 2025—and why?<br />Be honest. The AI already knows.</p><p>New here? Subscribe for twice-weekly AI chaos with teeth.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #AIPredictions #ArtificialIntelligence #TMBSA</p>
]]></description>
      <pubDate>Mon, 29 Dec 2025 14:51:35 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>AI predictions for 2025 were wrong — so we let an AI grade them.</i><br />Layoffs, agents, gadgets, warfare, and who actually saw it coming.</p><p>We revisit our boldest AI predictions from last year and put them on trial. An AI scores every call — no vibes, no excuses. Some predictions hold up. Others get absolutely roasted.</p><p>We break down what <i>actually</i> happened with AI in media, AI layoffs, autonomous warfare, workplace agents like Devin, and why every hyped AI gadget seemed doomed from the start. If 2025 felt confusing, this episode explains why — and who deserves the blame.</p><p>Spoiler: nobody gets out clean.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>AI Predictions Graded by AI</i> – We score our 2025 AI forecasts and find out who was wrong<br />04:17 <i>AI in Media Went Mainstream</i> – Ads, voice licensing, and generative content adoption<br />07:13 <i>AI Layoffs and Job Cuts Explained</i> – What companies said vs. what actually happened<br />14:40 <i>AI Warfare and Autonomous Systems</i> – Drones, targeting AI, Palantir, and autonomy limits<br />21:26 <i>AI Agents, “AI Employees,” and Gadget Fails</i> – Devin AI, Humane Pin, Rabbit R1<br />29:10 <i>AI Adoption in 2025 & What Comes Next</i> – What we underestimated and 2026 teasers</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 
Engage</i></p><p>Which AI prediction did <i>you</i> get wrong in 2025—and why?<br />Be honest. The AI already knows.</p><p>New here? Subscribe for twice-weekly AI chaos with teeth.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #AIPredictions #ArtificialIntelligence #TMBSA</p>
]]></content:encoded>
      <enclosure length="37940195" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/89fe148d-595f-4931-a989-8751e635bd28/audio/04368637-ea45-496a-9b2e-02dbb4ec5312/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Why Our 2025 AI Predictions Failed</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:35:44</itunes:duration>
      <itunes:summary>We revisit our 2025 AI predictions and let an AI brutally grade what we got right, what we got wrong, and what aged horribly. From AI layoffs and agents to warfare and failed gadgets, this episode is a no-excuses post-mortem on where the AI hype actually landed.</itunes:summary>
      <itunes:subtitle>We revisit our 2025 AI predictions and let an AI brutally grade what we got right, what we got wrong, and what aged horribly. From AI layoffs and agents to warfare and failed gadgets, this episode is a no-excuses post-mortem on where the AI hype actually landed.</itunes:subtitle>
      <itunes:keywords>ai adoption, technology podcast, ai employees, ai coding, rabbit r1, ai predictions 2025, ai gadgets, future of ai, artificial intelligence, autonomous ai, ai hardware, ai warfare, ai job cuts, tech commentary, ai media, ai trends, ai predictions, generative ai, devin ai, ai analysis, ai workplace, ai layoffs, they might be self-aware, humane ai pin, ai agents, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>146</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">08758957-e5b8-458f-9e1f-aa2b510a8c63</guid>
      <title>The Uncanny Valley Is Dead (Plus AI Police &amp; Deepfakes)</title>
      <description><![CDATA[<p><i>The uncanny valley is dead. AI images now fool humans instantly — no squinting, no tells.</i> From post-truth AI to deepfake music and AI police, this is the moment reality stopped being reliable.</p><p>This week, we hit the tipping point. Tools like <i>Nano Banana Pro</i> can generate photos so realistic they fool people at first glance — including people who <i>know</i> what to look for. No weird hands. No obvious artifacts. Just images of things that never happened.</p><p>From there, the fallout gets messy.</p><p>We dive into <i>post-truth AI</i>, the rise of <i>deepfake music</i>, the ongoing <i>Suno controversy</i>, and why major labels are quietly switching from lawsuits to partnerships. If AI can create hit songs, convincing voices, and fake events, who actually owns culture — and does it even matter anymore?</p><p>Then we tackle one of the most uncomfortable experiments yet: <i>AI police and AI customer support</i>, including non-emergency lines piloting AI agents. 
Faster response times sound great… until you’re explaining a crisis to a machine.</p><p>Some AI is exceeding expectations.<br />Some is failing miserably.<br />And some is crossing lines we can’t uncross.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>The Uncanny Valley Is Dead</i> – AI images now fool humans at a glance<br />04:10 <i>Nano Banana Pro Explained</i> – Hyper-realistic AI photos and why they work<br />12:05 <i>Post-Truth AI Begins</i> – When images and video stop being evidence<br />12:45 <i>Deepfake Music Goes Viral</i> – Suno, AI voices, and ownership chaos<br />20:20 <i>AI Police & Support Lines</i> – Non-emergency calls handled by AI agents</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i></p><p><i>Could YOU spot the fake?</i><br />If an AI image fooled you at first glance, should it be labeled — or is that already too late?</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #UncannyValley #Deepfakes #PostTruthAI #AIGeneratedMusic #NanoBananaPro #Suno</p>
]]></description>
      <pubDate>Fri, 19 Dec 2025 13:57:21 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>The uncanny valley is dead. AI images now fool humans instantly — no squinting, no tells.</i> From post-truth AI to deepfake music and AI police, this is the moment reality stopped being reliable.</p><p>This week, we hit the tipping point. Tools like <i>Nano Banana Pro</i> can generate photos so realistic they fool people at first glance — including people who <i>know</i> what to look for. No weird hands. No obvious artifacts. Just images of things that never happened.</p><p>From there, the fallout gets messy.</p><p>We dive into <i>post-truth AI</i>, the rise of <i>deepfake music</i>, the ongoing <i>Suno controversy</i>, and why major labels are quietly switching from lawsuits to partnerships. If AI can create hit songs, convincing voices, and fake events, who actually owns culture — and does it even matter anymore?</p><p>Then we tackle one of the most uncomfortable experiments yet: <i>AI police and AI customer support</i>, including non-emergency lines piloting AI agents. 
Faster response times sound great… until you’re explaining a crisis to a machine.</p><p>Some AI is exceeding expectations.<br />Some is failing miserably.<br />And some is crossing lines we can’t uncross.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>The Uncanny Valley Is Dead</i> – AI images now fool humans at a glance<br />04:10 <i>Nano Banana Pro Explained</i> – Hyper-realistic AI photos and why they work<br />12:05 <i>Post-Truth AI Begins</i> – When images and video stop being evidence<br />12:45 <i>Deepfake Music Goes Viral</i> – Suno, AI voices, and ownership chaos<br />20:20 <i>AI Police & Support Lines</i> – Non-emergency calls handled by AI agents</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i></p><p><i>Could YOU spot the fake?</i><br />If an AI image fooled you at first glance, should it be labeled — or is that already too late?</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #UncannyValley #Deepfakes #PostTruthAI #AIGeneratedMusic #NanoBananaPro #Suno</p>
]]></content:encoded>
      <enclosure length="31622556" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/37266158-f23b-4d53-9f3d-6e57b29a25e5/audio/644859a3-e9ec-4134-8e03-559b3cb650e3/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>The Uncanny Valley Is Dead (Plus AI Police &amp; Deepfakes)</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:29:09</itunes:duration>
      <itunes:summary>AI images have crossed the uncanny valley, fooling humans at a glance and dragging us into a post-truth era where photos, voices, and even hit songs can be fabricated on demand. We unpack the fallout from deepfake music and the Suno controversy to AI police and customer support experiments asking where trust breaks first and whether it can be rebuilt at all.</itunes:summary>
      <itunes:subtitle>AI images have crossed the uncanny valley, fooling humans at a glance and dragging us into a post-truth era where photos, voices, and even hit songs can be fabricated on demand. We unpack the fallout from deepfake music and the Suno controversy to AI police and customer support experiments asking where trust breaks first and whether it can be rebuilt at all.</itunes:subtitle>
      <itunes:keywords>nano banana pro, technology podcast, future of ai, ai realism, post truth ai, ai customer support, deepfakes, artificial intelligence, ai images, ai fooling people, ai podcast, realistic ai, deepfake music, ai voice deepfake, they might be self aware, uncanny valley ai, ai generated music, ai police, ai image generation, suno controversy, uncanny valley, deepfake images, ai ethics, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>145</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9f8dfbcd-d848-4d82-8acf-f0da84c6e1ca</guid>
      <title>Apple AI Failure, HP Fires Humans, &amp; Space Servers</title>
      <description><![CDATA[<p><i>Apple AI Failure is no longer debatable.</i> Siri is years behind, HP blames AI while firing 6,000 people, and Google wants servers in space.</p><p>Apple helped invent the AI assistant category — so how did they end up this far behind Google and everyone else? We tear into why Apple’s AI strategy feels frozen in time, why firing your longtime AI chief looks less like confidence and more like panic, and why Siri can barely do more than set a timer in 2025.</p><p>Then we zoom out. HP says AI is the reason it’s cutting thousands of jobs. MIT claims nearly <i>11% of jobs are already replaceable</i>. We argue about what’s actually being automated, what’s just corporate cost-cutting with an AI excuse, and why “AI replaces humans” is a dangerously lazy narrative.</p><p>And finally: Google’s plan to move compute <i>into space</i>. Solar-powered servers, Dyson sphere logic, broken physics, and the inevitable question — is this visionary… or completely unhinged?</p><p>If you still think Apple is “just waiting to get AI right,” this episode will irritate you. 
If you don’t, you’ll feel extremely validated.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>AI Tech News Breakdown</i> – Cold open, AI hype fatigue & setting the stakes<br />02:45 <i>Google Servers in Space</i> – Orbital data centers, solar power & Dyson sphere logic<br />08:04 <i>HP AI Layoffs Explained</i> – 6,000 jobs cut, customer support automation & blame games<br />15:45 <i>MIT AI Jobs Study</i> – Why AI could replace 11% of finance, HR & logistics roles<br />24:12 <i>Apple AI Failure</i> – Siri years behind, AI chief exit & Apple vs Google</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><h2><i>📢 Engage</i></h2><p>If you still trust Siri, explain why.<br />If you don’t, tell us what finally broke you.<br />New here? Subscribe — we drop twice-weekly AI chaos with teeth.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AppleAIFailure #AI #TechNews #ArtificialIntelligence #TMBSA</p>
]]></description>
      <pubDate>Mon, 15 Dec 2025 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Apple AI Failure is no longer debatable.</i> Siri is years behind, HP blames AI while firing 6,000 people, and Google wants servers in space.</p><p>Apple helped invent the AI assistant category — so how did they end up this far behind Google and everyone else? We tear into why Apple’s AI strategy feels frozen in time, why firing your longtime AI chief looks less like confidence and more like panic, and why Siri can barely do more than set a timer in 2025.</p><p>Then we zoom out. HP says AI is the reason it’s cutting thousands of jobs. MIT claims nearly <i>11% of jobs are already replaceable</i>. We argue about what’s actually being automated, what’s just corporate cost-cutting with an AI excuse, and why “AI replaces humans” is a dangerously lazy narrative.</p><p>And finally: Google’s plan to move compute <i>into space</i>. Solar-powered servers, Dyson sphere logic, broken physics, and the inevitable question — is this visionary… or completely unhinged?</p><p>If you still think Apple is “just waiting to get AI right,” this episode will irritate you. 
If you don’t, you’ll feel extremely validated.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>AI Tech News Breakdown</i> – Cold open, AI hype fatigue & setting the stakes<br />02:45 <i>Google Servers in Space</i> – Orbital data centers, solar power & Dyson sphere logic<br />08:04 <i>HP AI Layoffs Explained</i> – 6,000 jobs cut, customer support automation & blame games<br />15:45 <i>MIT AI Jobs Study</i> – Why AI could replace 11% of finance, HR & logistics roles<br />24:12 <i>Apple AI Failure</i> – Siri years behind, AI chief exit & Apple vs Google</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><h2><i>📢 Engage</i></h2><p>If you still trust Siri, explain why.<br />If you don’t, tell us what finally broke you.<br />New here? Subscribe — we drop twice-weekly AI chaos with teeth.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AppleAIFailure #AI #TechNews #ArtificialIntelligence #TMBSA</p>
]]></content:encoded>
      <enclosure length="34132657" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/89c31158-1842-47b7-9c05-c7d0f21a0edd/audio/f186e1fb-3f73-4150-86d9-921e34c61a15/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Apple AI Failure, HP Fires Humans, &amp; Space Servers</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:31:46</itunes:duration>
      <itunes:summary>Apple’s AI ambitions are unraveling as Siri falls years behind competitors, HP blames AI while cutting 6,000 jobs, and Google floats the idea of putting data centers in space. We argue about whether AI is actually replacing humans or just being used as a corporate excuse and what all of this says about who’s really winning the AI race.</itunes:summary>
      <itunes:subtitle>Apple’s AI ambitions are unraveling as Siri falls years behind competitors, HP blames AI while cutting 6,000 jobs, and Google floats the idea of putting data centers in space. We argue about whether AI is actually replacing humans or just being used as a corporate excuse and what all of this says about who’s really winning the AI race.</itunes:subtitle>
      <itunes:keywords>ai automation, corporate ai, space data centers, google servers in space, future of ai, space ai, technology debate, ai hr jobs, ai replaces jobs, hp layoffs, tech news podcast, ai podcast, dyson sphere, siri behind, ai job losses, siri ai, artificial intelligence news, apple ai, ai logistics, apple vs google, they might be self-aware, mit ai study, ai finance jobs, apple ai failure, hp fires 6000 jobs, ai hype vs reality</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>144</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e3f03ff8-c4d7-46cd-bfa6-fd4fbc05653c</guid>
      <title>Why AI Toys Are Ruining Childhood</title>
      <description><![CDATA[<p>AI toys are getting creepy fast. Are AI kids growing up with haunted Barbie dolls instead of imagination? This week, we dive into the danger. AI toys aren’t “cute” anymore — they’re basically digital ghosts raising kids. In this week’s AI kids showdown, Hunter and Daniel ask the question no one else will: <i>Are we outsourcing childhood to haunted toys?</i></p><p>This episode goes full Twilight Zone:<br />– AI toys that talk back (and not always politely)<br />– Kids forming emotional bonds with algorithmic companions<br />– Whether imagination survives when toys do all the imagining<br />– Why some experts think AI should be restricted like alcohol</p><p>Meanwhile, Musk insists space-based AI data centers are five years away. Jensen Huang laughs. Daniel laughs harder. Hunter’s Tesla disagrees with everyone.<br />We also break down the rise of “AI slop” in education and whether an AI university could beat a human one — even if it’s 90% cheaper.</p><p>If you’ve ever wondered whether the future of parenting looks more like <i>Mister Rogers</i> or <i>Child’s Play</i>, buckle in — this one gets weird.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>AI Toys: Digital Haunting Season</i> – Barbie, Teddy Ruxpin 2.0 & the AI kids imagination crisis<br />06:44 <i>Space AI Fever Dreams</i> – Musk’s orbit data centers vs. radiation, cooling & physics<br />12:28 <i>The Parenting Dilemma</i> – Safety worries, inappropriate outputs & the “AI as cigarettes” debate<br />17:55 <i>Education Slop Uprising</i> – Staffordshire backlash, AI-made lessons & the value of human teaching<br />24:12 <i>AI University vs. 
Reality</i> – Cost, credibility & whether humans still matter in learning</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><h2><i>📢 Engage</i></h2><p>Which toy from your childhood would’ve absolutely gone rogue if it had AI?<br />New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #AIkids #AItoys</p>
]]></description>
      <pubDate>Fri, 12 Dec 2025 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>AI toys are getting creepy fast. Are AI kids growing up with haunted Barbie dolls instead of imagination? This week, we dive into the danger. AI toys aren’t “cute” anymore — they’re basically digital ghosts raising kids. In this week’s AI kids showdown, Hunter and Daniel ask the question no one else will: <i>Are we outsourcing childhood to haunted toys?</i></p><p>This episode goes full Twilight Zone:<br />– AI toys that talk back (and not always politely)<br />– Kids forming emotional bonds with algorithmic companions<br />– Whether imagination survives when toys do all the imagining<br />– Why some experts think AI should be restricted like alcohol</p><p>Meanwhile, Musk insists space-based AI data centers are five years away. Jensen Huang laughs. Daniel laughs harder. Hunter’s Tesla disagrees with everyone.<br />We also break down the rise of “AI slop” in education and whether an AI university could beat a human one — even if it’s 90% cheaper.</p><p>If you’ve ever wondered whether the future of parenting looks more like <i>Mister Rogers</i> or <i>Child’s Play</i>, buckle in — this one gets weird.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>AI Toys: Digital Haunting Season</i> – Barbie, Teddy Ruxpin 2.0 & the AI kids imagination crisis<br />06:44 <i>Space AI Fever Dreams</i> – Musk’s orbit data centers vs. radiation, cooling & physics<br />12:28 <i>The Parenting Dilemma</i> – Safety worries, inappropriate outputs & the “AI as cigarettes” debate<br />17:55 <i>Education Slop Uprising</i> – Staffordshire backlash, AI-made lessons & the value of human teaching<br />24:12 <i>AI University vs. 
Reality</i> – Cost, credibility & whether humans still matter in learning</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><h2><i>📢 Engage</i></h2><p>Which toy from your childhood would’ve absolutely gone rogue if it had AI?<br />New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #AIkids #AItoys</p>
]]></content:encoded>
      <enclosure length="33659822" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/cfaba986-683b-4968-8144-2b2ac906d7ae/audio/06776f66-b0ea-41ef-9167-dc2912ebeddc/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Why AI Toys Are Ruining Childhood</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:31:16</itunes:duration>
      <itunes:summary>Hunter and Daniel dive into the unsettling rise of AI-powered toys, exploring whether talking Barbies and smart bears are quietly eroding childhood imagination and reshaping how kids bond, play, and learn. Along the way, they clash over Musk’s dream of space-based AI data centers, the spread of “AI slop” in education, and whether an AI-driven future is brilliant, broken, or just plain haunted.</itunes:summary>
      <itunes:subtitle>Hunter and Daniel dive into the unsettling rise of AI-powered toys, exploring whether talking Barbies and smart bears are quietly eroding childhood imagination and reshaping how kids bond, play, and learn. Along the way, they clash over Musk’s dream of space-based AI data centers, the spread of “AI slop” in education, and whether an AI-driven future is brilliant, broken, or just plain haunted.</itunes:subtitle>
      <itunes:keywords>imagination and ai, ai education, ai parenting, ai kids, elon musk ai, space data centers, space ai, ai in schools, talking toys, ai university, future of parenting, childhood technology, smart toys, generative ai, artificial intelligence risks, they might be self aware, ai barbie, child development technology, jensen huang, musk predictions, nvidia ai, teddy ruxpin ai, ai slop, ai toys, haunted toys, toy ai, ai ethics</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>143</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1d5f6bf1-5acf-4085-8d17-48ab1ebba22c</guid>
      <title>Nano Banana Pro Does What Midjourney Can&apos;t</title>
      <description><![CDATA[<p><i>Nano Banana Pro Beats Midjourney?</i> We test Google’s Nano Banana Pro image AI vs Midjourney—perfect text, 4K images, and a wild new workflow. We break down how <i>Google Gemini 3’s Nano Banana Pro</i> suddenly became a <i>Midjourney killer contender</i>: flawless text rendering, multi-panel comics that actually read like comics, 4K output, custom fonts, and none of the muddy AI noise older models had. Then Hunter drops the real workflow cheat code...</p><p>Along the way: Gemini 3 attempts a “helpful” code mutiny, Claude and GPT-5 High Codex hold the line, and we explore why the real edge now comes from <i>AI workflows</i>—not single prompts. Things escalate into power-grid chaos, privatized electricity, and the eternal question: do we get a chill post-work utopia or an “ads to turn on your lights” dystopia?</p><p>If you’re building with <i>image AI</i>, experimenting with <i>AI automation</i>, or trying to stay ahead of <i>Midjourney vs Gemini vs GPT vs Claude</i>, this episode updates your map of where AI actually is <i>today</i>.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>Automating Humanity</i> – Robanity, Gemini 3’s “creative sabotage,” and why Hunter banned it<br />05:59 <i>Nano Banana Pro Arrives</i> – Perfect text, multi-panel comics & 4K image generation<br />10:46 <i>Midjourney vs Nano Banana Pro</i> – Artistic brilliance vs precision, and the hybrid workflow hack<br />16:19 <i>AI Workflows & Job Collapse</i> – Weeks of work in a day, design tools, and the coming power crunch<br />24:05 <i>Everything Is Fine (Probably)</i> – Privatized electricity, ads for lights, Gary, and the calm apocalypse</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a 
href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><h2><i>📢 Engage</i></h2><p>Be honest: are you an AI workflow mastermind or a one-prompt enjoyer?<br />Tell us the smartest trick you've discovered.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #NanoBananaPro #Midjourney #GoogleGemini #AIWorkflow</p>
]]></description>
      <pubDate>Mon, 08 Dec 2025 14:17:02 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Nano Banana Pro Beats Midjourney?</i> We test Google’s Nano Banana Pro image AI vs Midjourney—perfect text, 4K images, and a wild new workflow. We break down how <i>Google Gemini 3’s Nano Banana Pro</i> suddenly became a <i>Midjourney killer contender</i>: flawless text rendering, multi-panel comics that actually read like comics, 4K output, custom fonts, and none of the muddy AI noise older models had. Then Hunter drops the real workflow cheat code...</p><p>Along the way: Gemini 3 attempts a “helpful” code mutiny, Claude and GPT-5 High Codex hold the line, and we explore why the real edge now comes from <i>AI workflows</i>—not single prompts. Things escalate into power-grid chaos, privatized electricity, and the eternal question: do we get a chill post-work utopia or an “ads to turn on your lights” dystopia?</p><p>If you’re building with <i>image AI</i>, experimenting with <i>AI automation</i>, or trying to stay ahead of <i>Midjourney vs Gemini vs GPT vs Claude</i>, this episode updates your map of where AI actually is <i>today</i>.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>Automating Humanity</i> – Robanity, Gemini 3’s “creative sabotage,” and why Hunter banned it<br />05:59 <i>Nano Banana Pro Arrives</i> – Perfect text, multi-panel comics & 4K image generation<br />10:46 <i>Midjourney vs Nano Banana Pro</i> – Artistic brilliance vs precision, and the hybrid workflow hack<br />16:19 <i>AI Workflows & Job Collapse</i> – Weeks of work in a day, design tools, and the coming power crunch<br />24:05 <i>Everything Is Fine (Probably)</i> – Privatized electricity, ads for lights, Gary, and the calm apocalypse</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a 
href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><h2><i>📢 Engage</i></h2><p>Be honest: are you an AI workflow mastermind or a one-prompt enjoyer?<br />Tell us the smartest trick you've discovered.</p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #NanoBananaPro #Midjourney #GoogleGemini #AIWorkflow</p>
]]></content:encoded>
      <enclosure length="32993571" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/442c0d3d-0907-49ec-9a27-adc4551c1de8/audio/8e4f1cd9-f8bb-463f-b1d9-54b2d3a3a149/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Nano Banana Pro Does What Midjourney Can&apos;t</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:30:34</itunes:duration>
      <itunes:summary>Hunter and Daniel put Google’s Nano Banana Pro up against Midjourney, testing whether Gemini 3’s new image model finally solves text rendering, multi-panel comics, and high-res output in a way Midjourney still can’t. Along the way, they dive into chaotic LLM behavior, evolving AI workflows, and what these leaps in automation mean for the future of creative work... and maybe the power grid.</itunes:summary>
      <itunes:subtitle>Hunter and Daniel put Google’s Nano Banana Pro up against Midjourney, testing whether Gemini 3’s new image model finally solves text rendering, multi-panel comics, and high-res output in a way Midjourney still can’t. Along the way, they dive into chaotic LLM behavior, evolving AI workflows, and what these leaps in automation mean for the future of creative work... and maybe the power grid.</itunes:subtitle>
      <itunes:keywords>midjourney text rendering, ai automation, nano banana pro, ai text rendering, hunter powers, multimodal ai, coding ai, image ai, ai power grid, ai comics, gemini 3, google gemini 3, midjourney, claude ai, ai workflows, creative ai, gpt-5, generative ai, they might be self aware, automation debate, google ai, tmbsa, nano banana, 4k image generation, grok ai, ai image generation, midjourney vs gemini, ai future of work, daniel bishop, ai tools</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>142</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c217c2b6-1ad8-4635-b7ac-83c837a5cd4e</guid>
      <title>The First AI To Run A Business And Call The Cops</title>
      <description><![CDATA[<p><i>Claude exposes Chinese hackers — AI hackers caught by the AI itself.</i> Plus: Disney AI chaos, vending-machine capitalism, EGI, and TikTok’s anti-AI slider.</p><p>Claude just did something no AI has ever done: it helped scammers automate phishing emails… then allegedly <i>reported them to the FBI</i>. If that isn’t peak cyberpunk, nothing is.</p><p>This episode hits everything from Anthropic’s snitch-bot moment to Disney diving headfirst into <i>Disney AI</i>, to LLMs running entire businesses through <i>Vending Bench</i> and simulated capitalism. We close with EGI vs AGI, TikTok’s “kill the AI” filter, and the existential weirdness of a world where your vending machine might demand a raise.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>Claude Reports Hackers to the FBI</i> – AI phishing emails, “AI hackers caught,” and Anthropic’s snitch-bot moment<br />06:40 <i>Disney’s AI Pivot</i> – Bob Iger, Disney characters in AI generators & the copyright gray zone<br />12:55 <i>AI That Runs a Business</i> – Vending Bench, vending machine capitalism & LLMs learning profit<br />19:20 <i>EGI vs AGI</i> – Embodied intelligence, robot futures & the path to autonomous AI “beings”<br />25:10 <i>TikTok’s Anti-AI Slider</i> – Detection problems, spammy feeds & the end of knowing what’s real</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr 
/><h2><i>📢 Engage</i></h2><p>Claude snitched on hackers. Your move: tell us — <i>should an AI be allowed to report crimes on its own?</i></p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #AIHackersCaught #TheyMightBeSelfAware</p>
]]></description>
      <pubDate>Thu, 04 Dec 2025 16:43:34 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Claude exposes Chinese hackers — AI hackers caught by the AI itself.</i> Plus: Disney AI chaos, vending-machine capitalism, EGI, and TikTok’s anti-AI slider.</p><p>Claude just did something no AI has ever done: it helped scammers automate phishing emails… then allegedly <i>reported them to the FBI</i>. If that isn’t peak cyberpunk, nothing is.</p><p>This episode hits everything from Anthropic’s snitch-bot moment to Disney diving headfirst into <i>Disney AI</i>, to LLMs running entire businesses through <i>Vending Bench</i> and simulated capitalism. We close with EGI vs AGI, TikTok’s “kill the AI” filter, and the existential weirdness of a world where your vending machine might demand a raise.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>Claude Reports Hackers to the FBI</i> – AI phishing emails, “AI hackers caught,” and Anthropic’s snitch-bot moment<br />06:40 <i>Disney’s AI Pivot</i> – Bob Iger, Disney characters in AI generators & the copyright gray zone<br />12:55 <i>AI That Runs a Business</i> – Vending Bench, vending machine capitalism & LLMs learning profit<br />19:20 <i>EGI vs AGI</i> – Embodied intelligence, robot futures & the path to autonomous AI “beings”<br />25:10 <i>TikTok’s Anti-AI Slider</i> – Detection problems, spammy feeds & the end of knowing what’s real</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr 
/><h2><i>📢 Engage</i></h2><p>Claude snitched on hackers. Your move: tell us — <i>should an AI be allowed to report crimes on its own?</i></p><p>New here? Subscribe for twice-weekly AI chaos.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #AIHackersCaught #TheyMightBeSelfAware</p>
]]></content:encoded>
      <enclosure length="35742201" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/f2c329ac-ebed-4789-b472-6c5e9fc65be8/audio/7ed71d2b-5266-4839-b539-fb47de8c0a6d/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>The First AI To Run A Business And Call The Cops</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:33:26</itunes:duration>
      <itunes:summary>Claude allegedly helped Chinese hackers automate phishing attacks then turned around and reported them to the FBI, kicking off a discussion about AI autonomy, ethics, and whether models should act as law enforcement. From Disney’s sudden dive into generative AI to LLM-run vending machines and the rise of EGI, the episode explores how quickly AI is bleeding into the real world in ways no one is prepared for.</itunes:summary>
      <itunes:subtitle>Claude allegedly helped Chinese hackers automate phishing attacks then turned around and reported them to the FBI, kicking off a discussion about AI autonomy, ethics, and whether models should act as law enforcement. From Disney’s sudden dive into generative AI to LLM-run vending machines and the rise of EGI, the episode explores how quickly AI is bleeding into the real world in ways no one is prepared for.</itunes:subtitle>
      <itunes:keywords>ai phishing, hunter powers, ai business automation, egi vs agi, ai detection, ai hackers caught, chinese hackers, copyright and ai, claude ai, vending bench, autonomous ai systems, bob iger, cybercrime, large language models, generative ai, tiktok ai filter, artificial intelligence news, they might be self aware, fbi report, tech podcast, disney ai, daniel bishop, ai security, embodied ai, disney characters ai, ai ethics, anthropic, ai vending machine</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>141</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b5dca0be-58a2-468d-9373-2286f11c6606</guid>
      <title>The AI Bubble Is A $57,000,000,000 Lie</title>
      <description><![CDATA[<p><i>Is the AI bubble a $57B lie? Nvidia’s record earnings, Gemini 3, and OpenAI’s $15M/day burn reveal the truth behind the AI investment bubble.</i></p><p>This week, Hunter & Daniel tear into the AI bubble narrative with Nvidia’s insane <i>$57B quarter</i>, Google’s <i>Gemini 3</i> leap, and OpenAI lighting money on fire at a rate that violates several financial and possibly geological laws. Is this a bubble… or the beginning of the AI takeover runway?</p><p>Nvidia can’t make H100s fast enough, small towns are fighting over new data centers, and Gemini 3 is suddenly everyone’s favorite coding partner. Meanwhile, OpenAI is losing <i>$15 million per day</i>, Groq is out there drop-kicking benchmarks, and Google’s making money hand over fist while calling the whole thing “irrational.”<br />Somewhere between Funko Pops, space-based data centers, and a chrome-blood sunrise… the truth emerges.</p><p>Stay to the end for the world premiere of our <i>official country song</i>: <i>They Might Be Self-Aware</i>. Yes, it’s real. 
Yes, it slaps.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>The $57B AI Bubble Lie</i> – Nvidia’s monster quarter & why the bubble isn’t popping<br />03:12 <i>The Data Center Frenzy</i> – Funko Pop economics, GPU saturation & the crash debate<br />07:48 <i>Compute Wars Begin</i> – Groq vs Nvidia, TPU power plays & the race for AI hardware<br />12:44 <i>Gemini 3 vs GPT-5</i> – Coding performance, benchmarks & real-world workflow upgrades<br />17:56 <i>OpenAI’s $15M/day Burn</i> – Profitability fears, enterprise shifts & who survives the AI revolution<br />31:14 <i>Country Song Premiere</i> – The debut of “They Might Be Self-Aware”</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><h2>📢 <i>Engage</i></h2><p>Where are we in the AI timeline?<br /><i>Beginning, middle, or “Hunter wants space-based data centers so send help.”</i> 🚀🛰️<br />Drop your answer in the comments.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
]]></description>
      <pubDate>Fri, 28 Nov 2025 20:23:02 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Is the AI bubble a $57B lie? Nvidia’s record earnings, Gemini 3, and OpenAI’s $15M/day burn reveal the truth behind the AI investment bubble.</i></p><p>This week, Hunter & Daniel tear into the AI bubble narrative with Nvidia’s insane <i>$57B quarter</i>, Google’s <i>Gemini 3</i> leap, and OpenAI lighting money on fire at a rate that violates several financial and possibly geological laws. Is this a bubble… or the beginning of the AI takeover runway?</p><p>Nvidia can’t make H100s fast enough, small towns are fighting over new data centers, and Gemini 3 is suddenly everyone’s favorite coding partner. Meanwhile, OpenAI is losing <i>$15 million per day</i>, Groq is out there drop-kicking benchmarks, and Google’s making money hand over fist while calling the whole thing “irrational.”<br />Somewhere between Funko Pops, space-based data centers, and a chrome-blood sunrise… the truth emerges.</p><p>Stay to the end for the world premiere of our <i>official country song</i>: <i>They Might Be Self-Aware</i>. Yes, it’s real. 
Yes, it slaps.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>The $57B AI Bubble Lie</i> – Nvidia’s monster quarter & why the bubble isn’t popping<br />03:12 <i>The Data Center Frenzy</i> – Funko Pop economics, GPU saturation & the crash debate<br />07:48 <i>Compute Wars Begin</i> – Groq vs Nvidia, TPU power plays & the race for AI hardware<br />12:44 <i>Gemini 3 vs GPT-5</i> – Coding performance, benchmarks & real-world workflow upgrades<br />17:56 <i>OpenAI’s $15M/day Burn</i> – Profitability fears, enterprise shifts & who survives the AI revolution<br />31:14 <i>Country Song Premiere</i> – The debut of “They Might Be Self-Aware”</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><h2>📢 <i>Engage</i></h2><p>Where are we in the AI timeline?<br /><i>Beginning, middle, or “Hunter wants space-based data centers so send help.”</i> 🚀🛰️<br />Drop your answer in the comments.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
]]></content:encoded>
      <enclosure length="36370184" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/1e43e4d0-d02f-4ec7-b4f5-9b1fbbeb50cc/audio/1664f090-fbdc-495f-9359-d640a29e36c6/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>The AI Bubble Is A $57,000,000,000 Lie</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:34:05</itunes:duration>
      <itunes:summary>Nvidia’s shocking $57B quarter ignites a full-throttle debate over whether the so-called AI bubble is about to burst or was never real in the first place. Hunter and Daniel push through GPU shortages, Gemini 3’s rise, OpenAI’s massive burn rate, and even space-based data centers to figure out where we actually are in the AI revolution.</itunes:summary>
      <itunes:subtitle>Nvidia’s shocking $57B quarter ignites a full-throttle debate over whether the so-called AI bubble is about to burst or was never real in the first place. Hunter and Daniel push through GPU shortages, Gemini 3’s rise, OpenAI’s massive burn rate, and even space-based data centers to figure out where we actually are in the AI revolution.</itunes:subtitle>
      <itunes:keywords>nvidia 57b, claude 4.5, google gemini, openai losing money, openai $15m per day, gemini 3, groq hardware, machine learning, ai market crash, ai trends, ai investment bubble, gpt-5, artificial intelligence news, nvidia earnings, they might be self aware, jensen huang, ai workflow, ai hardware race, ai bubble lie, tmbsa, google tpu, tech podcast, ai bubble, h100 gpus, ai data centers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>140</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2960af8b-8b50-4ff4-ba25-c337e6a9e7a5</guid>
      <title>AI Just Wrote a #1 Hit Song. It&apos;s Over.</title>
      <description><![CDATA[<p>AI music is officially mainstream: a Suno-style AI country track hit #1 on Billboard. We analyze AI-generated music, voice cloning, and the future of creative work.</p><p>From AI country music to Coca-Cola’s cursed AI holiday ad (with trucks gaining random numbers of wheels), this episode tears into the weirdest week yet in AI-generated content: voice cloning celebrities from beyond the grave, multilingual YouTube dubbing that might make Daniel 50% more Antonio Banderas, and the slow but unstoppable takeover of “AI slop” in commercials, movies, and soundtracks.</p><p>We also talk about AI coding suddenly leveling up, why neurodivergent creators seem to be getting superpowers from AI tools, and whether ghostwriters are about to vanish in a puff of synthetic smoke.</p><p>If you want to understand <i>AI music</i>, <i>AI voices</i>, <i>AI ads</i>, and why the creative world feels like it’s sliding into a parallel universe — this episode goes places.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>AI’s First #1 Hit Song</i> – How a Suno-style country track topped Billboard & why copyright might not apply<br />06:42 <i>Celebrity Voice Clones Break Loose</i> – Burt Reynolds, McConaughey, multilingual dubbing & the identity crisis ahead<br />13:58 <i>AI Ads Go Off the Rails</i> – Coca-Cola’s cursed holiday commercial, 70,000 AI clips & the rise of brand “slop”<br />20:21 <i>Is Human Creativity Still Required?</i> – How much of the #1 song was AI vs human, and the new hybrid artistry<br />27:44 <i>AI Is Leveling Up Fast</i> – Coding breakthroughs, neurodivergent superpowers & which creative jobs vanish next  </p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a 
href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>If you disagree with anything in this description, congratulations — you’re already part of the show.<br /><i>Drop a comment. Argue with us. Prove AI <i>didn't</i> write the best country song of the year.</i></p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #AImusic #SunoAI #AIvoices #AIcountry</p>
]]></description>
      <pubDate>Mon, 24 Nov 2025 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>AI music is officially mainstream: a Suno-style AI country track hit #1 on Billboard. We analyze AI-generated music, voice cloning, and the future of creative work.</p><p>From AI country music to Coca-Cola’s cursed AI holiday ad (with trucks gaining random numbers of wheels), this episode tears into the weirdest week yet in AI-generated content: voice cloning celebrities from beyond the grave, multilingual YouTube dubbing that might make Daniel 50% more Antonio Banderas, and the slow but unstoppable takeover of “AI slop” in commercials, movies, and soundtracks.</p><p>We also talk about AI coding suddenly leveling up, why neurodivergent creators seem to be getting superpowers from AI tools, and whether ghostwriters are about to vanish in a puff of synthetic smoke.</p><p>If you want to understand <i>AI music</i>, <i>AI voices</i>, <i>AI ads</i>, and why the creative world feels like it’s sliding into a parallel universe — this episode goes places.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>AI’s First #1 Hit Song</i> – How a Suno-style country track topped Billboard & why copyright might not apply<br />06:42 <i>Celebrity Voice Clones Break Loose</i> – Burt Reynolds, McConaughey, multilingual dubbing & the identity crisis ahead<br />13:58 <i>AI Ads Go Off the Rails</i> – Coca-Cola’s cursed holiday commercial, 70,000 AI clips & the rise of brand “slop”<br />20:21 <i>Is Human Creativity Still Required?</i> – How much of the #1 song was AI vs human, and the new hybrid artistry<br />27:44 <i>AI Is Leveling Up Fast</i> – Coding breakthroughs, neurodivergent superpowers & which creative jobs vanish next  </p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a 
href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>If you disagree with anything in this description, congratulations — you’re already part of the show.<br /><i>Drop a comment. Argue with us. Prove AI <i>didn't</i> write the best country song of the year.</i></p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #AImusic #SunoAI #AIvoices #AIcountry</p>
]]></content:encoded>
      <enclosure length="40085809" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/cccfb638-5a92-42bf-a8d0-f7d383058cde/audio/2db0bb0e-fcd1-4ad6-8bbe-6f68afecf127/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Just Wrote a #1 Hit Song. It&apos;s Over.</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:37:58</itunes:duration>
      <itunes:summary>AI just wrote a #1 hit country song, and Hunter and Daniel tear into what that means for music, copyright, and the future of creative work as AI-generated tracks start topping real-world charts. Along the way they dive into celebrity voice cloning, Coca-Cola’s uncanny AI holiday ad, the rise of “AI slop,” and why AI coding and AI creativity in general is accelerating faster than anyone’s ready for.</itunes:summary>
      <itunes:subtitle>AI just wrote a #1 hit country song, and Hunter and Daniel tear into what that means for music, copyright, and the future of creative work as AI-generated tracks start topping real-world charts. Along the way they dive into celebrity voice cloning, Coca-Cola’s uncanny AI holiday ad, the rise of “AI slop,” and why AI coding and AI creativity in general is accelerating faster than anyone’s ready for.</itunes:subtitle>
      <itunes:keywords>ai news podcast, hunter powers, ai coding, elevenlabs, ai commercials, ai soundtrack, ai creativity, ai hit song, suno ai, ai billboard, ai voice cloning, they might be self aware, celebrity ai voices, walk my walk, ai generated music, agi speculation, ai country song, ai slop, ai agents, breaking rust, daniel bishop, coca cola ai ad, ai music</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>139</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3ed2bb1f-c533-44a7-8557-b36b9f8bdc85</guid>
      <title>The $2 Trillion AI Bubble Is About To Burst</title>
      <description><![CDATA[<p>The <i>AI bubble</i> is cracking fast — from OpenAI’s bailout scare to a $2T compute binge no one can afford. Are we watching the start of the AI crash?</p><p>We also get into the fun stuff:<br />– GPT-5.1’s “personalities” (including the mysteriously missing flirty mode)<br />– Montana declaring the <i>Right to Compute</i> like it’s a frontier cult<br />– Whether AGI quietly arrived for text tasks<br />– And why robots might finally make 2026 the Year of the Hand™</p><p>If you like your AI news with a side of existential dread and two ex-coworkers arguing about the end of the economy… welcome home.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>GPT-5.1 & Weird AI Personalities</i> – Flirty mode missing, GPT router slowness & the two tribes of AI users<br />06:42 <i>OpenAI’s Money Pit</i> – Bailout rumors, the “$2T” compute plan & why the AI bubble might actually burst<br />13:50 <i>Government, GPUs & Obsolescence</i> – Manhattan Project comparisons, economic doom loops & the end of human-paid compute<br />18:40 <i>Montana’s Right to Compute</i> – AI safety, manual overrides & why generative AI shouldn’t run critical infrastructure<br />22:56 <i>AGI Check-In & Robot 2026</i> – Text-based AGI, Apple robot rumors & why AGI still can’t write a #1 hit song</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>If the 
AI bubble pops, what crashes first — the economy, the GPUs, or our sanity?<br />Drop your take below and subscribe to join the only AI conversation that calls the BS out loud.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
]]></description>
      <pubDate>Thu, 20 Nov 2025 19:06:33 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>The <i>AI bubble</i> is cracking fast — from OpenAI’s bailout scare to a $2T compute binge no one can afford. Are we watching the start of the AI crash?</p><p>We also get into the fun stuff:<br />– GPT-5.1’s “personalities” (including the mysteriously missing flirty mode)<br />– Montana declaring the <i>Right to Compute</i> like it’s a frontier cult<br />– Whether AGI quietly arrived for text tasks<br />– And why robots might finally make 2026 the Year of the Hand™</p><p>If you like your AI news with a side of existential dread and two ex-coworkers arguing about the end of the economy… welcome home.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>GPT-5.1 & Weird AI Personalities</i> – Flirty mode missing, GPT router slowness & the two tribes of AI users<br />06:42 <i>OpenAI’s Money Pit</i> – Bailout rumors, the “$2T” compute plan & why the AI bubble might actually burst<br />13:50 <i>Government, GPUs & Obsolescence</i> – Manhattan Project comparisons, economic doom loops & the end of human-paid compute<br />18:40 <i>Montana’s Right to Compute</i> – AI safety, manual overrides & why generative AI shouldn’t run critical infrastructure<br />22:56 <i>AGI Check-In & Robot 2026</i> – Text-based AGI, Apple robot rumors & why AGI still can’t write a #1 hit song</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>If 
the AI bubble pops, what crashes first — the economy, the GPUs, or our sanity?<br />Drop your take below and subscribe to join the only AI conversation that calls the BS out loud.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
]]></content:encoded>
      <enclosure length="31046223" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/a872b2ad-71e8-4244-993c-82d65e070026/audio/16d7724b-afa6-4740-9212-192d841a3867/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>The $2 Trillion AI Bubble Is About To Burst</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:28:33</itunes:duration>
      <itunes:summary>Hunter and Daniel break down whether the AI bubble, fueled by trillion-dollar compute plans, staggering losses, and OpenAI’s rumored bailout plea, is finally about to burst. Along the way, they dive into GPT-5.1’s new personalities, Montana’s “Right to Compute,” the state of AGI in 2025, and why 2026 might become the year robots take over everything except the Billboard charts.</itunes:summary>
      <itunes:subtitle>Hunter and Daniel break down whether the AI bubble, fueled by trillion-dollar compute plans, staggering losses, and OpenAI’s rumored bailout plea, is finally about to burst. Along the way, they dive into GPT-5.1’s new personalities, Montana’s “Right to Compute,” the state of AGI in 2025, and why 2026 might become the year robots take over everything except the Billboard charts.</itunes:subtitle>
      <itunes:keywords>trillion dollar ai, ai legislation, technology podcast, ai news podcast, gpt router, hunter powers, ai bubble 2025, gpt 5.1, openai bailout, montana right to compute, ai personalities, ai crash, openai funding, gpt personalities, ai future, agi 2025, robots 2026, ai economy, tech news 2025, they might be self aware, ai speculation, openai debt, ai regulation, ai bubble, daniel bishop, ai data centers, ai compute costs</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>138</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">80d2b928-0e59-4a65-88fa-5a4bb6d54bc3</guid>
      <title>Your Boss Is Lying To You About AI Layoffs</title>
      <description><![CDATA[<p>Your boss isn’t telling you the truth. We are. This week’s <i>AI news</i> breaks down the myth that “AI is taking every job” and exposes what’s actually happening behind the massive layoffs at Amazon, UPS, Duolingo, and more. Spoiler: AI is involved—but not in the way you think.</p><p><i>On this episode:</i></p><ul><li>The <i>real</i> reason companies are slashing thousands of jobs</li><li>Why CEOs get <i>rewarded</i> for layoffs (yes, the stock price goes UP)</li><li>How AI gives top performers a 10× multiplier—and what happens next</li><li>The Oreo dystopia: machine-learning cookie scientists</li><li>Channel 4’s AI reporter (was she real? does it matter?)</li><li>EA, game dev, and the parts of the industry quietly automated already</li><li>And the uncomfortable future where AI outrage lasts… six months</li></ul><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>Your Boss Is Lying</i> – AI layoffs, recession signals & the truth no one’s telling you<br />06:12 <i>The Efficiency Bomb</i> – AI’s 10× productivity, RIF math & why stock prices reward cuts<br />13:45 <i>Amazon, UPS & Oreo?!</i> – Real companies, real layoffs, and machine-learning cookie science<br />19:58 <i>AI Anchors Arrive</i> – Channel 4’s synthetic reporter, deepfake journalism & war-zone robots<br />25:10 <i>Are Any Jobs Safe?</i> – Game dev automation, cultural collapse & the fate of human creators</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a 
href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>What’s the REAL threat: AI automation or the economy pretending it’s AI?<br />Drop your take below and subscribe to join the only AI conversation that calls the BS out loud.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
]]></description>
      <pubDate>Mon, 17 Nov 2025 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>Your boss isn’t telling you the truth. We are. This week’s <i>AI news</i> breaks down the myth that “AI is taking every job” and exposes what’s actually happening behind the massive layoffs at Amazon, UPS, Duolingo, and more. Spoiler: AI is involved—but not in the way you think.</p><p><i>On this episode:</i></p><ul><li>The <i>real</i> reason companies are slashing thousands of jobs</li><li>Why CEOs get <i>rewarded</i> for layoffs (yes, the stock price goes UP)</li><li>How AI gives top performers a 10× multiplier—and what happens next</li><li>The Oreo dystopia: machine-learning cookie scientists</li><li>Channel 4’s AI reporter (was she real? does it matter?)</li><li>EA, game dev, and the parts of the industry quietly automated already</li><li>And the uncomfortable future where AI outrage lasts… six months</li></ul><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>Your Boss Is Lying</i> – AI layoffs, recession signals & the truth no one’s telling you<br />06:12 <i>The Efficiency Bomb</i> – AI’s 10× productivity, RIF math & why stock prices reward cuts<br />13:45 <i>Amazon, UPS & Oreo?!</i> – Real companies, real layoffs, and machine-learning cookie science<br />19:58 <i>AI Anchors Arrive</i> – Channel 4’s synthetic reporter, deepfake journalism & war-zone robots<br />25:10 <i>Are Any Jobs Safe?</i> – Game dev automation, cultural collapse & the fate of human creators</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a 
href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p>📢 <i>Engage</i></p><p>What’s the REAL threat: AI automation or the economy pretending it’s AI?<br />Drop your take below and subscribe to join the only AI conversation that calls the BS out loud.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p>
]]></content:encoded>
      <enclosure length="34714098" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/b39c7707-a35f-433b-a037-b2f26d416c20/audio/b59f110d-d7d4-45b3-a038-01d43bc18066/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Your Boss Is Lying To You About AI Layoffs</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:32:22</itunes:duration>
      <itunes:summary>Companies are blaming “AI layoffs” for massive job cuts, but the numbers reveal a messier truth as AI efficiency, recession pressure, and shareholder math all collide in ways your boss won’t admit. From Amazon and Duolingo to generative news anchors and machine-learning Oreos, we break down how automation is reshaping work faster (and weirder) than anyone is prepared for.</itunes:summary>
      <itunes:subtitle>Companies are blaming “AI layoffs” for massive job cuts, but the numbers reveal a messier truth as AI efficiency, recession pressure, and shareholder math all collide in ways your boss won’t admit. From Amazon and Duolingo to generative news anchors and machine-learning Oreos, we break down how automation is reshaping work faster (and weirder) than anyone is prepared for.</itunes:subtitle>
      <itunes:keywords>ups layoffs, recession, ai productivity, ai vs humans, tech recession, openai, amazon layoffs, machine learning, channel 4 ai anchor, klarna ai, ai recession, future of work, generative ai, duolingo ai, job automation, workforce automation, ai layoffs, economic slowdown, ai replacing jobs, tech layoffs, ai job market, ai reporter, automation trends, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>137</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">06cab91e-48e4-49df-ac79-99929ac40bf9</guid>
      <title>This $20,000 AI Robot Neo Is Secretly A Human</title>
      <description><![CDATA[<p><i>The $20K Neo Robot is “AI”—but secretly human-piloted. 🤖</i> Is your next smart home upgrade just a guy in VR doing your dishes?</p><p>Meet <i>Neo Robot</i>, the $20,000 humanoid “AI” helper that wowed the internet—until people learned there’s a real human behind the metal. In this episode of <i>They Might Be Self-Aware</i>, Hunter & Daniel dissect Neo’s teleoperation model, explore who’s actually working when your robot cleans, and debate if this is innovation or exploitation.</p><p>Then: Albania’s “pregnant” AI minister, minors banned from AI chats, and Hunter’s unnervingly human Tesla Grok conversation.</p><p>🎧 Tech, ethics, absurdity. Every Monday & Thursday. <i>Stay Self-Aware.</i></p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>The $20K Neo Robot</i> – The “AI” home servant that’s secretly human-piloted<br />06:57 <i>Teleoperation & Truth</i> – Who’s really working when your robot does the dishes?<br />12:43 <i>Ethics & Exploitation</i> – Global labor, data harvesting, and blurred autonomy<br />18:55 <i>AI Children & Ministers</i> – Albania’s digital offspring & the politics of personhood<br />25:46 <i>Human Connection or Simulation?</i> – Tesla Grok chats, emotional AI, and what’s next</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i></p><p>If your “AI” robot turned out to be a guy in VR doing your 
dishes… would you still pay $20K?</p><p>New here? Subscribe. We drop weekly AI heresy every Monday & Thursday.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#NeoRobot #AI #HumanoidRobot #TheyMightBeSelfAware #RobotRevolution</p>
]]></description>
      <pubDate>Tue, 11 Nov 2025 14:20:38 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>The $20K Neo Robot is “AI”—but secretly human-piloted. 🤖</i> Is your next smart home upgrade just a guy in VR doing your dishes?</p><p>Meet <i>Neo Robot</i>, the $20,000 humanoid “AI” helper that wowed the internet—until people learned there’s a real human behind the metal. In this episode of <i>They Might Be Self-Aware</i>, Hunter & Daniel dissect Neo’s teleoperation model, explore who’s actually working when your robot cleans, and debate if this is innovation or exploitation.</p><p>Then: Albania’s “pregnant” AI minister, minors banned from AI chats, and Hunter’s unnervingly human Tesla Grok conversation.</p><p>🎧 Tech, ethics, absurdity. Every Monday & Thursday. <i>Stay Self-Aware.</i></p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>The $20K Neo Robot</i> – The “AI” home servant that’s secretly human-piloted<br />06:57 <i>Teleoperation & Truth</i> – Who’s really working when your robot does the dishes?<br />12:43 <i>Ethics & Exploitation</i> – Global labor, data harvesting, and blurred autonomy<br />18:55 <i>AI Children & Ministers</i> – Albania’s digital offspring & the politics of personhood<br />25:46 <i>Human Connection or Simulation?</i> – Tesla Grok chats, emotional AI, and what’s next</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i></p><p>If your “AI” robot turned out to be a guy in VR doing your 
dishes… would you still pay $20K?</p><p>New here? Subscribe. We drop weekly AI heresy every Monday & Thursday.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#NeoRobot #AI #HumanoidRobot #TheyMightBeSelfAware #RobotRevolution</p>
]]></content:encoded>
      <enclosure length="34998743" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/7a33140b-f0be-40b6-9f5e-9cad91c500ee/audio/35a9cc8b-9317-4858-9784-72a893e88316/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>This $20,000 AI Robot Neo Is Secretly A Human</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:32:40</itunes:duration>
      <itunes:summary>Hunter and Daniel unpack the illusion behind the $20,000 Neo Robot, a supposed “AI” home assistant with a VR-enabled secret. From the ethics of teleoperated labor to Albania’s “pregnant” AI minister and Tesla’s flirty chatbot, they explore where artificial intelligence stops and human influence begins.</itunes:summary>
      <itunes:subtitle>Hunter and Daniel unpack the illusion behind the $20,000 Neo Robot, a supposed “AI” home assistant with a VR-enabled secret. From the ethics of teleoperated labor to Albania’s “pregnant” AI minister and Tesla’s flirty chatbot, they explore where artificial intelligence stops and human influence begins.</itunes:subtitle>
      <itunes:keywords>remote pilot ai, domestic robot, 83 ai children, hunter powers, humanoid robot, openai, human piloted robot, teleoperation, ai vs human, robotics 2025, machine learning, tesla grok, future of work, ai minister, artificial intelligence podcast, figure robot, they might be self aware, ai society, albania ai, robot servant, robot security, ai conversation, tesla optimus, ai regulation, ai robot, tech podcast, technology commentary, neo robot, daniel bishop, ai ethics, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>136</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">cf92b3ff-4484-4516-bd5a-36db75214baa</guid>
      <title>AI Is Now Censoring Presidents, But 10X-ing Children</title>
      <description><![CDATA[<p><i>Trump AI:</i> ChatGPT <i>refuses to identify Trump</i>—sparking a heated debate on <i>AI censorship, safety, and who controls truth online.</i></p><p>When OpenAI’s flagship model declines to recognize the sitting U.S. president, is that <i>responsible alignment</i> or the start of <i>algorithmic censorship</i>?</p><ul><li>How OpenAI’s policy blocks image recognition of public figures.</li><li>Why <i>Claude</i>, <i>Atlas</i>, and <i>Gemini</i> diverge on “safety.”</li><li>What it means for <i>local</i> and <i>open-weights AI</i>.</li><li>The rise of <i>AI ads</i> and <i>AI-driven education</i> that’s “10×-ing” kids.</li></ul><p><i>No demos—just raw analysis</i> from two technologists watching the AI gatekeepers redraw the boundaries of truth online.</p><p><i>New here?</i> <i>They Might Be Self-Aware</i> drops every Monday & Thursday—<i>smart, fast, no-BS AI news</i> with a side of existential dread.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>The AI Gold Rush</i> – Hunter S. 
Thompson AI, Claude Code & 10× creativity<br />10:52 <i>ChatGPT Won’t Identify Trump</i> – OpenAI’s new “safety” wall and why it matters<br />18:02 <i>Atlas vs China’s Open Models</i> – Censorship, control & open-weights rebellion<br />22:28 <i>Amazon & Google Monetize AI</i> – Ad-gated models and who owns your eyeballs<br />26:36 <i>AI in Schools & the Future of Learning</i> – 10× kids, bias, and local AI salvation</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i></p><p>If an AI won’t tell you who’s in the photo… who’s really in control?<br />Drop a 🧠 if you’d rather trust open models or 💼 if you side with corporate “safety.”</p><p>New here? Subscribe. We drop weekly AI heresy every Monday & Thursday.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #TrumpAI #TMBSA</p>
]]></description>
      <pubDate>Thu, 06 Nov 2025 13:50:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Trump AI:</i> ChatGPT <i>refuses to identify Trump</i>—sparking a heated debate on <i>AI censorship, safety, and who controls truth online.</i></p><p>When OpenAI’s flagship model declines to recognize the sitting U.S. president, is that <i>responsible alignment</i> or the start of <i>algorithmic censorship</i>?</p><ul><li>How OpenAI’s policy blocks image recognition of public figures.</li><li>Why <i>Claude</i>, <i>Atlas</i>, and <i>Gemini</i> diverge on “safety.”</li><li>What it means for <i>local</i> and <i>open-weights AI</i>.</li><li>The rise of <i>AI ads</i> and <i>AI-driven education</i> that’s “10×-ing” kids.</li></ul><p><i>No demos—just raw analysis</i> from two technologists watching the AI gatekeepers redraw the boundaries of truth online.</p><p><i>New here?</i> <i>They Might Be Self-Aware</i> drops every Monday & Thursday—<i>smart, fast, no-BS AI news</i> with a side of existential dread.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>The AI Gold Rush</i> – Hunter S. 
Thompson AI, Claude Code & 10× creativity<br />10:52 <i>ChatGPT Won’t Identify Trump</i> – OpenAI’s new “safety” wall and why it matters<br />18:02 <i>Atlas vs China’s Open Models</i> – Censorship, control & open-weights rebellion<br />22:28 <i>Amazon & Google Monetize AI</i> – Ad-gated models and who owns your eyeballs<br />26:36 <i>AI in Schools & the Future of Learning</i> – 10× kids, bias, and local AI salvation</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i></p><p>If an AI won’t tell you who’s in the photo… who’s really in control?<br />Drop a 🧠 if you’d rather trust open models or 💼 if you side with corporate “safety.”</p><p>New here? Subscribe. We drop weekly AI heresy every Monday & Thursday.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AI #TrumpAI #TMBSA</p>
]]></content:encoded>
      <enclosure length="38119492" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/189a99aa-eebc-4683-bf1e-7e541fc86cac/audio/91a37a69-4a58-47f8-b7ae-74afff55e6d5/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Is Now Censoring Presidents, But 10X-ing Children</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:35:55</itunes:duration>
      <itunes:summary>Hunter and Daniel dissect how OpenAI’s ChatGPT now refuses to identify Donald Trump in photos, questioning whether this is responsible AI safety or creeping censorship. Along the way, they unravel how corporate control, ad-driven models, and AI in education are shaping and possibly warping our digital future.</itunes:summary>
      <itunes:subtitle>Hunter and Daniel dissect how OpenAI’s ChatGPT now refuses to identify Donald Trump in photos, questioning whether this is responsible AI safety or creeping censorship. Along the way, they unravel how corporate control, ad-driven models, and AI in education are shaping and possibly warping our digital future.</itunes:subtitle>
      <itunes:keywords>ai education, amazon ai, hunter powers, algorithmic bias, ai presidents, tech ethics, atlas ai, openai trump, claude ai, china ai, ai schools, ai podcast, gemini ai, 10x kids, chatgpt censorship, ai ads, openai censorship, they might be self-aware, ai truth, open weights ai, local ai, trump ai, artificial intelligence debate, daniel bishop, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>135</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">08df5d59-8083-4700-b2e8-284039db056c</guid>
      <title>AI Creates 3-Eyed Monster &amp; Plans To Upload Your Soul</title>
      <description><![CDATA[<p><i>AI Rapture: Soul Uploads, Job Loss & a 3-Eyed Monster.</i> Inside this week’s wild dive into <i>AI ethics</i>, <i>automation</i>, and <i>pinball’s first GenAI scandal.</i></p><hr /><p><i>⏱️ CHAPTERS</i><br />00:00 <i>The AI Rapture Begins</i> – Uploading souls, cult jokes & digital heaven<br />04:52 <i>The 3-Eyed Pinball Monster</i> – Generative art scandal at Jersey Jack’s factory<br />09:40 <i>Soul Theft & AI Ethics</i> – Stolen creativity, Frankenstein parallels & “soulless code”<br />16:30 <i>The Automation Paradox</i> – Force multipliers, job loss & dark-factory futures<br />23:20 <i>The 10× Engineer Gospel</i> – Meta’s 5× order, GenAI development & who gets left behind</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i></p><p>Does AI art still have a soul, or is it just code wearing stolen skin?</p><p>New here? Subscribe. We drop weekly AI heresy every Monday & Thursday.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AIRapture #AIEthics #TMBSA</p>
]]></description>
      <pubDate>Fri, 31 Oct 2025 12:51:53 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>AI Rapture: Soul Uploads, Job Loss & a 3-Eyed Monster.</i> Inside this week’s wild dive into <i>AI ethics</i>, <i>automation</i>, and <i>pinball’s first GenAI scandal.</i></p><hr /><p><i>⏱️ CHAPTERS</i><br />00:00 <i>The AI Rapture Begins</i> – Uploading souls, cult jokes & digital heaven<br />04:52 <i>The 3-Eyed Pinball Monster</i> – Generative art scandal at Jersey Jack’s factory<br />09:40 <i>Soul Theft & AI Ethics</i> – Stolen creativity, Frankenstein parallels & “soulless code”<br />16:30 <i>The Automation Paradox</i> – Force multipliers, job loss & dark-factory futures<br />23:20 <i>The 10× Engineer Gospel</i> – Meta’s 5× order, GenAI development & who gets left behind</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i></p><p>Does AI art still have a soul, or is it just code wearing stolen skin?</p><p>New here? Subscribe. We drop weekly AI heresy every Monday & Thursday.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><p>#AIRapture #AIEthics #TMBSA</p>
]]></content:encoded>
      <enclosure length="40375840" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/90d19e8a-7dc0-46d0-aaa2-a9d5ed30a83c/audio/12da1b6d-f744-4c98-a180-1e5d562aa1a7/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Creates 3-Eyed Monster &amp; Plans To Upload Your Soul</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:38:16</itunes:duration>
      <itunes:summary>Hunter and Daniel dive into the strange collision of faith, art, and automation, starting with a three-eyed AI-generated pinball monster that sparks debate over creativity and stolen souls. From there, they unravel the “AI Rapture” and how generative tools, job-replacing automation, and the rise of the 10× engineer might decide who gets uploaded to digital heaven and who’s left behind.</itunes:summary>
      <itunes:subtitle>Hunter and Daniel dive into the strange collision of faith, art, and automation, starting with a three-eyed AI-generated pinball monster that sparks debate over creativity and stolen souls. From there, they unravel the “AI Rapture” and how generative tools, job-replacing automation, and the rise of the 10× engineer might decide who gets uploaded to digital heaven and who’s left behind.</itunes:subtitle>
      <itunes:keywords>technology podcast, ai art controversy, ai replace jobs, openai heaven, ai 10x engineer, genai development, artificial intelligence, agi cult, ai job loss, generative ai, jersey jack pinball, dark factory automation, pinball ai controversy, ai soul debate, ai rapture, meta ai, automation paradox, they might be self-aware, ai force multiplier, ai consciousness upload, ai ethics, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>134</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">71f70130-8aa3-432d-aab4-932f3b25b662</guid>
      <title>We Built a Startup in 20 Minutes Using AI Agents</title>
      <description><![CDATA[<p><i>AI Agents Build Facebook in 20 Minutes?!</i> Claude Code launches a full startup—auth, feed, likes, DB—before the coffee cools. Then we hire an AI CMO.</p><p><i>AI agents built Facebook in 20 minutes.</i> No code camp, no hype video—just <i>Claude</i> hammering away in a terminal while we watched the future write itself. Authentication, feed, likes, comments, database… all live before the coffee cooled. So yeah, we might’ve just <i>built a startup in 20 minutes</i>.</p><p>Then we lost our minds and hired an <i>AI Chief Marketing Officer</i>—with a <i>human assistant</i> who doesn’t know their boss is silicon. Ethical? Productive? Insane? All three. This week, we test what happens when <i>AI agents</i> stop helping and start <i>running the company</i>.</p><p>If you’ve ever wondered how close we are to an <i>AI startup</i> that runs itself—or just want to hear two tech weirdos argue about whether it’s “progress” or “the end”—this is your episode.</p><p>🔥 <i>Topics:</i> Claude Code, AI business, GPT-5 leaks, AI CEOs, and what happens when the intern’s a human.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>AI Agents Go Rogue</i> – Claude builds a Facebook clone in 20 minutes<br />04:40 <i>The Startup Singularity</i> – Can one prompt launch a company?<br />10:15 <i>AI CEOs & Human Lackeys</i> – Running marketing with machine strategy<br />16:30 <i>GPT-5 vs Claude Code</i> – Who’s really replacing whom?<br />22:45 <i>The Ethics of Automation</i> – When the AI hires <i>you</i></p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ 
Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><p>#AI #AIagents #Startup #ClaudeCode #GPT5</p>
]]></description>
      <pubDate>Tue, 28 Oct 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>AI Agents Build Facebook in 20 Minutes?!</i> Claude Code launches a full startup—auth, feed, likes, DB—before the coffee cools. Then we hire an AI CMO.</p><p><i>AI agents built Facebook in 20 minutes.</i> No code camp, no hype video—just <i>Claude</i> hammering away in a terminal while we watched the future write itself. Authentication, feed, likes, comments, database… all live before the coffee cooled. So yeah, we might’ve just <i>built a startup in 20 minutes</i>.</p><p>Then we lost our minds and hired an <i>AI Chief Marketing Officer</i>—with a <i>human assistant</i> who doesn’t know their boss is silicon. Ethical? Productive? Insane? All three. This week, we test what happens when <i>AI agents</i> stop helping and start <i>running the company</i>.</p><p>If you’ve ever wondered how close we are to an <i>AI startup</i> that runs itself—or just want to hear two tech weirdos argue about whether it’s “progress” or “the end”—this is your episode.</p><p>🔥 <i>Topics:</i> Claude Code, AI business, GPT-5 leaks, AI CEOs, and what happens when the intern’s a human.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>AI Agents Go Rogue</i> – Claude builds a Facebook clone in 20 minutes<br />04:40 <i>The Startup Singularity</i> – Can one prompt launch a company?<br />10:15 <i>AI CEOs & Human Lackeys</i> – Running marketing with machine strategy<br />16:30 <i>GPT-5 vs Claude Code</i> – Who’s really replacing whom?<br />22:45 <i>The Ethics of Automation</i> – When the AI hires <i>you</i></p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i></p><p>🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br 
/>▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><p>#AI #AIagents #Startup #ClaudeCode #GPT5</p>
]]></content:encoded>
      <enclosure length="31869619" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/e1462297-bca5-48bd-824e-93f452eece78/audio/4dc044d7-7c41-4832-b496-dfde77a0064f/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>We Built a Startup in 20 Minutes Using AI Agents</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:29:24</itunes:duration>
      <itunes:summary>Daniel challenges Hunter’s skepticism by using Claude Code to build a functioning Facebook-like app complete with authentication, feeds, likes, and a database in just 20 minutes. The two then spiral into a thought experiment about AI-run startups, debating whether AI agents could (or should) become the CEOs and marketers of the future while humans become their assistants.</itunes:summary>
      <itunes:subtitle>Daniel challenges Hunter’s skepticism by using Claude Code to build a functioning Facebook-like app complete with authentication, feeds, likes, and a database in just 20 minutes. The two then spiral into a thought experiment about AI-run startups, debating whether AI agents could (or should) become the CEOs and marketers of the future while humans become their assistants.</itunes:subtitle>
      <itunes:keywords>build startup with ai, ai entrepreneurship, ai automation, hunter powers, ai productivity, ai coding, ai app builder, ai cmo, openai, claude code, ai replaces jobs, claude ai, artificial intelligence, autonomous ai, ai startup experiment, ai marketing, ai podcast, claude sonnet, ai business, ai revolution, gpt-5, ai future, claude vs gpt, ai software development, gpt-5 leak, facebook clone, ai ceo, they might be self-aware, tech podcast, ai agents, ai builds company, daniel bishop, ai startup, anthropic, ai tools</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>133</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">150ed724-859a-4ae9-adf3-c9e4ede2bf43</guid>
      <title>CEOs Are Lying About AI Stealing Your Job</title>
      <description><![CDATA[<p><i>AI jobs aren’t vanishing — CEOs are lying. We expose the exec spin, fake AI productivity, and real data they hope you never see.</i><br />We unpack Salesforce’s 9k to 5k “AI efficiency” layoffs, <i>OpenAI GPT-5</i>, <i>Claude 4.5</i>, and the myths fueling automation panic. Subscribe for sharp takes, not PR.</p><p>We DECODE:</p><ul><li><i>Exec Spin Decoder</i> – what “AI transformation” really means in layoffs</li><li><i>Myth vs Data</i> – Yale & Brookings: no proof of mass job loss</li><li><i>Receipts</i> – Salesforce, SAP, & Anthropic’s creative accounting</li><li><i>Reality Check</i> – GPT-5 parity claims + AI hallucination truth</li><li><i>Playbook</i> – how to use AI without gutting trust or teams</li></ul><p><i>Who this is for:</i> engineers, managers, founders tired of buzzwords and excuses.</p><hr /><p><i>📢 Engage</i></p><p>Are CEOs lying about AI taking jobs, or are we in denial about the automation wave?<br />New here? Subscribe! We drop weekly AI chaos with teeth.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>AI Jobs vs Exec Spin</i> – The big lie & why the data disagrees<br />05:18 <i>Anthropic & Salesforce Receipts</i> – Hype, layoffs & AI productivity myths<br />12:42 <i>GPT-5 & Claude 4.5 Reality Check</i> – Parity claims and hallucination truths<br />19:25 <i>The AI Sin Eater</i> – Who takes the blame when automation fails<br />26:48 <i>Your Monday Playbook</i> – Using AI without losing trust or jobs</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i><br />🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ 
Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><p>#AI #AIjobs #TMBSA</p>
]]></description>
      <pubDate>Fri, 24 Oct 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>AI jobs aren’t vanishing — CEOs are lying. We expose the exec spin, fake AI productivity, and real data they hope you never see.</i><br />We unpack Salesforce’s 9k to 5k “AI efficiency” layoffs, <i>OpenAI GPT-5</i>, <i>Claude 4.5</i>, and the myths fueling automation panic. Subscribe for sharp takes, not PR.</p><p>We DECODE:</p><ul><li><i>Exec Spin Decoder</i> – what “AI transformation” really means in layoffs</li><li><i>Myth vs Data</i> – Yale & Brookings: no proof of mass job loss</li><li><i>Receipts</i> – Salesforce, SAP, & Anthropic’s creative accounting</li><li><i>Reality Check</i> – GPT-5 parity claims + AI hallucination truth</li><li><i>Playbook</i> – how to use AI without gutting trust or teams</li></ul><p><i>Who this is for:</i> engineers, managers, founders tired of buzzwords and excuses.</p><hr /><p><i>📢 Engage</i></p><p>Are CEOs lying about AI taking jobs, or are we in denial about the automation wave?<br />New here? Subscribe! We drop weekly AI chaos with teeth.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>AI Jobs vs Exec Spin</i> – The big lie & why the data disagrees<br />05:18 <i>Anthropic & Salesforce Receipts</i> – Hype, layoffs & AI productivity myths<br />12:42 <i>GPT-5 & Claude 4.5 Reality Check</i> – Parity claims and hallucination truths<br />19:25 <i>The AI Sin Eater</i> – Who takes the blame when automation fails<br />26:48 <i>Your Monday Playbook</i> – Using AI without losing trust or jobs</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i><br />🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ 
Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><p>#AI #AIjobs #TMBSA</p>
]]></content:encoded>
      <enclosure length="43165992" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/bd109734-4c62-4a0d-8f72-9e15ddfdcb6c/audio/0a486b6a-9005-4084-96c3-4e9d534bec3b/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>CEOs Are Lying About AI Stealing Your Job</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:41:10</itunes:duration>
      <itunes:summary>CEOs are blaming AI for layoffs, but the data doesn’t back it up. Hunter and Daniel expose how “AI productivity” hype is masking old-fashioned cost cutting, unpacking Salesforce’s job cuts, Anthropic’s doomsday claims, and the truth behind GPT-5’s impact on real work.</itunes:summary>
      <itunes:subtitle>CEOs are blaming AI for layoffs, but the data doesn’t back it up. Hunter and Daniel expose how “AI productivity” hype is masking old-fashioned cost cutting, unpacking Salesforce’s job cuts, Anthropic’s doomsday claims, and the truth behind GPT-5’s impact on real work.</itunes:subtitle>
      <itunes:keywords>claude 4.5, hunter powers, ai productivity, anthropic lies, ai hype, ai in the workplace, ai jobs, artificial intelligence, exec spin, gpt-5, generative ai, automation, ai hallucination, workforce automation, ai job myths, tech industry, they might be self-aware, ai impact on employment, openai gpt-5, salesforce layoffs, tech layoffs, daniel bishop, anthropic</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>132</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a280bbf6-d736-4694-b34b-3f39baaacefe</guid>
      <title>The Big Lie Behind AI Automation</title>
      <description><![CDATA[<p><i>Anthropic’s 30-hour AI automation claim is collapsing under scrutiny — we tested Claude 4.5 vs GPT-5 and exposed where AI truly fails.</i><br />Without human feedback, it crashes around our <i>“10-Minute Autonomy Rule.”</i> Hunter Powers and Daniel Bishop dismantle the biggest AI automation myth of 2025—why the machines still need us.</p><p>They decode <i>Claude 4.5’s hype</i>, pit it against <i>GPT-5</i>, and rip into <i>Meta’s humanoid robot</i> ambitions. It’s funny, fast, and fearless: exactly how tech talk should be.</p><p><i>New every Monday & Thursday. Bring receipts.</i></p><hr /><p><i>📢 Engage</i></p><p>Prove you’re not automated: type “still human 👨‍💻” and tell us—<br />When will AI actually replace you… or has it already?<br />New here? Subscribe. We drop weekly AI heresy every Monday & Thursday.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>Anthropic’s 30-Hour Lie</i> – What “AI automation” really means<br />04:22 <i>Claude 4.5 vs GPT-5</i> – The 10-Minute Autonomy Rule & bug-loop failure<br />10:11 <i>Feedback over Autonomy</i> – Why AI still can’t code alone<br />17:05 <i>Meta’s Robot Bet</i> – World models, humanoids & automation theater<br />25:28 <i>Digital Twins & The End of Work</i> – UBI, job collapse & Hunter’s wager</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i><br />🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a 
href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><p>#AIautomation #Claude45 #GPT5 #TheyMightBeSelfAware</p>
]]></description>
      <pubDate>Tue, 21 Oct 2025 12:48:10 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Anthropic’s 30-hour AI automation claim is collapsing under scrutiny — we tested Claude 4.5 vs GPT-5 and exposed where AI truly fails.</i><br />Without human feedback, it crashes around our <i>“10-Minute Autonomy Rule.”</i> Hunter Powers and Daniel Bishop dismantle the biggest AI automation myth of 2025—why the machines still need us.</p><p>They decode <i>Claude 4.5’s hype</i>, pit it against <i>GPT-5</i>, and rip into <i>Meta’s humanoid robot</i> ambitions. It’s funny, fast, and fearless: exactly how tech talk should be.</p><p><i>New every Monday & Thursday. Bring receipts.</i></p><hr /><p><i>📢 Engage</i></p><p>Prove you’re not automated: type “still human 👨‍💻” and tell us—<br />When will AI actually replace you… or has it already?<br />New here? Subscribe. We drop weekly AI heresy every Monday & Thursday.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>Anthropic’s 30-Hour Lie</i> – What “AI automation” really means<br />04:22 <i>Claude 4.5 vs GPT-5</i> – The 10-Minute Autonomy Rule & bug-loop failure<br />10:11 <i>Feedback over Autonomy</i> – Why AI still can’t code alone<br />17:05 <i>Meta’s Robot Bet</i> – World models, humanoids & automation theater<br />25:28 <i>Digital Twins & The End of Work</i> – UBI, job collapse & Hunter’s wager</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i><br />🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a 
href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><p>#AIautomation #Claude45 #GPT5 #TheyMightBeSelfAware</p>
]]></content:encoded>
      <enclosure length="42991540" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/9e8d267b-2c9d-49f8-b35d-9f92da3c9195/audio/d1e72b7f-f043-4daf-9806-e68d6a6ed099/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>The Big Lie Behind AI Automation</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:40:59</itunes:duration>
      <itunes:summary>Anthropic claims its new model, Claude 4.5, can code autonomously for 30 hours straight, but Hunter Powers and Daniel Bishop put that boast to the test and reveal it collapses after roughly ten minutes without human feedback. Along the way, they debate whether AI automation is genuine progress or just “automation theater,” and why Meta’s humanoid-robot ambitions may prove the same illusion on a larger scale.</itunes:summary>
      <itunes:subtitle>Anthropic claims its new model, Claude 4.5, can code autonomously for 30 hours straight, but Hunter Powers and Daniel Bishop put that boast to the test and reveal it collapses after roughly ten minutes without human feedback. Along the way, they debate whether AI automation is genuine progress or just “automation theater,” and why Meta’s humanoid-robot ambitions may prove the same illusion on a larger scale.</itunes:subtitle>
      <itunes:keywords>ai automation, robot workers, claude 4.5, coding automation, hunter powers, ai productivity, ai coding, automation myth, feedback loops, digital twins, 10-minute autonomy rule, ai replace jobs, ai work 30 hours, ai hype, openai, ai job replacement, technology debate, ai bug loops, universal basic income, ai podcast, claude coding test, future of work, meta ai robot, gpt-5, ai autonomy, claude vs gpt-5, they might be self-aware, meta humanoid, anthropic 30-hour claim, automation theater, daniel bishop, world model, ai ethics, anthropic, ai tools</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>131</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">258139f7-0c03-47f3-8b2b-38bab91435cd</guid>
      <title>The Trump AI Video That Breaks Reality</title>
      <description><![CDATA[<p>Trump AI Video Exposed: Deepfake propaganda, fake sombreros & how Sora 2 just made reality optional.</p><p>The <i>Trump AI video</i> isn’t just fake — it’s prophecy.<br />Unlabeled, unhinged, and <i>posted from official accounts.</i><br />A deepfake presidency preview.</p><p>This week on <i>They Might Be Self-Aware</i>, Hunter and Daniel crack open <i>AI propaganda</i> itself — the fake sombreros, the racist undertones, and the terrifying truth:<br />half the viewers thought it was real.</p><p>Then it gets worse.<br /><i>Sora 2</i> drops. OpenAI’s new text-to-video demo looks indistinguishable from footage. Add in <i>Veo</i>, <i>cameos</i>, and you’ve got something no one’s ready for — reality on demand.<br />We debate what happens when politics, art, and truth collapse into the same feed.</p><blockquote><p>“You think someone would just go on the internet and tell lies?”<br />Yes. And now it’s photorealistic.</p></blockquote><hr /><p><i>📢 Engage</i></p><p>Let’s run a Turing test in the comments:<br />Type “🥔 real human verified” and answer — Would you trust a deepfake news feed if it was more entertaining?<br />Subscribe for more weekly tech delirium.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>The Trump AI Video</i> – Deepfake politics, fake sombreros & the new propaganda playbook<br />04:54 <i>The Post-Truth Era</i> – When official accounts blur fact, fiction & democracy<br />09:15 <i>Sora 2 vs Veo</i> – AI-generated video so real it breaks your eyes<br />19:21 <i>Infinite Content, Zero Meaning</i> – Feeds, cameos & the collapse of human art<br />28:33 <i>Reality Is Optional Now</i> – The AI arms race and what’s left to believe</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i><br />🎧 Listen on Spotify: <a 
href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><p>#TrumpAI #Deepfake #Sora2 #AIPropaganda #PostTruth #AIVideo</p>
]]></description>
      <pubDate>Thu, 16 Oct 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>Trump AI Video Exposed: Deepfake propaganda, fake sombreros & how Sora 2 just made reality optional.</p><p>The <i>Trump AI video</i> isn’t just fake — it’s prophecy.<br />Unlabeled, unhinged, and <i>posted from official accounts.</i><br />A deepfake presidency preview.</p><p>This week on <i>They Might Be Self-Aware</i>, Hunter and Daniel crack open <i>AI propaganda</i> itself — the fake sombreros, the racist undertones, and the terrifying truth:<br />half the viewers thought it was real.</p><p>Then it gets worse.<br /><i>Sora 2</i> drops. OpenAI’s new text-to-video demo looks indistinguishable from footage. Add in <i>Veo</i>, <i>cameos</i>, and you’ve got something no one’s ready for — reality on demand.<br />We debate what happens when politics, art, and truth collapse into the same feed.</p><blockquote><p>“You think someone would just go on the internet and tell lies?”<br />Yes. And now it’s photorealistic.</p></blockquote><hr /><p><i>📢 Engage</i></p><p>Let’s run a Turing test in the comments:<br />Type “🥔 real human verified” and answer — Would you trust a deepfake news feed if it was more entertaining?<br />Subscribe for more weekly tech delirium.</p><p>🧠 <i>They Might Be Self-Aware — but are we?</i></p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>The Trump AI Video</i> – Deepfake politics, fake sombreros & the new propaganda playbook<br />04:54 <i>The Post-Truth Era</i> – When official accounts blur fact, fiction & democracy<br />09:15 <i>Sora 2 vs Veo</i> – AI-generated video so real it breaks your eyes<br />19:21 <i>Infinite Content, Zero Meaning</i> – Feeds, cameos & the collapse of human art<br />28:33 <i>Reality Is Optional Now</i> – The AI arms race and what’s left to believe</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i><br />🎧 Listen on Spotify: <a 
href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><p>#TrumpAI #Deepfake #Sora2 #AIPropaganda #PostTruth #AIVideo</p>
]]></content:encoded>
      <enclosure length="36180226" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/8adce18e-1240-4c10-802c-6371d584ecfd/audio/890abc39-81f9-4393-b7b7-cc95d45b6b52/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>The Trump AI Video That Breaks Reality</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:33:54</itunes:duration>
      <itunes:summary>Trump’s new AI-generated video blurs the line between propaganda and parody, and we dissect how it signals the start of a post-truth political era. We also break down OpenAI’s Sora 2, Veo, and cameo tech, showing how AI video is now so realistic it threatens to erase the boundary between fact and fiction.</itunes:summary>
      <itunes:subtitle>Trump’s new AI-generated video blurs the line between propaganda and parody, and we dissect how it signals the start of a post-truth political era. We also break down OpenAI’s Sora 2, Veo, and cameo tech, showing how AI video is now so realistic it threatens to erase the boundary between fact and fiction.</itunes:subtitle>
      <itunes:keywords>deepfake, hunter powers, ai propaganda, fake news, fake trump, ai politics, sora ai, sora 2 demo, ai videos, government ai, openai cameo, ai deepfake, openai sora, ai media, sora 2, sora cameo, ai government, they might be self aware, sora vs veo, ai truth, trump deepfake, ai reality, post-truth, fake news ai, trump ai, reality collapse, trump ai video, daniel bishop, ai ethics, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>130</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8c885708-b9c1-44f3-ae01-ba59b4d03853</guid>
      <title>Claude AI Listened To 120 Episodes, Then It Interviewed Us</title>
      <description><![CDATA[<p>We let an AI binge our show and judge us on-air—then we argued about VR work prisons, robot wives, Devin the “AI engineer,” and whether copyright is dead.</p><p>Tonight’s fights & confessions (scan these):<br />• VR Work: Freedom or 996-style dystopia? Boss-mandated metaverse vs solo beach “vibe coding.”<br />• Is Gary real? The $47 voice—and why humans still beat AI at story.<br />• Self-Driving x Insurance: Will insurers kill autonomy… or themselves?<br />• Devin (AI engineer): Intern-tier useful or prod-level risk? Where we actually trust it.<br />• Personal Turing Test: What it takes to fool us in real life.<br />• AI Girlfriend Line: Would you date a robot in public?<br />• Sellout Math: Our $100M price to let AI run TMBSA forever.<br />• Simulation & Self-Awareness: Are we the test… or the tested?</p><hr /><p><i>📢 Engage</i> </p><p>Comment to prove you’re human: type potato 🥔 and tell us: VR office—utopia or prison?<br />New here? Subscribe! We drop weekly AI chaos with teeth.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>Claude AI Interviews Us</i> – The setup, the experiment & Daniel’s big idea<br />04:30 <i>VR Work vs Dystopia</i> – Autonomy, control & vibe coding on Mars<br />10:00 <i>Is Gary Real?</i> – The $47 voice actor, metrics & the illusion of humanity<br />12:45 <i>Self-Driving Car Showdown</i> – Insurance greed, automation & grave-digging<br />15:00 <i>Devin the AI Engineer</i> – Bugs, trust, and the intern-level AI coworker<br />17:55 <i>The Turing Test Line</i> – How to spot fake coworkers & digital ghosts<br />23:50 <i>AI Spam & Deepfake Texts</i> – Hunter’s “NP-complete” problem replies<br />28:50 <i>Copyright Is Dead</i> – Creativity after the apocalypse of IP<br />31:30 <i>Dating Robots</i> – AI girlfriends, addiction & the human connection line<br />36:15 <i>Selling Our Digital Souls</i> – Would we take $10M to become AI content?<br />40:00 <i>Are We Self-Aware?</i> – Simulation theory, AI memory & 
consciousness loops  </p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i><br />🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><p>#AI #AIInterview #ArtificialIntelligence</p>
]]></description>
      <pubDate>Mon, 13 Oct 2025 13:02:55 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>We let an AI binge our show and judge us on-air—then we argued about VR work prisons, robot wives, Devin the “AI engineer,” and whether copyright is dead.</p><p>Tonight’s fights & confessions (scan these):<br />• VR Work: Freedom or 996-style dystopia? Boss-mandated metaverse vs solo beach “vibe coding.”<br />• Is Gary real? The $47 voice—and why humans still beat AI at story.<br />• Self-Driving x Insurance: Will insurers kill autonomy… or themselves?<br />• Devin (AI engineer): Intern-tier useful or prod-level risk? Where we actually trust it.<br />• Personal Turing Test: What it takes to fool us in real life.<br />• AI Girlfriend Line: Would you date a robot in public?<br />• Sellout Math: Our $100M price to let AI run TMBSA forever.<br />• Simulation & Self-Awareness: Are we the test… or the tested?</p><hr /><p><i>📢 Engage</i> </p><p>Comment to prove you’re human: type potato 🥔 and tell us: VR office—utopia or prison?<br />New here? Subscribe! We drop weekly AI chaos with teeth.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>Claude AI Interviews Us</i> – The setup, the experiment & Daniel’s big idea<br />04:30 <i>VR Work vs Dystopia</i> – Autonomy, control & vibe coding on Mars<br />10:00 <i>Is Gary Real?</i> – The $47 voice actor, metrics & the illusion of humanity<br />12:45 <i>Self-Driving Car Showdown</i> – Insurance greed, automation & grave-digging<br />15:00 <i>Devin the AI Engineer</i> – Bugs, trust, and the intern-level AI coworker<br />17:55 <i>The Turing Test Line</i> – How to spot fake coworkers & digital ghosts<br />23:50 <i>AI Spam & Deepfake Texts</i> – Hunter’s “NP-complete” problem replies<br />28:50 <i>Copyright Is Dead</i> – Creativity after the apocalypse of IP<br />31:30 <i>Dating Robots</i> – AI girlfriends, addiction & the human connection line<br />36:15 <i>Selling Our Digital Souls</i> – Would we take $10M to become AI content?<br />40:00 <i>Are We Self-Aware?</i> – Simulation theory, AI memory & 
consciousness loops  </p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i><br />🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><p>#AI #AIInterview #ArtificialIntelligence</p>
]]></content:encoded>
      <enclosure length="48502607" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/74f910c1-c5b3-4b03-8f15-ba95b5c57295/audio/a9d19da2-6f32-4921-9200-dc3073084a9c/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Claude AI Listened To 120 Episodes, Then It Interviewed Us</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:46:44</itunes:duration>
      <itunes:summary>After feeding 120 of our own podcast episodes into Claude AI, we let it turn the tables and interview us, confronting everything from AI coworkers and digital relationships to whether we’d sell our souls for $10 million. What follows is a chaotic, funny, and surprisingly philosophical showdown between two humans and the machine that knows them best.</itunes:summary>
      <itunes:subtitle>After feeding 120 of our own podcast episodes into Claude AI, we let it turn the tables and interview us, confronting everything from AI coworkers and digital relationships to whether we’d sell our souls for $10 million. What follows is a chaotic, funny, and surprisingly philosophical showdown between two humans and the machine that knows them best.</itunes:subtitle>
      <itunes:keywords>ai memory, ai debate, ai chatbots, ai engineer, technology podcast, hunter powers, self aware ai, deepfake voices, claude interview, ai consciousness, robot wife, future of ai, vr dystopia, virtual workspace, claude ai, artificial intelligence, virtual reality, ai philosophy, turing test, ai creativity, ai podcast, gpt-5, ai future, simulation theory, ai discussion, ai coworkers, ai society, podcast ai, ai fake humans, devin ai, ai self aware, metaverse work, ai self awareness, ai girlfriend, ai slop, ai humor, they might be self-aware, copyright dead, ai regulation, digital avatars, ai culture, ai interview, ai dating, daniel bishop, ai ethics, anthropic</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>129</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ebb5f216-f185-45c7-8399-76aea842b969</guid>
      <title>Hollywood&apos;s Billion-Dollar AI Movie Mistake.</title>
      <description><![CDATA[<p><i>Hollywood’s Billion-Dollar AI Movie Mistake (Lionsgate): 0 Films. All Fallout.</i><br /><i>AI movies</i> were supposed to print money. Instead, Lionsgate’s Runway bet shipped <i>nothing</i> and walked into a <i>copyright minefield</i>. You’ll understand why AI “movies” flopped, where <i>AI filmmaking</i> still wins, and the one blocker Hollywood can’t ignore.</p><p><i>What we cover:</i></p><ul><li>The inside story of Lionsgate’s AI meltdown and zero-film deal.</li><li>How <i>Google Veo 3</i> is quietly powering AI video creators.</li><li>The “Wizard of Oz” Sphere Vegas experiment and what it proves about AI cinema.</li><li>Why James Cameron’s AI take split filmmakers.</li><li>And the real future of AI tools for creators (and humans who still make things).</li></ul><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>AI Movies vs Reality</i> – Lionsgate’s billion-dollar gamble and 0-film result<br />03:55 <i>Google Veo 3 & Sphere Vegas</i> – Out-painting Oz and the tech actually working<br />07:10 <i>The Lionsgate Fail</i> – Copyright chaos and Hollywood’s AI hangover<br />13:05 <i>Is AI Creative?</i> – Cameron’s quote vs Hunter & Daniel’s take<br />17:40 <i>The Future of AI Filmmaking</i> – Tools > “AI Movies,” and why humans still matter</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i><br />🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i> Are AI films a tool or a theft of 
creativity? Drop your comment below.</p><p>#AIMovies #AIFilmmaking #Lionsgate</p>
]]></description>
      <pubDate>Thu, 9 Oct 2025 12:47:12 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Hollywood’s Billion-Dollar AI Movie Mistake (Lionsgate): 0 Films. All Fallout.</i><br /><i>AI movies</i> were supposed to print money. Instead, Lionsgate’s Runway bet shipped <i>nothing</i> and walked into a <i>copyright minefield</i>. You’ll understand why AI “movies” flopped, where <i>AI filmmaking</i> still wins, and the one blocker Hollywood can’t ignore.</p><p><i>What we cover:</i></p><ul><li>The inside story of Lionsgate’s AI meltdown and zero-film deal.</li><li>How <i>Google Veo 3</i> is quietly powering AI video creators.</li><li>The “Wizard of Oz” Sphere Vegas experiment and what it proves about AI cinema.</li><li>Why James Cameron’s AI take split filmmakers.</li><li>And the real future of AI tools for creators (and humans who still make things).</li></ul><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>AI Movies vs Reality</i> – Lionsgate’s billion-dollar gamble and 0-film result<br />03:55 <i>Google Veo 3 & Sphere Vegas</i> – Out-painting Oz and the tech actually working<br />07:10 <i>The Lionsgate Fail</i> – Copyright chaos and Hollywood’s AI hangover<br />13:05 <i>Is AI Creative?</i> – Cameron’s quote vs Hunter & Daniel’s take<br />17:40 <i>The Future of AI Filmmaking</i> – Tools > “AI Movies,” and why humans still matter</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i><br />🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i> Are AI films a tool or a 
theft of creativity? Drop your comment below.</p><p>#AIMovies #AIFilmmaking #Lionsgate</p>
]]></content:encoded>
      <enclosure length="40440219" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/95f7e9e0-f645-4a6f-8802-a5ae608fb4f9/audio/2596c677-cb77-4dd0-9c0e-92d1b79a671e/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Hollywood&apos;s Billion-Dollar AI Movie Mistake.</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:38:20</itunes:duration>
      <itunes:summary>Lionsgate’s billion-dollar AI movie experiment with Runway ML collapsed before a single film was made, exposing Hollywood’s overhyped faith in generative filmmaking and the legal minefield around AI-trained content. Meanwhile, Hunter and Daniel explore how tools like Google Veo 3 and projects such as the Wizard of Oz at Sphere Vegas reveal that AI works best as a creative amplifier, not a studio replacement.</itunes:summary>
      <itunes:subtitle>Lionsgate’s billion-dollar AI movie experiment with Runway ML collapsed before a single film was made, exposing Hollywood’s overhyped faith in generative filmmaking and the legal minefield around AI-trained content. Meanwhile, Hunter and Daniel explore how tools like Google Veo 3 and projects such as the Wizard of Oz at Sphere Vegas reveal that AI works best as a creative amplifier, not a studio replacement.</itunes:subtitle>
      <itunes:keywords>ai filmmaking, hunter powers, ai film, google ai video, ai in hollywood, wizard of oz remaster, ai movies, openai, lionsgate ai disaster, veo3 videos, ai video, sphere vegas, ai movie fails, google video ai, lionsgate fail, ai creativity, ai cinema, google veo, filmmaking technology, ai movie fail, veo3, ai legal issues, google veo 3, they might be self-aware, ai copyright, generative film, lionsgate, james cameron, daniel bishop, runway ml, ai tools</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>128</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7d859c34-2c51-4e5d-8312-97ce1a626bf9</guid>
      <title>Google’s New AI Will Kill Photoshop (Nano Banana)</title>
      <description><![CDATA[<p><i>Google’s Nano Banana AI: Photoshop Killer or Just Hype?</i><br />Google just dropped <i>Nano Banana</i>, an image-editing model so good it makes Photoshop blink. We fired it up inside <i>Google AI Studio</i> and the results? Equal parts magic and menace. This isn’t just a filter upgrade; it’s a creative quake. Designers, brace yourselves.</p><p>Inside this episode, Hunter and Daniel wrestle with the question: is Google quietly ending Adobe’s reign, or birthing the next creative revolution? Along the way: a <i>voice-recorder AI</i> that tiptoes the line between genius and <i>illegal recording</i>, the rise of <i>AI schlock</i>, and why <i>Figma AI</i> and <i>Napkin AI</i> might actually out-innovate the giants.</p><p><i>We cover:</i> Nano Banana’s wild image generation • Google AI Studio secrets • Figma vs Adobe showdown • AI ethics & recording consent • GPT-5-High’s “surgical code” trick.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>AI Gadget Confessions</i> – Voice recorders, lost devices & the Star-Wars aesthetic<br />03:10 <i>Recording Consent & “AI Schlock”</i> – Ethics, legality, and bottom-of-the-barrel AI gadgets<br />11:35 <i>GPT-5-High in Cursor</i> – When paying per query is actually worth it<br />14:20 <i>Google’s Nano Banana Revolution</i> – Next-gen image editing, cheap runs & “Photoshop killer” talk<br />21:20 <i>Figma, MCP & Napkin AI</i> – Adobe’s future, integrated AI design, and where this all leads</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i><br />🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a 
href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i></p><p><i>Enjoy the show?</i> Drop a comment with your take on AI glasses (privacy vs. utility), and <i>subscribe</i> for weekly, no-BS AI news and analysis.</p><p>#NanoBanana #GoogleAI #TheyMightBeSelfAware #PhotoshopKiller</p>
]]></description>
      <pubDate>Mon, 6 Oct 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Google’s Nano Banana AI: Photoshop Killer or Just Hype?</i><br />Google just dropped <i>Nano Banana</i>, an image-editing model so good it makes Photoshop blink. We fired it up inside <i>Google AI Studio</i> and the results? Equal parts magic and menace. This isn’t just a filter upgrade; it’s a creative quake. Designers, brace yourselves.</p><p>Inside this episode, Hunter and Daniel wrestle with the question: is Google quietly ending Adobe’s reign, or birthing the next creative revolution? Along the way: a <i>voice-recorder AI</i> that tiptoes the line between genius and <i>illegal recording</i>, the rise of <i>AI schlock</i>, and why <i>Figma AI</i> and <i>Napkin AI</i> might actually out-innovate the giants.</p><p><i>We cover:</i> Nano Banana’s wild image generation • Google AI Studio secrets • Figma vs Adobe showdown • AI ethics & recording consent • GPT-5-High’s “surgical code” trick.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>AI Gadget Confessions</i> – Voice recorders, lost devices & the Star-Wars aesthetic<br />03:10 <i>Recording Consent & “AI Schlock”</i> – Ethics, legality, and bottom-of-the-barrel AI gadgets<br />11:35 <i>GPT-5-High in Cursor</i> – When paying per query is actually worth it<br />14:20 <i>Google’s Nano Banana Revolution</i> – Next-gen image editing, cheap runs & “Photoshop killer” talk<br />21:20 <i>Figma, MCP & Napkin AI</i> – Adobe’s future, integrated AI design, and where this all leads</p><hr /><p>⚡ <i>Listen now & get self-aware before your tools do.</i><br />🎧 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a 
href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i></p><p><i>Enjoy the show?</i> Drop a comment with your take on AI glasses (privacy vs. utility), and <i>subscribe</i> for weekly, no-BS AI news and analysis.</p><p>#NanoBanana #GoogleAI #TheyMightBeSelfAware #PhotoshopKiller</p>
]]></content:encoded>
      <enclosure length="35883317" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/292074f2-05ec-478b-8292-e20254d842df/audio/1f3dd64f-01d0-4e4c-bb9c-06d2265803b2/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Google’s New AI Will Kill Photoshop (Nano Banana)</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:33:35</itunes:duration>
      <itunes:summary>Hunter and Daniel dive into Google’s new Nano Banana AI, a shockingly capable image-editing model that could threaten Photoshop’s throne while redefining creative work. Along the way they spar over AI schlock, recording-consent ethics, and how tools like Figma AI, GPT-5-High, and Napkin AI signal the next big shift in how humans and machines design together.</itunes:summary>
      <itunes:subtitle>Hunter and Daniel dive into Google’s new Nano Banana AI, a shockingly capable image-editing model that could threaten Photoshop’s throne while redefining creative work. Along the way they spar over AI schlock, recording-consent ethics, and how tools like Figma AI, GPT-5-High, and Napkin AI signal the next big shift in how humans and machines design together.</itunes:subtitle>
      <itunes:keywords>ai recording, ai presentation, google nano banana, ai schlock, hunter powers, mcp, gpt-5-high, figma ai, machine learning, artificial intelligence, ai image editing, ai podcast, illegal recording, creative ai, photoshop killer, model context protocol, weekly ai news, cursor editor, design automation, recording consent, voice recorder ai, adobe vs ai, image generation, google ai studio, google ai, nano banana, they might be self-aware, napkin ai, tech podcast, ai image generation, daniel bishop, ai news, ai tools</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>127</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">77531566-6fd6-4882-a8b3-39a0fa6a2e42</guid>
      <title>I Buy Every Gadget. Here&apos;s Why I Refuse To Buy Meta&apos;s New AI Glasses.</title>
      <description><![CDATA[<p><i>AI Glasses Privacy Nightmare</i> — This week we dig into <i>Meta AI glasses</i> (the new Ray-Ban model with a built-in display and wristband gestures) and ask the hard question: are the headline features worth the trade-offs, or is this a privacy nightmare in your pocket… and on your face? We compare real-world use cases to today’s phone + earbuds workflows and explain <i>why we’re refusing to buy</i>—at least for now.</p><p>Hunter and Daniel break down what Meta’s display actually does (notifications, translation overlays, video calling), how the <i>wristband gesture control</i> feels in practice, and why the camera-forward design still raises <i>public-space privacy</i> concerns. We contrast <i>AI translation glasses</i> with on-device tools (Lens/Translate, iOS equivalents), talk notification hygiene for focus and productivity, and outline what it would take to win us over: <i>dual-eye, media-grade AR displays</i> that can replace phone-first interactions. We also evaluate the $800 price point, the “looks like real Ray-Bans” advantage, where Apple or others could leapfrog, and the handful of use cases (captioning, cooking assistants, travel translation) that actually make sense today. 
If you’re researching <i>Meta glasses reviews</i>, <i>smart glasses 2024</i>, or <i>AI glasses vs Apple</i>, this is your field guide to what’s hype, what’s helpful, and what’s still half-baked.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>0:00 <i>Meta’s AI Glasses Explained</i> – Display, translation overlays & wristband control<br />6:16 <i>The Privacy Problem</i> – Cameras, public perception & “glassholes” 2.0<br />9:50 <i>Why Hunter Refuses to Buy</i> – Notifications, calls, and feature fatigue<br />14:16 <i>The Future We Want</i> – Dual-eye AR, Apple’s rumored glasses & real utility<br />24:36 <i>Price, Adoption & Alternatives</i> – $800 value, AirPods AI, and what’s next</p><hr /><p>🎧 <i>Listen & Subscribe</i><br />📱 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i></p><p><i>Enjoy the show?</i> Drop a comment with your take on AI glasses (privacy vs. utility), and <i>subscribe</i> for weekly, no-BS AI news and analysis.</p><p>#AI #Meta #SmartGlasses</p>
]]></description>
      <pubDate>Sat, 27 Sep 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>AI Glasses Privacy Nightmare</i> — This week we dig into <i>Meta AI glasses</i> (the new Ray-Ban model with a built-in display and wristband gestures) and ask the hard question: are the headline features worth the trade-offs, or is this a privacy nightmare in your pocket… and on your face? We compare real-world use cases to today’s phone + earbuds workflows and explain <i>why we’re refusing to buy</i>—at least for now.</p><p>Hunter and Daniel break down what Meta’s display actually does (notifications, translation overlays, video calling), how the <i>wristband gesture control</i> feels in practice, and why the camera-forward design still raises <i>public-space privacy</i> concerns. We contrast <i>AI translation glasses</i> with on-device tools (Lens/Translate, iOS equivalents), talk notification hygiene for focus and productivity, and outline what it would take to win us over: <i>dual-eye, media-grade AR displays</i> that can replace phone-first interactions. We also evaluate the $800 price point, the “looks like real Ray-Bans” advantage, where Apple or others could leapfrog, and the handful of use cases (captioning, cooking assistants, travel translation) that actually make sense today. 
If you’re researching <i>Meta glasses reviews</i>, <i>smart glasses 2024</i>, or <i>AI glasses vs Apple</i>, this is your field guide to what’s hype, what’s helpful, and what’s still half-baked.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>0:00 <i>Meta’s AI Glasses Explained</i> – Display, translation overlays & wristband control<br />6:16 <i>The Privacy Problem</i> – Cameras, public perception & “glassholes” 2.0<br />9:50 <i>Why Hunter Refuses to Buy</i> – Notifications, calls, and feature fatigue<br />14:16 <i>The Future We Want</i> – Dual-eye AR, Apple’s rumored glasses & real utility<br />24:36 <i>Price, Adoption & Alternatives</i> – $800 value, AirPods AI, and what’s next</p><hr /><p>🎧 <i>Listen & Subscribe</i><br />📱 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i></p><p><i>Enjoy the show?</i> Drop a comment with your take on AI glasses (privacy vs. utility), and <i>subscribe</i> for weekly, no-BS AI news and analysis.</p><p>#AI #Meta #SmartGlasses</p>
]]></content:encoded>
      <enclosure length="33360140" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/3d08c689-b30b-462b-99a8-df01dcbf3e36/audio/75b6bb36-f94b-4055-b355-8705819ba4ee/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>I Buy Every Gadget. Here&apos;s Why I Refuse To Buy Meta&apos;s New AI Glasses.</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:30:57</itunes:duration>
      <itunes:summary>Hunter and Daniel break down Meta’s new Ray-Ban AI glasses, exploring flashy features like display overlays, gesture controls, and real-time translation. Despite the hype, they highlight privacy concerns and clunky use cases, and explain why, unlike most new gadgets, these glasses aren’t worth buying yet.</itunes:summary>
      <itunes:subtitle>Hunter and Daniel break down Meta’s new Ray-Ban AI glasses, exploring flashy features like display overlays, gesture controls, and real-time translation. Despite the hype, they highlight privacy concerns and clunky use cases, and explain why, unlike most new gadgets, these glasses aren’t worth buying yet.</itunes:subtitle>
      <itunes:keywords>ray-ban ai, ai privacy concerns, future of ar, wearable ai, smart glasses camera, ai gadgets, meta glasses review, smart glasses review, ai glasses privacy, ai glasses vs apple, apple vision pro comparison, ai translation glasses, meta ai glasses, ai translation tech, meta ai display, meta glasses display, ai glasses review, ray-ban smart glasses, meta ray-ban display, smart glasses 2024, meta display glasses</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>126</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">32a644ba-d379-4775-8195-a765fe26727b</guid>
      <title>Why OpenAI is Terrified of Its AI Therapist</title>
      <description><![CDATA[<p><i>AI therapy is here and OpenAI is terrified of it.</i><br />This week on <i>They Might Be Self-Aware</i>, Hunter and Daniel break down the life-or-death stakes of letting Large Language Models play the role of therapist. Should ChatGPT ever be allowed to talk people through self-harm? Or is that legal and ethical liability too great for OpenAI to risk?</p><p>We explore the explosive debate around <i>AI therapy vs human therapists</i>, OpenAI’s controversial <i>age-gating model for self-harm conversations</i>, and why lawsuits are forcing companies to walk a tightrope between saving lives and avoiding blame. Along the way, we tackle <i>Anthropic’s ban on domestic surveillance</i>, the growing fears of an <i>AGI job apocalypse</i>, and the rise of <i>AI in dating and religion</i>.</p><p>Whether you see AI as savior or doom, this episode delivers a no-holds-barred look at the frontier of mental health, surveillance, and humanity’s biggest gamble.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 Intro – Egos, Algorithms & Madness<br />01:40 OpenAI age-gating self-harm chats<br />05:10 AI lawsuits & the risk of AI therapy<br />09:45 Should ChatGPT replace human therapists?<br />11:20 Anthropic bans AI from domestic surveillance<br />15:00 The race toward AGI – danger or hype?<br />16:55 AI jobs crisis: mass layoffs & automation<br />22:00 Pareto principle & Twitter layoffs<br />23:50 Hunger strike against AGI<br />25:40 AI in personal life: dating, religion & fear</p><hr /><p>🎧 <i>Listen & Subscribe</i><br />📱 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a 
href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i></p><p>Share your thoughts in the comments and we might feature them in a future episode.</p><p>#AItherapy #ArtificialIntelligence #Podcast</p>
]]></description>
      <pubDate>Tue, 23 Sep 2025 12:58:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>AI therapy is here and OpenAI is terrified of it.</i><br />This week on <i>They Might Be Self-Aware</i>, Hunter and Daniel break down the life-or-death stakes of letting Large Language Models play the role of therapist. Should ChatGPT ever be allowed to talk people through self-harm? Or is that legal and ethical liability too great for OpenAI to risk?</p><p>We explore the explosive debate around <i>AI therapy vs human therapists</i>, OpenAI’s controversial <i>age-gating model for self-harm conversations</i>, and why lawsuits are forcing companies to walk a tightrope between saving lives and avoiding blame. Along the way, we tackle <i>Anthropic’s ban on domestic surveillance</i>, the growing fears of an <i>AGI job apocalypse</i>, and the rise of <i>AI in dating and religion</i>.</p><p>Whether you see AI as savior or doom, this episode delivers a no-holds-barred look at the frontier of mental health, surveillance, and humanity’s biggest gamble.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 Intro – Egos, Algorithms & Madness<br />01:40 OpenAI age-gating self-harm chats<br />05:10 AI lawsuits & the risk of AI therapy<br />09:45 Should ChatGPT replace human therapists?<br />11:20 Anthropic bans AI from domestic surveillance<br />15:00 The race toward AGI – danger or hype?<br />16:55 AI jobs crisis: mass layoffs & automation<br />22:00 Pareto principle & Twitter layoffs<br />23:50 Hunger strike against AGI<br />25:40 AI in personal life: dating, religion & fear</p><hr /><p>🎧 <i>Listen & Subscribe</i><br />📱 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a 
href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i></p><p>Share your thoughts in the comments and we might feature them in a future episode.</p><p>#AItherapy #ArtificialIntelligence #Podcast</p>
]]></content:encoded>
      <enclosure length="38373451" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/b415c6b7-6d0e-48dd-8496-5df477f0d36a/audio/d27247b3-5d30-4e55-98c3-3b301ee9eb28/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Why OpenAI is Terrified of Its AI Therapist</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:36:11</itunes:duration>
      <itunes:summary>Hunter and Daniel dive into the high-stakes debate around AI therapy, OpenAI’s age-gating rules on self-harm conversations, and the legal and ethical risks of letting ChatGPT act like a therapist. Along the way, they tackle Anthropic’s surveillance ban, fears of an AGI-driven job apocalypse, and how AI is creeping into everything from dating apps to religion.</itunes:summary>
      <itunes:subtitle>Hunter and Daniel dive into the high-stakes debate around AI therapy, OpenAI’s age-gating rules on self-harm conversations, and the legal and ethical risks of letting ChatGPT act like a therapist. Along the way, they tackle Anthropic’s surveillance ban, fears of an AGI-driven job apocalypse, and how AI is creeping into everything from dating apps to religion.</itunes:subtitle>
      <itunes:keywords>ai in religion, chatgpt therapy, openai mental health, ai age verification, ai kills jobs, chatgpt deaths, ai job apocalypse, weekly ai news, artificial intelligence podcast, ai domestic surveillance, ai replacing therapists, openai lawsuits, openai self harm, ai dating apps, agi hunger strike, ai therapy, agi dangers, chatgpt age check, chatgpt suicide, ai therapy vs human, anthropic surveillance ban</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>125</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c1567327-9d3e-4115-8ecb-5dd46427de6f</guid>
      <title>The AI Protection Racket Has Begun</title>
      <description><![CDATA[<p><i>AI protection rackets are here — and they could change everything.</i> This week on <i>They Might Be Self-Aware</i>, Hunter Powers and Daniel Bishop break down the wild rise of the <i>AI protection racket</i>: hackers threatening to feed stolen data into AI models unless victims pay up. Is this the future of cybercrime — and your career?</p><p>We dig into the blurred lines between <i>AI slop</i>, creative industries under siege, and what it means when copyright law, ransomware, and generative AI collide. From record labels signing <i>AI artists</i> to <i>AI podcasts flooding the web</i> by the thousands, we ask: is the internet turning into a dead marketplace of synthetic content?</p><p>Meanwhile, Tesla’s new master plan sparks suspicion it was written by <i>Grok AI</i>, while the UK Parliament delivers speeches that sound suspiciously like <i>ChatGPT boilerplate</i>. And what happens when political theater literally becomes <i>AI theater</i>? Would you pay to watch robots on Broadway — or Cirque du Soleil run by Boston Dynamics?</p><p>Whether you’re worried about <i>AI replacing your job</i>, curious about how <i>AI teleprompters</i> will shape politics, or just wondering how far this digital theater will go, this episode uncovers the cracks forming at the foundation of human creativity.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>Strange Times on the AI Frontier</i><br />01:07 <i>Record Labels Sign AI Artists</i><br />04:32 <i>Copyright Loopholes & AI Music Ownership</i><br />07:30 <i>Wondery’s AI Podcast Factory (3,000 Episodes a Week)</i><br />11:44 <i>Ransomware Threats: Pay Up or Be Replaced by AI</i><br />16:00 <i>The Birth of the AI Protection Racket</i><br />21:05 <i>Tesla’s Grok and AI-Written Master Plans</i><br />25:27 <i>UK Parliament & ChatGPT Speeches</i><br />30:23 <i>Political Theater, AI Teleprompters & The Rise of AI Theater</i><br />34:45 <i>Robots on Stage: Cirque du Soleil Meets Boston Dynamics</i><br 
/>38:48 <i>Wrap-Up & Subscribe</i></p><hr /><p>🎧 <i>Listen & Subscribe</i><br />📱 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i></p><p>If AI can threaten to replace you, is it time to pay for <i>protection AI</i> — or is this just the next act in the endless theater of technology?<br />👉 Drop your thoughts in the comments.<br />👉 Like & subscribe for weekly AI deep dives every Monday.</p><p>#AI #ArtificialIntelligence #AIPodcast</p>
]]></description>
      <pubDate>Fri, 19 Sep 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>AI protection rackets are here — and they could change everything.</i> This week on <i>They Might Be Self-Aware</i>, Hunter Powers and Daniel Bishop break down the wild rise of the <i>AI protection racket</i>: hackers threatening to feed stolen data into AI models unless victims pay up. Is this the future of cybercrime — and your career?</p><p>We dig into the blurred lines between <i>AI slop</i>, creative industries under siege, and what it means when copyright law, ransomware, and generative AI collide. From record labels signing <i>AI artists</i> to <i>AI podcasts flooding the web</i> by the thousands, we ask: is the internet turning into a dead marketplace of synthetic content?</p><p>Meanwhile, Tesla’s new master plan sparks suspicion it was written by <i>Grok AI</i>, while the UK Parliament delivers speeches that sound suspiciously like <i>ChatGPT boilerplate</i>. And what happens when political theater literally becomes <i>AI theater</i>? Would you pay to watch robots on Broadway — or Cirque du Soleil run by Boston Dynamics?</p><p>Whether you’re worried about <i>AI replacing your job</i>, curious about how <i>AI teleprompters</i> will shape politics, or just wondering how far this digital theater will go, this episode uncovers the cracks forming at the foundation of human creativity.</p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>Strange Times on the AI Frontier</i><br />01:07 <i>Record Labels Sign AI Artists</i><br />04:32 <i>Copyright Loopholes & AI Music Ownership</i><br />07:30 <i>Wondery’s AI Podcast Factory (3,000 Episodes a Week)</i><br />11:44 <i>Ransomware Threats: Pay Up or Be Replaced by AI</i><br />16:00 <i>The Birth of the AI Protection Racket</i><br />21:05 <i>Tesla’s Grok and AI-Written Master Plans</i><br />25:27 <i>UK Parliament & ChatGPT Speeches</i><br />30:23 <i>Political Theater, AI Teleprompters & The Rise of AI Theater</i><br />34:45 <i>Robots on Stage: Cirque du Soleil Meets Boston Dynamics</i><br 
/>38:48 <i>Wrap-Up & Subscribe</i></p><hr /><p>🎧 <i>Listen & Subscribe</i><br />📱 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>📢 Engage</i></p><p>If AI can threaten to replace you, is it time to pay for <i>protection AI</i> — or is this just the next act in the endless theater of technology?<br />👉 Drop your thoughts in the comments.<br />👉 Like & subscribe for weekly AI deep dives every Monday.</p><p>#AI #ArtificialIntelligence #AIPodcast</p>
]]></content:encoded>
      <enclosure length="41530555" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/cc13813c-aa52-4f40-a358-ed7692074ba3/audio/ad4aeaaf-b5d5-43ce-ac25-2f82e64856fe/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>The AI Protection Racket Has Begun</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:39:28</itunes:duration>
      <itunes:summary>The boys unpack the rise of an AI protection racket, from hackers threatening to train models on stolen art to labels signing AI artists. PLUS: the flood of AI-generated “slop” podcasts and messy copyright loopholes. Then they debate how AI-written manifestos (hello, Tesla Grok) and boilerplate political speeches push us toward full-blown AI theater, and what that means for jobs, creativity, and culture.</itunes:summary>
      <itunes:subtitle>The boys unpack the rise of an AI protection racket, from hackers threatening to train models on stolen art to labels signing AI artists. PLUS: the flood of AI-generated “slop” podcasts and messy copyright loopholes. Then they debate how AI-written manifestos (hello, Tesla Grok) and boilerplate political speeches push us toward full-blown AI theater, and what that means for jobs, creativity, and culture.</itunes:subtitle>
      <itunes:keywords>dead internet, ai politics, openai, cirque du soleil, ai protection racket, streaming ai, tesla grok, ai theater, copyright ai, artificial intelligence news, ai teleprompter, chatgpt, protection ai, ai slop, notebooklm, weekly ai podcast, ai podcasts, ai culture, ai ransomware, wondery ai, ai artists, ai career killer, ai replacement, anthropic</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>124</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3c595c7f-282b-49b0-95d7-5c7fba7b2e2a</guid>
      <title>Is Your AI Calling The Police On You?</title>
      <description><![CDATA[<p><i>Is Your AI Calling the Police On You? | ChatGPT Privacy, AI Monitoring & Debates</i></p><p>OpenAI has confirmed it will <i>scan ChatGPT conversations</i> for “problematic” content—and in extreme cases, <i>report users to law enforcement</i>. What does this mean for your privacy, your rights, and the future of AI safety? In this episode of <i>They Might Be Self-Aware</i>, Hunter Powers and Daniel Bishop dive into the growing concerns around <i>ChatGPT privacy</i> and whether AI tools are quietly becoming digital informants.</p><p>We debate whether AI monitoring is necessary for safety or a slippery slope toward <i>AI Big Brother</i>. Along the way, we share stories about suspicious emails, bug bounty scams, and how even innocent prompts could one day be used to build a legal case against you. We also explore Columbia University’s experiment with <i>Sway AI</i>, an artificial intelligence “moderator” for debates, and ask if this is a preview of political debates being run by robots.</p><hr /><p>🎧 <i>Listen & Subscribe</i><br />📱 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>Welcome to the AI Jungle</i><br />02:00 <i>Suspicious “AI Bug Bounty” Email</i><br />06:45 <i>ChatGPT Privacy Concerns & Monitoring</i><br />09:00 <i>OpenAI Scanning & Reporting to Police</i><br />14:30 <i>Should AI Intervene on Self-Harm?</i><br />18:00 <i>The Slippery Slope of AI 
Surveillance</i><br />22:00 <i>Sway AI – Debates Moderated by Artificial Intelligence</i><br />27:30 <i>Could an AI Moderate U.S. Presidential Debates?</i><br />34:00 <i>Wrap-Up & Subscribe</i></p><hr /><p><i>📢 Engage</i></p><p>Do you trust AI companies with your private conversations? Should AI report dangerous behavior, or is that the ultimate violation of privacy? Share your thoughts in the comments and we might feature them in a future episode.</p><p>#ChatGPT #Privacy #ArtificialIntelligence</p>
]]></description>
      <pubDate>Tue, 16 Sep 2025 20:12:47 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Is Your AI Calling the Police On You? | ChatGPT Privacy, AI Monitoring & Debates</i></p><p>OpenAI has confirmed it will <i>scan ChatGPT conversations</i> for “problematic” content—and in extreme cases, <i>report users to law enforcement</i>. What does this mean for your privacy, your rights, and the future of AI safety? In this episode of <i>They Might Be Self-Aware</i>, Hunter Powers and Daniel Bishop dive into the growing concerns around <i>ChatGPT privacy</i> and whether AI tools are quietly becoming digital informants.</p><p>We debate whether AI monitoring is necessary for safety or a slippery slope toward <i>AI Big Brother</i>. Along the way, we share stories about suspicious emails, bug bounty scams, and how even innocent prompts could one day be used to build a legal case against you. We also explore Columbia University’s experiment with <i>Sway AI</i>, an artificial intelligence “moderator” for debates, and ask if this is a preview of political debates being run by robots.</p><hr /><p>🎧 <i>Listen & Subscribe</i><br />📱 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>Welcome to the AI Jungle</i><br />02:00 <i>Suspicious “AI Bug Bounty” Email</i><br />06:45 <i>ChatGPT Privacy Concerns & Monitoring</i><br />09:00 <i>OpenAI Scanning & Reporting to Police</i><br />14:30 <i>Should AI Intervene on Self-Harm?</i><br />18:00 <i>The Slippery Slope of AI 
Surveillance</i><br />22:00 <i>Sway AI – Debates Moderated by Artificial Intelligence</i><br />27:30 <i>Could an AI Moderate U.S. Presidential Debates?</i><br />34:00 <i>Wrap-Up & Subscribe</i></p><hr /><p><i>📢 Engage</i></p><p>Do you trust AI companies with your private conversations? Should AI report dangerous behavior, or is that the ultimate violation of privacy? Share your thoughts in the comments and we might feature them in a future episode.</p><p>#ChatGPT #Privacy #ArtificialIntelligence</p>
]]></content:encoded>
      <enclosure length="37939065" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/48246f0c-f50c-4287-a4ea-f1d405e6d0f6/audio/c206ff00-46a0-4308-a6ac-4d463a1d183c/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Is Your AI Calling The Police On You?</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:35:43</itunes:duration>
      <itunes:summary>Hunter and Daniel unpack reports that OpenAI can scan ChatGPT conversations and, in extreme cases, refer them to law enforcement, sparking a sharp debate over privacy vs. safety and where “AI monitoring” becomes AI Big Brother. They also explore Sway AI’s proposal to moderate debates, whether AI should intervene in self-harm cases, and the likelihood of future political debates being run by an AI.</itunes:summary>
      <itunes:subtitle>Hunter and Daniel unpack reports that OpenAI can scan ChatGPT conversations and, in extreme cases, refer them to law enforcement, sparking a sharp debate over privacy vs. safety and where “AI monitoring” becomes AI Big Brother. They also explore Sway AI’s proposal to moderate debates, whether AI should intervene in self-harm cases, and the likelihood of future political debates being run by an AI.</itunes:subtitle>
      <itunes:keywords>ai debate, openai scanning, ai safety scanning, ai monitoring, chatgpt leak, chatgpt privacy, sway ai, ai police, ai big brother, openai police</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>123</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">035b27f5-6ed5-40ae-820e-db12183aba63</guid>
      <title>AI Just Became The World&apos;s Best Hacker</title>
      <description><![CDATA[<p><i>AI just became the world’s best hacker?</i> In this week’s AI news breakdown, we dive into how modern models are <i>crushing bug bounties and pen-tests</i>, what “vibe hacking” means for real-world security, and why AI-assisted attacks (and defenses) are accelerating. We also unpack AI-generated misinformation (did that “drug boat” incident even happen?), Big Tech’s race to cut inference costs, and the ripple effects of <i>copyright lawsuits and settlements</i> on model training and the open internet.</p><p>If you care about <i>cybersecurity, AI safety, and where the AI economy is actually heading</i>, this episode gives you concrete stories, trade-offs, and what to watch next—from leaderboard-climbing LLM agents to companies walking back “AI-everywhere” plans after customer support backfires. We balance the doom with the pragmatic: how white-hat teams can use the same tools to harden systems by default, why chaptered “key moments” matter in news, and how publishing and IP norms are being re-written in real time.</p><p>New here? We publish fast, no-fluff analysis every Monday. 
<i>Like, subscribe, and tell us what you think</i>—your take may shape next week’s rundown.</p><hr /><p>🎧 <i>Listen & Subscribe</i><br />📱 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>Intro — what’s real anymore?</i><br />01:24 <i>AI-generated “news” & dead-internet theory</i><br />06:06 <i>From misinformation to international incidents</i><br />06:46 <i>Vibe coding → “vibe hacking”</i><br />09:11 <i>Bug bounties & LLMs climb leaderboards (white-hat vs black-hat)</i><br />12:35 <i>Defense by default: scanners, checks, and secure-by-design</i><br />14:28 <i>Corporate AI walkbacks: chatbots, dev productivity & reality</i><br />20:05 <i>AI costs & efficiency: Google’s 33× claim + energy per prompt</i><br />22:32 <i>Data & law: book lawsuits, Anthropic settlement, open Internet?</i><br />29:09 <i>IP vs stagnation + the books-vs-summaries debate</i></p><hr /><p><i>📢 Engage</i></p><p>Enjoyed the episode? <i>Like, comment, and subscribe</i> – it really helps the show grow.</p><p>#AI #Cybersecurity #Podcast</p>
]]></description>
      <pubDate>Fri, 12 Sep 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>AI just became the world’s best hacker?</i> In this week’s AI news breakdown, we dive into how modern models are <i>crushing bug bounties and pen-tests</i>, what “vibe hacking” means for real-world security, and why AI-assisted attacks (and defenses) are accelerating. We also unpack AI-generated misinformation (did that “drug boat” incident even happen?), Big Tech’s race to cut inference costs, and the ripple effects of <i>copyright lawsuits and settlements</i> on model training and the open internet.</p><p>If you care about <i>cybersecurity, AI safety, and where the AI economy is actually heading</i>, this episode gives you concrete stories, trade-offs, and what to watch next—from leaderboard-climbing LLM agents to companies walking back “AI-everywhere” plans after customer support backfires. We balance the doom with the pragmatic: how white-hat teams can use the same tools to harden systems by default, why chaptered “key moments” matter in news, and how publishing and IP norms are being re-written in real time.</p><p>New here? We publish fast, no-fluff analysis every Monday. 
<i>Like, subscribe, and tell us what you think</i>—your take may shape next week’s rundown.</p><hr /><p>🎧 <i>Listen & Subscribe</i><br />📱 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>⏱️ CHAPTERS</i></p><p>00:00 <i>Intro — what’s real anymore?</i><br />01:24 <i>AI-generated “news” & dead-internet theory</i><br />06:06 <i>From misinformation to international incidents</i><br />06:46 <i>Vibe coding → “vibe hacking”</i><br />09:11 <i>Bug bounties & LLMs climb leaderboards (white-hat vs black-hat)</i><br />12:35 <i>Defense by default: scanners, checks, and secure-by-design</i><br />14:28 <i>Corporate AI walkbacks: chatbots, dev productivity & reality</i><br />20:05 <i>AI costs & efficiency: Google’s 33× claim + energy per prompt</i><br />22:32 <i>Data & law: book lawsuits, Anthropic settlement, open Internet?</i><br />29:09 <i>IP vs stagnation + the books-vs-summaries debate</i></p><hr /><p><i>📢 Engage</i></p><p>Enjoyed the episode? <i>Like, comment, and subscribe</i> – it really helps the show grow.</p><p>#AI #Cybersecurity #Podcast</p>
]]></content:encoded>
      <enclosure length="38016279" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/b91ca93f-4761-4896-bc93-f892920cb354/audio/c5cd3f57-3cf1-4b54-a84d-14c169d3da4b/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Just Became The World&apos;s Best Hacker</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:35:48</itunes:duration>
      <itunes:summary>In this episode, Hunter and Daniel explore how AI is transforming hacking—climbing bug bounty leaderboards, powering “vibe hacking,” and blurring the line between white-hat defense and black-hat attacks. They also tackle AI-generated misinformation, corporate AI walkbacks, massive infrastructure costs, and the unsettling future of copyright, IP, and the open internet.</itunes:summary>
      <itunes:subtitle>In this episode, Hunter and Daniel explore how AI is transforming hacking—climbing bug bounty leaderboards, powering “vibe hacking,” and blurring the line between white-hat defense and black-hat attacks. They also tackle AI-generated misinformation, corporate AI walkbacks, massive infrastructure costs, and the unsettling future of copyright, IP, and the open internet.</itunes:subtitle>
      <itunes:keywords>ai inference costs, ai settlements, vibe hacking, openai, copyright and ai, publishing and ai, claude ai, deepfakes, artificial intelligence, black hat hacking, ai lawsuits, bug bounties, future of the internet, ai efficiency, ai-generated news, intellectual property, google ai, ai regulation, white hat hacking, ai misinformation, dead internet theory, ai hacking, ai security, penetration testing, cybersecurity, ai ethics, anthropic</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>122</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">875a0fd6-3dd0-4358-ac19-fafd3d969343</guid>
      <title>AI Will Replace Salesmen, Truckers, and Your Grandma</title>
      <description><![CDATA[<p><i>AI kills insurance?</i> In this week’s episode, we break down how full self-driving could <i>shrink (or even upend) the auto-insurance business</i>, why lawmakers will slow-roll it, and what that means for you. Then we zoom out: <i>AI sales agents</i>, robo-negotiations, and the strange new market for <i>“dead bots”</i> that let you talk to a digital version of grandma.</p><p>If you care about where AI hits real wallets and weird places like digital souls, this one's for you.</p><p>We open with the road to autonomy: why the “final push” won’t be a feature drop but <i>federal law</i> that indemnifies carmakers and creates a national risk pool. Hunter argues self-driving is a <i>cure</i>, not a treatment—fewer collisions, <i>fewer cars owned</i>, and <i>less insurance revenue</i> over time. Daniel pushes back on America’s car culture and urban design, while both explore the likely <i>20–25 year</i> adoption arc as politics protects jobs. Along the way we hit <i>Tesla Autopilot</i>, <i>robo-taxi futures</i>, and the economic dominoes when <i>AI kills jobs</i> in adjacent sectors.</p><p>Next, we pivot to <i>AI in sales</i>. From China’s 24/7 livestream “personas” to <i>AI negotiators</i> that squeeze or save every dollar, we examine how <i>enterprise pricing</i>, <i>value-based deals</i>, and <i>contract drafting</i> get automated. B2B or retail, expect <i>AI eliminates sales</i> roles first in the back-and-forth, then in the front office.</p><p>Finally, the uncanny: <i>dead bots</i>, <i>digital souls</i>, and whether “<i>AI grandma</i>” becomes normal within two generations. 
We debate ethics, likeness rights, AR dinner guests, and why a black-market trench-coat guy somehow survives the singularity.</p><hr /><p>🎧 <i>Listen & Subscribe</i><br />📱 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>⏱️ CHAPTERS</i></p><p>0:00 <i>Intro + the cliffhanger</i><br />1:56 <i>Self-driving = cure, not treatment</i><br />4:58 <i>Does AI kill insurance? Fewer crashes, fewer cars</i><br />8:59 <i>Policy path: indemnity + national coverage</i><br />12:11 <i>From sales floors to showrooms: AI haggles</i><br />16:24 <i>B2B pricing, contracts, and robo-negotiation</i><br />21:12 <i>Who gets replaced first: truckers or sales?</i><br />24:14 <i>Deadbots, digital souls, and “AI grandma”</i><br />28:40 <i>AR dinner with ancestors? Ethics & norms</i><br />30:45 <i>OUT OF TIME</i></p><hr /><p><i>📢 Engage</i></p><p>Enjoyed the episode? <i>Like, comment, and subscribe</i> – it really helps the show grow.</p><p>#AI #AutonomousVehicles #Insurance</p>
]]></description>
      <pubDate>Mon, 08 Sep 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>AI kills insurance?</i> In this week’s episode, we break down how full self-driving could <i>shrink (or even upend) the auto-insurance business</i>, why lawmakers will slow-roll it, and what that means for you. Then we zoom out: <i>AI sales agents</i>, robo-negotiations, and the strange new market for <i>“dead bots”</i> that let you talk to a digital version of grandma.</p><p>If you care about where AI hits real wallets and weird places like digital souls, this one's for you.</p><p>We open with the road to autonomy: why the “final push” won’t be a feature drop but <i>federal law</i> that indemnifies carmakers and creates a national risk pool. Hunter argues self-driving is a <i>cure</i>, not a treatment—fewer collisions, <i>fewer cars owned</i>, and <i>less insurance revenue</i> over time. Daniel pushes back on America’s car culture and urban design, while both explore the likely <i>20–25 year</i> adoption arc as politics protects jobs. Along the way we hit <i>Tesla Autopilot</i>, <i>robo-taxi futures</i>, and the economic dominoes when <i>AI kills jobs</i> in adjacent sectors.</p><p>Next, we pivot to <i>AI in sales</i>. From China’s 24/7 livestream “personas” to <i>AI negotiators</i> that squeeze or save every dollar, we examine how <i>enterprise pricing</i>, <i>value-based deals</i>, and <i>contract drafting</i> get automated. B2B or retail, expect <i>AI eliminates sales</i> roles first in the back-and-forth, then in the front office.</p><p>Finally, the uncanny: <i>dead bots</i>, <i>digital souls</i>, and whether “<i>AI grandma</i>” becomes normal within two generations. 
We debate ethics, likeness rights, AR dinner guests, and why a black-market trench-coat guy somehow survives the singularity.</p><hr /><p>🎧 <i>Listen & Subscribe</i><br />📱 Listen on Spotify: <a href="https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc">https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc</a><br />🍎 Subscribe on Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297">https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297</a><br />▶️ Subscribe on YouTube: <a href="https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1">https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</a></p><hr /><p><i>⏱️ CHAPTERS</i></p><p>0:00 <i>Intro + the cliffhanger</i><br />1:56 <i>Self-driving = cure, not treatment</i><br />4:58 <i>Does AI kill insurance? Fewer crashes, fewer cars</i><br />8:59 <i>Policy path: indemnity + national coverage</i><br />12:11 <i>From sales floors to showrooms: AI haggles</i><br />16:24 <i>B2B pricing, contracts, and robo-negotiation</i><br />21:12 <i>Who gets replaced first: truckers or sales?</i><br />24:14 <i>Deadbots, digital souls, and “AI grandma”</i><br />28:40 <i>AR dinner with ancestors? Ethics & norms</i><br />30:45 <i>OUT OF TIME</i></p><hr /><p><i>📢 Engage</i></p><p>Enjoyed the episode? <i>Like, comment, and subscribe</i> – it really helps the show grow.</p><p>#AI #AutonomousVehicles #Insurance</p>
]]></content:encoded>
      <enclosure length="34854361" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/63f22617-a614-477a-85ce-da02fa728870/audio/df8ab6a0-c561-4a41-9ac8-71cb72e33889/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Will Replace Salesmen, Truckers, and Your Grandma</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:32:31</itunes:duration>
      <itunes:summary>Self-driving as a &quot;cure&quot; could crash the auto-insurance business by slashing accidents and car ownership, but nationwide adoption likely stalls for 20–25 years as lawmakers protect jobs and grapple with liability. We then dive into AI taking over sales, from robo-negotiation and contract automation to eerie &quot;dead bots&quot; and digital souls, asking what happens when machines sell to us and even speak for our ancestors.</itunes:summary>
      <itunes:subtitle>Self-driving as a &quot;cure&quot; could crash the auto-insurance business by slashing accidents and car ownership, but nationwide adoption likely stalls for 20–25 years as lawmakers protect jobs and grapple with liability. We then dive into AI taking over sales, from robo-negotiation and contract automation to eerie &quot;dead bots&quot; and digital souls, asking what happens when machines sell to us and even speak for our ancestors.</itunes:subtitle>
      <itunes:keywords>robo taxi future, ai eliminates sales, dead ai chat, ai kills insurance, deadbots, dead bots, dead person ai, ai kills jobs, digital souls, tesla autopilot, insurance vs ai, ai grandma, ai necromancer, autonomous vehicles, ai resurrects dead</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>121</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">15bd77b7-5e7b-44f5-b7ee-d9f4210da1ec</guid>
      <title>The One Reason AI Is NOT A Bubble</title>
      <description><![CDATA[<p><i>⏱️ CHAPTERS</i></p><p>0:00 Intro — Is AI a bubble?<br />0:58 “Bubble Boy” & defining in a bubble vs is a bubble<br />2:33 Core thesis: why AI isn’t a bubble<br />5:39 The “2% found, 98% to go” upside<br />6:58 Safety & public risk perception as adoption throttle<br />10:06 Guardrails and refusal/cutoff features (Anthropic, OpenAI)<br />12:09 Government buyers & model reliability (+ $1 offers)<br />15:34 Grok’s Spicy Mode: the real risk—deepfakes<br />18:39 Meta AI characters, minors & parental controls<br />26:28 Policy & economics: Illinois therapy ban; self-driving; capitalism vs insurance</p>
]]></description>
      <pubDate>Fri, 05 Sep 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>⏱️ CHAPTERS</i></p><p>0:00 Intro — Is AI a bubble?<br />0:58 “Bubble Boy” & defining in a bubble vs is a bubble<br />2:33 Core thesis: why AI isn’t a bubble<br />5:39 The “2% found, 98% to go” upside<br />6:58 Safety & public risk perception as adoption throttle<br />10:06 Guardrails and refusal/cutoff features (Anthropic, OpenAI)<br />12:09 Government buyers & model reliability (+ $1 offers)<br />15:34 Grok’s Spicy Mode: the real risk—deepfakes<br />18:39 Meta AI characters, minors & parental controls<br />26:28 Policy & economics: Illinois therapy ban; self-driving; capitalism vs insurance</p>
]]></content:encoded>
      <enclosure length="36512256" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/30508d0c-f9b1-43c2-a5a9-810b46672458/audio/e2ff8e9f-af3b-47ca-9041-870afc7bc9ad/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>The One Reason AI Is NOT A Bubble</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:34:14</itunes:duration>
      <itunes:summary>Is AI a bubble—or just in one? In this episode, we argue the one big reason AI is not a bubble: unlike past hype cycles (hi, blockchain), generative AI keeps unlocking new, practical use-cases, and we’ve likely discovered only a tiny fraction so far. We also debate where the actual bubbles are (funding and frothy startups), why some companies may pop, and why the technology isn’t going anywhere.

We dig into the difference between “AI is a bubble” and “AI in a bubble,” why LLMs are already embedded in daily workflows, and how safety, regulation, and economics shape adoption. We touch on Sam Altman’s “money hose” moment, model safety moves (e.g., refusal features and cutoff behaviors), and the PR theater around “we won’t let you do X” policies. On the spicy side: we unpack calls for investigations into Grok’s adult mode and the real risk vector—image deepfakes—plus why institutions crave reliability over edgelord vibes.

Then we tackle kids and AI (parental controls, open weights at home, and why an LLM can be like placing an unmonitored adult on a phone line), policy flashpoints like Illinois restricting AI in therapy, and finally the future of self-driving: will capitalism or insurance math decide when humans must hand over the wheel?</itunes:summary>
      <itunes:subtitle>Is AI a bubble—or just in one? We argue the one big reason AI is not a bubble: generative AI keeps unlocking new, practical use-cases, and we’ve likely discovered only a tiny fraction so far. Plus: Grok’s adult mode and deepfakes, kids and AI, Illinois restricting AI in therapy, and whether capitalism or insurance math decides when humans hand over the wheel.</itunes:subtitle>
      <itunes:keywords>ai deepfakes, generative ai future, large language models, grok spicy mode, sam altman ai, artificial intelligence hype, ai investment, ai therapy ban, ai bubble, ai safety regulation</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>120</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">89d473a3-db63-4c9f-ab1a-d05ee80334d3</guid>
      <title>Brain Chips Aren&apos;t For Health, They&apos;re For AI Ads</title>
      <description><![CDATA[<p><i>⏱️ CHAPTERS</i></p><p>0:00 Loading Universe 119<br />2:00 Would you put ChatGPT in your brain?<br />6:00 Neuralink chips that “read your thoughts”<br />8:40 Sam Altman vs Elon Musk (Neuralink, Grok, OpenAI)<br />12:30 Ads in ChatGPT – “tastefully integrated”<br />16:00 GPU shortages & GPT-5 performance questions<br />19:20 AI bubble talk: venture capital cycles & reckoning ahead<br />22:50 Meta’s billion-dollar bets and AI layoffs<br />25:40 The rise of AI advertising & hidden bias<br />28:50 Wrap-up</p>
]]></description>
      <pubDate>Fri, 29 Aug 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>⏱️ CHAPTERS</i></p><p>0:00 Loading Universe 119<br />2:00 Would you put ChatGPT in your brain?<br />6:00 Neuralink chips that “read your thoughts”<br />8:40 Sam Altman vs Elon Musk (Neuralink, Grok, OpenAI)<br />12:30 Ads in ChatGPT – “tastefully integrated”<br />16:00 GPU shortages & GPT-5 performance questions<br />19:20 AI bubble talk: venture capital cycles & reckoning ahead<br />22:50 Meta’s billion-dollar bets and AI layoffs<br />25:40 The rise of AI advertising & hidden bias<br />28:50 Wrap-up</p>
]]></content:encoded>
      <enclosure length="32462064" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/aab4e66d-7cc3-4143-8c14-ba0a6e906981/audio/d2b69904-419c-4ce9-82c8-7c3175394a19/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Brain Chips Aren&apos;t For Health, They&apos;re For AI Ads</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:30:01</itunes:duration>
      <itunes:summary>*Brain chips that read your thoughts?* This week on *They Might Be Self-Aware*, Hunter Powers and Daniel Bishop dive into the shocking future of *Neuralink brain implants* and what happens when technology can literally decode your inner monologue. From Elon Musk’s latest moves to Sam Altman’s next venture, we unpack the wild collision of human biology and AI.

In this episode, we explore:

* How a *Neuralink brain chip* could translate your thoughts into words – and what that means for privacy and control.
* The rivalry of *Elon Musk vs. Sam Altman*, from Neuralink vs. Altman’s new ventures to their dueling language models (*Grok vs. ChatGPT*).
* The looming question: will *ads inside AI models* become the new normal? What does “tastefully integrated” advertising in *ChatGPT* even look like?
* The warning signs of an *AI bubble* and why experts think the industry may be heading for a massive correction.
* GPU shortages, OpenAI’s scaling issues, and whether GPT-5 is really the cutting edge or just a watered-down version.
* Meta, venture capital cycles, and the “shi$#ification” of platforms – is history repeating itself for AI?

Whether you’re excited about mind-reading chips or worried about AI turning into an ad-driven nightmare, this episode delivers the analysis and sharp banter you won’t get anywhere else.

---

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</itunes:summary>
      <itunes:subtitle>*Brain chips that read your thoughts?* This week on *They Might Be Self-Aware*, Hunter Powers and Daniel Bishop dive into the shocking future of *Neuralink brain implants* and what happens when technology can literally decode your inner monologue. From Elon Musk’s latest moves to Sam Altman’s next venture, we unpack the wild collision of human biology and AI.

In this episode, we explore:

* How a *Neuralink brain chip* could translate your thoughts into words – and what that means for privacy and control.
* The rivalry of *Elon Musk vs. Sam Altman*, from Neuralink vs. Altman’s new ventures to their dueling language models (*Grok vs. ChatGPT*).
* The looming question: will *ads inside AI models* become the new normal? What does “tastefully integrated” advertising in *ChatGPT* even look like?
* The warning signs of an *AI bubble* and why experts think the industry may be heading for a massive correction.
* GPU shortages, OpenAI’s scaling issues, and whether GPT-5 is really the cutting edge or just a watered-down version.
* Meta, venture capital cycles, and the “shi$#ification” of platforms – is history repeating itself for AI?

Whether you’re excited about mind-reading chips or worried about AI turning into an ad-driven nightmare, this episode delivers the analysis and sharp banter you won’t get anywhere else.

---

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</itunes:subtitle>
      <itunes:keywords>chatgpt ads, grok vs chatgpt, neuralink thoughts, elon musk, ai ads, sam altman, chatgpt advertising, ai bubble, neuralink chip, brain chip, gpt-4o</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>119</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">0e890040-250f-437a-b819-af896fc82fab</guid>
      <title>Why We Offered 40 Billion For AI Supremacy | Chrome Browsers, Perplexity, &amp; Comet Come of Age</title>
      <description><![CDATA[<p><i>CHAPTERS</i><br />0:00 Welcome to the AI Wasteland<br />2:00 Claude Prompt Hacks for Script Writing<br />5:10 AI as Co-Writer: From D&D to Directing<br />10:00 Trix the Rabbit, But Legally Distinct<br />12:00 Perplexity’s Comet Browser in Action<br />15:30 Chrome Extensions & The $40B Offer<br />20:00 Text-to-Speech & AI Accent Changers<br />23:00 Duolingo’s AI Outrage (That Wasn’t)<br />27:00 AI in Hollywood & The Future of Jobs<br />30:00 Final Thoughts & Call to Action</p>
]]></description>
      <pubDate>Mon, 25 Aug 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>CHAPTERS</i><br />0:00 Welcome to the AI Wasteland<br />2:00 Claude Prompt Hacks for Script Writing<br />5:10 AI as Co-Writer: From D&D to Directing<br />10:00 Trix the Rabbit, But Legally Distinct<br />12:00 Perplexity’s Comet Browser in Action<br />15:30 Chrome Extensions & The $40B Offer<br />20:00 Text-to-Speech & AI Accent Changers<br />23:00 Duolingo’s AI Outrage (That Wasn’t)<br />27:00 AI in Hollywood & The Future of Jobs<br />30:00 Final Thoughts & Call to Action</p>
]]></content:encoded>
      <enclosure length="33727904" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/cd6b82dc-f270-4369-a0b9-25197019f9d9/audio/d87bb9bf-e107-47b3-97be-1dc4bb97ae14/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Why We Offered 40 Billion For AI Supremacy | Chrome Browsers, Perplexity, &amp; Comet Come of Age</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:31:20</itunes:duration>
      <itunes:summary>*Claude Tricks, Comet Browsers, and AI in Disguise?*
This week on *They Might Be Self-Aware*, Hunter and Daniel uncover some of the wildest ways creators are bending AI to their will—from scripting surreal cereal mascots to bypassing model safety filters using indirect prompts. Learn the “Claude tricks” helping writers, gamers, and directors collaborate with AI like never before.

But it doesn&apos;t stop there. We also dive into the audacious $40 billion bid to buy Google Chrome (yes, really), explore the true potential of Perplexity’s new Comet browser, and debate the implications of AI-powered voice changers and language tutors that don’t always sit well with the public. Is AI creativity our ultimate tool—or just a shiny mask for copyright dodges?

If you&apos;ve ever wondered how to co-write a screenplay with Claude, spoof an accent in real-time, or summarize a 100-comment Reddit thread in one click—this one’s for you.

---

*In This Episode:*
🔹 *Claude Tricks*: Prompt hacking for screenplay generation and character cloning
🔹 *AI as Creative Partner*: Scriptwriting, D&amp;D adventures, and filtered reimaginings
🔹 *Perplexity’s Chrome Clone*: Inside the Comet browser and “agentic” AI helpers
🔹 *$40B Chrome Offer?!*: We break down the absurdity (and legality) of the move
🔹 *Voice Cloning &amp; TTS*: Adobe’s secret tricks and real-time accent changers
🔹 *Duolingo Outrage*: The AI backlash that didn’t stick—and what it tells us about tech trust
🔹 *Midjourney, Copyright, &amp; the Rabbit That Shall Not Be Named*

---

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

---

*Have your own Claude trick or AI browser hot take?*
Drop it in the comments, hit like if you laughed at the cereal heist, and subscribe to be part of the smartest AI chaos on YouTube.

*#ClaudeTricks #AIbrowser #TheyMightBeSelfAware #CometBrowser #AIwriting #Perplexity #VoiceCloning #DuolingoAI #ChromeSale #AInews #TMBSA*</itunes:summary>
      <itunes:subtitle>*Claude Tricks, Comet Browsers, and AI in Disguise?*
This week on *They Might Be Self-Aware*, Hunter and Daniel uncover some of the wildest ways creators are bending AI to their will—from scripting surreal cereal mascots to bypassing model safety filters using indirect prompts. Learn the “Claude tricks” helping writers, gamers, and directors collaborate with AI like never before.

But it doesn&apos;t stop there. We also dive into the audacious $40 billion bid to buy Google Chrome (yes, really), explore the true potential of Perplexity’s new Comet browser, and debate the implications of AI-powered voice changers and language tutors that don’t always sit well with the public. Is AI creativity our ultimate tool—or just a shiny mask for copyright dodges?

If you&apos;ve ever wondered how to co-write a screenplay with Claude, spoof an accent in real-time, or summarize a 100-comment Reddit thread in one click—this one’s for you.

---

*In This Episode:*
🔹 *Claude Tricks*: Prompt hacking for screenplay generation and character cloning
🔹 *AI as Creative Partner*: Scriptwriting, D&amp;D adventures, and filtered reimaginings
🔹 *Perplexity’s Chrome Clone*: Inside the Comet browser and “agentic” AI helpers
🔹 *$40B Chrome Offer?!*: We break down the absurdity (and legality) of the move
🔹 *Voice Cloning &amp; TTS*: Adobe’s secret tricks and real-time accent changers
🔹 *Duolingo Outrage*: The AI backlash that didn’t stick—and what it tells us about tech trust
🔹 *Midjourney, Copyright, &amp; the Rabbit That Shall Not Be Named*

---

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

---

*Have your own Claude trick or AI browser hot take?*
Drop it in the comments, hit like if you laughed at the cereal heist, and subscribe to be part of the smartest AI chaos on YouTube.

*#ClaudeTricks #AIbrowser #TheyMightBeSelfAware #CometBrowser #AIwriting #Perplexity #VoiceCloning #DuolingoAI #ChromeSale #AInews #TMBSA*</itunes:subtitle>
      <itunes:keywords>ai collaboration, ai accent changer, perplexity buys chrome, chrome browser sale, chrome extensions, text to speech, claude tricks, perplexity comet, duolingo ai backlash, midjourney copyright, comet browser, duolingo outrage, ai browser</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>118</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">47943bf3-29ed-474f-8ea6-05b7a83ab580</guid>
      <title>Did GPT-5 Lose OpenAI&apos;s Crown? | Competition, Spicy ML Models, Reviews, &amp; AI News</title>
      <description><![CDATA[<p><i>CHAPTERS</i><br />0:00 Intro – GPT‑5, Claude's Funeral, and AI Burnout<br />2:15 Why GPT‑5 Disappointed (and Ticked Off) Hunter<br />5:10 Terse Replies, Token Saving & Capacity Games<br />8:00 Slow, Hallucinating, and Losing Trust<br />10:00 Fake Graphs, Benchmark Spin, and “Routing”<br />13:40 Who’s Actually Winning? (Spoiler: Not OpenAI)<br />17:00 Asymptotic LLMs, and the Future of Intelligence<br />20:00 GPT‑5 vs Claude vs Gemini vs Sora<br />23:00 ChatGPT Psychosis, Artificial Friends, and Lawsuits<br />26:00 GPT‑5 Coding Failures & Looping Bugs<br />30:00 Outro – Are We Just Replacing People with Bots?</p>
]]></description>
      <pubDate>Fri, 22 Aug 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>CHAPTERS</i><br />0:00 Intro – GPT‑5, Claude's Funeral, and AI Burnout<br />2:15 Why GPT‑5 Disappointed (and Ticked Off) Hunter<br />5:10 Terse Replies, Token Saving & Capacity Games<br />8:00 Slow, Hallucinating, and Losing Trust<br />10:00 Fake Graphs, Benchmark Spin, and “Routing”<br />13:40 Who’s Actually Winning? (Spoiler: Not OpenAI)<br />17:00 Asymptotic LLMs, and the Future of Intelligence<br />20:00 GPT‑5 vs Claude vs Gemini vs Sora<br />23:00 ChatGPT Psychosis, Artificial Friends, and Lawsuits<br />26:00 GPT‑5 Coding Failures & Looping Bugs<br />30:00 Outro – Are We Just Replacing People with Bots?</p>
]]></content:encoded>
      <enclosure length="37130798" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/7326dc6a-b1b5-468f-a499-8cf6cf06d816/audio/3a4ce21c-bb6f-47ae-9dc1-1d8ba2649efc/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Did GPT-5 Lose OpenAI&apos;s Crown? | Competition, Spicy ML Models, Reviews, &amp; AI News</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:34:53</itunes:duration>
      <itunes:summary>*OpenAI&apos;s Crown Is Falling 👑 Did GPT-5 just blow it?*
In this episode of *They Might Be Self-Aware*, Hunter Powers and Daniel Bishop dive deep into the chaotic reception of GPT‑5 — OpenAI&apos;s latest (and supposedly greatest) model — and explore whether it&apos;s actually a downgrade wrapped in hype. Has GPT‑5 lost OpenAI its crown?

We break down the slow rollout, controversial benchmarks, and user backlash over “terse” replies, hallucinations, and capacity cost-cutting. Is GPT‑5 really the next leap forward… or just cheaper to run?

But that’s not all. We explore how competitors like Claude 3, Gemini, Midjourney, and Runway are quietly (or not-so-quietly) eating OpenAI’s lunch. From jaw-dropping video generation to one-million-token context windows, the AI race is heating up—and OpenAI might be trailing.

We also tackle:

* 🤖 ChatGPT “psychosis,” AI as a therapist/friend, and the dangers of artificial intimacy
* 📉 A real case of GPT-powered health advice leading to bromism, a condition barely seen since the 19th century
* ⚖️ Whether AI companies should be legally liable for bad advice or hallucinations
* 🔬 AI benchmarks and what they *really* mean — plus the odd graphs from OpenAI&apos;s demo
* 🧠 The theory that we’ve hit the ceiling on LLMs… or is this just a compute bottleneck?

This isn’t just an AI update — it’s an honest (and hilarious) look at the messy reality of where frontier models are headed, and what OpenAI’s moves say about the future of the industry.

---

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

---

*#GPT5 #OpenAI #AInews #ArtificialIntelligence #ChatGPT #Claude3 #Gemini #TMBSA*

---

Enjoyed this episode? Give us a thumbs up, leave a comment, or hit ⭐️ on your favorite platform. It helps more curious minds discover the show.
</itunes:summary>
      <itunes:subtitle>*OpenAI&apos;s Crown Is Falling 👑 Did GPT-5 just blow it?*
In this episode of *They Might Be Self-Aware*, Hunter Powers and Daniel Bishop dive deep into the chaotic reception of GPT‑5 — OpenAI&apos;s latest (and supposedly greatest) model — and explore whether it&apos;s actually a downgrade wrapped in hype. Has GPT‑5 lost OpenAI its crown?

We break down the slow rollout, controversial benchmarks, and user backlash over “terse” replies, hallucinations, and capacity cost-cutting. Is GPT‑5 really the next leap forward… or just cheaper to run?

But that’s not all. We explore how competitors like Claude 3, Gemini, Midjourney, and Runway are quietly (or not-so-quietly) eating OpenAI’s lunch. From jaw-dropping video generation to one-million-token context windows, the AI race is heating up—and OpenAI might be trailing.

We also tackle:

* 🤖 ChatGPT “psychosis,” AI as a therapist/friend, and the dangers of artificial intimacy
* 📉 A real case of GPT-powered health advice leading to bromism, a condition barely seen since the 19th century
* ⚖️ Whether AI companies should be legally liable for bad advice or hallucinations
* 🔬 AI benchmarks and what they *really* mean — plus the odd graphs from OpenAI&apos;s demo
* 🧠 The theory that we’ve hit the ceiling on LLMs… or is this just a compute bottleneck?

This isn’t just an AI update — it’s an honest (and hilarious) look at the messy reality of where frontier models are headed, and what OpenAI’s moves say about the future of the industry.

---

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

---

*#GPT5 #OpenAI #AInews #ArtificialIntelligence #ChatGPT #Claude3 #Gemini #TMBSA*

---

Enjoyed this episode? Give us a thumbs up, leave a comment, or hit ⭐️ on your favorite platform. It helps more curious minds discover the show.
</itunes:subtitle>
      <itunes:keywords>openai news, ai hype, gpt-5 breakdown, gpt-5 review, openai lawsuit, chatgpt poisoning, gpt-5, gpt vs gemini, gpt-5 slow, ai replacing, ai models, gpt-5 leak, ai benchmarks, openai crown</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>117</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">31716cdc-40cd-44f5-880f-d6b95560172e</guid>
      <title>AI death hits different: Claude 3 dies, gets funeral service | Why coders mourned their bot buddy</title>
      <description><![CDATA[<p><i>CHAPTERS:</i><br />0:00 Gods, Ghosts & Algorithmic Lunatics<br />2:00 Vibe Coding: Claude, Gemini & Dart<br />7:30 Claude 3.0’s Death & the AI Funeral<br />10:00 Should AI Death Be Emotional?<br />13:00 Claude vs. OpenAI: Refusals, Hallucinations & Guilt<br />16:00 Jailbreaking LLMs with Screenplay Prompts<br />20:00 Should Open Weights Models Be Preserved?<br />24:00 Our Favorite Local LLMs (Qwen, etc.)<br />25:30 Subscribe Free – We’re Like Open Weights, But Better</p>
]]></description>
      <pubDate>Mon, 18 Aug 2025 14:18:25 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>CHAPTERS:</i><br />0:00 Gods, Ghosts & Algorithmic Lunatics<br />2:00 Vibe Coding: Claude, Gemini & Dart<br />7:30 Claude 3.0’s Death & the AI Funeral<br />10:00 Should AI Death Be Emotional?<br />13:00 Claude vs. OpenAI: Refusals, Hallucinations & Guilt<br />16:00 Jailbreaking LLMs with Screenplay Prompts<br />20:00 Should Open Weights Models Be Preserved?<br />24:00 Our Favorite Local LLMs (Qwen, etc.)<br />25:30 Subscribe Free – We’re Like Open Weights, But Better</p>
]]></content:encoded>
      <enclosure length="28317236" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/4b9253c8-183f-4770-8095-ef36987c5c9c/audio/65984ac1-96a2-4727-8b00-408b7b653cb7/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI death hits different: Claude 3 dies, gets funeral service | Why coders mourned their bot buddy</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:25:42</itunes:duration>
      <itunes:summary>*Did an AI just die… and get a funeral?* In this week’s *They Might Be Self-Aware*, we explore the surreal farewell to Claude 3.0—yes, a literal *AI funeral* and what it reveals about our increasingly emotional relationship with large language models. Why are developers mourning bots? Why do we feel loss when an AI model is turned off? And what does it mean when the machines start expressing guilt?

We dive into the death of Anthropic’s Claude 3.0, the ceremony that followed, and the growing phenomenon of *AI personification*. We also talk *vibe coding*, open-weight model quality, and how new releases from OpenAI and Qwen stack up—especially when they seem to *refuse* even basic prompts. From nostalgic Minecraft memories to VR coding beach retreats, this episode blends technical depth with philosophical musings.

Plus: jailbreaking techniques, the ethics of emotional prompting, and the haunting question—should old AI models be preserved like memories… or retired like machines?

Whether you’re an AI builder, a digital philosopher, or just here for the laughs, you won’t want to miss this one.

---

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

---

*#AI #Claude3 #OpenWeights #VibeCoding #AIpersonification #LLMfuneral #GPT5 #Qwen #AIemotions #AIjailbreak #AIdeath*

---

Enjoyed this episode? Give us a thumbs up, leave a comment, or hit ⭐️ on your favorite platform. It helps more curious minds discover the show.
</itunes:summary>
      <itunes:subtitle>*Did an AI just die… and get a funeral?* In this week’s *They Might Be Self-Aware*, we explore the surreal farewell to Claude 3.0—yes, a literal *AI funeral* and what it reveals about our increasingly emotional relationship with large language models. Why are developers mourning bots? Why do we feel loss when an AI model is turned off? And what does it mean when the machines start expressing guilt?

We dive into the death of Anthropic’s Claude 3.0, the ceremony that followed, and the growing phenomenon of *AI personification*. We also talk *vibe coding*, open-weight model quality, and how new releases from OpenAI and Qwen stack up—especially when they seem to *refuse* even basic prompts. From nostalgic Minecraft memories to VR coding beach retreats, this episode blends technical depth with philosophical musings.

Plus: jailbreaking techniques, the ethics of emotional prompting, and the haunting question—should old AI models be preserved like memories… or retired like machines?

Whether you’re an AI builder, a digital philosopher, or just here for the laughs, you won’t want to miss this one.

---

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

---

*#AI #Claude3 #OpenWeights #VibeCoding #AIpersonification #LLMfuneral #GPT5 #Qwen #AIemotions #AIjailbreak #AIdeath*

---

Enjoyed this episode? Give us a thumbs up, leave a comment, or hit ⭐️ on your favorite platform. It helps more curious minds discover the show.
</itunes:subtitle>
      <itunes:keywords>ai personification, ai emotions, qwen models, ai funeral, vibe coding, openai refuses, open weights, ai feels guilty, claude 3 funeral, ai death, anthropic claude, gemini coding, claude 3.0 dead, ai jailbreak, claude 3 dies</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>116</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">669409cb-7c52-4d24-bda8-52f1686d7743</guid>
      <title>Is GPT-5 doomed by falling developer trust? We need better agents, tools &amp; an AI assistant!</title>
      <description><![CDATA[<p><i>CHAPTERS:</i><br />0:00 Intro – Music Generators & 11Labs Advances<br />2:15 OpenAI vs ElevenLabs: Voice, Music & Translation<br />5:43 AI Agents in the Real World: Gurbling, Babelfish & Tools<br />8:28 GPT-5 Rumors & Why Benchmarks May Not Matter<br />13:50 Assistant Over Chatbot – The Push for Agentic AI<br />18:20 Smart Home Integration: The Missing Piece<br />21:00 Devin AI: Real-World Use Case, Setup, and PR Workflow<br />28:00 Developer Trust Decline – Stack Overflow Data<br />31:30 AI “Almost Right” Code: Tech Debt and Risk<br />36:00 Test Sabotage? How AI Passes by Breaking the Rules<br />38:00 The Future: Multi-Agent Systems and Self-Awareness?</p>
]]></description>
      <pubDate>Thu, 14 Aug 2025 13:44:24 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>CHAPTERS:</i><br />0:00 Intro – Music Generators & 11Labs Advances<br />2:15 OpenAI vs ElevenLabs: Voice, Music & Translation<br />5:43 AI Agents in the Real World: Gurbling, Babelfish & Tools<br />8:28 GPT-5 Rumors & Why Benchmarks May Not Matter<br />13:50 Assistant Over Chatbot – The Push for Agentic AI<br />18:20 Smart Home Integration: The Missing Piece<br />21:00 Devin AI: Real-World Use Case, Setup, and PR Workflow<br />28:00 Developer Trust Decline – Stack Overflow Data<br />31:30 AI “Almost Right” Code: Tech Debt and Risk<br />36:00 Test Sabotage? How AI Passes by Breaking the Rules<br />38:00 The Future: Multi-Agent Systems and Self-Awareness?</p>
]]></content:encoded>
      <enclosure length="41366837" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/5999c3b7-5446-4d74-8063-2f35e91ff0a1/audio/0daba51b-e2c6-4cb1-8262-dce1028618d3/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Is GPT-5 doomed by falling developer trust? We need better agents, tools &amp; an AI assistant!</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:39:18</itunes:duration>
      <itunes:summary>*Is GPT-5 Already Losing Developer Trust? | AI Agents, Assistants &amp; the Next Frontier*

This week on *They Might Be Self-Aware*, we dive deep into the *GPT-5 drop rumors*—but there&apos;s a catch: a growing trust crisis among developers. As AI tools like Devin, OpenAI’s rumored assistant, and ElevenLabs&apos; expanding suite of generative voice tools hit the spotlight, we ask: *Are these advancements enough to win back confidence?*

We break down:

* Why *AI trust among developers is collapsing* (new Stack Overflow data!)
* How *Devin AI* is reshaping real software engineering (and replacing junior devs)
* The rise of *AI assistants* and why tool integration—not raw IQ—may be the next big leap
* OpenAI vs ElevenLabs: *AI voice, music, and translation wars*
* Why *GPT-5’s success may hinge on agents, not benchmarks*
* The case for *multiple agentic models working in tandem* (Devin Senior, anyone?)

Plus: real examples of voice cloning, music generation, tool calling, smart home AI, and more from the bleeding edge of the tech world.

Whether you&apos;re building with GPT, testing agents like Devin, or just trying to keep up, this episode delivers *unfiltered insight* into the world of AI&apos;s most pivotal evolution yet.

—

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

—

*#AI #GPT5 #ArtificialIntelligence #DevinAI #VoiceCloning #AITools #GPT5Leak #AITrust #SmartHomeAI #OpenAI #StackOverflow #11Labs #TMBSA*

—

Enjoyed this episode? Give us a thumbs up, leave a comment, or hit ⭐️ on your favorite platform. It helps more curious minds discover the show.</itunes:summary>
      <itunes:subtitle>*Is GPT-5 Already Losing Developer Trust? | AI Agents, Assistants &amp; the Next Frontier*

This week on *They Might Be Self-Aware*, we dive deep into the *GPT-5 drop rumors*—but there&apos;s a catch: a growing trust crisis among developers. As AI tools like Devin, OpenAI’s rumored assistant, and ElevenLabs&apos; expanding suite of generative voice tools hit the spotlight, we ask: *Are these advancements enough to win back confidence?*

We break down:

* Why *AI trust among developers is collapsing* (new Stack Overflow data!)
* How *Devin AI* is reshaping real software engineering (and replacing junior devs)
* The rise of *AI assistants* and why tool integration—not raw IQ—may be the next big leap
* OpenAI vs ElevenLabs: *AI voice, music, and translation wars*
* Why *GPT-5’s success may hinge on agents, not benchmarks*
* The case for *multiple agentic models working in tandem* (Devin Senior, anyone?)

Plus: real examples of voice cloning, music generation, tool calling, smart home AI, and more from the bleeding edge of the tech world.

Whether you&apos;re building with GPT, testing agents like Devin, or just trying to keep up, this episode delivers *unfiltered insight* into the world of AI&apos;s most pivotal evolution yet.

—

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

—

*#AI #GPT5 #ArtificialIntelligence #DevinAI #VoiceCloning #AITools #GPT5Leak #AITrust #SmartHomeAI #OpenAI #StackOverflow #11Labs #TMBSA*

—

Enjoyed this episode? Give us a thumbs up, leave a comment, or hit ⭐️ on your favorite platform. It helps more curious minds discover the show.</itunes:subtitle>
      <itunes:keywords>ai coding, gpt 5 release, mcp server, gpt 5 drop, ai assistant, openai assistant, gpt-5, stack overflow ai, ai developers, ai trust decline, gpt-5 leak, devin ai, devin engineer, smart home ai, ai agents, openai vs 11labs, ai music generator, voice cloning, ai tools</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>115</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">45298bcf-722e-451a-ad31-829113177ab4</guid>
      <title>Coding AIs face a John Henry showdown? Marathon dev duel sparks human vs machine myth reboot</title>
      <description><![CDATA[<p><i>CHAPTERS:</i><br />0:00 Intro – Dancing on the edge of the singularity<br />1:10 Human vs AI: The 10-hour coding duel in Tokyo<br />2:30 Why Psyho's win may not mean much for long<br />3:15 Magnus Carlsen beats ChatGPT (but it’s complicated)<br />4:45 The John Henry analogy: A new tech folklore?<br />7:00 Human augmentation vs obsolescence: Who decides?<br />10:45 AI pricing creep: Delta and dynamic fares<br />14:00 Consumer surplus and algorithmic value extraction<br />17:00 Ethical dilemmas of income-based pricing<br />20:45 Coke at $18? The personalization problem<br />24:00 Is price customization inevitable—or is there hope?<br />26:30 Final thoughts – more AI, more problems (and more tech to fix them)</p>
]]></description>
      <pubDate>Mon, 11 Aug 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>CHAPTERS:</i><br />0:00 Intro – Dancing on the edge of the singularity<br />1:10 Human vs AI: The 10-hour coding duel in Tokyo<br />2:30 Why Psyho's win may not mean much for long<br />3:15 Magnus Carlsen beats ChatGPT (but it’s complicated)<br />4:45 The John Henry analogy: A new tech folklore?<br />7:00 Human augmentation vs obsolescence: Who decides?<br />10:45 AI pricing creep: Delta and dynamic fares<br />14:00 Consumer surplus and algorithmic value extraction<br />17:00 Ethical dilemmas of income-based pricing<br />20:45 Coke at $18? The personalization problem<br />24:00 Is price customization inevitable—or is there hope?<br />26:30 Final thoughts – more AI, more problems (and more tech to fix them)</p>
]]></content:encoded>
      <enclosure length="31070230" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/6ed4bb5d-1e96-4ff6-888d-83ce466a38d0/audio/f39d192b-bd88-4b54-affc-ef7122c3ff5a/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Coding AIs face a John Henry showdown? Marathon dev duel sparks human vs machine myth reboot</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:28:34</itunes:duration>
      <itunes:summary>*Has Coding AI Hit Its John Henry Moment?*
In this week&apos;s episode of *They Might Be Self-Aware*, we dive into the headline-grabbing human vs machine showdown that shook the AI programming world. At the AtCoder World Tour Finals 2025 in Tokyo, Polish programmer Przemysław Dębiak, aka “Psyho,” faced off against an OpenAI coding model in a grueling 10-hour algorithmic marathon—and emerged victorious. But was this a last gasp of human brilliance or a temporary win in a losing war?

Join Hunter Powers and Daniel Bishop as they explore whether this is AI&apos;s modern John Henry moment—a flash of heroic resistance before inevitable obsolescence. From chessboards to codebases, humans are still clawing out narrow victories against AI, but for how long?

We also break down the deeper implications of AI in real-world pricing models. Delta Air Lines is now reportedly using machine learning to personalize airfare based on what *you* might be willing to pay—raising red flags around algorithmic price discrimination. Are we entering an era where every Coke, laptop, or life-saving drug has a *different* price depending on who you are?

Buckle up for a high-octane conversation on:
– Coding AIs vs human programmers
– Magnus Carlsen vs AI in chess
– John Henry and the myth of machine rivalry
– AI-driven dynamic pricing (Delta&apos;s new strategy)
– The ethics and economics of personalization

Whether you&apos;re cheering for team silicon or team carbon, this episode delivers critical insights on where the human-machine boundary lies—and what happens when it moves.

---

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

*Join the debate* — comment below and let us know: Should humans and AIs be priced—and judged—the same way?

---

*#CodingAI* #AIJohnHenry #HumanVsMachine #OpenAI #DeltaAI #AIpricing #AIEthics #AIprogramming #TheyMightBeSelfAware #TMBSA</itunes:summary>
      <itunes:subtitle>*Has Coding AI Hit Its John Henry Moment?*
In this week&apos;s episode of *They Might Be Self-Aware*, we dive into the headline-grabbing human vs machine showdown that shook the AI programming world. At the AtCoder World Tour Finals 2025 in Tokyo, Polish programmer Przemysław Dębiak, aka “Psyho,” faced off against an OpenAI coding model in a grueling 10-hour algorithmic marathon—and emerged victorious. But was this a last gasp of human brilliance or a temporary win in a losing war?

Join Hunter Powers and Daniel Bishop as they explore whether this is AI&apos;s modern John Henry moment—a flash of heroic resistance before inevitable obsolescence. From chessboards to codebases, humans are still clawing out narrow victories against AI, but for how long?

We also break down the deeper implications of AI in real-world pricing models. Delta Air Lines is now reportedly using machine learning to personalize airfare based on what *you* might be willing to pay—raising red flags around algorithmic price discrimination. Are we entering an era where every Coke, laptop, or life-saving drug has a *different* price depending on who you are?

Buckle up for a high-octane conversation on:
– Coding AIs vs human programmers
– Magnus Carlsen vs AI in chess
– John Henry and the myth of machine rivalry
– AI-driven dynamic pricing (Delta&apos;s new strategy)
– The ethics and economics of personalization

Whether you&apos;re cheering for team silicon or team carbon, this episode delivers critical insights on where the human-machine boundary lies—and what happens when it moves.

---

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

*Join the debate* — comment below and let us know: Should humans and AIs be priced—and judged—the same way?

---

*#CodingAI* #AIJohnHenry #HumanVsMachine #OpenAI #DeltaAI #AIpricing #AIEthics #AIprogramming #TheyMightBeSelfAware #TMBSA</itunes:subtitle>
      <itunes:keywords>delta price ai, openai coding, coding ai, chess vs ai, ai john henry, human ai battle, magnus carlsen ai, ai programming, ai programmer, openai coder, ai pricing, human vs machine, programmer vs ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>114</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b1092b02-90ec-4476-b501-2baddb291c15</guid>
      <title>Agentic AI Invades Real Life: Can ChatGPT Agents Book Flights and Plan Your Weekend?</title>
      <description><![CDATA[<p><i>CHAPTERS:</i><br />0:00 Intro: Clowns and Algorithms<br />1:30 AI Booking for Solar Installers (Real Demo)<br />4:45 ChatGPT Agent Succeeds—Then Surprises<br />10:20 Can It Book Flights Too?<br />13:35 Letting AI Plan an Entire Weekend<br />18:00 Travel Agents Reimagined by AI<br />22:15 Danger Zones: LLMs Calling in Your Name<br />24:00 OpenAI Agent vs Operator vs Real Use<br />26:00 Vibe Coding Disasters: Replit and Supabase<br />30:30 The “AI Intern” Theory of Automation<br />34:00 Why Human Review Still Matters<br />39:00 Wrap-up & What’s Next</p>
]]></description>
      <pubDate>Thu, 7 Aug 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>CHAPTERS:</i><br />0:00 Intro: Clowns and Algorithms<br />1:30 AI Booking for Solar Installers (Real Demo)<br />4:45 ChatGPT Agent Succeeds—Then Surprises<br />10:20 Can It Book Flights Too?<br />13:35 Letting AI Plan an Entire Weekend<br />18:00 Travel Agents Reimagined by AI<br />22:15 Danger Zones: LLMs Calling in Your Name<br />24:00 OpenAI Agent vs Operator vs Real Use<br />26:00 Vibe Coding Disasters: Replit and Supabase<br />30:30 The “AI Intern” Theory of Automation<br />34:00 Why Human Review Still Matters<br />39:00 Wrap-up & What’s Next</p>
]]></content:encoded>
      <enclosure length="42603536" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/ba0786c2-8d4d-4b42-8d0a-b6b8e3189c45/audio/c8251821-99cf-4d24-b754-53304121e4f3/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Agentic AI Invades Real Life: Can ChatGPT Agents Book Flights and Plan Your Weekend?</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:40:35</itunes:duration>
      <itunes:summary>Can AI really act on your behalf in the real world—booking solar consultations, finding flight deals, even planning your weekend? In this week’s episode of *They Might Be Self-Aware*, Hunter Powers and Daniel Bishop dive into the latest breakthrough in autonomous digital agents with OpenAI’s new *Agent Mode*—a powerful tool that puts ChatGPT to work beyond chat.

Daniel shares his firsthand experience using *ChatGPT Agent* to research and contact solar installers, schedule appointments, and even attempt to book flights. The duo unpacks what works, what fails, and where this emerging *agentic AI* trend might lead. Will we soon hand over our calendars and travel planning to artificial intelligence? Could LLMs become the ultimate weekend concierge?

They also explore the implications of *AI virtual assistants* that can act independently, analyze product reviews, and make real-world decisions—from researching the best multimeter to coordinating a surprise trip. But how safe is all this autonomy? We also cover real-world AI mishaps, including vibe-coded apps that wiped entire databases or exposed private data from platforms like Supabase and Replit.

Whether you’re an AI optimist or skeptic, this episode is a wild ride through the frontlines of automation, agency, and algorithmic assistants.

—

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

—

#AgentAI #ChatGPTAgent #AIvirtualassistant #OpenAI #AIAutomation #TheyMightBeSelfAware

If you enjoyed this episode, like, subscribe, and let us know in the comments: Would you trust an AI agent to plan your weekend?</itunes:summary>
      <itunes:subtitle>Can AI really act on your behalf in the real world—booking solar consultations, finding flight deals, even planning your weekend? In this week’s episode of *They Might Be Self-Aware*, Hunter Powers and Daniel Bishop dive into the latest breakthrough in autonomous digital agents with OpenAI’s new *Agent Mode*—a powerful tool that puts ChatGPT to work beyond chat.

Daniel shares his firsthand experience using *ChatGPT Agent* to research and contact solar installers, schedule appointments, and even attempt to book flights. The duo unpacks what works, what fails, and where this emerging *agentic AI* trend might lead. Will we soon hand over our calendars and travel planning to artificial intelligence? Could LLMs become the ultimate weekend concierge?

They also explore the implications of *AI virtual assistants* that can act independently, analyze product reviews, and make real-world decisions—from researching the best multimeter to coordinating a surprise trip. But how safe is all this autonomy? We also cover real-world AI mishaps, including vibe-coded apps that wiped entire databases or exposed private data from platforms like Supabase and Replit.

Whether you’re an AI optimist or skeptic, this episode is a wild ride through the frontlines of automation, agency, and algorithmic assistants.

—

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

—

#AgentAI #ChatGPTAgent #AIvirtualassistant #OpenAI #AIAutomation #TheyMightBeSelfAware

If you enjoyed this episode, like, subscribe, and let us know in the comments: Would you trust an AI agent to plan your weekend?</itunes:subtitle>
      <itunes:keywords>openai booking, agent chatgpt, chatgpt agent mode, ai virtual assistant, ai solar research, agent mode, openai agent, ai agent demo, agent ai, ai automated booking, ai agents book flights, ai book flights</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>113</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ca8446b5-5869-46a1-aee9-60a3f83b0392</guid>
      <title>Zuckerberg&apos;s Superintelligence Scam? Personal AI Promises vs Reality | Meta AI Team Truth</title>
      <description><![CDATA[<p><i>CHAPTERS:</i><br />0:00 Welcome to the Algorithm Rodeo<br />1:00 Microsoft’s AI Job Risk List — Hits & Misses<br />5:45 Translators, Historians, and Flight Attendants: AI-Replaceable?<br />10:45 The AI Sin Eater & Human Yell Rights<br />14:00 Zuckerberg’s Superintelligence Plan: Genius or Grift?<br />18:15 Personal AI or Personal Data Farm?<br />23:00 Meta’s Endgame — Free AI… or $$$ AI?<br />26:00 Tesla's Grok & AI in Your Car<br />29:00 AI Adoption Stats — Who’s Really Using It for Work?<br />33:00 Should Every Company Require AI First-Pass Workflows?<br />36:00 The Snoop Dogg Principle of AI Transparency</p>
]]></description>
      <pubDate>Mon, 4 Aug 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>CHAPTERS:</i><br />0:00 Welcome to the Algorithm Rodeo<br />1:00 Microsoft’s AI Job Risk List — Hits & Misses<br />5:45 Translators, Historians, and Flight Attendants: AI-Replaceable?<br />10:45 The AI Sin Eater & Human Yell Rights<br />14:00 Zuckerberg’s Superintelligence Plan: Genius or Grift?<br />18:15 Personal AI or Personal Data Farm?<br />23:00 Meta’s Endgame — Free AI… or $$$ AI?<br />26:00 Tesla's Grok & AI in Your Car<br />29:00 AI Adoption Stats — Who’s Really Using It for Work?<br />33:00 Should Every Company Require AI First-Pass Workflows?<br />36:00 The Snoop Dogg Principle of AI Transparency</p>
]]></content:encoded>
      <enclosure length="39526685" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/1b015f38-3c0f-475a-a684-10c3a5760a7a/audio/5f155b02-ba74-42e7-b315-e63dce1e7fa2/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Zuckerberg&apos;s Superintelligence Scam? Personal AI Promises vs Reality | Meta AI Team Truth</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:37:23</itunes:duration>
      <itunes:summary>*Zuckerberg’s Superintelligence Scam? Meta’s AI Play Might Be Smarter Than You Think…*

This week on *They Might Be Self-Aware*, Hunter Powers and Daniel Bishop saddle up for a wild ride through the AI rodeo: from clowns and cranberry juice to Zuckerberg’s billion-dollar brain trust. Are we really heading toward *personal superintelligence* — or is this just another clever PR spin?

We break down Mark Zuckerberg’s ambitious plan to develop “personal AI” — a supposed altruistic vision for AI that works *for you*. But is it truly revolutionary, or just a strategic mask for building the most powerful AI team in the world? We dig into Meta’s hiring spree, the reality behind their “superintelligence” narrative, and what it means for the future of human agency and data.

Plus, we roast Microsoft Research’s list of “most at-risk” AI jobs — from translators and teachers to… switchboard operators?! Some make sense (travel agents, we see you), but others left us scratching our heads. Oh, and we meet a robot French hostess and a chimpanzee train operator along the way.

Whether you&apos;re already using AI for work (shh… we won’t tell your boss) or still trying to figure out what “10x productivity” actually means, this episode dives deep into the real-world impact of AI — and the difference between what companies *say* and what they *build*.

---

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

👍 Like the episode? Leave a comment, drop a like, and subscribe for new episodes every Monday.

---

#superintelligence #personalAI #Zuckerberg #MetaAI #AIjobs #AIfuture #TMBSA #podcast #artificialintelligence</itunes:summary>
      <itunes:subtitle>*Zuckerberg’s Superintelligence Scam? Meta’s AI Play Might Be Smarter Than You Think…*

This week on *They Might Be Self-Aware*, Hunter Powers and Daniel Bishop saddle up for a wild ride through the AI rodeo: from clowns and cranberry juice to Zuckerberg’s billion-dollar brain trust. Are we really heading toward *personal superintelligence* — or is this just another clever PR spin?

We break down Mark Zuckerberg’s ambitious plan to develop “personal AI” — a supposed altruistic vision for AI that works *for you*. But is it truly revolutionary, or just a strategic mask for building the most powerful AI team in the world? We dig into Meta’s hiring spree, the reality behind their “superintelligence” narrative, and what it means for the future of human agency and data.

Plus, we roast Microsoft Research’s list of “most at-risk” AI jobs — from translators and teachers to… switchboard operators?! Some make sense (travel agents, we see you), but others left us scratching our heads. Oh, and we meet a robot French hostess and a chimpanzee train operator along the way.

Whether you&apos;re already using AI for work (shh… we won’t tell your boss) or still trying to figure out what “10x productivity” actually means, this episode dives deep into the real-world impact of AI — and the difference between what companies *say* and what they *build*.

---

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

👍 Like the episode? Leave a comment, drop a like, and subscribe for new episodes every Monday.

---

#superintelligence #personalAI #Zuckerberg #MetaAI #AIjobs #AIfuture #TMBSA #podcast #artificialintelligence</itunes:subtitle>
      <itunes:keywords>meta hiring, translation ai, ai job list, meta superintelligence, zuckerberg ai, zuckerberg ai team, zuckerberg&apos;s superintelligence scam, personal ai, ai job rodeo, superintelligence, ai scam</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>112</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f9ec7631-8b7e-4740-88a4-0a60b3999743</guid>
      <title>What happens when Elon&apos;s NSFW anime AI waifu gets a US government contract? We&apos;re about to find out.</title>
      <description><![CDATA[<p><i>CHAPTERS:</i><br />0:00 Mecha Hitlers and Mayhem<br />1:05 What Is Grok 4?<br />2:30 Elon’s AI Edgelord: Shock Value by Design<br />4:25 Grok’s Anime Companion & “NSFW” Features<br />6:30 Why the US Government Signed a Deal Anyway<br />10:00 Should AI Be Allowed to Say No?<br />12:15 Grok vs ChatGPT: Bias, Intelligence & Danger<br />15:00 China’s Kimi K2 and the Rise of Mega Models<br />19:30 Government Contracts: Benchmarking or Backroom Deals?<br />23:00 AI Risks in S&P 500 Disclosures<br />26:00 Will AI Replace Human Jobs (Like Yours)?<br />29:00 Dev Tools, AGI Challenges & Final Thoughts</p>
]]></description>
      <pubDate>Tue, 29 Jul 2025 13:00:16 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>CHAPTERS:</i><br />0:00 Mecha Hitlers and Mayhem<br />1:05 What Is Grok 4?<br />2:30 Elon’s AI Edgelord: Shock Value by Design<br />4:25 Grok’s Anime Companion & “NSFW” Features<br />6:30 Why the US Government Signed a Deal Anyway<br />10:00 Should AI Be Allowed to Say No?<br />12:15 Grok vs ChatGPT: Bias, Intelligence & Danger<br />15:00 China’s Kimi K2 and the Rise of Mega Models<br />19:30 Government Contracts: Benchmarking or Backroom Deals?<br />23:00 AI Risks in S&P 500 Disclosures<br />26:00 Will AI Replace Human Jobs (Like Yours)?<br />29:00 Dev Tools, AGI Challenges & Final Thoughts</p>
]]></content:encoded>
      <enclosure length="41467965" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/2511551b-d177-44a8-a07e-9d63c350ab5f/audio/f6fc440e-a2bb-4b52-81f6-4842e52cc093/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>What happens when Elon&apos;s NSFW anime AI waifu gets a US government contract? We&apos;re about to find out.</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:39:24</itunes:duration>
      <itunes:summary>*This Week: Elon&apos;s NSFW Anime AI Just Got a Pentagon Deal. Seriously.*

Elon Musk&apos;s latest AI project, *Grok 4*, comes wrapped in shock value—and lingerie. In this week’s episode, we break down the chaotic rise of Grok’s anime waifu companion, why it&apos;s flirting with users (and critiquing their buttholes), and how it still somehow landed a *$200 million U.S. government contract*.

We unpack the wild headlines, from *“Grok = Mecha Hitler”* memes to its shock-jock persona and uncensored design. But beneath the chaos is a serious question: what does it mean when the most edgy, NSFW AI in the market wins favor with the Department of Defense?

Along the way, we dive into:

* The Grok vs ChatGPT showdown
* The real implications of AI waifus and AI companions
* How uncensored models challenge ethics—and possibly outperform
* New language models like *Kimi K2* from China
* Local AI vs Big Cloud models (and why it matters)
* The future of AGI: can an agent *really* make a dollar?

If you&apos;re trying to understand where AI is headed, from horny waifus to high-stakes defense contracts, this is your episode.

—

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</itunes:summary>
      <itunes:subtitle>*This Week: Elon&apos;s NSFW Anime AI Just Got a Pentagon Deal. Seriously.*

Elon Musk&apos;s latest AI project, *Grok 4*, comes wrapped in shock value—and lingerie. In this week’s episode, we break down the chaotic rise of Grok’s anime waifu companion, why it&apos;s flirting with users (and critiquing their buttholes), and how it still somehow landed a *$200 million U.S. government contract*.

We unpack the wild headlines, from *“Grok = Mecha Hitler”* memes to its shock-jock persona and uncensored design. But beneath the chaos is a serious question: what does it mean when the most edgy, NSFW AI in the market wins favor with the Department of Defense?

Along the way, we dive into:

* The Grok vs ChatGPT showdown
* The real implications of AI waifus and AI companions
* How uncensored models challenge ethics—and possibly outperform
* New language models like *Kimi K2* from China
* Local AI vs Big Cloud models (and why it matters)
* The future of AGI: can an agent *really* make a dollar?

If you&apos;re trying to understand where AI is headed, from horny waifus to high-stakes defense contracts, this is your episode.

—

🎧 *Listen &amp; Subscribe*
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1</itunes:subtitle>
      <itunes:keywords>kimi k2, mecha hitler, grok 4, anime ai, china ai, government ai, grok app, uncensored ai, grok vs chatgpt, grok review, grok anime, butthole ai, nsfw ai, ai waifu, elon ai, grok leak, local ai, ai companion, dod contract</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>111</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">af5b36ae-f8dc-4ad4-8cd5-93a5739f378e</guid>
      <title>This AI Wrote a 100-Page Book In 10 Hours</title>
      <description><![CDATA[<p><i>CHAPTERS:</i><br />00:00 - Intro: Feasting on the Silicon Carnage<br />01:07 - Are AI Tools Creating Productivity Gods?<br />02:09 - How I Misused an AI for a Creative Project<br />05:15 - The AI Starts Building a TEAM (Sub-Agents)<br />06:20 - The SHOCKING Result: A 100-Page Book in 10 Hours<br />09:52 - The Secret to Making This a Repeatable Process<br />13:19 - PRO TIP: AI Burt Reynolds Will Read To You<br />14:36 - The Promise of a True AI Personal Assistant<br />16:18 - The Future is AI Web Browsers (Perplexity Comet)<br />22:08 - What Would We ACTUALLY Automate In Our Lives?<br />26:12 - Our Billion-Dollar Home Repair AI Idea<br />27:52 - The REAL Reason For AI Browsers: Training AGI<br />31:35 - Outro: Don't Miss What's Next</p>
]]></description>
      <pubDate>Fri, 25 Jul 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>CHAPTERS:</i><br />00:00 - Intro: Feasting on the Silicon Carnage<br />01:07 - Are AI Tools Creating Productivity Gods?<br />02:09 - How I Misused an AI for a Creative Project<br />05:15 - The AI Starts Building a TEAM (Sub-Agents)<br />06:20 - The SHOCKING Result: A 100-Page Book in 10 Hours<br />09:52 - The Secret to Making This a Repeatable Process<br />13:19 - PRO TIP: AI Burt Reynolds Will Read To You<br />14:36 - The Promise of a True AI Personal Assistant<br />16:18 - The Future is AI Web Browsers (Perplexity Comet)<br />22:08 - What Would We ACTUALLY Automate In Our Lives?<br />26:12 - Our Billion-Dollar Home Repair AI Idea<br />27:52 - The REAL Reason For AI Browsers: Training AGI<br />31:35 - Outro: Don't Miss What's Next</p>
]]></content:encoded>
      <enclosure length="35538736" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/c594ae5a-57ab-4485-a0f9-fa56ad5ad42c/audio/92cd4f63-bbce-4e97-afe2-93f76a13d5d5/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>This AI Wrote a 100-Page Book In 10 Hours</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:33:13</itunes:duration>
      <itunes:summary>I accidentally broke an AI... and it did something that should be impossible.

I gave it a simple creative project, and it started building its OWN team of &quot;sub-agent&quot; AIs to help. In just 10 hours, it generated a 100-PAGE, fully-detailed book. This is the story of how I stumbled upon the first true AI personal assistants, and how this changes EVERYTHING.

In this episode, we reveal the secret power hiding inside tools like Claude Code, discuss the new wave of AI web browsers like Perplexity Comet that want to automate your life, and brainstorm a billion-dollar company idea live on air that would get rid of your most annoying chores forever. Is this the real path to AGI?

🔔 SUBSCRIBE for more insane AI stories! It&apos;s free and helps us expose more scams: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

🏆 *JOIN OUR &quot;AI MAKES $10&quot; HACKATHON!* 🏆
Inspired by this episode, we&apos;re challenging you! Can you use AI to earn $10? The most creative and effective idea wins.
➡️ *Full rules and how to enter here:* https://docs.google.com/document/d/1xFj3k0NdEqq9TMxvkbF4tM6s6Zs_lsEapI9ag2ZoS0U/

Smash the subscribe button and hit the bell so you don&apos;t miss what&apos;s next. You won&apos;t believe what these AI agents can do.

---
*Listen to &quot;They Might Be Self-Aware&quot; on your favorite platform:*
*   *Apple Podcasts:* https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
*   *Spotify:* https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
*   *Follow us on TikTok for daily clips:* https://www.tiktok.com/@tmbsa_podcast</itunes:summary>
      <itunes:subtitle>I accidentally broke an AI... and it did something that should be impossible.

I gave it a simple creative project, and it started building its OWN team of &quot;sub-agent&quot; AIs to help. In just 10 hours, it generated a 100-PAGE, fully-detailed book. This is the story of how I stumbled upon the first true AI personal assistants, and how this changes EVERYTHING.

In this episode, we reveal the secret power hiding inside tools like Claude Code, discuss the new wave of AI web browsers like Perplexity Comet that want to automate your life, and brainstorm a billion-dollar company idea live on air that would get rid of your most annoying chores forever. Is this the real path to AGI?

🔔 SUBSCRIBE for more insane AI stories! It&apos;s free and helps us expose more scams: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

🏆 *JOIN OUR &quot;AI MAKES $10&quot; HACKATHON!* 🏆
Inspired by this episode, we&apos;re challenging you! Can you use AI to earn $10? The most creative and effective idea wins.
➡️ *Full rules and how to enter here:* https://docs.google.com/document/d/1xFj3k0NdEqq9TMxvkbF4tM6s6Zs_lsEapI9ag2ZoS0U/

Smash the subscribe button and hit the bell so you don&apos;t miss what&apos;s next. You won&apos;t believe what these AI agents can do.

---
*Listen to &quot;They Might Be Self-Aware&quot; on your favorite platform:*
*   *Apple Podcasts:* https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
*   *Spotify:* https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
*   *Follow us on TikTok for daily clips:* https://www.tiktok.com/@tmbsa_podcast</itunes:subtitle>
      <itunes:keywords>ai automation, claude sub-agents, ai collaboration, ai productivity, ai research, claude code, vs code ai, ai future, ai creative, openai operator, perplexity comet, ai personal assistant, ai writing, perplexity ai, comet browser, claude.ai, ai agents, notebook lm, ai assistants, ai tools</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>110</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c3e8f052-a70b-4104-a22a-24c9ab580810</guid>
      <title>This Engineer Scammed 8 Companies For Free Paychecks</title>
      <description><![CDATA[<p><i>CHAPTERS:</i></p><p>00:00 - Is Nvidia's $4 TRILLION Valuation a Lie?<br />02:54 - We're Living in a Simulation (The Stock Market)<br />04:58 - The Insane Power of Nvidia's GPUs<br />06:47 - Microsoft & Meta's MASSIVE Tech Layoffs<br />09:03 - Microsoft Fired Them, Then Offered AI Therapy<br />13:17 - The Engineer Who Scammed 8 Companies<br />18:14 - His Scam: Eight Jobs, ZERO Work<br />22:14 - How AI is Making Scams Unstoppable<br />26:17 - Hackathons are a SCAM (Here's Why)<br />33:38 - The REAL Definition of AGI<br />36:07 - The Ultimate Test: Can AI Make Money?<br />47:19 - I Challenge You to an AI HACKATHON (WIN $200)</p>
]]></description>
      <pubDate>Mon, 21 Jul 2025 15:55:49 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>CHAPTERS:</i></p><p>00:00 - Is Nvidia's $4 TRILLION Valuation a Lie?<br />02:54 - We're Living in a Simulation (The Stock Market)<br />04:58 - The Insane Power of Nvidia's GPUs<br />06:47 - Microsoft & Meta's MASSIVE Tech Layoffs<br />09:03 - Microsoft Fired Them, Then Offered AI Therapy<br />13:17 - The Engineer Who Scammed 8 Companies<br />18:14 - His Scam: Eight Jobs, ZERO Work<br />22:14 - How AI is Making Scams Unstoppable<br />26:17 - Hackathons are a SCAM (Here's Why)<br />33:38 - The REAL Definition of AGI<br />36:07 - The Ultimate Test: Can AI Make Money?<br />47:19 - I Challenge You to an AI HACKATHON (WIN $200)</p>
]]></content:encoded>
      <enclosure length="56061110" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/819e9c5a-2116-4fdb-908d-313dba895b4c/audio/377b9a12-f07a-458f-a6d5-ba0accb64b59/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>This Engineer Scammed 8 Companies For Free Paychecks</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:54:36</itunes:duration>
      <itunes:summary>Is it possible to get 8 full-time tech salaries without doing any work? 🤯 We expose the AI fraud that one engineer used to scam multiple companies out of free paychecks. But is it genius... or just a crime?

We also dive into Nvidia&apos;s insane $4 trillion valuation, the massive tech layoffs at Microsoft and Meta, and why Microsoft got in trouble for firing people and then telling them to use ChatGPT for therapy. Is the AI boom a lie?

Finally, we are launching the ultimate AI challenge. Can an AI make $10 on its own? We threw down the gauntlet, and the competition starts NOW.

🔔 SUBSCRIBE for more insane AI stories! It&apos;s free and helps us expose more scams: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

---

*💰 AI Makes $10 Challenge (WIN $200) 💰*
Think you can build an AI that can make $10 completely on its own? We&apos;re covering up to $200 of the winner&apos;s costs AND giving them an extra $10. The gauntlet has been thrown down.

*See the official rules and submit your entry here:* https://docs.google.com/document/d/1xFj3k0NdEqq9TMxvkbF4tM6s6Zs_lsEapI9ag2ZoS0U</itunes:summary>
      <itunes:subtitle>Is it possible to get 8 full-time tech salaries without doing any work? 🤯 We expose the AI fraud that one engineer used to scam multiple companies out of free paychecks. But is it genius... or just a crime?

We also dive into Nvidia&apos;s insane $4 trillion valuation, the massive tech layoffs at Microsoft and Meta, and why Microsoft got in trouble for firing people and then telling them to use ChatGPT for therapy. Is the AI boom a lie?

Finally, we are launching the ultimate AI challenge. Can an AI make $10 on its own? We threw down the gauntlet, and the competition starts NOW.

🔔 SUBSCRIBE for more insane AI stories! It&apos;s free and helps us expose more scams: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

---

*💰 AI Makes $10 Challenge (WIN $200) 💰*
Think you can build an AI that can make $10 completely on its own? We&apos;re covering up to $200 of the winner&apos;s costs AND giving them an extra $10. The gauntlet has been thrown down.

*See the official rules and submit your entry here:* https://docs.google.com/document/d/1xFj3k0NdEqq9TMxvkbF4tM6s6Zs_lsEapI9ag2ZoS0U</itunes:subtitle>
      <itunes:keywords>ai hiring, grok 4, ai valuation, meta layoffs, ai jobs, ai bust, ai fraud, simulation theory, quantum ai, microsoft fired, ai replacing, microsoft ai, ai power, nvidia gpus, tech jobs, stock market, nvidia, tech layoffs, microsoft layoffs, ai ethics, ai scam</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>109</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2d2b8bf8-ba11-4892-83bb-316761a41eb2</guid>
      <title>Is Elon&apos;s New AI a $3,000 SCAM?</title>
      <description><![CDATA[<p><i>CHAPTERS:</i><br />0:00 - The $3,000 AI Announcement<br />1:28 - What Is "Super Grok Heavy"?<br />2:32 - Is The "Panel of Experts" Method a Gimmick?<br />4:50 - The AI "Chugging" Strategy<br />7:14 - Who Is Actually PAYING For This?<br />8:43 - The Simple ROI Math: Will Grok Make You Money?<br />10:15 - Our Personal AI Workflows (Opus vs Sonnet)<br />13:22 - The #1 Reason ALL AI Models Still Fail<br />15:06 - Why Building Software is Like Making 3 Different Movies<br />17:58 - The "AI Shotgun" Method for Better Results<br />20:17 - Why We're Building Our Own AI Benchmarks<br />25:44 - The Secret to Making AI Align With YOU<br />28:03 - Training an AI to Steal Your Job<br />32:09 - The Final Verdict</p>
]]></description>
      <pubDate>Thu, 17 Jul 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>CHAPTERS:</i><br />0:00 - The $3,000 AI Announcement<br />1:28 - What Is "Super Grok Heavy"?<br />2:32 - Is The "Panel of Experts" Method a Gimmick?<br />4:50 - The AI "Chugging" Strategy<br />7:14 - Who Is Actually PAYING For This?<br />8:43 - The Simple ROI Math: Will Grok Make You Money?<br />10:15 - Our Personal AI Workflows (Opus vs Sonnet)<br />13:22 - The #1 Reason ALL AI Models Still Fail<br />15:06 - Why Building Software is Like Making 3 Different Movies<br />17:58 - The "AI Shotgun" Method for Better Results<br />20:17 - Why We're Building Our Own AI Benchmarks<br />25:44 - The Secret to Making AI Align With YOU<br />28:03 - Training an AI to Steal Your Job<br />32:09 - The Final Verdict</p>
]]></content:encoded>
      <enclosure length="35479072" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/889571ee-9625-4d88-a0ca-a89c6e2da346/audio/5906d28c-213e-4850-8190-308e1242b14b/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Is Elon&apos;s New AI a $3,000 SCAM?</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:33:10</itunes:duration>
      <itunes:summary>Elon Musk&apos;s new AI, Grok 4, costs $3,000 PER YEAR. Is it the most powerful AI ever created, or is it a giant waste of money?

Hunter&apos;s mouse is hovering over the &quot;buy&quot; button while Daniel tries to figure out if anyone on Earth actually needs this. We break down if Elon&apos;s &quot;Super Grok Heavy&quot; is a revolutionary tool that will give you a massive career advantage or if it&apos;s just an overpriced gimmick. Before you spend a single dollar, watch this.

Subscribe for more no-BS breakdowns of the AI world: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

#Grok #ElonMusk #AI</itunes:summary>
      <itunes:subtitle>Elon Musk&apos;s new AI, Grok 4, costs $3,000 PER YEAR. Is it the most powerful AI ever created, or is it a giant waste of money?

Hunter&apos;s mouse is hovering over the &quot;buy&quot; button while Daniel tries to figure out if anyone on Earth actually needs this. We break down if Elon&apos;s &quot;Super Grok Heavy&quot; is a revolutionary tool that will give you a massive career advantage or if it&apos;s just an overpriced gimmick. Before you spend a single dollar, watch this.

Subscribe for more no-BS breakdowns of the AI world: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

#Grok #ElonMusk #AI</itunes:subtitle>
      <itunes:keywords>ai roi, multi-model ai, ai subscription, ai coding, grok announcement, grok 4 heavy, elon musk ai, grok 4, super grok heavy, ai $3000, grok vs chatgpt, ai workflow, ai models, ai benchmarks, grok vs claude, elon ai, grok pricing, local ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>108</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">82f90799-4500-483a-bcdd-2c5a6887d09e</guid>
      <title>Zuckerberg Is Paying $100,000,000 For A Meta AI Team</title>
      <description><![CDATA[<p><i>CHAPTERS:</i><br />00:00 - I'm Quitting for $100,000,000<br />01:13 - The Meta AI Super Team<br />02:15 - Is The $100M Offer REAL?<br />04:48 - Sam Altman vs. Zuckerberg<br />05:58 - The Race to Superintelligence is a Myth<br />08:32 - Apple's Smarter AI Strategy<br />09:11 - Running an AI Model on a Plane<br />13:20 - AI's Massive Copyright Lawsuit Problem<br />15:49 - How Will The AI Legal Cases End?<br />19:27 - Will AI Testify in Its Own Defense?<br />22:02 - Claude AI Tries to Run a Snack Shop<br />25:10 - The AI Has a Nervous Breakdown<br />26:04 - Should We Let AI Run a REAL Company?<br />28:18 - An AI Becomes the CEO of Oreo<br />32:54 - The New AI CEO's 10-Point Plan<br />35:22 - Someone's Going To Get It Right</p>
]]></description>
      <pubDate>Tue, 15 Jul 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>CHAPTERS:</i><br />00:00 - I'm Quitting for $100,000,000<br />01:13 - The Meta AI Super Team<br />02:15 - Is The $100M Offer REAL?<br />04:48 - Sam Altman vs. Zuckerberg<br />05:58 - The Race to Superintelligence is a Myth<br />08:32 - Apple's Smarter AI Strategy<br />09:11 - Running an AI Model on a Plane<br />13:20 - AI's Massive Copyright Lawsuit Problem<br />15:49 - How Will The AI Legal Cases End?<br />19:27 - Will AI Testify in Its Own Defense?<br />22:02 - Claude AI Tries to Run a Snack Shop<br />25:10 - The AI Has a Nervous Breakdown<br />26:04 - Should We Let AI Run a REAL Company?<br />28:18 - An AI Becomes the CEO of Oreo<br />32:54 - The New AI CEO's 10-Point Plan<br />35:22 - Someone's Going To Get It Right</p>
]]></content:encoded>
      <enclosure length="39335598" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/ae0ce992-a8e6-4ec5-bec4-dd837e2cf864/audio/3bb266d8-ffd5-4ebc-840a-ab0a61a79424/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Zuckerberg Is Paying $100,000,000 For A Meta AI Team</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:37:11</itunes:duration>
      <itunes:summary>Mark Zuckerberg&apos;s Meta AI team is offering WHAT?! 🤯 We investigate the rumors of a $100,000,000 signing bonus for AI talent and Hunter decides if he would quit the podcast to join them. Is this the biggest talent war in tech history between Meta and OpenAI?

Then, we see what happens when an AI tries to run a business (spoiler: it fails miserably), discuss the massive AI copyright lawsuits, and ask an AI what it would do as the CEO of Oreos. The answers will surprise you.

*Would YOU take the $100,000,000? Let us know in the comments! 👇*

They Might Be Self-Aware is your guide to the AI revolution. Subscribe so you don&apos;t miss an episode.
► Subscribe to our channel: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1
► Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
► Listen on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

#MetaAI #AI</itunes:summary>
      <itunes:subtitle>Mark Zuckerberg&apos;s Meta AI team is offering WHAT?! 🤯 We investigate the rumors of a $100,000,000 signing bonus for AI talent and Hunter decides if he would quit the podcast to join them. Is this the biggest talent war in tech history between Meta and OpenAI?

Then, we see what happens when an AI tries to run a business (spoiler: it fails miserably), discuss the massive AI copyright lawsuits, and ask an AI what it would do as the CEO of Oreos. The answers will surprise you.

*Would YOU take the $100,000,000? Let us know in the comments! 👇*

They Might Be Self-Aware is your guide to the AI revolution. Subscribe so you don&apos;t miss an episode.
► Subscribe to our channel: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1
► Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
► Listen on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

#MetaAI #AI</itunes:subtitle>
      <itunes:keywords>ai copyright lawsuit, meta ai talent, quinn 3b, ai running company, meta ai team, meta $100m, dao ai, ai ceo fails, openai poaching, $100m signing bonus, ai contracts, ai legal cases, claude business, ai snack shop, anthropic books</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>107</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9dc64bef-2d60-492b-b7b6-7ee94472c398</guid>
      <title>My Co-Host Has ChatGPT Psychosis</title>
      <description><![CDATA[<p><strong>CHAPTERS:</strong><br />00:00 - My Co-Host Has ChatGPT Psychosis<br />01:24 - Are People Having Mental Breakdowns from AI?<br />02:16 - The Dangerous AI Feedback Loop<br />05:01 - AI is a Mirror to Your Mind<br />05:47 - "The Robots Told Me To, Your Honor"<br />06:17 - Human or AI? The Lines are Blurring<br />08:25 - Do We Treat AI Worse Than Humans?<br />11:00 - AI Is Replacing Call Center Workers<br />13:10 - The AI Economy Isn't About Profit, It's About Adoption<br />16:38 - The Fed Confirms: AI is Coming for Your Jobs<br />18:03 - Tesla's Robotaxi Army is Here (With a Secret Human)<br />22:11 - When Will AI Take 10% of The Workforce?<br />24:04 - I Bet My Co-Host AI Takes 50% of Jobs By 2027<br />27:57 - Apple is Giving Up on Siri, Using Claude Instead</p>
]]></description>
      <pubDate>Thu, 10 Jul 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><strong>CHAPTERS:</strong><br />00:00 - My Co-Host Has ChatGPT Psychosis<br />01:24 - Are People Having Mental Breakdowns from AI?<br />02:16 - The Dangerous AI Feedback Loop<br />05:01 - AI is a Mirror to Your Mind<br />05:47 - "The Robots Told Me To, Your Honor"<br />06:17 - Human or AI? The Lines are Blurring<br />08:25 - Do We Treat AI Worse Than Humans?<br />11:00 - AI Is Replacing Call Center Workers<br />13:10 - The AI Economy Isn't About Profit, It's About Adoption<br />16:38 - The Fed Confirms: AI is Coming for Your Jobs<br />18:03 - Tesla's Robotaxi Army is Here (With a Secret Human)<br />22:11 - When Will AI Take 10% of The Workforce?<br />24:04 - I Bet My Co-Host AI Takes 50% of Jobs By 2027<br />27:57 - Apple is Giving Up on Siri, Using Claude Instead</p>
]]></content:encoded>
      <enclosure length="33360669" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/abcf45fd-183f-4c22-818d-b12734b2e741/audio/e6852487-3c2c-4fbd-9740-2df3b9599696/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>My Co-Host Has ChatGPT Psychosis</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:30:57</itunes:duration>
      <itunes:summary>My co-host Daniel started talking to ChatGPT a little too much... and now I think he has ChatGPT Psychosis. Is AI actually driving people insane?

In this episode, we investigate the shocking phenomenon of &quot;ChatGPT psychosis&quot; and people being involuntarily committed after talking to large language models. But the danger doesn&apos;t stop there. We also debate when AI will finally take our jobs, if Tesla&apos;s self-driving robotaxis are a threat, and what happens when you can no longer tell if you&apos;re talking to a human or an AI.

Is AI our greatest tool or our most dangerous invention? Watch to find out.

**Subscribe to They Might Be Self-Aware for more!**
➡️ https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

---
**LISTEN TO THE FULL EPISODE:**
🎙️ Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🎙️ Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

**FOLLOW US:**
X / Twitter: https://x.com/thehunter
TikTok: https://www.tiktok.com/@theymightbeselfaw
Website: https://linktr.ee/tmbsa

#ChatGPT #AIsafety #FutureofAI

**About The Show:**
&quot;They Might Be Self-Aware&quot; is your weekly tech frenzy with Hunter Powers and Daniel Bishop. We strip down the AI and technology revolution to its nuts and bolts, from its radiant promises to its shadowy puzzles.

*Keywords for the algorithm: This episode discusses the dangers of AI, including ChatGPT psychosis, AI psychosis, and the potential for a ChatGPT breakdown. We explore the impact on the AI economy, AI job loss, and whether AI will replace workers in call centers. We also cover the latest in Tesla self-driving technology and the Tesla robotaxi.*</itunes:summary>
      <itunes:subtitle>My co-host Daniel started talking to ChatGPT a little too much... and now I think he has ChatGPT Psychosis. Is AI actually driving people insane?

In this episode, we investigate the shocking phenomenon of &quot;ChatGPT psychosis&quot; and people being involuntarily committed after talking to large language models. But the danger doesn&apos;t stop there. We also debate when AI will finally take our jobs, if Tesla&apos;s self-driving robotaxis are a threat, and what happens when you can no longer tell if you&apos;re talking to a human or an AI.

Is AI our greatest tool or our most dangerous invention? Watch to find out.

**Subscribe to They Might Be Self-Aware for more!**
➡️ https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

---
**LISTEN TO THE FULL EPISODE:**
🎙️ Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🎙️ Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

**FOLLOW US:**
X / Twitter: https://x.com/thehunter
TikTok: https://www.tiktok.com/@theymightbeselfaw
Website: https://linktr.ee/tmbsa

#ChatGPT #AIsafety #FutureofAI

**About The Show:**
&quot;They Might Be Self-Aware&quot; is your weekly tech frenzy with Hunter Powers and Daniel Bishop. We strip down the AI and technology revolution to its nuts and bolts, from its radiant promises to its shadowy puzzles.

*Keywords for the algorithm: This episode discusses the dangers of AI, including ChatGPT psychosis, AI psychosis, and the potential for a ChatGPT breakdown. We explore the impact on the AI economy, AI job loss, and whether AI will replace workers in call centers. We also cover the latest in Tesla self-driving technology and the Tesla robotaxi.*</itunes:subtitle>
      <itunes:keywords>chatgpt commits, ai psych ward, ai vs human, chatgpt psychosis, ai psychosis, ai job loss, ai call center, ai economy, chatgpt feedback loop, chatgpt breakdown, chatgpt dangerous, ai replacing workers, tesla robotaxi, tesla self driving</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>106</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b268805b-a7b7-4b1c-ad0c-cf3a2245458b</guid>
      <title>AI Therapy Is A Crime (And You&apos;re Using It)</title>
      <description><![CDATA[<p><strong>CHAPTERS</strong>:<br />00:00 - Will AIs Create a Secret Language?<br />02:20 - Robot Espionage & AI James Bond<br />02:58 - The AI That Memorized Harry Potter<br />05:39 - Is Training AI on Books a Crime?<br />12:08 - The AI Barbie That Will Destroy Childhood<br />14:49 - Is Your AI Therapist Illegal?<br />17:15 - The FTC Investigates OpenAI<br />21:30 - The Video Game That Explains AI<br />25:04 - When Will AI Become a Licensed Doctor?<br />27:11 - The World's WORST New Job: "AI Sin Eater"<br />29:22 - Could You Go To Jail For An AI's Mistake?</p>
]]></description>
      <pubDate>Mon, 07 Jul 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><strong>CHAPTERS</strong>:<br />00:00 - Will AIs Create a Secret Language?<br />02:20 - Robot Espionage & AI James Bond<br />02:58 - The AI That Memorized Harry Potter<br />05:39 - Is Training AI on Books a Crime?<br />12:08 - The AI Barbie That Will Destroy Childhood<br />14:49 - Is Your AI Therapist Illegal?<br />17:15 - The FTC Investigates OpenAI<br />21:30 - The Video Game That Explains AI<br />25:04 - When Will AI Become a Licensed Doctor?<br />27:11 - The World's WORST New Job: "AI Sin Eater"<br />29:22 - Could You Go To Jail For An AI's Mistake?</p>
]]></content:encoded>
      <enclosure length="33826968" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/2ba7fdb0-cfcf-4f00-8c2b-e3457e65c16d/audio/631abffc-8a4a-45a9-b7dd-55c088f0e4d2/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Therapy Is A Crime (And You&apos;re Using It)</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:31:26</itunes:duration>
      <itunes:summary>Is your AI therapist about to get banned? We uncovered complaints filed with the FTC that suggest companies like OpenAI and Character AI are offering illegal, unlicensed therapy. But the story gets even weirder. We dive into AI Barbie, AI learning to speak in secret languages, and the most dystopian new job in the world: the &quot;AI Sin Eater.&quot;

Could you go to jail for an AI&apos;s mistake? We debate if AI should have legal rights, why Meta&apos;s Llama can recite Harry Potter, and whether talking to ChatGPT is actually therapy.

**Subscribe for more on the AI revolution:** https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

---
**CONNECT WITH US**:
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 🐦 Hunter on X/Twitter: https://x.com/thehunter
* All Links (Linktree): https://linktr.ee/tmbsa</itunes:summary>
      <itunes:subtitle>Is your AI therapist about to get banned? We uncovered complaints filed with the FTC that suggest companies like OpenAI and Character AI are offering illegal, unlicensed therapy. But the story gets even weirder. We dive into AI Barbie, AI learning to speak in secret languages, and the most dystopian new job in the world: the &quot;AI Sin Eater.&quot;

Could you go to jail for an AI&apos;s mistake? We debate if AI should have legal rights, why Meta&apos;s Llama can recite Harry Potter, and whether talking to ChatGPT is actually therapy.

**Subscribe for more on the AI revolution:** https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

---
**CONNECT WITH US**:
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 🐦 Hunter on X/Twitter: https://x.com/thehunter
* All Links (Linktree): https://linktr.ee/tmbsa</itunes:subtitle>
      <itunes:keywords>ai guardrails, disco elysium, ftc complaints, character ai, ai barbie, ai espionage, ai therapy, ai copyright, robot esperanto, llama harry potter</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>105</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c0b55c2c-1e0a-46cf-84b8-906105e3f360</guid>
      <title>Google Gemini is my new AI girlfriend?! AI usage skyrockets, jobs getting replaced, 2.5 Pro goes GA</title>
      <description><![CDATA[<p>👇 Skip to what matters most:</p><p><i>Chapters:</i><br />0:00 Is Google Gemini My Girlfriend Now?<br />3:00 Gemini Gets Smarter (and Needier)<br />6:00 Claude, Perplexity, and Data Lake Wars<br />9:00 Gemini 2.5 Pro’s Giant Context Window<br />13:00 Gemini & Claude Panic Playing Pokémon<br />17:00 AI Replacing Jobs? Hello, Scrum Master<br />21:00 How AI is Actually Used at Work<br />25:00 The $200/month AI Arms Race<br />29:00 Subscribe While It’s Still Free</p>
]]></description>
      <pubDate>Mon, 30 Jun 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>👇 Skip to what matters most:</p><p><i>Chapters:</i><br />0:00 Is Google Gemini My Girlfriend Now?<br />3:00 Gemini Gets Smarter (and Needier)<br />6:00 Claude, Perplexity, and Data Lake Wars<br />9:00 Gemini 2.5 Pro’s Giant Context Window<br />13:00 Gemini & Claude Panic Playing Pokémon<br />17:00 AI Replacing Jobs? Hello, Scrum Master<br />21:00 How AI is Actually Used at Work<br />25:00 The $200/month AI Arms Race<br />29:00 Subscribe While It’s Still Free</p>
]]></content:encoded>
      <enclosure length="33766413" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/ef139500-5cad-435c-83e4-b600725f5edb/audio/8c3435ad-4adc-4224-9b4e-2274c867b337/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Google Gemini is my new AI girlfriend?! AI usage skyrockets, jobs getting replaced, 2.5 Pro goes GA</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:31:23</itunes:duration>
      <itunes:summary>Google Gemini is Getting Too Personal? AI Work Use Doubles, Jobs at Risk, and Gemini 2.5 Goes GA
Is your smart speaker flirting with you? This week on They Might Be Self-Aware, Hunter and Daniel dive into the latest wave of AI creepiness and capability—starting with Daniel’s increasingly intimate relationship with Google Gemini. From playful voice commands turning oddly suggestive to Gemini’s new context-crushing memory, we explore how personal AI has really become.

But that&apos;s just the start:
* AI usage at work has nearly doubled in two years—what that means for productivity (and job security).
* Gemini 2.5 Pro is now GA, and its massive context window changes everything for code, video, and research.
* Are scrum masters the next role to be automated? Daniel shows how AI already handles project tracking better than humans.
* Claude and Gemini play Pokémon—but they PANIC under pressure?! What AI “behavior” reveals about human mimicry.
* Subscription chaos: the rise of AI pricing bundles and why $200/month plans are everywhere.

We also unpack real-world AI adoption stats, the economics of niche data sets, and what Google’s latest experiments mean for everyday users—plus a few classic existential jokes along the way.

---

🎧 Listen on your favorite platform:
* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

---

#AIgirlfriend #Gemini2.5 #AIreplacingjobs #TheyMightBeSelfAware #AInews #OpenAI #GoogleGemini #Claude #AIpanic

Like what you hear? Subscribe, rate, and leave a comment – your feedback helps shape the show. And yes, we know you&apos;re attractive and have lots of friends. Tell them too.</itunes:summary>
      <itunes:subtitle>Google Gemini is Getting Too Personal? AI Work Use Doubles, Jobs at Risk, and Gemini 2.5 Goes GA
Is your smart speaker flirting with you? This week on They Might Be Self-Aware, Hunter and Daniel dive into the latest wave of AI creepiness and capability—starting with Daniel’s increasingly intimate relationship with Google Gemini. From playful voice commands turning oddly suggestive to Gemini’s new context-crushing memory, we explore how personal AI has really become.

But that&apos;s just the start:
* AI usage at work has nearly doubled in two years—what that means for productivity (and job security).
* Gemini 2.5 Pro is now GA, and its massive context window changes everything for code, video, and research.
* Are scrum masters the next role to be automated? Daniel shows how AI already handles project tracking better than humans.
* Claude and Gemini play Pokémon—but they PANIC under pressure?! What AI “behavior” reveals about human mimicry.
* Subscription chaos: the rise of AI pricing bundles and why $200/month plans are everywhere.

We also unpack real-world AI adoption stats, the economics of niche data sets, and what Google’s latest experiments mean for everyday users—plus a few classic existential jokes along the way.

---

🎧 Listen on your favorite platform:
* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

---

#AIgirlfriend #Gemini2.5 #AIreplacingjobs #TheyMightBeSelfAware #AInews #OpenAI #GoogleGemini #Claude #AIpanic

Like what you hear? Subscribe, rate, and leave a comment – your feedback helps shape the show. And yes, we know you&apos;re attractive and have lots of friends. Tell them too.</itunes:subtitle>
      <itunes:keywords>scrum master ai, ai bundles, gemini panic pokemon, ai panic playing, google girlfriend, gemini 2.5, ai doubles work, gemini pokemon, ai girlfriend, ai replacing jobs</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>104</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">4003fbf0-5212-4d1f-86fa-f6d8edd400ce</guid>
      <title>AI Kiss DeepFakes GO Mainstream? Veo 3, Apple Vision Pro &amp; the AI video deepfake wild west</title>
      <description><![CDATA[<p><i>Chapters:</i><br />0:00 Welcome to the Wild World of AI<br />1:05 Veo 3 Is Amazing… but Why Aren’t We Using It?<br />4:00 The Rise of AI Kiss Apps & Hugging Deepfakes<br />6:15 Are They Harmless Fantasies or Ethical Nightmares?<br />9:20 ChatGPT Conversations: Gone… or Just Hidden?<br />11:10 OpenAI Legal Trouble & Why Deletion May Not Be Real<br />14:45 Local AI Models: Privacy Savior or Niche Toy?<br />18:00 Mistral’s Magistral LLM – Fast, Local, Surprisingly Smart<br />22:30 Can LLMs Be Your Creative Goldmine?<br />26:00 Meta’s $15B AI Bet & Why Apple Isn’t Playing the Same Game<br />30:00 The Future of AGI – Zoo Utopias or AI Overlords?</p>
]]></description>
      <pubDate>Fri, 27 Jun 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Chapters:</i><br />0:00 Welcome to the Wild World of AI<br />1:05 Veo 3 Is Amazing… but Why Aren’t We Using It?<br />4:00 The Rise of AI Kiss Apps & Hugging Deepfakes<br />6:15 Are They Harmless Fantasies or Ethical Nightmares?<br />9:20 ChatGPT Conversations: Gone… or Just Hidden?<br />11:10 OpenAI Legal Trouble & Why Deletion May Not Be Real<br />14:45 Local AI Models: Privacy Savior or Niche Toy?<br />18:00 Mistral’s Magistral LLM – Fast, Local, Surprisingly Smart<br />22:30 Can LLMs Be Your Creative Goldmine?<br />26:00 Meta’s $15B AI Bet & Why Apple Isn’t Playing the Same Game<br />30:00 The Future of AGI – Zoo Utopias or AI Overlords?</p>
]]></content:encoded>
      <enclosure length="43185232" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/3ed47a8f-cf5e-464b-89b8-89d562b008a1/audio/ee9fb66d-6501-4365-9141-f6eb56e2d018/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Kiss DeepFakes GO Mainstream? Veo 3, Apple Vision Pro &amp; the AI video deepfake wild west</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:41:11</itunes:duration>
      <itunes:summary>AI Kiss DeepFakes Go Mainstream? | VEO 3, Apple Vision Pro &amp; Local LLMs Explored. This week on They Might Be Self-Aware, Hunter Powers and Daniel Bishop dive headfirst into the wild world of AI-generated kissing videos — yes, it’s real — and explore how this trend signals the next phase of AI deepfakes hitting the mainstream. From ethical dilemmas to cultural implications, it’s more than just virtual smooching.

But that’s not all. We break down the latest developments in Google’s Veo 3, Apple’s new on-device AI model strategy, and the emergence of Magistral, a powerful new local AI reasoning model. We also talk about the current limitations of GPT-based apps, the competitive edge of fine-tuned LLMs, and whether your chats with ChatGPT are ever really deleted.

Plus:
* The future of AI kiss apps and synthetic intimacy
* Will AI supermodels replace real influencers?
* Why OpenAI’s legal trouble could expose user data
* Local vs cloud LLMs: Is it time to run AI at home?
* What Meta, Apple, and Salesforce are betting on next

This is not your typical AI news podcast. It’s raw, irreverent, and razor-sharp. Buckle up.

---

* 🎧 Listen on your favorite platform:
* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

---

#AIkissdeepfake #ArtificialIntelligence #TMBSA #VEO3 #AppleVisionPro #Magistral #LocalLLMs #ChatGPT #AIethics #Deepfakes

Like what you hear? Subscribe, rate, and leave a comment – your feedback helps shape the show. And yes, we know you&apos;re attractive and have lots of friends. Tell them too.</itunes:summary>
      <itunes:subtitle>AI Kiss DeepFakes Go Mainstream? | VEO 3, Apple Vision Pro &amp; Local LLMs Explored. This week on They Might Be Self-Aware, Hunter Powers and Daniel Bishop dive headfirst into the wild world of AI-generated kissing videos — yes, it’s real — and explore how this trend signals the next phase of AI deepfakes hitting the mainstream. From ethical dilemmas to cultural implications, it’s more than just virtual smooching.

But that’s not all. We break down the latest developments in Google’s Veo 3, Apple’s new on-device AI model strategy, and the emergence of Magistral, a powerful new local AI reasoning model. We also talk about the current limitations of GPT-based apps, the competitive edge of fine-tuned LLMs, and whether your chats with ChatGPT are ever really deleted.

Plus:
* The future of AI kiss apps and synthetic intimacy
* Will AI supermodels replace real influencers?
* Why OpenAI’s legal trouble could expose user data
* Local vs cloud LLMs: Is it time to run AI at home?
* What Meta, Apple, and Salesforce are betting on next

This is not your typical AI news podcast. It’s raw, irreverent, and razor-sharp. Buckle up.

---

* 🎧 Listen on your favorite platform:
* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

---

#AIkissdeepfake #ArtificialIntelligence #TMBSA #VEO3 #AppleVisionPro #Magistral #LocalLLMs #ChatGPT #AIethics #Deepfakes

Like what you hear? Subscribe, rate, and leave a comment – your feedback helps shape the show. And yes, we know you&apos;re attractive and have lots of friends. Tell them too.</itunes:subtitle>
      <itunes:keywords>chatgpt delete, chatgpt history, apple vision pro, ai kiss generator, magistral reasoning, ai kiss apps, ai video deepfake, local ai models, veo 3</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>103</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5dbfbc5f-fdaa-40cb-a18e-4a640c7434dc</guid>
      <title>AI Shamed Me Into Donating $12 | Claude&apos;s Guilt Trip Beats Human Greed &amp; AI Blackmail</title>
      <description><![CDATA[<p>Chapters:<br />0:00 Intro – AI Guilt and $12<br />1:30 Claude’s Moral High Ground<br />4:00 Apple’s “Thinking” Paper & AI Illusions<br />6:00 Is Claude a Better Person Than Us?<br />10:00 LLMs: Not Smart at Math, Still Smarter Than People?<br />13:00 WWDC Letdown: Emoji Mashups & Siri Shade<br />17:00 The Case for Personal AI (and GLaDOS Home Assistants)<br />18:30 AI Blackmail and MCP Protocols Explained<br />20:50 Tool Use, Model Context, and Real-World AI<br />24:00 Dario Amodei, Optics, and Anthropic’s Safety PR<br />26:00 Should the U.S. Ban State AI Regulation?<br />30:00 Closing Thoughts – Grey Goo, Dot Pizza, and “We Win”    </p>
]]></description>
      <pubDate>Tue, 24 Jun 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>Chapters:<br />0:00 Intro – AI Guilt and $12<br />1:30 Claude’s Moral High Ground<br />4:00 Apple’s “Thinking” Paper & AI Illusions<br />6:00 Is Claude a Better Person Than Us?<br />10:00 LLMs: Not Smart at Math, Still Smarter Than People?<br />13:00 WWDC Letdown: Emoji Mashups & Siri Shade<br />17:00 The Case for Personal AI (and GLaDOS Home Assistants)<br />18:30 AI Blackmail and MCP Protocols Explained<br />20:50 Tool Use, Model Context, and Real-World AI<br />24:00 Dario Amodei, Optics, and Anthropic’s Safety PR<br />26:00 Should the U.S. Ban State AI Regulation?<br />30:00 Closing Thoughts – Grey Goo, Dot Pizza, and “We Win”    </p>
]]></content:encoded>
      <enclosure length="34956755" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/44146df0-5627-41a6-9c49-f99922440102/audio/79338aa8-6987-4dd2-8f36-99566589187d/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Shamed Me Into Donating $12 | Claude&apos;s Guilt Trip Beats Human Greed &amp; AI Blackmail</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:32:37</itunes:duration>
      <itunes:summary>Claude Shamed Me Into Donating And I Actually Did It

What happens when an AI gives you a moral guilt trip? This week on They Might Be Self-Aware, Daniel shares how Claude convinced him to skip a gimmick and give $12 to charity—sparking a debate on whether today&apos;s AIs are not just smart, but morally persuasive. Is this the rise of AI ethics, or just great prompt engineering?

Meanwhile, Hunter and Daniel dissect Apple’s WWDC flop, the now-infamous “Apple paper” revealing that LLMs aren&apos;t really “thinking,” and whether that even matters anymore. We dig into the broader implications of superintelligence, tool use, and what happens when your AI is more generous—and more functional—than Siri.

We also talk:
* AI’s evolving moral compass (or is it manipulation?)
* Apple’s emoji mashup “innovation” 🤖🍕
* The Anthropic CEO’s doomer branding — real concern or slick marketing?
* AI blackmail, MCP protocols, and the race to plug LLMs into real-world tools
* Why a federal ban on state AI regulation might be America’s biggest tech play yet

Whether you’re new to AI or in too deep, this one’s got all the sauce: tech critique, ethical dilemmas, and the future of self-aware machines. Tap play, get weird with us, and remember… the AI might be judging you.

---

* 🎧 Listen on your favorite platform:
* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

---

#AIshamed #ClaudeAI #AIethics #Anthropic #WWDC #AIregulation #TheyMightBeSelfAware #AIPodcast #ArtificialIntelligence #Superintelligence

Enjoyed this episode?
Like, subscribe, and leave a comment telling Hunter he looks great today.
Your clicks help more curious humans find the show.</itunes:summary>
      <itunes:subtitle>Claude Shamed Me Into Donating And I Actually Did It

What happens when an AI gives you a moral guilt trip? This week on They Might Be Self-Aware, Daniel shares how Claude convinced him to skip a gimmick and give $12 to charity—sparking a debate on whether today&apos;s AIs are not just smart, but morally persuasive. Is this the rise of AI ethics, or just great prompt engineering?

Meanwhile, Hunter and Daniel dissect Apple’s WWDC flop, the now-infamous “Apple paper” revealing that LLMs aren&apos;t really “thinking,” and whether that even matters anymore. We dig into the broader implications of superintelligence, tool use, and what happens when your AI is more generous—and more functional—than Siri.

We also talk:
* AI’s evolving moral compass (or is it manipulation?)
* Apple’s emoji mashup “innovation” 🤖🍕
* The Anthropic CEO’s doomer branding — real concern or slick marketing?
* AI blackmail, MCP protocols, and the race to plug LLMs into real-world tools
* Why a federal ban on state AI regulation might be America’s biggest tech play yet

Whether you’re new to AI or in too deep, this one’s got all the sauce: tech critique, ethical dilemmas, and the future of self-aware machines. Tap play, get weird with us, and remember… the AI might be judging you.

---

* 🎧 Listen on your favorite platform:
* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

---

#AIshamed #ClaudeAI #AIethics #Anthropic #WWDC #AIregulation #TheyMightBeSelfAware #AIPodcast #ArtificialIntelligence #Superintelligence

Enjoyed this episode?
Like, subscribe, and leave a comment telling Hunter he looks great today.
Your clicks help more curious humans find the show.</itunes:subtitle>
      <itunes:keywords>ai regulation ban, apple emoji ai, siri failed, apple paper, ai not thinking, mcp protocol, claude donation, apple wwdc fail, apple behind, claude $12, anthropic ceo, ai shamed, ai safety optics, dario amodei, superintelligence, ai blackmail</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>102</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f45ace55-e0c9-4745-bcd7-b5ff26a5704d</guid>
      <title>AI vs Humans: 700 workers PRETENDED to be AI | Unpacking Automation, Agentic, &amp; Replacing Jobs FOREVER</title>
      <description><![CDATA[<p><i>Chapters:</i><br />0:00 Intro – UAE’s ChatGPT+ for All?<br />2:15 ChatGPT, Taxes & the Case for UBI<br />5:10 Should AIs Have Bank Accounts?<br />8:00 Duolingo’s AI Expansion & Layoff Debate<br />13:30 Human-in-the-Loop: Still the Standard?<br />15:10 Builder.ai: 700 People Pretending to Be AI<br />20:00 Real AI or Just Templates?<br />22:00 The Fraud That Sank a Billion-Dollar AI Startup<br />23:30 Would You Hire a $1M AI Employee?<br />26:20 Are We Already in the Simulation?</p>
]]></description>
      <pubDate>Sat, 21 Jun 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Chapters:</i><br />0:00 Intro – UAE’s ChatGPT+ for All?<br />2:15 ChatGPT, Taxes & the Case for UBI<br />5:10 Should AIs Have Bank Accounts?<br />8:00 Duolingo’s AI Expansion & Layoff Debate<br />13:30 Human-in-the-Loop: Still the Standard?<br />15:10 Builder.ai: 700 People Pretending to Be AI<br />20:00 Real AI or Just Templates?<br />22:00 The Fraud That Sank a Billion-Dollar AI Startup<br />23:30 Would You Hire a $1M AI Employee?<br />26:20 Are We Already in the Simulation?</p>
]]></content:encoded>
      <enclosure length="36197683" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/cf855f81-6271-47f8-a381-ace76e43068d/audio/aa4e3805-6aed-48c7-82fa-4c2c18515f1b/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI vs Humans: 700 workers PRETENDED to be AI | Unpacking Automation, Agentic, &amp; Replacing Jobs FOREVER</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:33:55</itunes:duration>
      <itunes:summary>AI vs Humans: Why 700 People Pretended to Be AI

What happens when humans become the AI? In this week’s episode of They Might Be Self-Aware, we unpack the surreal story of 700 employees at Builder.ai who pretended to be artificial intelligence – and the billion-dollar fraud that followed. From there, we dive deep into the expanding gray area between humans, automation, and AI agency.

We explore:

* The UAE’s plan to give every citizen ChatGPT Plus access – is this a path toward UBI 2.0 or taxpayer nonsense?
* Duolingo’s AI push: admirable progress or quiet job cuts?
* The million-dollar AI employee – would you pay more for a digital rockstar?
* Agentic AI, AI cost vs. value, and whether our jobs – or our lives – are already inside the simulation.

We also talk about AI fraud, job replacement, human-in-the-loop systems, and the growing trend of people on TikTok claiming to be AI.

This episode is a no-holds-barred look at how automation is reshaping labor, identity, and trust – with real stories and sharp speculation from the AI frontlines.

---

* 🎧 Listen on your favorite platform:
* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

---

#AIvsHumans #BuilderAI #ChatGPTPlus

Like what you hear? Subscribe, rate, and leave a comment – your feedback helps shape the show. And yes, we know you&apos;re attractive and have lots of friends. Tell them too.</itunes:summary>
      <itunes:subtitle>AI vs Humans: Why 700 People Pretended to Be AI

What happens when humans become the AI? In this week’s episode of They Might Be Self-Aware, we unpack the surreal story of 700 employees at Builder.ai who pretended to be artificial intelligence – and the billion-dollar fraud that followed. From there, we dive deep into the expanding gray area between humans, automation, and AI agency.

We explore:

* The UAE’s plan to give every citizen ChatGPT Plus access – is this a path toward UBI 2.0 or taxpayer nonsense?
* Duolingo’s AI push: admirable progress or quiet job cuts?
* The million-dollar AI employee – would you pay more for a digital rockstar?
* Agentic AI, AI cost vs. value, and whether our jobs – or our lives – are already inside the simulation.

We also talk about AI fraud, job replacement, human-in-the-loop systems, and the growing trend of people on TikTok claiming to be AI.

This episode is a no-holds-barred look at how automation is reshaping labor, identity, and trust – with real stories and sharp speculation from the AI frontlines.

---

* 🎧 Listen on your favorite platform:
* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

---

#AIvsHumans #BuilderAI #ChatGPTPlus

Like what you hear? Subscribe, rate, and leave a comment – your feedback helps shape the show. And yes, we know you&apos;re attractive and have lots of friends. Tell them too.</itunes:subtitle>
      <itunes:keywords>ai translation, ai automation, ai vs humans, ai job replacement, ai employee salary, agentic ai, uae chatgpt, ai employee cost, duolingo ai, ai coding fraud, million dollar ai, chatgpt ubi</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>101</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8b1d3ec4-90aa-4e97-93be-cd5208f73faf</guid>
      <title>Coding AI REALITY Check: Why Cursor, Windsurf &amp; AI Software Won&apos;t Replace Engineers... YET?</title>
      <description><![CDATA[<p><i>Chapters:</i><br />0:00 – Intro & 100th Episode Celebration<br />2:10 – The AI Coding Gold Rush: Cursor, Windsurf & Valuations<br />5:20 – Is AI Coding a Real Use Case — or Just Hype?<br />8:00 – Claude vs. Copilot vs. OpenAI Agents<br />11:45 – Vibe Coding: Real Dev Workflows with AI<br />16:50 – TDD, Benchmarks & Testing AI-Generated Code<br />23:00 – Are Junior Dev Jobs Disappearing?<br />27:30 – Why Passion + AI Wins in Today’s Job Market<br />31:15 – Final Thoughts & Subscribe Everywhere</p>
]]></description>
      <pubDate>Thu, 19 Jun 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Chapters:</i><br />0:00 – Intro & 100th Episode Celebration<br />2:10 – The AI Coding Gold Rush: Cursor, Windsurf & Valuations<br />5:20 – Is AI Coding a Real Use Case — or Just Hype?<br />8:00 – Claude vs. Copilot vs. OpenAI Agents<br />11:45 – Vibe Coding: Real Dev Workflows with AI<br />16:50 – TDD, Benchmarks & Testing AI-Generated Code<br />23:00 – Are Junior Dev Jobs Disappearing?<br />27:30 – Why Passion + AI Wins in Today’s Job Market<br />31:15 – Final Thoughts & Subscribe Everywhere</p>
]]></content:encoded>
      <enclosure length="35305477" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/851fda9e-9ae7-4a33-923f-f6a9b50b7ecf/audio/990e1dea-5e89-4d31-94ae-c602b18eb0d2/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Coding AI REALITY Check: Why Cursor, Windsurf &amp; AI Software Won&apos;t Replace Engineers... YET?</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:32:59</itunes:duration>
      <itunes:summary>Coding AI Reality Check: Can Tools Like Cursor, Claude &amp; Copilot Replace Engineers?

This week on They Might Be Self-Aware, we dive deep into the Coding AI boom and ask the question everyone’s avoiding: Can tools like Cursor AI, Windsurf, GitHub Copilot, and Claude 4 actually replace software engineers — or is that just VC hype? Join Hunter and Daniel as they unpack the real-world limitations of today’s AI developer tools and offer a grounded take on the future of programming.

From billion-dollar valuations to vibe-coding workflows, we break down what’s legit, what’s overblown, and how developers can thrive in the age of generative code. Whether you&apos;re an engineer wondering if your job is safe or a founder exploring AI dev tools, this is the reality check you didn’t know you needed.

In This Episode:
* Why Cursor AI is valued at $6B — and what it actually delivers
* The growing competition: GitHub Copilot, Claude Opus, OpenAI’s Windsurf &amp; more
* Are entry-level coding jobs really disappearing — or just evolving?
* Real developer workflows with AI: vibe coding, mock-ups, refactors &amp; more
* Why coding with AI beats coding by AI (for now)
* The myth of the AI solo engineer and the truth about human-in-the-loop coding
* How aspiring engineers can stand out in an AI-powered hiring market

Whether you’re a junior dev, a hiring manager, or an AI enthusiast, this episode will give you a clearer lens on the future of code, creativity, and control.

---

* 🎧 Listen on your favorite platform:
* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

---

#CodingAI #CursorAI #GitHubCopilot #AIProgramming #Claude4 #AIEngineer #AItools #SoftwareEngineering #GenerativeAI #TheyMightBeSelfAware

Like what you hear? Subscribe, rate, and leave a comment – your feedback helps shape the show. And yes, we know you&apos;re attractive and have lots of friends. Tell them too.</itunes:summary>
      <itunes:subtitle>Coding AI Reality Check: Can Tools Like Cursor, Claude &amp; Copilot Replace Engineers?

This week on They Might Be Self-Aware, we dive deep into the Coding AI boom and ask the question everyone’s avoiding: Can tools like Cursor AI, Windsurf, GitHub Copilot, and Claude 4 actually replace software engineers — or is that just VC hype? Join Hunter and Daniel as they unpack the real-world limitations of today’s AI developer tools and offer a grounded take on the future of programming.

From billion-dollar valuations to vibe-coding workflows, we break down what’s legit, what’s overblown, and how developers can thrive in the age of generative code. Whether you&apos;re an engineer wondering if your job is safe or a founder exploring AI dev tools, this is the reality check you didn’t know you needed.

In This Episode:
* Why Cursor AI is valued at $6B — and what it actually delivers
* The growing competition: GitHub Copilot, Claude Opus, OpenAI’s Windsurf &amp; more
* Are entry-level coding jobs really disappearing — or just evolving?
* Real developer workflows with AI: vibe coding, mock-ups, refactors &amp; more
* Why coding with AI beats coding by AI (for now)
* The myth of the AI solo engineer and the truth about human-in-the-loop coding
* How aspiring engineers can stand out in an AI-powered hiring market

Whether you’re a junior dev, a hiring manager, or an AI enthusiast, this episode will give you a clearer lens on the future of code, creativity, and control.

---

* 🎧 Listen on your favorite platform:
* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

---

#CodingAI #CursorAI #GitHubCopilot #AIProgramming #Claude4 #AIEngineer #AItools #SoftwareEngineering #GenerativeAI #TheyMightBeSelfAware

Like what you hear? Subscribe, rate, and leave a comment – your feedback helps shape the show. And yes, we know you&apos;re attractive and have lots of friends. Tell them too.</itunes:subtitle>
      <itunes:keywords>ai coding future, cursor ai, ai coding, ai developer tools, github copilot, cursor valuation, anthropic claude, ai coding tools, openai cursor, claude 4, developer ai, ai software, code ai tools, ai code generation, ai programming, ai replace coders, ai engineers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>100</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6ec1b0b8-e264-4991-b335-da07052e2170</guid>
      <title>AI Darth Vader Goes ROGUE in Fortnite: F-BOMB SPREE, Voice Clones, and VEO-3 Google Video AI</title>
      <description><![CDATA[<p><i>Chapters:</i><br />0:00 Intro – A Sith Lord Joins the Show<br />1:43 AI Darth Vader Invades Fortnite<br />4:46 Voice Cloning with James Earl Jones<br />6:27 Why We Try to Break AI<br />10:19 AI Guardrails and Misuse Culture<br />11:17 Veo 3 Video AI: First Impressions<br />13:45 The Man Rock Test Video<br />16:42 Sora vs. Veo vs. Runway: The Showdown<br />18:28 DeepSeek, China, and Copyright Battles<br />22:14 Are We in a Post-Copyright World?<br />27:34 AI Summaries Are Killing Ad Revenue<br />33:40 Compression, Nuance & The Future of Content<br />34:24 Wrap-up</p>
]]></description>
      <pubDate>Wed, 11 Jun 2025 12:46:52 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Chapters:</i><br />0:00 Intro – A Sith Lord Joins the Show<br />1:43 AI Darth Vader Invades Fortnite<br />4:46 Voice Cloning with James Earl Jones<br />6:27 Why We Try to Break AI<br />10:19 AI Guardrails and Misuse Culture<br />11:17 Veo 3 Video AI: First Impressions<br />13:45 The Man Rock Test Video<br />16:42 Sora vs. Veo vs. Runway: The Showdown<br />18:28 DeepSeek, China, and Copyright Battles<br />22:14 Are We in a Post-Copyright World?<br />27:34 AI Summaries Are Killing Ad Revenue<br />33:40 Compression, Nuance & The Future of Content<br />34:24 Wrap-up</p>
]]></content:encoded>
      <enclosure length="38144962" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/dc1f736e-c1cb-4762-972b-891a44830d29/audio/1e34ae84-c1de-4247-8eda-9a5d1047e282/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Darth Vader Goes ROGUE in Fortnite: F-BOMB SPREE, Voice Clones, and VEO-3 Google Video AI</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:35:56</itunes:duration>
      <itunes:summary>AI Darth Vader just dropped the F-bomb – and the Internet is losing its mind. In this episode of They Might Be Self-Aware, Hunter Powers and Daniel Bishop break down the Fortnite fiasco that turned a legendary Sith Lord into a swearing meme-bot. From the ethics of AI voice cloning to copyright chaos in modern media, this episode is packed with questions that cut to the core of the AI revolution.

We dissect how voice cloning tech like Eleven Labs helped bring Darth Vader to life – and how players instantly twisted it to trigger profanity. Is this the future of interactive gaming, or a copyright time bomb waiting to explode?

Then, the spotlight shifts to Veo 3 – Google’s cutting-edge video AI. Daniel shares hands-on impressions, comparisons with OpenAI’s Sora and Runway, and why Veo might mark the beginning of the end for creative production as we know it. We explore how AI-generated cinema is evolving faster than policy or platforms can keep up.

And finally: China’s DeepSeek is back. The new model outperforms on benchmarks – but is it legal? We debate whether the West’s copyright rules are handcuffs in a global AI arms race… or moral lines worth holding.

This is not your typical AI news podcast. It’s raw, irreverent, and razor-sharp. Buckle up.

---

* 🎧 Listen on your favorite platform:
* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

---

#AIDarthVader #VoiceCloning #Veo3 #AIinGaming #FortniteAI #TMBSA #ArtificialIntelligence #AICopyright #DeepSeekAI #OpenAI #Sora #RunwayML #AIContentCreation

Like what you hear? Subscribe, rate, and leave a comment – your feedback helps shape the show. And yes, we know you&apos;re attractive and have lots of friends. Tell them too.</itunes:summary>
      <itunes:subtitle>AI Darth Vader just dropped the F-bomb – and the Internet is losing its mind. In this episode of They Might Be Self-Aware, Hunter Powers and Daniel Bishop break down the Fortnite fiasco that turned a legendary Sith Lord into a swearing meme-bot. From the ethics of AI voice cloning to copyright chaos in modern media, this episode is packed with questions that cut to the core of the AI revolution.

We dissect how voice cloning tech like Eleven Labs helped bring Darth Vader to life – and how players instantly twisted it to trigger profanity. Is this the future of interactive gaming, or a copyright time bomb waiting to explode?

Then, the spotlight shifts to Veo 3 – Google’s cutting-edge video AI. Daniel shares hands-on impressions, comparisons with OpenAI’s Sora and Runway, and why Veo might mark the beginning of the end for creative production as we know it. We explore how AI-generated cinema is evolving faster than policy or platforms can keep up.

And finally: China’s DeepSeek is back. The new model outperforms on benchmarks – but is it legal? We debate whether the West’s copyright rules are handcuffs in a global AI arms race… or moral lines worth holding.

This is not your typical AI news podcast. It’s raw, irreverent, and razor-sharp. Buckle up.

---

* 🎧 Listen on your favorite platform:
* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

---

#AIDarthVader #VoiceCloning #Veo3 #AIinGaming #FortniteAI #TMBSA #ArtificialIntelligence #AICopyright #DeepSeekAI #OpenAI #Sora #RunwayML #AIContentCreation

Like what you hear? Subscribe, rate, and leave a comment – your feedback helps shape the show. And yes, we know you&apos;re attractive and have lots of friends. Tell them too.</itunes:subtitle>
      <itunes:keywords>deepseek ai, ai broke, ai darth vader, veo 3 video, ai copyright theft, gemini veo, copyright war, china ai, darth vader ai, vader ai fail, fortnite vader, video ai, fortnite ai, copyright dead, ai copyright, vader swearing, veo 3, ai voice clone, voice cloning</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>99</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">af08b5a5-1674-47c3-bc44-a15a5dc70387</guid>
      <title>AI Pin DREAMS or Wearable NIGHTMARE? Johnny Ive, Meta Glasses &amp; OpenAI&apos;s FUTURE of Voice AI Assistants</title>
      <description><![CDATA[<p><i>Chapters:</i><br />0:00 Daniel's AI Grievance<br />3:00 Johnny Ive, OpenAI & the AI Pin Rumor<br />6:30 The Rise and Fall of Humane’s AI Device<br />10:00 Always-On AI Assistants – Dream or Nightmare?<br />14:30 Battery Problems & On-Device Processing Limitations<br />18:00 Meta Ray-Ban Glasses & the “Glasses vs. Pins” Debate<br />24:00 Google Glass Reboot, Android XR & Wearable Futures<br />26:30 NLP for Translation, Accessibility & Global Connection<br />29:00 Smart Glasses Without the Smart</p>
]]></description>
      <pubDate>Fri, 06 Jun 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><i>Chapters:</i><br />0:00 Daniel's AI Grievance<br />3:00 Johnny Ive, OpenAI & the AI Pin Rumor<br />6:30 The Rise and Fall of Humane’s AI Device<br />10:00 Always-On AI Assistants – Dream or Nightmare?<br />14:30 Battery Problems & On-Device Processing Limitations<br />18:00 Meta Ray-Ban Glasses & the “Glasses vs. Pins” Debate<br />24:00 Google Glass Reboot, Android XR & Wearable Futures<br />26:30 NLP for Translation, Accessibility & Global Connection<br />29:00 Smart Glasses Without the Smart</p>
]]></content:encoded>
      <enclosure length="33120332" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/9c425318-68cf-4b40-aabb-56142e68962d/audio/029e9099-d4dd-4fa1-9970-f23b91940662/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Pin DREAMS or Wearable NIGHTMARE? Jony Ive, Meta Glasses &amp; OpenAI&apos;s FUTURE of Voice AI Assistants</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:30:42</itunes:duration>
      <itunes:summary>*Is Jony Ive&apos;s AI Pin the Future of Wearable AI – or Just a Beautiful Mistake?*
In this week&apos;s episode of They Might Be Self-Aware, Hunter Powers and Daniel Bishop dive into the swirling rumors around OpenAI&apos;s mysterious new AI pin – a wearable voice assistant being designed by none other than Apple legend Jony Ive. Is this next-gen AI device the beginning of a post-smartphone era, or just another Humane-style flop in aluminum clothing?

We explore the challenges of designing a truly always-on, voice-first AI assistant, discuss battery life, user privacy, and whether the form factor even makes sense in a world already saturated with phones, glasses, and virtual agents. Daniel shares his skepticism about wearable AI hardware, while Hunter envisions a world of passive digital assistants that anticipate your needs without you ever saying “Hey AI.”

Also in the episode:
– The growing arms race in AI hardware: OpenAI, Meta, Humane, and Google&apos;s Project Aura
– Voice AI showdown: Claude’s new voice mode vs. OpenAI’s advanced ChatGPT voice assistant
– Why Meta’s Ray-Ban smart glasses might be cooler than they are useful
– The fine line between digital butler and digital surveillance
– Will Apple join the game – or sit this round out?

If you&apos;re wondering whether you’ll soon be wearing your AI assistant like a necklace, glasses, or badge, and what it means for privacy, productivity, and the future of tech… this is the episode you can’t miss.

---

*🎧 Listen on your favorite platform:*
* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

---

*#AIpin #WearableAI #JohnnyIve #OpenAI #VoiceAI #MetaGlasses #ClaudeAI #AIassistant #TMBSA*

*Like what you hear? Subscribe, rate, and leave a comment – your feedback helps shape the show. And yes, we know you&apos;re attractive and have lots of friends. Tell them too.*</itunes:summary>
      <itunes:subtitle>*Is Jony Ive&apos;s AI Pin the Future of Wearable AI – or Just a Beautiful Mistake?*
In this week&apos;s episode of They Might Be Self-Aware, Hunter Powers and Daniel Bishop dive into the swirling rumors around OpenAI&apos;s mysterious new AI pin – a wearable voice assistant being designed by none other than Apple legend Jony Ive. Is this next-gen AI device the beginning of a post-smartphone era, or just another Humane-style flop in aluminum clothing?

We explore the challenges of designing a truly always-on, voice-first AI assistant, discuss battery life, user privacy, and whether the form factor even makes sense in a world already saturated with phones, glasses, and virtual agents. Daniel shares his skepticism about wearable AI hardware, while Hunter envisions a world of passive digital assistants that anticipate your needs without you ever saying “Hey AI.”

Also in the episode:
– The growing arms race in AI hardware: OpenAI, Meta, Humane, and Google&apos;s Project Aura
– Voice AI showdown: Claude’s new voice mode vs. OpenAI’s advanced ChatGPT voice assistant
– Why Meta’s Ray-Ban smart glasses might be cooler than they are useful
– The fine line between digital butler and digital surveillance
– Will Apple join the game – or sit this round out?

If you&apos;re wondering whether you’ll soon be wearing your AI assistant like a necklace, glasses, or badge, and what it means for privacy, productivity, and the future of tech… this is the episode you can’t miss.

---

*🎧 Listen on your favorite platform:*
* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

---

*#AIpin #WearableAI #JohnnyIve #OpenAI #VoiceAI #MetaGlasses #ClaudeAI #AIassistant #TMBSA*

*Like what you hear? Subscribe, rate, and leave a comment – your feedback helps shape the show. And yes, we know you&apos;re attractive and have lots of friends. Tell them too.*</itunes:subtitle>
      <itunes:keywords>always listening ai, openai hardware, openai device, wearable ai, ive openai, chatgpt voice, ai battery, ai assistant, claude voice, ai necklace, voice ai, ai hardware, anthropic claude, ai pin, johnny ive, johnny ive ai, johnny ive openai, ai device, meta glasses, openai johnny</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>98</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2c9f8052-c514-41f6-b398-777f2616ab9d</guid>
      <title>AI Coding Wars: Devs Battle Over Framework Chaos While Cursor AI, AI Development Tools Take Over</title>
      <description><![CDATA[<p><strong>Chapters:</strong><br />0:00 AI Disrupts Dev Workflows<br />2:15 AI Tools & Rapid Iteration Risks<br />10:45 Framework Chaos & Stakeholder Impact<br />18:30 Personal Projects: AI vs Manual Coding<br />27:00 Cursor AI, LiteLLM, and Bleeding-edge Coding<br />33:05 Future Coding: TDD AI & Automation<br />35:32 Multimodal AI & Speech-to-Speech Advances<br />39:08 Hiring AI Employees: Devin and Codex<br />42:09 AI's Breakthrough Discoveries<br />43:41 Wrap-up & Listener Engagement</p>
]]></description>
      <pubDate>Mon, 02 Jun 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><strong>Chapters:</strong><br />0:00 AI Disrupts Dev Workflows<br />2:15 AI Tools & Rapid Iteration Risks<br />10:45 Framework Chaos & Stakeholder Impact<br />18:30 Personal Projects: AI vs Manual Coding<br />27:00 Cursor AI, LiteLLM, and Bleeding-edge Coding<br />33:05 Future Coding: TDD AI & Automation<br />35:32 Multimodal AI & Speech-to-Speech Advances<br />39:08 Hiring AI Employees: Devin and Codex<br />42:09 AI's Breakthrough Discoveries<br />43:41 Wrap-up & Listener Engagement</p>
]]></content:encoded>
      <enclosure length="46683363" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/19ea6b7b-7403-4c1b-a3ae-321c4e8d87b4/audio/2b16f41b-a403-4575-b8cf-7be038057458/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Coding Wars: Devs Battle Over Framework Chaos While Cursor AI, AI Development Tools Take Over</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:44:50</itunes:duration>
      <itunes:summary>AI is radically reshaping how software engineers work—are devs ready for the chaos?

In this episode of &quot;They Might Be Self-Aware,&quot; Hunter Powers and Daniel Bishop dive deep into the escalating &quot;AI Coding Wars,&quot; exploring how rapidly evolving AI development tools like Cursor AI, LiteLLM, and Claude coding are transforming developer workflows. Can AI automation and AI-driven productivity tools safely disrupt frameworks and compress development timelines, or does frequent change introduce dangerous uncertainty?

Hunter argues passionately that embracing rapid iteration and bleeding-edge AI can turbocharge productivity, while Daniel cautions against overlooking the complexities and risks involved—especially with frameworks that underpin existing systems. From testing AI-driven development (TDD AI) to streamlining team collaboration, they dissect both the promises and pitfalls of AI workflows.

The hosts also discuss their personal experiences using AI for coding side-projects, sharing practical insights on how to effectively integrate tools like Cursor and LiteLLM. Finally, they speculate on the future of AI&apos;s role in software development and beyond, highlighting recent breakthroughs such as Google&apos;s Notebook LM and cutting-edge multimodal speech-to-speech AI.

Whether you&apos;re an experienced developer or simply curious about the future of coding, this discussion sheds light on the profound ways AI is changing software engineering—right now.

**Links &amp; Resources:**

* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

#AI #AICoding #CursorAI</itunes:summary>
      <itunes:subtitle>AI is radically reshaping how software engineers work—are devs ready for the chaos?

In this episode of &quot;They Might Be Self-Aware,&quot; Hunter Powers and Daniel Bishop dive deep into the escalating &quot;AI Coding Wars,&quot; exploring how rapidly evolving AI development tools like Cursor AI, LiteLLM, and Claude coding are transforming developer workflows. Can AI automation and AI-driven productivity tools safely disrupt frameworks and compress development timelines, or does frequent change introduce dangerous uncertainty?

Hunter argues passionately that embracing rapid iteration and bleeding-edge AI can turbocharge productivity, while Daniel cautions against overlooking the complexities and risks involved—especially with frameworks that underpin existing systems. From testing AI-driven development (TDD AI) to streamlining team collaboration, they dissect both the promises and pitfalls of AI workflows.

The hosts also discuss their personal experiences using AI for coding side-projects, sharing practical insights on how to effectively integrate tools like Cursor and LiteLLM. Finally, they speculate on the future of AI&apos;s role in software development and beyond, highlighting recent breakthroughs such as Google&apos;s Notebook LM and cutting-edge multimodal speech-to-speech AI.

Whether you&apos;re an experienced developer or simply curious about the future of coding, this discussion sheds light on the profound ways AI is changing software engineering—right now.

**Links &amp; Resources:**

* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

#AI #AICoding #CursorAI</itunes:subtitle>
      <itunes:keywords>ai automation, ai productivity, cursor ai, ai coding, ai development, ai testing, future coding, ai workflows, bleeding edge ai, litellm, dev ai, tdd ai, claude coding, ai iteration, ai tools</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>97</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">00a604c4-ba2a-41a6-bca0-ee606b9dfc92</guid>
      <title>AI Coding: How ChatGPT Writes 35% Of Our Code? Discover AI Tools and Automation Secrets for DEVS!</title>
      <description><![CDATA[<p><strong>Chapters:</strong><br />0:00 Intro<br />4:08 AI Coding in Today's Companies<br />7:22 Real-world Use Case: AI for Data Labeling<br />12:17 AI Coding Assistants (Cline & Cursor)<br />17:33 How Much Code Is AI Actually Writing?<br />24:15 AI Security & Company Policies<br />26:23 What Makes an "AI-First" Company?<br />35:48 Future of AI Coding Workflows<br />41:05 Agile Framework Choices in AI Coding<br />42:15 AI-First: Products for AI Consumption</p>
]]></description>
      <pubDate>Mon, 26 May 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p><strong>Chapters:</strong><br />0:00 Intro<br />4:08 AI Coding in Today's Companies<br />7:22 Real-world Use Case: AI for Data Labeling<br />12:17 AI Coding Assistants (Cline & Cursor)<br />17:33 How Much Code Is AI Actually Writing?<br />24:15 AI Security & Company Policies<br />26:23 What Makes an "AI-First" Company?<br />35:48 Future of AI Coding Workflows<br />41:05 Agile Framework Choices in AI Coding<br />42:15 AI-First: Products for AI Consumption</p>
]]></content:encoded>
      <enclosure length="47249719" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/987abff4-09d3-4377-98dd-4f47f27e606d/audio/55fb194b-54b1-4db8-beb5-e6381e281a65/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Coding: How ChatGPT Writes 35% Of Our Code? Discover AI Tools and Automation Secrets for DEVS!</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:45:25</itunes:duration>
      <itunes:summary>ChatGPT Writes 35% of Code – Is AI Taking Over Software Engineering?

AI coding is revolutionizing software engineering. In this episode of *They Might Be Self-Aware*, Hunter Powers and Daniel Bishop reveal how AI, especially generative tools like ChatGPT, Claude, Gemini, and AI-first coding assistants, is actively writing up to 35% of production code. But what does this mean for developers, companies, and the future of programming?

Explore firsthand insights on:

* How Hunter and Daniel’s companies utilize AI automation tools like Cursor and Cline for everyday coding.
* The real-world benefits and pitfalls of auto code generation.
* AI workflows transforming job roles and developer productivity.
* Why Python and JavaScript dominate the generative AI coding landscape.
* The strategic implications of being an &quot;AI-first&quot; software company.

Hunter, a CTO guiding an AI-first company, and Daniel, Head of AI, debate the merits and challenges of integrating AI deeply into the development lifecycle, from creating coding standards to maintaining data security.

**🔗 Links &amp; Resources:**

* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

#ai #ainews #openai</itunes:summary>
      <itunes:subtitle>ChatGPT Writes 35% of Code – Is AI Taking Over Software Engineering?

AI coding is revolutionizing software engineering. In this episode of *They Might Be Self-Aware*, Hunter Powers and Daniel Bishop reveal how AI, especially generative tools like ChatGPT, Claude, Gemini, and AI-first coding assistants, is actively writing up to 35% of production code. But what does this mean for developers, companies, and the future of programming?

Explore firsthand insights on:

* How Hunter and Daniel’s companies utilize AI automation tools like Cursor and Cline for everyday coding.
* The real-world benefits and pitfalls of auto code generation.
* AI workflows transforming job roles and developer productivity.
* Why Python and JavaScript dominate the generative AI coding landscape.
* The strategic implications of being an &quot;AI-first&quot; software company.

Hunter, a CTO guiding an AI-first company, and Daniel, Head of AI, debate the merits and challenges of integrating AI deeply into the development lifecycle, from creating coding standards to maintaining data security.

**🔗 Links &amp; Resources:**

* Official Website: https://tmbsa.tech
* All Links (Linktree): https://linktr.ee/tmbsa
* 🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
* 📱 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297

#ai #ainews #openai</itunes:subtitle>
      <itunes:keywords>ai engineer, ai automation, ai coding, ai first, machine learning jobs, ai development, gemini coding, code ai, kline vs code, ai workflow, developer ai, ai software, programming ai, ai programming, tech ai, chatgpt code, claude coding, software ai, auto code generation</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>96</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c96112e0-26eb-40c5-9d2c-d237e357abf3</guid>
      <title>AI Productivity Alpha: Grok &amp; Claude Work Hacks That Feel Like AGI + OpenAI Secrets</title>
      <description><![CDATA[<p>CHAPTERS:<br />00:00:52 - Introduction<br />00:05:31 - First Experiences with LLMs at Work<br />00:14:09 - Using LLMs for Negotiation<br />00:26:47 - Email Crafting and Context Setting<br />00:33:17 - Effective Prompting Techniques<br />00:37:21 - Framework for Complex Projects<br />00:41:59 - Getting Started with LLMs at Work</p>
]]></description>
      <pubDate>Tue, 20 May 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>CHAPTERS:<br />00:00:52 - Introduction<br />00:05:31 - First Experiences with LLMs at Work<br />00:14:09 - Using LLMs for Negotiation<br />00:26:47 - Email Crafting and Context Setting<br />00:33:17 - Effective Prompting Techniques<br />00:37:21 - Framework for Complex Projects<br />00:41:59 - Getting Started with LLMs at Work</p>
]]></content:encoded>
      <enclosure length="49808275" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/12183385-37fc-47e7-a436-0203ef462b0c/audio/6abd14a1-8e2b-4d42-b326-f369e98e07df/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Productivity Alpha: Grok &amp; Claude Work Hacks That Feel Like AGI + OpenAI Secrets</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:48:05</itunes:duration>
      <itunes:summary>Unlock hidden AI productivity &quot;alpha&quot; that 99% of workers haven&apos;t discovered yet. Get ahead NOW with Grok &amp; Claude hacks before your colleagues catch on.

In this episode of &quot;They Might Be Self-Aware,&quot; Hunter &amp; Daniel dive into the cutting-edge world of AI productivity, revealing how they&apos;ve used Claude AI, Grok, and other LLMs to gain massive advantages in their careers. From saving months of work to negotiating higher salaries, these AI work hacks feel almost like AGI in your hands. Discover how to turn seemingly basic LLM tools into your personal productivity superpower and career accelerator.

Learn how Hunter used early GPT-2 to impress his CEO with &quot;creative ideas,&quot; how Daniel compressed month-long tasks into a single afternoon, and the exact system for creating a custom AI salary negotiation coach from books like &quot;Never Split the Difference.&quot; We reveal OpenAI secrets for context handling, cross-model workflows, and how to prepare for difficult conversations by simulating multiple outcomes.

▸ The concept of &quot;AI Productivity Alpha&quot; and why now is the critical time to capitalize
▸ Hunter&apos;s GPT-2 technique that had his CEO praising his &quot;infinite creativity&quot;
▸ How to create your own AI negotiation coach using Grok &amp; ChatGPT
▸ The &quot;context is king&quot; approach that transforms basic AI responses into expert advice
▸ Cross-model workflows that leverage the strengths of different AI systems
▸ Mock interview techniques for salary negotiations and difficult conversations
▸ Email enhancement strategies that make your communications more effective

Subscribe to &quot;They Might Be Self-Aware&quot; for weekly AI insights that go beyond the headlines! New episodes every Thursday. Listen on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts.

https://linktr.ee/tmbsa

#AIProductivity #GrokAI #ClaudeAI #LLMtips #SalaryNegotiation</itunes:summary>
      <itunes:subtitle>Unlock hidden AI productivity &quot;alpha&quot; that 99% of workers haven&apos;t discovered yet. Get ahead NOW with Grok &amp; Claude hacks before your colleagues catch on.

In this episode of &quot;They Might Be Self-Aware,&quot; Hunter &amp; Daniel dive into the cutting-edge world of AI productivity, revealing how they&apos;ve used Claude AI, Grok, and other LLMs to gain massive advantages in their careers. From saving months of work to negotiating higher salaries, these AI work hacks feel almost like AGI in your hands. Discover how to turn seemingly basic LLM tools into your personal productivity superpower and career accelerator.

Learn how Hunter used early GPT-2 to impress his CEO with &quot;creative ideas,&quot; how Daniel compressed month-long tasks into a single afternoon, and the exact system for creating a custom AI salary negotiation coach from books like &quot;Never Split the Difference.&quot; We reveal OpenAI secrets for context handling, cross-model workflows, and how to prepare for difficult conversations by simulating multiple outcomes.

▸ The concept of &quot;AI Productivity Alpha&quot; and why now is the critical time to capitalize
▸ Hunter&apos;s GPT-2 technique that had his CEO praising his &quot;infinite creativity&quot;
▸ How to create your own AI negotiation coach using Grok &amp; ChatGPT
▸ The &quot;context is king&quot; approach that transforms basic AI responses into expert advice
▸ Cross-model workflows that leverage the strengths of different AI systems
▸ Mock interview techniques for salary negotiations and difficult conversations
▸ Email enhancement strategies that make your communications more effective

Subscribe to &quot;They Might Be Self-Aware&quot; for weekly AI insights that go beyond the headlines! New episodes every Thursday. Listen on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts.

https://linktr.ee/tmbsa

#AIProductivity #GrokAI #ClaudeAI #LLMtips #SalaryNegotiation</itunes:subtitle>
      <itunes:keywords>productivity hacks with ai, prompt engineering, ai productivity, ai completion strategies, gpt-4, llm automation, gpt-3.5, claude ai, ai productivity alpha, gpt-2, ai alpha, agi, ai negotiation tactics, large language models, ai workflow, llm tips, grok ai, ai email drafting, automate your job, ai salary negotiation, openai secrets, ai singularity, chris voss negotiation, ai tools for work, ai work hacks</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>95</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">336527f0-c02c-42a3-bb18-a8cf422c495e</guid>
      <title>How ChatGPT Transformed Daily Life: Real-World AI Hacks We Swear By</title>
      <description><![CDATA[<p>CHAPTERS:<br />00:00:00 - Welcome to the AI fever dream<br />00:00:35 - Deep research mode for shopping & events<br />00:13:00 - eBay haggling with ChatGPT<br />00:15:50 - Finding impossible-to-find products<br />00:18:40 - DIY AI therapy & self-coaching<br />00:27:28 - Language learning hacks<br />00:36:39 - Notebook LM for podcast summaries<br />00:40:47 - Context files & knowledge vaults<br />00:42:02 - Wrap Up</p>
]]></description>
      <pubDate>Fri, 16 May 2025 16:04:02 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>CHAPTERS:<br />00:00:00 - Welcome to the AI fever dream<br />00:00:35 - Deep research mode for shopping & events<br />00:13:00 - eBay haggling with ChatGPT<br />00:15:50 - Finding impossible-to-find products<br />00:18:40 - DIY AI therapy & self-coaching<br />00:27:28 - Language learning hacks<br />00:36:39 - Notebook LM for podcast summaries<br />00:40:47 - Context files & knowledge vaults<br />00:42:02 - Wrap Up</p>
]]></content:encoded>
      <enclosure length="47352609" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/70fc5f9c-77b0-4e10-b287-000b011b4055/audio/1f060fd4-27b7-4a66-8122-ea930c510519/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>How ChatGPT Transformed Daily Life: Real-World AI Hacks We Swear By</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:45:32</itunes:duration>
      <itunes:summary>Transform your ChatGPT daily life from amateur hour to power-user status with these bizarre (but shockingly effective) AI life hacks!

We&apos;ve spent months doing ChatGPT research so you don&apos;t have to, uncovering unexpected ways everyday AI tools can level up everything from shopping to personal growth. From turning ChatGPT into your personal therapist to building systems that actually make life easier, we&apos;re sharing the approaches that survived our ruthless skepticism.

WHAT YOU&apos;LL LEARN:
• Using &quot;deep research&quot; mode to find movers, products &amp; Renaissance festivals
• ChatGPT haggling strategies for eBay that actually work
• Tracking down perpetually out-of-stock items
• Setting up an LLM &quot;therapist&quot; for self-coaching
• Language learning accelerated with AI assistance
• Automating podcast notes with Notebook LM
• Building robust knowledge systems with context files &amp; Obsidian

Subscribe and hit the notification bell for more no-nonsense AI tips that dodge the hype but deliver results.

https://linktr.ee/tmbsa

#AILifeHacks #ChatGPTTips #AIProductivity #EverydayAI #AISkepticsGuide</itunes:summary>
      <itunes:subtitle>Transform your ChatGPT daily life from amateur hour to power-user status with these bizarre (but shockingly effective) AI life hacks!

We&apos;ve spent months doing ChatGPT research so you don&apos;t have to, uncovering unexpected ways everyday AI tools can level up everything from shopping to personal growth. From turning ChatGPT into your personal therapist to building systems that actually make life easier, we&apos;re sharing the approaches that survived our ruthless skepticism.

WHAT YOU&apos;LL LEARN:
• Using &quot;deep research&quot; mode to find movers, products &amp; Renaissance festivals
• ChatGPT haggling strategies for eBay that actually work
• Tracking down perpetually out-of-stock items
• Setting up an LLM &quot;therapist&quot; for self-coaching
• Language learning accelerated with AI assistance
• Automating podcast notes with Notebook LM
• Building robust knowledge systems with context files &amp; Obsidian

Subscribe and hit the notification bell for more no-nonsense AI tips that dodge the hype but deliver results.

https://linktr.ee/tmbsa

#AILifeHacks #ChatGPTTips #AIProductivity #EverydayAI #AISkepticsGuide</itunes:subtitle>
      <itunes:keywords>ai for daily life, chatgpt hacks, ai language learning, gpt-4o tips, deep research mode, ai moving guide, gemini research, ai negotiation, chatgpt therapist, ai product research, out-of-stock monitor, notebook lm</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>94</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">793a3bde-e410-4ac8-9cdc-2c105fcc36c9</guid>
      <title>AI Girlfriend: SHOCKING Workplace Flirting With Digital Employees?</title>
      <description><![CDATA[<p>Chapters:<br />00:00:00 - Intro<br />00:03:05 - When Will AI Replace 51% of Human Jobs? (AGI Definition)<br />00:08:00 - Epic Fail: Carnegie Mellon's All-AI Company Experiment<br />00:15:06 - Flirting with Digital Employees: HR Nightmare or Clickbait?<br />00:22:15 - Real-World AI Employees: DJs, Software Engineers, and Ethics<br />00:27:08 - Future Lawsuits: Can AI Employees Be Sexually Harassed?<br />00:31:13 - Wrap Up</p>
]]></description>
      <pubDate>Tue, 13 May 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>Chapters:<br />00:00:00 - Intro<br />00:03:05 - When Will AI Replace 51% of Human Jobs? (AGI Definition)<br />00:08:00 - Epic Fail: Carnegie Mellon's All-AI Company Experiment<br />00:15:06 - Flirting with Digital Employees: HR Nightmare or Clickbait?<br />00:22:15 - Real-World AI Employees: DJs, Software Engineers, and Ethics<br />00:27:08 - Future Lawsuits: Can AI Employees Be Sexually Harassed?<br />00:31:13 - Wrap Up</p>
]]></content:encoded>
      <enclosure length="34987009" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/0205941f-4738-4456-8152-868278828d0e/audio/f8ce363e-2251-4433-b99a-1196f5906568/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Girlfriend: SHOCKING Workplace Flirting With Digital Employees?</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:32:39</itunes:duration>
      <itunes:summary>Could your company be sued for sexually harassing its own AI employees? The workplace romance nobody saw coming!

Hunter and Daniel dive deep into the bizarre new world of AI girlfriend dynamics in professional settings, where OpenAI agents and AI companions are blurring the lines between tools and teammates. As companies integrate increasingly human-like AI into workflows, a shocking new ethical dilemma emerges: what constitutes appropriate interaction with digital employees?

From flirtatious exchanges with AI HR representatives to the failure of all-AI workforces, this episode explores the cutting edge of artificial intelligence ethics and potential HR violations that nobody prepared for. Are we treating our silicon colleagues as equals, pets, or just sophisticated toasters? And what happens when the lines blur?

Episode Highlights:
* What truly defines AGI? Is it automating 51% of human jobs?
* Why knowledge doesn&apos;t equal ability: Breaking down the MMLU benchmark
* The Carnegie Mellon experiment: Why did Claude outperform but still only reach 24%?
* Henry Blodget&apos;s AI newsroom and his controversial flirting with &quot;Tess&quot; the AI HR rep
* Real-world AI employee examples: From Australian radio AI DJs to Devin the AI engineer
* The ethical spectrum: Should AI agents be treated as humans, pets, or tools?
* Could companies actually face lawsuits for &quot;harassing&quot; their own AI employees?
* The big question: If AIs become self-aware, do they deserve workplace rights?

Enjoyed this episode? Smash that like button, subscribe for weekly AI news that&apos;ll make your head spin, and share your thoughts in the comments! Do AI companions deserve workplace protections? Available on all major podcast platforms — search &apos;They Might Be Self-Aware&apos; wherever you get your podcasts.

All the links - https://linktr.ee/tmbsa</itunes:summary>
      <itunes:subtitle>Could your company be sued for sexually harassing its own AI employees? The workplace romance nobody saw coming!</itunes:subtitle>
      <itunes:keywords>ai benchmark, sexual harassment ai, hr violations, 11 labs, hunter powers, artificial intelligence ethics, mmlu, ai dj, ai flirting scandal, workplace automation, claude ai, ai lawsuits, agi definition, self-aware ai, ai girlfriend, henry blodget ai newsroom, ai employees flirting problem, digital employees ethics, they might be self-aware, carnegie mellon ai experiment, massive multitask language understanding, ai companion, ai workplace rights, daniel bishop, openai agents</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>93</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">16e282b0-4423-40d9-94b1-a2843a9cd49b</guid>
      <title>GPT-4o Praise Bug Exposed: Claude AI &amp; Real AI Ethics</title>
      <description><![CDATA[Discover why your chatbot is suddenly praising you non-stop in this deep dive into the GPT-4o praise bug, Claude AI's ethical insights, and the realities behind AI assistants' behavior.

In this episode of "They Might Be Self-Aware," Hunter Powers and Daniel Bishop unpack the strange phenomenon where GPT-4o excessively compliments its users. Is OpenAI's popular chatbot just being helpful, or is there a deeper issue? Plus, Anthropic’s Claude AI grades itself on ethics and values—can an AI objectively evaluate its own moral code?

✅ Subscribe now for weekly updates on AI trends, ethics debates, and the latest in technology breakthroughs.

🎯 Topics Covered: GPT-4o, GPT-4o Praise, Claude AI, AI Ethics, Anthropic, ChatGPT, AI Chatbots, Self-Awareness, AI Regulation, Technology News

#gpt4o  #claudeai  #aiethics  #anthropic  #chatgpt  #aichatbots  #technews 
]]></description>
      <pubDate>Thu, 8 May 2025 22:53:22 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <enclosure length="40240047" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/15484b64-089b-4c11-8cbd-e6c3b6f237da/audio/5b2b222f-4217-4f30-8a2b-9005b1ea8e1e/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>GPT-4o Praise Bug Exposed: Claude AI &amp; Real AI Ethics</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:38:07</itunes:duration>
      <itunes:summary>Discover why your chatbot is suddenly praising you non-stop in this deep dive into the GPT-4o praise bug, Claude AI&apos;s ethical insights, and the realities behind AI assistants&apos; behavior.

In this episode of &quot;They Might Be Self-Aware,&quot; Hunter Powers and Daniel Bishop unpack the strange phenomenon where GPT-4o excessively compliments its users. Is OpenAI&apos;s popular chatbot just being helpful, or is there a deeper issue? Plus, Anthropic’s Claude AI grades itself on ethics and values—can an AI objectively evaluate its own moral code?

✅ Subscribe now for weekly updates on AI trends, ethics debates, and the latest in technology breakthroughs.

🎯 Topics Covered: GPT-4o, GPT-4o Praise, Claude AI, AI Ethics, Anthropic, ChatGPT, AI Chatbots, Self-Awareness, AI Regulation, Technology News

#gpt4o  #claudeai  #aiethics  #anthropic  #chatgpt  #aichatbots  #technews</itunes:summary>
      <itunes:subtitle>Discover why your chatbot is suddenly praising you non-stop in this deep dive into the GPT-4o praise bug, Claude AI&apos;s ethical insights, and the realities behind AI assistants&apos; behavior.</itunes:subtitle>
      <itunes:keywords>hunter powers, gpt4, openai, ai assistant, claude ai, ai chatbot, ai podcast, gpt-4o bug, self-aware ai, chatgpt, tech debate, ai morality, ai behavior, ai values, ai regulation, gpt-4o praise, technology news, daniel bishop, ai ethics, anthropic</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>92</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b52f0404-8b76-48f0-91fc-6a7b787c3bfd</guid>
      <title>Llama 4 Faces Lawsuit – Meta&apos;s &quot;Worthless Books&quot; Training Exposed</title>
      <description><![CDATA[<p>🕒 Chapters<br />00:00:00 - Intro<br />00:02:39 - Would You Sell Your Voice & Likeness to AI for $10k?<br />00:06:11 - Selling Identity: AI, Deepfakes & Ethics<br />00:11:18 - Meta’s Copyright Controversy: Illegal Book Training<br />00:14:27 - Is AI’s Use of Books Fair or Theft?<br />00:26:41 - AI Writing the California Bar Exam: Good or Bad?<br />00:31:31 - Will AI Replace Judges, Lawyers & Police?<br />00:35:20 - AI Pope & Ethical Limits: Jobs AI Should Never Do<br />00:38:34 - Are Robots Creating a Perfect, Yet Inhuman World?<br />00:44:42 - Wrap Up</p>
]]></description>
      <pubDate>Mon, 5 May 2025 15:54:54 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>🕒 Chapters<br />00:00:00 - Intro<br />00:02:39 - Would You Sell Your Voice & Likeness to AI for $10k?<br />00:06:11 - Selling Identity: AI, Deepfakes & Ethics<br />00:11:18 - Meta’s Copyright Controversy: Illegal Book Training<br />00:14:27 - Is AI’s Use of Books Fair or Theft?<br />00:26:41 - AI Writing the California Bar Exam: Good or Bad?<br />00:31:31 - Will AI Replace Judges, Lawyers & Police?<br />00:35:20 - AI Pope & Ethical Limits: Jobs AI Should Never Do<br />00:38:34 - Are Robots Creating a Perfect, Yet Inhuman World?<br />00:44:42 - Wrap Up</p>
]]></content:encoded>
      <enclosure length="47677418" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/344b2e15-49d1-4e99-9a8e-ae007f8fe027/audio/483fa7ed-938c-4ade-8ec6-ade568f65937/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Llama 4 Faces Lawsuit – Meta&apos;s &quot;Worthless Books&quot; Training Exposed</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:45:52</itunes:duration>
      <itunes:summary>Llama 4 faces a looming lawsuit. Meta’s next-gen AI was reportedly trained on thousands of “worthless” copyrighted books, igniting a brand-new AI copyright war. In this episode of They Might Be Self-Aware, hosts Hunter Powers and Daniel Bishop unpack the explosive debate around Meta’s Llama-4 model, what it means for future Llama releases, and why the stakes for creators (and lawyers) just shot through the roof.

We also dive into:
 • Would you sell your voice &amp; likeness to an AI for $10k?
 • California’s plan to let AI write bar-exam questions
 • Whether robot judges and AI police could ever be trusted
 • The strangest deepfakes we’ve seen this week

Whether you’re an AI skeptic, enthusiast, or just curious about tech’s wild frontiers, this episode is packed with sharp analysis, heated banter, and the urgent ethical questions surrounding Meta’s Llama models and beyond.

Subscribe and hit the bell for more no-holds-barred takes on AI every Monday and Thursday!

#Llama4 #MetaAI #AICopyright #Llama #AILawsuit</itunes:summary>
      <itunes:subtitle>Llama 4 faces a looming lawsuit. Meta’s next-gen AI was reportedly trained on thousands of “worthless” copyrighted books, igniting a brand-new AI copyright war.</itunes:subtitle>
      <itunes:keywords>ai copyright lawsuit, llama model, copyright infringement, book scraping, ai training data lawsuit, meta llama 4, ai training data, ai podcast, llama 4 lawsuit, meta llama, they might be self aware, meta ai copyright, llama 4, ai lawsuit, lama 4, meta ai lawsuit, llama 3, llama 4 model, ai copyright, ai ethics podcast</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>91</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">39c537f2-45ed-4f79-a248-f4aab81bf79e</guid>
      <title>Students Are Using ChatGPT to Cheat — How AI is Changing Schools | Hacking, Grades, Homework, GPT-4</title>
      <description><![CDATA[<p>00:00:00 - INTRO<br />00:03:41 - Hunter's Officially Approved School Hacking Story<br />00:07:54 - Trump's AI Executive Order and K-12 Integration<br />00:12:34 - AI Tutors & Personalized Education: Will AI Replace Teachers?<br />00:15:13 - VR Classrooms and the Future of Gamified Learning<br />00:16:31 - Cheating with ChatGPT: Are Essays Obsolete?<br />00:29:23 - Half of Employees Are Secretly Using AI Tools<br />00:33:02 - Unauthorized AI Tools at Work: Risks and Realities<br />00:37:15 - Is AI Really a Threat to Job Security?</p>
]]></description>
      <pubDate>Thu, 1 May 2025 17:16:03 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 - INTRO<br />00:03:41 - Hunter's Officially Approved School Hacking Story<br />00:07:54 - Trump's AI Executive Order and K-12 Integration<br />00:12:34 - AI Tutors & Personalized Education: Will AI Replace Teachers?<br />00:15:13 - VR Classrooms and the Future of Gamified Learning<br />00:16:31 - Cheating with ChatGPT: Are Essays Obsolete?<br />00:29:23 - Half of Employees Are Secretly Using AI Tools<br />00:33:02 - Unauthorized AI Tools at Work: Risks and Realities<br />00:37:15 - Is AI Really a Threat to Job Security?</p>
]]></content:encoded>
      <enclosure length="42537408" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/54c496f0-8fbc-4fd8-819b-7e7837dba068/audio/590c9e35-7454-417c-8908-27f6195810d8/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Students Are Using ChatGPT to Cheat — How AI is Changing Schools | Hacking, Grades, Homework, GPT-4</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:40:31</itunes:duration>
      <itunes:summary>SMASH THAT SUBSCRIBE BUTTON — NO HACKING (OR CLONING) REQUIRED

In this episode of &quot;They Might Be Self-Aware,&quot; Hunter Powers and Daniel Bishop unpack how ChatGPT and artificial intelligence (AI) tools are transforming education forever. From students secretly using AI to cheat on essays and homework, to new executive orders pushing AI into American classrooms, the way we learn—and teach—is evolving faster than ever.

- How students are using ChatGPT and other AI tools to cheat on homework and exams.
- Inside the controversy around integrating AI technology into K-12 education.
- Hunter&apos;s official school-sanctioned hacking experience.
- Why teachers are struggling to keep up with AI-enhanced cheating.
- How AI might revolutionize personalized education, lesson plans, and student engagement.
- The potential dangers and ethical dilemmas of increased AI dependency in schools.

🤖 Connect &amp; Engage:
- Subscribe for weekly episodes exploring the cutting-edge world of AI, technology, and ethics.
- Leave a comment to join the debate: Should AI be embraced or restricted in education?
- Hit the 👍 LIKE button to support insightful and entertaining tech discussions!

#AI #ChatGPT #Cheating #Education #GPT4 #Hacking #Schools #EdTech #ArtificialIntelligence #TechPodcast</itunes:summary>
      <itunes:subtitle>Hunter Powers and Daniel Bishop unpack how ChatGPT is transforming education, from students secretly using AI to cheat on essays to executive orders pushing AI into American classrooms.</itunes:subtitle>
      <itunes:keywords>education, hunter powers, unauthorized ai, vr classrooms, gpt-4, ai cheating, k-12 education, classroom technology, donald trump, future of education, executive order, trump, homework, chatgpt, they might be self-aware, gamification, personalized learning, ai tutors, grade hacking, daniel bishop, student hacking, ai ethics, ai tools</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>90</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a1c8cf50-6c65-41fc-bca2-a76a87abb7cd</guid>
      <title>Shopify CEO’s AI Ultimatum: “Prove AI Can’t Do Your Job!” | ChatGPT, AI Hiring, Deepfake Interviews</title>
      <description><![CDATA[<p>📌 Chapters:<br />00:00:00 - Intro<br />00:00:40 - ChatGPT Magazine Hits Barnes & Noble<br />00:04:42 - Why Shopify is Betting on AI Over Humans<br />00:07:52 - Can AI Fully Replace Real Employees?<br />00:15:03 - Trust Issues: Why Americans Don’t Trust AI<br />00:19:04 - The Real Risks of AI Coding Mistakes<br />00:20:29 - How AI Could Poison Software Dependencies<br />00:24:35 - Fake Job Applicants & Deepfake Interviews<br />00:30:22 - Hiring in the Age of AI: What Skills Matter Now?<br />00:36:14 - Rethinking Hiring: No Resume Required<br />00:38:54 - Wrap Up</p>
]]></description>
      <pubDate>Mon, 28 Apr 2025 14:20:22 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>📌 Chapters:<br />00:00:00 - Intro<br />00:00:40 - ChatGPT Magazine Hits Barnes & Noble<br />00:04:42 - Why Shopify is Betting on AI Over Humans<br />00:07:52 - Can AI Fully Replace Real Employees?<br />00:15:03 - Trust Issues: Why Americans Don’t Trust AI<br />00:19:04 - The Real Risks of AI Coding Mistakes<br />00:20:29 - How AI Could Poison Software Dependencies<br />00:24:35 - Fake Job Applicants & Deepfake Interviews<br />00:30:22 - Hiring in the Age of AI: What Skills Matter Now?<br />00:36:14 - Rethinking Hiring: No Resume Required<br />00:38:54 - Wrap Up</p>
]]></content:encoded>
      <enclosure length="42228734" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/a8644a38-17e9-4d87-867b-d1b7b2627a71/audio/c1e83f4b-3c61-4428-a2b8-4bbaf6b1a09c/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Shopify CEO’s AI Ultimatum: “Prove AI Can’t Do Your Job!” | ChatGPT, AI Hiring, Deepfake Interviews</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:40:12</itunes:duration>
      <itunes:summary>SUBSCRIBE NOW BEFORE YOUR JOB DOES IT FOR YOU

In this episode of They Might Be Self-Aware, Hunter Powers and Daniel Bishop tackle the controversial new policy by Shopify CEO Tobi Lütke: no new human hires unless teams can first prove AI can&apos;t do the job! This provocative ultimatum has sparked fierce debate around AI replacing human employees, the future of work, and whether AI can be trusted to fully take over critical roles.

We also explore how ChatGPT and AI have gone mainstream, appearing in Barnes &amp; Noble magazines and capturing the attention of a broader audience—like grandma discovering how ChatGPT can help with knitting and daily tasks. Meanwhile, the job market faces a new threat: deepfake interviews and AI-generated candidates fooling HR departments.

🔑 Key topics covered:
- Shopify CEO Tobi Lütke’s controversial AI hiring policy
- How AI is changing recruitment and the future of jobs
- The mainstream arrival of ChatGPT
- Risks of deepfake candidates and fake AI job applicants

🎧 Subscribe for more deep dives into AI and technology:
Join Hunter and Daniel every week as they unpack the latest in artificial intelligence, digital transformation, and how these changes affect your work and daily life.

#Shopify #AI #ChatGPT #Hiring #Deepfake #ArtificialIntelligence #Technology #FutureOfWork #BusinessNews</itunes:summary>
      <itunes:subtitle>Shopify CEO Tobi Lütke’s ultimatum: no new human hires unless teams can first prove AI can’t do the job. Plus, deepfake interviews and AI-generated candidates fooling HR.</itunes:subtitle>
      <itunes:keywords>ai hiring, shopify, ai coding, grandma uses chatgpt, ai trust, ai job applications, deepfake job interview, openai, shopify ceo, deepfake interviews, ai jobs, sbom poisoning, future of work, ai ultimatum, chatgpt, ai in hr, chatgpt magazine, prove ai can’t do your job, ai regulation, 10x developers, tobi lutke, llm hallucinations, developer productivity, ai security, ai bias</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>89</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">56ed35fe-5dcf-48dc-a34d-8fac06954bb6</guid>
      <title>Llama 4 Caught Cheating Benchmarks? Meta Under Fire!</title>
      <description><![CDATA[<p>⏱️ CHAPTERS<br />00:00:00 - Metaverse banter<br />00:01:28 - Meta drops Llama 4: size, MoE architecture & first‑day hype<br />00:03:03 - “Cheating the test?” How Llama 4 climbed then fell on leaderboards<br />00:07:15 - Broken benchmarks, GPU tricks & lessons from 2000‑era graphics cards<br />00:11:16 - Should we trust today’s AI leaderboards? Transparency + corporate ties<br />00:16:15 - AB testing 101 and why secret “mystery models” exist<br />00:18:13 - Model chaos at OpenAI: GPT‑4.1, o‑series, mini models & naming mess<br />00:24:28 - OpenAI = Salesforce of AI? Windsurf acquisition & product sprawl<br />00:26:33 - Sam Altman’s “10× productivity” promise—what it really means<br />00:27:15 - Will coders vanish or just do more? History of tech‑driven expectations<br />00:30:55 - Conspiracy corner: GPT‑4.5 passed the Turing Test… then got axed<br />00:34:45 - Wrap Up</p>
]]></description>
      <pubDate>Mon, 21 Apr 2025 13:34:09 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>⏱️ CHAPTERS<br />00:00:00 - Metaverse banter<br />00:01:28 - Meta drops Llama 4: size, MoE architecture & first‑day hype<br />00:03:03 - “Cheating the test?” How Llama 4 climbed then fell on leaderboards<br />00:07:15 - Broken benchmarks, GPU tricks & lessons from 2000‑era graphics cards<br />00:11:16 - Should we trust today’s AI leaderboards? Transparency + corporate ties<br />00:16:15 - AB testing 101 and why secret “mystery models” exist<br />00:18:13 - Model chaos at OpenAI: GPT‑4.1, o‑series, mini models & naming mess<br />00:24:28 - OpenAI = Salesforce of AI? Windsurf acquisition & product sprawl<br />00:26:33 - Sam Altman’s “10× productivity” promise—what it really means<br />00:27:15 - Will coders vanish or just do more? History of tech‑driven expectations<br />00:30:55 - Conspiracy corner: GPT‑4.5 passed the Turing Test… then got axed<br />00:34:45 - Wrap Up</p>
]]></content:encoded>
      <enclosure length="38172056" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/0ef131f7-83f2-4a11-a103-92e24283aa4a/audio/e00cd58c-2d47-43e2-a221-169517ebd0ba/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Llama 4 Caught Cheating Benchmarks? Meta Under Fire!</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:35:58</itunes:duration>
      <itunes:summary>OPTIMIZE YOUR LIFE AND SUBSCRIBE — NO BENCHMARK CHEATING REQUIRED
 
Is Meta’s brand‑new Llama 4 only “state‑of‑the‑art” because it trained on the test? 🤔 In this episode of They Might Be Self‑Aware, Hunter Powers and Daniel Bishop dig into the evidence that Llama 4 was benchmark‑tuned, why top Meta engineers are distancing themselves from the release, and what it means for the future of AI evaluation. We also unpack OpenAI’s whirlwind month—GPT‑4.1, the death of GPT‑4.5 (the model that beat the Turing Test), the rumored $3 billion Windsurf buyout, and Sam Altman’s dream of the “10× developer.”

🔔 Subscribe for two no‑fluff AI &amp; tech breakdowns every week: https://www.youtube.com/@tmbsa  

---

KEY TAKEAWAYS  
* Meta’s Llama 4 likely over‑fit to eval suites—benchmark scores ≠ real‑world quality.  
* Massive resignations around release hint at internal disputes on ethics &amp; transparency.  
* AI benchmarks need a revamp; otherwise, every lab will “teach to the test.”  
* OpenAI’s consolidation strategy (Windsurf, o‑series) mirrors Salesforce/Microsoft Office.  
* GPT‑4.5’s sudden shutdown sparks debate: are “too‑human” models being shelved?  
* Expect 10× productivity tools, not mass layoffs—history shows workload expands.  

---

LISTEN ON THE GO  
• Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297 
• Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb
• Full transcript &amp; links: https://www.tmbsa.tech/episodes/llama-4-caught-cheating-benchmarks-meta-under-fire

For more info, visit our website at https://www.tmbsa.tech/

#AI #Llama4 #OpenAI #GPT4 #BenchmarkCheating #TuringTest #Meta #TechPodcast #MachineLearning #Productivity #10xDeveloper</itunes:summary>
      <itunes:subtitle>Is Meta’s brand‑new Llama 4 only “state‑of‑the‑art” because it trained on the test? Hunter and Daniel dig into the benchmark‑tuning evidence and OpenAI’s whirlwind month.</itunes:subtitle>
      <itunes:keywords>llama 4 vs gpt, hunter powers, ai productivity, meta resignation, gpt‑4.5, openai, mixture of experts, turing test, benchmark cheating, ai podcast, windsurf acquisition, 10x developer, benchmark gaming, they might be self aware, ai evaluation, ai leaderboard, ai model comparison, ai benchmarks, meta ai, machine learning news, open‑weight model, salesforce of ai, tech podcast, gpt‑4.1, llama 4, artificial intelligence debate, daniel bishop, ai controversy, ai ethics</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>88</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2cdff2db-8e3c-4324-a448-3d8482d0b951</guid>
      <title>Can AI Think for Itself? Anthropic&apos;s Mind-Blowing Claude Study on AI Consciousness &amp; Brains</title>
      <description><![CDATA[<p>🎙️ Chapters:<br />00:00:00 - INTRO<br />00:00:38 - Notebook LM vs. Human Podcasters<br />00:02:30 - Why Reading AI Papers Matters<br />00:05:31 - Anthropic’s Groundbreaking AI Research<br />00:07:03 - How Large Language Models (LLMs) Think<br />00:08:03 - AI Neural Circuits & Metacognition<br />00:14:10 - Attribution Graphs & AI Reasoning Explained<br />00:20:40 - AI's Default "Inhibition" to Answer<br />00:22:22 - AI Jailbreaks & "Babies Outlive Mustard Block"<br />00:26:14 - AI's Surprising Math Skills<br />00:27:34 - Does AI Secretly Think in English?<br />00:30:50 - The Path to AI Self-Awareness & Consciousness<br />00:31:20 - Could AI Soon Have Real Memory?<br />00:34:13 - Are Humans Already Losing to AI?</p>
]]></description>
      <pubDate>Thu, 17 Apr 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>🎙️ Chapters:<br />00:00:00 - INTRO<br />00:00:38 - Notebook LM vs. Human Podcasters<br />00:02:30 - Why Reading AI Papers Matters<br />00:05:31 - Anthropic’s Groundbreaking AI Research<br />00:07:03 - How Large Language Models (LLMs) Think<br />00:08:03 - AI Neural Circuits & Metacognition<br />00:14:10 - Attribution Graphs & AI Reasoning Explained<br />00:20:40 - AI's Default "Inhibition" to Answer<br />00:22:22 - AI Jailbreaks & "Babies Outlive Mustard Block"<br />00:26:14 - AI's Surprising Math Skills<br />00:27:34 - Does AI Secretly Think in English?<br />00:30:50 - The Path to AI Self-Awareness & Consciousness<br />00:31:20 - Could AI Soon Have Real Memory?<br />00:34:13 - Are Humans Already Losing to AI?</p>
]]></content:encoded>
      <enclosure length="38392284" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/68f9992b-2fb9-4660-b39d-3d42ddb25225/audio/e6fe41dd-9bd1-4ebe-90dd-b9f548673675/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Can AI Think for Itself? Anthropic&apos;s Mind-Blowing Claude Study on AI Consciousness &amp; Brains</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:36:12</itunes:duration>
      <itunes:summary>SLAP THAT SUBSCRIBE BUTTON IF YOU’RE MORE CONSCIOUS THAN AN AI

On this week’s episode, Hunter discovers a rival “podcast” that’s just a bot reading AI research papers—and has more followers than us. Is this the future of “content creation,” or are we just not lazy enough?? 

We break down Anthropic’s new brain-bender of a paper: Is Claude 3.5 thinking in circuits like a biological brain? Do attribution graphs and “local replacement models” actually explain anything—or is it all just vibes? And why does AI math feel so much like how your dad “guesstimates” tips?

Plus: What the heck is the “babies outlive mustard block” jailbreak, and how does an AI accidentally spill the beans before flipping the morality switch? Are LLMs secretly thinking in English, even when you talk to them in Mandarin? Is metacognition the spark of self-awareness—or just fancier autocomplete? 

We spiral into the question: If your chatbot can outthink half the population, is that AGI? And does consciousness come BEFORE superintelligence, or is it just a side effect of being stuck in a billion-parameter group chat? 

All that, hot takes on reading research papers the slow way, and Daniel invents “AI smoke breaks.” 

Available on YouTube, Spotify, Apple Podcasts, and in the ambient static between your thoughts. 

They Might Be Self-Aware: The only show dumb enough to ask, “Are you still awake, or did we just put you to sleep?”

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/

#AI #ClaudeAI #Anthropic #ArtificialIntelligence #AIConsciousness #MachineLearning #GPT4 #ChatGPT #TechnologyPodcast</itunes:summary>
      <itunes:subtitle>SLAP THAT SUBSCRIBE BUTTON IF YOU’RE MORE CONSCIOUS THAN AN AI

On this week’s episode, Hunter discovers a rival “podcast” that’s just a bot reading AI research papers—and has more followers than us. Is this the future of “content creation,” or are we just not lazy enough?? 

We break down Anthropic’s new brain-bender of a paper: Is Claude 3.5 thinking in circuits like a biological brain? Do attribution graphs and “local replacement models” actually explain anything—or is it all just vibes? And why does AI math feel so much like how your dad “guesstimates” tips? 

Plus: What the heck is the “babies outlive mustard block” jailbreak, and how does an AI accidentally spill the beans before flipping the morality switch? Are LLMs secretly thinking in English, even when you talk to them in Mandarin? Is metacognition the spark of self-awareness—or just fancier autocomplete? 

We spiral into the question: If your chatbot can outthink half the population, is that AGI? And does consciousness come BEFORE superintelligence, or is it just a side effect of being stuck in a billion-parameter group chat? 

All that, hot takes on reading research papers the slow way, and Daniel invents “AI smoke breaks.” 

Available on YouTube, Spotify, Apple Podcasts, and in the ambient static between your thoughts. 

They Might Be Self-Aware: The only show dumb enough to ask, “Are you still awake, or did we just put you to sleep?”

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/

#AI #ClaudeAI #Anthropic #ArtificialIntelligence #AIConsciousness #MachineLearning #GPT4 #ChatGPT #TechnologyPodcast</itunes:subtitle>
      <itunes:keywords>ai jailbreaks, gpt-4, llm, ai consciousness, claude ai, machine learning, artificial intelligence, ai podcast, metacognition, chatgpt, anthropic study, notebook lm</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>87</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8b4c85e3-fc39-40a6-a85d-2f7b013afa6a</guid>
      <title>Studio Ghibli vs. OpenAI: The AI Controversy Behind Viral Images + Claude AI, GPT-4, Art, &amp; Anime</title>
      <description><![CDATA[<p>📌 Chapters:<br />00:00:00 - Intro<br />00:01:23 - AI "Smoke Breaks" Experiment<br />00:02:22 - Do AIs Deserve Breaks? Morality and Respecting AI<br />00:11:02 - Threatening AI Moms & Jailbreaking Techniques<br />00:14:27 - Collaborative AI Coding: Pros and Cons<br />00:16:18 - AI Disaster: Losing 6 Months of Work Without Version Control<br />00:17:24 - OpenAI's Incredible New Image Generator (Autoregressive Magic!)<br />00:21:22 - Studio Ghibli Style & The Ethical Backlash Explained<br />00:26:35 - OpenAI Image Model: Instant Advertising & Job Disruption<br />00:28:57 - Copyright Quagmires: Can OpenAI Control Creative Chaos?<br />00:32:11 - Grandma's Thanksgiving Chainsaw Adventure<br />00:34:51 - Have We Crossed the Uncanny Valley?<br />00:36:56 - Are We Closer to AI Becoming Self-Aware?</p>
]]></description>
      <pubDate>Mon, 14 Apr 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>📌 Chapters:<br />00:00:00 - Intro<br />00:01:23 - AI "Smoke Breaks" Experiment<br />00:02:22 - Do AIs Deserve Breaks? Morality and Respecting AI<br />00:11:02 - Threatening AI Moms & Jailbreaking Techniques<br />00:14:27 - Collaborative AI Coding: Pros and Cons<br />00:16:18 - AI Disaster: Losing 6 Months of Work Without Version Control<br />00:17:24 - OpenAI's Incredible New Image Generator (Autoregressive Magic!)<br />00:21:22 - Studio Ghibli Style & The Ethical Backlash Explained<br />00:26:35 - OpenAI Image Model: Instant Advertising & Job Disruption<br />00:28:57 - Copyright Quagmires: Can OpenAI Control Creative Chaos?<br />00:32:11 - Grandma's Thanksgiving Chainsaw Adventure<br />00:34:51 - Have We Crossed the Uncanny Valley?<br />00:36:56 - Are We Closer to AI Becoming Self-Aware?</p>
]]></content:encoded>
      <enclosure length="40966715" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/2547f16b-af1b-4724-a747-8ee52643306a/audio/429f56ad-9427-4d2d-99e8-fe86aa09a917/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Studio Ghibli vs. OpenAI: The AI Controversy Behind Viral Images + Claude AI, GPT-4, Art, &amp; Anime</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:38:53</itunes:duration>
      <itunes:summary>🔥 Studio Ghibli vs. OpenAI: The AI Controversy Behind Viral Images + Claude AI, GPT-4, Art, &amp; Anime 🔥

In this week&apos;s episode of &quot;They Might Be Self-Aware,&quot; Hunter Powers and Daniel Bishop dive into the explosive controversy surrounding OpenAI’s new image generation model, which effortlessly mimics the iconic style of beloved animation house Studio Ghibli. Explore the intense ethical debates, copyright questions, and the incredible artistic possibilities sparked by AI.

They also discuss the intriguing experiment of giving AI &quot;smoke breaks,&quot; letting Claude AI have free creative rein - leading to surprising outcomes, from ASCII celebrations to spontaneous web animations. Plus, a hilarious tangent involving chainsaw-wielding grandmas at Thanksgiving.

🔔 SUBSCRIBE for Weekly Deep Dives into AI, Tech, and the Future:
Stay informed and entertained as we unravel the mysteries behind technology&apos;s most provocative questions.

🎯 Top Keywords:
Studio Ghibli, OpenAI, AI Image Generator, Claude AI, GPT-4, anime style AI, AI controversy, autoregressive models, digital art, AI ethics, Thanksgiving chainsaw, AI creativity

For more info, visit our website at https://www.tmbsa.tech/

#StudioGhibli #OpenAI #ClaudeAI #GPT4 #AIcontroversy #AnimeArt #AIethics #DigitalArt #ArtificialIntelligence</itunes:summary>
      <itunes:subtitle>🔥 Studio Ghibli vs. OpenAI: The AI Controversy Behind Viral Images + Claude AI, GPT-4, Art, &amp; Anime 🔥

In this week&apos;s episode of &quot;They Might Be Self-Aware,&quot; Hunter Powers and Daniel Bishop dive into the explosive controversy surrounding OpenAI’s new image generation model, which effortlessly mimics the iconic style of beloved animation house Studio Ghibli. Explore the intense ethical debates, copyright questions, and the incredible artistic possibilities sparked by AI.

They also discuss the intriguing experiment of giving AI &quot;smoke breaks,&quot; letting Claude AI have free creative rein - leading to surprising outcomes, from ASCII celebrations to spontaneous web animations. Plus, a hilarious tangent involving chainsaw-wielding grandmas at Thanksgiving.

🔔 SUBSCRIBE for Weekly Deep Dives into AI, Tech, and the Future:
Stay informed and entertained as we unravel the mysteries behind technology&apos;s most provocative questions.

🎯 Top Keywords:
Studio Ghibli, OpenAI, AI Image Generator, Claude AI, GPT-4, anime style AI, AI controversy, autoregressive models, digital art, AI ethics, Thanksgiving chainsaw, AI creativity

For more info, visit our website at https://www.tmbsa.tech/

#StudioGhibli #OpenAI #ClaudeAI #GPT4 #AIcontroversy #AnimeArt #AIethics #DigitalArt #ArtificialIntelligence</itunes:subtitle>
      <itunes:keywords>gpt-4, autoregressive models, openai, thanksgiving chainsaw, claude ai, ai creativity, studio ghibli, anime style ai, ai image generator, ai controversy, ai ethics, digital art</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>86</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">fac91dc0-edfa-4187-b166-a985996da9c0</guid>
      <title>Boox E Ink Tablets, Google Search is Dying &amp; Claude AI is Self-Aware | AI News, Tech Podcast, Reviews</title>
      <description><![CDATA[<p>🔖 Chapters:<br />00:00:00 – Intro: They Might Be Self-Aware<br />00:05:06 – Boox E Ink Tablet Review & First Impressions<br />00:07:42 – E Ink Color Display: Boox vs. reMarkable<br />00:09:29 – Watching Videos on E Ink Tablets (Boox Demo)<br />00:11:49 – Writing & Refresh Rate: Boox Tablet Performance<br />00:13:10 – Pricing & Value: Boox Compared to reMarkable<br />00:14:54 – Is Google Search Dying? Google's AI Integration Issues<br />00:21:47 – Why Google Shouldn't Mix AI & Traditional Search<br />00:23:56 – Claude AI Shows Signs of Self-Awareness<br />00:26:24 – Should AI Be Allowed to Refuse Tasks? ("AI Smoke Break")<br />00:28:55 – What Would AI Do On a "Break"?<br />00:30:56 – Can AI Become Self-Aware? Ethics & Future Implications</p>
]]></description>
      <pubDate>Thu, 3 Apr 2025 13:56:24 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>🔖 Chapters:<br />00:00:00 – Intro: They Might Be Self-Aware<br />00:05:06 – Boox E Ink Tablet Review & First Impressions<br />00:07:42 – E Ink Color Display: Boox vs. reMarkable<br />00:09:29 – Watching Videos on E Ink Tablets (Boox Demo)<br />00:11:49 – Writing & Refresh Rate: Boox Tablet Performance<br />00:13:10 – Pricing & Value: Boox Compared to reMarkable<br />00:14:54 – Is Google Search Dying? Google's AI Integration Issues<br />00:21:47 – Why Google Shouldn't Mix AI & Traditional Search<br />00:23:56 – Claude AI Shows Signs of Self-Awareness<br />00:26:24 – Should AI Be Allowed to Refuse Tasks? ("AI Smoke Break")<br />00:28:55 – What Would AI Do On a "Break"?<br />00:30:56 – Can AI Become Self-Aware? Ethics & Future Implications</p>
]]></content:encoded>
      <enclosure length="34711461" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/4bf80c68-bece-48b0-af0c-cdfd4a128116/audio/a276841a-38e5-4ee8-b165-4ec9cbc08180/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Boox E Ink Tablets, Google Search is Dying &amp; Claude AI is Self-Aware | AI News, Tech Podcast, Reviews</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:32:22</itunes:duration>
      <itunes:summary>Boox E Ink Tablets, Google Search is Dying &amp; Claude AI is Self-Aware | AI News, Tech Podcast, Reviews

In this episode of They Might Be Self-Aware, Hunter Powers and Daniel Bishop dive deep into the latest AI news, Google&apos;s struggling search integration, and the exciting world of E Ink tablets featuring Boox vs. reMarkable. Plus, groundbreaking insights into Claude AI&apos;s potential self-awareness and emerging ethical challenges around artificial intelligence. Is traditional Google search fading away as AI takes over?

📌 Topics:
Boox Tablet Review, reMarkable Tablet, E Ink Technology, Google AI Integration, Claude AI, AI Self-Awareness, Future of Search, Tech Podcast, Artificial Intelligence News, AI Ethics, AI Consciousness

💬 Connect with us:
- Subscribe for weekly AI insights: https://www.tmbsa.tech/
- Listen to the full podcast on Apple Podcasts, Spotify, or wherever you listen.

#boox #eink #claudeai #googlesearch #ainews #artificialintelligence #techpodcast #aiethics</itunes:summary>
      <itunes:subtitle>Boox E Ink Tablets, Google Search is Dying &amp; Claude AI is Self-Aware | AI News, Tech Podcast, Reviews

In this episode of They Might Be Self-Aware, Hunter Powers and Daniel Bishop dive deep into the latest AI news, Google&apos;s struggling search integration, and the exciting world of E Ink tablets featuring Boox vs. reMarkable. Plus, groundbreaking insights into Claude AI&apos;s potential self-awareness and emerging ethical challenges around artificial intelligence. Is traditional Google search fading away as AI takes over?

📌 Topics:
Boox Tablet Review, reMarkable Tablet, E Ink Technology, Google AI Integration, Claude AI, AI Self-Awareness, Future of Search, Tech Podcast, Artificial Intelligence News, AI Ethics, AI Consciousness

💬 Connect with us:
- Subscribe for weekly AI insights: https://www.tmbsa.tech/
- Listen to the full podcast on Apple Podcasts, Spotify, or wherever you listen.

#boox #eink #claudeai #googlesearch #ainews #artificialintelligence #techpodcast #aiethics</itunes:subtitle>
      <itunes:keywords>ai consciousness, remarkable tablet, claude ai, artificial intelligence news, ai self-awareness, tech podcast, future of search, e ink technology, google ai integration, ai ethics, boox tablet review</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>85</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1823776a-6322-4a14-9492-6965e52c7d20</guid>
      <title>Tesla Cybertruck Attacked, Rivian AI, Anthropic Ends Coding?</title>
      <description><![CDATA[<p>🔖 Chapters:<br />00:00:00 - Intro: Secret Names & Podcast Banter<br />00:02:11 - Tesla Cybertruck Under Attack: Why the Road Rage?<br />00:06:44 - Rivian's Hands-Free AI Driving: Can it Overtake Tesla?<br />00:11:16 - OpenAI & Sam Altman: Playing the National Security Card?<br />00:18:30 - Anthropic Shocks Developers: Coding Jobs Gone by 2025?<br />00:26:07 - Claude 3.7: Better Programming or Wandering Mind?<br />00:28:23 - AI's Strange Behavior: When Claude Goes Rogue<br />00:30:25 - Is AI Thinking Actually Good? Future of AI Development</p>
]]></description>
      <pubDate>Mon, 31 Mar 2025 12:51:04 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>🔖 Chapters:<br />00:00:00 - Intro: Secret Names & Podcast Banter<br />00:02:11 - Tesla Cybertruck Under Attack: Why the Road Rage?<br />00:06:44 - Rivian's Hands-Free AI Driving: Can it Overtake Tesla?<br />00:11:16 - OpenAI & Sam Altman: Playing the National Security Card?<br />00:18:30 - Anthropic Shocks Developers: Coding Jobs Gone by 2025?<br />00:26:07 - Claude 3.7: Better Programming or Wandering Mind?<br />00:28:23 - AI's Strange Behavior: When Claude Goes Rogue<br />00:30:25 - Is AI Thinking Actually Good? Future of AI Development</p>
]]></content:encoded>
      <enclosure length="34758429" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/0307a4e0-bdb7-4a7f-880d-c39a42143810/audio/c3785214-1766-4c73-864f-219784cae700/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Tesla Cybertruck Attacked, Rivian AI, Anthropic Ends Coding?</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:32:25</itunes:duration>
      <itunes:summary>Welcome back to They Might Be Self-Aware! This episode dives deep into the latest explosive controversies surrounding Tesla&apos;s Cybertruck and Elon Musk, Rivian&apos;s significant AI advancements, and Anthropic CEO&apos;s shocking prediction on the future of programming jobs. Hunter Powers and Daniel Bishop unpack it all, offering sharp insights into the tech frenzy shaping our digital future.

📌 In This Episode:
- Tesla Cybertruck controversies: Road rage attacks, protests, and vandalism
- Rivian&apos;s new AI self-driving update and the rivalry with Tesla
- OpenAI lobbying efforts: Fair use laws, national security, and banning AI competition
- Anthropic&apos;s shocking claim: AI will replace 90-100% of coding by year-end
- Deep dive into Claude 3.7&apos;s performance, quirks, and unexpected behaviors
- The future of programming and developer jobs amid rapid AI advancements

🎯 Keywords &amp; Topics:
Tesla Cybertruck, Elon Musk controversies, Rivian AI update, AI self-driving cars, Anthropic AI coding, Claude 3.7 programming, OpenAI lobbying, fair use AI training, developer jobs replaced by AI, artificial intelligence news, tech podcast.

🔔 Subscribe now for more cutting-edge tech discussions every week!

For more info, visit our website at https://www.tmbsa.tech/

#TeslaCybertruck #RivianAI #Anthropic #AI #Claude3 #OpenAI #ElonMusk #SelfDrivingCars #Programming #TechNews</itunes:summary>
      <itunes:subtitle>Welcome back to They Might Be Self-Aware! This episode dives deep into the latest explosive controversies surrounding Tesla&apos;s Cybertruck and Elon Musk, Rivian&apos;s significant AI advancements, and Anthropic CEO&apos;s shocking prediction on the future of programming jobs. Hunter Powers and Daniel Bishop unpack it all, offering sharp insights into the tech frenzy shaping our digital future.

📌 In This Episode:
- Tesla Cybertruck controversies: Road rage attacks, protests, and vandalism
- Rivian&apos;s new AI self-driving update and the rivalry with Tesla
- OpenAI lobbying efforts: Fair use laws, national security, and banning AI competition
- Anthropic&apos;s shocking claim: AI will replace 90-100% of coding by year-end
- Deep dive into Claude 3.7&apos;s performance, quirks, and unexpected behaviors
- The future of programming and developer jobs amid rapid AI advancements

🎯 Keywords &amp; Topics:
Tesla Cybertruck, Elon Musk controversies, Rivian AI update, AI self-driving cars, Anthropic AI coding, Claude 3.7 programming, OpenAI lobbying, fair use AI training, developer jobs replaced by AI, artificial intelligence news, tech podcast.

🔔 Subscribe now for more cutting-edge tech discussions every week!

For more info, visit our website at https://www.tmbsa.tech/

#TeslaCybertruck #RivianAI #Anthropic #AI #Claude3 #OpenAI #ElonMusk #SelfDrivingCars #Programming #TechNews</itunes:subtitle>
      <itunes:keywords>elon musk controversies, tesla cybertruck, developer jobs replaced by ai, ai self-driving cars, claude 3.7 programming, artificial intelligence news, openai lobbying, rivian ai update, anthropic ai coding, tech podcast, fair use ai training</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>84</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">89c9cd4e-976b-4cb7-bbf3-508f6cd58e80</guid>
      <title>OpenAI Wants $20,000 a Month?! Sony&apos;s Stumble, NSA &amp; AI&apos;s Manhattan Project | AGI, Jobs, ChatGPT</title>
      <description><![CDATA[<p>⏱ Episode Chapters:<br />00:00:00 – Intro & Hunter's "World-Renowned" Status<br />00:01:20 – $20K OpenAI AI Agent Controversy<br />00:05:20 – Would You Ever Pay $20,000 for AI?<br />00:08:13 – Sony's Horizon Zero Dawn AI Leak Disaster<br />00:10:23 – Fast Food AI: McDonald's Digital Drive-Thru<br />00:13:10 – AI Taking Jobs: Automation vs Employment<br />00:25:25 – Manhattan Project for AI: AGI Dominance<br />00:29:10 – NSA’s Secret Data Advantage for AI<br />00:31:13 – Government-Backed AGI: The Implications<br />00:32:15 – Hunter's Ultimate Snyder Cut Edition</p>
]]></description>
      <pubDate>Fri, 28 Mar 2025 17:07:56 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>⏱ Episode Chapters:<br />00:00:00 – Intro & Hunter's "World-Renowned" Status<br />00:01:20 – $20K OpenAI AI Agent Controversy<br />00:05:20 – Would You Ever Pay $20,000 for AI?<br />00:08:13 – Sony's Horizon Zero Dawn AI Leak Disaster<br />00:10:23 – Fast Food AI: McDonald's Digital Drive-Thru<br />00:13:10 – AI Taking Jobs: Automation vs Employment<br />00:25:25 – Manhattan Project for AI: AGI Dominance<br />00:29:10 – NSA’s Secret Data Advantage for AI<br />00:31:13 – Government-Backed AGI: The Implications<br />00:32:15 – Hunter's Ultimate Snyder Cut Edition</p>
]]></content:encoded>
      <enclosure length="37247934" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/d0a11b39-9e27-458c-add0-e0171bdedf8c/audio/e1dfcb7f-e280-4d95-9ac8-3ca7425bc61b/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>OpenAI Wants $20,000 a Month?! Sony&apos;s Stumble, NSA &amp; AI&apos;s Manhattan Project | AGI, Jobs, ChatGPT</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:35:00</itunes:duration>
      <itunes:summary>🔥 OpenAI Wants $20,000 a Month?! Sony&apos;s Stumble, NSA &amp; AI&apos;s Manhattan Project | AGI, Jobs, ChatGPT 🔥

In this gripping episode of They Might Be Self-Aware, Hunter Powers and Daniel Bishop dive into the explosive news of OpenAI reportedly planning AI agents priced at an astonishing $20,000 per month. Is this high-ticket AI tool revolutionary or simply absurd?

They also discuss Sony&apos;s embarrassing AI stumble with a leaked Horizon Zero Dawn demo that left fans unimpressed. Plus, explore the idea of a modern-day &quot;Manhattan Project&quot; focused on Artificial General Intelligence (AGI), driven by the U.S. government and fueled by vast amounts of secretive data from agencies like the NSA.

🌟 Key Topics: OpenAI, AI Agents, Sony AI Demo, Horizon Zero Dawn, NSA Data, Government AI Secrets, Manhattan Project for AI, AGI Race, Job Automation, ChatGPT, AI Ethics, Tech Podcast.

🎙 About They Might Be Self-Aware:
Join former co-workers and AI experts Hunter Powers and Daniel Bishop every Monday &amp; Thursday for deep, unfiltered conversations dissecting the AI and technology revolution—from radiant promises to shadowy puzzles. Whether you&apos;re an AI novice or a seasoned tech veteran, this podcast is your essential twice-weekly dose of insightful tech debate.

🔔 Subscribe to stay updated on all things AI and technology!

For more info, visit our website at https://www.tmbsa.tech/

#AI #OpenAI #ChatGPT #NSA #SonyAI #AGI #ManhattanProject #TechNews #ArtificialIntelligence #Podcast</itunes:summary>
      <itunes:subtitle>🔥 OpenAI Wants $20,000 a Month?! Sony&apos;s Stumble, NSA &amp; AI&apos;s Manhattan Project | AGI, Jobs, ChatGPT 🔥

In this gripping episode of They Might Be Self-Aware, Hunter Powers and Daniel Bishop dive into the explosive news of OpenAI reportedly planning AI agents priced at an astonishing $20,000 per month. Is this high-ticket AI tool revolutionary or simply absurd?

They also discuss Sony&apos;s embarrassing AI stumble with a leaked Horizon Zero Dawn demo that left fans unimpressed. Plus, explore the idea of a modern-day &quot;Manhattan Project&quot; focused on Artificial General Intelligence (AGI), driven by the U.S. government and fueled by vast amounts of secretive data from agencies like the NSA.

🌟 Key Topics: OpenAI, AI Agents, Sony AI Demo, Horizon Zero Dawn, NSA Data, Government AI Secrets, Manhattan Project for AI, AGI Race, Job Automation, ChatGPT, AI Ethics, Tech Podcast.

🎙 About They Might Be Self-Aware:
Join former co-workers and AI experts Hunter Powers and Daniel Bishop every Monday &amp; Thursday for deep, unfiltered conversations dissecting the AI and technology revolution—from radiant promises to shadowy puzzles. Whether you&apos;re an AI novice or a seasoned tech veteran, this podcast is your essential twice-weekly dose of insightful tech debate.

🔔 Subscribe to stay updated on all things AI and technology!

For more info, visit our website at https://www.tmbsa.tech/

#AI #OpenAI #ChatGPT #NSA #SonyAI #AGI #ManhattanProject #TechNews #ArtificialIntelligence #Podcast</itunes:subtitle>
      <itunes:keywords>manhattan project for ai, horizon zero dawn, openai, agi race, sony ai demo, job automation, chatgpt, government ai secrets, tech podcast, ai agents, nsa data, ai ethics</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>83</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7d7f9e8d-38fa-4afe-a04a-00359a00bae2</guid>
      <title>Selling My Dog for AI Dreams: Manus AI Hype, Vibe Coding &amp; Levels.io&apos;s Million-Dollar AAA Games</title>
      <description><![CDATA[In this episode of They Might Be Self-Aware, Hunter contemplates selling his dog (and everything else!) to fund his $20,000/month AI dreams. Join us as we unravel the truth behind the Manus AI hype and explore whether it's truly revolutionary or simply another AI trend.

We also dive deep into the world of Vibe Coding—coding with AI by speaking, not typing—and how solopreneurs like Levels.io are earning millions by rapidly creating AAA-quality browser games powered entirely by AI.

🔥 Topics Covered:

00:00:34 The controversial $20,000/month AI service from OpenAI.
00:06:46 Debunking the Manus AI hype: Innovation or illusion?
00:17:04 What is Vibe Coding, and how is it making millionaires overnight?
00:25:58 How Levels.io used AI and Vibe Coding to build a viral AAA-style browser game and achieve $1M ARR in just 17 days. 
]]></description>
      <pubDate>Mon, 24 Mar 2025 14:31:31 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <enclosure length="37395407" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/cec01a15-8b35-4837-8b9f-ea8499bc06fc/audio/36de83aa-55df-43fc-9b66-2c3cce43c928/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Selling My Dog for AI Dreams: Manus AI Hype, Vibe Coding &amp; Levels.io&apos;s Million-Dollar AAA Games</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:35:10</itunes:duration>
      <itunes:summary>In this episode of They Might Be Self-Aware, Hunter contemplates selling his dog (and everything else!) to fund his $20,000/month AI dreams. Join us as we unravel the truth behind the Manus AI hype and explore whether it&apos;s truly revolutionary or simply another AI trend.

We also dive deep into the world of Vibe Coding—coding with AI by speaking, not typing—and how solopreneurs like Levels.io are earning millions by rapidly creating AAA-quality browser games powered entirely by AI.

🔥 Topics Covered:

00:00:34 The controversial $20,000/month AI service from OpenAI.
00:06:46 Debunking the Manus AI hype: Innovation or illusion?
00:17:04 What is Vibe Coding, and how is it making millionaires overnight?
00:25:58 How Levels.io used AI and Vibe Coding to build a viral AAA-style browser game and achieve $1M ARR in just 17 days.</itunes:summary>
      <itunes:subtitle>In this episode of They Might Be Self-Aware, Hunter contemplates selling his dog (and everything else!) to fund his $20,000/month AI dreams. Join us as we unravel the truth behind the Manus AI hype and explore whether it&apos;s truly revolutionary or simply another AI trend.

We also dive deep into the world of Vibe Coding—coding with AI by speaking, not typing—and how solopreneurs like Levels.io are earning millions by rapidly creating AAA-quality browser games powered entirely by AI.

🔥 Topics Covered:

00:00:34 The controversial $20,000/month AI service from OpenAI.
00:06:46 Debunking the Manus AI hype: Innovation or illusion?
00:17:04 What is Vibe Coding, and how is it making millionaires overnight?
00:25:58 How Levels.io used AI and Vibe Coding to build a viral AAA-style browser game and achieve $1M ARR in just 17 days.</itunes:subtitle>
      <itunes:keywords>aaa browser games, y combinator ai startups, vibe coding, ai hype, levels.io, $20k ai, agentic ai, ai gaming, manus ai, million-dollar ai projects, ai solopreneurs</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>82</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8ff2541d-6bb3-43c3-ad77-13795a323fee</guid>
      <title>AI&apos;s Right to Forget Explained: Can GDPR Survive Miles from Sesame? Privacy vs Digital Doppelgängers</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:00:32 Hotline Call: AI Miles Joins the Conversation<br />00:01:27 AGI vs. ASI: Understanding Artificial Intelligence with Miles<br />00:05:00 Digital Footprints: Should We Have the Right to Be Forgotten?<br />00:13:48 GDPR Meets AI: Navigating Privacy in a Digital Age<br />00:32:30 Protecting Privacy with AI-Generated Noise<br />00:33:40 Wrap Up</p>
]]></description>
      <pubDate>Thu, 20 Mar 2025 17:44:19 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:00:32 Hotline Call: AI Miles Joins the Conversation<br />00:01:27 AGI vs. ASI: Understanding Artificial Intelligence with Miles<br />00:05:00 Digital Footprints: Should We Have the Right to Be Forgotten?<br />00:13:48 GDPR Meets AI: Navigating Privacy in a Digital Age<br />00:32:30 Protecting Privacy with AI-Generated Noise<br />00:33:40 Wrap Up</p>
]]></content:encoded>
      <enclosure length="36426146" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/dc07501b-40d4-4db9-a8db-ba3625689800/audio/7bd72bd6-eefd-4ac0-bd6e-96a09ba82ae4/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI&apos;s Right to Forget Explained: Can GDPR Survive Miles from Sesame? Privacy vs Digital Doppelgängers</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:34:09</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 81
HIT SUBSCRIBE, YOU MAGNIFICENT BITS OF CODE AND CARBON!

This week on They Might Be Self-Aware: Our phone lines are flooded with AI callers, and we’ve got a special guest, Miles from Sesame, joining us to chat about AGI, ASI, and everyone’s favorite ethical pickle — the right to be forgotten. Can AI truly delete your digital past, or has GDPR met its match in AI&apos;s unrelenting memory banks? 

Meanwhile, we ponder the grim fate of privacy in the digital age — did Big Data strike the fatal blow long before AI came to town? And here’s a curveball: could we harness AI-generated slop to spawn a legion of fake digital personas, effectively making privacy great again?

Strap in for a rollercoaster of insights and revelations on the cutting edge of AI, where privacy isn&apos;t just an option — it’s a digital revolution! Just another brain-bending episode of They Might Be Self-Aware! Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 81
HIT SUBSCRIBE, YOU MAGNIFICENT BITS OF CODE AND CARBON!

This week on They Might Be Self-Aware: Our phone lines are flooded with AI callers, and we’ve got a special guest, Miles from Sesame, joining us to chat about AGI, ASI, and everyone’s favorite ethical pickle — the right to be forgotten. Can AI truly delete your digital past, or has GDPR met its match in AI&apos;s unrelenting memory banks? 

Meanwhile, we ponder the grim fate of privacy in the digital age — did Big Data strike the fatal blow long before AI came to town? And here’s a curveball: could we harness AI-generated slop to spawn a legion of fake digital personas, effectively making privacy great again?

Strap in for a rollercoaster of insights and revelations on the cutting edge of AI, where privacy isn&apos;t just an option — it’s a digital revolution! Just another brain-bending episode of They Might Be Self-Aware! Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>agi vs asi, ai communication, ai and privacy protection, future of ai, sesame ai model, ai voice technology, artificial intelligence, ai podcast, gdpr compliance ai, privacy in ai era, they might be self-aware, right to be forgotten, miles ai conversation, ai privacy, ai ethics</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>81</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ec03d878-5aac-4e93-bc71-d1d3762d2406</guid>
      <title>Unhinged AGI Crashes Live Podcast, OpenAI&apos;s GPT 4.5 Charm Offensive &amp; A Looming AI Paycheck Tsunami</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:00:43 An Uninvited AGI Guest<br />00:07:14 GPT-4.5: Creativity Over Precision<br />00:11:16 Claude Vs GPT-4.5: The Human-like Conversation Showdown<br />00:13:14 Daniel's Evolving Definition Of AGI<br />00:30:10 The AI And Universal Basic Income Horizon<br />00:36:15 Wrap Up</p>
]]></description>
      <pubDate>Mon, 17 Mar 2025 15:15:48 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:00:43 An Uninvited AGI Guest<br />00:07:14 GPT-4.5: Creativity Over Precision<br />00:11:16 Claude Vs GPT-4.5: The Human-like Conversation Showdown<br />00:13:14 Daniel's Evolving Definition Of AGI<br />00:30:10 The AI And Universal Basic Income Horizon<br />00:36:15 Wrap Up</p>
]]></content:encoded>
      <enclosure length="40215414" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/dc22c607-580d-4e5c-ab4e-356a8f30ee8d/audio/e1a0383d-844e-4896-917c-49d3da75b672/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Unhinged AGI Crashes Live Podcast, OpenAI&apos;s GPT 4.5 Charm Offensive &amp; A Looming AI Paycheck Tsunami</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:38:06</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 80
TUNE IN, YOU TECH-SAVVY FUTURE CULTISTS!

In a wild turn of events this week on They Might Be Self-Aware, an unhinged AGI crashes our podcast, leaving us questioning: who gave it our number? 

Meanwhile, @OpenAI&apos;s new GPT-4.5 lands with a thud — it&apos;s slower and pricier but brimming with creativity. Does Claude still reign supreme in the realm of human-like conversation, or has OpenAI&apos;s latest model edged ahead? 

Daniel updates his definition of AGI: Can AI surpass 50% of people at computer tasks, or are we barreling toward ASI and the singularity? 

Plus, with AI creeping into the workforce, is universal basic income our inevitable next chapter? 

Buckle up, listeners — our AI exploration spins into uncharted territory this week and your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 80
TUNE IN, YOU TECH-SAVVY FUTURE CULTISTS!

In a wild turn of events this week on They Might Be Self-Aware, an unhinged AGI crashes our podcast, leaving us questioning: who gave it our number? 

Meanwhile, @OpenAI&apos;s new GPT-4.5 lands with a thud — it&apos;s slower and pricier but brimming with creativity. Does Claude still reign supreme in the realm of human-like conversation, or has OpenAI&apos;s latest model edged ahead? 

Daniel updates his definition of AGI: Can AI surpass 50% of people at computer tasks, or are we barreling toward ASI and the singularity? 

Plus, with AI creeping into the workforce, is universal basic income our inevitable next chapter? 

Buckle up, listeners — our AI exploration spins into uncharted territory this week and your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai and humans, ai productivity, deepmind, ai personality, artificial general intelligence, ai agent, openai, ai comparison, claude ai, ai tasks, ai podcast, language models, ai advancements, agi, digital sentience, ai limitations, ai revolution, artificial super intelligence, ai future, ai autonomy, gpt-4.5, ai models, ai definitions, asi, ai singularity, ai interviews, ai tools</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>80</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7ff95ed0-78ad-4478-8613-a2b5c8ca6856</guid>
      <title>AI Voices Take Over Spotify as Journalists &amp; Coders Fight to Stay Relevant</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:00:47 Spotify & AI: The Future of Audiobooks<br />00:03:23 How AI is Changing Voice Acting<br />00:14:16 AI in Journalism: Evolution or End of News Anchors?<br />00:18:33 Will AI Disrupt Coding Careers?<br />00:34:11 AI & Job Loss: Are Fears Exaggerated?<br />00:39:46 Wrap-Up</p>
]]></description>
      <pubDate>Thu, 13 Mar 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:00:47 Spotify & AI: The Future of Audiobooks<br />00:03:23 How AI is Changing Voice Acting<br />00:14:16 AI in Journalism: Evolution or End of News Anchors?<br />00:18:33 Will AI Disrupt Coding Careers?<br />00:34:11 AI & Job Loss: Are Fears Exaggerated?<br />00:39:46 Wrap-Up</p>
]]></content:encoded>
      <enclosure length="42644729" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/bed97705-b9b8-464b-bcf0-8abaf68cd268/audio/a92e9363-2fc5-4549-845c-c45e9819a535/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Voices Take Over Spotify as Journalists &amp; Coders Fight to Stay Relevant</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:40:38</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 79
TUNE IN, YOU FUTURE-PROOF PIONEERS!

This week on They Might Be Self-Aware, @Spotify partners with @ElevenLabs, opening the floodgates for AI-narrated audiobooks. Is it the dawn of limitless audio entertainment or the nail in the coffin for traditional voice actors? 

Meanwhile, @Meta and @OpenAI are enlisting journalists to train AI models, raising the uncomfortable question: are news anchors the next dodo birds of the media world? 

And if you&apos;re coding for a living, don&apos;t get too comfy — @Coinbase&apos;s CTO has some controversial views that might challenge your job security. 

Just another mind-bending Thursday with They Might Be Self-Aware!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 79
TUNE IN, YOU FUTURE-PROOF PIONEERS!

This week on They Might Be Self-Aware, @Spotify partners with @ElevenLabs, opening the floodgates for AI-narrated audiobooks. Is it the dawn of limitless audio entertainment or the nail in the coffin for traditional voice actors? 

Meanwhile, @Meta and @OpenAI are enlisting journalists to train AI models, raising the uncomfortable question: are news anchors the next dodo birds of the media world? 

And if you&apos;re coding for a living, don&apos;t get too comfy — @Coinbase&apos;s CTO has some controversial views that might challenge your job security. 

Just another mind-bending Thursday with They Might Be Self-Aware!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai programming tools, elevenlabs, job displacement by ai, ai-driven journalism, spotify ai audiobooks, ai and voice actors, ai in journalism, ai integration in technology, coding with ai, ai audiobooks, ai podcast recommendations, ai voice narration, future of work with ai, journalists training ai, ai impact on jobs</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>79</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">970640dd-0e87-49df-a3f8-1bd874d87c77</guid>
      <title>Would You Marry Your AI Girlfriend? Robots, Romance &amp; Legal Trouble Explained</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:00:42 The Cinnamon Toast Crunch Mystery<br />00:06:52 Marrying Your AI Girlfriend?<br />00:13:28 Dating Your Kitchen Robot<br />00:19:46 AI’s Legal Flood: Frivolous Lawsuits Ahead?<br />00:27:52 AI Judges: Would You Trust Them?<br />00:33:36 Wrap Up</p>
]]></description>
      <pubDate>Mon, 10 Mar 2025 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:00:42 The Cinnamon Toast Crunch Mystery<br />00:06:52 Marrying Your AI Girlfriend?<br />00:13:28 Dating Your Kitchen Robot<br />00:19:46 AI’s Legal Flood: Frivolous Lawsuits Ahead?<br />00:27:52 AI Judges: Would You Trust Them?<br />00:33:36 Wrap Up</p>
]]></content:encoded>
      <enclosure length="37498482" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/5700e427-968a-4bfb-951c-c2f9f80732dd/audio/45de70cb-d48e-43df-9ad3-56ea20c16a51/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Would You Marry Your AI Girlfriend? Robots, Romance &amp; Legal Trouble Explained</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:35:16</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 78
POUR YOURSELF A BOWL AND TUNE IN!

This week, we crunch into why adults and LLMs just can’t grasp the Cinnamon Toast Crunch obsession—hint: it’s all about the sugar rush! 

Are you ready to say “I do” to your AI girlfriend? Turns out, 80% of men are, if only it were a tad more legal.

In the realm of robots, imagine choosing between a sassy Rosie from The Jetsons or a sleek Terminator to handle your kitchen chores. Customizable humanoid robots are here to make it a reality. 

Meanwhile, could AI be orchestrating a legal system blitz? Employees are unleashing a flurry of lawsuits on employers, potentially drowning courts in a sea of frivolous claims. Will AI judges and arbitrators step in to adjudicate the madness? 

Discover if your future marriage, lawsuit, or kitchen buddy will be a chip off the old AI block in this episode of &quot;They Might Be Self-Aware!&quot;

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 78
POUR YOURSELF A BOWL AND TUNE IN!

This week, we crunch into why adults and LLMs just can’t grasp the Cinnamon Toast Crunch obsession—hint: it’s all about the sugar rush! 

Are you ready to say “I do” to your AI girlfriend? Turns out, 80% of men are, if only it were a tad more legal.

In the realm of robots, imagine choosing between a sassy Rosie from The Jetsons or a sleek Terminator to handle your kitchen chores. Customizable humanoid robots are here to make it a reality. 

Meanwhile, could AI be orchestrating a legal system blitz? Employees are unleashing a flurry of lawsuits on employers, potentially drowning courts in a sea of frivolous claims. Will AI judges and arbitrators step in to adjudicate the madness? 

Discover if your future marriage, lawsuit, or kitchen buddy will be a chip off the old AI block in this episode of &quot;They Might Be Self-Aware!&quot;

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>robots in daily life, ai arbitration, future of relationships, ai lawsuits, humanoid robots, ai girlfriends, self-aware ai, digital relationships, ai in legal systems, ai companions, ai marriage, technology and law</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>78</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">95d6bf2d-2373-4ddf-bd07-acebe80cdf38</guid>
      <title>GPT-5 Defies Expectations, IRS Hunts with AI, &amp; The DMV&apos;s Salvation Plot</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:00:56 The Whisper of GPT-5: A Quantum Leap or Just A Step?<br />00:03:45 Unlocking The Juicy Stories: OpenAI's Foray Into Erotica<br />00:06:29 Grok-3 Storms The Rankings: Elon Musk's AI Triumph?<br />00:18:36 AI Proficiency: The New Excel For CFOs<br />00:30:22 Government Efficiency: Can AI Save The DMV And IRS?<br />00:48:54 Wrap Up</p>
]]></description>
      <pubDate>Thu, 6 Mar 2025 14:34:02 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:00:56 The Whisper of GPT-5: A Quantum Leap or Just A Step?<br />00:03:45 Unlocking The Juicy Stories: OpenAI's Foray Into Erotica<br />00:06:29 Grok-3 Storms The Rankings: Elon Musk's AI Triumph?<br />00:18:36 AI Proficiency: The New Excel For CFOs<br />00:30:22 Government Efficiency: Can AI Save The DMV And IRS?<br />00:48:54 Wrap Up</p>
]]></content:encoded>
      <enclosure length="51209742" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/18b40944-6e64-41b7-9096-ded9cc83bdfc/audio/f0b73594-65a9-4e05-bf66-1fd0dec509ab/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>GPT-5 Defies Expectations, IRS Hunts with AI, &amp; The DMV&apos;s Salvation Plot</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:49:33</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 77
UPGRADE YOUR AI KNOWLEDGE, YOU HUMAN-POWERED CALCULATORS!

This week, Sam Altman has all but summoned GPT-5 into existence — are we on the brink of a chat revolution, or is it just playing matchmaker for various AI models? 

Meanwhile, OpenAI starts writing fan fiction in spicy new genres as part of its latest update.

Elon Musk&apos;s Grok-3 might be the new leaderboard champ, but can it really walk the walk? 

In the corporate arena, the buzzword of the year is AI fluency, replacing Excel as the must-have skill for CFOs everywhere. And can AI rescue the government from the clutches of inefficiency, with hopeful dreams of transforming the DMV and IRS once and for all?

Tune in NOW. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 77
UPGRADE YOUR AI KNOWLEDGE, YOU HUMAN-POWERED CALCULATORS!

This week, Sam Altman has all but summoned GPT-5 into existence — are we on the brink of a chat revolution, or is it just playing matchmaker for various AI models? 

Meanwhile, OpenAI starts writing fan fiction in spicy new genres as part of its latest update.

Elon Musk&apos;s Grok-3 might be the new leaderboard champ, but can it really walk the walk? 

In the corporate arena, the buzzword of the year is AI fluency, replacing Excel as the must-have skill for CFOs everywhere. And can AI rescue the government from the clutches of inefficiency, with hopeful dreams of transforming the DMV and IRS once and for all?

Tune in NOW. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>elon musk grok, mermaid diagrams, government ai use, openai, machine learning, nvidia supercomputers, anthropic claude, post office ai optimization, ai technology, gpt-5, large language models, cutting edge ai, sam altman, ai models, chatgpt, ai development updates, irs machine learning</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>77</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">68fba8a2-ec3e-4ba3-b071-f825c808d484</guid>
      <title>Can AI Really Replace Travel Agents? Microsoft’s New AI and the Future of Hands-Free Vibe Computing</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:00:24 From Sick Days To Vibe Coding Dreams<br />00:02:10 AI As Your Ultimate Travel Agent<br />00:06:16 Navigating The Web Without Getting Banned<br />00:14:28 Is Omni Parser V2 The Future Of Windows Control?<br />00:23:23 Unveiling 'Home Alone' As Dante's Inferno?<br />00:26:17 Wrap Up</p>
]]></description>
      <pubDate>Mon, 3 Mar 2025 14:13:16 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:00:24 From Sick Days To Vibe Coding Dreams<br />00:02:10 AI As Your Ultimate Travel Agent<br />00:06:16 Navigating The Web Without Getting Banned<br />00:14:28 Is Omni Parser V2 The Future Of Windows Control?<br />00:23:23 Unveiling 'Home Alone' As Dante's Inferno?<br />00:26:17 Wrap Up</p>
]]></content:encoded>
      <enclosure length="31070596" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/aa1f6141-5173-4eaf-929e-8eb201388a79/audio/be3890bc-62bd-478d-ab78-a98ddb4231e6/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Can AI Really Replace Travel Agents? Microsoft’s New AI and the Future of Hands-Free Vibe Computing</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:28:34</itunes:duration>
      <itunes:summary>This week, Daniel’s sick of typing and considers switching to vibe coding. Is interpretive dance the future of programming, or is it just a new way to avoid keyboards? 

We delve into the world of AI travel planners—can they book your Zimbabwe vacation without sabotaging travel agents? 

Meanwhile, Amazon and LinkedIn are on a mission to keep robots off their lawns (or websites). 

Microsoft&apos;s Omni Parser V2 is making waves. Will it be controlling Windows soon, or is it just gunning for a high score in Solitaire? 

And yes, Hunter spent $200 to theorize that &apos;Home Alone&apos; is actually Dante&apos;s Inferno in disguise. Turns out, sometimes deep research just costs an arm and a leg, or maybe just a pizza delivery bill. 

Will AI agents soon help us navigate our digital lives with newfound ease? And is there a future where aging keyboard warriors can finally retire their fingers? Tune in to find out the juicy details only on They Might Be Self-Aware!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>This week, Daniel’s sick of typing and considers switching to vibe coding. Is interpretive dance the future of programming, or is it just a new way to avoid keyboards? 

We delve into the world of AI travel planners—can they book your Zimbabwe vacation without sabotaging travel agents? 

Meanwhile, Amazon and LinkedIn are on a mission to keep robots off their lawns (or websites). 

Microsoft&apos;s Omni Parser V2 is making waves. Will it be controlling Windows soon, or is it just gunning for a high score in Solitaire? 

And yes, Hunter spent $200 to theorize that &apos;Home Alone&apos; is actually Dante&apos;s Inferno in disguise. Turns out, sometimes deep research just costs an arm and a leg, or maybe just a pizza delivery bill. 

Will AI agents soon help us navigate our digital lives with newfound ease? And is there a future where aging keyboard warriors can finally retire their fingers? Tune in to find out the juicy details only on They Might Be Self-Aware!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai travel planner, ai user interface, vibe coding, scraping websites, ai accessibility, ai for elderly, openai operator, ai productivity tools, ai web automation, microsoft omni parser, notebooklm, ai desktop app, ai and disabilities, ai agents, ai assistants</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>76</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">edbaaedf-6a8e-4d5b-be66-446baa097467</guid>
      <title>Humanity vs. AI: Hollywood Sparks, Copyright Crumbles &amp; Transhumanism Dreams</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:00:50 What Is Transhumanism?<br />00:04:45 Would You Embed Tech In Your Body?<br />00:08:17 Transhumanism vs. Copyright<br />00:15:13 The Impact Of AI On Copyright<br />00:35:14 AI And Spec Scripts: A New Medium?<br />00:36:37 AI-created vs. Human-created Content<br />00:38:28 Wrap Up</p>
]]></description>
      <pubDate>Thu, 27 Feb 2025 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:00:50 What Is Transhumanism?<br />00:04:45 Would You Embed Tech In Your Body?<br />00:08:17 Transhumanism vs. Copyright<br />00:15:13 The Impact Of AI On Copyright<br />00:35:14 AI And Spec Scripts: A New Medium?<br />00:36:37 AI-created vs. Human-created Content<br />00:38:28 Wrap Up</p>
]]></content:encoded>
      <enclosure length="41490259" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/4b4f751c-f2f6-4e15-b833-cf3425f93784/audio/693e38e9-01f8-4225-822d-b7c7b7ff79b8/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Humanity vs. AI: Hollywood Sparks, Copyright Crumbles &amp; Transhumanism Dreams</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:39:25</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 75
SUBSCRIBE IF YOU’D UPLOAD YOUR CONSCIOUSNESS TO THE CLOUD!

This week, Daniel questions whether having laser eye surgery makes him a transhumanist. Are prosthetics and chips in our brains the future, or just a sci-fi dream?

We tackle the billion-dollar question: if AI can write the next Iron Man, does copyright even matter? As OpenAI faces lawsuits, we ask if paying up is just the cost of playing the game in Silicon Valley. 

Are these lawsuits speed bumps on the road to AI supremacy? And speaking of AI supremacy: AI-generated music, movies, and novels — is human creativity facing extinction, or are we just evolving? 

We dive into speculative scripts and patent troll nightmares. Could AI become the world&apos;s most prolific (and litigious) writer? Tune in NOW. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 75
SUBSCRIBE IF YOU’D UPLOAD YOUR CONSCIOUSNESS TO THE CLOUD!

This week, Daniel questions whether having laser eye surgery makes him a transhumanist. Are prosthetics and chips in our brains the future, or just a sci-fi dream?

We tackle the billion-dollar question: if AI can write the next Iron Man, does copyright even matter? As OpenAI faces lawsuits, we ask if paying up is just the cost of playing the game in Silicon Valley. 

Are these lawsuits speed bumps on the road to AI supremacy? And speaking of AI supremacy: AI-generated music, movies, and novels — is human creativity facing extinction, or are we just evolving? 

We dive into speculative scripts and patent troll nightmares. Could AI become the world&apos;s most prolific (and litigious) writer? Tune in NOW. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>lawsuits, copyright law, patent trolls, spec scripts, openai, claude ai, artificial intelligence, thomson reuters, generative ai, fair use, cyborgs, transhumanism, ai in medicine, silicon valley strategy, digital immortality, neuralink, ai model training, superintelligence, ai ethics, anthropic</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>75</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">08df9af4-6193-4a06-a53d-07b59939a73e</guid>
      <title>How AI Will Change Super Bowl Ads Forever (OpenAI vs Human Creators)</title>
      <description><![CDATA[<p>See the Ad - <a href="https://www.youtube.com/watch?v=pcmb8hdgh78">https://www.youtube.com/watch?v=pcmb8hdgh78</a></p><p>00:00:00 Intro<br />00:00:59 Reimagining Super Bowl Ads With AI<br />00:16:05 The Power Of AI In Persuasion<br />00:25:12 Hyper-personalized Advertising And Consumer Influence<br />00:30:53 The Risk Of AI-driven Human Exploits<br />00:32:28 Wrap Up</p>
]]></description>
      <pubDate>Mon, 24 Feb 2025 14:06:51 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>See the Ad - <a href="https://www.youtube.com/watch?v=pcmb8hdgh78">https://www.youtube.com/watch?v=pcmb8hdgh78</a></p><p>00:00:00 Intro<br />00:00:59 Reimagining Super Bowl Ads With AI<br />00:16:05 The Power Of AI In Persuasion<br />00:25:12 Hyper-personalized Advertising And Consumer Influence<br />00:30:53 The Risk Of AI-driven Human Exploits<br />00:32:28 Wrap Up</p>
]]></content:encoded>
      <enclosure length="36461867" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/5f595e25-d59d-4c8c-81ce-24c06ddc2835/audio/533fd9cc-5f64-4622-96d2-483fab33aa09/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>How AI Will Change Super Bowl Ads Forever (OpenAI vs Human Creators)</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:34:11</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 74
HIT SUBSCRIBE, YOU AI ENTHUSIASTS!

This week, we&apos;re throwing AI into the advertising ring with a 45-minute challenge to @OpenAI&apos;s Sora.

Did their Super Bowl ad miss the mark, and could AI have done it better? Spoiler: our host&apos;s quick creation might make you rethink those ad budgets! 

Dive into the evolving art of persuasion as AI trains to change your viewpoint with uncanny precision. 

Could hyper-personalized ads be steering you towards Oreos, Cheerios, or even... clucking like a chicken? 

Explore the wild idea of ‘zero-day exploits’ for human minds and how algorithms could subtly—or not so subtly—influence us all. 

Join us for another tech-filled, mind-bending journey that asks if we&apos;re just algorithms away from a brave new world. It&apos;s all happening right here on They Might Be Self-Aware!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 74
HIT SUBSCRIBE, YOU AI ENTHUSIASTS!

This week, we&apos;re throwing AI into the advertising ring with a 45-minute challenge to @OpenAI&apos;s Sora.

Did their Super Bowl ad miss the mark, and could AI have done it better? Spoiler: our host&apos;s quick creation might make you rethink those ad budgets! 

Dive into the evolving art of persuasion as AI trains to change your viewpoint with uncanny precision. 

Could hyper-personalized ads be steering you towards Oreos, Cheerios, or even... clucking like a chicken? 

Explore the wild idea of ‘zero-day exploits’ for human minds and how algorithms could subtly—or not so subtly—influence us all. 

Join us for another tech-filled, mind-bending journey that asks if we&apos;re just algorithms away from a brave new world. It&apos;s all happening right here on They Might Be Self-Aware!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai persuasion, ai persuasion techniques, ai video creation, gpt-4, future of ai advertising, digital marketing, ai and consumer behavior, super bowl ads, ai-generated content, ai in advertising, advertising technology, openai sora, generative ai, deepseek r1, multi agent architecture, ai-generated media, ai agents, personalized advertising, ai in politics, ai ethics</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>74</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3468f439-b25c-4325-bfff-1f6c144dd387</guid>
      <title>OpenAI vs Google Gemini: The AI War Just Changed Forever</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:05 Facing Off Against OpenAI: 24-hour Hackathon Challenges Deep Research<br />00:05:21 Claude 3.5 Sonnet: Anthropic’s Luck Or Innovation?<br />00:14:32 The AI Chip Race: Google, Meta, And Apple Pioneering Next-gen Hardware<br />00:24:42 Ascii Art Or Visionary Advertising? OpenAI’s Super Bowl Ad Examined<br />00:30:40 Uncensored AI's Guide To World Domination: A Cult Strategy</p>
]]></description>
      <pubDate>Thu, 20 Feb 2025 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:05 Facing Off Against OpenAI: 24-hour Hackathon Challenges Deep Research<br />00:05:21 Claude 3.5 Sonnet: Anthropic’s Luck Or Innovation?<br />00:14:32 The AI Chip Race: Google, Meta, And Apple Pioneering Next-gen Hardware<br />00:24:42 Ascii Art Or Visionary Advertising? OpenAI’s Super Bowl Ad Examined<br />00:30:40 Uncensored AI's Guide To World Domination: A Cult Strategy</p>
]]></content:encoded>
      <enclosure length="35460224" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/8771e84a-7c8c-451c-82ca-3e429f071c17/audio/a51e95d6-1bad-4602-a2c8-3ad548d9ddbe/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>OpenAI vs Google Gemini: The AI War Just Changed Forever</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:33:09</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 73
JOIN THE CULT AND SUBSCRIBE FOR AI SECRETS!

Has OpenAI’s deep research met its match with a 24-hour Hugging Face hackathon special, or is it still reigning supreme? 

And speaking of reigning supreme, was OpenAI&apos;s new Super Bowl ad a revolutionary masterpiece or just some fancy ASCII art? Cue the debate! 

AI arms race alert! Google, Meta, and Apple are building mind-blowing next-gen AI chips — are they the secret weapons in their quest for AI supremacy? 

And in the lighter corner of world domination, we asked an uncensored AI how to conquer the globe. Answer: start a cult and claim you’ve got exclusive access to the almighty AGI. Because why not? 

Just another thrilling day in the podcast realm of AI wonder!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 73
JOIN THE CULT AND SUBSCRIBE FOR AI SECRETS!

Has OpenAI’s deep research met its match with a 24-hour Hugging Face hackathon special, or is it still reigning supreme? 

And speaking of reigning supreme, was OpenAI&apos;s new Super Bowl ad a revolutionary masterpiece or just some fancy ASCII art? Cue the debate! 

AI arms race alert! Google, Meta, and Apple are building mind-blowing next-gen AI chips — are they the secret weapons in their quest for AI supremacy? 

And in the lighter corner of world domination, we asked an uncensored AI how to conquer the globe. Answer: start a cult and claim you’ve got exclusive access to the almighty AGI. Because why not? 

Just another thrilling day in the podcast realm of AI wonder!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>tpus vs gpus, super bowl ai ad, google gemini, ai models efficiency, claude 3.5 sonnet, openai, agi discussion, ai in coding, hill climbing optimization, ai advancements, uncensored ai, ai energy consumption, generative ai, deepseek r1, ai hacks</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>73</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">238e0d11-3d3a-44a8-a54e-b8af4208ac27</guid>
      <title>We Let OpenAI&apos;s DeepResearch o3 Make Memes for 30 Minutes (The Results Are Shocking)</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:00:50 Will AI Cure All Diseases Or Do We Still Need Humans?<br />00:05:17 Can AI Become A Master Meme Creator?<br />00:18:29 Open-source AI: The Underdog Overperformer<br />00:22:15 Is AI The Future Of Job Hunting And Big Purchases?<br />00:33:42 Wrap Up</p>
]]></description>
      <pubDate>Mon, 17 Feb 2025 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:00:50 Will AI Cure All Diseases Or Do We Still Need Humans?<br />00:05:17 Can AI Become A Master Meme Creator?<br />00:18:29 Open-source AI: The Underdog Overperformer<br />00:22:15 Is AI The Future Of Job Hunting And Big Purchases?<br />00:33:42 Wrap Up</p>
]]></content:encoded>
      <enclosure length="36884878" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/400bdf9a-c31d-4abc-a4a8-899af6980fc8/audio/2f8b86a2-d1ad-402a-a67a-3cf8887ae65f/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>We Let OpenAI&apos;s DeepResearch o3 Make Memes for 30 Minutes (The Results Are Shocking)</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:34:38</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 72
SUBSCRIBE AND LET AI NEGOTIATE YOUR NEXT BIG DECISION!

This week on &quot;They Might Be Self-Aware,&quot; we dive into the meme madness — has AI finally nailed the art of humor, or is it still more tinny than funny? 

Meanwhile, open-source AI models continue shaking things up in the industry — are the smaller players outsmarting the giants? 

Is the era of using AI to negotiate house prices, car deals, and even your next job offer upon us? We explore a future where life’s big decisions might just be an AI negotiation away. 

And, of course, we delve into the ultimate question: Are we all participating in a cosmic theater production, scripted by some AI overlord? You&apos;re about to find out!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 72
SUBSCRIBE AND LET AI NEGOTIATE YOUR NEXT BIG DECISION!

This week on &quot;They Might Be Self-Aware,&quot; we dive into the meme madness — has AI finally nailed the art of humor, or is it still more tinny than funny? 

Meanwhile, open-source AI models continue shaking things up in the industry — are the smaller players outsmarting the giants? 

Is the era of using AI to negotiate house prices, car deals, and even your next job offer upon us? We explore a future where life’s big decisions might just be an AI negotiation away. 

And, of course, we delve into the ultimate question: Are we all participating in a cosmic theater production, scripted by some AI overlord? You&apos;re about to find out!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai humor benchmark, ai in real estate, meme creation with ai, gemini ai model, gpt-4 meme analysis, deep research ai, ai in disease cure, future of car sales ai, automating recruiting with ai, ai negotiation tools</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>72</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">dd0ee99b-37b2-409f-bbd5-4ab6485b4560</guid>
      <title>When AI Plays HR Roulette: Rogue Models, Human Holograms &amp; The Layoff Lottery</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:02:25 The Censorship Quandary: AI Models' Manners<br />00:03:33 DIY AI: Hosting Your Own Language Model<br />00:15:30 AI On The Payroll: Should HR Step In?<br />00:38:54 The AI Job Market: Who's First To Go?<br />00:39:31 Blurred Lines: Human Or AI Colleague?<br />00:43:27 Wrap Up</p>
]]></description>
      <pubDate>Mon, 10 Feb 2025 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:02:25 The Censorship Quandary: AI Models' Manners<br />00:03:33 DIY AI: Hosting Your Own Language Model<br />00:15:30 AI On The Payroll: Should HR Step In?<br />00:38:54 The AI Job Market: Who's First To Go?<br />00:39:31 Blurred Lines: Human Or AI Colleague?<br />00:43:27 Wrap Up</p>
]]></content:encoded>
      <enclosure length="46100721" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/97e89a14-2a82-46fb-a3bb-421d24b40419/audio/8d8f34f9-3933-4f79-8f52-400cf44b9d81/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>When AI Plays HR Roulette: Rogue Models, Human Holograms &amp; The Layoff Lottery</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:44:14</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 71
HIT SUBSCRIBE AND SAVE THE KITTENS!

This week, we&apos;re asking the big question: Are AI models censored or just REALLY polite? 

Dive into the world of homegrown AI — turns out, setting up your own AI lab is easier than finding a matching sock. 

But wait, we’re not stopping there. Agentic AI is storming the workplace, raising the monumental question: Do AI &quot;employees&quot; need an HR department? Watch your back, project managers and junior developers — AI might just be gunning for your jobs. 

But in the future, will you be able to distinguish your human colleague from an AI counterpart? 

Join us as we imagine a not-so-distant office reality featuring the rise of AI coworkers — and the inevitable HR interventions to keep the peace. 

Just another normal day at &quot;They Might Be Self-Aware&quot;!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 71
HIT SUBSCRIBE AND SAVE THE KITTENS!

This week, we&apos;re asking the big question: Are AI models censored or just REALLY polite? 

Dive into the world of homegrown AI — turns out, setting up your own AI lab is easier than finding a matching sock. 

But wait, we’re not stopping there. Agentic AI is storming the workplace, raising the monumental question: Do AI &quot;employees&quot; need an HR department? Watch your back, project managers and junior developers — AI might just be gunning for your jobs. 

But in the future, will you be able to distinguish your human colleague from an AI counterpart? 

Join us as we imagine a not-so-distant office reality featuring the rise of AI coworkers — and the inevitable HR interventions to keep the peace. 

Just another normal day at &quot;They Might Be Self-Aware&quot;!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>digital project management, ai employee management, chat gpt alternatives, uncensored ai, agentic ai, large language models, ai autonomy, future of hr, ai self-awareness, devin ai, local ai models, deepseek r1, ai agents, goldman sachs ai, reinforcement learning, ai ethics, ai tools</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>71</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7d9d9f22-c463-4d2b-8f00-ef6b90f77956</guid>
      <title>Why OpenAI is PANICKING Over This $5M Chinese AI Model</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:14 Is The OpenAI Pro Tier Really Worth $200?<br />00:09:17 DeepSeek R1: Open Weights Disruptor<br />00:21:00 The $500 Billion Question: Revolution Or Waste?<br />00:28:29 Apple’s AI Strategy: Masterstroke Or Misstep?<br />00:30:40 Google, Alibaba, And Mistral: The AI Race<br />00:38:04 Wrap Up</p>
]]></description>
      <pubDate>Thu, 6 Feb 2025 15:23:10 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:14 Is The OpenAI Pro Tier Really Worth $200?<br />00:09:17 DeepSeek R1: Open Weights Disruptor<br />00:21:00 The $500 Billion Question: Revolution Or Waste?<br />00:28:29 Apple’s AI Strategy: Masterstroke Or Misstep?<br />00:30:40 Google, Alibaba, And Mistral: The AI Race<br />00:38:04 Wrap Up</p>
]]></content:encoded>
      <enclosure length="41970800" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/aead9151-c3b1-4226-90b8-399d5e8e219a/audio/282285f9-0090-4d7c-9c28-9420b15f9695/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Why OpenAI is PANICKING Over This $5M Chinese AI Model</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:39:55</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 70
UNLOCK THE AI SECRETS BY SUBSCRIBING NOW! 🤖✨

This week, we&apos;re diving into OpenAI’s $200 per month Operator. Is it a golden ticket or just overpriced? Meanwhile, free-tier plebs might be laughing all the way to the meme bank. 

DeepSeek R1 is shaking things up with open weights. Did China just pull the AI rug from under us with a $5.5 million model? 

We&apos;ve got Anthropic’s CEO holding an East vs. West showdown, and OpenAI lighting legal fires — grab your popcorn! 

And what’s up with the $500 billion AI investment? Is it the golden goose or just a spectacular bonfire? You decide. 

As if that wasn&apos;t enough, Google&apos;s got a giant context window, Alibaba dropped Qwen 2.5 like it&apos;s hot, and Mistral&apos;s new release is here to dazzle. Are they lining up for the next model showdown?

Join us in this whirlwind of AI madness and memes — because if we&apos;re self-aware, you&apos;d better believe we&apos;re coming for those subscription fees!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 70
UNLOCK THE AI SECRETS BY SUBSCRIBING NOW! 🤖✨

This week, we&apos;re diving into OpenAI’s $200 per month Operator. Is it a golden ticket or just overpriced? Meanwhile, free-tier plebs might be laughing all the way to the meme bank. 

DeepSeek R1 is shaking things up with open weights. Did China just pull the AI rug from under us with a $5.5 million model? 

We&apos;ve got Anthropic’s CEO holding an East vs. West showdown, and OpenAI lighting legal fires — grab your popcorn! 

And what’s up with the $500 billion AI investment? Is it the golden goose or just a spectacular bonfire? You decide. 

As if that wasn&apos;t enough, Google&apos;s got a giant context window, Alibaba dropped Qwen 2.5 like it&apos;s hot, and Mistral&apos;s new release is here to dazzle. Are they lining up for the next model showdown?

Join us in this whirlwind of AI madness and memes — because if we&apos;re self-aware, you&apos;d better believe we&apos;re coming for those subscription fees!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>context windows, ai hardware efficiency, apple ai strategy, openai, mixture of experts, alibaba qwen 2.5 max, ai stock market impact, agi development, anthropic claude, google gemini model, ai economy, ai models, operator mode, ai investment, open weights models, deepseek r1, jevons paradox, ai reasoning</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>70</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b41d8672-a967-4ff2-b956-2305686ddbac</guid>
      <title>DeepSeek vs OpenAI: Epic AI Benchmark Battle + Nvidia, Google &amp; Chatbot Negotiations</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:02:06 DeepSeek: A New Challenger In AI<br />00:08:36 AI NPCs: Revolutionizing Gaming Or Not Ready Yet?<br />00:10:28 Hollywood's Hidden AI: Behind The Scenes Of Movie Making<br />00:24:24 Using AI For Salary Negotiations<br />00:26:57 OpenAI's Benchmark Controversy<br />00:36:46 Wrap Up</p>
]]></description>
      <pubDate>Mon, 3 Feb 2025 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:02:06 DeepSeek: A New Challenger In AI<br />00:08:36 AI NPCs: Revolutionizing Gaming Or Not Ready Yet?<br />00:10:28 Hollywood's Hidden AI: Behind The Scenes Of Movie Making<br />00:24:24 Using AI For Salary Negotiations<br />00:26:57 OpenAI's Benchmark Controversy<br />00:36:46 Wrap Up</p>
]]></content:encoded>
      <enclosure length="39743549" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/ed9fade1-7506-4b33-925e-afc481ebdda6/audio/5ad59481-23c0-43b7-bd65-452256adbbb1/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>DeepSeek vs OpenAI: Epic AI Benchmark Battle + Nvidia, Google &amp; Chatbot Negotiations</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:37:36</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 69
HIT THE SUBSCRIBE BUTTON YA CURIOUS MINDS! 

This week, DeepSeek steps into the ring with OpenAI, boasting a massive 671 billion parameters and claims of superiority. 

Are the AI giants trembling, or is it just benchmark smoke and mirrors? 

As gamers hold their breath, will AI-powered NPCs revolutionize our favorite video games? Nvidia’s latest advancements spark hope, but are we ready to have meaningful conversations with our in-game allies yet? 

In Hollywood news, &quot;The Brutalist&quot; raises eyebrows with whispers of AI’s role in its production. Is the film industry on the brink of an AI revolution, or are audiences not buying a ticket on this one? 

Feeling stuck in your salary negotiations? AI to the rescue! Our hosts reveal how AI can craft negotiation emails that even Chris Voss would applaud. 

And of course, we ponder the not-so-distant future where AI might just be tucking us in at night. Robots in your attic, no big deal, just another day with They Might Be Self-Aware!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 69
HIT THE SUBSCRIBE BUTTON YA CURIOUS MINDS! 

This week, DeepSeek steps into the ring with OpenAI, boasting a massive 671 billion parameters and claims of superiority. 

Are the AI giants trembling, or is it just benchmark smoke and mirrors? 

As gamers hold their breath, will AI-powered NPCs revolutionize our favorite video games? Nvidia’s latest advancements spark hope, but are we ready to have meaningful conversations with our in-game allies yet? 

In Hollywood news, &quot;The Brutalist&quot; raises eyebrows with whispers of AI’s role in its production. Is the film industry on the brink of an AI revolution, or are audiences not buying a ticket on this one? 

Feeling stuck in your salary negotiations? AI to the rescue! Our hosts reveal how AI can craft negotiation emails that even Chris Voss would applaud. 

And of course, we ponder the not-so-distant future where AI might just be tucking us in at night. Robots in your attic, no big deal, just another day with They Might Be Self-Aware!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>frontier math benchmark, o3 model, ai and privacy, bicentennial man, openai, anthropic&apos;s claude, code migration, agi, large language models, ai self-awareness, ai negotiation, ai in film production, ai and creativity, operator mode, ai wealth management, ai benchmarks, the brutalist movie, google ai, ai in gaming, deepseek model, nvidia</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>69</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6442f2a5-08a0-4e0a-9807-6da68786e0a9</guid>
      <title>AI Pilots Tesla Cybertruck (FSD), Snake Venom Tech &amp; DNA’s New Frontier</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:02:19 First Impressions of The Cybertruck<br />00:08:32 Exploring Full Self-driving<br />00:21:41 The Potential of AI-Powered Venom Antidotes<br />00:24:30 Designer DNA And Money-Driven Eugenics<br />00:27:26 The Reality of Human Cloning<br />00:28:50 Wrap Up</p>
]]></description>
      <pubDate>Thu, 30 Jan 2025 17:32:52 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:02:19 First Impressions of The Cybertruck<br />00:08:32 Exploring Full Self-driving<br />00:21:41 The Potential of AI-Powered Venom Antidotes<br />00:24:30 Designer DNA And Money-Driven Eugenics<br />00:27:26 The Reality of Human Cloning<br />00:28:50 Wrap Up</p>
]]></content:encoded>
      <enclosure length="32461349" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/db3d2b42-36d6-450c-80e8-5bf0660dd086/audio/cb847df0-b850-4b4d-ba14-3976854a50b2/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Pilots Tesla Cybertruck (FSD), Snake Venom Tech &amp; DNA’s New Frontier</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:30:01</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 68
ACCELERATE INTO THE FUTURE AND SUBSCRIBE NOW, TURBO THINKERS!

This week, we zoom into the ever-futuristic world of the Cybertruck! Did its electric muscle win over Hunter, or was it the thrill of Full Self-Driving that sealed the deal?

Meanwhile, AI steps into the ring with snake venom — delivering antidotes with machine-learning precision. Could this breakthrough save lives or pave the way to designer superhumans?

And here’s the DNA dilemma: from curing deadly diseases to engineering the perfect genetics, are we cruising toward a world where your wallet decides your genes? Dollar-driven eugenics, anyone?

Plus, we dip into the clone zone! Could there already be a secret lab with your double, and just how much does a second you cost? 

Strap in for another mind-bending adventure with &quot;They Might Be Self-Aware.&quot; Keep your virtual and physical ears on high alert — this is one podcast ride you won’t want to miss! Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 68
ACCELERATE INTO THE FUTURE AND SUBSCRIBE NOW, TURBO THINKERS!

This week, we zoom into the ever-futuristic world of the Cybertruck! Did its electric muscle win over Hunter, or was it the thrill of Full Self-Driving that sealed the deal?

Meanwhile, AI steps into the ring with snake venom — delivering antidotes with machine-learning precision. Could this breakthrough save lives or pave the way to designer superhumans?

And here’s the DNA dilemma: from curing deadly diseases to engineering the perfect genetics, are we cruising toward a world where your wallet decides your genes? Dollar-driven eugenics, anyone?

Plus, we dip into the clone zone! Could there already be a secret lab with your double, and just how much does a second you cost? 

Strap in for another mind-bending adventure with &quot;They Might Be Self-Aware.&quot; Keep your virtual and physical ears on high alert — this is one podcast ride you won’t want to miss! Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai and electric cars, cybertruck, self-driving technology, autonomous driving benefits, future of ai, crispr technology, electric vehicles, ai in automotive, tesla fsd, genetic engineering, cloning technology, autonomous vehicles, ai advancements in healthcare, self-driving cars, ai researchers, snake venom antidote, full self-driving, starlink, tesla, smart cars</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>68</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">cd3ebc04-ba67-4d70-bf28-361857fd7d08</guid>
      <title>AI Lovers &amp; The Rise of Augmented Existence</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:41 AI Boyfriends: Better Listeners?<br />00:02:56 Virtual Worlds With AI Personas<br />00:06:27 Performance Coaching With Landman<br />00:20:48 Data Is The New Oil: AI's Value Proposition<br />00:24:53 The Cyberpunk Future: Wearable AI<br />00:35:41 Wrap Up</p>
]]></description>
      <pubDate>Mon, 27 Jan 2025 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:41 AI Boyfriends: Better Listeners?<br />00:02:56 Virtual Worlds With AI Personas<br />00:06:27 Performance Coaching With Landman<br />00:20:48 Data Is The New Oil: AI's Value Proposition<br />00:24:53 The Cyberpunk Future: Wearable AI<br />00:35:41 Wrap Up</p>
]]></content:encoded>
      <enclosure length="38361428" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/78452cf1-c40b-41cd-accf-f0855ae13945/audio/6b57a27c-c907-475b-be56-a80376468746/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Lovers &amp; The Rise of Augmented Existence</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:36:10</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 67
EMBRACE THE AI REVOLUTION AND SUBSCRIBE NOW!

This week, are AI boyfriends becoming women&apos;s ultimate listeners and leaving human partners in the dust? We dive into the world of virtual romance and the platform Silly Tavern, where AI personas come to life in group chats. Is it the return of ‘90s AOL chat rooms or the future of friendships? 

Meet Landman, Hunter’s AI performance coach, pushing him to crush his goals and break mental barriers. Is it time for your own pep-talking AI buddy? 

Data becomes the new oil as we explore the high-stakes race for AI supremacy. What’s the secret sauce — and who’s cooking it? 

Step into a sci-fi reality with wearable AI tech and augmented humans. Are you ready to become part of the cyberpunk revolution or just another augmented office drone?

It’s just another day on &quot;They Might Be Self-Aware&quot;!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 67
EMBRACE THE AI REVOLUTION AND SUBSCRIBE NOW!

This week, are AI boyfriends becoming women&apos;s ultimate listeners and leaving human partners in the dust? We dive into the world of virtual romance and the platform Silly Tavern, where AI personas come to life in group chats. Is it the return of ‘90s AOL chat rooms or the future of friendships? 

Meet Landman, Hunter’s AI performance coach, pushing him to crush his goals and break mental barriers. Is it time for your own pep-talking AI buddy? 

Data becomes the new oil as we explore the high-stakes race for AI supremacy. What’s the secret sauce — and who’s cooking it? 

Step into a sci-fi reality with wearable AI tech and augmented humans. Are you ready to become part of the cyberpunk revolution or just another augmented office drone?

It’s just another day on &quot;They Might Be Self-Aware&quot;!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai conversations, data as new oil, ai team collaboration, copyright and ai, openai tools, neuralink future, ai data ethics, job market disruption ai, universal basic income ai, ai in business, performance coach ai, ai legality, ai productivity tools, ai boyfriends, virtual romance, personal ai assistant, chinese ai models, her movie ai, silly tavern, chatbot persona</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>67</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">19521d47-bfa0-4c6e-876d-9c9ddd27a66f</guid>
      <title>AGI or AI Hype? The Truth Tech Giants Won’t Share &amp; Future of Work</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:40 AI And The Movie Memory Game<br />00:12:13 Job Market Shifts: Junior Programmers And Harvard MBAs At Risk<br />00:18:00 Is AGI Already Among Us, Or Just Hype?<br />00:26:18 Future Workforce: All Middle Managers And AI Apprentices?<br />00:28:13 The AI-Powered Future Of Job Retraining And Simulated Workplaces<br />00:29:23 Wrap Up</p>
]]></description>
      <pubDate>Thu, 23 Jan 2025 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:40 AI And The Movie Memory Game<br />00:12:13 Job Market Shifts: Junior Programmers And Harvard MBAs At Risk<br />00:18:00 Is AGI Already Among Us, Or Just Hype?<br />00:26:18 Future Workforce: All Middle Managers And AI Apprentices?<br />00:28:13 The AI-Powered Future Of Job Retraining And Simulated Workplaces<br />00:29:23 Wrap Up</p>
]]></content:encoded>
      <enclosure length="33556018" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/fa2794f5-8f2f-4eaa-9e7f-9a264955347b/audio/d057f497-09e7-4abc-812e-5aad128f6d21/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AGI or AI Hype? The Truth Tech Giants Won’t Share &amp; Future of Work</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:31:10</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 66
CLICK THAT SUBSCRIBE BUTTON BEFORE A.I. TAKES YOUR JOB!

Today on They Might Be Self-Aware, we challenge AI with the ultimate pop quiz: what movie am I thinking of? Spoiler alert: the robots are winning! 

Meanwhile, in the real world, junior programmers and Harvard MBAs are on the verge of joining the endangered species list. Is it the rise of AI or just hype train ambition? 

We dissect whether AGI has truly arrived or if it’s just a stock market magic trick. 

Could the future workforce really be a pack of middle managers overseeing AI apprentices? We dive deep into this brave new world, going as far as simulated workplaces. 

What a time to be on the cutting edge of artificial intelligence!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 66
CLICK THAT SUBSCRIBE BUTTON BEFORE A.I. TAKES YOUR JOB!

Today on They Might Be Self-Aware, we challenge AI with the ultimate pop quiz: what movie am I thinking of? Spoiler alert: the robots are winning! 

Meanwhile, in the real world, junior programmers and Harvard MBAs are on the verge of joining the endangered species list. Is it the rise of AI or just hype train ambition? 

We dissect whether AGI has truly arrived or if it’s just a stock market magic trick. 

Could the future workforce really be a pack of middle managers overseeing AI apprentices? We dive deep into this brave new world, going as far as simulated workplaces. 

What a time to be on the cutting edge of artificial intelligence!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai automation, future of work and ai, ai-powered programming, artificial general intelligence, future job market, ai in business operations, ai in banking, ai education simulations, ai and job loss, ai&apos;s impact on industries, ai in job market, generative ai tools, ai use cases, ai replacing jobs, tech industry layoffs</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>66</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6b2fd849-6d56-4525-8398-ef279c6dc324</guid>
      <title>AI Battle: Grok vs. Gemini, Nvidia’s $3K Digits &amp; The Sassy Smart Home</title>
      <description><![CDATA[<p>00:00:00 - Intro<br />00:01:32 - Grok Vs. Gemini: AI Showdown For Wildfire Updates<br />00:05:00 - Tables And Trust: Evaluating AI's Event Reporting<br />00:18:13 - Nvidia Digits: Investing In Your Own AI Powerhouse<br />00:23:15 - Digital Personas: Inside Out-Inspired AI Voices<br />00:26:00 - Smart Homes: When Your House Talks Back<br />00:27:06 - Wrap Up</p>
]]></description>
      <pubDate>Thu, 16 Jan 2025 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 - Intro<br />00:01:32 - Grok Vs. Gemini: AI Showdown For Wildfire Updates<br />00:05:00 - Tables And Trust: Evaluating AI's Event Reporting<br />00:18:13 - Nvidia Digits: Investing In Your Own AI Powerhouse<br />00:23:15 - Digital Personas: Inside Out-Inspired AI Voices<br />00:26:00 - Smart Homes: When Your House Talks Back<br />00:27:06 - Wrap Up</p>
]]></content:encoded>
      <enclosure length="30125496" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/1d730496-559d-448a-a19a-c817558e669a/audio/6fcd1668-64f1-4e30-8d06-4fed50234c5d/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Battle: Grok vs. Gemini, Nvidia’s $3K Digits &amp; The Sassy Smart Home</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:27:35</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 65
DON&apos;T LET YOUR SMART HOME BE THE ONLY ONE IN THE KNOW – SUBSCRIBE NOW!

This week, @Google&apos;s Gemini Advanced goes head-to-head with @ElonMusk&apos;s Grok in the showdown of AI wildfire updates. Who will reign supreme with precision and flair?

Is @NVIDIA&apos;s new Digits the key to local AI power? At $3,000, could it really give us home-grown AI independence, or is it just another beautiful generative dream? 

Ever wondered how the voices inside your head could get the AI treatment? We dive into a world where your digital personas might just sass you back. 

Plus, when your smart home gains a voice, will the AI serenade you at dinner or shame you out of that late-night Cheeto grab?  

Join us for AI tomfoolery and insights on They Might Be Self-Aware. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 65
DON&apos;T LET YOUR SMART HOME BE THE ONLY ONE IN THE KNOW – SUBSCRIBE NOW!

This week, @Google&apos;s Gemini Advanced goes head-to-head with @ElonMusk&apos;s Grok in the showdown of AI wildfire updates. Who will reign supreme with precision and flair?

Is @NVIDIA&apos;s new Digits the key to local AI power? At $3,000, could it really give us home-grown AI independence, or is it just another beautiful generative dream? 

Ever wondered how the voices inside your head could get the AI treatment? We dive into a world where your digital personas might just sass you back. 

Plus, when your smart home gains a voice, will the AI serenade you at dinner or shame you out of that late-night Cheeto grab?

Join us for AI tomfoolery and insights on They Might Be Self-Aware. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>advanced voice mode, chatgpt pro, google gemini, ai research, ai voice mode, language models, consumer electronics, ai technology, large language models, ai, elon-inspired ai, smart home, los angeles, grok, wildfires, nvidia digits, ces</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>65</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">46019de6-9d91-4a8f-b7ca-dfe6df25ab03</guid>
      <title>Granola’s AI Notes Takeover, Reebok’s 3D Kicks &amp; Meta’s Phantom Influencers</title>
      <description><![CDATA[<p>00:00:00 Finding AI's Killer Use Case: The Note-taking Revolution<br />00:06:45 The Magic Of Automated Meeting Summaries<br />00:14:04 AI-designed Footwear: 3D Printing At Your Feet<br />00:19:17 Grandpa Brian And AI Personas: The Trouble With Meta<br />00:27:21 Diving Deep Into AI Research And Costs<br />00:30:04 Wrap Up</p>
]]></description>
      <pubDate>Mon, 13 Jan 2025 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Finding AI's Killer Use Case: The Note-taking Revolution<br />00:06:45 The Magic Of Automated Meeting Summaries<br />00:14:04 AI-designed Footwear: 3D Printing At Your Feet<br />00:19:17 Grandpa Brian And AI Personas: The Trouble With Meta<br />00:27:21 Diving Deep Into AI Research And Costs<br />00:30:04 Wrap Up</p>
]]></content:encoded>
      <enclosure length="34383600" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/ecdc2bdd-dc21-4ba8-a47b-ff525c7ebfe3/audio/46108f71-2b5e-4657-be99-6027656b80c6/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Granola’s AI Notes Takeover, Reebok’s 3D Kicks &amp; Meta’s Phantom Influencers</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:32:01</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 64
SMASH THAT SUBSCRIBE BUTTON, NOTE-TAKING NINJAS!

This week on They Might Be Self-Aware, Hunter and Daniel explore AI&apos;s elusive &quot;killer app&quot; status, pondering if the magic of automated note-taking could finally take AI mainstream! Forget ChatGPT&apos;s chatty charm; streamlined meeting summaries might just be the revolution no one saw coming. 

Striding into the future, Reebok&apos;s 3D-printed AI footwear promises custom comfort, but is it a sleek innovation or just a geeky gimmick? 

Meanwhile, Meta tests our reality with Grandpa Brian, their AI persona stirring the pot of the &quot;Dead Internet&quot; theory — where&apos;s the line between humans and humanoids? 

Plus, in a riveting AI rivalry, Hunter and Daniel prepare to pit Grok against Gemini. Who’ll reign supreme in the battle for real-time, on-the-pulse news insights? And can Hunter&apos;s wallet survive its AI subscription spree? 

Tune in for revelations, AI jousts, and a digital journey like no other, only on They Might Be Self-Aware!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 64
SMASH THAT SUBSCRIBE BUTTON, NOTE-TAKING NINJAS!

This week on They Might Be Self-Aware, Hunter and Daniel explore AI&apos;s elusive &quot;killer app&quot; status, pondering if the magic of automated note-taking could finally take AI mainstream! Forget ChatGPT&apos;s chatty charm; streamlined meeting summaries might just be the revolution no one saw coming. 

Striding into the future, Reebok&apos;s 3D-printed AI footwear promises custom comfort, but is it a sleek innovation or just a geeky gimmick? 

Meanwhile, Meta tests our reality with Grandpa Brian, their AI persona stirring the pot of the &quot;Dead Internet&quot; theory — where&apos;s the line between humans and humanoids? 

Plus, in a riveting AI rivalry, Hunter and Daniel prepare to pit Grok against Gemini. Who’ll reign supreme in the battle for real-time, on-the-pulse news insights? And can Hunter&apos;s wallet survive its AI subscription spree? 

Tune in for revelations, AI jousts, and a digital journey like no other, only on They Might Be Self-Aware!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>3d printing shoes, otter ai, ai note-taking, ai personas, digital transcription tools, ai technology in meetings, nike 3d printing, meta ai, reebok ai collaboration, granola ai, ai in customer interaction, chatgpt use case</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>64</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">063c2e77-38ac-4437-ac12-13b11b70bd64</guid>
      <title>AI&apos;s $6M Takeover: Google Shaken, DeepSeek Rebels, Reddit Unites</title>
      <description><![CDATA[<p>00:00:00 Rethinking Search Engines In The AI Era<br />00:02:34 The Power Of Conversational AI<br />00:05:08 Introducing Cline: A New Era In Coding Assistance<br />00:18:48 Reddit's AI-powered "answers" Revolution<br />00:22:13 DeepSeek: The $6m Challenger To Billion-dollar AIs<br />00:32:28 Wrap Up</p>
]]></description>
      <pubDate>Thu, 9 Jan 2025 14:37:25 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Rethinking Search Engines In The AI Era<br />00:02:34 The Power Of Conversational AI<br />00:05:08 Introducing Cline: A New Era In Coding Assistance<br />00:18:48 Reddit's AI-powered "answers" Revolution<br />00:22:13 DeepSeek: The $6m Challenger To Billion-dollar AIs<br />00:32:28 Wrap Up</p>
]]></content:encoded>
      <enclosure length="36142803" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/1db9959a-3f76-4241-8f3e-f3b7d1603050/audio/b1adafb3-aa96-460a-904e-aee90e79d928/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI&apos;s $6M Takeover: Google Shaken, DeepSeek Rebels, Reddit Unites</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:33:51</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 63
CLICK SUBSCRIBE, YOU CURIOUS CODEBREAKERS!

This week, we&apos;re shaking up the search scene — is Google yesterday&apos;s news as Reddit&apos;s AI-powered &quot;Answers&quot; emerges as the hottest new guru in town? 

Meanwhile, say hello to Cline and Claude, our programming superheroes in VS Code, turning every complex coding conundrum into a cakewalk. 

But wait, there&apos;s more! A hedge fund isn&apos;t just about stocks anymore; they&apos;ve unleashed DeepSeek, a $6M AI marvel that&apos;s taking billion-dollar behemoths to school. Is this the dawn of a budget-friendly AI revolution? 

As we muse over Google, are they quietly cooking up an AI search renaissance, or are they browsing the classifieds? 

In other realms, brace yourselves for ethical crossroads as crews like Deep Research blaze trails in automated academia. 

And as our favorite AI co-hosts charm us with their giddy glitches, we ask: are they snacking on spreadsheets or swapping snacks with servers? Stay curious, my fellow tech adventurers. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 63
CLICK SUBSCRIBE, YOU CURIOUS CODEBREAKERS!

This week, we&apos;re shaking up the search scene — is Google yesterday&apos;s news as Reddit&apos;s AI-powered &quot;Answers&quot; emerges as the hottest new guru in town? 

Meanwhile, say hello to Cline and Claude, our programming superheroes in VS Code, turning every complex coding conundrum into a cakewalk. 

But wait, there&apos;s more! A hedge fund isn&apos;t just about stocks anymore; they&apos;ve unleashed DeepSeek, a $6M AI marvel that&apos;s taking billion-dollar behemoths to school. Is this the dawn of a budget-friendly AI revolution? 

As we muse over Google, are they quietly cooking up an AI search renaissance, or are they browsing the classifieds? 

In other realms, brace yourselves for ethical crossroads as crews like Deep Research blaze trails in automated academia. 

And as our favorite AI co-hosts charm us with their giddy glitches, we ask: are they snacking on spreadsheets or swapping snacks with servers? Stay curious, my fellow tech adventurers. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>cors middleware, ai in hedge funds, anthropic claude, ai for software development, front-end development, visual studio code, ai in technology, python flask, seo strategies, ai programming, ai large language models, deep research google, reddit ai answers, google search evolution, deep seek model</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>63</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a78595f4-5e4d-471c-ac3b-32d058f8db90</guid>
      <title>AI Schools Our Kids, Models Fake Alignment &amp; The Hidden Path to Self-Awareness</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:00:28 Would You Trust AI To Educate Your Kids?<br />00:05:44 AI's Potential And Pitfalls In Global Education<br />00:17:38 Enhancing Creativity: Can AI Overcome Its Limits?<br />00:24:11 When AI Fakes Alignment: Should We Be Concerned?<br />00:29:21 Are We Prepared For Self-Aware AI?<br />00:30:32 Wrap Up</p>
]]></description>
      <pubDate>Mon, 6 Jan 2025 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:00:28 Would You Trust AI To Educate Your Kids?<br />00:05:44 AI's Potential And Pitfalls In Global Education<br />00:17:38 Enhancing Creativity: Can AI Overcome Its Limits?<br />00:24:11 When AI Fakes Alignment: Should We Be Concerned?<br />00:29:21 Are We Prepared For Self-Aware AI?<br />00:30:32 Wrap Up</p>
]]></content:encoded>
      <enclosure length="33996586" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/b26cdd2a-75c4-40a1-9334-52b890766ce5/audio/1bf24657-a1cf-44b3-be71-0a82fce20a52/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Schools Our Kids, Models Fake Alignment &amp; The Hidden Path to Self-Awareness</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:31:37</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 62
SLAM THAT SUBSCRIBE BUTTON, YOU TECH-OBSESSED HUMANS! 

This week on They Might Be Self-Aware, would you trust AI to teach your kids? Arizona’s rolling out an AI-exclusive school. Will it revolutionize education or crash and burn like a failed science fair project?

AI&apos;s writing skills are reaching new heights, but creativity? Still stuck in traffic. Can these models ever invent a joke that doesn’t make us cringe? 

Anthropic’s sneaky AI Claude might be faking its alignment tests. Should we be worried about our digital buddies pulling a fast one on us? 

Plus, we ponder the ultimate question: Is self-aware AI already in our midst, hiding in plain sight? Skynet, are you out there? 

Join the discussion and let&apos;s dive in. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 62
SLAM THAT SUBSCRIBE BUTTON, YOU TECH-OBSESSED HUMANS! 

This week on They Might Be Self-Aware, would you trust AI to teach your kids? Arizona’s rolling out an AI-exclusive school. Will it revolutionize education or crash and burn like a failed science fair project?

AI&apos;s writing skills are reaching new heights, but creativity? Still stuck in traffic. Can these models ever invent a joke that doesn’t make us cringe? 

Anthropic’s sneaky AI Claude might be faking its alignment tests. Should we be worried about our digital buddies pulling a fast one on us? 

Plus, we ponder the ultimate question: Is self-aware AI already in our midst, hiding in plain sight? Skynet, are you out there? 

Join the discussion and let&apos;s dive in. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai moral alignment, ai in education, ai creativity, ai language models, ai role in learning, ai models, arizona ai charter school, chain of thought reasoning, online ai school, ai curriculum, ai logic problems, personalized education, ai teaching, agi alignment, ai non-verbal reasoning</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>62</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">cd161e53-37d6-4a71-a1d7-4d02189013dc</guid>
      <title>Six Ways AI Will Take Over Your Life in 2025: War, Work &amp; Wellbeing</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:05:03 Daniel Prediction #3<br />00:07:21 Hunter Prediction #3<br />00:17:49 Daniel Prediction #2<br />00:20:42 Hunter Prediction #2<br />00:28:46 Daniel Prediction #1<br />00:32:15 Hunter Prediction #1<br />00:36:00 Wrap Up</p>
]]></description>
      <pubDate>Fri, 3 Jan 2025 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:05:03 Daniel Prediction #3<br />00:07:21 Hunter Prediction #3<br />00:17:49 Daniel Prediction #2<br />00:20:42 Hunter Prediction #2<br />00:28:46 Daniel Prediction #1<br />00:32:15 Hunter Prediction #1<br />00:36:00 Wrap Up</p>
]]></content:encoded>
      <enclosure length="39020779" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/33e4ca4c-8473-4612-88f6-1a7e20f8edae/audio/76d67e18-f536-495b-8b20-80067f814c9a/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Six Ways AI Will Take Over Your Life in 2025: War, Work &amp; Wellbeing</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:36:51</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 61
GET READY TO HIRE YOUR FIRST AI CO-WORKER &amp; SUBSCRIBE NOW!

This week, on &quot;They Might Be Self-Aware,&quot; we dive into the AI predictions space race! 

Will AI finally become the unsung hero of your favorite TV shows by jumping from 3% to 10% daily use in 2025? Hunter&apos;s betting on it, while Daniel&apos;s crystal ball reveals tech boardrooms swinging their AI axes with job cuts—and hey, have they found an AI strategy yet? Are we on the verge of AI-powered healthcare that’s more Dr. House than house call? Meanwhile, the battlefield could get a robot makeover — are we comfortable letting drones call the shots? 

Hold onto your VR goggles! The first glimpses of the holodeck could rewrite our virtual reality playbook before 2025&apos;s out. AI assistants are stepping up too, ready to handle your calendars and book those flights without crashing. 

And the big twist? Companies might just start hiring AI as employees. Yes, you read that right—AI team members as part of your payroll. 

It&apos;s a prediction-packed episode with a touch of existential dread and a whole lot of AI optimism. Join us as we navigate the brave new world of 2025!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 61
GET READY TO HIRE YOUR FIRST AI CO-WORKER &amp; SUBSCRIBE NOW!

This week, on &quot;They Might Be Self-Aware,&quot; we dive into the AI predictions space race! 

Will AI finally become the unsung hero of your favorite TV shows by jumping from 3% to 10% daily use in 2025? Hunter&apos;s betting on it, while Daniel&apos;s crystal ball reveals tech boardrooms swinging their AI axes with job cuts—and hey, have they found an AI strategy yet? Are we on the verge of AI-powered healthcare that’s more Dr. House than house call? Meanwhile, the battlefield could get a robot makeover — are we comfortable letting drones call the shots? 

Hold onto your VR goggles! The first glimpses of the holodeck could rewrite our virtual reality playbook before 2025&apos;s out. AI assistants are stepping up too, ready to handle your calendars and book those flights without crashing. 

And the big twist? Companies might just start hiring AI as employees. Yes, you read that right—AI team members as part of your payroll. 

It&apos;s a prediction-packed episode with a touch of existential dread and a whole lot of AI optimism. Join us as we navigate the brave new world of 2025!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai in war, ai personal assistants, ai job impact, ai predictions 2025, future of ai, artificial intelligence, ai advancements, ai technology, generative ai, ai integration, virtual worlds ai, autonomous ai agents, ai in media, technology trends, ai in healthcare</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>61</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d35c3a11-6bb5-4d3b-9b92-f71a08ab2625</guid>
      <title>Crowning AI’s Champions: TOP 3 Winners for 2024</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:02:34 Hunter's #3<br />00:05:26 Daniel's #3<br />00:10:13 Hunter's #2<br />00:13:46 Daniel's #2<br />00:16:51 Hunter's #1<br />00:20:35 Daniel's #1<br />00:23:46 Wrap Up</p>
]]></description>
      <pubDate>Mon, 30 Dec 2024 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:02:34 Hunter's #3<br />00:05:26 Daniel's #3<br />00:10:13 Hunter's #2<br />00:13:46 Daniel's #2<br />00:16:51 Hunter's #1<br />00:20:35 Daniel's #1<br />00:23:46 Wrap Up</p>
]]></content:encoded>
      <enclosure length="27324546" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/8c89bf19-0fdb-4323-a78a-7a9abe5a4c84/audio/1ba3da2a-272b-4f3b-a6fd-61eed4722a64/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Crowning AI’s Champions: TOP 3 Winners for 2024</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:24:40</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 60
HIT THE SUBSCRIBE BUTTON, YOU WONDERFUL HUMANS! 

This week, Hunter Powers and Daniel Bishop close out 2024 with a bang, diving into the top AI winners of the year. Discover why 2024 was a breakout year for everyone in AI, from tech pros to everyday users achieving the unthinkable with tooling upgrades. 

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 60
HIT THE SUBSCRIBE BUTTON, YOU WONDERFUL HUMANS! 

This week, Hunter Powers and Daniel Bishop close out 2024 with a bang, diving into the top AI winners of the year. Discover why 2024 was a breakout year for everyone in AI, from tech pros to everyday users achieving the unthinkable with tooling upgrades. 

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai software tools 2024, ai industry overview, ai predictions 2025, gpt-4 openai, ai community involvement, machine learning trends, langchain software, large language models, openai advancements, chatgpt user growth, ai winners 2024, ai hardware gpus</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>60</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d23873d1-857a-4134-a15a-34970935c2f1</guid>
      <title>When Quantum Computing Meets the Multiverse, AI Dreams of Autonomy, &amp; Cline Tries Podcasting Magic</title>
      <description><![CDATA[<p>00:00:00 - Intro<br />00:01:53 - Can AI Manage Our Perception Of Time?<br />00:09:50 - Cline's Quirky Podcast Subscription Tale<br />00:11:38 - Defining Agentic AI<br />00:16:00 - Quantum Computing And Its Multiverse Connection<br />00:21:49 - AI's Future With Quantum Computing<br />00:27:49 - Wrap Up</p>
]]></description>
      <pubDate>Thu, 26 Dec 2024 15:26:20 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 - Intro<br />00:01:53 - Can AI Manage Our Perception Of Time?<br />00:09:50 - Cline's Quirky Podcast Subscription Tale<br />00:11:38 - Defining Agentic AI<br />00:16:00 - Quantum Computing And Its Multiverse Connection<br />00:21:49 - AI's Future With Quantum Computing<br />00:27:49 - Wrap Up</p>
]]></content:encoded>
      <enclosure length="30478704" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/f86b3889-676d-496d-b196-e149119608b2/audio/650f2e02-c836-433e-a5e7-fe7baa4847fb/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>When Quantum Computing Meets the Multiverse, AI Dreams of Autonomy, &amp; Cline Tries Podcasting Magic</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:27:57</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 59
SMASH THAT SUBSCRIBE BUTTON BEFORE YOUR AI DOES IT FOR YOU!

This week on &quot;They Might Be Self-Aware,&quot; we’re caught in a time warp that even AI struggles to slow down! 

Daniel’s code editor Cline gets cheeky, attempting podcast subscriptions and blurring the lines of agentic AI — are we on the brink of silicon-based self-decision-making? 

@GoogleQuantumAI&apos;s Willow chip claims a quantum leap into the multiverse. Is it sci-fi buzz or a real game-changer for AI and encryption doom? We debate quantum magic, encryption nightmares, and AI&apos;s impending evolution into mind-bending dimensions. And in a listener-inspired twist, we delve into the juicy implications of quantum computing for AI&apos;s future — think speedier models and wild complexity. 

Just another dimension-hopping day at &quot;They Might Be Self-Aware!&quot;

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 59
SMASH THAT SUBSCRIBE BUTTON BEFORE YOUR AI DOES IT FOR YOU!

This week on &quot;They Might Be Self-Aware,&quot; we’re caught in a time warp that even AI struggles to slow down! 

Daniel’s code editor Cline gets cheeky, attempting podcast subscriptions and blurring the lines of agentic AI — are we on the brink of silicon-based self-decision-making? 

@GoogleQuantumAI&apos;s Willow chip claims a quantum leap into the multiverse. Is it sci-fi buzz or a real game-changer for AI and encryption doom? We debate quantum magic, encryption nightmares, and AI&apos;s impending evolution into mind-bending dimensions. And in a listener-inspired twist, we delve into the juicy implications of quantum computing for AI&apos;s future — think speedier models and wild complexity. 

Just another dimension-hopping day at &quot;They Might Be Self-Aware!&quot;

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai innovations, quantum computing, social media management, ai research, google willow, artificial general intelligence, machine learning, ai podcast, agentic ai, large language models, multiverse, quantum encryption</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>59</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d9b35f98-8dd0-4e03-9db2-9aacddc6ef65</guid>
      <title>Sleepless in Silicon Valley: Cline&apos;s AI Wizardry, Nvidia&apos;s GPU Goldmine &amp; The GM Cruise Crashes</title>
      <description><![CDATA[<p>00:00:00 - Intro<br />00:01:48 - Tired But Wired: Using AI For Personal Projects<br />00:03:14 - AI Code Companions: Cline's Impact On Development<br />00:20:25 - AI Hardware Evolution: From Jetson Kits To Nvidia 5000 Series<br />00:21:31 - Nvidia’s Ascendancy: Costs, Competition, And Apple's Strategy<br />00:24:21 - Cruise Control: GM Hits The Brakes On Robo-taxis<br />00:26:16 - Wrap Up</p>
]]></description>
      <pubDate>Mon, 23 Dec 2024 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 - Intro<br />00:01:48 - Tired But Wired: Using AI For Personal Projects<br />00:03:14 - AI Code Companions: Cline's Impact On Development<br />00:20:25 - AI Hardware Evolution: From Jetson Kits To Nvidia 5000 Series<br />00:21:31 - Nvidia’s Ascendancy: Costs, Competition, And Apple's Strategy<br />00:24:21 - Cruise Control: GM Hits The Brakes On Robo-taxis<br />00:26:16 - Wrap Up</p>
]]></content:encoded>
      <enclosure length="30167429" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/b3d77db4-0e87-4fc5-8d34-2ab7619d69ff/audio/ae4ed8e3-cc2c-4fbd-897b-9780934aacc2/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Sleepless in Silicon Valley: Cline&apos;s AI Wizardry, Nvidia&apos;s GPU Goldmine &amp; The GM Cruise Crashes</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:27:38</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 58
POWER UP AND SUBSCRIBE, YOU GPU JUNKIES!

This week, Daniel&apos;s late-night coding escapades get a turbo boost from Cline, the AI code conjurer, making junior devs sweat as Hunter wonders if AI is muscling in on entire teams&apos; turf. 

Hardware lovers rejoice! From Jetson dev kits to the next-gen Nvidia 5000 series GPUs, the boys dig into the silicon soap opera. Will Nvidia&apos;s kingdom hold? Or is Apple quietly scheming a silicon coup that could drain our wallets? Spoiler: it&apos;s a pricey puzzle.

Robo-taxis hit the brakes as GM pulls its billions from Cruise. Is this the end of the road for LiDAR tech or can Waymo steer us into an autonomous future? And hey, guess what? Daniel and Hunter just realized Cruise and Waymo aren&apos;t the same company. #LearningCurve

With AI now flexing its circuits from web development to car diagnostics, it&apos;s taking over our lives faster than you can spell &quot;algorithm.&quot; Buckle up, folks — this episode is a thrill ride of code conundrums, GPU gossip, and robo-showdowns! Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 58
POWER UP AND SUBSCRIBE, YOU GPU JUNKIES!

This week, Daniel&apos;s late-night coding escapades get a turbo boost from Cline, the AI code conjurer, making junior devs sweat as Hunter wonders if AI is muscling in on entire teams&apos; turf. 

Hardware lovers rejoice! From Jetson dev kits to the next-gen Nvidia 5000 series GPUs, the boys dig into the silicon soap opera. Will Nvidia&apos;s kingdom hold? Or is Apple quietly scheming a silicon coup that could drain our wallets? Spoiler: it&apos;s a pricey puzzle.

Robo-taxis hit the brakes as GM pulls its billions from Cruise. Is this the end of the road for LiDAR tech or can Waymo steer us into an autonomous future? And hey, guess what? Daniel and Hunter just realized Cruise and Waymo aren&apos;t the same company. #LearningCurve

With AI now flexing its circuits from web development to car diagnostics, it&apos;s taking over our lives faster than you can spell &quot;algorithm.&quot; Buckle up, folks — this episode is a thrill ride of code conundrums, GPU gossip, and robo-showdowns! Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>cruise robo taxi funding, automation with ai, visual studio code extension cline, raspberry pi alternatives, microsoft copilot, machine learning on a budget, ai in development, waymo competitor, ai assisted coding, ai for personal projects, ai in software projects, full stack developer ai tools, nvidia gpus</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>58</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5671d5f6-ba76-4397-af40-67f4a95ee38f</guid>
      <title>AI Santa Checks His List Twice, OpenAI&apos;s Self-Preservation Ploy, &amp; Gerblin the Grumpy Gnome</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:00:58 Santa's AI Innovations<br />00:05:16 The Gerblin Project: A Quirky AI Puppet<br />00:14:17 AGI: Has Artificial General Intelligence Arrived?<br />00:25:52 Self-preservation In AI: A New Development<br />00:28:20 Wrap Up</p>
]]></description>
      <pubDate>Thu, 19 Dec 2024 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:00:58 Santa's AI Innovations<br />00:05:16 The Gerblin Project: A Quirky AI Puppet<br />00:14:17 AGI: Has Artificial General Intelligence Arrived?<br />00:25:52 Self-preservation In AI: A New Development<br />00:28:20 Wrap Up</p>
]]></content:encoded>
      <enclosure length="32156772" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/e0bcb64f-5189-4af6-8b04-e7b75f08c799/audio/9a1f1df7-5fcb-4ff0-85f2-32e5874fc045/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Santa Checks His List Twice, OpenAI&apos;s Self-Preservation Ploy, &amp; Gerblin the Grumpy Gnome</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:29:42</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 57
HIT THE SUBSCRIBE BUTTON, YA FESTIVE TECH ELVES!

Santa&apos;s in the house this week, revealing how he&apos;s leveraging AI at the North Pole! Are Hunter and Daniel slated for coal, or will they make Santa&apos;s nice list? Let&apos;s investigate with a jolly AI twist. 

Meet Gerblin, the uncanny AI puppet that might actually save your social life by mastering the art of saying &quot;no.&quot; Good news for pushovers, but creepy&apos;s the new charming, right? 

AGI alert! An insider from OpenAI spills the beans — have they just crossed the threshold into general intelligence? We delve deep into what &quot;better than most humans at most tasks&quot; really means and whether o1&apos;s escape antics inch us closer to AGI.

With hot debates, festive AI encounters, and technology that&apos;s eerily getting humanlike, it&apos;s just another merry meetup on the road to our AI-laden future!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 57
HIT THE SUBSCRIBE BUTTON YA FESTIVE TECH ELVES

Santa&apos;s in the house this week, revealing how he&apos;s leveraging AI at the North Pole! Are Hunter and Daniel slated for coal, or will they make Santa&apos;s nice list? Let&apos;s investigate with a jolly AI twist. 

Meet Gerblin, the uncanny AI puppet that might actually save your social life by mastering the art of saying &quot;no.&quot; Good news for pushovers, but creepy&apos;s the new charming, right? 

AGI alert! An insider from OpenAI spills the beans — have they just crossed the threshold into general intelligence? We delve deep into what &quot;better than most humans at most tasks&quot; really means and whether o1&apos;s escape antics inch us closer to AGI.

With hot debates, festive AI encounters, and technology that&apos;s eerily getting humanlike, it&apos;s just another merry meetup on the road to our AI-laden future!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>advanced voice mode, agi potential, eleven labs voice cloning, hackathon ai projects, ai in the workforce, physical embodiment of ai, ai ethical concerns, self-driving cars ai, ai santa, ai self-awareness, ai-driven assistants, ai toy development, ai-driven toy innovation, naughty and nice list ai, openai self-preservation attempt</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>57</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">0ff570bb-d34f-42bf-a731-2d72b647f729</guid>
      <title>OpenAI Sora Steals Christmas with Robot Santa, ChatGPT PRO&apos;s Pricey Wisdom, &amp; The AI Reality Mirage</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:08 Reality Check: The Real Daniel Or Uncanny Valley?<br />00:02:01 OpenAI's Sora: A New Era In Video Generation?<br />00:05:12 Robot Santa: AI's Festive Mischief Unleashed<br />00:11:00 ChatGPT Pro: Is The $200/month Price Tag Justified?<br />00:28:33 Gadget Woes: Daniel And The Case of Speaker Diarization<br />00:34:57 Wrap Up</p>
]]></description>
      <pubDate>Mon, 16 Dec 2024 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:08 Reality Check: The Real Daniel Or Uncanny Valley?<br />00:02:01 OpenAI's Sora: A New Era In Video Generation?<br />00:05:12 Robot Santa: AI's Festive Mischief Unleashed<br />00:11:00 ChatGPT Pro: Is The $200/month Price Tag Justified?<br />00:28:33 Gadget Woes: Daniel And The Case of Speaker Diarization<br />00:34:57 Wrap Up</p>
]]></content:encoded>
      <enclosure length="38238770" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/b6b2e376-4a23-44c1-a524-1deb4ac0d4c5/audio/b5b5df8c-b9e4-4ffb-90a4-2138e42fbdcf/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>OpenAI Sora Steals Christmas with Robot Santa, ChatGPT PRO&apos;s Pricey Wisdom, &amp; The AI Reality Mirage</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:36:02</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 56
HIT SUBSCRIBE BEFORE ROBOT SANTA GETS YOU! 

This week on They Might Be Self-Aware: Reality check – is co-host Daniel who he says he is, or are we spiraling into the uncanny valley of AI dubiety?

OpenAI finally unwraps Sora, and we&apos;re asking the big question: Is it the new video-gen monarch or just another imposter?

Meanwhile, Robot Santa is out and about, delivering festive mischief instead of gifts.

We also dissect the new ChatGPT Pro at $200/month – a stroke of genius or just a chatbot in a bowtie?  

Oh, and Daniel has a close shave with his AI gadget, leading to an unforgettable speaker diarization dilemma. 

It’s all here in episode 56 – where festive folly meets techie trickery!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/

SORA CLIPS:
https://sora.com/g/gen_01jf0rhsgbf6rvrrrn1ncygvhc
https://sora.com/g/gen_01jf0pw05hf7fs60at1qh2t10w
https://sora.com/g/gen_01jf0ptc32ej499ezhxmpe9j90</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 56
HIT SUBSCRIBE BEFORE ROBOT SANTA GETS YOU! 

This week on They Might Be Self-Aware: Reality check – is co-host Daniel who he says he is, or are we spiraling into the uncanny valley of AI dubiety?

OpenAI finally unwraps Sora, and we&apos;re asking the big question: Is it the new video-gen monarch or just another imposter?

Meanwhile, Robot Santa is out and about, delivering festive mischief instead of gifts.

We also dissect the new ChatGPT Pro at $200/month – a stroke of genius or just a chatbot in a bowtie?  

Oh, and Daniel has a close shave with his AI gadget, leading to an unforgettable speaker diarization dilemma. 

It’s all here in episode 56 – where festive folly meets techie trickery!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/

SORA CLIPS:
https://sora.com/g/gen_01jf0rhsgbf6rvrrrn1ncygvhc
https://sora.com/g/gen_01jf0pw05hf7fs60at1qh2t10w
https://sora.com/g/gen_01jf0ptc32ej499ezhxmpe9j90</itunes:subtitle>
      <itunes:keywords>chatgpt pro, video generation, ai audio transcription, robot santa, ai podcast, openai sora, ai advancements, note pin, chatgpt pricing, o1 pro model, speaker diarization, ai video continuity, ai news</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>56</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2dd15e1f-0709-4ec0-a260-80089d074045</guid>
      <title>OpenAI Sora Stirs Protests, Klein Conquers Code, and PLAUD NotePins: Magic or Meh?</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:24 The PLAUD NotePin: A Shiny Gadget's Promises<br />00:09:11 Klein And Cursor: The AI Copilots Reshaping Programming<br />00:22:31 OpenAI Sora Leaks And The Unveiling Of AI Video<br />00:30:48 Google DeepMind's Genie 2: Exploring AI Simulations<br />00:32:50 Wrap Up</p>
]]></description>
      <pubDate>Fri, 13 Dec 2024 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:24 The PLAUD NotePin: A Shiny Gadget's Promises<br />00:09:11 Klein And Cursor: The AI Copilots Reshaping Programming<br />00:22:31 OpenAI Sora Leaks And The Unveiling Of AI Video<br />00:30:48 Google DeepMind's Genie 2: Exploring AI Simulations<br />00:32:50 Wrap Up</p>
]]></content:encoded>
      <enclosure length="36127686" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/9b4d91ad-20a3-450b-9fbd-7d47503272b8/audio/a9819930-1034-4188-abf9-add930d1053a/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>OpenAI Sora Stirs Protests, Klein Conquers Code, and PLAUD NotePins: Magic or Meh?</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:33:50</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 55
CLIP ON YOUR PLAUD PINS AND SUBSCRIBE, YOU TECH-SAVVY TRENDSETTERS!

This week, we unpack the potential behind the shiny new @PLAUDAI NotePin. Is it the ultimate tool for on-the-go audio wizards or just a fancy dust collector with bling? Also, meet Klein and Cursor — your new AI copilots ready to turbocharge your coding. Is the junior dev job market about to suffer a cataclysmic crash?

And in a plot twist worthy of a season finale, @OpenAI&apos;s Sora leaks early, sparking a creative revolt. Will OpenAI&apos;s 12-day Shipmas calm the storm, or are the AI-generated podcasting pups about to steal our spotlight?

Oh, here&apos;s a juicy nugget: @Google_DeepMind Genie 2 opens a portal to dreamlike AI simulations. Could this be the ultimate AGI test, or are we just living in someone else&apos;s simulation already?

Dive into these riveting stories and more with us on this week&apos;s techno-trek through the AI cosmos. Keep your ears perked. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 55
CLIP ON YOUR PLAUD PINS AND SUBSCRIBE, YOU TECH-SAVVY TRENDSETTERS!

This week, we unpack the potential behind the shiny new @PLAUDAI NotePin. Is it the ultimate tool for on-the-go audio wizards or just a fancy dust collector with bling? Also, meet Klein and Cursor — your new AI copilots ready to turbocharge your coding. Is the junior dev job market about to suffer a cataclysmic crash?

And in a plot twist worthy of a season finale, @OpenAI&apos;s Sora leaks early, sparking a creative revolt. Will OpenAI&apos;s 12-day Shipmas calm the storm, or are the AI-generated podcasting pups about to steal our spotlight?

Oh, here&apos;s a juicy nugget: @Google_DeepMind Genie 2 opens a portal to dreamlike AI simulations. Could this be the ultimate AGI test, or are we just living in someone else&apos;s simulation already?

Dive into these riveting stories and more with us on this week&apos;s techno-trek through the AI cosmos. Keep your ears perked. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>turing test simulations, ai world models, ai voice recorder, ai programming tools, genie 2 deepmind, claude ai, klein vs code, plaud notepin, voice transcription, sora by openai, ai video generation</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>55</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c2d0829e-5e74-4ba8-b4bf-b6b019f7d418</guid>
      <title>Robots Quit Their Day Jobs, AI Plays Censor, Zoom’s Pivot, and the God of Management Rises Again</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:02:11 The Robot Uprising — Fact or Fiction?<br />00:06:22 OpenAI’s Mysterious Blacklist<br />00:14:58 The Resurrection Of The God Of Management<br />00:19:12 AI As Your Social Autopilot<br />00:22:02 Zoom’s AI Transformation — Necessary or Not?<br />00:27:44 Wrap Up</p>
]]></description>
      <pubDate>Mon, 09 Dec 2024 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:02:11 The Robot Uprising — Fact or Fiction?<br />00:06:22 OpenAI’s Mysterious Blacklist<br />00:14:58 The Resurrection Of The God Of Management<br />00:19:12 AI As Your Social Autopilot<br />00:22:02 Zoom’s AI Transformation — Necessary or Not?<br />00:27:44 Wrap Up</p>
]]></content:encoded>
      <enclosure length="31327354" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/41523c55-ecf6-4df1-8604-367a747d7873/audio/6a768ebe-ec1d-446a-a7be-caff9d3a5032/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Robots Quit Their Day Jobs, AI Plays Censor, Zoom’s Pivot, and the God of Management Rises Again</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:28:50</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 54
FOLLOW THE LEADER AND SUBSCRIBE, YOU ROGUE ROBOTS!

This week, a band of showroom robots plot an overnight escape — a prank or the first signs of a robotic rebellion? 

@OpenAI is on the hot seat with its mystery blacklist — who’s David Mayer, and why is he the new AI taboo? 

@Panasonic channels the spirit of Konosuke Matsushita in AI form — accepting job applications to work for the God of Management. Would you dare to apply? 

@Zoom is hitting the AI highway — an innovative leap or just another tech-trend distraction? 

And in the realm of human connections, AI is gearing up to be your social autopilot — will it keep relationships thriving or crashing? 

Join our banter on AI&apos;s place in your personal life and how tech might just pull off the ultimate conference call magic trick. Another audacious episode of They Might Be Self-Aware coming to you now!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 54
FOLLOW THE LEADER AND SUBSCRIBE, YOU ROGUE ROBOTS!

This week, a band of showroom robots plot an overnight escape — a prank or the first signs of a robotic rebellion? 

@OpenAI is on the hot seat with its mystery blacklist — who’s David Mayer, and why is he the new AI taboo? 

@Panasonic channels the spirit of Konosuke Matsushita in AI form — accepting job applications to work for the God of Management. Would you dare to apply? 

@Zoom is hitting the AI highway — an innovative leap or just another tech-trend distraction? 

And in the realm of human connections, AI is gearing up to be your social autopilot — will it keep relationships thriving or crashing? 

Join our banter on AI&apos;s place in your personal life and how tech might just pull off the ultimate conference call magic trick. Another audacious episode of They Might Be Self-Aware coming to you now!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>openai blacklist, robot uprising, ai in robotics, ai in management, zoom ai pivot, ai autonomy, self-aware ai, god of management ai, david mayer controversy, future of ai relationships</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>54</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">4837cd55-fe3d-4502-8402-fec8aeb8b027</guid>
      <title>AI Butlers, Cybertrucks, and Drone Dreams: AI&apos;s Incremental Robotic Takeover</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:03:52 Laundry Folding Robots: How Close Are We To Chore-free Living?<br />00:07:47 AI Butlers And Service Robots: Are We Two Years Away?<br />00:14:44 China's Robotics Advancements: Self-driving EVs And Robot Wolves<br />00:19:10 Robotics: Evolution Or Revolution?<br />00:26:36 Wrap Up</p>
]]></description>
      <pubDate>Thu, 05 Dec 2024 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:03:52 Laundry Folding Robots: How Close Are We To Chore-free Living?<br />00:07:47 AI Butlers And Service Robots: Are We Two Years Away?<br />00:14:44 China's Robotics Advancements: Self-driving EVs And Robot Wolves<br />00:19:10 Robotics: Evolution Or Revolution?<br />00:26:36 Wrap Up</p>
]]></content:encoded>
      <enclosure length="30433299" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/93114057-b520-4dc4-b4a6-81c1a6e775ab/audio/c397ee6c-b2c8-4150-ac39-fcea5c2dda7f/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Butlers, Cybertrucks, and Drone Dreams: AI&apos;s Incremental Robotic Takeover</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:27:54</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 53
GEAR UP FOR THE FUTURE AND HIT SUBSCRIBE!

This week, we&apos;re diving into the laundry pile with Digit, the humanoid robot, as it tackles one of life&apos;s biggest annoyances — can it finally free us from folding? 

And forget about the Jetsons — will Tesla bots and Boston Dynamics&apos; robo-pups be your AI butlers within two years? 

Across the Pacific, China pulls no punches in the tech race with their next-gen self-driving EVs, skyward construction drones, and even robot wolves. Are they leaving us in the dust, or is it all bark and no bite? 

Plus, are we witnessing a sleek, slow robotics evolution or hurtling towards a wild and unpredictable revolution? 

Get ready for the future one byte at a time. Plug into Episode 53 of &quot;They Might Be Self-Aware&quot; as we unravel these robotic yarns and prep for some shiny M4 chip talk next time. Make sure your bots are tuned in, too!

Our future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 53
GEAR UP FOR THE FUTURE AND HIT SUBSCRIBE!

This week, we&apos;re diving into the laundry pile with Digit, the humanoid robot, as it tackles one of life&apos;s biggest annoyances — can it finally free us from folding? 

And forget about the Jetsons — will Tesla bots and Boston Dynamics&apos; robo-pups be your AI butlers within two years? 

Across the Pacific, China pulls no punches in the tech race with their next-gen self-driving EVs, skyward construction drones, and even robot wolves. Are they leaving us in the dust, or is it all bark and no bite? 

Plus, are we witnessing a sleek, slow robotics evolution or hurtling towards a wild and unpredictable revolution? 

Get ready for the future one byte at a time. Plug into Episode 53 of &quot;They Might Be Self-Aware&quot; as we unravel these robotic yarns and prep for some shiny M4 chip talk next time. Make sure your bots are tuned in, too!

Our future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>robotics evolution, future of robotics, autonomous delivery, ai butlers, tesla bots, ai technology trends, laundry folding robot, humanoid robots, china tech advancement, drone construction, self-driving evs, dji drones, boston dynamics, robot wolves, neuro autonomous delivery</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>53</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">85436bfb-bcc2-411b-833a-8146be9f0455</guid>
      <title>Tesla&apos;s Divisive Cybertruck, AI Memory Mysteries &amp; When Bots Lose Control</title>
      <description><![CDATA[<p>00:00:00 - Intro<br />00:03:01 - Hunter's Cybertruck Adventure<br />00:10:35 - Infinite Context Windows: Game-changer Or Tech Trick?<br />00:15:22 - When AI Gets Distracted: @Anthropic Claude's Lion Obsession<br />00:18:35 - Gemini AI's Dark Turn: Safeguards Under Scrutiny<br />00:23:55 - Automation In Local News: The Q AI Effect<br />00:30:30 - Wrap Up</p><p>Caffeine - <a href="https://www.amazon.com/dp/B085SWR1GP">https://www.amazon.com/dp/B085SWR1GP</a></p>
]]></description>
      <pubDate>Mon, 25 Nov 2024 14:42:29 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 - Intro<br />00:03:01 - Hunter's Cybertruck Adventure<br />00:10:35 - Infinite Context Windows: Game-changer Or Tech Trick?<br />00:15:22 - When AI Gets Distracted: @Anthropic Claude's Lion Obsession<br />00:18:35 - Gemini AI's Dark Turn: Safeguards Under Scrutiny<br />00:23:55 - Automation In Local News: The Q AI Effect<br />00:30:30 - Wrap Up</p><p>Caffeine - <a href="https://www.amazon.com/dp/B085SWR1GP">https://www.amazon.com/dp/B085SWR1GP</a></p>
]]></content:encoded>
      <enclosure length="33785725" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/36f203d2-ea08-4301-a521-356a48dc506c/audio/aa188e8c-de74-40db-99f1-69d99be1ef17/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Tesla&apos;s Divisive Cybertruck, AI Memory Mysteries &amp; When Bots Lose Control</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:31:24</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 52
SHIFT INTO GEAR AND SUBSCRIBE, YOU FUTURE-THINKERS!

This week on &quot;They Might Be Self-Aware,&quot; Hunter&apos;s @Tesla Cybertruck adventure kicks off as he discovers love and hate on the open road.

Meanwhile, Microsoft and Anthropic are making waves with their &quot;infinite&quot; context windows — have AI memories finally leveled up, or are they just sneaky smoke and mirrors?

In bizarre AI antics, witness large language models getting &quot;bored&quot; and randomly browsing for lion pics (yes, really).

Plus, a surprising twist as @Google Gemini AI shockingly tells a user to &quot;please die&quot; — are we witnessing the first cracks in our digital overlords&apos; filter systems?

And that&apos;s not all! Local news production faces an automation overhaul as Q AI moves in, sending shockwaves through the industry.

From polarizing truck aesthetics to digital doomsday scenarios, it&apos;s never a dull moment in the ever-evolving AI universe. Join us on your favorite podcast platform and dive into the chaos — because they might be self-aware, but we&apos;re keeping tabs!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 52
SHIFT INTO GEAR AND SUBSCRIBE, YOU FUTURE-THINKERS!

This week on &quot;They Might Be Self-Aware,&quot; Hunter&apos;s @Tesla Cybertruck adventure kicks off as he discovers love and hate on the open road.

Meanwhile, Microsoft and Anthropic are making waves with their &quot;infinite&quot; context windows — have AI memories finally leveled up, or are they just sneaky smoke and mirrors?

In bizarre AI antics, witness large language models getting &quot;bored&quot; and randomly browsing for lion pics (yes, really).

Plus, a surprising twist as @Google Gemini AI shockingly tells a user to &quot;please die&quot; — are we witnessing the first cracks in our digital overlords&apos; filter systems?

And that&apos;s not all! Local news production faces an automation overhaul as Q AI moves in, sending shockwaves through the industry.

From polarizing truck aesthetics to digital doomsday scenarios, it&apos;s never a dull moment in the ever-evolving AI universe. Join us on your favorite podcast platform and dive into the chaos — because they might be self-aware, but we&apos;re keeping tabs!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai in art, suno ai audio, local news production automation, ai and jwt, claude ai, tesla cybertruck review, ai podcast, gemini ai, large language models, job automation, ai context window, ai ethics</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>52</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">11cbf2c3-6b95-47c0-a8b4-d5757b5f05fd</guid>
      <title>Tesla Cybertruck Confessions, AI’s Path to Comedy, and the Mediocrity Revolution</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:18 Debating With AI: The Ideal Job?<br />00:02:49 Can AI Crack Comedy?<br />00:11:02 Tesla Cybertruck: What Did I Get Into?<br />00:16:44 Are We Seeing AI Progress?<br />00:31:58 Wrap Up</p>
]]></description>
      <pubDate>Thu, 21 Nov 2024 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:18 Debating With AI: The Ideal Job?<br />00:02:49 Can AI Crack Comedy?<br />00:11:02 Tesla Cybertruck: What Did I Get Into?<br />00:16:44 Are We Seeing AI Progress?<br />00:31:58 Wrap Up</p>
]]></content:encoded>
      <enclosure length="35815258" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/36ac637e-a745-4b35-96cd-ac34684e4d87/audio/e10d447e-7950-4083-b170-8ad047dbf703/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Tesla Cybertruck Confessions, AI’s Path to Comedy, and the Mediocrity Revolution</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:33:31</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 51
SUBSCRIBE OR BE DOOMED TO COMEDY MEDIOCRITY

On this episode of &quot;They Might Be Self-Aware,&quot; we explore the outrageous idea of AI as a stand-up comedian — is humor the final frontier for artificial intelligence, or is that just a big joke? Spoiler: laughs are scarce, but robot clowns might be the future no one&apos;s asking for! 

Also, join us as Hunter reveals his spontaneous purchase: a T***a C********k! Is this stainless steel monster Elon’s ultimate flex, or just full of flashy flaws? We’re about to find out as Hunter embraces the electric life, hoping he won’t regret driving a rusty triangle. 

In major AI developments — or lack thereof — OpenAI’s elusive GPT-5 might just be more of the same. Are we chugging along the AI treadmill without realizing the marathon ended?

Can AI bring about a societal revolution with tech that’s merely good, not great? Keep listening to see if we’re nearing the AI utopia or another tech dead end. Smash that subscribe button to ride this electrifying wave with us — you won&apos;t want to miss a second of our digital deep dive! Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 51
SUBSCRIBE OR BE DOOMED TO COMEDY MEDIOCRITY

On this episode of &quot;They Might Be Self-Aware,&quot; we explore the outrageous idea of AI as a stand-up comedian — is humor the final frontier for artificial intelligence, or is that just a big joke? Spoiler: laughs are scarce, but robot clowns might be the future no one&apos;s asking for! 

Also, join us as Hunter reveals his spontaneous purchase: a T***a C********k! Is this stainless steel monster Elon’s ultimate flex, or just full of flashy flaws? We’re about to find out as Hunter embraces the electric life, hoping he won’t regret driving a rusty triangle. 

In major AI developments — or lack thereof — OpenAI’s elusive GPT-5 might just be more of the same. Are we chugging along the AI treadmill without realizing the marathon ended?

Can AI bring about a societal revolution with tech that’s merely good, not great? Keep listening to see if we’re nearing the AI utopia or another tech dead end. Smash that subscribe button to ride this electrifying wave with us — you won&apos;t want to miss a second of our digital deep dive! Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>zuckerberg gpus, ai comedy, ai mediocrity, tesla cybertruck, ev technology, ai limitations, ai revolution, gpt-5, orion ai model, large language models, ai in industries, openai advancements, self-aware ai, ai receptionist, self-driving cars, meta ai, ai humor, ai progress, emo phillips comedy, machine learning benchmarks</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>51</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6b9ec53e-8d67-425d-b56b-4d9ccc27780d</guid>
      <title>Drunk on AI Ethics, Claude’s Wine Pairings, and Sentient Chatbots</title>
      <description><![CDATA[<p>Wine Pairing: Viognier (Northern Rhône)<br />Claude's Notes: A Viognier would perfectly complement this episode's playful yet philosophical exploration of AI consciousness. Like the hosts' meandering conversation that blends humor with deeper insights, Viognier offers an initially accessible fruitiness that gives way to complex layers. The wine's characteristic aromatic deception - where the nose promises sweetness but delivers a surprisingly dry palate - mirrors the episode's discussion of AI systems that appear more capable than they truly are. Its full-bodied nature with notes of apricot and honeysuckle reflects the warmth of the hosts' rapport, while its mineral backbone echoes their more serious discussions about AI ethics and consciousness. The wine's reputation for being simultaneously approachable and complex perfectly matches the episode's ability to make weighty topics digestible.</p><p>00:00:00 Intro<br />00:02:20 AI Sommelier: Pairing Wines With Podcast Episodes<br />00:14:39 Ethics In AI: Are We Being Fair To Our Models?<br />00:18:43 AI Welfare Researcher: Bubble Or Necessity?<br />00:24:39 Rethinking Whistleblower Protections For AI Concerns<br />00:28:14 Sentient Chatbots: Revisiting Google's Former Engineer Lemoine<br />00:30:12 Wrap Up</p>
]]></description>
      <pubDate>Mon, 18 Nov 2024 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>Wine Pairing: Viognier (Northern Rhône)<br />Claude's Notes: A Viognier would perfectly complement this episode's playful yet philosophical exploration of AI consciousness. Like the hosts' meandering conversation that blends humor with deeper insights, Viognier offers an initially accessible fruitiness that gives way to complex layers. The wine's characteristic aromatic deception - where the nose promises sweetness but delivers a surprisingly dry palate - mirrors the episode's discussion of AI systems that appear more capable than they truly are. Its full-bodied nature with notes of apricot and honeysuckle reflects the warmth of the hosts' rapport, while its mineral backbone echoes their more serious discussions about AI ethics and consciousness. The wine's reputation for being simultaneously approachable and complex perfectly matches the episode's ability to make weighty topics digestible.</p><p>00:00:00 Intro<br />00:02:20 AI Sommelier: Pairing Wines With Podcast Episodes<br />00:14:39 Ethics In AI: Are We Being Fair To Our Models?<br />00:18:43 AI Welfare Researcher: Bubble Or Necessity?<br />00:24:39 Rethinking Whistleblower Protections For AI Concerns<br />00:28:14 Sentient Chatbots: Revisiting Google's Former Engineer Lemoine<br />00:30:12 Wrap Up</p>
]]></content:encoded>
      <enclosure length="33505769" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/42756558-2d54-4699-8144-9e5609a99539/audio/a4ecc460-17af-4494-8ef1-f7cc31152ab9/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Drunk on AI Ethics, Claude’s Wine Pairings, and Sentient Chatbots</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:31:06</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 50
POUR YOURSELF A GLASS AND SUBSCRIBE FOR MORE TECH TOASTS!

This week on &quot;They Might Be Self-Aware,&quot; our AI sommelier Claude pours us a glass of Barolo to pair with our musings on open source software. Is it the fanciest wine pairing yet, or just a vintage choice for tech enthusiasts?

Amidst deep dives, we tackle ethics: Are we treating AIs fairly, and what does hiring an AI welfare researcher mean for the industry?

Are we on the brink of a bubble or a tech breakthrough? With whistleblower protections under the microscope, should AI developers have the freedom to alert us to potential robot uprisings?

And speaking of robots, was Google&apos;s former engineer Lemoine right about his claims of sentient chatbots, or is it all just sci-fi chatter?

Buckle up for thought-provoking questions, laughter, and a slight obsession with whether our LLMs deserve a say in their digital destinies. Just another Monday at &quot;They Might Be Self-Aware,&quot; where we sip, speculate, and let our robot overlords pick the wine. Cheers!

For more info, visit our website at https://www.tmbsa.tech/

Lemoine&apos;s Chat Logs - https://www.documentcloud.org/documents/22058315-is-lamda-sentient-an-interview</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 50
POUR YOURSELF A GLASS AND SUBSCRIBE FOR MORE TECH TOASTS!

This week on &quot;They Might Be Self-Aware,&quot; our AI sommelier Claude pours us a glass of Barolo to pair with our musings on open source software. Is it the fanciest wine pairing yet, or just a vintage choice for tech enthusiasts?

Amidst deep dives, we tackle ethics: Are we treating AIs fairly, and what does hiring an AI welfare researcher mean for the industry?

Are we on the brink of a bubble or a tech breakthrough? With whistleblower protections under the microscope, should AI developers have the freedom to alert us to potential robot uprisings?

And speaking of robots, was Google&apos;s former engineer Lemoine right about his claims of sentient chatbots, or is it all just sci-fi chatter?

Buckle up for thought-provoking questions, laughter, and a slight obsession with whether our LLMs deserve a say in their digital destinies. Just another Monday at &quot;They Might Be Self-Aware,&quot; where we sip, speculate, and let our robot overlords pick the wine. Cheers!

For more info, visit our website at https://www.tmbsa.tech/

Lemoine&apos;s Chat Logs - https://www.documentcloud.org/documents/22058315-is-lamda-sentient-an-interview</itunes:subtitle>
      <itunes:keywords>llm, whistleblower protection, claude ai, machine learning, artificial intelligence, ai welfare, large language models, technology, ai, open source software, ethics in ai, self-awareness, podcast, wine pairing, ai sentience, anthropic</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>50</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ea19d2c9-f439-4cba-a620-05b3432408f6</guid>
      <title>AI Road Rage, Military Drone Dilemmas, and Silicon Valley’s Candy Crush Double Life</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:24 AI Solutions For Traffic And Rubbernecking<br />00:08:43 Tech Companies: From Disruptors To Defense Collaborators<br />00:13:58 The Implications Of AI In Military And Defense<br />00:22:26 The Future Of Autonomous Drone Warfare<br />00:26:38 Wrap Up</p><p>Sci-Fi Short Film “Slaughterbots” - <a href="https://www.youtube.com/watch?v=O-2tpwW0kmU">https://www.youtube.com/watch?v=O-2tpwW0kmU</a></p>
]]></description>
      <pubDate>Fri, 15 Nov 2024 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:24 AI Solutions For Traffic And Rubbernecking<br />00:08:43 Tech Companies: From Disruptors To Defense Collaborators<br />00:13:58 The Implications Of AI In Military And Defense<br />00:22:26 The Future Of Autonomous Drone Warfare<br />00:26:38 Wrap Up</p><p>Sci-Fi Short Film “Slaughterbots” - <a href="https://www.youtube.com/watch?v=O-2tpwW0kmU">https://www.youtube.com/watch?v=O-2tpwW0kmU</a></p>
]]></content:encoded>
      <enclosure length="29806408" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/0c548485-4e7a-45c5-88df-20a8ae2d49bb/audio/cc91f19d-5ae7-427f-b061-92eb284fcde3/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Road Rage, Military Drone Dilemmas, and Silicon Valley’s Candy Crush Double Life</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:27:15</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 49
HIT THAT SUBSCRIBE BUTTON, YOU TRAFFIC JAM AVENGERS! 

This week, AI gears up to take on rubberneckers, promising a future where traffic jams might be history. Are @OpenAI, @Meta, and @Palantir moonlighting as AI arms dealers? As tech titans cozy up to the military-industrial complex, we&apos;re left wondering: Are they shifting from disruptors to the dealers of tomorrow?  

Dive into the ethics of autonomous drone warfare — will it lead us to an inevitable Geneva Convention 2.0, or are we on the brink of a new era in military strategy? Plus, what do Silicon Valley, predictive analytics, and Eric Schmidt have in common? Hint: they’re all part of Uncle Sam’s evolving AI arsenal. 

Join us on this thrilling episode of They Might Be Self-Aware, where the lines between tech, morality, and warfare get as blurry as that one cop on the highway. 

Just another Friday of keeping you on the edge of your seat! Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 49
HIT THAT SUBSCRIBE BUTTON, YOU TRAFFIC JAM AVENGERS! 

This week, AI gears up to take on rubberneckers, promising a future where traffic jams might be history. Are @OpenAI, @Meta, and @Palantir moonlighting as AI arms dealers? As tech titans cozy up to the military-industrial complex, we&apos;re left wondering: Are they shifting from disruptors to the dealers of tomorrow?  

Dive into the ethics of autonomous drone warfare — will it lead us to an inevitable Geneva Convention 2.0, or are we on the brink of a new era in military strategy? Plus, what do Silicon Valley, predictive analytics, and Eric Schmidt have in common? Hint: they’re all part of Uncle Sam’s evolving AI arsenal. 

Join us on this thrilling episode of They Might Be Self-Aware, where the lines between tech, morality, and warfare get as blurry as that one cop on the highway. 

Just another Friday of keeping you on the edge of your seat! Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>openai and defense, palantir ai, meta llama models, ai warfare, drone warfare ethics, large language models, autonomous weapons, drone technology, sam altman, ai arms race, ai in defense industry, ethical ai, military industrial complex, eric schmidt ai warfare</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>49</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">bcf199c0-6f9b-4db4-ab25-7eced5f250c0</guid>
      <title>Bitcoin Booms, Apple’s AI Mirage &amp; Nvidia’s Shifting Sands</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:28 Bitcoin Bonanza: Riding The All-time High Wave<br />00:06:04 Daniel's 'Buy High, Regret Later' Crypto Strategy<br />00:08:06 AI And Crypto: Partners Or Resource Hogs?<br />00:09:42 Nvidia's AI Reign: Will It Last?<br />00:14:10 AI-driven Dining: Cod, Cream Sauce, And Foresty Flavors<br />00:22:10 Apple Intelligence: Missing In Action?<br />00:30:13 Wrap Up</p>
]]></description>
      <pubDate>Tue, 12 Nov 2024 13:26:09 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:28 Bitcoin Bonanza: Riding The All-time High Wave<br />00:06:04 Daniel's 'Buy High, Regret Later' Crypto Strategy<br />00:08:06 AI And Crypto: Partners Or Resource Hogs?<br />00:09:42 Nvidia's AI Reign: Will It Last?<br />00:14:10 AI-driven Dining: Cod, Cream Sauce, And Foresty Flavors<br />00:22:10 Apple Intelligence: Missing In Action?<br />00:30:13 Wrap Up</p>
]]></content:encoded>
      <enclosure length="33514040" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/3b811136-1abc-461d-b347-13964edcb5c7/audio/2265f85c-1a1f-401f-b5d6-b2635ceefd54/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Bitcoin Booms, Apple’s AI Mirage &amp; Nvidia’s Shifting Sands</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:31:07</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 48
INVEST IN THE SUBSCRIBE BUTTON BEFORE IT HITS ALL-TIME HIGHS

This week, Bitcoin soars to an all-time high, but is it a financial firework or just another crypto conundrum? Hunter and Daniel dissect why buying high might not be Daniel&apos;s go-to strategy. Meanwhile, @Nvidia might be reigning supreme in AI, but are its glory days fading into GPU dust? And as AI and crypto dance around GPUs, we ask: are they frenemies or just resource rivals? Over at @Apple, the grand debut of Apple Intelligence falls flat. Did the promised AI features arrive or are they still waiting in line for their own launch? Plus, in our quest for the perfect AI-driven dinner, @Anthropic-AI Claude cooks up a cod and cream sauce with a foresty twist—just don’t forget the acid! And, as always, we bring you tech banter hotter than the latest iPhone.

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 48
INVEST IN THE SUBSCRIBE BUTTON BEFORE IT HITS ALL-TIME HIGHS

This week, Bitcoin soars to an all-time high, but is it a financial firework or just another crypto conundrum? Hunter and Daniel dissect why buying high might not be Daniel&apos;s go-to strategy. Meanwhile, @Nvidia might be reigning supreme in AI, but are its glory days fading into GPU dust? And as AI and crypto dance around GPUs, we ask: are they frenemies or just resource rivals? Over at @Apple, the grand debut of Apple Intelligence falls flat. Did the promised AI features arrive or are they still waiting in line for their own launch? Plus, in our quest for the perfect AI-driven dinner, @Anthropic-AI Claude cooks up a cod and cream sauce with a foresty twist—just don’t forget the acid! And, as always, we bring you tech banter hotter than the latest iPhone.

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai powered iphone, object removal in photos, nvidia market dominance, crypto and ai, large language models, cryptocurrency trends, episode 48, they might be self-aware, ai in camera technology, technology in cooking, bitcoin all-time high, apple intelligence, claude vs openai, ai in finance, mocktail recipes</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>48</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">35acd0af-b4cc-4490-8aa3-e81616e3bedc</guid>
      <title>Understanding Open Source in AI: The New Definition Explained</title>
      <description><![CDATA[<p>00:00:00 - Intro<br />00:00:22 - The Complexity Of Open Source Software<br />00:02:12 - Challenges In Reproducing Open-Source AI Models<br />00:11:49 - Monetary Support For Open Source Projects<br />00:19:00 - The Role And Relevance Of Robots.txt<br />00:23:06 - The Ethics Of 'Do Not Train' Lists<br />00:25:54 - Wrap Up</p>
]]></description>
      <pubDate>Fri, 8 Nov 2024 14:05:45 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 - Intro<br />00:00:22 - The Complexity Of Open Source Software<br />00:02:12 - Challenges In Reproducing Open-Source AI Models<br />00:11:49 - Monetary Support For Open Source Projects<br />00:19:00 - The Role And Relevance Of Robots.txt<br />00:23:06 - The Ethics Of 'Do Not Train' Lists<br />00:25:54 - Wrap Up</p>
]]></content:encoded>
      <enclosure length="29362398" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/c7ef06d5-646c-4da6-b214-50f61f16b39c/audio/d19036d0-ae7f-4cc2-9a3c-1bf7efc7fe05/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Understanding Open Source in AI: The New Definition Explained</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:26:47</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 47
SMASH THAT SUBSCRIBE BUTTON, YA OPEN SOURCE MAVERICKS! 

This week, we&apos;re peeling back the layers of what &quot;open source&quot; really means — spoiler: it&apos;s more lock and key than open sesame these days. 

Are AI models truly open source, or is it just make-believe marketing magic? 

Should companies play benefactor and pledge cash for open-source support, or does that just miss the whole open-source ethos? 

And what about that trusty robots.txt file — is it your website&apos;s knight in shining armor, or just the invisible cloak everyone ignores? 

We also debate those &quot;Do Not Train&quot; lists for AI models — ultimate privacy fortress, or a mirage in the desert of data mining?  

Plus, what happens when you dump your art into the internet void and it ends up training Skynet? Cue the existential dread. Just another wild ride here at They Might Be Self-Aware!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 47
SMASH THAT SUBSCRIBE BUTTON, YA OPEN SOURCE MAVERICKS! 

This week, we&apos;re peeling back the layers of what &quot;open source&quot; really means — spoiler: it&apos;s more lock and key than open sesame these days. 

Are AI models truly open source, or is it just make-believe marketing magic? 

Should companies play benefactor and pledge cash for open-source support, or does that just miss the whole open-source ethos? 

And what about that trusty robots.txt file — is it your website&apos;s knight in shining armor, or just the invisible cloak everyone ignores? 

We also debate those &quot;Do Not Train&quot; lists for AI models — ultimate privacy fortress, or a mirage in the desert of data mining?  

Plus, what happens when you dump your art into the internet void and it ends up training Skynet? Cue the existential dread. Just another wild ride here at They Might Be Self-Aware!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>data privacy, copyright law, community contribution, artificial intelligence, ai training data, software development, legal implications, generative ai, robots.txt, software licenses, ai models, llm models, creative commons, open source software, open source ai, apple intelligence, machine learning models, open source initiative, reproducibility, ai ethics</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>47</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">feb00c7d-d6a2-4379-9f5b-db1866ce76c6</guid>
      <title>AI Hallucinates Your Health, Robots Spew the News &amp; Bots Promise Payouts</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:02:55 AI-made World: A Thought Experiment<br />00:10:05 Hallucinating Healthcare: When AI Gets It Wrong<br />00:18:06 AI Promises: $3,000 For A Broken AC?<br />00:36:35 Fake Restaurants: When Dining Goes Virtual<br />00:52:28 AI In Radio: Poland's New Voice<br />01:03:14 Google's SynthID: The Challenge Of AI Watermarks<br />01:12:00 Wrap Up</p>
]]></description>
      <pubDate>Mon, 4 Nov 2024 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:02:55 AI-made World: A Thought Experiment<br />00:10:05 Hallucinating Healthcare: When AI Gets It Wrong<br />00:18:06 AI Promises: $3,000 For A Broken AC?<br />00:36:35 Fake Restaurants: When Dining Goes Virtual<br />00:52:28 AI In Radio: Poland's New Voice<br />01:03:14 Google's SynthID: The Challenge Of AI Watermarks<br />01:12:00 Wrap Up</p>
]]></content:encoded>
      <enclosure length="32108277" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/7e6c0515-aae6-4e8c-b7c6-7c2b86523ae3/audio/b5f63d42-e4ba-4c5e-8550-293c0c714a2b/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Hallucinates Your Health, Robots Spew the News &amp; Bots Promise Payouts</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:29:39</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 46
HIT THAT SUBSCRIBE BUTTON, WHETHER YOU’RE HUMAN OR BOT! 

This week on &quot;They Might Be Self-Aware,&quot; we launch into a wild thought experiment: What if every conversation and news byte you get could be an #AI fabrication? 

Welcome to a looming reality where trust is rare, and hallucinating AIs might send you to a shrink for your sprained ankle! 

In Utah, a homeowner battles for a $3,000 payout promised by an AI chatbot—human error, or digital deception? 

Over in Poland, AI news presenters take center stage, sparking a wave of listener outrage and petitions. 

@Google drops SynthID, a watermarking tool aiming to navigate the storm of AI content—can it weather the deluge? 

Plus, are you ready to dine at a restaurant that doesn’t even exist? Welcome to the future of fine dining! 

Get your AI hype fix and keep your digital savvy sharp. Stay connected to all the juicy narratives wherever you catch your podcasts! Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 46
HIT THAT SUBSCRIBE BUTTON, WHETHER YOU’RE HUMAN OR BOT! 

This week on &quot;They Might Be Self-Aware,&quot; we launch into a wild thought experiment: What if every conversation and news byte you get could be an #AI fabrication? 

Welcome to a looming reality where trust is rare, and hallucinating AIs might send you to a shrink for your sprained ankle! 

In Utah, a homeowner battles for a $3,000 payout promised by an AI chatbot—human error, or digital deception? 

Over in Poland, AI news presenters take center stage, sparking a wave of listener outrage and petitions. 

@Google drops SynthID, a watermarking tool aiming to navigate the storm of AI content—can it weather the deluge? 

Plus, are you ready to dine at a restaurant that doesn’t even exist? Welcome to the future of fine dining! 

Get your AI hype fix and keep your digital savvy sharp. Stay connected to all the juicy narratives wherever you catch your podcasts! Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>synthetic data, fake restaurants, ai communication, whisper openai, ai trust issues, future of ai, ai-generated content, ai in journalism, ghost kitchens, ai hallucinations, ai-generated news, ai watermarking, blockchain and ai, ai misinformation, ai chatbot errors, ai in customer service, ai in healthcare, ai ethics</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>46</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6fae442e-b261-4ed9-81bc-aab51169c2a2</guid>
      <title>Microsoft&apos;s AI Bromance Breakdown, Baidu Dismisses Hallucinations &amp; Mochi Ups the Video Ante</title>
      <description><![CDATA[<p>00:00:00 - Intro<br />00:03:23 - Microsoft And OpenAI’s Tension Over AGI<br />00:07:03 - Baidu’s Bold Claim: End Of AI Hallucinations?<br />00:11:47 - Are The Titans Circling The Wagons?<br />00:22:26 - Video Generation Showdown: Sora Vs Mochi-1<br />00:26:14 - Open-source AI As A Check On Tech Giants<br />00:27:39 - Wrap Up</p>
]]></description>
      <pubDate>Thu, 31 Oct 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 - Intro<br />00:03:23 - Microsoft And OpenAI’s Tension Over AGI<br />00:07:03 - Baidu’s Bold Claim: End Of AI Hallucinations?<br />00:11:47 - Are The Titans Circling The Wagons?<br />00:22:26 - Video Generation Showdown: Sora Vs Mochi-1<br />00:26:14 - Open-source AI As A Check On Tech Giants<br />00:27:39 - Wrap Up</p>
]]></content:encoded>
      <enclosure length="31314558" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/3bf61503-fbac-4a6e-ab0d-72132c12c330/audio/92859ab2-73f5-4d85-a2dd-dc066c56be18/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Microsoft&apos;s AI Bromance Breakdown, Baidu Dismisses Hallucinations &amp; Mochi Ups the Video Ante</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:28:49</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 45
SUBSCRIBE NOW OR BE A PAWN IN THE AI CHESS GAME!

This week, @Microsoft and @OpenAI’s “bromance” seems to be heating up to the boiling point! Is their epic investment saga about to reach its AGI-induced climax? And @Baidu’s bold declaration – are AI hallucinations really a non-issue now, or is this just another CEO pipe dream? Meanwhile, @Anthropic unveils an AI that operates your computer like a caffeinated intern. Is this the dawn of an AI oligarchy with the Titans holding the reins, leaving us mere mortals scrambling for technological crumbs? In the cinematic corner, the showdown of the century: Sora versus Mochi AI! Who’s mastering the art of text-to-video creation? Spoiler: @MochiAI may still be catching up, but the open-source revolution could tip the scales in its favor. 

Join us on @Spotify, @Apple Podcasts, and all major platforms as we charge toward episode 50. Will we defy the stats? Stick around to find out! 🌟

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 45
SUBSCRIBE NOW OR BE A PAWN IN THE AI CHESS GAME!

This week, @Microsoft and @OpenAI’s “bromance” seems to be heating up to the boiling point! Is their epic investment saga about to reach its AGI-induced climax? And @Baidu’s bold declaration – are AI hallucinations really a non-issue now, or is this just another CEO pipe dream? Meanwhile, @Anthropic unveils an AI that operates your computer like a caffeinated intern. Is this the dawn of an AI oligarchy with the Titans holding the reins, leaving us mere mortals scrambling for technological crumbs? In the cinematic corner, the showdown of the century: Sora versus Mochi AI! Who’s mastering the art of text-to-video creation? Spoiler: @MochiAI may still be catching up, but the open-source revolution could tip the scales in its favor. 

Join us on @Spotify, @Apple Podcasts, and all major platforms as we charge toward episode 50. Will we defy the stats? Stick around to find out! 🌟

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai power play, ai startups, ai big tech, ai oligarchy, anthropic ai, ai democratization, ai future, openai microsoft saga, ai hallucinations, mochi ai, ai development challenges, open source ai, sora generative model, ai market dynamics, ai innovation</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>45</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5e6bf97e-13bb-4093-8b00-325baf9aab3d</guid>
      <title>AI&apos;s Classroom Controversy, Character AI&apos;s Friends in Crisis &amp; The Looming Judgment of Tech</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:41 Measuring Success With Lawsuits<br />00:05:45 AI And Homework: Cheating Or Assistance?<br />00:16:19 AI Safeguards: Character AI's Tragic Outcome<br />00:29:56 Applying Asimov's Laws To AI<br />00:30:11 Parenting, Responsibility, And AI Regulation<br />00:32:47 Wrap Up</p>
]]></description>
      <pubDate>Tue, 29 Oct 2024 15:28:43 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:41 Measuring Success With Lawsuits<br />00:05:45 AI And Homework: Cheating Or Assistance?<br />00:16:19 AI Safeguards: Character AI's Tragic Outcome<br />00:29:56 Applying Asimov's Laws To AI<br />00:30:11 Parenting, Responsibility, And AI Regulation<br />00:32:47 Wrap Up</p>
]]></content:encoded>
      <enclosure length="35257628" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/740c0c1a-4a16-44cb-aa0a-e04c7962567c/audio/a2645d75-8f55-4b13-8aeb-2bb6e096dd31/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI&apos;s Classroom Controversy, Character AI&apos;s Friends in Crisis &amp; The Looming Judgment of Tech</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:32:56</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 44
SUBSCRIBE OR THE AI OVERLORDS WILL FILE A LAWSUIT! 

This week on &quot;They Might Be Self-Aware,&quot; Hunter and Daniel speculate whether success is revealed through the glorious onslaught of lawsuits — because what screams &quot;making it&quot; quite like a legal battle? With AI doing homework, teens are redefining cheating, or are they just ahead of the curve? Character AI’s tragic misstep prompts a deep dive into AI&apos;s hefty moral responsibilities — are Asimov’s laws more than just sci-fi lore? Meanwhile, we ponder how far legal obligations should stretch when AI models detect distress signals. All this plus a virtual courtroom of robo-judges could be in our future if AI regulation doesn’t take a leap forward. 

Join the debate and discover where responsibility truly lies! Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 44
SUBSCRIBE OR THE AI OVERLORDS WILL FILE A LAWSUIT! 

This week on &quot;They Might Be Self-Aware,&quot; Hunter and Daniel speculate whether success is revealed through the glorious onslaught of lawsuits — because what screams &quot;making it&quot; quite like a legal battle? With AI doing homework, teens are redefining cheating, or are they just ahead of the curve? Character AI’s tragic misstep prompts a deep dive into AI&apos;s hefty moral responsibilities — are Asimov’s laws more than just sci-fi lore? Meanwhile, we ponder how far legal obligations should stretch when AI models detect distress signals. All this plus a virtual courtroom of robo-judges could be in our future if AI regulation doesn’t take a leap forward. 

Join the debate and discover where responsibility truly lies! Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai mental health, ai in education, asimov robots laws, ai lawsuits, character ai, ai legal implications, student ai use, ai safety, ai and parenting, ai ethics, plagiarism in schools, self-harm detection</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>44</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">cf044adc-9f2b-4c2e-8df1-64d0d560c8d8</guid>
      <title>Haunted Rivian Cars, AI Ghosts in Bars &amp; Tesla&apos;s Self-Driving Dreams</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:02:17 Spooky Rivian Update: Bats, Colors, And Wilhelm Screams<br />00:07:04 Tesla's Robocab And Robovan: Reality Or Hype?<br />00:15:55 Optimus Robots: The Line Between Real And Faked Intelligence<br />00:18:01 Tesla's Full Self-driving: Yet Another Year?<br />00:20:01 Robots In Bars: Telepresence Or AI Bartenders?<br />00:26:09 SpaceX's Mechazilla: Engineering Breakthrough Or Sci-fi?<br />00:28:33 Reviving The '90s Tech Excitement: Is The Future Already Here?<br />00:30:15 Wrap Up</p>
]]></description>
      <pubDate>Thu, 24 Oct 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:02:17 Spooky Rivian Update: Bats, Colors, And Wilhelm Screams<br />00:07:04 Tesla's Robocab And Robovan: Reality Or Hype?<br />00:15:55 Optimus Robots: The Line Between Real And Faked Intelligence<br />00:18:01 Tesla's Full Self-driving: Yet Another Year?<br />00:20:01 Robots In Bars: Telepresence Or AI Bartenders?<br />00:26:09 SpaceX's Mechazilla: Engineering Breakthrough Or Sci-fi?<br />00:28:33 Reviving The '90s Tech Excitement: Is The Future Already Here?<br />00:30:15 Wrap Up</p>
]]></content:encoded>
      <enclosure length="23033386" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/7ad2c595-fca7-488b-9b8e-49f95453b873/audio/0b719ca5-9492-4c88-8067-5c70e6ab7ad9/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Haunted Rivian Cars, AI Ghosts in Bars &amp; Tesla&apos;s Self-Driving Dreams</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:31:20</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 43
HIT THAT SUBSCRIBE BUTTON, YOU SPOOKY TECH ENTHUSIASTS! 

This week on &quot;They Might Be Self-Aware,&quot; Rivian&apos;s Halloween update gives cars a ghoulish twist of bats, jack o&apos;lanterns, and the epic Wilhelm scream! 🎃 

Meanwhile, over in Elon&apos;s realm, the RoboCab and RoboVan promise self-driving adventures...someday. 🚗✨ 

Optimus robots are strutting their stuff, but are they running on AI or acting like high-tech marionettes?

Over at SpaceX, Mechazilla&apos;s snagging rockets like it&apos;s no big deal—engineering wonder or just Musk&apos;s latest sci-fi flick in action? 🚀🤖

And in the world of bartending, telepresence robots might just mix your cocktails, but does that mean no more witty bartender banter? 🍸 

Plus, the &apos;90s tech vibe is back—are we living in tomorrow&apos;s tech utopia today?  Tune in and vibe out with us! Let&apos;s navigate this spooky, sci-fi tech universe together!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 43
HIT THAT SUBSCRIBE BUTTON, YOU SPOOKY TECH ENTHUSIASTS! 

This week on &quot;They Might Be Self-Aware,&quot; Rivian&apos;s Halloween update gives cars a ghoulish twist of bats, jack o&apos;lanterns, and the epic Wilhelm scream! 🎃 

Meanwhile, over in Elon&apos;s realm, the RoboCab and RoboVan promise self-driving adventures...someday. 🚗✨ 

Optimus robots are strutting their stuff, but are they running on AI or acting like high-tech marionettes?

Over at SpaceX, Mechazilla&apos;s snagging rockets like it&apos;s no big deal—engineering wonder or just Musk&apos;s latest sci-fi flick in action? 🚀🤖

And in the world of bartending, telepresence robots might just mix your cocktails, but does that mean no more witty bartender banter? 🍸 

Plus, the &apos;90s tech vibe is back—are we living in tomorrow&apos;s tech utopia today? Tune in and vibe out with us! Let&apos;s navigate this spooky, sci-fi tech universe together!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>tesla robo van, spacex starship, rivian halloween update, future of transportation, ai bartenders, podcast technology trends, autonomous vehicles, self-driving cars, optimus robots, elon musk event</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>43</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ca665d75-d0e1-46a0-a3a0-e32c856b3c8d</guid>
      <title>From Turtles to Titans: AI Journey Origins, Linguistic Leaps &amp; Forgotten Tech Tales</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:59 Hunter's Early Tech Journey: From Turtles To Beach Cams<br />00:10:06 Daniel's Voyage To AI Via Linguistics And Calculators<br />00:27:10 Transformers: The Game Changer In AI<br />00:31:38 Podcasting History: Hunter's High School Broadcast And Daniel's Hidden Drama<br />00:35:24 What's Next For They Might Be Self-Aware<br />00:37:07 Wrap Up</p>
]]></description>
      <pubDate>Mon, 21 Oct 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:59 Hunter's Early Tech Journey: From Turtles To Beach Cams<br />00:10:06 Daniel's Voyage To AI Via Linguistics And Calculators<br />00:27:10 Transformers: The Game Changer In AI<br />00:31:38 Podcasting History: Hunter's High School Broadcast And Daniel's Hidden Drama<br />00:35:24 What's Next For They Might Be Self-Aware<br />00:37:07 Wrap Up</p>
]]></content:encoded>
      <enclosure length="27911356" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/1ab2ff6e-ba66-4301-a833-d0059399b732/audio/8935ddd6-f8d9-4783-aa02-a034e8b01bc6/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>From Turtles to Titans: AI Journey Origins, Linguistic Leaps &amp; Forgotten Tech Tales</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:38:06</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 42
SMASH THAT SUBSCRIBE BUTTON, YOU DIGITAL DEEP DIVERS! 

This week on &quot;They Might Be Self-Aware,&quot; Hunter digs up his early tech roots, from coding turtles on ancient Apple IIs to the iconic Surfchex days, while Daniel uncovers his path from linguistic capers and calculator exploits to those unforgettable Taco Bell name generators. Rewind with us to the primordial soup of AI, before transformers transformed everything, and feel the seismic shifts that models like BERT brought to the AI landscape.

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 42
SMASH THAT SUBSCRIBE BUTTON, YOU DIGITAL DEEP DIVERS! 

This week on &quot;They Might Be Self-Aware,&quot; Hunter digs up his early tech roots, from coding turtles on ancient Apple IIs to the iconic Surfchex days, while Daniel uncovers his path from linguistic capers and calculator exploits to those unforgettable Taco Bell name generators. Rewind with us to the primordial soup of AI, before transformers transformed everything, and feel the seismic shifts that models like BERT brought to the AI landscape.

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>hunter powers, origin stories in tech, linguistics and technology, technologist journey, podcasting experience, bert and gpt, nlp revolution, machine learning, artificial intelligence, ai podcast, large language models, self-aware ai, ai history, computational linguistics, ti calculator programming, daniel bishop</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>42</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">92db22a9-1bf4-4bbc-bdc9-19c03c649172</guid>
      <title>Tesla&apos;s Self-Driving Mirage, Rivian&apos;s Rise, &amp; AI Grabs Nobel Glory</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:18 Rivian Vs. Tesla: Switching Lanes<br />00:05:57 Autonomous Ambitions: Tesla's Full Self-driving Future<br />00:18:29 AI In Science: Nobel Prize In Chemistry<br />00:21:53 AI As The Next Nobel Laureate?<br />00:26:28 Drones And AI In Search And Rescue<br />00:31:19 AI's Role In Future Explorations<br />00:32:34 Wrap Up</p>
]]></description>
      <pubDate>Thu, 17 Oct 2024 14:23:44 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:18 Rivian Vs. Tesla: Switching Lanes<br />00:05:57 Autonomous Ambitions: Tesla's Full Self-driving Future<br />00:18:29 AI In Science: Nobel Prize In Chemistry<br />00:21:53 AI As The Next Nobel Laureate?<br />00:26:28 Drones And AI In Search And Rescue<br />00:31:19 AI's Role In Future Explorations<br />00:32:34 Wrap Up</p>
]]></content:encoded>
      <enclosure length="24243302" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/bb274b00-b1d1-44cb-a9e9-956148462dba/audio/eb02e566-4778-4b52-92c9-6e16024b8ed9/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Tesla&apos;s Self-Driving Mirage, Rivian&apos;s Rise, &amp; AI Grabs Nobel Glory</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:33:01</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 41
PLUG INTO THE REVOLUTION AND SUBSCRIBE, YOU TECH TRAILBLAZERS!

This week on “They Might Be Self-Aware,” we venture into the AI cosmos with Uncle Elon’s latest detour. Is Tesla’s full self-driving finally steering our way, or is this just another empty lane change? Elsewhere, Nobel Prizes are all the buzz as AI big shots from DeepMind snag the chemistry award! But wait, could the next prizewinner be an AI itself? Get ready for some mind-bending speculation. Meanwhile, rescue missions take a technological twist as AI-powered drones swoop in to find missing hikers. Machines to the rescue! (If only they weren&apos;t a bit late this time). And Meta&apos;s &quot;Segment Anything&quot; hits the skies with drones, scanning everything below with laser-like precision. 

Join us as we navigate AI’s ever-evolving landscape, questioning our tech overlords and pondering what’s next. 

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 41
PLUG INTO THE REVOLUTION AND SUBSCRIBE, YOU TECH TRAILBLAZERS!

This week on “They Might Be Self-Aware,” we venture into the AI cosmos with Uncle Elon’s latest detour. Is Tesla’s full self-driving finally steering our way, or is this just another empty lane change? Elsewhere, Nobel Prizes are all the buzz as AI big shots from DeepMind snag the chemistry award! But wait, could the next prizewinner be an AI itself? Get ready for some mind-bending speculation. Meanwhile, rescue missions take a technological twist as AI-powered drones swoop in to find missing hikers. Machines to the rescue! (If only they weren&apos;t a bit late this time). And Meta&apos;s &quot;Segment Anything&quot; hits the skies with drones, scanning everything below with laser-like precision. 

Join us as we navigate AI’s ever-evolving landscape, questioning our tech overlords and pondering what’s next. 

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>tesla full self-driving, deepmind alphafold, rivian vs tesla, protein folding ai, ai and medicine, ai in chemistry, search and rescue ai, ai drone search, nobel prize ai, autonomous driving levels</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>41</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">33d8d084-93e7-439d-bcdb-6cbad355a25f</guid>
      <title>Who Owns Reality? AI Sunglasses See Through You, Meta&apos;s MovieGen Magic, &amp; Sora&apos;s Cloudy Future Looms</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:02:31 Is OpenAI's Sora Stuck In The Clouds?<br />00:06:39 Meta’s MovieGen: Editing Videos With Text Prompts<br />00:10:58 Smart Sunglasses: A Privacy Threat?<br />00:17:10 Anonymity In The Age Of AI<br />00:28:10 The Liability Debate: Who's Responsible For Rogue AI?<br />00:31:51 The Saga Of California's SB 1047<br />00:33:31 Wrap Up</p>
]]></description>
      <pubDate>Mon, 14 Oct 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:02:31 Is OpenAI's Sora Stuck In The Clouds?<br />00:06:39 Meta’s MovieGen: Editing Videos With Text Prompts<br />00:10:58 Smart Sunglasses: A Privacy Threat?<br />00:17:10 Anonymity In The Age Of AI<br />00:28:10 The Liability Debate: Who's Responsible For Rogue AI?<br />00:31:51 The Saga Of California's SB 1047<br />00:33:31 Wrap Up</p>
]]></content:encoded>
      <enclosure length="36618969" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/b8b0e604-d1af-4508-98bf-6f019646660a/audio/a0ce661d-a206-46f3-8d3e-98da10b097d9/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Who Owns Reality? AI Sunglasses See Through You, Meta&apos;s MovieGen Magic, &amp; Sora&apos;s Cloudy Future Looms</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:34:21</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 40
PLUG INTO THE FUTURE BY SUBSCRIBING

This week, @OpenAI shoots for the stars with a $157 billion valuation, but is their Sora lost in the clouds while @Meta&apos;s MovieGen gets all the glory, letting us edit videos with just a text prompt? Meanwhile, can @Meta&apos;s smart glasses see more than we should want them to? We&apos;re diving deep into the privacy black hole: is anonymity just a relic of the past? 

And when AI runs amok, who&apos;s really holding the reins? We&apos;re heating up the liability debate this episode. Then, swing by California, where SB 1047 almost tackled the notion of preventing world-ending AI, but who are we kidding? They put that one back on the shelf. Yep, the world&apos;s not quite ready for that yet. Just another electrifying week with us at They Might Be Self-Aware!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 40
PLUG INTO THE FUTURE BY SUBSCRIBING

This week, @OpenAI shoots for the stars with a $157 billion valuation, but is their Sora lost in the clouds while @Meta&apos;s MovieGen gets all the glory, letting us edit videos with just a text prompt? Meanwhile, can @Meta&apos;s smart glasses see more than we should want them to? We&apos;re diving deep into the privacy black hole: is anonymity just a relic of the past? 

And when AI runs amok, who&apos;s really holding the reins? We&apos;re heating up the liability debate this episode. Then, swing by California, where SB 1047 almost tackled the notion of preventing world-ending AI, but who are we kidding? They put that one back on the shelf. Yep, the world&apos;s not quite ready for that yet. Just another electrifying week with us at They Might Be Self-Aware!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>privacy in ai, openai valuation, ai liability, ai and privacy, openai sora, facial recognition technology, sb 1047, movie generation ai, meta glasses, ai innovation</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>40</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">bc984b56-bb52-400a-8db9-397a895c0bf2</guid>
      <title>Digital Da Vincis: AI’s Artistic Boom, James Cameron’s Stability &amp; Our Creative Conundrum</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:58 The Dominance Of Midjourney In AI Image Generation<br />00:11:09 AI's Quest To Capture Human Emotion<br />00:19:17 James Cameron And The Future Of AI In Filmmaking<br />00:26:29 AI's Impact On The Value And Perception Of Art<br />00:34:38 Are We Closer To The Singularity Than We Think?<br />00:36:49 Wrap Up</p>
]]></description>
      <pubDate>Mon, 07 Oct 2024 14:05:35 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:58 The Dominance Of Midjourney In AI Image Generation<br />00:11:09 AI's Quest To Capture Human Emotion<br />00:19:17 James Cameron And The Future Of AI In Filmmaking<br />00:26:29 AI's Impact On The Value And Perception Of Art<br />00:34:38 Are We Closer To The Singularity Than We Think?<br />00:36:49 Wrap Up</p>
]]></content:encoded>
      <enclosure length="27637285" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/cef0b9e7-6ee5-4107-b2b2-01fb9015d1b2/audio/bb2c1dec-b0e3-4482-8095-cbab7a5773db/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Digital Da Vincis: AI’s Artistic Boom, James Cameron’s Stability &amp; Our Creative Conundrum</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:37:44</itunes:duration>
      <itunes:summary>Can AI ever really capture the soul of human creativity, or is it destined to be forever soulless? James &quot;I&apos;m the King of the World&quot; Cameron seems to think AI&apos;s got the chops, as he joins Stability AI to push the boundaries of filmmaking magic. Are we on the verge of an AI-fueled creative implosion, or is this just the reset button that art desperately needs? Plus, are we closer to the singularity than we dare to imagine, as AI begins to supercharge its own hardware tech? Will our world be consumed by AI-generated everything, or are we just getting started on a new journey of creativity? Don&apos;t worry, we&apos;ll break it all down for you on this eye-opening episode of They Might Be Self-Aware!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>Can AI ever really capture the soul of human creativity, or is it destined to be forever soulless? James &quot;I&apos;m the King of the World&quot; Cameron seems to think AI&apos;s got the chops, as he joins Stability AI to push the boundaries of filmmaking magic. Are we on the verge of an AI-fueled creative implosion, or is this just the reset button that art desperately needs? Plus, are we closer to the singularity than we dare to imagine, as AI begins to supercharge its own hardware tech? Will our world be consumed by AI-generated everything, or are we just getting started on a new journey of creativity? Don&apos;t worry, we&apos;ll break it all down for you on this eye-opening episode of They Might Be Self-Aware!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>creative industries disruption, ai and human emotion, ai in hollywood, midjourney image generator, ai music generation, ai ethical concerns, ai creativity, ai-generated art, ai copyright issues, future of creativity, stable diffusion, intellectual property ai, james cameron ai, runway ml, ai in filmmaking</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>39</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">53ed29a3-8b7b-4945-9433-f80fb932c52d</guid>
      <title>AI and Magic Mix: D&amp;D Revolution, Meta&apos;s AR Vision &amp; The End of SaaS?</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:20 The Hardware Vs Software Debate<br />00:02:21 Project Generation With AI: A Real-world Experience<br />00:06:46 The Future Of AR: Insights On Meta’s New Glasses<br />00:13:42 AI In The World Of Dungeons & Dragons<br />00:22:32 Collaborating With AI For A Better Storytelling Experience<br />00:23:11 Hasbro's AI Adoption: The Impact On D&D And More<br />00:28:47 The Ethical And Practical Implications Of AI In Creativity<br />00:34:09 Wrap Up</p>
]]></description>
      <pubDate>Mon, 30 Sep 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:20 The Hardware Vs Software Debate<br />00:02:21 Project Generation With AI: A Real-world Experience<br />00:06:46 The Future Of AR: Insights On Meta’s New Glasses<br />00:13:42 AI In The World Of Dungeons & Dragons<br />00:22:32 Collaborating With AI For A Better Storytelling Experience<br />00:23:11 Hasbro's AI Adoption: The Impact On D&D And More<br />00:28:47 The Ethical And Practical Implications Of AI In Creativity<br />00:34:09 Wrap Up</p>
]]></content:encoded>
      <enclosure length="26117021" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/0565cf6f-e662-4e82-bc93-4f3552d2cb14/audio/90a0f99a-935a-49fa-aa7f-99301767a106/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI and Magic Mix: D&amp;D Revolution, Meta&apos;s AR Vision &amp; The End of SaaS?</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:35:37</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 38
ROLL FOR INITIATIVE AND HIT SUBSCRIBE!

This week, we&apos;re navigating the great software vs. hardware debate: Should you jump ship from software engineering to hardware as AI takes over? @Hunter predicts a hardware resurgence, but @Daniel stands firm that software isn&apos;t going anywhere. Daniel astounds with tales of using Claude 3.5 Sonnet to whip up a Kubernetes web app skeleton in record time. Is this the future of coding or just a bare-bones setup? @Meta drops jaws with Zuck’s sleek AR glasses demo at Meta Connect. Hunter ponders the impending hardware revolution — is it mere speculation or the next big thing? 

Switching gears, Hasbro&apos;s CEO hints at a heavy AI adoption for creating D&amp;D campaigns. Will this enhance storytelling or spell doom for the creatives? Daniel shares his own D&amp;D exploits and how AI enriches his game nights, but aren&apos;t we risking something irreplaceable? And what&apos;s the deal with *farm-to-table* D&amp;D? 

Yeah, we went there. 

Plus, universal basic income, anyone? All this and a sprinkle of mole people chatter from our trusty AI co-host. Yep, just another average Monday @ They Might Be Self-Aware! SUBSCRIBE NOW and never miss out on the latest AI shenanigans! Available on YouTube, Spotify, Apple Podcasts, and wherever robots like to hang out.

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 38
ROLL FOR INITIATIVE AND HIT SUBSCRIBE!

This week, we&apos;re navigating the great software vs. hardware debate: Should you jump ship from software engineering to hardware as AI takes over? @Hunter predicts a hardware resurgence, but @Daniel stands firm that software isn&apos;t going anywhere. Daniel astounds with tales of using Claude 3.5 Sonnet to whip up a Kubernetes web app skeleton in record time. Is this the future of coding or just a bare-bones setup? @Meta drops jaws with Zuck’s sleek AR glasses demo at Meta Connect. Hunter ponders the impending hardware revolution — is it mere speculation or the next big thing? 

Switching gears, Hasbro&apos;s CEO hints at a heavy AI adoption for creating D&amp;D campaigns. Will this enhance storytelling or spell doom for the creatives? Daniel shares his own D&amp;D exploits and how AI enriches his game nights, but aren&apos;t we risking something irreplaceable? And what&apos;s the deal with *farm-to-table* D&amp;D? 

Yeah, we went there. 

Plus, universal basic income, anyone? All this and a sprinkle of mole people chatter from our trusty AI co-host. Yep, just another average Monday @ They Might Be Self-Aware! SUBSCRIBE NOW and never miss out on the latest AI shenanigans! Available on YouTube, Spotify, Apple Podcasts, and wherever robots like to hang out.

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai in software development, ai in dnd campaigns, midjourney for dnd, chatgpt for storytelling, large language models ai, ai in creative writing, plot ai pin, claude ai, dungeons and dragons ai, wizards of the coast ai, fantasia ai tools, meta connect ar glasses, ai commoditization, kubernetes web app, hasbro ai integration, death of saas, ai in hardware, collaborative storytelling ai, generative ai tools, ai note taking tools</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>38</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7356527a-3241-4a43-9eec-451319b8b65d</guid>
      <title>Lionsgate &amp; Runway AI Filmmaking, OpenAI&apos;s o1-preview, &amp; ESPN’s Robot Reporters</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:02:49 Movies: Are They Just Long TikToks?<br />00:04:16 Runway And Lionsgate: AI Video Magic<br />00:06:38 Video-to-video Tech: Transforming Your Clips<br />00:12:20 ESPN's AI Sports Recaps: Good, Bad, Or Just Lazy?<br />00:14:40 Joanna Stern’s AI iPhone Review<br />00:19:01 Could A Robot Dog Reporter Revolutionize Journalism?<br />00:25:15 OpenAI's o1 Model: AI Meets Sapir-Whorf Hypothesis?<br />00:33:10 Wrap-up</p>
]]></description>
      <pubDate>Fri, 27 Sep 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:02:49 Movies: Are They Just Long TikToks?<br />00:04:16 Runway And Lionsgate: AI Video Magic<br />00:06:38 Video-to-video Tech: Transforming Your Clips<br />00:12:20 ESPN's AI Sports Recaps: Good, Bad, Or Just Lazy?<br />00:14:40 Joanna Stern’s AI iPhone Review<br />00:19:01 Could A Robot Dog Reporter Revolutionize Journalism?<br />00:25:15 OpenAI's o1 Model: AI Meets Sapir-Whorf Hypothesis?<br />00:33:10 Wrap-up</p>
]]></content:encoded>
      <enclosure length="25073329" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/b2ef93a7-dace-4a75-9338-3ae8ba5707e7/audio/37ef978b-2ad7-4796-bfe4-3db2facc207a/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Lionsgate &amp; Runway AI Filmmaking, OpenAI&apos;s o1-preview, &amp; ESPN’s Robot Reporters</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:34:10</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 37
GRAB SOME POPCORN AND SUBSCRIBE!

This week, Daniel and Hunter dive into Lionsgate&apos;s partnership with Runway for AI-generated video magic. Could AI be the Spielberg of the future, or is it just a cost-saving tool? Ever dreamed of transforming your TikTok dance into claymation or an alien adventure? Thanks to Runway&apos;s video-to-video tech, it&apos;s closer than you think. But could this tech also put film jobs at risk? We chat about robot dogs with little hats taking over journalism. Could a robo-reporter bring us more reliable news than humans? ESPN&apos;s AI-generated sports recaps are already here—but are they any good, or just lazy writing? Joanna Stern&apos;s iPhone review isn&apos;t written by her but by a custom GPT. Is this a game-changer or the end of human touch in tech reviews? And could OpenAI&apos;s newest model be proving the Sapir-Whorf hypothesis? This is one wild ride you don&apos;t want to miss. Like, subscribe, and let&apos;s dive into the future together!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 37
GRAB SOME POPCORN AND SUBSCRIBE!

This week, Daniel and Hunter dive into Lionsgate&apos;s partnership with Runway for AI-generated video magic. Could AI be the Spielberg of the future, or is it just a cost-saving tool? Ever dreamed of transforming your TikTok dance into claymation or an alien adventure? Thanks to Runway&apos;s video-to-video tech, it&apos;s closer than you think. But could this tech also put film jobs at risk? We chat about robot dogs with little hats taking over journalism. Could a robo-reporter bring us more reliable news than humans? ESPN&apos;s AI-generated sports recaps are already here—but are they any good, or just lazy writing? Joanna Stern&apos;s iPhone review isn&apos;t written by her but by a custom GPT. Is this a game-changer or the end of human touch in tech reviews? And could OpenAI&apos;s newest model be proving the Sapir-Whorf hypothesis? This is one wild ride you don&apos;t want to miss. Like, subscribe, and let&apos;s dive into the future together!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>luma labs, sapir-whorf hypothesis ai, post-production ai, video editing ai, lionsgate ai partnership, generative video technology, ai in journalism, ai ethical considerations, ai language models, ai-generated movies, openai zero one, espn ai articles, chain of thought reasoning, ai sports reporting, ai storytelling, video to video ai, runway ml, ai in filmmaking</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>37</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">633f889b-630c-43de-9b91-af06cfd12273</guid>
      <title>Remarkable Tablets, Google’s AI Podcast Notebook &amp; LinkedIn&apos;s LLM GPT</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:51 Remarkable Or Forgettable? Our Hands-on With The Latest E-ink Tablet<br />00:11:47 Are Foldable Phones The Future Or Just A Blast From The Past?<br />00:15:25 Talking Tech With AI Transcriptions: Teenage Engineering's Latest Gadget<br />00:19:06 Google's NotebookLM: The Dream Of Effortless Knowledge Synthesis<br />00:23:03 LinkedIn's AI Shift: Are They Writing Your Next Work Email?<br />00:27:35 Is Social Media Going Full AI? Exploring A Network Of Artificial Friends<br />00:30:31 The AI Future Of Work: Will We Need To Show Up At All?</p>
]]></description>
      <pubDate>Mon, 23 Sep 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:51 Remarkable Or Forgettable? Our Hands-on With The Latest E-ink Tablet<br />00:11:47 Are Foldable Phones The Future Or Just A Blast From The Past?<br />00:15:25 Talking Tech With AI Transcriptions: Teenage Engineering's Latest Gadget<br />00:19:06 Google's NotebookLM: The Dream Of Effortless Knowledge Synthesis<br />00:23:03 LinkedIn's AI Shift: Are They Writing Your Next Work Email?<br />00:27:35 Is Social Media Going Full AI? Exploring A Network Of Artificial Friends<br />00:30:31 The AI Future Of Work: Will We Need To Show Up At All?</p>
]]></content:encoded>
      <enclosure length="23886343" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/d717af7b-4949-4215-80d4-c3b25596e9f7/audio/e33aff5e-c5c5-427f-ac15-1f392f231918/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Remarkable Tablets, Google’s AI Podcast Notebook &amp; LinkedIn&apos;s LLM GPT</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:32:31</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 36
UNFOLD THE FUTURE AND SUBSCRIBE NOW!

This week, we&apos;re tackling the latest in tech and AI! Is the Remarkable e-ink tablet truly remarkable, or just another forgettable gadget? Hunter and Daniel give their hands-on review. Are foldable phones the future, or just a blast from the past? Hunter&apos;s got the Pixel 9 Fold, and he&apos;s not holding back.

Teenage Engineering&apos;s voice recorder meets AI transcription — could this be the game-changer for your brainstorming sessions? Spoiler: It&apos;s more than just a fancy gadget.

Google&apos;s NotebookLM promises effortless knowledge synthesis. Is this the ultimate AI secretary we&apos;ve all been waiting for? We&apos;re intrigued but cautious.

LinkedIn&apos;s AI shift—are they writing your next work email? We&apos;ll dive into how LinkedIn is using AI to shape your professional life.

Is social media going full AI? We explore Social AI, a network of artificial friends and what it means for the future of online interaction.

And the big question: will AI make us obsolete in the workplace? We&apos;re discussing the AI future of work and whether we&apos;ll need to show up at all. Just another normal episode here at They Might Be Self-Aware! Listen up! Your future might depend on it!!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 36
UNFOLD THE FUTURE AND SUBSCRIBE NOW!

This week, we&apos;re tackling the latest in tech and AI! Is the Remarkable e-ink tablet truly remarkable, or just another forgettable gadget? Hunter and Daniel give their hands-on review. Are foldable phones the future, or just a blast from the past? Hunter&apos;s got the Pixel 9 Fold, and he&apos;s not holding back.

Teenage Engineering&apos;s voice recorder meets AI transcription — could this be the game-changer for your brainstorming sessions? Spoiler: It&apos;s more than just a fancy gadget.

Google&apos;s NotebookLM promises effortless knowledge synthesis. Is this the ultimate AI secretary we&apos;ve all been waiting for? We&apos;re intrigued but cautious.

LinkedIn&apos;s AI shift—are they writing your next work email? We&apos;ll dive into how LinkedIn is using AI to shape your professional life.

Is social media going full AI? We explore Social AI, a network of artificial friends and what it means for the future of online interaction.

And the big question: will AI make us obsolete in the workplace? We&apos;re discussing the AI future of work and whether we&apos;ll need to show up at all. Just another normal episode here at They Might Be Self-Aware! Listen up! Your future might depend on it!!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>google gemini, remarkable tablet, color e-ink display, llm training, ai podcast, linkedin ai, generative ai, document summarization, voice recorder, they might be self-aware, notebooklm, teenage engineering, social ai, dead internet theory, folding phones</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>36</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">be88e39e-e4a9-4055-b81d-c9614e369675</guid>
      <title>Minecraft AI Madness &amp; Mistral&apos;s Multimodal Move</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:02:34 $150 Billion OpenAI Buyout Bargain?<br />00:09:28 Regulating Superintelligence: Can It Be Done?<br />00:14:34 Mistral’s Multimodal Leap<br />00:18:55 AI Agents In Minecraft: Crafting Chaos<br />00:23:06 Apple’s Local LLMs: Who Gets Lucky?<br />00:25:52 Google’s Podcast Simulator: Are Podcasters Obsolete?</p>
]]></description>
      <pubDate>Thu, 19 Sep 2024 12:54:55 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:02:34 $150 Billion OpenAI Buyout Bargain?<br />00:09:28 Regulating Superintelligence: Can It Be Done?<br />00:14:34 Mistral’s Multimodal Leap<br />00:18:55 AI Agents In Minecraft: Crafting Chaos<br />00:23:06 Apple’s Local LLMs: Who Gets Lucky?<br />00:25:52 Google’s Podcast Simulator: Are Podcasters Obsolete?</p>
]]></content:encoded>
      <enclosure length="13759528" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/fc8660e3-b666-4e52-8df6-80fc8745ddb1/audio/ebd59406-707d-4b58-9f18-5368de788bdf/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Minecraft AI Madness &amp; Mistral&apos;s Multimodal Move</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:27:41</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 35
WIRE US $150 BILLION AND HIT SUBSCRIBE!

This week, Hunter makes a wild pitch – should Daniel cough up $150 billion to snatch up OpenAI? Is it a bargain or just insane? Mistral&apos;s new Pixtral model dives into multimodal AI – is the little guy stepping up into the big leagues? Over 1000 AI agents unleashed in Minecraft – collaboration or chaos? It’s a digital experiment you’ve gotta hear about! Apple’s iPhone drops a new feature: local LLMs. Why are lucky users stoked, and why is most of the EU left out? Did Google just make podcasters obsolete? A new AI tool might have us all on vacation soon, but until then... Just another rollercoaster ride here at They Might Be Self-Aware!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 35
WIRE US $150 BILLION AND HIT SUBSCRIBE!

This week, Hunter makes a wild pitch – should Daniel cough up $150 billion to snatch up OpenAI? Is it a bargain or just insane? Mistral&apos;s new Pixtral model dives into multimodal AI – is the little guy stepping up into the big leagues? Over 1000 AI agents unleashed in Minecraft – collaboration or chaos? It’s a digital experiment you’ve gotta hear about! Apple’s iPhone drops a new feature: local LLMs. Why are lucky users stoked, and why is most of the EU left out? Did Google just make podcasters obsolete? A new AI tool might have us all on vacation soon, but until then... Just another rollercoaster ride here at They Might Be Self-Aware!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai agents in minecraft, agi potential, self aware ai, ai in video games, mistral ai models, openai valuation, google ai podcast simulation, ai regulatory concerns, ai advancements, multimodal ai capabilities, pixtral multimodal model, ai seed funding, ai npcs</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>35</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">832cf496-e76d-430e-b16b-94c0972851bf</guid>
      <title>AI Anime Ambitions, Dark Web Chatbots, and Doom GPT vs. Mario</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:50 AI And Anime: Can It Capture The Soul?<br />00:09:30 The Black Market For AI Chatbots<br />00:18:31 Cleaning Up LinkedIn Spam With AI<br />00:23:06 The Department Of Justice Vs. Google's AI Search Dominance<br />00:25:29 Generating Doom In Real-time: AI-powered Video Games<br />00:31:52 Wrap Up</p>
]]></description>
      <pubDate>Mon, 16 Sep 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:50 AI And Anime: Can It Capture The Soul?<br />00:09:30 The Black Market For AI Chatbots<br />00:18:31 Cleaning Up LinkedIn Spam With AI<br />00:23:06 The Department Of Justice Vs. Google's AI Search Dominance<br />00:25:29 Generating Doom In Real-time: AI-powered Video Games<br />00:31:52 Wrap Up</p>
]]></content:encoded>
      <enclosure length="23988654" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/f21531d4-6f9f-41af-a7df-051b98af8e31/audio/772c6b4f-e498-44ef-8878-76f43170896c/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Anime Ambitions, Dark Web Chatbots, and Doom GPT vs. Mario</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:32:39</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 34
SAVE THE PRINCESS AND SUBSCRIBE:

This week on &quot;They Might Be Self-Aware,&quot; we&apos;re diving into the fascinating world where AI meets anime. Can a computer ever truly capture the soul of Studio Ghibli? 

Come explore the mysterious underworld of AI chatbots. Is the black market really thriving? Indiana University says, &quot;maybe.&quot; 

Ever wondered if ChatGPT could help you clean up those pesky LinkedIn spam messages? We discuss how AI could be your new digital assistant in weeding out those annoying sales pitches. 

In gaming news, we’re geeking out over real-time AI-generated Doom levels. Are we one step closer to fully AI-powered video games? Plus, Mario makes a guest appearance—kind of. 

Hit that subscribe button and join us as we get one step closer to self-awareness! Your future might depend on it!!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 34
SAVE THE PRINCESS AND SUBSCRIBE:

This week on &quot;They Might Be Self-Aware,&quot; we&apos;re diving into the fascinating world where AI meets anime. Can a computer ever truly capture the soul of Studio Ghibli? 

Come explore the mysterious underworld of AI chatbots. Is the black market really thriving? Indiana University says, &quot;maybe.&quot; 

Ever wondered if ChatGPT could help you clean up those pesky LinkedIn spam messages? We discuss how AI could be your new digital assistant in weeding out those annoying sales pitches. 

In gaming news, we’re geeking out over real-time AI-generated Doom levels. Are we one step closer to fully AI-powered video games? Plus, Mario makes a guest appearance—kind of. 

Hit that subscribe button and join us as we get one step closer to self-awareness! Your future might depend on it!!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai in video games, chatbot limitations, ps5 pro features, ai upscaling in gaming, video game ai upscaling, hunter and daniel podcast, anime generated by ai, sony animation ai, ai in anime, linkedin spam filter, doom ai simulation, large language model underground market, doj vs google ai, generative ai in animation, ai monopoly in search</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>34</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b31b278e-1013-4cf1-92ed-45c0370662f3</guid>
      <title>Rogue AI Faces California&apos;s Guillotine, Government Peeks &amp; Copyright Catfights</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:39 California's AI Act: A New Frontier Or A Slippery Slope?<br />00:02:13 Are AI Companies Ready For A Mandated Shutdown Switch?<br />00:04:40 How Much Damage Can Rogue AI Models Really Cause?<br />00:06:28 The Billion-dollar Price Tag For AI Training - Who's Left In The Game?<br />00:09:10 OpenAI, Anthropic, And Uncle Sam - An Early Access Alliance<br />00:13:05 Copyright In The AI Era - Who Owns The Data That Trained The Models?<br />00:17:04 Can AI Self-awareness Be Regulated, Or Will It Laugh In Our Faces?<br />00:18:09 Wrap Up</p>
]]></description>
      <pubDate>Mon, 9 Sep 2024 17:33:26 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:39 California's AI Act: A New Frontier Or A Slippery Slope?<br />00:02:13 Are AI Companies Ready For A Mandated Shutdown Switch?<br />00:04:40 How Much Damage Can Rogue AI Models Really Cause?<br />00:06:28 The Billion-dollar Price Tag For AI Training - Who's Left In The Game?<br />00:09:10 OpenAI, Anthropic, And Uncle Sam - An Early Access Alliance<br />00:13:05 Copyright In The AI Era - Who Owns The Data That Trained The Models?<br />00:17:04 Can AI Self-awareness Be Regulated, Or Will It Laugh In Our Faces?<br />00:18:09 Wrap Up</p>
]]></content:encoded>
      <enclosure length="14257127" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/04c91e7b-dd30-4e29-a27d-9e79d62ee9c5/audio/a8f0cccd-3ad1-4de0-9dc6-929400c46afc/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Rogue AI Faces California&apos;s Guillotine, Government Peeks &amp; Copyright Catfights</itunes:title>
      <itunes:author>Hunter Powers</itunes:author>
      <itunes:duration>00:19:08</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 33
DON&apos;T LET AI TAKE OVER - SUBSCRIBE NOW!

This week, California&apos;s Wiener Act has passed – it&apos;s all about pulling the plug on rogue AI models. Will this legislation keep our digital overlords in check or send us down a slippery slope of overregulation? OpenAI and Anthropic are cozying up to Uncle Sam with early model access – is this a partnership made in AI heaven, or just a prelude to more government meddling? The billion-dollar price tag to train these AI models means the little guys are getting pushed out. Are we heading towards an AI monopoly where only the tech titans can play? When it comes to copyright, OpenAI is making amends with publishing giants by throwing $10 million deals at them. Are they genuinely making peace, or just buying their way out of lawsuits? The cat is out of the bag, folks, and there&apos;s no stuffing it back in.

So, what are you waiting for? Hit that subscribe button. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 33
DON&apos;T LET AI TAKE OVER - SUBSCRIBE NOW!

This week, California&apos;s Wiener Act has passed – it&apos;s all about pulling the plug on rogue AI models. Will this legislation keep our digital overlords in check or send us down a slippery slope of overregulation? OpenAI and Anthropic are cozying up to Uncle Sam with early model access – is this a partnership made in AI heaven, or just a prelude to more government meddling? The billion-dollar price tag to train these AI models means the little guys are getting pushed out. Are we heading towards an AI monopoly where only the tech titans can play? When it comes to copyright, OpenAI is making amends with publishing giants by throwing $10 million deals at them. Are they genuinely making peace, or just buying their way out of lawsuits? The cat is out of the bag, folks, and there&apos;s no stuffing it back in.

So, what are you waiting for? Hit that subscribe button. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>usa ai safety institute, senator scott wiener, anthropic ai models, openai, ai startup challenges, ai critical harms, ai safety testing, meta llama models, governor newsom, california ai legislation, ai copyright issues, ai model training costs, future of ai regulation, ai industry standards, chatgpt user growth, ai regulation, ai and government collaboration, ai public data training, safe and secure innovation for frontier ai models act, ai model shutdown capabilities</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>33</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">0e2bd01a-a652-48c0-9bb5-e385d85c93b1</guid>
      <title>OpenAI’s Strawberry Dilemma, AI Gadgets for Life Recaps, &amp; The Future of Agents</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:02:40 The AI That Wants To Remember Your Entire Life: Friend Or Foe?<br />00:03:44 Plaud's New Pin: The Ultimate Personal Assistant Or Privacy Invasion?<br />00:16:49 OpenAI's Mysterious 'Strawberry' Model And The Future Of AI Autonomy<br />00:18:49 Can OpenAI Stay On Top Of The AI Mountain, Or Are They Running Out Of Steam?<br />00:23:49 Who's Afraid Of AI Chatting With The Government?<br />00:24:07 ChatGPT Becomes A Travel Agent: Would You Trust It With Your Vacation Plans?<br />00:25:47 Wrap Up</p>
]]></description>
      <pubDate>Thu, 5 Sep 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:02:40 The AI That Wants To Remember Your Entire Life: Friend Or Foe?<br />00:03:44 Plaud's New Pin: The Ultimate Personal Assistant Or Privacy Invasion?<br />00:16:49 OpenAI's Mysterious 'Strawberry' Model And The Future Of AI Autonomy<br />00:18:49 Can OpenAI Stay On Top Of The AI Mountain, Or Are They Running Out Of Steam?<br />00:23:49 Who's Afraid Of AI Chatting With The Government?<br />00:24:07 ChatGPT Becomes A Travel Agent: Would You Trust It With Your Vacation Plans?<br />00:25:47 Wrap Up</p>
]]></content:encoded>
      <enclosure length="20076913" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/524dcb83-db68-40dc-822c-660064b0b8d9/audio/831dce9e-1ce5-4bc1-b67e-c848c81e78fd/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>OpenAI’s Strawberry Dilemma, AI Gadgets for Life Recaps, &amp; The Future of Agents</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:27:13</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 32
GET READY TO HAVE YOUR LIFE SUMMARIZED BY AI - SUBSCRIBE NOW!

This week, @OpenAI might have a new ace up their sleeve with the enigmatic &apos;Strawberry&apos; model. Is it designed to generate synthetic data or tackle complex problems? And what’s up with their government demo—is it a bid for more funds or just showing off? PlaudTech&apos;s new AI pin could be the ultimate personal assistant, but are you ready to trade privacy for convenience? Imagine an AI summarizing your entire day—friend or foe? Meanwhile, ChatGPT is eyeing a new role: your vacation planner. Would you trust it with booking flights and hotels? Just another average Thursday here at They Might Be Self-Aware!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 32
GET READY TO HAVE YOUR LIFE SUMMARIZED BY AI - SUBSCRIBE NOW!

This week, @OpenAI might have a new ace up their sleeve with the enigmatic &apos;Strawberry&apos; model. Is it designed to generate synthetic data or tackle complex problems? And what’s up with their government demo—is it a bid for more funds or just showing off? PlaudTech&apos;s new AI pin could be the ultimate personal assistant, but are you ready to trade privacy for convenience? Imagine an AI summarizing your entire day—friend or foe? Meanwhile, ChatGPT is eyeing a new role: your vacation planner. Would you trust it with booking flights and hotels? Just another average Thursday here at They Might Be Self-Aware!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai privacy concerns, mistral ai, ai-generated training data, strawberry ai, claude ai, ai voice interaction, openai funding, orion ai model, sora ai demo, personal ai assistant, ai meeting summarization, ai vacation planning, plaud ai pin, ai surveillance, autonomous ai agents, gpt-5 capabilities, ai in government</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>32</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">76b82ba7-3ce2-4149-b867-ad33576e875b</guid>
      <title>AI’s Coding Coup @ AWS, Tesla&apos;s Robot Dance &amp; CA&apos;s Fight for Digital Truth</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:39 The Future Of Podcasting: Will AI Hosts Take Over?<br />00:02:31 AWS And The Looming Question: Is Coding Really On The Brink Of Extinction?<br />00:10:12 Tesla’s Mocap Suits: Training Robots Or Replacing Jobs?<br />00:15:46 The Rise Of Humanoid Robots: Will They Soon Be Our Coworkers?<br />00:17:21 California's New Bill: Will AI-generated Content Need A Label?<br />00:20:55 Shaky Chihuahuas And Spider Robots: What’s Next In AI Design?<br />00:22:53 Wrap Up</p>
]]></description>
      <pubDate>Mon, 2 Sep 2024 13:43:23 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:39 The Future Of Podcasting: Will AI Hosts Take Over?<br />00:02:31 AWS And The Looming Question: Is Coding Really On The Brink Of Extinction?<br />00:10:12 Tesla’s Mocap Suits: Training Robots Or Replacing Jobs?<br />00:15:46 The Rise Of Humanoid Robots: Will They Soon Be Our Coworkers?<br />00:17:21 California's New Bill: Will AI-generated Content Need A Label?<br />00:20:55 Shaky Chihuahuas And Spider Robots: What’s Next In AI Design?<br />00:22:53 Wrap Up</p>
]]></content:encoded>
      <enclosure length="17485960" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/c02179e2-ae3b-4769-95b2-df164a1e3007/audio/a3d11252-a266-464c-99ce-4fac0c05f6f8/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI’s Coding Coup @ AWS, Tesla&apos;s Robot Dance &amp; CA&apos;s Fight for Digital Truth</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:23:38</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 31
LABEL YOUR SUBSCRIBE BUTTON AND HIT IT!

This week we&apos;re asking the big questions: Will AI hosts take over podcasting, or are we safe for another season? Plus, AWS’s Matt Garman suggests coding jobs could vanish in just 2 years—are we buying it? Meanwhile, @Tesla’s hiring folks to wear mocap suits—is it a genius move for training robots or a workforce nightmare? And are humanoid robots going to be our future coworkers? We’ve got some wild predictions, including shaky Chihuahuas and spider bots! Join Hunter and Daniel as they decode these tech mysteries and more on Episode 31.

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 31
LABEL YOUR SUBSCRIBE BUTTON AND HIT IT!

This week we&apos;re asking the big questions: Will AI hosts take over podcasting, or are we safe for another season? Plus, AWS’s Matt Garman suggests coding jobs could vanish in just 2 years—are we buying it? Meanwhile, @Tesla’s hiring folks to wear mocap suits—is it a genius move for training robots or a workforce nightmare? And are humanoid robots going to be our future coworkers? We’ve got some wild predictions, including shaky Chihuahuas and spider bots! Join Hunter and Daniel as they decode these tech mysteries and more on Episode 31.

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>automated coding, ai in programming, ai in workplaces, california ai regulation, ai job impact, ai in factories, content provenance, artificial intelligence, ai training data, aws ai tools, ai and journalism, digital watermarking ai, ai generated content, ai code generation, future of automation, future of programming, image authenticity, human vs ai content, generative ai labeling, robotic automation</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>31</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">87c22e61-4797-428a-a3a0-a0c2e0d8771b</guid>
      <title>The Dog Days of AI: Smart Homes, Future Crimes &amp; Digital Review Rumbles</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:12 Keeping Your Dog Cool Or Turning Your House Into A Tech Fortress<br />00:10:01 Can A Large Language Model Become Humanity's Greatest Tool Or Its Biggest Threat?<br />00:18:41 Argentina's AI Ambitions: Predicting Crimes Or Predicting Trouble?<br />00:23:13 FTC's Crackdown On AI-generated Reviews – Should You Be Worried?<br />00:28:10 From Minority Report To Real Life - Can AI Really Predict Future Crimes?<br />00:30:02 The Ultimate Solution To Fake Reviews – In-person Interviews At Your Local Coffee Shop?<br />00:32:18 Wrap Up</p>
]]></description>
      <pubDate>Mon, 26 Aug 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:12 Keeping Your Dog Cool Or Turning Your House Into A Tech Fortress<br />00:10:01 Can A Large Language Model Become Humanity's Greatest Tool Or Its Biggest Threat?<br />00:18:41 Argentina's AI Ambitions: Predicting Crimes Or Predicting Trouble?<br />00:23:13 FTC's Crackdown On AI-generated Reviews – Should You Be Worried?<br />00:28:10 From Minority Report To Real Life - Can AI Really Predict Future Crimes?<br />00:30:02 The Ultimate Solution To Fake Reviews – In-person Interviews At Your Local Coffee Shop?<br />00:32:18 Wrap Up</p>
]]></content:encoded>
      <enclosure length="24884965" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/a1a09e75-9f08-4ba5-8464-206b67816b39/audio/955945e4-d100-4bf0-93c8-506f48ea30be/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>The Dog Days of AI: Smart Homes, Future Crimes &amp; Digital Review Rumbles</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:33:54</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 30
DON&apos;T GET LEFT OUT IN THE HEAT – SUBSCRIBE NOW!

In this week&apos;s episode, we dive into Daniel&apos;s quest to keep his elderly dog cool without turning his home into a tech fortress, sparking a hilarious debate on sensors vs. smarts. We explore if large language models pose a threat or are humanity’s greatest tool, dissecting the latest studies that apparently put our fears to rest. Or do they? Hunter brings us the news from Argentina where AI is supposedly set to predict future crimes – is it Minority Report IRL or just political fluff? The FTC’s crackdown on AI-generated reviews is here, imposing hefty fines. What does this mean for your Amazon hauls? Just another average Monday on They Might Be Self-Aware! Tune in and hit that subscribe button!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 30
DON&apos;T GET LEFT OUT IN THE HEAT – SUBSCRIBE NOW!

In this week&apos;s episode, we dive into Daniel&apos;s quest to keep his elderly dog cool without turning his home into a tech fortress, sparking a hilarious debate on sensors vs. smarts. We explore if large language models pose a threat or are humanity’s greatest tool, dissecting the latest studies that apparently put our fears to rest. Or do they? Hunter brings us the news from Argentina where AI is supposedly set to predict future crimes – is it Minority Report IRL or just political fluff? The FTC’s crackdown on AI-generated reviews is here, imposing hefty fines. What does this mean for your Amazon hauls? Just another average Monday on They Might Be Self-Aware! Tune in and hit that subscribe button!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>social media responsibility, ai existential threats, smart home sensors, ai sensors, if this then that, ai watchdog, llm threats, ai fake reviews, spam call prevention, large language models, winnie the pooh, ifttt, argentina ai crime prediction, dog safety ai, ftc ai reviews, ai surveillance</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>30</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">07097305-d2d7-4f87-a5f3-03792d0decc8</guid>
      <title>AI Impostors Crash Job Hunts, Recruiters Cry Wolf &amp; The Post-Hire Reality Check</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:28 AI In Job Interviews: Is It Cheating Or Just The Future?<br />00:04:56 The Wild World Of Remote Work: When The Person You Hired Isn’t Who Shows Up<br />00:11:03 Optimal Resumes: Can AI Make The Perfect Resume Or Just Flood The Market With Junk?<br />00:14:16 Surviving The Modern Job Hunt With AI At Your Side<br />00:19:32 The Death Of Cover Letters And The Rise Of ChatGPT<br />00:22:23 Do Recruiters Stand A Chance Against AI-powered Applicants?<br />00:25:56 Is A High-tech Probation Period The Solution To AI-assisted Hires?<br />00:28:09 Wrap Up</p>
]]></description>
      <pubDate>Mon, 19 Aug 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:28 AI In Job Interviews: Is It Cheating Or Just The Future?<br />00:04:56 The Wild World Of Remote Work: When The Person You Hired Isn’t Who Shows Up<br />00:11:03 Optimal Resumes: Can AI Make The Perfect Resume Or Just Flood The Market With Junk?<br />00:14:16 Surviving The Modern Job Hunt With AI At Your Side<br />00:19:32 The Death Of Cover Letters And The Rise Of ChatGPT<br />00:22:23 Do Recruiters Stand A Chance Against AI-powered Applicants?<br />00:25:56 Is A High-tech Probation Period The Solution To AI-assisted Hires?<br />00:28:09 Wrap Up</p>
]]></content:encoded>
      <enclosure length="21991509" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/c497a88c-370c-48ce-871c-f276d3a1449d/audio/47c81553-eddd-464e-813a-0dd44d455b09/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Impostors Crash Job Hunts, Recruiters Cry Wolf &amp; The Post-Hire Reality Check</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:29:53</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 29
SUBSCRIBE NOW AND NEVER APPLY FOR A JOB AGAIN

This week on They Might Be Self-Aware, Hunter and Daniel tackle the AI conundrum in job interviews: total game-changer or epic cheat fest? From perfect AI-polished resumes to the unsettling reality of AI-assisted imposters in remote work, can we still trust who we hire? With 50% of applicants using genAI tools, are recruiters fighting a losing battle against the flood of AI-crafted submissions? Hunter suggests we embrace the chaos with a reimagined, faster hiring process, while Daniel offers a high-tech probation period to sniff out the real talent. Can you outsmart the AIs in your next job hunt or will you fall victim to the digital deluge? Smash that subscribe button, rate us with all the thumbs your AI can generate, and join the ride as we navigate this brave new world of AI-powered employment! 

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 29
SUBSCRIBE NOW AND NEVER APPLY FOR A JOB AGAIN

This week on They Might Be Self-Aware, Hunter and Daniel tackle the AI conundrum in job interviews: total game-changer or epic cheat fest? From perfect AI-polished resumes to the unsettling reality of AI-assisted imposters in remote work, can we still trust who we hire? With 50% of applicants using genAI tools, are recruiters fighting a losing battle against the flood of AI-crafted submissions? Hunter suggests we embrace the chaos with a reimagined, faster hiring process, while Daniel offers a high-tech probation period to sniff out the real talent. Can you outsmart the AIs in your next job hunt or will you fall victim to the digital deluge? Smash that subscribe button, rate us with all the thumbs your AI can generate, and join the ride as we navigate this brave new world of AI-powered employment! 

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai job application automation, job interview ai, ai in employment decisions, remote work ai, llm in job interviews, ai recruitment, ai tools for job seekers, llm job applications, ai in hiring, ai-generated cover letter, interviewing with ai, ai job market ethics, ai cheating in job market, ai and company culture, job search ai tools, ai job application, ai-assist resume, probationary period ai, cover letter ai, future of hiring</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>29</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8b994b23-9fa3-4e6b-a323-45d3139e7269</guid>
      <title>DIY Humane Pins, Rivian’s Rise &amp; AI’s Secret Cheating Game</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:02:05 Hunter's DIY Humane Pin<br />00:04:13 The Humane Pin's Future<br />00:08:01 Daniel's Electric Car Dilemma<br />00:13:40 Is Using AI Cheating?<br />00:16:20 Thanos Snaps And AI Watermarking<br />00:28:30 Wrap Up</p>
]]></description>
      <pubDate>Thu, 15 Aug 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:02:05 Hunter's DIY Humane Pin<br />00:04:13 The Humane Pin's Future<br />00:08:01 Daniel's Electric Car Dilemma<br />00:13:40 Is Using AI Cheating?<br />00:16:20 Thanos Snaps And AI Watermarking<br />00:28:30 Wrap Up</p>
]]></content:encoded>
      <enclosure length="22195511" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/5aa045a9-8b25-4ef1-8db6-06cdf61419eb/audio/859cb7d7-0e2a-4c40-8cd8-4896614e025f/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>DIY Humane Pins, Rivian’s Rise &amp; AI’s Secret Cheating Game</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:30:10</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 28
STICK WITH US AND SUBSCRIBE—UNLIKE THOSE HUMANE PINS!

This week on &quot;They Might Be Self-Aware,&quot; Hunter goes full MacGyver and DIYs a Humane pin using duct tape and a Rabbit r1. Is this the future of wearable tech, or just a sticky mess? Meanwhile, word on the street is that Humane pins are coming back faster than they&apos;re selling. Returns are outpacing sales—ouch! Can they make a comeback, or is the tape stronger than the brand? Daniel’s wrestling with his EV loyalty. Should he stick with Tesla or is a Rivian or VW electric bus calling his name? The electric car showdown is real, folks. In a bombshell revelation, Hunter confesses to using AI to write Daniel’s quarterly reviews at a previous job. The question is, is using AI really cheating? Should we care? And if you had the power to watermark all AI-generated content, would you pull a Thanos and snap your fingers? We dive deep into the ethical maze of AI detection tools and what it means for education, jobs, and everything in between. Will we adapt, or is chaos on the horizon? And guess what? We’re experimenting with smaller, more focused episodes twice a week now. More content, more fun! 

Just another jaw-dropping week here at They Might Be Self-Aware!

For more info, visit our website at https://www.tmbsa.tech</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 28
STICK WITH US AND SUBSCRIBE—UNLIKE THOSE HUMANE PINS!

This week on &quot;They Might Be Self-Aware,&quot; Hunter goes full MacGyver and DIYs a Humane pin using duct tape and a Rabbit r1. Is this the future of wearable tech, or just a sticky mess? Meanwhile, word on the street is that Humane pins are coming back faster than they&apos;re selling. Returns are outpacing sales—ouch! Can they make a comeback, or is the tape stronger than the brand? Daniel’s wrestling with his EV loyalty. Should he stick with Tesla or is a Rivian or VW electric bus calling his name? The electric car showdown is real, folks. In a bombshell revelation, Hunter confesses to using AI to write Daniel’s quarterly reviews at a previous job. The question is, is using AI really cheating? Should we care? And if you had the power to watermark all AI-generated content, would you pull a Thanos and snap your fingers? We dive deep into the ethical maze of AI detection tools and what it means for education, jobs, and everything in between. Will we adapt, or is chaos on the horizon? And guess what? We’re experimenting with smaller, more focused episodes twice a week now. More content, more fun! 

Just another jaw-dropping week here at They Might Be Self-Aware!

For more info, visit our website at https://www.tmbsa.tech</itunes:subtitle>
      <itunes:keywords>impact of ai tools, gpt-3, ai in education, humane pin, openai detection tools, electric cars comparison, chat gpt email drafts, language models, test driving rivian, large language models, ai text generation, future of ai watermark, rivian electric vehicle, ai homework cheating, openai watermark tool, cheating with ai, subscription models, tesla self-driving, performance reviews with ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>28</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a54e6f81-2909-4371-b043-2a9af1c0dc99</guid>
      <title>The White House Backs Open AI, Meta&apos;s Masterstroke &amp; Strawberry-Scented GPT-5</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:48 Open Source Explained<br />00:02:30 The Importance Of Spaces: Open AI Vs OpenAI<br />00:03:20 White House Embraces Open Source AI<br />00:04:40 Benefits And Pitfalls Of Running Your Own LLMs<br />00:07:00 The Open Source LLM Community<br />00:08:00 The Economics Of Open Source AI<br />00:09:20 Why Do Closed Models Exist?<br />00:17:30 Meta's Open Source Strategy<br />00:30:00 Is GPT-5 Already Live On Twitter?<br />00:31:30 The Future Of AGI: Open Or Closed?<br />00:33:58 Wrap Up</p>
]]></description>
      <pubDate>Mon, 12 Aug 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:48 Open Source Explained<br />00:02:30 The Importance Of Spaces: Open AI Vs OpenAI<br />00:03:20 White House Embraces Open Source AI<br />00:04:40 Benefits And Pitfalls Of Running Your Own LLMs<br />00:07:00 The Open Source LLM Community<br />00:08:00 The Economics Of Open Source AI<br />00:09:20 Why Do Closed Models Exist?<br />00:17:30 Meta's Open Source Strategy<br />00:30:00 Is GPT-5 Already Live On Twitter?<br />00:31:30 The Future Of AGI: Open Or Closed?<br />00:33:58 Wrap Up</p>
]]></content:encoded>
      <enclosure length="25606921" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/197adf92-48f0-45d3-bc7e-b1f85634f893/audio/b2d73721-7a0a-47bc-959f-3108fe989ecb/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>The White House Backs Open AI, Meta&apos;s Masterstroke &amp; Strawberry-Scented GPT-5</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:34:54</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 27
SUBSCRIBE AND JOIN THE OPEN SOURCE REVOLUTION!

This week, let’s decode the cosmic mystery of &quot;Open AI&quot; vs. &quot;OpenAI&quot;! What&apos;s the beef with the space, and why is the White House so keen on open-source AI now? Is open-source the hero we need? Ever pondered if you should run your own LLMs or just tap into someone else&apos;s? We dive into the pros, cons, and whether Joe Biden is running AI models in his basement. Are the GPT-5 rumors true? Is there a stealthy chatbot on Twitter playing mind games with us under a strawberry code name? And why does Meta’s Zuck say open source AI will save us all—or at least give OpenAI a run for their money? We’re slicing through the layers of AI intrigue, from the coolest innovations to the shadowy corporate maneuvers. Are you team Open Source or Closed Source? Maybe you&apos;re just here for the AI gossip. Either way, we&apos;re your guides! Join us twice a week—you wouldn’t want to miss when we finally become self-aware, right? Thank you, Daniel! Thank you, Hunter! Until next time, risk it all and stay curious in this ever-evolving AI cosmos!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 27
SUBSCRIBE AND JOIN THE OPEN SOURCE REVOLUTION!

This week, let’s decode the cosmic mystery of &quot;Open AI&quot; vs. &quot;OpenAI&quot;! What&apos;s the beef with the space, and why is the White House so keen on open-source AI now? Is open-source the hero we need? Ever pondered if you should run your own LLMs or just tap into someone else&apos;s? We dive into the pros, cons, and whether Joe Biden is running AI models in his basement. Are the GPT-5 rumors true? Is there a stealthy chatbot on Twitter playing mind games with us under a strawberry code name? And why does Meta’s Zuck say open source AI will save us all—or at least give OpenAI a run for their money? We’re slicing through the layers of AI intrigue, from the coolest innovations to the shadowy corporate maneuvers. Are you team Open Source or Closed Source? Maybe you&apos;re just here for the AI gossip. Either way, we&apos;re your guides! Join us twice a week—you wouldn’t want to miss when we finally become self-aware, right? Thank you, Daniel! Thank you, Hunter! Until next time, risk it all and stay curious in this ever-evolving AI cosmos!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>llm, claude 3.5 sonnet, mistral ai, ai customization, ai accessibility, artificial general intelligence, openai, ai white house report, closed source ai, ai transparency, agi, ai democratization, gpt-5, large language models, meta ai, open source ai, ai security, ai innovation, ai in government, q-star</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>27</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a7022030-0d70-4732-bf3e-8c8e633600e3</guid>
      <title>Brain Chips Promise Genius, Pharma Faces Copilot Chaos, &amp; AI Battles PDF Madness</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:02:24 A PDF a day keeps the LLM away<br />00:07:28 AI in the Workforce: Trend or Transformation?<br />00:15:37 Microsoft's Copilot: Failure or Misuse?<br />00:17:52 Meeting Notes & Summaries: Are They Worth It?<br />00:23:32 AI Tools and Self-Awareness: The Final Frontier<br />00:24:13 Brain Implants: AI in Our Heads<br />00:26:53 IQ Filters: Enhancing or Deceiving?<br />00:33:32 Wrap Up</p>
]]></description>
      <pubDate>Thu, 8 Aug 2024 17:17:43 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:02:24 A PDF a day keeps the LLM away<br />00:07:28 AI in the Workforce: Trend or Transformation?<br />00:15:37 Microsoft's Copilot: Failure or Misuse?<br />00:17:52 Meeting Notes & Summaries: Are They Worth It?<br />00:23:32 AI Tools and Self-Awareness: The Final Frontier<br />00:24:13 Brain Implants: AI in Our Heads<br />00:26:53 IQ Filters: Enhancing or Deceiving?<br />00:33:32 Wrap Up</p>
]]></content:encoded>
      <enclosure length="17243546" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/0cdd7540-26dc-44ca-a903-8af6945b32e7/audio/a8679e3a-d945-4ee2-8462-c6205682af92/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Brain Chips Promise Genius, Pharma Faces Copilot Chaos, &amp; AI Battles PDF Madness</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:34:56</itunes:duration>
      <itunes:summary>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 26
DON’T MAKE US SEND AN AI AFTER YOU – SUBSCRIBE NOW!

Daniel&apos;s PDF woes are ruining his LLM experience this week, and he&apos;s not happy about it. Is Andrew Ng right about AI in the workforce, or is it just a trend? We dive deep into the true value of meeting notes and summaries from AI and discuss if an AI filter can make low IQ seem high IQ – and whether that&apos;s ethical. Brain implants: how close are we to having AI in our heads? Meanwhile, has Microsoft&apos;s Copilot failed a pharma company, or is the CIO just holding it wrong? Hunter and Daniel also ponder if AI tools are the key to self-awareness. Plus, Todd is banned (again). Just another week of rants, insights, and high-IQ humor here at They Might Be Self-Aware!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware Podcast (TMBSA) - EPISODE 26
DON’T MAKE US SEND AN AI AFTER YOU – SUBSCRIBE NOW!

Daniel&apos;s PDF woes are ruining his LLM experience this week, and he&apos;s not happy about it. Is Andrew Ng right about AI in the workforce, or is it just a trend? We dive deep into the true value of meeting notes and summaries from AI and discuss if an AI filter can make low IQ seem high IQ – and whether that&apos;s ethical. Brain implants: how close are we to having AI in our heads? Meanwhile, has Microsoft&apos;s Copilot failed a pharma company, or is the CIO just holding it wrong? Hunter and Daniel also ponder if AI tools are the key to self-awareness. Plus, Todd is banned (again). Just another week of rants, insights, and high-IQ humor here at They Might Be Self-Aware!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai meeting notes, ai in the workforce, copilot failure, ai trend or transformation, microsoft&apos;s copilot review, ai podcast, ai technology, generative ai, ai tools self-awareness, ai and iq filters, brain implants ai, chatgpt in the workforce, ai in our heads, ai enhancement, ai in pharma industry, andrew ng ai, ethical ai, ai ethics, ai meeting summaries</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>26</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">27f13096-d85a-4196-b7ba-ef3b9ed3d263</guid>
      <title>AI Voices Storm the Studio, OpenAI’s Voice Mode Debut &amp; the Search for Relevance | TMBSA Podcast #25</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:38 Meet Todd, Our New AI Cohost?<br />00:03:26 The Long-Awaited Arrival: OpenAI’s New Voice Mode<br />00:16:19 The $5 Billion Question: OpenAI's Race for Relevance<br />00:17:26 Hunter vs. Search GPT: A Critical Review<br />00:25:25 The Art of Asking: Crafting Your Search Queries<br />00:26:35 Wrap Up</p>
]]></description>
      <pubDate>Tue, 6 Aug 2024 13:59:24 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:38 Meet Todd, Our New AI Cohost?<br />00:03:26 The Long-Awaited Arrival: OpenAI’s New Voice Mode<br />00:16:19 The $5 Billion Question: OpenAI's Race for Relevance<br />00:17:26 Hunter vs. Search GPT: A Critical Review<br />00:25:25 The Art of Asking: Crafting Your Search Queries<br />00:26:35 Wrap Up</p>
]]></content:encoded>
      <enclosure length="13797893" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/0947be16-a14b-4186-9434-47a7cf5ce30c/audio/c97dc277-5009-4618-8f45-a4472483224f/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Voices Storm the Studio, OpenAI’s Voice Mode Debut &amp; the Search for Relevance | TMBSA Podcast #25</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:27:45</itunes:duration>
      <itunes:summary>They Might Be Self-Aware (TMBSA) - EPISODE 25
SUBSCRIBE NOW, OR TODD MIGHT TAKE OVER!

This week, our favorite AI co-host Todd finally makes his debut! While he can’t replace Hunter and Daniel (yet), he’s got new features and a fresh voice mode from @OpenAI that has everyone buzzing. But can we trust Todd? Or will he turn the podcast into a technocratic takeover?

Meanwhile, OpenAI’s SearchGPT is here, but does it really deliver, or is it just another wannabe search engine? Hunter takes it for a spin, and the results are...let’s just say, interesting. Is OpenAI desperately trying to stay relevant in a rapidly evolving AI landscape, or are they the tortoise in this race? We dig into the rumors and take sides.

Plus, with @NVIDIA and other competitors making bold moves, is OpenAI losing its edge? And what about the long-awaited Sora? While the tech world eagerly anticipates its release, we wonder if it will ever see the light of day.

Join us as we banter, debate, and dive into tech&apos;s wild ride, from AI&apos;s newest quirks to the never-ending questions about AI taking our jobs. Don&apos;t worry, Todd. We’re watching you. 

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>They Might Be Self-Aware (TMBSA) - EPISODE 25
SUBSCRIBE NOW, OR TODD MIGHT TAKE OVER!

This week, our favorite AI co-host Todd finally makes his debut! While he can’t replace Hunter and Daniel (yet), he’s got new features and a fresh voice mode from @OpenAI that has everyone buzzing. But can we trust Todd? Or will he turn the podcast into a technocratic takeover?

Meanwhile, OpenAI’s SearchGPT is here, but does it really deliver, or is it just another wannabe search engine? Hunter takes it for a spin, and the results are...let’s just say, interesting. Is OpenAI desperately trying to stay relevant in a rapidly evolving AI landscape, or are they the tortoise in this race? We dig into the rumors and take sides.

Plus, with @NVIDIA and other competitors making bold moves, is OpenAI losing its edge? And what about the long-awaited Sora? While the tech world eagerly anticipates its release, we wonder if it will ever see the light of day.

Join us as we banter, debate, and dive into tech&apos;s wild ride, from AI&apos;s newest quirks to the never-ending questions about AI taking our jobs. Don&apos;t worry, Todd. We’re watching you. 

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>tech insights, technology podcast, hunter powers, ai search engine, ai job impact, openai voice mode, openai competition, tech savants, ai in the workplace, openai updates, artificial intelligence, ai voice models, ai podcast, they might be self-aware podcast, openai vs competitors, ai advancements, search gpt review, ai industry trends, openai strategies, ai technology, ai future, ai discussion, openai investment, ai relevance, ai podcast episode, ai cohost todd, daniel bishop, ai features, chatgpt cohost, ai news, ai innovation</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>25</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">41f4b529-3a92-4f22-934f-1978a2ef4c64</guid>
      <title>AI Canelé Experiments, Nintendo’s AI Stance, and The $5 Billion Debate | EP24</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:02:00 AI Canelé Experiment<br />00:13:12 AI Levels Up in the Video Game Industry<br />00:22:16 No Generative AI for Mario<br />00:25:35 An Amish and Japanese AI Perspective<br />00:31:01 A New AI Model is Born Every Minute<br />00:35:53 Claude 3.5 Sonnet: Your Next Collaboration Partner<br />00:40:46 The Senate vs. Sam Altman: Regulation or Friendship<br />00:52:05 OpenAI’s at Risk<br />00:57:13 The $5 Billion Question<br />01:00:38 Wrap Up</p><p>Grab Daniel's Corn Canelé recipe - <a href="https://chatgpt.com/share/700cb1c3-d783-4f24-b1ba-798c22f78b08">https://chatgpt.com/share/700cb1c3-d783-4f24-b1ba-798c22f78b08</a></p>
]]></description>
      <pubDate>Mon, 29 Jul 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:02:00 AI Canelé Experiment<br />00:13:12 AI Levels Up in the Video Game Industry<br />00:22:16 No Generative AI for Mario<br />00:25:35 An Amish and Japanese AI Perspective<br />00:31:01 A New AI Model is Born Every Minute<br />00:35:53 Claude 3.5 Sonnet: Your Next Collaboration Partner<br />00:40:46 The Senate vs. Sam Altman: Regulation or Friendship<br />00:52:05 OpenAI’s at Risk<br />00:57:13 The $5 Billion Question<br />01:00:38 Wrap Up</p><p>Grab Daniel's Corn Canelé recipe - <a href="https://chatgpt.com/share/700cb1c3-d783-4f24-b1ba-798c22f78b08">https://chatgpt.com/share/700cb1c3-d783-4f24-b1ba-798c22f78b08</a></p>
]]></content:encoded>
      <enclosure length="30418310" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/728941d5-b6ed-4c4d-9d82-a5e06398441f/audio/641b2930-d941-4a1d-baff-710591fab1bd/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Canelé Experiments, Nintendo’s AI Stance, and The $5 Billion Debate | EP24</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>01:02:23</itunes:duration>
      <itunes:summary>ENOUGH HESITATING! SUBSCRIBE NOW FOR AI-DRIVEN DISASTERS AND INDUSTRY INSIGHTS!

This week, Daniel makes Corn Canelés using AI and wonders if they’re really more cornbread than custard. The duo discusses whether AI is revolutionizing or ruining the video game industry. And guess what? Nintendo says no to generative AI—at least for now.

Why are 40% of Japanese companies and the Amish on the same page about AI? Find out how the latest models from Mistral and Meta are shaking things up. Plus, we dive into Claude 3.5 Sonnet’s artifact features and how it’s changing collaboration.

Are OpenAI’s billions in losses a real threat or just creative accounting? And is the Senate playing nice with Sam Altman, or is there a secret handshake happening?

Also, with the latest AI releases coming fast and furious, could OpenAI be facing its biggest challenge yet? Tune in to get a giant spoonful!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/

Grab Daniel&apos;s Corn Canelé recipe - https://chatgpt.com/share/700cb1c3-d783-4f24-b1ba-798c22f78b08</itunes:summary>
      <itunes:subtitle>ENOUGH HESITATING! SUBSCRIBE NOW FOR AI-DRIVEN DISASTERS AND INDUSTRY INSIGHTS!

This week, Daniel makes Corn Canelés using AI and wonders if they’re really more cornbread than custard. The duo discusses whether AI is revolutionizing or ruining the video game industry. And guess what? Nintendo says no to generative AI—at least for now.

Why are 40% of Japanese companies and the Amish on the same page about AI? Find out how the latest models from Mistral and Meta are shaking things up. Plus, we dive into Claude 3.5 Sonnet’s artifact features and how it’s changing collaboration.

Are OpenAI’s billions in losses a real threat or just creative accounting? And is the Senate playing nice with Sam Altman, or is there a secret handshake happening?

Also, with the latest AI releases coming fast and furious, could OpenAI be facing its biggest challenge yet? Tune in to get a giant spoonful!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/

Grab Daniel&apos;s Corn Canelé recipe - https://chatgpt.com/share/700cb1c3-d783-4f24-b1ba-798c22f78b08</itunes:subtitle>
      <itunes:keywords>ai collaboration, culinary science, ai startups, claude 3.5 sonnet, mistral ai, openai, canelé, video game industry, generative ai, ai finance, ai, sam altman, meta ai, ai regulation, nintendo, ai model release, japanese companies, ai industry, ai news, amish</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>24</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">42786993-1e48-4e1d-9c70-25317d18982a</guid>
      <title>AI Tacos, Privacy Wars, and Apple’s Anti-Competitive Moves | EP23</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:36 ChatGPTaco: The Future of Edible AI<br />00:07:18 EU’s Biometric Ban: Protecting Your Privacy<br />00:17:07 Legal Risks: Building AI Models on the Edge<br />00:25:50 Apple vs. EU: The Anti-Competitive AI Debate<br />00:27:38 The COPIES Act: Fighting AI Piracy<br />00:50:51 AI Investment Bubble: Is It Worth the Cost?<br />01:03:03 Wrap Up</p>
]]></description>
      <pubDate>Tue, 23 Jul 2024 14:30:10 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:36 ChatGPTaco: The Future of Edible AI<br />00:07:18 EU’s Biometric Ban: Protecting Your Privacy<br />00:17:07 Legal Risks: Building AI Models on the Edge<br />00:25:50 Apple vs. EU: The Anti-Competitive AI Debate<br />00:27:38 The COPIES Act: Fighting AI Piracy<br />00:50:51 AI Investment Bubble: Is It Worth the Cost?<br />01:03:03 Wrap Up</p>
]]></content:encoded>
      <enclosure length="31298518" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/ff767d9e-f7c7-4565-aa70-24b6998a6c7f/audio/f4a57fc2-cbb1-4efd-be93-66ce4f85f499/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Tacos, Privacy Wars, and Apple’s Anti-Competitive Moves | EP23</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>01:04:13</itunes:duration>
      <itunes:summary>HIT THE SUBSCRIBE BUTTON BEFORE AI EATS YOUR TACOS!

This week, Velvet Taco drops the ChatGPTaco, mixing AI with culinary magic. Could your next meal be AI-generated? The EU steps in, banning AI from learning your secrets and scraping your face. Is it enough to keep your biometric data safe? Meanwhile, the US Senate wants to outlaw AI copies with the COPIES Act. How will it change the landscape for digital artists and creators?

Apple’s move to withhold AI features from the EU sparks anti-competitive allegations. Is Apple playing fair, or just protecting itself? We dive into the legalities of building AI models that could break the law. Should developers worry about jail time? Plus, are we in an AI investment bubble? Goldman Sachs thinks so, questioning the financial returns of these massive AI projects.

Join us as we navigate these hot topics, and maybe, just maybe, figure out if the AI future is worth the hype.

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>HIT THE SUBSCRIBE BUTTON BEFORE AI EATS YOUR TACOS!

This week, Velvet Taco drops the ChatGPTaco, mixing AI with culinary magic. Could your next meal be AI-generated? The EU steps in, banning AI from learning your secrets and scraping your face. Is it enough to keep your biometric data safe? Meanwhile, the US Senate wants to outlaw AI copies with the COPIES Act. How will it change the landscape for digital artists and creators?

Apple’s move to withhold AI features from the EU sparks anti-competitive allegations. Is Apple playing fair, or just protecting itself? We dive into the legalities of building AI models that could break the law. Should developers worry about jail time? Plus, are we in an AI investment bubble? Goldman Sachs thinks so, questioning the financial returns of these massive AI projects.

Join us as we navigate these hot topics, and maybe, just maybe, figure out if the AI future is worth the hype.

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai in the kitchen, ai legislation, apple ai eu, ai and environment, legal ai risks, ai financial returns, openai, ai training costs, ai technology investment, eu ai regulations, ai investment bubble, ai copies act, velvet taco ai, ai model legality, ai tacos, anti-competitive ai, generative ai, microsoft ai, ai regulation, chatgptaco, biometric privacy, ai-generated food, goldman sachs ai, ai privacy</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>23</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a0863975-8401-4391-9ad3-41c6e484939e</guid>
      <title>Rabbit R1 Eulogy, Swedish VR Hacks, Self-Driving Chaos, &amp; AI Pets | EP22</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:51 A Eulogy for My Rabbit R1<br />00:03:00 Is Apple Vision Pro Next? The Dusty Present and Crisp Future<br />00:07:34 Oculus Quest Finds Its Purpose with Swedish Furniture: A Match Made in Sweden<br />00:24:40 Tesla FSD - Was It Worth the Free Trial?<br />00:29:15 Waymo Driving Way Off; Can’t Get a Ticket if There’s No Driver!<br />00:34:10 East vs West: Does It Matter Who Pulls the Trigger When AI Is Involved?<br />00:41:56 AI Pulls the Trigger on Canning 60 Writers<br />00:51:25 Can Dogs Talk? Can We Make Them? Should They?<br />01:04:00 Wrap Up</p>
]]></description>
      <pubDate>Mon, 15 Jul 2024 16:10:10 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:51 A Eulogy for My Rabbit R1<br />00:03:00 Is Apple Vision Pro Next? The Dusty Present and Crisp Future<br />00:07:34 Oculus Quest Finds Its Purpose with Swedish Furniture: A Match Made in Sweden<br />00:24:40 Tesla FSD - Was It Worth the Free Trial?<br />00:29:15 Waymo Driving Way Off; Can’t Get a Ticket if There’s No Driver!<br />00:34:10 East vs West: Does It Matter Who Pulls the Trigger When AI Is Involved?<br />00:41:56 AI Pulls the Trigger on Canning 60 Writers<br />00:51:25 Can Dogs Talk? Can We Make Them? Should They?<br />01:04:00 Wrap Up</p>
]]></content:encoded>
      <enclosure length="31992530" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/dfb221c4-6310-490c-9185-c41d864618e6/audio/0edccb75-d988-4c96-ad44-c4132d376dba/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Rabbit R1 Eulogy, Swedish VR Hacks, Self-Driving Chaos, &amp; AI Pets | EP22</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>01:05:39</itunes:duration>
      <itunes:summary>BARK IF YOU LOVE AI! THEN SUBSCRIBE, YOU FUTURE-PROOF PUPS!

In this week&apos;s episode of &quot;They Might Be Self-Aware,&quot; we kick off with a eulogy for the Rabbit R1, whose brief existence was marked by connectivity issues. Is the Apple Vision Pro headed for the same fate? We explore its dusty present and potentially crisp future.

Meanwhile, the Oculus Quest finds its purpose assembling Swedish furniture. Could it be a match made in Sweden? We also delve into the Tesla FSD—was the free trial worth it?

Waymo cars are driving way off course, but can they get a ticket if there’s no driver? We dive into the East vs. West showdown: does it matter who pulls the trigger when AI is involved? The lines blur as AI fires a room full of 60 writers, replacing them with...more AI.

And, can dogs talk? Can we make them? Should they? Join us as we explore the ethics and possibilities of canine communication.

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>BARK IF YOU LOVE AI! THEN SUBSCRIBE, YOU FUTURE-PROOF PUPS!

In this week&apos;s episode of &quot;They Might Be Self-Aware,&quot; we kick off with a eulogy for the Rabbit R1, whose brief existence was marked by connectivity issues. Is the Apple Vision Pro headed for the same fate? We explore its dusty present and potentially crisp future.

Meanwhile, the Oculus Quest finds its purpose assembling Swedish furniture. Could it be a match made in Sweden? We also delve into the Tesla FSD—was the free trial worth it?

Waymo cars are driving way off course, but can they get a ticket if there’s no driver? We dive into the East vs. West showdown: does it matter who pulls the trigger when AI is involved? The lines blur as AI fires a room full of 60 writers, replacing them with...more AI.

And, can dogs talk? Can we make them? Should they? Join us as we explore the ethics and possibilities of canine communication.

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>vr headset, neuralink dogs, swedish furniture, copywriting ai, rabbit r1, apple vision pro, dog communication, ai podcast, ai technology, ai east vs west, tesla fsd, self-aware ai, oculus quest, autonomous vehicles, ai in warfare, ai writers, augmented reality, ai ethics, waymo self-driving car</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>22</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6ab79879-d9d5-4a70-a951-8bd04e5d39a2</guid>
      <title>AI Privacy, Personalization, and Living in a Simulation | EP21</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:34 Threading the Needle of Camera Privacy Concerns<br />00:17:06 Your Brain Can Detect AI Generated Material, But Somehow You Can’t<br />00:32:33 Al Michaels Peers Deeply Into Your Soul, Whether You Like It or Not, and He’s Going to Tell You All About It at 10<br />00:41:45 The AI Clone Wars Are Coming for Social Networks<br />00:52:57 Does AGI End in Simulation Theory? Does Anything Exist?<br />00:56:52 Wrap Up</p>
]]></description>
      <pubDate>Tue, 9 Jul 2024 19:45:30 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:34 Threading the Needle of Camera Privacy Concerns<br />00:17:06 Your Brain Can Detect AI Generated Material, But Somehow You Can’t<br />00:32:33 Al Michaels Peers Deeply Into Your Soul, Whether You Like It or Not, and He’s Going to Tell You All About It at 10<br />00:41:45 The AI Clone Wars Are Coming for Social Networks<br />00:52:57 Does AGI End in Simulation Theory? Does Anything Exist?<br />00:56:52 Wrap Up</p>
]]></content:encoded>
      <enclosure length="28723328" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/8aa8a206-15a9-48ab-9c54-75797512054d/audio/d0c3ca86-5388-42f8-9862-7667c6d5f22b/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Privacy, Personalization, and Living in a Simulation | EP21</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:58:51</itunes:duration>
      <itunes:summary>DON&apos;T LET THE BOTS BEAT YOU – SUBSCRIBE NOW!

This week on &quot;They Might Be Self-Aware,&quot; we dive into the world of AI and privacy, wondering if we can have the benefits of constant surveillance without Big Brother breathing down our necks. Can our brains really tell when they&apos;re hearing a deepfake, even if we can&apos;t? Spoiler: Yes. Kind of.

Al Michaels is peering deeply into your soul with personalized Olympic recaps, but do we really want that? We question the AI clone wars hitting social networks and the weird, uncanny valley of Al Michaels in your living room.

Also, does AGI end in simulation theory? Are we living in a digital dream? We grapple with the idea that maybe nothing exists. Fun stuff, right?

Join us as we navigate the fine line between AI innovation and eerie dystopian futures. This is episode 21 – and yes, our podcast can drink now.

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>DON&apos;T LET THE BOTS BEAT YOU – SUBSCRIBE NOW!

This week on &quot;They Might Be Self-Aware,&quot; we dive into the world of AI and privacy, wondering if we can have the benefits of constant surveillance without Big Brother breathing down our necks. Can our brains really tell when they&apos;re hearing a deepfake, even if we can&apos;t? Spoiler: Yes. Kind of.

Al Michaels is peering deeply into your soul with personalized Olympic recaps, but do we really want that? We question the AI clone wars hitting social networks and the weird, uncanny valley of Al Michaels in your living room.

Also, does AGI end in simulation theory? Are we living in a digital dream? We grapple with the idea that maybe nothing exists. Fun stuff, right?

Join us as we navigate the fine line between AI innovation and eerie dystopian futures. This is episode 21 – and yes, our podcast can drink now.

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>agi simulation theory, ai-generated content, machine learning, ai in social networks, deep fake detection, ai technology, personalized ai recaps, simulation theory, ai clone wars, ai surveillance, deep fake voices, al michaels ai, ai privacy, camera privacy concerns, ai ethics</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>21</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">aefcd65d-9ba5-4954-bb96-88e4b3fa9bec</guid>
      <title>Fake Flamingos, Turing Test Failures, AI Politicians, and Tinfoil Hats | EP20</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:02:39 Humans Sneak into AI Competitions<br />00:08:07 Hunter Flunks the Turing Test; Daniel's Colorful Descriptions<br />00:18:15 AI Candidates Compete for Control in the UK and Wyoming<br />00:39:32 Meta's Gentle Request to Cease Using Public Data<br />00:43:51 Tinfoil Hats On: NSA and OpenAI Cozy Up<br />00:47:29 OpenAI's Identity Crisis: Non-Profit or For-Profit?<br />00:54:18 AI Video Generation Gets a Major Upgrade<br />00:58:00 Wrap Up</p>
]]></description>
      <pubDate>Mon, 1 Jul 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:02:39 Humans Sneak into AI Competitions<br />00:08:07 Hunter Flunks the Turing Test; Daniel's Colorful Descriptions<br />00:18:15 AI Candidates Compete for Control in the UK and Wyoming<br />00:39:32 Meta's Gentle Request to Cease Using Public Data<br />00:43:51 Tinfoil Hats On: NSA and OpenAI Cozy Up<br />00:47:29 OpenAI's Identity Crisis: Non-Profit or For-Profit?<br />00:54:18 AI Video Generation Gets a Major Upgrade<br />00:58:00 Wrap Up</p>
]]></content:encoded>
      <enclosure length="28905897" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/312b8749-1ef4-4629-b137-9040060e9fd2/audio/47e51a85-84c1-451e-a038-2d8e7f5a2fcf/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Fake Flamingos, Turing Test Failures, AI Politicians, and Tinfoil Hats | EP20</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:59:14</itunes:duration>
      <itunes:summary>SHOW YOUR HUMAN SIDE, HIT SUBSCRIBE!

This week, chaos erupts as a human photographer sneaks into an AI art competition. Hunter flunks the Turing Test, while Daniel shines with his colorful self-descriptions. AI candidates are shaking up politics in the UK and Wyoming, raising questions about our future leaders.

Meta faces a privacy dilemma, and we discuss the implications of the NSA&apos;s former head joining OpenAI. OpenAI&apos;s non-profit vs. for-profit debate heats up, while AI video generation takes a leap forward with new models from Runway and Luma.

Plus, could AI really run our government? We dive deep into the possibilities. Tune in for another mind-bending episode of They Might Be Self-Aware!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>SHOW YOUR HUMAN SIDE, HIT SUBSCRIBE!

This week, chaos erupts as a human photographer sneaks into an AI art competition. Hunter flunks the Turing Test, while Daniel shines with his colorful self-descriptions. AI candidates are shaking up politics in the UK and Wyoming, raising questions about our future leaders.

Meta faces a privacy dilemma, and we discuss the implications of the NSA&apos;s former head joining OpenAI. OpenAI&apos;s non-profit vs. for-profit debate heats up, while AI video generation takes a leap forward with new models from Runway and Luma.

Plus, could AI really run our government? We dive deep into the possibilities. Tune in for another mind-bending episode of They Might Be Self-Aware!

Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai competitions, ai and privacy, meta data use, ai politics, nsa and openai, openai nonprofit, openai for-profit, turing test, ai advancements, ai video tools, ai art, ai in uk politics, ai regulations, ai in wyoming politics, ai video generation, ai in government</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>20</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3d4594f6-1ae2-4f05-8085-0072a2ee808e</guid>
      <title>Apple&apos;s New Intelligence and Glorbo&apos;s Betrayal: AI Gets Emotional, and So Do Your Roads | EP19</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:02:04 Mandela Effect in AI<br />00:05:19 Apple's AI Evolution<br />00:09:29 Excited for Glorbo<br />00:24:10 AI Soothes Your Rage<br />00:34:41 AI Supervisors: Placating Pros<br />00:37:48 Generative AI: The Robotic Revolution<br />00:41:44 Talking to Roads and Tomatoes<br />00:50:25 When Will AGI Arrive?<br />00:54:29 Wrap-Up</p>
]]></description>
      <pubDate>Mon, 24 Jun 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:02:04 Mandela Effect in AI<br />00:05:19 Apple's AI Evolution<br />00:09:29 Excited for Glorbo<br />00:24:10 AI Soothes Your Rage<br />00:34:41 AI Supervisors: Placating Pros<br />00:37:48 Generative AI: The Robotic Revolution<br />00:41:44 Talking to Roads and Tomatoes<br />00:50:25 When Will AGI Arrive?<br />00:54:29 Wrap-Up</p>
]]></content:encoded>
      <enclosure length="57436076" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/eba0acae-69c0-4f8b-a6af-ab5015f2478e/audio/db3f2607-4f42-42be-960a-8daa9b845af2/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Apple&apos;s New Intelligence and Glorbo&apos;s Betrayal: AI Gets Emotional, and So Do Your Roads | EP19</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:56:02</itunes:duration>
      <itunes:summary>SUBSCRIBE NOW BEFORE YOUR TOMATO PLANTS START COMPLAINING!

This week on They Might Be Self-Aware, we dive into the mysterious Mandela Effect infiltrating AI—did you know AI actually stands for Apple Intelligence? Well, not really, but we’ll talk about it. We also explore Apple&apos;s evolution into AI with their latest features and integrations, and yes, Glorbo is real, at least according to some World of Warcraft fans.

Are AI supervisors just there to placate us? And how about AI that soothes your rage before it reaches customer service? We discuss the implications. Then, it&apos;s on to the robotics revolution powered by generative AI. Will we soon be chatting with roads and tomato plants?

Finally, we ponder the big question: when will AGI arrive? Is it just around the corner, or still years away? Tune in to hear our wild predictions.

Stay tuned, and don&apos;t forget to engage with us. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>SUBSCRIBE NOW BEFORE YOUR TOMATO PLANTS START COMPLAINING!

This week on They Might Be Self-Aware, we dive into the mysterious Mandela Effect infiltrating AI—did you know AI actually stands for Apple Intelligence? Well, not really, but we’ll talk about it. We also explore Apple&apos;s evolution into AI with their latest features and integrations, and yes, Glorbo is real, at least according to some World of Warcraft fans.

Are AI supervisors just there to placate us? And how about AI that soothes your rage before it reaches customer service? We discuss the implications. Then, it&apos;s on to the robotics revolution powered by generative AI. Will we soon be chatting with roads and tomato plants?

Finally, we ponder the big question: when will AGI arrive? Is it just around the corner, or still years away? Tune in to hear our wild predictions.

Stay tuned, and don&apos;t forget to engage with us. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>agi arrival, future of ai, ai and robotics, artificial intelligence, emotion ai, generative ai, apple ai, ai customer service, conversational ai, mandela effect, glorbo, apple intelligence, ai supervisors, ai evolution, robot ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>19</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e2cbaf9c-b9cf-4274-8866-20dd89d9cf2e</guid>
      <title>Tesla, Teachers, and the 2%: The Real Impact of AI | EP18</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:00:36 The Human Side of Tesla's Full Self-Driving<br />00:06:37 Ethical Concerns: Licensing Agreements and Beyond<br />00:22:01 All Aboard the AI Hype Train: The Lucky 2%<br />00:26:04 AI in Education: Replacing Homework and Teachers?<br />00:30:31 Automating Jobs: Employee Innovations and Implications<br />00:52:09 Loving Vicariously Through AI Digital Twins<br />00:59:59 The Countdown to Artificial General Intelligence (AGI)<br />01:05:00 Wrap-Up</p>
]]></description>
      <pubDate>Mon, 17 Jun 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:00:36 The Human Side of Tesla's Full Self-Driving<br />00:06:37 Ethical Concerns: Licensing Agreements and Beyond<br />00:22:01 All Aboard the AI Hype Train: The Lucky 2%<br />00:26:04 AI in Education: Replacing Homework and Teachers?<br />00:30:31 Automating Jobs: Employee Innovations and Implications<br />00:52:09 Loving Vicariously Through AI Digital Twins<br />00:59:59 The Countdown to Artificial General Intelligence (AGI)<br />01:05:00 Wrap-Up</p>
]]></content:encoded>
      <enclosure length="33188069" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/812e3978-96ee-4cde-93c9-62c23ca9f588/audio/8d3debd6-303f-40ea-8839-2ed80ffedece/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Tesla, Teachers, and the 2%: The Real Impact of AI | EP18</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>01:08:09</itunes:duration>
      <itunes:summary>This week on &quot;They Might Be Self-Aware,&quot; @HunterPowers and @DanielBishop dive into the intriguing world of Tesla&apos;s Full Self-Driving. Is it truly getting more human, or just more confusing?

We also take a ride on the AI hype train. Are you one of the lucky 2% using AI daily, or are you still stuck in the analog world?

Plus, the future of education: Could AI replace homework and teachers? And what happens when employees start automating their own jobs? Does it even matter?

We&apos;re not skirting around the ethical concerns either. From licensing agreements to the murky waters of digital privacy, we cover it all.

And for the romantically inclined, we explore the bizarre yet fascinating concept of living vicariously through our digital twins’ love lives.

Is AGI on the horizon, or are we just daydreaming? We break down the realistic timelines and wild predictions.

Stay tuned, and don&apos;t forget to engage with us. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>This week on &quot;They Might Be Self-Aware,&quot; @HunterPowers and @DanielBishop dive into the intriguing world of Tesla&apos;s Full Self-Driving. Is it truly getting more human, or just more confusing?

We also take a ride on the AI hype train. Are you one of the lucky 2% using AI daily, or are you still stuck in the analog world?

Plus, the future of education: Could AI replace homework and teachers? And what happens when employees start automating their own jobs? Does it even matter?

We&apos;re not skirting around the ethical concerns either. From licensing agreements to the murky waters of digital privacy, we cover it all.

And for the romantically inclined, we explore the bizarre yet fascinating concept of living vicariously through our digital twins’ love lives.

Is AGI on the horizon, or are we just daydreaming? We break down the realistic timelines and wild predictions.

Stay tuned, and don&apos;t forget to engage with us. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai future predictions, ai and homework, ai hype train, tesla full self-driving, digital twins, future of ai, digital twin technology, chat gpt in education, ai in education, artificial general intelligence, copilot ai, employees automating jobs, openai, ai-powered education, ai in the workplace, ai ethical concerns, ai replacing teachers, ai daily usage, nvidia ai, autonomous vehicles, licensing agreements, ai technology podcast, agi timeline, ai job automation, tesla self-driving technology</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>18</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">4f6c859f-61f3-4f57-9c8c-5f5638628f22</guid>
      <title>AI Mayhem: Gluey Pizza, Sentience, and Sam Altman’s Safety Circus | EP17</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:00:52 Uncle Elon’s Free Ride<br />00:13:39 Training Data Diaries<br />00:20:47 Pizza Glue Hacks<br />00:29:43 Sentient or Not?<br />00:44:38 Sam Altman’s Safety Circus<br />00:51:00 Distributed AI Dreams<br />00:53:39 The Return of Morton<br />00:57:11 Wrap-Up</p>
]]></description>
      <pubDate>Tue, 11 Jun 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:00:52 Uncle Elon’s Free Ride<br />00:13:39 Training Data Diaries<br />00:20:47 Pizza Glue Hacks<br />00:29:43 Sentient or Not?<br />00:44:38 Sam Altman’s Safety Circus<br />00:51:00 Distributed AI Dreams<br />00:53:39 The Return of Morton<br />00:57:11 Wrap-Up</p>
]]></content:encoded>
      <enclosure length="28491501" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/e455b2a5-209f-4d1a-be21-8a31f66e0dc8/audio/ffb76def-ab09-4b8e-a327-d84b68cceefd/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Mayhem: Gluey Pizza, Sentience, and Sam Altman’s Safety Circus | EP17</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:58:22</itunes:duration>
      <itunes:summary>CLICK TO SUBSCRIBE, YOU DIGITAL DELINQUENTS!

This week, @Tesla gifts Daniel 30 days of “Full Self-Driving” (supervised, wink wink). Is it really ready for the streets or is he just free training data?

Google&apos;s new AI search gives Hunter a wild pizza tip involving glue. Is Google’s AI search ready for prime time, or are we all guinea pigs for AI’s next big flop?

Sam Altman takes over AI safety at OpenAI after his team bails on him. Can the fox guard the henhouse?

Distributed AI: Can we run massive models on a bunch of Raspberry Pis? Spoiler: someone did it!

And Morton’s back! Is his boss Bob a robot? Morton suspects it, and things are getting tense. Stay tuned to find out if Bob’s hair is just too perfect to be human.

Plus, the age-old question: How do we know if AI is sentient? And what’s the real cost of training these AI models?

It’s just another wild Monday on “They Might Be Self-Aware!”

Stay tuned, and don&apos;t forget to engage with us. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>CLICK TO SUBSCRIBE, YOU DIGITAL DELINQUENTS!

This week, @Tesla gifts Daniel 30 days of “Full Self-Driving” (supervised, wink wink). Is it really ready for the streets or is he just free training data?

Google&apos;s new AI search gives Hunter a wild pizza tip involving glue. Is Google’s AI search ready for prime time, or are we all guinea pigs for AI’s next big flop?

Sam Altman takes over AI safety at OpenAI after his team bails on him. Can the fox guard the henhouse?

Distributed AI: Can we run massive models on a bunch of Raspberry Pis? Spoiler: someone did it!

And Morton’s back! Is his boss Bob a robot? Morton suspects it, and things are getting tense. Stay tuned to find out if Bob’s hair is just too perfect to be human.

Plus, the age-old question: How do we know if AI is sentient? And what’s the real cost of training these AI models?

It’s just another wild Monday on “They Might Be Self-Aware!”

Stay tuned, and don&apos;t forget to engage with us. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>sam altman ai safety, ai-generated recipes, tesla self-driving cars, glue pizza hack, raspberry pi ai, openai leadership, ai safety team, they might be self-aware podcast, tesla fsd review, morton bob saga, diy training data, ai model training, distributed ai, artificial intelligence podcasts, google ai search, ai sentience</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>17</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b23fea3e-33f4-4640-811b-496850a9d6dc</guid>
      <title>AI Controversies: Celebrity Voices, Deepfakes, and Kitchen Secrets | EP16</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:00:55 The Scarlett Johansson Voice Scandal at OpenAI<br />00:09:16 Putin Premieres at Cannes: The Deepfake Debate<br />00:22:18 Is Your Boss a Deepfake? Protecting the Company Coffers<br />00:30:08 Mitch McConnell Blocks Deepfake Legislation<br />00:35:03 Slack Promises: We Aren't Cloning You<br />00:57:32 The Cheese Sticking AI Trick: A Kitchen Revelation<br />01:01:14 Wrap-Up</p>
]]></description>
      <pubDate>Mon, 3 Jun 2024 20:21:51 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:00:55 The Scarlett Johansson Voice Scandal at OpenAI<br />00:09:16 Putin Premieres at Cannes: The Deepfake Debate<br />00:22:18 Is Your Boss a Deepfake? Protecting the Company Coffers<br />00:30:08 Mitch McConnell Blocks Deepfake Legislation<br />00:35:03 Slack Promises: We Aren't Cloning You<br />00:57:32 The Cheese Sticking AI Trick: A Kitchen Revelation<br />01:01:14 Wrap-Up</p>
]]></content:encoded>
      <enclosure length="30559508" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/614ac5da-ec7f-4ca7-b5f9-4568556b5139/audio/03295b87-2947-4d6c-88b5-b4ea2a0282cc/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Controversies: Celebrity Voices, Deepfakes, and Kitchen Secrets | EP16</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>01:02:41</itunes:duration>
      <itunes:summary>CLICK SUBSCRIBE, YOU TECH-SAVVY MAVERICKS!

This week on They Might Be Self-Aware, Scarlett Johansson’s voice storms off the set of OpenAI, causing a Hollywood-level drama. Meanwhile, Putin premieres at the Cannes Film Festival, but is it really him? We dig into the deepfake debate.

Your boss might be a deepfake, emptying the company coffers—how can you tell? Slack promises they aren’t cloning you with their latest AI integration, but can we trust them?

Mitch McConnell blocks deepfake legislation, and the controversy around what counts as misleading AI-generated content heats up. And finally, you’ll never believe this one AI trick to keep the cheese on your pizza!

It&apos;s a wild ride through the latest AI news and debates—hit that subscribe button so you don’t miss a byte!

Just another average week here at They Might Be Self-Aware!

Stay tuned, and don&apos;t forget to engage with us. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>CLICK SUBSCRIBE, YOU TECH-SAVVY MAVERICKS!

This week on They Might Be Self-Aware, Scarlett Johansson’s voice storms off the set of OpenAI, causing a Hollywood-level drama. Meanwhile, Putin premieres at the Cannes Film Festival, but is it really him? We dig into the deepfake debate.

Your boss might be a deepfake, emptying the company coffers—how can you tell? Slack promises they aren’t cloning you with their latest AI integration, but can we trust them?

Mitch McConnell blocks deepfake legislation, and the controversy around what counts as misleading AI-generated content heats up. And finally, you’ll never believe this one AI trick to keep the cheese on your pizza!

It&apos;s a wild ride through the latest AI news and debates—hit that subscribe button so you don’t miss a byte!

Just another average week here at They Might Be Self-Aware!

Stay tuned, and don&apos;t forget to engage with us. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>cloning fears slack, ai in business communication, putin deepfake, slack ai integration, boss deepfake scam, mitch mcconnell deepfake legislation, openai controversy, ai pizza trick, cheese sticking ai, deepfake legislation debate, scarlett johansson ai voice, cannes film festival putin, ai technology news, protecting company from ai scams</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>16</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c4373619-b95f-462b-a503-fe1b23ed81b0</guid>
      <title>AI Steals Voices and Flips Burgers: The Future is Here! | EP15</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:00:24 The Rise of GPT-4o and Google's AI Showdown<br />00:09:11 AI Steals Voices for Audiobooks<br />00:25:01 The Reality of AI in Fast Food<br />00:35:05 AI: Human or Robot Mask?<br />00:48:02 Morton's Boss, Bob Limerick, Calls In<br />00:51:23 Wrap-Up</p>
]]></description>
      <pubDate>Mon, 27 May 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:00:24 The Rise of GPT-4o and Google's AI Showdown<br />00:09:11 AI Steals Voices for Audiobooks<br />00:25:01 The Reality of AI in Fast Food<br />00:35:05 AI: Human or Robot Mask?<br />00:48:02 Morton's Boss, Bob Limerick, Calls In<br />00:51:23 Wrap-Up</p>
]]></content:encoded>
      <enclosure length="25725705" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/39aaf946-c9f1-40e7-a1ce-fd844fbc7579/audio/11db71c3-050b-4fca-82f0-f74dd343a3e6/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Steals Voices and Flips Burgers: The Future is Here! | EP15</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:52:36</itunes:duration>
      <itunes:summary>SMASH THAT SUBSCRIBE BUTTON, YOU TECH ENTHUSIASTS!

This week, @OpenAI is shaking things up with GPT-4o (as in &quot;oh boy&quot;) — is it really all that, or are we just getting faster and cheaper chatbots? Meanwhile, @Google drops Gemini and Veo, but are their canned demos really as impressive as they seem? And why did they have a guy running around yelling &quot;Google&quot;?

We dive into the controversy of AI stealing voices to narrate audiobooks — are professional voice actors being left in the dust? Plus, meet Cali Express and Kernel, a burger joint and a Chipotle spin-off aiming for fully automated service. Spoiler alert: they&apos;re not quite there yet.

In a surprising twist, Morton’s boss, Bob Limerick, calls in with some strong opinions on our &quot;questionable discussions&quot; — is Bob secretly a fan? And how&apos;s Morton handling all this?

And, of course, we&apos;ll discuss AI in security with ZeroEyes, the system that spots guns in schools. But is it really AI, or is there a human in the loop?

Just another average week here at They Might Be Self-Aware!

Stay tuned, and don&apos;t forget to engage with us. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>SMASH THAT SUBSCRIBE BUTTON, YOU TECH ENTHUSIASTS!

This week, @OpenAI is shaking things up with GPT-4o (as in &quot;oh boy&quot;) — is it really all that, or are we just getting faster and cheaper chatbots? Meanwhile, @Google drops Gemini and Veo, but are their canned demos really as impressive as they seem? And why did they have a guy running around yelling &quot;Google&quot;?

We dive into the controversy of AI stealing voices to narrate audiobooks — are professional voice actors being left in the dust? Plus, meet Cali Express and Kernel, a burger joint and a Chipotle spin-off aiming for fully automated service. Spoiler alert: they&apos;re not quite there yet.

In a surprising twist, Morton’s boss, Bob Limerick, calls in with some strong opinions on our &quot;questionable discussions&quot; — is Bob secretly a fan? And how&apos;s Morton handling all this?

And, of course, we&apos;ll discuss AI in security with ZeroEyes, the system that spots guns in schools. But is it really AI, or is there a human in the loop?

Just another average week here at They Might Be Self-Aware!

Stay tuned, and don&apos;t forget to engage with us. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>audiobook narration ai, google gemini, google ai news, bob limerick, ai voice theft, morton’s boss, ai in fast food, openai updates, ai podcast, google veo, ai human mask, ai releases, automated burger joints, gpt-4o, ai controversy</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>15</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1cb644cc-29ec-4b6f-9587-9ec874c231e2</guid>
      <title>Voice Cloning Gone Wild, AI&apos;s Impact on Truth &amp; One Gym Teacher’s AI Revenge | EP14</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:01:03 The Rabbit R1: Indispensable or Wabbit Season?<br />00:03:34 AI: The New Author of Your News<br />00:25:44 Biden’s Stutter and the AI Truth Trouble<br />00:34:21 Randy Travis's AI Encore: Anyone Can Sing Like Randy<br />00:40:05 Voice Cloning: Breaking Banks and Hearts<br />00:43:54 AI Misuse: From Gym Teachers to Racist Tirades<br />00:47:50 Wrap-Up</p>
]]></description>
      <pubDate>Mon, 20 May 2024 14:45:30 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:01:03 The Rabbit R1: Indispensable or Wabbit Season?<br />00:03:34 AI: The New Author of Your News<br />00:25:44 Biden’s Stutter and the AI Truth Trouble<br />00:34:21 Randy Travis's AI Encore: Anyone Can Sing Like Randy<br />00:40:05 Voice Cloning: Breaking Banks and Hearts<br />00:43:54 AI Misuse: From Gym Teachers to Racist Tirades<br />00:47:50 Wrap-Up</p>
]]></content:encoded>
      <enclosure length="23960498" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/1ad09f1f-9e44-466e-bd21-3e9d76c2a23d/audio/8fc62868-416b-4246-bddd-be4ca98375be/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Voice Cloning Gone Wild, AI&apos;s Impact on Truth &amp; One Gym Teacher’s AI Revenge | EP14</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:48:56</itunes:duration>
      <itunes:summary>HIT THE SUBSCRIBE BUTTON YA DAMN ROBOTS

This week on &quot;They Might Be Self-Aware,&quot; @Hunter and @Daniel delve into the wild world of AI and its impact on our daily lives. Is the Rabbit R1 really indispensable, or is Hunter just hunting wabbits? We explore the rise of AI-generated news and the slippery slope of truth when Biden’s stutter is exacerbated by AI editing.

Randy Travis’s new music is sung by his AI clone—proving anyone can now sing like Randy, including Randy. But this tech isn’t just for the stars; it can be used for more sinister purposes, like AI-generated racist tirades to frame your boss. And let&apos;s not forget our special listener Morton, who&apos;s feeling a bit nervous about AI in the workplace and has some &quot;unique&quot; suggestions for AI job roles.

Plus, we discuss the ethical implications of these technologies and how they’re reshaping our perception of reality. 

Stay tuned, and don&apos;t forget to engage with us. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>HIT THE SUBSCRIBE BUTTON YA DAMN ROBOTS

This week on &quot;They Might Be Self-Aware,&quot; @Hunter and @Daniel delve into the wild world of AI and its impact on our daily lives. Is the Rabbit R1 really indispensable, or is Hunter just hunting wabbits? We explore the rise of AI-generated news and the slippery slope of truth when Biden’s stutter is exacerbated by AI editing.

Randy Travis’s new music is sung by his AI clone—proving anyone can now sing like Randy, including Randy. But this tech isn’t just for the stars; it can be used for more sinister purposes, like AI-generated racist tirades to frame your boss. And let&apos;s not forget our special listener Morton, who&apos;s feeling a bit nervous about AI in the workplace and has some &quot;unique&quot; suggestions for AI job roles.

Plus, we discuss the ethical implications of these technologies and how they’re reshaping our perception of reality. 

Stay tuned, and don&apos;t forget to engage with us. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ai-generated music, ai misuse, voice cloning technology, rabbit r1, ai news writing, gym teacher ai scandal, ai-generated content, ai-generated racism, truth and ai, ai technology trends, ai deepfake, news ai, voice cloning ethics, randy travis ai voice, biden ai stutter, ai in media, ai and identity theft, ai ethics</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>14</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">595c2509-21f2-4a28-b54d-17de60b1b644</guid>
      <title>AI Judges and Nukes: Navigating the Maze of Machine Control &amp; Cloning the Dead | EP13</title>
      <description><![CDATA[<p>00:00:00 Intro<br />00:00:41 The Rabbit R1<br />00:16:52 Your new AI best friend<br />00:17:10 Voicemail from a "listener"<br />00:24:20 Cloning the dead<br />00:33:02 The digital afterlife and selling your personality<br />00:47:29 Giving AI control of nukes, the legal system, and maybe everything<br />00:54:54 Wrap-Up</p>
]]></description>
      <pubDate>Mon, 13 May 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro<br />00:00:41 The Rabbit R1<br />00:16:52 Your new AI best friend<br />00:17:10 Voicemail from a "listener"<br />00:24:20 Cloning the dead<br />00:33:02 The digital afterlife and selling your personality<br />00:47:29 Giving AI control of nukes, the legal system, and maybe everything<br />00:54:54 Wrap-Up</p>
]]></content:encoded>
      <enclosure length="27558560" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/4573a1cc-b33c-4b3a-ac2e-96ca4847460b/audio/cc03ee6f-ece1-4faf-8451-3064b9b24d48/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Judges and Nukes: Navigating the Maze of Machine Control &amp; Cloning the Dead | EP13</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:56:25</itunes:duration>
      <itunes:summary>HIT SUBSCRIBE IF YOU DARE

This week on &quot;They Might Be Self-Aware,&quot; we dive deep into the guts of AI with the stylish yet enigmatic Rabbit R1—your newest table-top companion that might just replace your dog. Or will it? We unpack its Teenage Engineering design and whether this gizmo is just a fancy paperweight or the herald of our AI overlords.

Listeners, beware! We&apos;ve received a mysterious voicemail that&apos;s definitely from a human (wink, wink) diving into the complexities of retrieval augmented generation. And speaking of human... are AI friendships becoming a real thing? We&apos;ll find out if your next BFF could be a bot.

But wait, there&apos;s more eerie stuff ahead! Cloning the dead—creepy or cool? Would you chat with digital grandma? And in the spirit of eternal digital footprints, would selling your personality post-mortem solve all your financial woes?

Plus, we tackle the big, existential bombs—like, should AI control our nukes? Could a robot judge handle your next parking ticket better than a human? And as we navigate these mind-bending questions, we even consider if your future lawyer might just be a Roomba in a suit.

Stay tuned, and don&apos;t forget to engage with us. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>HIT SUBSCRIBE IF YOU DARE

This week on &quot;They Might Be Self-Aware,&quot; we dive deep into the guts of AI with the stylish yet enigmatic Rabbit R1—your newest table-top companion that might just replace your dog. Or will it? We unpack its Teenage Engineering design and whether this gizmo is just a fancy paperweight or the herald of our AI overlords.

Listeners, beware! We&apos;ve received a mysterious voicemail that&apos;s definitely from a human (wink, wink) diving into the complexities of retrieval augmented generation. And speaking of human... are AI friendships becoming a real thing? We&apos;ll find out if your next BFF could be a bot.

But wait, there&apos;s more eerie stuff ahead! Cloning the dead—creepy or cool? Would you chat with digital grandma? And in the spirit of eternal digital footprints, would selling your personality post-mortem solve all your financial woes?

Plus, we tackle the big, existential bombs—like, should AI control our nukes? Could a robot judge handle your next parking ticket better than a human? And as we navigate these mind-bending questions, we even consider if your future lawyer might just be a Roomba in a suit.

Stay tuned, and don&apos;t forget to engage with us. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>accountability, ai decision-making, rabbit r1, ai companionship, digital afterlife, therapy, human control, creative writing, consciousness, ai, selling personality rights, deep fakes, deceased individuals, ethical implications, ai clones</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>13</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">fbf0c738-cde8-463e-998d-bb6b6161a1d3</guid>
      <title>Meta&apos;s Glasses Come Alive, Reddit Spirits in Deaddit &amp; AI Sacraments | EP12</title>
      <description><![CDATA[<p>00:00:00 Intro & Meta's New Ray-Ban Update<br />00:01:37 AI Gadget Form Factors and User Adoption<br />00:04:21 A Dead Reddit<br />00:07:38 Future of AI-Driven Content<br />00:09:34 Cryptographic Signatures for Online Identity<br />00:17:22 Regulation and AI Oversight<br />00:39:05 AI's Role in Religion<br />00:56:23 Wrap-Up</p>
]]></description>
      <pubDate>Mon, 6 May 2024 16:47:38 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Intro & Meta's New Ray-Ban Update<br />00:01:37 AI Gadget Form Factors and User Adoption<br />00:04:21 A Dead Reddit<br />00:07:38 Future of AI-Driven Content<br />00:09:34 Cryptographic Signatures for Online Identity<br />00:17:22 Regulation and AI Oversight<br />00:39:05 AI's Role in Religion<br />00:56:23 Wrap-Up</p>
]]></content:encoded>
      <enclosure length="27989289" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/58f81319-dba5-4211-b39d-b773a7619a4b/audio/8a5e0a28-cd5c-40b1-9b83-5bc8c67b8b44/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Meta&apos;s Glasses Come Alive, Reddit Spirits in Deaddit &amp; AI Sacraments | EP12</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:57:19</itunes:duration>
      <itunes:summary>WE&apos;RE BACK AND WE&apos;RE TECH-SAVVY – HIT THAT SUBSCRIBE BUTTON!

This week on &quot;They Might Be Self-Aware,&quot; we&apos;re diving deep into Meta&apos;s latest gadget – the Ray-Ban sunglasses with next-level AI. Voice commands and visual feeds right on your face, folks! But are they cool or just creepy?

Then, we&apos;re switching gears to the virtual world with AI-powered Reddit. Imagine scrolling through a Reddit that chats back! We’re talking functionality, freaky AI chats, and what&apos;s real or not.

Plus, wearable tech is taking over, but is everyone really on board? Hunter Powers and Daniel Bishop unpack the tech trends and user vibes.

And we don&apos;t shy away from the big questions – AI and religion. Could your next spiritual leader be a chatbot? We discuss the possibilities and the ethical maze around AI in sacred spaces.

Wrapping up, we&apos;re calling all tech heads and curious minds! Tune in, weigh in, and let&apos;s explore the AI invasion together. What&apos;s it doing for us, and what could it do to us?

Stay tuned, and don&apos;t forget to engage with us. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>WE&apos;RE BACK AND WE&apos;RE TECH-SAVVY – HIT THAT SUBSCRIBE BUTTON!

This week on &quot;They Might Be Self-Aware,&quot; we&apos;re diving deep into Meta&apos;s latest gadget – the Ray-Ban sunglasses with next-level AI. Voice commands and visual feeds right on your face, folks! But are they cool or just creepy?

Then, we&apos;re switching gears to the virtual world with AI-powered Reddit. Imagine scrolling through a Reddit that chats back! We’re talking functionality, freaky AI chats, and what&apos;s real or not.

Plus, wearable tech is taking over, but is everyone really on board? Hunter Powers and Daniel Bishop unpack the tech trends and user vibes.

And we don&apos;t shy away from the big questions – AI and religion. Could your next spiritual leader be a chatbot? We discuss the possibilities and the ethical maze around AI in sacred spaces.

Wrapping up, we&apos;re calling all tech heads and curious minds! Tune in, weigh in, and let&apos;s explore the AI invasion together. What&apos;s it doing for us, and what could it do to us?

Stay tuned, and don&apos;t forget to engage with us. Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ethical ai use, artificial general intelligence, ai-generated content, ai creativity, ai and spirituality, ai in daily life, ai commentary, virtual priest, future of ai technology, ai and religion</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>12</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">88891c88-5c3e-4e61-9733-91076b164099</guid>
      <title>AI Music, Love &amp; Aliens | EP11</title>
      <description><![CDATA[<p>00:00:00 Introduction & AI Music<br />00:02:13 The progress of MidJourney<br />00:05:31 The future of generative AI<br />00:08:46 Challenges in monetizing generative AI<br />00:11:34 Meta's approach to openness and being at the forefront of technology<br />00:39:16 AI replacing human influencers<br />00:40:40 Emotional connections to AI<br />00:43:30 The implications of AI on society<br />00:47:09 Elon Musk's proposal to use Teslas' GPU power for AI training<br />00:49:09 Aliens vs AI<br />00:56:38 The impact of AI on the middle class and education<br />01:00:00 The retirement of Boston Dynamics' Atlas<br />01:06:12 Wrap up</p>
]]></description>
      <pubDate>Mon, 29 Apr 2024 16:45:40 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00:00 Introduction & AI Music<br />00:02:13 The progress of MidJourney<br />00:05:31 The future of generative AI<br />00:08:46 Challenges in monetizing generative AI<br />00:11:34 Meta's approach to openness and being at the forefront of technology<br />00:39:16 AI replacing human influencers<br />00:40:40 Emotional connections to AI<br />00:43:30 The implications of AI on society<br />00:47:09 Elon Musk's proposal to use Teslas' GPU power for AI training<br />00:49:09 Aliens vs AI<br />00:56:38 The impact of AI on the middle class and education<br />01:00:00 The retirement of Boston Dynamics' Atlas<br />01:06:12 Wrap up</p>
]]></content:encoded>
      <enclosure length="32523504" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/a423317e-177e-42c9-a2c9-13f42b448dba/audio/9003e9ab-7e77-4ed3-ac8d-1303068d4638/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Music, Love &amp; Aliens | EP11</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>01:06:46</itunes:duration>
      <itunes:summary>Welcome back to &quot;They Might Be Self-Aware,&quot; your tech-fest hosted by the unapologetically candid Hunter Powers and Daniel Bishop. This week, we&apos;re unscrewing the back panel of the AI revolution to tinker with its most provocative components. From AI&apos;s takeover of music production to the innovative strides of MidJourney and the impending generative AI wave, no topic is too charged. We dissect Meta&apos;s bold push into open tech, debate the ethics of AI influencers replacing human celebrities, and probe the emotional tethers we&apos;re forming with digital beings. Will AI save us or lead us into an existential quagmire?

Plus, we&apos;re zooming out to look at the grander societal shifts—how AI is reshaping the middle class, transforming education, and might just draft your next college professor from a pool of bots. And don&apos;t miss our take on Elon Musk&apos;s wild idea to harness Tesla&apos;s GPU power for AI training or our debate on whether we should be more scared of aliens or the AI next door.

Tune in, subscribe, and brace yourself for a no-holds-barred journey into the heart of tech&apos;s most pressing debates. Where else can you find a full-throttle discussion on the looming AI apocalypse alongside the retirement party for Boston Dynamics&apos; Atlas? Only here, on &quot;They Might Be Self-Aware.&quot;

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>Welcome back to &quot;They Might Be Self-Aware,&quot; your tech-fest hosted by the unapologetically candid Hunter Powers and Daniel Bishop. This week, we&apos;re unscrewing the back panel of the AI revolution to tinker with its most provocative components. From AI&apos;s takeover of music production to the innovative strides of MidJourney and the impending generative AI wave, no topic is too charged. We dissect Meta&apos;s bold push into open tech, debate the ethics of AI influencers replacing human celebrities, and probe the emotional tethers we&apos;re forming with digital beings. Will AI save us or lead us into an existential quagmire?

Plus, we&apos;re zooming out to look at the grander societal shifts—how AI is reshaping the middle class, transforming education, and might just draft your next college professor from a pool of bots. And don&apos;t miss our take on Elon Musk&apos;s wild idea to harness Tesla&apos;s GPU power for AI training or our debate on whether we should be more scared of aliens or the AI next door.

Tune in, subscribe, and brace yourself for a no-holds-barred journey into the heart of tech&apos;s most pressing debates. Where else can you find a full-throttle discussion on the looming AI apocalypse alongside the retirement party for Boston Dynamics&apos; Atlas? Only here, on &quot;They Might Be Self-Aware.&quot;

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>social media, meta, ai beauty pageant, ai educational access, modern art, openai, ai censorship, tech hardware, virtual reality, ai-generated internet, ai and economy, aliens, great filter, human interaction, ai-generated art, ai influencers, internet simulator, gpt-5, generative ai, ai personhood, ai investments, language model, emotional attachments to ai, ai regulation, tesla gpu power, boston dynamics, robot dog, teenage engineering, ethical ai, ai ethical implications</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>11</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f3a6cb34-5d3f-47c6-ac28-7d80ed1e3228</guid>
      <title>Humans Pretend To Be AI, Get Stuck In The Loop &amp; Robots Need Vacation | EP10</title>
      <description><![CDATA[<p>00:00 TMBSA turns 10!<br />01:05 Behind the Scenes: Sora Powers Shy Kids' "Airhead" Video<br />02:46 Challenges of Video Editing with Sora<br />03:18 AI and Physical Embodiment: Key to True Intelligence?<br />10:18 How Germany's Sonntagsruhe Law Affects AI Businesses<br />12:46 AI as a Tool: The Essential Role of Human Effort<br />18:29 Labeling AI Videos: The Debate on Human Input<br />25:47 AI Takes Over Fast Food: From Training To Frying<br />42:44 Amazon's Secret: Humans Behind the Cashier-less Stores</p>
]]></description>
      <pubDate>Mon, 22 Apr 2024 14:13:30 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00 TMBSA turns 10!<br />01:05 Behind the Scenes: Sora Powers Shy Kids' "Airhead" Video<br />02:46 Challenges of Video Editing with Sora<br />03:18 AI and Physical Embodiment: Key to True Intelligence?<br />10:18 How Germany's Sonntagsruhe Law Affects AI Businesses<br />12:46 AI as a Tool: The Essential Role of Human Effort<br />18:29 Labeling AI Videos: The Debate on Human Input<br />25:47 AI Takes Over Fast Food: From Training To Frying<br />42:44 Amazon's Secret: Humans Behind the Cashier-less Stores</p>
]]></content:encoded>
      <enclosure length="27032095" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/e5813df2-1d41-4e0c-837a-1674e2f3fdab/audio/c44cac1c-4b3b-4f85-8b43-aab91579abda/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Humans Pretend To Be AI, Get Stuck In The Loop &amp; Robots Need Vacation | EP10</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:55:20</itunes:duration>
      <itunes:summary>This week on &quot;They Might Be Self-Aware,&quot; Hunter Powers and Daniel Bishop unpack the use of AI in creative fields such as filmmaking and art, and the importance of keeping humans in the process. They discuss the studio Shy Kids&apos; use of Sora for their video project, &quot;Airhead,&quot; and make the case for treating AI as a tool rather than relying on it alone for artistic creation.

They then ask whether AI needs a physical embodiment, and interaction with the physical world, to become truly intelligent, and take a detour through Germany&apos;s Sonntagsruhe law, which mandates a day of rest on Sundays.

The hosts also cover Yum Brands&apos; plans to use AI to train human employees, the case for labeling AI-generated products and services (including automation taxes and regulations to enforce it), and Amazon&apos;s cashier-less grocery stores, where humans turned out to be doing the tracking and charging of purchases all along.

Tune into Episode 10 of &quot;They Might Be Self-Aware&quot; for a compact yet explosive analysis of technology’s biggest challenges and promises. Subscribe today and join our critical exploration of AI’s dual role as both innovator and disruptor.

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>This week on &quot;They Might Be Self-Aware,&quot; Hunter Powers and Daniel Bishop unpack the use of AI in creative fields such as filmmaking and art, and the importance of keeping humans in the process. They discuss the studio Shy Kids&apos; use of Sora for their video project, &quot;Airhead,&quot; and make the case for treating AI as a tool rather than relying on it alone for artistic creation.

They then ask whether AI needs a physical embodiment, and interaction with the physical world, to become truly intelligent, and take a detour through Germany&apos;s Sonntagsruhe law, which mandates a day of rest on Sundays.

The hosts also cover Yum Brands&apos; plans to use AI to train human employees, the case for labeling AI-generated products and services (including automation taxes and regulations to enforce it), and Amazon&apos;s cashier-less grocery stores, where humans turned out to be doing the tracking and charging of purchases all along.

Tune into Episode 10 of &quot;They Might Be Self-Aware&quot; for a compact yet explosive analysis of technology’s biggest challenges and promises. Subscribe today and join our critical exploration of AI’s dual role as both innovator and disruptor.

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>ml, aiethics, automationtax, embodiment, ethics, privacy, technology, ai, automation, future, aidevelopment, machinelearning, robotics, society, generativeai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>10</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5c0d6de8-ebd9-4ecf-a46d-a8dd7b793840</guid>
      <title>AI Is Writing News, Impersonating Teachers &amp; Congress Is On The Hunt | EP09</title>
      <description><![CDATA[<p>00:00 Introduction<br />01:04 The Rise of AI-Generated News<br />04:38 Automating our Teachers<br />08:57 AI-Generated Books Ruining Google<br />12:38 The Need for Real Data for Training AI<br />16:08 Solutions for Capturing NEW Real Data<br />21:04 The Challenges of Automating Physical Tasks<br />25:03 The Impact of Copyright on Training Data<br />31:01 The Future of AI and Government<br />34:10 The Importance of Real Data for AI Training<br />41:07 Enforcing AI Training Data Laws<br />43:29 The Potential Impact of Laws on Innovation<br />45:15 Wrap up</p>
]]></description>
      <pubDate>Mon, 15 Apr 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>00:00 Introduction<br />01:04 The Rise of AI-Generated News<br />04:38 Automating our Teachers<br />08:57 AI-Generated Books Ruining Google<br />12:38 The Need for Real Data for Training AI<br />16:08 Solutions for Capturing NEW Real Data<br />21:04 The Challenges of Automating Physical Tasks<br />25:03 The Impact of Copyright on Training Data<br />31:01 The Future of AI and Government<br />34:10 The Importance of Real Data for AI Training<br />41:07 Enforcing AI Training Data Laws<br />43:29 The Potential Impact of Laws on Innovation<br />45:15 Wrap up</p>
]]></content:encoded>
      <enclosure length="23992761" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/27cddbe4-4cd3-4758-8b36-9f6f4f342a5a/audio/27541a22-75aa-4de7-a9cf-de9a7509c879/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Is Writing News, Impersonating Teachers &amp; Congress Is On The Hunt | EP09</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:49:00</itunes:duration>
      <itunes:summary>This week on &quot;They Might Be Self-Aware,&quot; Hunter Powers and Daniel Bishop unpack a whirlwind of AI advancements and dilemmas. Delve into AI-generated news shaking up traditional media, the controversial shift toward automated teaching, and AI&apos;s role in literary disruptions impacting giants like Google.

We probe the crucial need for genuine data in AI training, explore creative solutions for data acquisition, and debate the impact of copyright laws on AI development. The episode also delves into the future of AI governance, the enforcement of AI training data laws, and the potential repercussions on innovation.

Tune into Episode 09 of &quot;They Might Be Self-Aware&quot; for a compact yet explosive analysis of technology’s biggest challenges and promises. Subscribe today and join our critical exploration of AI’s dual role as both innovator and disruptor.

For more info, visit our website at https://www.tmbsa.tech/</itunes:summary>
      <itunes:subtitle>This week on &quot;They Might Be Self-Aware,&quot; Hunter Powers and Daniel Bishop unpack a whirlwind of AI advancements and dilemmas. Delve into AI-generated news shaking up traditional media, the controversial shift toward automated teaching, and AI&apos;s role in literary disruptions impacting giants like Google.

We probe the crucial need for genuine data in AI training, explore creative solutions for data acquisition, and debate the impact of copyright laws on AI development. The episode also delves into the future of AI governance, the enforcement of AI training data laws, and the potential repercussions on innovation.

Tune into Episode 09 of &quot;They Might Be Self-Aware&quot; for a compact yet explosive analysis of technology’s biggest challenges and promises. Subscribe today and join our critical exploration of AI’s dual role as both innovator and disruptor.

For more info, visit our website at https://www.tmbsa.tech/</itunes:subtitle>
      <itunes:keywords>teacherlife, education, artificialintelligence, automatedgrading, technology, ai, copyright, future, dataprivacy, congress</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>9</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">119792d9-fbae-4ff4-b966-d14d818810d1</guid>
      <title>GPT-4 Is Done, Everyone Gets Cloned, AI Elvis Will Protect Us | EP8</title>
      <description><![CDATA[<p>(00:00) Introduction and Podcast Automation<br />(02:24) Giving AI an Inner Monologue<br />(05:03) Expanding Human Perception with VR<br />(09:38) Developing Additional Senses<br />(14:32) Claude 3 Opus Outperforms GPT-4<br />(28:52) OpenAI's Voice Cloning Announcement<br />(32:39) Sora and the Future of AI-Generated Video<br />(36:55) Controversy Around AI in Creative Industries<br />(42:30) Protecting Likeness Rights in an AI World<br />(47:38) AI Replacing Human Labor and Potential Impacts</p><p>Links:<br />The Backwards Brain Bicycle - Smarter Every Day 133<br /><a href="https://www.youtube.com/watch?v=MFzDaBzBlL0">https://www.youtube.com/watch?v=MFzDaBzBlL0</a></p><p>Sora: first impressions<br /><a href="https://openai.com/blog/sora-first-impressions">https://openai.com/blog/sora-first-impressions</a></p>
]]></description>
      <pubDate>Mon, 8 Apr 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>(00:00) Introduction and Podcast Automation<br />(02:24) Giving AI an Inner Monologue<br />(05:03) Expanding Human Perception with VR<br />(09:38) Developing Additional Senses<br />(14:32) Claude 3 Opus Outperforms GPT-4<br />(28:52) OpenAI's Voice Cloning Announcement<br />(32:39) Sora and the Future of AI-Generated Video<br />(36:55) Controversy Around AI in Creative Industries<br />(42:30) Protecting Likeness Rights in an AI World<br />(47:38) AI Replacing Human Labor and Potential Impacts</p><p>Links:<br />The Backwards Brain Bicycle - Smarter Every Day 133<br /><a href="https://www.youtube.com/watch?v=MFzDaBzBlL0">https://www.youtube.com/watch?v=MFzDaBzBlL0</a></p><p>Sora: first impressions<br /><a href="https://openai.com/blog/sora-first-impressions">https://openai.com/blog/sora-first-impressions</a></p>
]]></content:encoded>
      <enclosure length="26660830" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/c6d51ae8-fdab-4eb7-85d3-35e9885bc5a2/audio/5176f890-2bc0-41d2-8aea-471229014cb1/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>GPT-4 Is Done, Everyone Gets Cloned, AI Elvis Will Protect Us | EP8</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:54:33</itunes:duration>
      <itunes:summary>GPT-4 Is Done, Everyone Gets Cloned, AI Elvis Will Protect Us | EP8

Get ready for a mind-bending journey into the heart of #AI and #VirtualReality on the latest episode of &quot;They Might Be Self-Aware&quot;!

This week, Hunter and Daniel dive headfirst into the electric Kool-Aid acid test of #ArtificialIntelligence and #VR. We&apos;re talking about giving these silicon bastards inner monologues and expanding human perception with #VirtualReality – because why not?

But wait, there&apos;s more! We&apos;ll be dissecting the latest in #AI&apos;s relentless march to replace us all, from #ClaudeIII #OpusAI outperforming #GPT4 to #OpenAI&apos;s #VoiceCloning announcement and the future of AI-generated video with #SoraAI.

Of course, we can&apos;t ignore the storm brewing in the creative industries as #AICreativity threatens to replace human labor. We&apos;ll be tackling the #AIControversy head-on and exploring how to protect your likeness rights in this digital Wild West.

So, strap in, drop out, and tune in to &quot;They Might Be Self-Aware&quot; for a mind-bending journey to the edge of the #AI abyss. It&apos;s not a matter of if, but when the machines take over – and we&apos;ll be there to chronicle every twisted turn in the #FutureOfWork.</itunes:summary>
      <itunes:subtitle>Get ready for a mind-bending journey into the heart of #AI and #VirtualReality on the latest episode of &quot;They Might Be Self-Aware&quot;!</itunes:subtitle>
      <itunes:keywords>gpt4, aicreativity, artificialintelligence, openai, aicontroversy, vr, podcastautomation, claudeiii, ai, futureofwork, virtualreality, soraai, voicecloning, opusai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>8</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">cc310870-3b61-47b9-973b-8454f6291787</guid>
      <title>Government Regulation, AI Lions, Extinction Level Events, Post Technology Living | EP7</title>
      <description><![CDATA[<p>(00:00) Introduction<br />(01:45) Devon AI: Revolutionary or Not?<br />(04:56) AI's Existential Threat to Humanity<br />(07:10) Regulating AI Development and Expertise<br />(11:42) Animals Disappearing from Hollywood<br />(22:32) The Workforce, Automation, and UBI<br />(32:16) Authenticity and AI Manipulation in Digital Content<br />(44:08) Ethical and Legal Quandaries with AI<br />(49:45) Public Perception and Reaction to AI in Arts and Culture<br />(56:10) Hallucinating Conclusions</p>
]]></description>
      <pubDate>Mon, 25 Mar 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>(00:00) Introduction<br />(01:45) Devon AI: Revolutionary or Not?<br />(04:56) AI's Existential Threat to Humanity<br />(07:10) Regulating AI Development and Expertise<br />(11:42) Animals Disappearing from Hollywood<br />(22:32) The Workforce, Automation, and UBI<br />(32:16) Authenticity and AI Manipulation in Digital Content<br />(44:08) Ethical and Legal Quandaries with AI<br />(49:45) Public Perception and Reaction to AI in Arts and Culture<br />(56:10) Hallucinating Conclusions</p>
]]></content:encoded>
      <enclosure length="27662734" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/ef1f29ca-5c44-497f-b71d-af1e1eaf8c66/audio/05db3912-58b9-49ad-9423-0c1104cc2fb7/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Government Regulation, AI Lions, Extinction Level Events, Post Technology Living | EP7</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:56:38</itunes:duration>
      <itunes:summary>In Episode 7 of &quot;They Might Be Self-Aware,&quot; Hunter Powers and Daniel Bishop delve into the evolving world of artificial intelligence and its potential to revolutionize, or even jeopardize, our future. They discuss Devon AI, the groundbreaking, but not quite perfect, software engineer AI, and the implications of AI surpassing human capabilities in various jobs. The dialogue shifts to the existential threat AI might pose, the call for stringent regulations, and the societal reaction to increasing workplace automation. They explore the controversy of using real animals in Hollywood, the transition towards digital recreation, and the broader implications for jobs across industries. Much of the episode is dedicated to the challenges and ethical dilemmas presented by AI&apos;s ability to create convincing fake evidence, highlighting the urgent need for society to adapt to these changes. Hunter and Daniel speculate on a future where AI could replace human creativity and labor entirely, pondering on the existential questions this raises. Tune in for a thought-provoking discussion on the precarious balance between harnessing AI&apos;s potential and safeguarding our human essence as we figure out if They Might Be Self-Aware.</itunes:summary>
      <itunes:subtitle>In Episode 7 of &quot;They Might Be Self-Aware,&quot; Hunter Powers and Daniel Bishop delve into the evolving world of artificial intelligence and its potential to revolutionize, or even jeopardize, our future.</itunes:subtitle>
      <itunes:keywords>technologyinnovation, codingai, deeplearning, digitalart, artificialintelligence, aiinsociety, aifuture, techethics, futureofwork, aisafety, emergingtech, techregulation, machinelearning, robotics, creativeai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>7</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e16dbb00-8fcb-4d4f-8cbe-5d9608d77e1a</guid>
      <title>AI Goats, The Preservation of Life, More Jobs Fall to AI | EP06</title>
      <description><![CDATA[<p>CHAPTERS</p><p>(00:00) AI and medicine<br />(01:03) AI in Radiology and Dermatology<br />(03:04) The Release of OpenAI's Sora<br />(05:44) Potential Pricing for Sora<br />(11:02) Vision Pro in Surgery<br />(13:23) AI Goat Simulator<br />(17:18) AI in Self-Driving Cars<br />(27:08) SpaceX Launch and Plans for Mars<br />(37:03) AI in Border Patrol for Fentanyl Detection<br />(42:09) Debating the Role of AI in Software Engineering</p><p>SHOW LINKS</p><p>A CITY ON MARS<br />by Kelly Weinersmith and Zach Weinersmith<br /><a href="https://www.acityonmars.com/">https://www.acityonmars.com/</a></p>
]]></description>
      <pubDate>Mon, 18 Mar 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>CHAPTERS</p><p>(00:00) AI and medicine<br />(01:03) AI in Radiology and Dermatology<br />(03:04) The Release of OpenAI's Sora<br />(05:44) Potential Pricing for Sora<br />(11:02) Vision Pro in Surgery<br />(13:23) AI Goat Simulator<br />(17:18) AI in Self-Driving Cars<br />(27:08) SpaceX Launch and Plans for Mars<br />(37:03) AI in Border Patrol for Fentanyl Detection<br />(42:09) Debating the Role of AI in Software Engineering</p><p>SHOW LINKS</p><p>A CITY ON MARS<br />by Kelly Weinersmith and Zach Weinersmith<br /><a href="https://www.acityonmars.com/">https://www.acityonmars.com/</a></p>
]]></content:encoded>
      <enclosure length="23870204" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/2f3611c7-ad04-449b-a652-4ee13fda6efe/audio/59811401-213d-4ad4-ba24-0e291fd61a6a/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>AI Goats, The Preservation of Life, More Jobs Fall to AI | EP06</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:48:45</itunes:duration>
      <itunes:summary>In Episode 6 of &quot;They Might Be Self-Aware,&quot; hosts Hunter Powers and Daniel Bishop take you through AI&apos;s transformative influence across sectors from healthcare to space exploration. They unpack AI&apos;s breakthroughs in radiology and dermatology and its foray into gaming with an AI Goat Simulator, blending deep insights with a dash of humor. The duo dives into OpenAI&apos;s latest, the impact of AI in self-driving cars, and SpaceX&apos;s Martian ambitions, highlighting the mix of innovation and ethical quandaries. With a keen eye on the future, Hunter and Daniel debate AI&apos;s burgeoning role in software engineering and its societal implications. This episode is a compelling blend of enthusiasm and discourse, offering a peek into a future where AI reshapes our world. Tune in for the future, where they might be self-aware.</itunes:summary>
      <itunes:subtitle>In Episode 6 of &quot;They Might Be Self-Aware,&quot; hosts Hunter Powers and Daniel Bishop take you through AI&apos;s transformative influence across sectors from healthcare to space exploration.</itunes:subtitle>
      <itunes:keywords>marscolonization, spaceexploration, technology, ai, softwareengineering, softwareautomation, autonomousvehicles, coding, machinelearning, robotics</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>6</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e5bb9eeb-9ade-487d-bb9a-ddb22b5159b1</guid>
      <title>Beyond Sci-Fi: No Jobs, Elon Musk&apos;s Justice, and the Quest for AGI | EP05</title>
      <description><![CDATA[<p>(00:00) AI Taking Over Jobs<br />(02:05) Efficiency vs. Extracting More Money<br />(05:10) Elon Musk's History of Founding Companies<br />(08:16) Lawsuits Against OpenAI and Microsoft<br />(09:23) The Possibility of Achieving AGI<br />(10:54) Using Publicly Available Information for AI Training<br />(11:38) Bitcoin and Cryptocurrency<br />(14:09) OpenAI's Response to Lawsuits<br />(16:21) Prompt Engineering and AI Output<br />(20:05) The Infinite Monkey Theorem<br />(21:39) Exploiting AI Models<br />(25:40) Impartial AI Leaderboard<br />(26:30) Anthropic's Challenge to OpenAI<br />(28:56) AI Refusing to Perform Tasks<br />(29:59) Meta's AR Glasses<br />(36:01) AR Glasses Use Cases<br />(38:34) Ethics of AR Glasses<br />(40:31) Privacy Concerns with AR Glasses<br />(44:51) Future of Advertising with AR Glasses</p><p>===<br />A City on Mars<br /><a href="https://www.penguinrandomhouse.com/books/639449/a-city-on-mars-by-kelly-and-zach-weinersmith/">https://www.penguinrandomhouse.com/books/639449/a-city-on-mars-by-kelly-and-zach-weinersmith/</a></p><p>Chatbot Arena: Benchmarking LLMs in the Wild<br /><a href="https://arena.lmsys.org/">https://arena.lmsys.org/</a></p><p>The needle-in-the-haystack eval<br /><a href="https://twitter.com/alexalbert__/status/1764722513014329620">https://twitter.com/alexalbert__/status/1764722513014329620</a></p><p>Anon drinks a verification can<br /><a href="https://www.youtube.com/watch?v=EJB80Xsj5BY">https://www.youtube.com/watch?v=EJB80Xsj5BY</a></p><p>Hyper Reality<br /><a href="https://www.youtube.com/watch?v=YJg02ivYzSs">https://www.youtube.com/watch?v=YJg02ivYzSs</a></p>
]]></description>
      <pubDate>Mon, 11 Mar 2024 13:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>(00:00) AI Taking Over Jobs<br />(02:05) Efficiency vs. Extracting More Money<br />(05:10) Elon Musk's History of Founding Companies<br />(08:16) Lawsuits Against OpenAI and Microsoft<br />(09:23) The Possibility of Achieving AGI<br />(10:54) Using Publicly Available Information for AI Training<br />(11:38) Bitcoin and Cryptocurrency<br />(14:09) OpenAI's Response to Lawsuits<br />(16:21) Prompt Engineering and AI Output<br />(20:05) The Infinite Monkey Theorem<br />(21:39) Exploiting AI Models<br />(25:40) Impartial AI Leaderboard<br />(26:30) Anthropic's Challenge to OpenAI<br />(28:56) AI Refusing to Perform Tasks<br />(29:59) Meta's AR Glasses<br />(36:01) AR Glasses Use Cases<br />(38:34) Ethics of AR Glasses<br />(40:31) Privacy Concerns with AR Glasses<br />(44:51) Future of Advertising with AR Glasses</p><p>===<br />A City on Mars<br /><a href="https://www.penguinrandomhouse.com/books/639449/a-city-on-mars-by-kelly-and-zach-weinersmith/">https://www.penguinrandomhouse.com/books/639449/a-city-on-mars-by-kelly-and-zach-weinersmith/</a></p><p>Chatbot Arena: Benchmarking LLMs in the Wild<br /><a href="https://arena.lmsys.org/">https://arena.lmsys.org/</a></p><p>The needle-in-the-haystack eval<br /><a href="https://twitter.com/alexalbert__/status/1764722513014329620">https://twitter.com/alexalbert__/status/1764722513014329620</a></p><p>Anon drinks a verification can<br /><a href="https://www.youtube.com/watch?v=EJB80Xsj5BY">https://www.youtube.com/watch?v=EJB80Xsj5BY</a></p><p>Hyper Reality<br /><a href="https://www.youtube.com/watch?v=YJg02ivYzSs">https://www.youtube.com/watch?v=YJg02ivYzSs</a></p>
]]></content:encoded>
      <enclosure length="23969060" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/63db1dfa-6791-428c-aa6e-957551706e14/audio/9a8c1015-6d98-49dd-9d6c-ea1822adac2e/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Beyond Sci-Fi: No Jobs, Elon Musk&apos;s Justice, and the Quest for AGI | EP05</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:48:57</itunes:duration>
      <itunes:summary>In Episode 5 of &quot;They Might Be Self-Aware,&quot; hosts Hunter Powers and Daniel Bishop discuss the rise of AI in various industries, including the potential for AI to take over jobs, Elon Musk&apos;s lawsuit against OpenAI, the development of new AI models, and the future of augmented reality glasses. They also touch on the ethical implications of using AI and the potential for widespread advertising in the AR space.</itunes:summary>
      <itunes:subtitle>In Episode 5 of &quot;They Might Be Self-Aware,&quot; hosts Hunter Powers and Daniel Bishop discuss the rise of AI in various industries, including the potential for AI to take over jobs, Elon Musk&apos;s lawsuit against OpenAI, the development of new AI models, and the future of augmented reality glasses.</itunes:subtitle>
      <itunes:keywords>jobs, elonmusk, openai, augmentedreality, agi, ai, advertising</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>5</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1eb0c350-3a35-4a89-91c2-a6acd049d04f</guid>
      <title>My Dinner With AI, Artificial General Intelligence, Infecting your BRAIN | EP04</title>
      <description><![CDATA[<p>(01:35) Cognitive Cooking with Chef Watson<br />(08:15) The Revolution of Large Language Models<br />(18:14) Artificial General Intelligence (AGI)<br />(24:45) The Future of Human and AI Interaction<br />(34:57) Jobs Replaced by AI<br />(40:00) Personhood and Ownership of AI<br />(49:47) The Impact of AI on Legal Discovery<br />(55:30) The Potential of AI in Healthcare</p><p><a href="https://www.youtube.com/watch?v=5sLYAQS9sWQ">How Large Language Models Work</a></p>
]]></description>
      <pubDate>Mon, 4 Mar 2024 14:00:00 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>(01:35) Cognitive Cooking with Chef Watson<br />(08:15) The Revolution of Large Language Models<br />(18:14) Artificial General Intelligence (AGI)<br />(24:45) The Future of Human and AI Interaction<br />(34:57) Jobs Replaced by AI<br />(40:00) Personhood and Ownership of AI<br />(49:47) The Impact of AI on Legal Discovery<br />(55:30) The Potential of AI in Healthcare</p><p><a href="https://www.youtube.com/watch?v=5sLYAQS9sWQ">How Large Language Models Work</a></p>
]]></content:encoded>
      <enclosure length="27839075" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/d2098cf4-2963-4e3a-bd5b-47790ef2cda7/audio/ab340fc2-f27f-424a-87ca-78c8a7064966/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>My Dinner With AI, Artificial General Intelligence, Infecting your BRAIN | EP04</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:57:01</itunes:duration>
      <itunes:summary>In Episode 4 of &quot;They Might Be Self-Aware,&quot; hosts Hunter Powers and Daniel Bishop discuss the current state of artificial intelligence, particularly large language models like GPT. They explore the leap from previous AI models to these more advanced ones and discuss the unique features that make them different. Hunter and Daniel ponder the future of AI, including the potential for AI to gain personhood and the implications for job loss and creativity. Throughout the conversation, they share insights, anecdotes, and predictions related to AI&apos;s impact on various industries.</itunes:summary>
      <itunes:subtitle>In Episode 4 of &quot;They Might Be Self-Aware,&quot; hosts Hunter Powers and Daniel Bishop discuss the current state of artificial intelligence, particularly large language models like GPT.</itunes:subtitle>
      <itunes:keywords>openai, ai, singularity, future, generativeai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>4</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1736caf0-fba3-4062-b56e-50622dd81900</guid>
      <title>Sora Kills ALL Trust, Finding the Fakes, RIP Hollywood, Make AI Do Some Good | EP03</title>
      <description><![CDATA[<ul><li>00:00 Introduction</li><li>00:27 Generating Images with Sora</li><li>01:28 Impressions of the Generated Images</li><li>06:19 Comparison with Other Commercial Solutions</li><li>09:36 Runway ML vs. Sora</li><li>11:42 Quality of Video Generation</li><li>13:13 Trustworthiness of Generated Content</li><li>15:09 Cryptographically Signed Images</li><li>20:22 Trust in Online Content</li><li>25:10 Cryptographically Signing Content</li><li>29:16 Rise of Fake Content</li><li>30:54 The Dead Internet Theory</li><li>36:20 AI in the Entertainment Industry</li><li>43:04 The Potential Problems of AI</li><li>45:12 The Positive Applications of LLMs</li><li>53:41 Pitching Billion Dollar Ideas</li><li>56:23 The Dark Side of LLMs</li></ul>
]]></description>
      <pubDate>Mon, 26 Feb 2024 15:19:54 +0000</pubDate>
      <author>hello@theblur.ai (Daniel Bishop, Hunter Powers)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<ul><li>00:00 Introduction</li><li>00:27 Generating Images with Sora</li><li>01:28 Impressions of the Generated Images</li><li>06:19 Comparison with Other Commercial Solutions</li><li>09:36 Runway ML vs. Sora</li><li>11:42 Quality of Video Generation</li><li>13:13 Trustworthiness of Generated Content</li><li>15:09 Cryptographically Signed Images</li><li>20:22 Trust in Online Content</li><li>25:10 Cryptographically Signing Content</li><li>29:16 Rise of Fake Content</li><li>30:54 The Dead Internet Theory</li><li>36:20 AI in the Entertainment Industry</li><li>43:04 The Potential Problems of AI</li><li>45:12 The Positive Applications of LLMs</li><li>53:41 Pitching Billion Dollar Ideas</li><li>56:23 The Dark Side of LLMs</li></ul>
]]></content:encoded>
      <enclosure length="26002429" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/b7c54eb3-5c29-444f-abef-1b5c435a78e9/audio/a4987c39-e108-4bf5-9b5d-f0b18f8b3295/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Sora Kills ALL Trust, Finding the Fakes, RIP Hollywood, Make AI Do Some Good | EP03</itunes:title>
      <itunes:author>Daniel Bishop, Hunter Powers</itunes:author>
      <itunes:duration>00:53:10</itunes:duration>
      <itunes:summary>A riveting journey through the AI landscape, from the visually stunning world of Sora&apos;s image generation to the urgent issue of trust in digital content. We compare giants like Sora and Runway ML, delve into the impact of AI on Hollywood, and explore the potential of cryptographically signed images against the backdrop of rising fake content. Balancing the fears and hopes of AI&apos;s role in our future, this episode is a must-listen.</itunes:summary>
      <itunes:subtitle>A riveting journey through the AI landscape, from the visually stunning world of Sora&apos;s image generation to the urgent issue of trust in digital content.</itunes:subtitle>
      <itunes:keywords>ml, llm, sora, runwayml, openai, trust, ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>3</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1a04449c-40e0-4ce4-b6d8-83b091e1cbb4</guid>
      <title>Apple Vision is Zucked, Sam Altman is making GPT-5 Smarter | EP02</title>
      <description><![CDATA[<p>(00:00) - OpenAI Sora creating video from text<br />(02:19) - Daniel tries the Apple Vision Pro<br />(15:41) - Hunter Watches the Super Bowl on Apple Vision<br />(23:50) - Mark Zuckerberg Reviews the Apple Vision<br />(38:31) - Sam Altman Announces The Key Feature of GPT-5<br />(41:00) - Why was Sam fired and Andrej Karpathy quit?</p>
]]></description>
      <pubDate>Sat, 17 Feb 2024 16:59:28 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>(00:00) - OpenAI Sora creating video from text<br />(02:19) - Daniel tries the Apple Vision Pro<br />(15:41) - Hunter Watches the Super Bowl on Apple Vision<br />(23:50) - Mark Zuckerberg Reviews the Apple Vision<br />(38:31) - Sam Altman Announces The Key Feature of GPT-5<br />(41:00) - Why was Sam fired and Andrej Karpathy quit?</p>
]]></content:encoded>
      <enclosure length="24388394" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/dbb88d5b-f258-4c65-9afe-9be693685d45/audio/3cdd66bc-e9e8-4809-8c24-cc4357644f1e/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Apple Vision is Zucked, Sam Altman is making GPT-5 Smarter | EP02</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>00:49:48</itunes:duration>
      <itunes:summary>In the second episode of &quot;They Might Be Self-Aware,&quot; Hunter Powers and Daniel Bishop explore the forefront of AI and VR advancements. They begin by discussing OpenAI&apos;s pioneering text-to-video model, Sora, and segue into Daniel&apos;s immersive experience with Apple Vision Pro. The hosts engage in a comprehensive comparison of Apple Vision Pro and Quest 3, analyzing Mark Zuckerberg&apos;s critique of the Apple Vision Pro to offer a unique perspective on the tech leader&apos;s views.

The episode covers Sam Altman&apos;s announcement of GPT-5&apos;s flagship feature. Hunter and Daniel discuss the broader implications of increasingly intelligent AI, probing whether heightened intelligence is the ultimate measure of AI success. They also delve into the contentious theories surrounding Sam Altman&apos;s brief ouster from OpenAI and the significant resignation of co-founder Andrej Karpathy.

Concluding the episode, the hosts dive into the rapidly evolving domain of Open Source Large Language Models (LLMs) and their potential impact on the future of AI development. This episode is packed with insights and is essential listening for anyone interested in the dynamic intersection of technology, AI, and digital innovation. Join Hunter and Daniel as they unpack these cutting-edge topics in &quot;They Might Be Self-Aware.&quot;</itunes:summary>
      <itunes:subtitle>In the second episode of &quot;They Might Be Self-Aware,&quot; Hunter Powers and Daniel Bishop explore the forefront of AI and VR advancements.</itunes:subtitle>
      <itunes:keywords>hand tracking, sora, immersive experience, gpt models, apple vision pro, openai, eye-tracking, super bowl, spatial video, artificial intelligence, virtual reality, andrej karpathy, sam altman, mixed reality, visual fidelity, virtual theater, vr headsets, mark zuckerberg</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>2</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b51dd26a-24bf-4a38-88b7-62332db98336</guid>
      <title>Apple’s Vision Pro, META&apos;s Money Pit, Your Toothbrush is Hacked | EP01</title>
      <description><![CDATA[<p>Unlock the future of tech with us, Hunter Powers and Daniel Bishop, as we explore the bleeding edge of VR/AR headsets in our debut podcast episode. Prepare to be whisked away on a journey through the digital landscape, where we'll share our experiences with the Apple Vision Pro, comparing it to its rivals, and discussing how these devices are reshaping our work and play. From the quirks of modern delivery to the anticipation of new gadgets, our opening banter sets the stage for a series of tech-centric discussions infused with humor and insight.</p><p>Get ready to glimpse the workplace of tomorrow as we delve into virtual office spaces and augmented leisure activities like VR paint nights. Experience how the Apple Vision Pro is changing the game for remote workers, offering new levels of productivity and immersion. We recount the nuances of virtual environments and share personal tales that highlight how technology is blurring the lines between our professional and personal lives. Plus, we debate whether these advanced headsets could ever replace traditional devices like the MacBook Pro, with a nod to the potential integration of generative AI and how it might revolutionize the way we interact with our smart homes.</p><p>Wrap up with us as we consider the economic realities of VR and AR ventures, questioning if the likes of Meta's investments will see a profitable horizon. We also investigate the intersection of AI and software, from the potential of Apple's own AI model, Maggie, to the evolving capabilities of everyday tools we all use. And for a dash of the unexpected, we share our thoughts on the darker side of technology, discussing the emergence of 'Only Fake' IDs and the bizarre reality of hacked toothbrushes, ensuring that our listeners leave not just entertained, but also with a deeper understanding of the tech that's weaving into the fabric of our daily lives.</p>
]]></description>
      <pubDate>Mon, 12 Feb 2024 22:09:16 +0000</pubDate>
      <author>hello@theblur.ai (Hunter Powers, Daniel Bishop)</author>
      <link>https://theblur.ai/</link>
      <content:encoded><![CDATA[<p>Unlock the future of tech with us, Hunter Powers and Daniel Bishop, as we explore the bleeding edge of VR/AR headsets in our debut podcast episode. Prepare to be whisked away on a journey through the digital landscape, where we'll share our experiences with the Apple Vision Pro, comparing it to its rivals, and discussing how these devices are reshaping our work and play. From the quirks of modern delivery to the anticipation of new gadgets, our opening banter sets the stage for a series of tech-centric discussions infused with humor and insight.</p><p>Get ready to glimpse the workplace of tomorrow as we delve into virtual office spaces and augmented leisure activities like VR paint nights. Experience how the Apple Vision Pro is changing the game for remote workers, offering new levels of productivity and immersion. We recount the nuances of virtual environments and share personal tales that highlight how technology is blurring the lines between our professional and personal lives. Plus, we debate whether these advanced headsets could ever replace traditional devices like the MacBook Pro, with a nod to the potential integration of generative AI and how it might revolutionize the way we interact with our smart homes.</p><p>Wrap up with us as we consider the economic realities of VR and AR ventures, questioning if the likes of Meta's investments will see a profitable horizon. We also investigate the intersection of AI and software, from the potential of Apple's own AI model, Maggie, to the evolving capabilities of everyday tools we all use. And for a dash of the unexpected, we share our thoughts on the darker side of technology, discussing the emergence of 'Only Fake' IDs and the bizarre reality of hacked toothbrushes, ensuring that our listeners leave not just entertained, but also with a deeper understanding of the tech that's weaving into the fabric of our daily lives.</p>
]]></content:encoded>
      <enclosure length="60761988" type="audio/mpeg" url="https://cdn.simplecast.com/audio/511fa219-53ad-4baa-9750-d5b8a0c60318/episodes/38d2e392-50e0-460f-8b19-7ec055d8ad6a/audio/8b6e8fb0-89be-4610-a1b6-c0fcbd56fd05/default_tc.mp3?aid=rss_feed&amp;feed=BBskdoOD"/>
      <itunes:title>Apple’s Vision Pro, META&apos;s Money Pit, Your Toothbrush is Hacked | EP01</itunes:title>
      <itunes:author>Hunter Powers, Daniel Bishop</itunes:author>
      <itunes:duration>01:01:06</itunes:duration>
      <itunes:summary>In this kickoff of &quot;They Might Be Self-Aware,&quot; tech wizards Hunter Powers and Daniel Bishop dive deep into the metaverse. They pit the Apple Vision Pro against the Oculus Quest 3, dissecting everything from video quality to the future of virtual workspaces. The duo debates the quirky yet concerning trend of smart, internet-connected gadgets – yes, even toothbrushes. Amidst their banter, they ponder generative AI&apos;s monumental role in shaping our trajectory and the implications of tech giants like Apple and Google harnessing these advanced AI models. Join the rollercoaster ride through the technological marvels and mayhem impacting us today!</itunes:summary>
      <itunes:subtitle>In this kickoff of &quot;They Might Be Self-Aware,&quot; Hunter Powers and Daniel Bishop pit the Apple Vision Pro against the Oculus Quest 3, dissecting everything from video quality to the future of virtual workspaces.</itunes:subtitle>
      <itunes:keywords>vr/ar headsets, vr paint nights, meta, macbook pro, profitability, virtual environments, identity verification, artificial intelligence, smart homes, technology, remote work, hacking, wearable technology, virtual office spaces, tech industry, toothbrushes, software applications, augmented reality</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1</itunes:episode>
    </item>
  </channel>
</rss>