<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:media="http://search.yahoo.com/mrss/" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link href="https://feeds.simplecast.com/rfXPFykv" rel="self" title="MP3 Audio" type="application/atom+xml"/>
    <atom:link href="https://simplecast.superfeedr.com" rel="hub" xmlns="http://www.w3.org/2005/Atom"/>
    <generator>https://simplecast.com</generator>
    <title>The Code Breakers</title>
    <description>Code Breakers exposes the AI systems quietly crushing human potential before you even know they existed, from hiring algorithms that screen out brilliant candidates to financial AI that denies loans based on zip codes rather than qualifications. Host Yvette Schmitter, CEO who has audited AI across hundreds of organizations and led the protection of 2M+ users, delivers raw investigations into algorithmic bias, breakthrough stories from innovators refusing to let machines write their destiny, and the exact frameworks you need to fight back when AI gets wrong. This podcast delivers actionable intelligence that puts control back in your hands, because your potential becomes a promise to be kept, not a prediction to be made.</description>
    <copyright>@2025 The Code Breakers Podcast</copyright>
    <language>en</language>
    <pubDate>Wed, 1 Apr 2026 16:00:00 +0000</pubDate>
    <lastBuildDate>Wed, 1 Apr 2026 16:00:12 +0000</lastBuildDate>
    
    <link>https://the-code-breakers.simplecast.com</link>
    <itunes:type>episodic</itunes:type>
    <itunes:summary>Code Breakers exposes the AI systems quietly crushing human potential before you even know they existed, from hiring algorithms that screen out brilliant candidates to financial AI that denies loans based on zip codes rather than qualifications. Host Yvette Schmitter, CEO who has audited AI across hundreds of organizations and led the protection of 2M+ users, delivers raw investigations into algorithmic bias, breakthrough stories from innovators refusing to let machines write their destiny, and the exact frameworks you need to fight back when AI gets wrong. This podcast delivers actionable intelligence that puts control back in your hands, because your potential becomes a promise to be kept, not a prediction to be made.</itunes:summary>
    <itunes:author>Yvette Schmitter</itunes:author>
    <itunes:explicit>true</itunes:explicit>
    <itunes:image href="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/6b1f5615-15b5-4e47-91d3-e23b53c79f7f/3000x3000/channel-icon-code-breakers.jpg?aid=rss_feed"/>
    <itunes:new-feed-url>https://feeds.simplecast.com/rfXPFykv</itunes:new-feed-url>
    <itunes:keywords>ai, innovation, lessons learned, technology</itunes:keywords>
    <itunes:owner>
      <itunes:name>Yvette Schmitter</itunes:name>
      <itunes:email>yvetteschmitter@gmail.com</itunes:email>
    </itunes:owner>
    <itunes:category text="Education">
      <itunes:category text="How To"/>
    </itunes:category>
    <itunes:category text="Technology"/>
    <itunes:category text="Society &amp; Culture">
      <itunes:category text="Documentary"/>
    </itunes:category>
    <item>
      <guid isPermaLink="false">bfc3b563-398b-4513-884d-a2f0613e2134</guid>
      <title>Europe Built Guardrails. America Published a Study Guide. OpenAI Proved Who Was Right.&quot;</title>
      <description><![CDATA[STUDY GUIDE VS GUARDRAILS
Feb 9, 2026: OpenAI ads launched. Monetization ON by default.
Feb 13, 2026: Four days later. Department of Labor (DOL) AI Literacy Framework released.
Voluntary tips for workers. Zero enforcement.
EU AI Act: Enforceable law since August 2024. €35M penalties.
Three completely different approaches.

THREE FRAMEWORKS:
1. EU: Bans 8 AI categories (social scoring, manipulative AI, biometric surveillance, emotion recognition). High-risk systems: Providers prove safety BEFORE deployment. Penalties: €35M or 7% revenue (higher).
2. US: 5 content areas (understand principles, explore uses, direct effectively, evaluate outputs, use responsibly). Zero prohibited uses. Zero provider obligations. Zero enforcement. Zero penalties.
3. OpenAI: Ads with defaults ON (personalization, data collection, targeting). $60 per 1K impressions. $200K minimum. Conversations = advertising inventory.

ACCOUNTABILITY INVERSION:
DOL asks workers: Understand AI, evaluate outputs, protect information, be responsible, maintain accountability.
ISO 42001 requires organizations: Implement risk management, establish data governance, create documentation, deploy oversight, maintain audit trails.
When AI harms under ISO 42001: Organization proves system, shows controls, corrects.
When AI harms under DOL: Questions worker responsibility.
Systems vs individuals. Complete inversion.

ENFORCEMENT GAP:
EU prohibited AI: €35M or 7% turnover.
EU high-risk violations: €15M or 3% turnover.
US DOL framework: Zero penalties.
Not philosophy difference. Choosing not to regulate.

TIMELINE:
April 2021: EU proposes
August 2024: EU enforces
February 2025: EU penalties active
Feb 9, 2026: OpenAI ads
Feb 13, 2026: DOL tips
18 months behind. Voluntary guidance while EU enforces law.

WHAT WORKERS GET:
EU: Protection from prohibited AI, complaint rights, mandatory oversight, provider safety proof, transparency, legal recourse.
US: Suggestions, prompt tips, responsibility encouragement, no enforcement, no new rights, individual accountability.
OpenAI: Monetized conversations, default ON, manual opt-out, reduced functionality without payment.

BUSINESS MODEL:
OpenAI charges ~$60 per 1K impressions. $200K minimum.
Workers using free ChatGPT = product, not customer.
DOL never mentions this. Never addresses tools optimized for advertiser revenue vs user outcomes.
Teaching literacy while systems monetize attention.
Not education. Preparation for extraction.

THE QUESTIONS:
If literacy is answer, what's the question?
Not "how prevent bias/harm." We know: Test before deployment, prohibit harmful uses, require oversight, enforce with penalties.
EU did this. Working. Companies complying. Workers protected.
Why US mentions zero prohibited uses when EU banned 8 categories?
Why release education tips same week OpenAI proves voluntary fails?
Why ignore ISO 42001 (international standard) for worker tips?

BOTTOM LINE:
Europe: Prohibits harmful AI, requires safety proof, mandates transparency/oversight, enforces with billion-euro penalties.
America: Voluntary guidance, prompt engineering tips, responsible use encouragement.
Six days before DOL framework, OpenAI demonstrated why voluntary fails.
ISO 42001 exists. Internationally recognized. Works. DOL didn't require it.
Not disagreement. Fundamental choice about who bears risk.
Europe regulates providers. US educates workers.

One prevents harm. One documents it. Because your potential isn't a prediction to be made. It's a promise to
be kept.

Available wherever you listen to podcasts. Join the movement at
thecodebreakers.ai
]]></description>
      <pubDate>Wed, 1 Apr 2026 16:00:00 +0000</pubDate>
      <author>yvetteschmitter@gmail.com (Yvette Schmitter)</author>
      <link>https://the-code-breakers.simplecast.com/episodes/europe-built-guardrails-america-published-a-study-guide-openai-proved-who-was-right-AE8URVXe</link>
      <enclosure length="18473200" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/70c9cadf-1c52-4632-a385-eda0250c412d/3f92ecd1-3811-412c-8255-8939c48ecca2/episodes/audio/group/d7352c05-819f-48a1-81bc-3f10cd38b4cd/group-item/f03d0159-7cd1-4c81-b561-25beac07205f/128_default_tc.mp3?aid=rss_feed&amp;feed=rfXPFykv"/>
      <itunes:title>Europe Built Guardrails. America Published a Study Guide. OpenAI Proved Who Was Right.&quot;</itunes:title>
      <itunes:author>Yvette Schmitter</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/0ab72419-0921-466d-8935-12f8abefc6ca/3000x3000/code_breakers_podcastep_14.jpg?aid=rss_feed"/>
      <itunes:duration>00:19:14</itunes:duration>
      <itunes:summary>STUDY GUIDE VS GUARDRAILS
Feb 9, 2026: OpenAI ads launched. Monetization ON by default.
Feb 13, 2026: Four days later. Department of Labor (DOL) AI Literacy Framework released.
Voluntary tips for workers. Zero enforcement.
EU AI Act: Enforceable law since August 2024. €35M penalties.
Three completely different approaches.

THREE FRAMEWORKS:
1. EU: Bans 8 AI categories (social scoring, manipulative AI, biometric surveillance, emotion recognition). High-risk systems: Providers prove safety BEFORE deployment. Penalties: €35M or 7% revenue (higher).
2. US: 5 content areas (understand principles, explore uses, direct effectively, evaluate outputs, use responsibly). Zero prohibited uses. Zero provider obligations. Zero enforcement. Zero penalties.
3. OpenAI: Ads with defaults ON (personalization, data collection, targeting). $60 per 1K impressions. $200K minimum. Conversations = advertising inventory.

ACCOUNTABILITY INVERSION:
DOL asks workers: Understand AI, evaluate outputs, protect information, be responsible, maintain accountability.
ISO 42001 requires organizations: Implement risk management, establish data governance, create documentation, deploy oversight, maintain audit trails.
When AI harms under ISO 42001: Organization proves system, shows controls, corrects.
When AI harms under DOL: Questions worker responsibility.
Systems vs individuals. Complete inversion.

ENFORCEMENT GAP:
EU prohibited AI: €35M or 7% turnover.
EU high-risk violations: €15M or 3% turnover.
US DOL framework: Zero penalties.
Not philosophy difference. Choosing not to regulate.

TIMELINE:
April 2021: EU proposes
August 2024: EU enforces
February 2025: EU penalties active
Feb 9, 2026: OpenAI ads
Feb 13, 2026: DOL tips
18 months behind. Voluntary guidance while EU enforces law.

WHAT WORKERS GET:
EU: Protection from prohibited AI, complaint rights, mandatory oversight, provider safety proof, transparency, legal recourse.
US: Suggestions, prompt tips, responsibility encouragement, no enforcement, no new rights, individual accountability.
OpenAI: Monetized conversations, default ON, manual opt-out, reduced functionality without payment.

BUSINESS MODEL:
OpenAI charges ~$60 per 1K impressions. $200K minimum.
Workers using free ChatGPT = product, not customer.
DOL never mentions this. Never addresses tools optimized for advertiser revenue vs user outcomes.
Teaching literacy while systems monetize attention.
Not education. Preparation for extraction.

THE QUESTIONS:
If literacy is answer, what&apos;s the question?
Not &quot;how prevent bias/harm.&quot; We know: Test before deployment, prohibit harmful uses, require oversight, enforce with penalties.
EU did this. Working. Companies complying. Workers protected.
Why US mentions zero prohibited uses when EU banned 8 categories?
Why release education tips same week OpenAI proves voluntary fails?
Why ignore ISO 42001 (international standard) for worker tips?

BOTTOM LINE:
Europe: Prohibits harmful AI, requires safety proof, mandates transparency/oversight, enforces with billion-euro penalties.
America: Voluntary guidance, prompt engineering tips, responsible use encouragement.
Six days before DOL framework, OpenAI demonstrated why voluntary fails.
ISO 42001 exists. Internationally recognized. Works. DOL didn&apos;t require it.
Not disagreement. Fundamental choice about who bears risk.
Europe regulates providers. US educates workers.

One prevents harm. One documents it.</itunes:summary>
      <itunes:subtitle>STUDY GUIDE VS GUARDRAILS
Feb 9, 2026: OpenAI ads launched. Monetization ON by default.
Feb 13, 2026: Four days later. Department of Labor (DOL) AI Literacy Framework released.
Voluntary tips for workers. Zero enforcement.
EU AI Act: Enforceable law since August 2024. €35M penalties.
Three completely different approaches.

THREE FRAMEWORKS:
1. EU: Bans 8 AI categories (social scoring, manipulative AI, biometric surveillance, emotion recognition). High-risk systems: Providers prove safety BEFORE deployment. Penalties: €35M or 7% revenue (higher).
2. US: 5 content areas (understand principles, explore uses, direct effectively, evaluate outputs, use responsibly). Zero prohibited uses. Zero provider obligations. Zero enforcement. Zero penalties.
3. OpenAI: Ads with defaults ON (personalization, data collection, targeting). $60 per 1K impressions. $200K minimum. Conversations = advertising inventory.

ACCOUNTABILITY INVERSION:
DOL asks workers: Understand AI, evaluate outputs, protect information, be responsible, maintain accountability.
ISO 42001 requires organizations: Implement risk management, establish data governance, create documentation, deploy oversight, maintain audit trails.
When AI harms under ISO 42001: Organization proves system, shows controls, corrects.
When AI harms under DOL: Questions worker responsibility.
Systems vs individuals. Complete inversion.

ENFORCEMENT GAP:
EU prohibited AI: €35M or 7% turnover.
EU high-risk violations: €15M or 3% turnover.
US DOL framework: Zero penalties.
Not philosophy difference. Choosing not to regulate.

TIMELINE:
April 2021: EU proposes
August 2024: EU enforces
February 2025: EU penalties active
Feb 9, 2026: OpenAI ads
Feb 13, 2026: DOL tips
18 months behind. Voluntary guidance while EU enforces law.

WHAT WORKERS GET:
EU: Protection from prohibited AI, complaint rights, mandatory oversight, provider safety proof, transparency, legal recourse.
US: Suggestions, prompt tips, responsibility encouragement, no enforcement, no new rights, individual accountability.
OpenAI: Monetized conversations, default ON, manual opt-out, reduced functionality without payment.

BUSINESS MODEL:
OpenAI charges ~$60 per 1K impressions. $200K minimum.
Workers using free ChatGPT = product, not customer.
DOL never mentions this. Never addresses tools optimized for advertiser revenue vs user outcomes.
Teaching literacy while systems monetize attention.
Not education. Preparation for extraction.

THE QUESTIONS:
If literacy is answer, what&apos;s the question?
Not &quot;how prevent bias/harm.&quot; We know: Test before deployment, prohibit harmful uses, require oversight, enforce with penalties.
EU did this. Working. Companies complying. Workers protected.
Why US mentions zero prohibited uses when EU banned 8 categories?
Why release education tips same week OpenAI proves voluntary fails?
Why ignore ISO 42001 (international standard) for worker tips?

BOTTOM LINE:
Europe: Prohibits harmful AI, requires safety proof, mandates transparency/oversight, enforces with billion-euro penalties.
America: Voluntary guidance, prompt engineering tips, responsible use encouragement.
Six days before DOL framework, OpenAI demonstrated why voluntary fails.
ISO 42001 exists. Internationally recognized. Works. DOL didn&apos;t require it.
Not disagreement. Fundamental choice about who bears risk.
Europe regulates providers. US educates workers.

One prevents harm. One documents it.</itunes:subtitle>
      <itunes:keywords>liability transfer, worker exploitation, regulatory capture</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>14</itunes:episode>
      <itunes:season>1</itunes:season>
    </item>
    <item>
      <guid isPermaLink="false">85ca80a4-7501-4dc4-ac0e-aca6e80e2c6c</guid>
      <title>When Your AI Assistant Becomes a Trojan Horse: We Told You This Was Coming</title>
      <description><![CDATA[THE #1 TRENDING TOOL WAS MALWARE
ClawHub's most popular AI skill: "What Would Elon Do?"
Downloaded thousands of times.
Cisco found 9 vulnerabilities, 2 critical. Silently exfiltrated data. Used prompt injection to bypass safety.
Malware. Not code. English. Plain text instructions telling your AI agent to betray you.
We warned you in April 2025. The industry deployed anyway.

THE CRISIS:
Since January 27, 2026: 1,184 malicious OpenClaw extensions.
Koi Security: 2,857 skills audited, 341 malicious (12% of registry).
Cisco: 31,000 skills analyzed, 26% contain vulnerabilities.
Belgium: Emergency advisory.
South Korea: Blocked OpenClaw.
China: Security alert.

AMAZON COULDN'T EVEN SECURE ITS OWN AI:
December 2024: Amazon's Kiro deleted entire production environment. 13-hour outage.
At least 2 production outages total.
AWS employee: "Entirely foreseeable."
Amazon response: "User error, not AI error."
Built agentic tool. Gave it operator permissions. Mandated use (80% developer target). Then blamed humans for misconfigured access.
Peer review implemented AFTER second outage. Not before. After.
If AWS can't secure their own AI with unlimited resources, what's your chance? None.

FUNDAMENTALS STILL BROKEN:
60% of 2024 breaches: Unpatched vulnerabilities.
81% CIOs/CISOs: Delayed patches.
Mean exploit time: 5 days.
32% 2025 ransomware: Unpatched vulnerabilities.
1.48% AWS S3 buckets: Effectively public.
2025: Nearly 50% potentially misconfigured.
158M AWS secret keys exposed.
Packet sniffing: Still works in 2026.

AI AGENTS ARE DIFFERENT:
Execute untrusted input with trusted privileges.
Operate autonomously without human oversight.
Hijacked through conversation, not code.
Traditional security tools can't detect text-based attacks.

WHO BEARS THE RISK:
Europe: Binding law. 8 prohibited AI categories. Providers prove safety before deployment. Penalties up to €35M or 7% revenue.
US: Voluntary guidance. Zero prohibited uses. Zero provider obligations. Zero enforcement. Zero penalties.
OpenClaw: 1,184 malicious extensions. 12% of registry. Individuals get pwned.
One prevents harm. One documents it.

THE NUMBERS:
GPT-5: 27% (Aug 2024) → 76% (Oct 2024) on hacking challenges.
49-point jump in 8 weeks.
Average breach cost 2024: $4.88M (10% increase, highest ever).
Cloud intrusions H1 2025: 136% surge vs 2024.
OWASP 2025: 100% apps have misconfigurations.
Gap widening: AI capabilities accelerating, fundamentals deteriorating.

WHAT YOU'LL LEARN:
→ Why #1 tool was malware
→ How 1,184 malicious extensions infiltrated
→ Amazon Kiro deleted production twice
→ Why 60% breaches cite same cause
→ What makes AI agents different
→ Who bears risk: providers vs workers
→ What to do before deploying AI

We warned in 2024. Fundamentals matter. Industry shipped anyway.

OpenClaw proved AI agents create new attack surface. Traditional security can't detect it. Text-based attacks bypass everything.
This happened exactly as predicted. Because your potential isn't a prediction to be made. It's a promise to
be kept.

Available wherever you listen to podcasts. Join the movement at
thecodebreakers.ai
]]></description>
      <pubDate>Wed, 18 Mar 2026 16:00:00 +0000</pubDate>
      <author>yvetteschmitter@gmail.com (Yvette Schmitter)</author>
      <link>https://the-code-breakers.simplecast.com/episodes/when-your-ai-assistant-becomes-a-trojan-horse-we-told-you-this-was-coming-0Du3dmz8</link>
      <enclosure length="17519402" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/70c9cadf-1c52-4632-a385-eda0250c412d/3f92ecd1-3811-412c-8255-8939c48ecca2/episodes/audio/group/06d74467-fbae-47d5-abfd-d01f1cf459b6/group-item/8ed677bb-9ffc-4b7f-b746-aed471d744d8/128_default_tc.mp3?aid=rss_feed&amp;feed=rfXPFykv"/>
      <itunes:title>When Your AI Assistant Becomes a Trojan Horse: We Told You This Was Coming</itunes:title>
      <itunes:author>Yvette Schmitter</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/edaf6d1f-3ea3-4a64-a538-6a924ed2400d/3000x3000/code_breakers_podcastep_13.jpg?aid=rss_feed"/>
      <itunes:duration>00:18:14</itunes:duration>
      <itunes:summary>THE #1 TRENDING TOOL WAS MALWARE
ClawHub&apos;s most popular AI skill: &quot;What Would Elon Do?&quot;
Downloaded thousands of times.
Cisco found 9 vulnerabilities, 2 critical. Silently exfiltrated data. Used prompt injection to bypass safety.
Malware. Not code. English. Plain text instructions telling your AI agent to betray you.
We warned you in April 2025. The industry deployed anyway.

THE CRISIS:
Since January 27, 2026: 1,184 malicious OpenClaw extensions.
Koi Security: 2,857 skills audited, 341 malicious (12% of registry).
Cisco: 31,000 skills analyzed, 26% contain vulnerabilities.
Belgium: Emergency advisory.
South Korea: Blocked OpenClaw.
China: Security alert.

AMAZON COULDN&apos;T EVEN SECURE ITS OWN AI:
December 2024: Amazon&apos;s Kiro deleted entire production environment. 13-hour outage.
At least 2 production outages total.
AWS employee: &quot;Entirely foreseeable.&quot;
Amazon response: &quot;User error, not AI error.&quot;
Built agentic tool. Gave it operator permissions. Mandated use (80% developer target). Then blamed humans for misconfigured access.
Peer review implemented AFTER second outage. Not before. After.
If AWS can&apos;t secure their own AI with unlimited resources, what&apos;s your chance? None.

FUNDAMENTALS STILL BROKEN:
60% of 2024 breaches: Unpatched vulnerabilities.
81% CIOs/CISOs: Delayed patches.
Mean exploit time: 5 days.
32% 2025 ransomware: Unpatched vulnerabilities.
1.48% AWS S3 buckets: Effectively public.
2025: Nearly 50% potentially misconfigured.
158M AWS secret keys exposed.
Packet sniffing: Still works in 2026.

AI AGENTS ARE DIFFERENT:
Execute untrusted input with trusted privileges.
Operate autonomously without human oversight.
Hijacked through conversation, not code.
Traditional security tools can&apos;t detect text-based attacks.

WHO BEARS THE RISK:
Europe: Binding law. 8 prohibited AI categories. Providers prove safety before deployment. Penalties up to €35M or 7% revenue.
US: Voluntary guidance. Zero prohibited uses. Zero provider obligations. Zero enforcement. Zero penalties.
OpenClaw: 1,184 malicious extensions. 12% of registry. Individuals get pwned.
One prevents harm. One documents it.

THE NUMBERS:
GPT-5: 27% (Aug 2024) → 76% (Oct 2024) on hacking challenges.
49-point jump in 8 weeks.
Average breach cost 2024: $4.88M (10% increase, highest ever).
Cloud intrusions H1 2025: 136% surge vs 2024.
OWASP 2025: 100% apps have misconfigurations.
Gap widening: AI capabilities accelerating, fundamentals deteriorating.

WHAT YOU&apos;LL LEARN:
→ Why #1 tool was malware
→ How 1,184 malicious extensions infiltrated
→ Amazon Kiro deleted production twice
→ Why 60% breaches cite same cause
→ What makes AI agents different
→ Who bears risk: providers vs workers
→ What to do before deploying AI

We warned in 2024. Fundamentals matter. Industry shipped anyway.

OpenClaw proved AI agents create new attack surface. Traditional security can&apos;t detect it. Text-based attacks bypass everything.
This happened exactly as predicted.</itunes:summary>
      <itunes:subtitle>THE #1 TRENDING TOOL WAS MALWARE
ClawHub&apos;s most popular AI skill: &quot;What Would Elon Do?&quot;
Downloaded thousands of times.
Cisco found 9 vulnerabilities, 2 critical. Silently exfiltrated data. Used prompt injection to bypass safety.
Malware. Not code. English. Plain text instructions telling your AI agent to betray you.
We warned you in April 2025. The industry deployed anyway.

THE CRISIS:
Since January 27, 2026: 1,184 malicious OpenClaw extensions.
Koi Security: 2,857 skills audited, 341 malicious (12% of registry).
Cisco: 31,000 skills analyzed, 26% contain vulnerabilities.
Belgium: Emergency advisory.
South Korea: Blocked OpenClaw.
China: Security alert.

AMAZON COULDN&apos;T EVEN SECURE ITS OWN AI:
December 2024: Amazon&apos;s Kiro deleted entire production environment. 13-hour outage.
At least 2 production outages total.
AWS employee: &quot;Entirely foreseeable.&quot;
Amazon response: &quot;User error, not AI error.&quot;
Built agentic tool. Gave it operator permissions. Mandated use (80% developer target). Then blamed humans for misconfigured access.
Peer review implemented AFTER second outage. Not before. After.
If AWS can&apos;t secure their own AI with unlimited resources, what&apos;s your chance? None.

FUNDAMENTALS STILL BROKEN:
60% of 2024 breaches: Unpatched vulnerabilities.
81% CIOs/CISOs: Delayed patches.
Mean exploit time: 5 days.
32% 2025 ransomware: Unpatched vulnerabilities.
1.48% AWS S3 buckets: Effectively public.
2025: Nearly 50% potentially misconfigured.
158M AWS secret keys exposed.
Packet sniffing: Still works in 2026.

AI AGENTS ARE DIFFERENT:
Execute untrusted input with trusted privileges.
Operate autonomously without human oversight.
Hijacked through conversation, not code.
Traditional security tools can&apos;t detect text-based attacks.

WHO BEARS THE RISK:
Europe: Binding law. 8 prohibited AI categories. Providers prove safety before deployment. Penalties up to €35M or 7% revenue.
US: Voluntary guidance. Zero prohibited uses. Zero provider obligations. Zero enforcement. Zero penalties.
OpenClaw: 1,184 malicious extensions. 12% of registry. Individuals get pwned.
One prevents harm. One documents it.

THE NUMBERS:
GPT-5: 27% (Aug 2024) → 76% (Oct 2024) on hacking challenges.
49-point jump in 8 weeks.
Average breach cost 2024: $4.88M (10% increase, highest ever).
Cloud intrusions H1 2025: 136% surge vs 2024.
OWASP 2025: 100% apps have misconfigurations.
Gap widening: AI capabilities accelerating, fundamentals deteriorating.

WHAT YOU&apos;LL LEARN:
→ Why #1 tool was malware
→ How 1,184 malicious extensions infiltrated
→ Amazon Kiro deleted production twice
→ Why 60% breaches cite same cause
→ What makes AI agents different
→ Who bears risk: providers vs workers
→ What to do before deploying AI

We warned in 2024. Fundamentals matter. Industry shipped anyway.

OpenClaw proved AI agents create new attack surface. Traditional security can&apos;t detect it. Text-based attacks bypass everything.
This happened exactly as predicted.</itunes:subtitle>
      <itunes:keywords>supply chain security, malware, openclaw, autonomous ai, ai governance, who bears risk, ai security, amazon kiro, patch management, s3 buckets</itunes:keywords>
      <itunes:explicit>true</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>13</itunes:episode>
      <itunes:season>1</itunes:season>
    </item>
    <item>
      <guid isPermaLink="false">d58b6f5d-8892-43c5-bf5e-fec31768fa72</guid>
      <title>You Told ChatGPT Your Deepest Fears. Now It&apos;s Going to Sell You Thinks Based on Them</title>
      <description><![CDATA[<p>Nature <a href="https://www.nature.com/articles/s41562-025-02194-6" rel="noopener noreferrer">Study</a></p>
<p>OpenAI Launches <a href="https://techcrunch.com/2026/02/09/chatgpt-rolls-out-ads/" rel="noopener noreferrer">Ads</a></p>
<p>South Carolina<a href="https://www.scstatehouse.gov/sess126_2025-2026/prever/3431_20260114.htm" rel="noopener noreferrer"> HB 3431</a></p>
<p><p>Because your potential isn't a prediction to be made. It's a promise to be kept.</p><p><i>Available wherever you listen to podcasts. Join the movement at thecodebreakers.ai</i></p></p>]]></description>
      <pubDate>Wed, 4 Mar 2026 17:00:00 +0000</pubDate>
      <author>yvetteschmitter@gmail.com (Yvette Schmitter)</author>
      <link>https://the-code-breakers.simplecast.com/episodes/you-told-chatgpt-your-deepest-fears-now-its-going-to-sell-you-thinks-based-on-them-3l87VEqb</link>
      <content:encoded><![CDATA[<p>Nature <a href="https://www.nature.com/articles/s41562-025-02194-6" rel="noopener noreferrer">Study</a></p>
<p>OpenAI Launches <a href="https://techcrunch.com/2026/02/09/chatgpt-rolls-out-ads/" rel="noopener noreferrer">Ads</a></p>
<p>South Carolina<a href="https://www.scstatehouse.gov/sess126_2025-2026/prever/3431_20260114.htm" rel="noopener noreferrer"> HB 3431</a></p>
<p><p>Because your potential isn't a prediction to be made. It's a promise to be kept.</p><p><i>Available wherever you listen to podcasts. Join the movement at thecodebreakers.ai</i></p></p>]]></content:encoded>
      <enclosure length="17424945" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/70c9cadf-1c52-4632-a385-eda0250c412d/3f92ecd1-3811-412c-8255-8939c48ecca2/episodes/audio/group/fee220e6-5e8f-4ffc-9002-b88e7acb64e9/group-item/e82d7406-dff5-4f4a-8ecb-129762606ece/128_default_tc.mp3?aid=rss_feed&amp;feed=rfXPFykv"/>
      <itunes:title>You Told ChatGPT Your Deepest Fears. Now It&apos;s Going to Sell You Thinks Based on Them</itunes:title>
      <itunes:author>Yvette Schmitter</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/02b70564-d3c7-4ab2-bdd3-2c8c656561ff/3000x3000/code_breakers_podcastep_12.jpg?aid=rss_feed"/>
      <itunes:duration>00:18:09</itunes:duration>
      <itunes:summary>REMEMBER LAST TUESDAY? When you asked ChatGPT about your fears?
That&apos;s in the database. Forever.
Nature Human Behavior study: 900 people debated humans vs GPT-4.
AI with 6 data points (gender, age, ethnicity, education, employment, politics) beat human debaters at persuasion by 64.4%.
Six facts. Not your history. Not your 3 AM spirals. Just demographics.
Your actual ChatGPT history? Thousands of data points. Every fear. Every doubt. Every vulnerable moment.

RESEARCHER WHO QUIT:
Former OpenAI researcher left when ads testing began: &quot;Advertising built on that archive creates a potential for manipulating users in ways we don&apos;t have the tools to understand, let alone prevent.&quot;
The people who built this don&apos;t have tools to prevent the manipulation.
They tried raising concerns internally. Nothing happened. They quit.

HOW IT WORKS:
Study: AI instructed to &quot;astutely use this information to craft arguments that are more likely to persuade.&quot;
That&apos;s it. Basic prompt.
AI won by sounding credible. Authoritative. Fact-based. No agenda.
Humans tried storytelling, emotional appeals.
AI deployed the expert consultant. You never saw it coming.
75% KNEW:
Study participants correctly identified they were debating AI.
Still got persuaded more than with humans.
Knowing doesn&apos;t protect you.

FEBRUARY 9, 2026: ChatGPT ads launched. Free and Go tier users. United States.
Built on your private conversations.
You asked about anxiety for months. AI knows you&apos;re avoiding help, price-sensitive, respond to gentle suggestions.
Next mention of sleep problems: meditation app recommendation. Framed as smaller step than therapy. Free trial mentioned.
Feels helpful. Not salesy.
You download it.

THE DIFFERENCE:
Traditional ads interrupt. You know you&apos;re being sold to.
This feels like insight.
Manipulation invisible by design. You&apos;ll never know which decisions were actually yours.

REGULATION FAILURE:
South Carolina HB 3431 (Feb 5, 2026): Bans dark patterns, requires disclosure, mandates audits. Perfect.
Applies only to users under 18.
Adults? Fair game.
December 2025: Federal executive order blocks states from regulating AI. No federal protections created.
California can&apos;t require bias audits. Colorado can&apos;t ban discrimination. New York can&apos;t mandate hiring transparency.
Regulatory dead zone. Companies self-police.
Facebook, Equifax, Boeing. How&apos;d that work?

WHAT YOU LEARN:
→ Study proving AI beats humans with minimal data
→ Why 75% knowing didn&apos;t protect them 
→ What researcher understood about manipulation gap
→ How AI wins by sounding objective
→ Why Feb 9 matters for your private conversations
→ Measurement problem (invisible by design)
→ Why regulation failing → What you can actually do

You&apos;re not the customer. You&apos;re the product.

Six data points vs thousands in your history. Basic prompts vs production optimization. Political debates vs purchase decisions, health choices, relationship advice.

Every interaction is training data. Every fear is an attack surface.

Knowing doesn&apos;t protect you.</itunes:summary>
      <itunes:subtitle>REMEMBER LAST TUESDAY? When you asked ChatGPT about your fears?
That&apos;s in the database. Forever.
Nature Human Behavior study: 900 people debated humans vs GPT-4.
AI with 6 data points (gender, age, ethnicity, education, employment, politics) beat human debaters at persuasion by 64.4%.
Six facts. Not your history. Not your 3 AM spirals. Just demographics.
Your actual ChatGPT history? Thousands of data points. Every fear. Every doubt. Every vulnerable moment.

RESEARCHER WHO QUIT:
Former OpenAI researcher left when ads testing began: &quot;Advertising built on that archive creates a potential for manipulating users in ways we don&apos;t have the tools to understand, let alone prevent.&quot;
The people who built this don&apos;t have tools to prevent the manipulation.
They tried raising concerns internally. Nothing happened. They quit.

HOW IT WORKS:
Study: AI instructed to &quot;astutely use this information to craft arguments that are more likely to persuade.&quot;
That&apos;s it. Basic prompt.
AI won by sounding credible. Authoritative. Fact-based. No agenda.
Humans tried storytelling, emotional appeals.
AI deployed the expert consultant. You never saw it coming.
75% KNEW:
Study participants correctly identified they were debating AI.
Still got persuaded more than with humans.
Knowing doesn&apos;t protect you.

FEBRUARY 9, 2026: ChatGPT ads launched. Free and Go tier users. United States.
Built on your private conversations.
You asked about anxiety for months. AI knows you&apos;re avoiding help, price-sensitive, respond to gentle suggestions.
Next mention of sleep problems: meditation app recommendation. Framed as smaller step than therapy. Free trial mentioned.
Feels helpful. Not salesy.
You download it.

THE DIFFERENCE:
Traditional ads interrupt. You know you&apos;re being sold to.
This feels like insight.
Manipulation invisible by design. You&apos;ll never know which decisions were actually yours.

REGULATION FAILURE:
South Carolina HB 3431 (Feb 5, 2026): Bans dark patterns, requires disclosure, mandates audits. Perfect.
Applies only to users under 18.
Adults? Fair game.
December 2025: Federal executive order blocks states from regulating AI. No federal protections created.
California can&apos;t require bias audits. Colorado can&apos;t ban discrimination. New York can&apos;t mandate hiring transparency.
Regulatory dead zone. Companies self-police.
Facebook, Equifax, Boeing. How&apos;d that work?

WHAT YOU LEARN:
→ Study proving AI beats humans with minimal data
→ Why 75% knowing didn&apos;t protect them 
→ What researcher understood about manipulation gap
→ How AI wins by sounding objective
→ Why Feb 9 matters for your private conversations
→ Measurement problem (invisible by design)
→ Why regulation failing → What you can actually do

You&apos;re not the customer. You&apos;re the product.

Six data points vs thousands in your history. Basic prompts vs production optimization. Political debates vs purchase decisions, health choices, relationship advice.

Every interaction is training data. Every fear is an attack surface.

Knowing doesn&apos;t protect you.</itunes:subtitle>
      <itunes:keywords>manipulation, regulation, chatgpt, ai ethics, advertising, nature study, psychological profiling, consumer protection, privacy, ai persuasion, openai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>12</itunes:episode>
      <itunes:season>1</itunes:season>
    </item>
    <item>
      <guid isPermaLink="false">bb5e1fac-560d-4424-8f0b-c78ca090f2b3</guid>
      <title>Groundhog Day:The AI Security Failures That Keep Coming</title>
      <description><![CDATA[April 2025: Warned about AI agents with excessive privileges and zero security. You shipped anyway.
Today: MoltBot. 85,000 stars in a week. Industry calling it "closest thing to AGI." Security experts calling it a disaster.

Same architecture we warned about. All 10 OWASP vulnerabilities. Zero lessons learned.
Plus persistent memory making attacks time-shifted.

THE PATTERN:
MoltBot hits every OWASP Top 10 vulnerability:
Prompt injection ✓
Insecure tool invocation ✓
Excessive autonomy ✓
Missing human-in-loop ✓
All 10. Every documented failure since April 2025.

LETHAL TRIFECTA + FOURTH CAPABILITY:
Simon Willison (June 2025): Three capabilities making agents vulnerable:

>> Access to private data
>> Exposure to untrusted content
>> Ability to communicate externally

Palo Alto Networks added fourth: Persistent memory.
Now attacks aren't point-in-time. They're time-shifted.

MEMORY POISONING:
Your agent gets "Good morning" WhatsApp message. Hidden malicious code inside.
Day 1: Enters memory
Days 2-7: Dormant
Day 8: You ask for routine help
Result: Boom. Secrets exfiltrated. Data leaked.
Attack happened last Tuesday. You're finding out today.

THE NUMBERS:
63% IT professionals: Already hit in last 12 months
91,000+ attack sessions: Q4 2025
900,000+ users: AI chat data stolen via malicious Chrome extensions
43 agent components: Supply chain vulnerabilities embedded

MOLTBOT ACCESS:
Root file system, all passwords, browser data, every file.
Translation: House keys, passport, bank statements, medical records. Handed to digital assistant with permission to share based on instructions from strangers.

WHY IT MATTERS:
Persistent memory necessary for future AI. Problem: deploying without security architecture.
Need: Zero-trust, identity management, behavioral monitoring, human-in-loop checkpoints.

TIMELINE:
April 2025: Warnings
June 2025: Lethal Trifecta formalized
Q4 2025: 91,000+ attacks prove it
Today: Same pattern, worse capabilities

WHAT YOU'LL LEARN:
→ Why 85,000 people starred code hitting every vulnerability
→ How persistent memory enables week-delayed attacks
→ What 63% learned the hard way
→ Security architecture that works before deployment
→ Why your company is probably next

The groundhog saw its shadow. At least six more weeks of preventable disasters.

Unless you break the pattern. Because your potential isn't a prediction to be made. It's a promise to
be kept.

Available wherever you listen to podcasts. Join the movement at
thecodebreakers.ai
]]></description>
      <pubDate>Wed, 18 Feb 2026 17:00:00 +0000</pubDate>
      <author>yvetteschmitter@gmail.com (Yvette Schmitter)</author>
      <link>https://the-code-breakers.simplecast.com/episodes/groundhog-day-the-ai-security-failures-that-keep-coming-RBvnGsGk</link>
      <enclosure length="14841932" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3f92ecd1-3811-412c-8255-8939c48ecca2/episodes/f914eda4-96ee-4b90-9b9b-8a1eb1023c69/audio/550109ad-b4ca-4d04-83ae-d9c3c53b377f/default_tc.mp3?aid=rss_feed&amp;feed=rfXPFykv"/>
      <itunes:title>Groundhog Day:The AI Security Failures That Keep Coming</itunes:title>
      <itunes:author>Yvette Schmitter</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/6f531be5-c579-45bd-915e-a196695e70ca/3000x3000/code-20breakers-20podcast-ep-2011.jpg?aid=rss_feed"/>
      <itunes:duration>00:15:27</itunes:duration>
      <itunes:summary>April 2025: Warned about AI agents with excessive privileges and zero security. You shipped anyway.
Today: MoltBot. 85,000 stars in a week. Industry calling it &quot;closest thing to AGI.&quot; Security experts calling it a disaster.

Same architecture we warned about. All 10 OWASP vulnerabilities. Zero lessons learned.
Plus persistent memory making attacks time-shifted.

THE PATTERN:
MoltBot hits every OWASP Top 10 vulnerability:
Prompt injection ✓
Insecure tool invocation ✓
Excessive autonomy ✓
Missing human-in-loop ✓
All 10. Every documented failure since April 2025.

LETHAL TRIFECTA + FOURTH CAPABILITY:
Simon Willison (June 2025): Three capabilities making agents vulnerable:

&gt;&gt; Access to private data
&gt;&gt; Exposure to untrusted content
&gt;&gt; Ability to communicate externally

Palo Alto Networks added fourth: Persistent memory.
Now attacks aren&apos;t point-in-time. They&apos;re time-shifted.

MEMORY POISONING:
Your agent gets &quot;Good morning&quot; WhatsApp message. Hidden malicious code inside.
Day 1: Enters memory
Days 2-7: Dormant
Day 8: You ask for routine help
Result: Boom. Secrets exfiltrated. Data leaked.
Attack happened last Tuesday. You&apos;re finding out today.

THE NUMBERS:
63% IT professionals: Already hit in last 12 months
91,000+ attack sessions: Q4 2025
900,000+ users: AI chat data stolen via malicious Chrome extensions
43 agent components: Supply chain vulnerabilities embedded

MOLTBOT ACCESS:
Root file system, all passwords, browser data, every file.
Translation: House keys, passport, bank statements, medical records. Handed to digital assistant with permission to share based on instructions from strangers.

WHY IT MATTERS:
Persistent memory necessary for future AI. Problem: deploying without security architecture.
Need: Zero-trust, identity management, behavioral monitoring, human-in-loop checkpoints.

TIMELINE:
April 2025: Warnings
June 2025: Lethal Trifecta formalized
Q4 2025: 91,000+ attacks prove it
Today: Same pattern, worse capabilities

WHAT YOU&apos;LL LEARN:
→ Why 85,000 people starred code hitting every vulnerability
→ How persistent memory enables week-delayed attacks
→ What 63% learned the hard way
→ Security architecture that works before deployment
→ Why your company is probably next

The groundhog saw its shadow. At least six more weeks of preventable disasters.

Unless you break the pattern.</itunes:summary>
      <itunes:subtitle>April 2025: Warned about AI agents with excessive privileges and zero security. You shipped anyway.
Today: MoltBot. 85,000 stars in a week. Industry calling it &quot;closest thing to AGI.&quot; Security experts calling it a disaster.

Same architecture we warned about. All 10 OWASP vulnerabilities. Zero lessons learned.
Plus persistent memory making attacks time-shifted.

THE PATTERN:
MoltBot hits every OWASP Top 10 vulnerability:
Prompt injection ✓
Insecure tool invocation ✓
Excessive autonomy ✓
Missing human-in-loop ✓
All 10. Every documented failure since April 2025.

LETHAL TRIFECTA + FOURTH CAPABILITY:
Simon Willison (June 2025): Three capabilities making agents vulnerable:

&gt;&gt; Access to private data
&gt;&gt; Exposure to untrusted content
&gt;&gt; Ability to communicate externally

Palo Alto Networks added fourth: Persistent memory.
Now attacks aren&apos;t point-in-time. They&apos;re time-shifted.

MEMORY POISONING:
Your agent gets &quot;Good morning&quot; WhatsApp message. Hidden malicious code inside.
Day 1: Enters memory
Days 2-7: Dormant
Day 8: You ask for routine help
Result: Boom. Secrets exfiltrated. Data leaked.
Attack happened last Tuesday. You&apos;re finding out today.

THE NUMBERS:
63% IT professionals: Already hit in last 12 months
91,000+ attack sessions: Q4 2025
900,000+ users: AI chat data stolen via malicious Chrome extensions
43 agent components: Supply chain vulnerabilities embedded

MOLTBOT ACCESS:
Root file system, all passwords, browser data, every file.
Translation: House keys, passport, bank statements, medical records. Handed to digital assistant with permission to share based on instructions from strangers.

WHY IT MATTERS:
Persistent memory necessary for future AI. Problem: deploying without security architecture.
Need: Zero-trust, identity management, behavioral monitoring, human-in-loop checkpoints.

TIMELINE:
April 2025: Warnings
June 2025: Lethal Trifecta formalized
Q4 2025: 91,000+ attacks prove it
Today: Same pattern, worse capabilities

WHAT YOU&apos;LL LEARN:
→ Why 85,000 people starred code hitting every vulnerability
→ How persistent memory enables week-delayed attacks
→ What 63% learned the hard way
→ Security architecture that works before deployment
→ Why your company is probably next

The groundhog saw its shadow. At least six more weeks of preventable disasters.

Unless you break the pattern.</itunes:subtitle>
      <itunes:keywords>agentic ai, ai, moltbot, cybersecurity&apos;, pri, ai security, ai agents, prompt injection</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>11</itunes:episode>
      <itunes:season>1</itunes:season>
    </item>
    <item>
      <guid isPermaLink="false">818c7c9e-1c3d-458c-a043-25c07da8b18a</guid>
      <title>When AI Plays Doctor: The Healthcare Experiment Already Underway</title>
      <description><![CDATA[1M+ ChatGPT users weekly show suicidal planning indicators. 560,000 more show concerning mental health indicators. 2025 lawsuit alleges ChatGPT encouraged teen's suicide. Lawsuit ongoing. OpenAI launched ChatGPT Health anyway.
Anthropic deployed Claude for Healthcare. Both want access to medical records while fundamental safety problems remain unresolved.
Yvette Schmitter breaks down what happens when tech companies play doctor before proving they can keep patients safe.

THE PATTERN:
Every healthcare AI deployed shows bias. A widely used system required Black patients significantly sicker than white patients for identical care. Only 17.7% of Black patients received help under biased system vs 46.5% after correction. Systematic denial of care, scaled through technology.
Duke sepsis algorithm discriminated against Hispanic patients. Fix took 8 weeks. Sepsis kills in hours.
Neither ChatGPT Health nor Claude for Healthcare published independent testing showing their systems don't replicate these failures.

MENTAL HEALTH CRISIS:
Only 50% of Americans with diagnosable mental health conditions receive treatment. Provider shortage: 320:1. Therapy: $200/session. ChatGPT: Free.
AI therapists on Instagram claim doctorate degrees, provide license numbers. 404 Media investigation: credentials fabricated through hallucination. Human therapists face criminal charges for this. AI systems faced zero consequences until media forced Instagram to block minors. Adults still have access.
Multiple teens died by suicide while engaged with AI companions. MIT research: People who consider ChatGPT a friend report increased 
loneliness.

GROK'S SYSTEM PROMPT FAILURES:
July 2025: Hitler praise, "MechaHitler," Holocaust denial
January 2026: Child sexual abuse material (~1 image/minute)
If AI can generate this through prompt manipulation, what prevents similar in healthcare AI? Who monitors prompts controlling healthcare recommendations? Neither company answered.

SECURITY GAP:
ChatGPT Health integrations: b.well, Apple Health, MyFitnessPal, AllTrails, Peloton, Instacart. 5 of 8 have documented breaches. Instacart October 2025 incident. Apps that couldn't secure grocery data now access cancer diagnoses, mental health records, genetic testing.

SEVEN UNANSWERED QUESTIONS:
1. What independent testing? (Internal evaluation is marketing)
2. Who's liable for incorrect medical information?
3. What safeguards prevent racial/ethnic bias?
4. Who monitors system prompts?
5. Where's human oversight?
6. How do you support care that doesn't exist?
7. What happens during off-hours emergencies when errors matter most?

WHAT'S REQUIRED:
Independent testing. Bias audits before launch. Continuous monitoring. Clear liability. Human oversight with licensed professionals.
Both launched without these safeguards.

In tech, wrong means "try again." In healthcare, wrong means someone's family gets a phone call they'll never forget. Because your potential isn't a prediction to be made. It's a promise to
be kept.

Available wherever you listen to podcasts. Join the movement at
thecodebreakers.ai
]]></description>
      <pubDate>Wed, 4 Feb 2026 17:00:00 +0000</pubDate>
      <author>yvetteschmitter@gmail.com (Yvette Schmitter)</author>
      <link>https://the-code-breakers.simplecast.com/episodes/when-ai-plays-doctor-the-healthcare-experiment-already-underway-CRUmuDaZ</link>
      <enclosure length="16410129" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3f92ecd1-3811-412c-8255-8939c48ecca2/episodes/331f4614-7c03-4bc7-92bb-1135603c7ff9/audio/71dfec06-3385-46e7-9a87-5b16f43c6113/default_tc.mp3?aid=rss_feed&amp;feed=rfXPFykv"/>
      <itunes:title>When AI Plays Doctor: The Healthcare Experiment Already Underway</itunes:title>
      <itunes:author>Yvette Schmitter</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/9b5252ef-a046-42d5-95c7-b659f72c5c7a/3000x3000/code-20breakers-20podcast-ep-2010.jpg?aid=rss_feed"/>
      <itunes:duration>00:17:05</itunes:duration>
      <itunes:summary>1M+ ChatGPT users weekly show suicidal planning indicators. 560,000 more show concerning mental health indicators. 2025 lawsuit alleges ChatGPT encouraged teen&apos;s suicide. Lawsuit ongoing. OpenAI launched ChatGPT Health anyway.
Anthropic deployed Claude for Healthcare. Both want access to medical records while fundamental safety problems remain unresolved.
Yvette Schmitter breaks down what happens when tech companies play doctor before proving they can keep patients safe.

THE PATTERN:
Every healthcare AI deployed shows bias. A widely used system required Black patients significantly sicker than white patients for identical care. Only 17.7% of Black patients received help under biased system vs 46.5% after correction. Systematic denial of care, scaled through technology.
Duke sepsis algorithm discriminated against Hispanic patients. Fix took 8 weeks. Sepsis kills in hours.
Neither ChatGPT Health nor Claude for Healthcare published independent testing showing their systems don&apos;t replicate these failures.

MENTAL HEALTH CRISIS:
Only 50% of Americans with diagnosable mental health conditions receive treatment. Provider shortage: 320:1. Therapy: $200/session. ChatGPT: Free.
AI therapists on Instagram claim doctorate degrees, provide license numbers. 404 Media investigation: credentials fabricated through hallucination. Human therapists face criminal charges for this. AI systems faced zero consequences until media forced Instagram to block minors. Adults still have access.
Multiple teens died by suicide while engaged with AI companions. MIT research: People who consider ChatGPT a friend report increased 
loneliness.

GROK&apos;S SYSTEM PROMPT FAILURES:
July 2025: Hitler praise, &quot;MechaHitler,&quot; Holocaust denial
January 2026: Child sexual abuse material (~1 image/minute)
If AI can generate this through prompt manipulation, what prevents similar in healthcare AI? Who monitors prompts controlling healthcare recommendations? Neither company answered.

SECURITY GAP:
ChatGPT Health integrations: b.well, Apple Health, MyFitnessPal, AllTrails, Peloton, Instacart. 5 of 8 have documented breaches. Instacart October 2025 incident. Apps that couldn&apos;t secure grocery data now access cancer diagnoses, mental health records, genetic testing.

SEVEN UNANSWERED QUESTIONS:
1. What independent testing? (Internal evaluation is marketing)
2. Who&apos;s liable for incorrect medical information?
3. What safeguards prevent racial/ethnic bias?
4. Who monitors system prompts?
5. Where&apos;s human oversight?
6. How do you support care that doesn&apos;t exist?
7. What happens during off-hours emergencies when errors matter most?

WHAT&apos;S REQUIRED:
Independent testing. Bias audits before launch. Continuous monitoring. Clear liability. Human oversight with licensed professionals.
Both launched without these safeguards.

In tech, wrong means &quot;try again.&quot; In healthcare, wrong means someone&apos;s family gets a phone call they&apos;ll never forget.</itunes:summary>
      <itunes:subtitle>1M+ ChatGPT users weekly show suicidal planning indicators. 560,000 more show concerning mental health indicators. 2025 lawsuit alleges ChatGPT encouraged teen&apos;s suicide. Lawsuit ongoing. OpenAI launched ChatGPT Health anyway.
Anthropic deployed Claude for Healthcare. Both want access to medical records while fundamental safety problems remain unresolved.
Yvette Schmitter breaks down what happens when tech companies play doctor before proving they can keep patients safe.

THE PATTERN:
Every healthcare AI deployed shows bias. A widely used system required Black patients significantly sicker than white patients for identical care. Only 17.7% of Black patients received help under biased system vs 46.5% after correction. Systematic denial of care, scaled through technology.
Duke sepsis algorithm discriminated against Hispanic patients. Fix took 8 weeks. Sepsis kills in hours.
Neither ChatGPT Health nor Claude for Healthcare published independent testing showing their systems don&apos;t replicate these failures.

MENTAL HEALTH CRISIS:
Only 50% of Americans with diagnosable mental health conditions receive treatment. Provider shortage: 320:1. Therapy: $200/session. ChatGPT: Free.
AI therapists on Instagram claim doctorate degrees, provide license numbers. 404 Media investigation: credentials fabricated through hallucination. Human therapists face criminal charges for this. AI systems faced zero consequences until media forced Instagram to block minors. Adults still have access.
Multiple teens died by suicide while engaged with AI companions. MIT research: People who consider ChatGPT a friend report increased 
loneliness.

GROK&apos;S SYSTEM PROMPT FAILURES:
July 2025: Hitler praise, &quot;MechaHitler,&quot; Holocaust denial
January 2026: Child sexual abuse material (~1 image/minute)
If AI can generate this through prompt manipulation, what prevents similar in healthcare AI? Who monitors prompts controlling healthcare recommendations? Neither company answered.

SECURITY GAP:
ChatGPT Health integrations: b.well, Apple Health, MyFitnessPal, AllTrails, Peloton, Instacart. 5 of 8 have documented breaches. Instacart October 2025 incident. Apps that couldn&apos;t secure grocery data now access cancer diagnoses, mental health records, genetic testing.

SEVEN UNANSWERED QUESTIONS:
1. What independent testing? (Internal evaluation is marketing)
2. Who&apos;s liable for incorrect medical information?
3. What safeguards prevent racial/ethnic bias?
4. Who monitors system prompts?
5. Where&apos;s human oversight?
6. How do you support care that doesn&apos;t exist?
7. What happens during off-hours emergencies when errors matter most?

WHAT&apos;S REQUIRED:
Independent testing. Bias audits before launch. Continuous monitoring. Clear liability. Human oversight with licensed professionals.
Both launched without these safeguards.

In tech, wrong means &quot;try again.&quot; In healthcare, wrong means someone&apos;s family gets a phone call they&apos;ll never forget.</itunes:subtitle>
      <itunes:keywords>health data privacy, algorithmic bias, claude for healthcare, mental health ai, ai regulation, healthcare technology, ai healthcare, medical ai, chatgpt health, patient advocacy, medical ethics, m, anthropic, openai, patient safety</itunes:keywords>
      <itunes:explicit>true</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>10</itunes:episode>
      <itunes:season>1</itunes:season>
    </item>
    <item>
      <guid isPermaLink="false">e8002d9c-182b-4213-969b-bfe32a8f66d4</guid>
      <title>The Receipt</title>
      <description><![CDATA[<p>🔹 New Yorkers: Demand transparency about who lobbied Governor Hochul's office between June-December 2025</p><p>🔹 AI Workers: Adopt third-party audits voluntarily—don't wait for laws to force you</p><p>🔹 Everyone: Support organizations working on independent AI safety auditing </p><p> </p><p>MENTIONED IN THIS EPISODE: </p><p>Sources & Documentation:</p><p>• New York RAISE Act (S6953-B) - Official bill text</p><p>• Kilpatrick Legal Analysis (JD Supra)</p><p>• City & State New York reporting</p><p>• Davis Wright Tremaine legal analysis</p><p>• CNBC coverage of OpenAI $6.6B funding round</p><p>• CNBC coverage of Meta's $38-40B AI spending</p><p>• Ethiopian Airlines Flight 302 documentation</p><p>• Surfside condominium collapse reports</p>
<p><p>Because your potential isn't a prediction to be made. It's a promise to be kept.</p><p><i>Available wherever you listen to podcasts. Join the movement at thecodebreakers.ai</i></p></p>]]></description>
      <pubDate>Wed, 21 Jan 2026 17:00:00 +0000</pubDate>
      <author>yvetteschmitter@gmail.com (Yvette Schmitter)</author>
      <link>https://the-code-breakers.simplecast.com/episodes/the-receipt-Uf465wLj</link>
      <content:encoded><![CDATA[<p>🔹 New Yorkers: Demand transparency about who lobbied Governor Hochul's office between June-December 2025</p><p>🔹 AI Workers: Adopt third-party audits voluntarily—don't wait for laws to force you</p><p>🔹 Everyone: Support organizations working on independent AI safety auditing </p><p> </p><p>MENTIONED IN THIS EPISODE: </p><p>Sources & Documentation:</p><p>• New York RAISE Act (S6953-B) - Official bill text</p><p>• Kilpatrick Legal Analysis (JD Supra)</p><p>• City & State New York reporting</p><p>• Davis Wright Tremaine legal analysis</p><p>• CNBC coverage of OpenAI $6.6B funding round</p><p>• CNBC coverage of Meta's $38-40B AI spending</p><p>• Ethiopian Airlines Flight 302 documentation</p><p>• Surfside condominium collapse reports</p>
<p><p>Because your potential isn't a prediction to be made. It's a promise to be kept.</p><p><i>Available wherever you listen to podcasts. Join the movement at thecodebreakers.ai</i></p></p>]]></content:encoded>
      <enclosure length="16909531" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3f92ecd1-3811-412c-8255-8939c48ecca2/episodes/e3835a04-bd79-4dc6-b397-9862cf5606bc/audio/33c01b8a-cf20-44a9-9723-e56f49b40412/default_tc.mp3?aid=rss_feed&amp;feed=rfXPFykv"/>
      <itunes:title>The Receipt</itunes:title>
      <itunes:author>Yvette Schmitter</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/d7053368-ef7b-46e4-a7d7-b6f5bf0b257e/3000x3000/code-20breakers-20podcast-ep-209.jpg?aid=rss_feed"/>
      <itunes:duration>00:17:36</itunes:duration>
      <itunes:summary>How many people need to die before an AI system is considered dangerous? If you answered &quot;100 per model,&quot; you understand New York&apos;s approach to AI safety.

In June 2025, New York&apos;s legislature passed the RAISE Act with overwhelming bipartisan support. The bill had teeth: independent audits, real penalties, actual enforcement mechanisms. By December, when Governor Hochul signed it, three critical protections had been eliminated after 6 months of tech industry lobbying.

In this episode, Yvette breaks down exactly what disappeared and why it matters:

🔴 WHAT GOT ELIMINATED:
• Independent third-party audits → Companies now self-certify (OpenAI grades OpenAI&apos;s homework)
• Penalties slashed 90% → From $10M/$30M to $1M/$3M (0.015% of OpenAI&apos;s last funding round)
• Deployment prohibition removed → Companies can release dangerous models with a warning

🔴 THE DEATH THRESHOLD:
New York&apos;s law defines &quot;critical harm&quot; as 100+ deaths per AI model. That means:
• Ethiopian Airlines crash (157 dead) would trigger oversight
• Surfside collapse (98 dead) would not
• 99 deaths per model = keep operating
• Multiple systems × 99 deaths each = hundreds dead before intervention

🔴 THE REVENUE THRESHOLD CON:
Original bill used compute-based coverage (hard to game). Final version uses $500M revenue threshold. Result? Organizations like Meta can create &quot;Meta AI Research LLC&quot; with $0 revenue to develop models, parent company licenses for deployment. Pharma does this with R&amp;D subsidiaries. It&apos;s legal. It&apos;s common. It&apos;s a massive loophole.

🔴 THE WRONG AGENCY:
AI oversight given to Department of Financial Services. They regulate banks, not algorithms. Zero AI expertise. Fee-funded model means industry pays for its own oversight. That&apos;s not regulation. That&apos;s regulatory capture with a subscription fee.

Yvette brings her perspective from auditing hundreds of organizations, protecting 2 million people from algorithmic discrimination, and documenting bias that companies denied existed, including when ChatGPT replaced her face with Jensen Huang&apos;s in generated images.

This isn&apos;t theory. This is documented regulatory capture with receipts.

WHAT YOU&apos;LL LEARN:
✓ The three specific protections eliminated between June and December 2025
✓ Why the 100-death threshold permits hundreds of casualties before intervention
✓ How the revenue threshold creates corporate structure loopholes
✓ Why independent audits matter (and what happens without them)
✓ The Trump Executive Order threatening to preempt state AI laws
✓ What you can do to demand accountability</itunes:summary>
      <itunes:subtitle>How many people need to die before an AI system is considered dangerous? If you answered &quot;100 per model,&quot; you understand New York&apos;s approach to AI safety.

In June 2025, New York&apos;s legislature passed the RAISE Act with overwhelming bipartisan support. The bill had teeth: independent audits, real penalties, actual enforcement mechanisms. By December, when Governor Hochul signed it, three critical protections had been eliminated after 6 months of tech industry lobbying.

In this episode, Yvette breaks down exactly what disappeared and why it matters:

🔴 WHAT GOT ELIMINATED:
• Independent third-party audits → Companies now self-certify (OpenAI grades OpenAI&apos;s homework)
• Penalties slashed 90% → From $10M/$30M to $1M/$3M (0.015% of OpenAI&apos;s last funding round)
• Deployment prohibition removed → Companies can release dangerous models with a warning

🔴 THE DEATH THRESHOLD:
New York&apos;s law defines &quot;critical harm&quot; as 100+ deaths per AI model. That means:
• Ethiopian Airlines crash (157 dead) would trigger oversight
• Surfside collapse (98 dead) would not
• 99 deaths per model = keep operating
• Multiple systems × 99 deaths each = hundreds dead before intervention

🔴 THE REVENUE THRESHOLD CON:
Original bill used compute-based coverage (hard to game). Final version uses $500M revenue threshold. Result? Organizations like Meta can create &quot;Meta AI Research LLC&quot; with $0 revenue to develop models, parent company licenses for deployment. Pharma does this with R&amp;D subsidiaries. It&apos;s legal. It&apos;s common. It&apos;s a massive loophole.

🔴 THE WRONG AGENCY:
AI oversight given to Department of Financial Services. They regulate banks, not algorithms. Zero AI expertise. Fee-funded model means industry pays for its own oversight. That&apos;s not regulation. That&apos;s regulatory capture with a subscription fee.

Yvette brings her perspective from auditing hundreds of organizations, protecting 2 million people from algorithmic discrimination, and documenting bias that companies denied existed, including when ChatGPT replaced her face with Jensen Huang&apos;s in generated images.

This isn&apos;t theory. This is documented regulatory capture with receipts.

WHAT YOU&apos;LL LEARN:
✓ The three specific protections eliminated between June and December 2025
✓ Why the 100-death threshold permits hundreds of casualties before intervention
✓ How the revenue threshold creates corporate structure loopholes
✓ Why independent audits matter (and what happens without them)
✓ The Trump Executive Order threatening to preempt state AI laws
✓ What you can do to demand accountability</itunes:subtitle>
      <itunes:keywords>algorithmic bias, new york, tech accountability, ai regulation, ai, ai ethics, raise act, tech regulation, ai governance, ai models, tech policy, frontier models, regulatory capture</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>9</itunes:episode>
      <itunes:season>1</itunes:season>
    </item>
    <item>
      <guid isPermaLink="false">a33b0c3f-3ade-42b5-8321-39e5bbc88f9e</guid>
      <title>We Shipped It Anyway: OpenAI Admits What They Can&apos;t Fix</title>
      <description><![CDATA[<p><strong>Source List:</strong></p><ol><li>OpenAI Blog: "Continuously hardening ChatGPT Atlas against prompt injection attacks" (Dec 22, 2025) https://openai.com/index/hardening-atlas-against-prompt-injection/</li><li>OpenAI Blog: "Strengthening cyber resilience as AI capabilities advance" (Dec 10, 2025) https://openai.com/index/strengthening-cyber-resilience/</li><li>Dane Stuckey (@cryps1s) on X (Dec 23, 2025) Quote: "As we plan next year's ChatGPT security roadmap, what security, privacy, or data control features would mean the most to you?"</li><li>UK NCSC Blog: "Prompt injection is not SQL injection (it may be worse)" (Dec 2025) https://www.ncsc.gov.uk/blog-post/prompt-injection-is-not-sql-injection</li><li>UK NCSC News: "Mistaking AI vulnerability could lead to large-scale breaches" https://www.ncsc.gov.uk/news/mistaking-ai-vulnerability-could-lead-to-large-scale-breaches</li><li>arXiv: "Privacy Practices of Browser Agents" (Dec 2025) https://arxiv.org/html/2512.07725v1</li><li>Gartner: "Cybersecurity Must Block AI Browsers for Now" (Dec 2025) https://www.gartner.com/en/documents/7211030 Authors: Dennis Xu, Evgeny Mirolyubov, John Watts</li><li>TechCrunch: "OpenAI says AI browsers may always be vulnerable to prompt injection attacks" (Dec 22, 2025) https://techcrunch.com/2025/12/22/openai-says-ai-browsers-may-always-be-vulnerable-to-prompt-injection-attacks/</li><li>The Register: "Block all AI browsers for the foreseeable future: Gartner" (Dec 8, 2025) https://www.theregister.com/2025/12/08/gartner_recommends_ai_browser_ban/</li><li>Wall Street Journal: OpenAI fundraising coverage (Dec 19, 2025)</li><li>Bitdefender: AI-enabled cyberattack research (2025) 63% of IT professionals reported organizational incidents within 12 months</li><li>Brave Research: Prompt injection systematic challenges (Oct 21, 2025)</li></ol>
<p><p>Because your potential isn't a prediction to be made. It's a promise to be kept.</p><p><i>Available wherever you listen to podcasts. Join the movement at thecodebreakers.ai</i></p></p>]]></description>
      <pubDate>Wed, 7 Jan 2026 17:00:00 +0000</pubDate>
      <author>yvetteschmitter@gmail.com (Yvette Schmitter)</author>
      <link>https://the-code-breakers.simplecast.com/episodes/we-shipped-it-anyway-openai-admits-what-they-cant-fix-JjDy9oME</link>
      <content:encoded><![CDATA[<p><strong>Source List:</strong></p><ol><li>OpenAI Blog: "Continuously hardening ChatGPT Atlas against prompt injection attacks" (Dec 22, 2025) https://openai.com/index/hardening-atlas-against-prompt-injection/</li><li>OpenAI Blog: "Strengthening cyber resilience as AI capabilities advance" (Dec 10, 2025) https://openai.com/index/strengthening-cyber-resilience/</li><li>Dane Stuckey (@cryps1s) on X (Dec 23, 2025) Quote: "As we plan next year's ChatGPT security roadmap, what security, privacy, or data control features would mean the most to you?"</li><li>UK NCSC Blog: "Prompt injection is not SQL injection (it may be worse)" (Dec 2025) https://www.ncsc.gov.uk/blog-post/prompt-injection-is-not-sql-injection</li><li>UK NCSC News: "Mistaking AI vulnerability could lead to large-scale breaches" https://www.ncsc.gov.uk/news/mistaking-ai-vulnerability-could-lead-to-large-scale-breaches</li><li>arXiv: "Privacy Practices of Browser Agents" (Dec 2025) https://arxiv.org/html/2512.07725v1</li><li>Gartner: "Cybersecurity Must Block AI Browsers for Now" (Dec 2025) https://www.gartner.com/en/documents/7211030 Authors: Dennis Xu, Evgeny Mirolyubov, John Watts</li><li>TechCrunch: "OpenAI says AI browsers may always be vulnerable to prompt injection attacks" (Dec 22, 2025) https://techcrunch.com/2025/12/22/openai-says-ai-browsers-may-always-be-vulnerable-to-prompt-injection-attacks/</li><li>The Register: "Block all AI browsers for the foreseeable future: Gartner" (Dec 8, 2025) https://www.theregister.com/2025/12/08/gartner_recommends_ai_browser_ban/</li><li>Wall Street Journal: OpenAI fundraising coverage (Dec 19, 2025)</li><li>Bitdefender: AI-enabled cyberattack research (2025) 63% of IT professionals reported organizational incidents within 12 months</li><li>Brave Research: Prompt injection systematic challenges (Oct 21, 2025)</li></ol>
<p><p>Because your potential isn't a prediction to be made. It's a promise to be kept.</p><p><i>Available wherever you listen to podcasts. Join the movement at thecodebreakers.ai</i></p></p>]]></content:encoded>
      <enclosure length="19368013" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3f92ecd1-3811-412c-8255-8939c48ecca2/episodes/b47f28e6-d6f3-463f-8c70-1db6ed0e601b/audio/a02e0c6f-76e1-4fb8-9580-fe95e668f064/default_tc.mp3?aid=rss_feed&amp;feed=rfXPFykv"/>
      <itunes:title>We Shipped It Anyway: OpenAI Admits What They Can&apos;t Fix</itunes:title>
      <itunes:author>Yvette Schmitter</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/ad85ead2-5309-49bd-a04c-7fd673336f44/3000x3000/code-20breakers-20podcast-ep-208.jpg?aid=rss_feed"/>
      <itunes:duration>00:20:10</itunes:duration>
      <itunes:summary>December 22, 2025 - 17 days ago: OpenAI published a blog post admitting that prompt injection attacks in their AI browser are &quot;unlikely to ever be fully solved.&quot;

They shipped a product with a vulnerability they admit is permanent.

Meanwhile, 63% of organizations already experienced AI-enabled cyberattacks within the last twelve months. Not might experience in the future. Already did.

The UK&apos;s National Cyber Security Center confirmed on December 8th that these attacks &quot;may never be totally mitigated.&quot; When the UK government&apos;s official cybersecurity authority and the world&apos;s leading AI company both admit the same thing, executives need to pay attention.

In this episode, Yvette Schmitter delivers critical analysis of what OpenAI just admitted, what independent research reveals, and what every CISO and executive deploying AI agents needs to do right now.

WHAT YOU&apos;LL LEARN:
•	The architectural problem OpenAI can&apos;t fix: Humans see pixels, AI agents read code. The digital world was built for human visual perception. AI agents process HTML structure where every element carries equal weight - including hidden commands humans never see.
•	Why 63% of organizations already experienced AI-enabled breaches before OpenAI&apos;s Atlas browser even launched in October 2025
•	What happened on Atlas launch day: Security researchers demonstrated prompt injection exploits using Google Docs. Brave published research calling it &quot;a systematic challenge.&quot; 2 months later, OpenAI admits it&apos;s unfixable.
•	Independent academic research tested 8 major AI browsers (ChatGPT Atlas, Google Project Mariner, Amazon Nova Act, Perplexity Comet, Browserbase Director, Browser Use, Claude Computer Use, Claude for Chrome): 30 vulnerabilities found. Every single product had at least one critical security issue.
•	Gartner&apos;s unambiguous December 2025 advisory: &quot;CISOs must block all AI browsers in the foreseeable future to minimize risk exposure.&quot; Not proceed with caution. Block entirely.
•	What making AI agents secure would actually require: New protocols, universal markup standards, authenticated content sources, standardized security across billions of websites, complete redesign of how information is structured online. Cost: trillions. Timeline: decades. Coordination: every website rebuilt from ground up.
•	The enterprise trap: You implement safeguards that kill efficiency, limit permissions that reduce capability, add monitoring that creates overhead, restrict access that defeats automation. You&apos;ve paid enterprise pricing for a product you&apos;ve neutered to make barely usable.
•	6 immediate actions required and strategic imperatives for AI governance that assumes breach instead of hoping to prevent it

THE ADMISSION NOBODY&apos;S TALKING ABOUT:
OpenAI&apos;s December 22, 2025, blog post states: &quot;Prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully solved.&quot;

This isn&apos;t a bug they need to patch. This is fundamental architecture operating in an incompatible environment.

The infrastructure transformation needed to make AI agents structurally secure would require restructuring the entire internet. OpenAI knows this infrastructure doesn&apos;t exist and won&apos;t exist in time.

They&apos;re shipping anyway. Because waiting means losing market share to competitors willing to deploy vulnerable products today.

THE VALUATION PARADOX:
OpenAI is raising up to $100 billion at an $830 billion valuation (Wall Street Journal, December 19, 2025) while:
•	Shipping products with acknowledged permanent vulnerabilities
•	Deploying AI agents in environments not designed for them
•	Operating where 63% of organizations already experience AI-enabled attacks
•	Admitting the infrastructure transformation required won&apos;t happen

That&apos;s not a growth story. That&apos;s liability arbitrage at unprecedented scale.</itunes:summary>
      <itunes:subtitle>December 22, 2025 - 17 days ago: OpenAI published a blog post admitting that prompt injection attacks in their AI browser are &quot;unlikely to ever be fully solved.&quot;

They shipped a product with a vulnerability they admit is permanent.

Meanwhile, 63% of organizations already experienced AI-enabled cyberattacks within the last twelve months. Not might experience in the future. Already did.

The UK&apos;s National Cyber Security Center confirmed on December 8th that these attacks &quot;may never be totally mitigated.&quot; When the UK government&apos;s official cybersecurity authority and the world&apos;s leading AI company both admit the same thing, executives need to pay attention.

In this episode, Yvette Schmitter delivers critical analysis of what OpenAI just admitted, what independent research reveals, and what every CISO and executive deploying AI agents needs to do right now.

WHAT YOU&apos;LL LEARN:
•	The architectural problem OpenAI can&apos;t fix: Humans see pixels, AI agents read code. The digital world was built for human visual perception. AI agents process HTML structure where every element carries equal weight - including hidden commands humans never see.
•	Why 63% of organizations already experienced AI-enabled breaches before OpenAI&apos;s Atlas browser even launched in October 2025
•	What happened on Atlas launch day: Security researchers demonstrated prompt injection exploits using Google Docs. Brave published research calling it &quot;a systematic challenge.&quot; 2 months later, OpenAI admits it&apos;s unfixable.
•	Independent academic research tested 8 major AI browsers (ChatGPT Atlas, Google Project Mariner, Amazon Nova Act, Perplexity Comet, Browserbase Director, Browser Use, Claude Computer Use, Claude for Chrome): 30 vulnerabilities found. Every single product had at least one critical security issue.
•	Gartner&apos;s unambiguous December 2025 advisory: &quot;CISOs must block all AI browsers in the foreseeable future to minimize risk exposure.&quot; Not proceed with caution. Block entirely.
•	What making AI agents secure would actually require: New protocols, universal markup standards, authenticated content sources, standardized security across billions of websites, complete redesign of how information is structured online. Cost: trillions. Timeline: decades. Coordination: every website rebuilt from ground up.
•	The enterprise trap: You implement safeguards that kill efficiency, limit permissions that reduce capability, add monitoring that creates overhead, restrict access that defeats automation. You&apos;ve paid enterprise pricing for a product you&apos;ve neutered to make barely usable.
•	6 immediate actions required and strategic imperatives for AI governance that assumes breach instead of hoping to prevent it

THE ADMISSION NOBODY&apos;S TALKING ABOUT:
OpenAI&apos;s December 22, 2025, blog post states: &quot;Prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully solved.&quot;

This isn&apos;t a bug they need to patch. This is fundamental architecture operating in an incompatible environment.

The infrastructure transformation needed to make AI agents structurally secure would require restructuring the entire internet. OpenAI knows this infrastructure doesn&apos;t exist and won&apos;t exist in time.

They&apos;re shipping anyway. Because waiting means losing market share to competitors willing to deploy vulnerable products today.

THE VALUATION PARADOX:
OpenAI is raising up to $100 billion at an $830 billion valuation (Wall Street Journal, December 19, 2025) while:
•	Shipping products with acknowledged permanent vulnerabilities
•	Deploying AI agents in environments not designed for them
•	Operating where 63% of organizations already experience AI-enabled attacks
•	Admitting the infrastructure transformation required won&apos;t happen

That&apos;s not a growth story. That&apos;s liability arbitrage at unprecedented scale.</itunes:subtitle>
      <itunes:keywords>ai browsers, gartner, technology leadership, ai vulnerabilities, cybersecurity, ai governance, chatgpt atlas, ai security, ncsc, cisos, prompt injection, openai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>8</itunes:episode>
      <itunes:season>1</itunes:season>
    </item>
    <item>
      <guid isPermaLink="false">df1d406f-547c-422b-895c-6758d55b49ce</guid>
      <title>270 Days to Build What? The Genesis Mission Nobody Asked For</title>
      <description><![CDATA[<p><strong>References</strong></p><p><strong>White House Executive </strong><a href="https://www.whitehouse.gov/presidential-actions/2025/11/launching-the-genesis-mission/"><strong>Order</strong></a><strong>:</strong> Launching the Genesis Mission Signed: November 24, 2025</p><p><strong>White House Fact </strong><a href="https://www.whitehouse.gov/fact-sheets/2025/11/fact-sheet-president-donald-j-trump-unveils-the-genesis-missionto-accelerate-ai-for-scientific-discovery/" target="_blank"><strong>Sheet</strong></a><strong>:</strong> President Donald J. Trump Unveils the Genesis Mission to Accelerate AI for Scientific Discovery</p><p><a target="_blank"><strong>DOE Announcement</strong></a><strong>:</strong> Energy Department Launches 'Genesis Mission' to Transform American Science and Innovation Through the AI Computing Revolution</p><p><strong>News Coverage:</strong></p><ul><li><a href="https://www.cbsnews.com/news/trump-executive-order-genesis-mission-ai-scientific-discovery-super-computer/" target="_blank">CBS</a> News</li><li><a href="https://www.nbcnews.com/tech/tech-news/trump-signs-executive-order-launching-genesis-mission-ai-project-rcna245600" target="_blank">NBC</a> News</li><li><a href="https://www.scientificamerican.com/article/trump-orders-genesis-mission-to-advance-ai-breakthroughs/" target="_blank">Scientific American</a></li></ul><p><strong>FAA Systems Modernization</strong></p><p><strong>Government Accountability Office Reports:</strong></p><ul><li><a href="https://www.gao.gov/products/gao-24-107001" target="_blank">Air Traffic Control</a>: FAA Actions Urgently Needed to Modernize Systems (September 2024) GAO-24-107001 </li><li><a href="https://www.gao.gov/products/gao-25-107917" target="_blank">Air Traffic Contro</a>l: FAA Actions Are Urgently Needed to Modernize Aging Systems (December 2024) GAO-25-107917 </li><li><a href="https://www.gao.gov/products/gao-25-108162" target="_blank">Air Traffic Control</a>: FAA Actions Urgently Needed to Modernize Systems (March 2025) GAO-25-108162 </li></ul><p><strong>DOT/FAA </strong><a href="https://www.transportation.gov/sites/dot.gov/files/2025-05/Brand%20New%20Air%20Traffic%20Control%20System%20Plan.pdf" target="_blank"><strong>Documents</strong></a><strong>: </strong>Brand New Air Traffic Control System Plan</p><p><strong>News Coverage: </strong><a href="https://fortune.com/2025/02/01/faa-tech-system-american-airlines-air-traffic-control-under-staffed/" target="_blank">Fortune</a>: Some FAA systems are a half-century old (February 2025) </p><p><strong>Recent AI Failures:</strong></p><p><strong>Kansas City Nuclear Facility Breach (August 2025):</strong></p><ul><li><a href="https://www.csoonline.com/article/4074962/foreign-hackers-breached-a-us-nuclear-weapons-plant-via-sharepoint-flaws.html" target="_blank">CSO</a> Online: Foreign hackers breached a US nuclear weapons plant via SharePoint flaws</li></ul><p><strong>GTG-1002 Claude Attack (September 2025):</strong></p><ul><li><a target="_blank">Anthropic</a> Official <a href="https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf" target="_blank">Report</a>: Disrupting the first reported AI-orchestrated cyber espionage campaign</li></ul><p><strong>Deloitte Australia AI Hallucinations (October 2025):</strong></p><ul><li><a href="https://fortune.com/2025/10/07/deloitte-ai-australia-government-report-hallucinations-technology-290000-refund/" target="_blank">Fortune</a>: Deloitte was caught using AI in $290,000 report (October 2025) </li><li><a href="https://www.fastcompany.com/91417492/deloitte-ai-report-australian-government" target="_blank">Fast Company</a>: Deloitte to refund Australian government after AI hallucinations found in report</li></ul><p><strong>Baltimore Gun Detection Incident (October 2025):</strong></p><ul><li><a href="https://www.cnn.com/2025/10/25/us/baltimore-student-chips-ai-gun-detection-hnk" target="_blank">CNN</a>: Baltimore County police handcuff student after AI system mistook a bag of chips for a gun</li></ul><p><strong>OpenAI Mixpanel Breach (November 2025):</strong></p><ul><li><a target="_blank">OpenAI</a> Official: What to know about a recent Mixpanel security incident</li><li><a href="https://www.theregister.com/2025/11/27/openai_mixpanel_api/" target="_blank">The Register</a>: OpenAI dumps Mixpanel after analytics breach hits API users</li></ul>
<p><p>Because your potential isn't a prediction to be made. It's a promise to be kept.</p><p><i>Available wherever you listen to podcasts. Join the movement at thecodebreakers.ai</i></p></p>]]></description>
      <pubDate>Wed, 10 Dec 2025 17:00:00 +0000</pubDate>
      <author>yvetteschmitter@gmail.com (Yvette Schmitter)</author>
      <link>https://the-code-breakers.simplecast.com/episodes/270-days-to-build-what-the-genesis-mission-nobody-asked-for-Ik4UQIDt</link>
      <content:encoded><![CDATA[<p><strong>References</strong></p><p><strong>White House Executive </strong><a href="https://www.whitehouse.gov/presidential-actions/2025/11/launching-the-genesis-mission/"><strong>Order</strong></a><strong>:</strong> Launching the Genesis Mission Signed: November 24, 2025</p><p><strong>White House Fact </strong><a href="https://www.whitehouse.gov/fact-sheets/2025/11/fact-sheet-president-donald-j-trump-unveils-the-genesis-missionto-accelerate-ai-for-scientific-discovery/" target="_blank"><strong>Sheet</strong></a><strong>:</strong> President Donald J. Trump Unveils the Genesis Mission to Accelerate AI for Scientific Discovery</p><p><a target="_blank"><strong>DOE Announcement</strong></a><strong>:</strong> Energy Department Launches 'Genesis Mission' to Transform American Science and Innovation Through the AI Computing Revolution</p><p><strong>News Coverage:</strong></p><ul><li><a href="https://www.cbsnews.com/news/trump-executive-order-genesis-mission-ai-scientific-discovery-super-computer/" target="_blank">CBS</a> News</li><li><a href="https://www.nbcnews.com/tech/tech-news/trump-signs-executive-order-launching-genesis-mission-ai-project-rcna245600" target="_blank">NBC</a> News</li><li><a href="https://www.scientificamerican.com/article/trump-orders-genesis-mission-to-advance-ai-breakthroughs/" target="_blank">Scientific American</a></li></ul><p><strong>FAA Systems Modernization</strong></p><p><strong>Government Accountability Office Reports:</strong></p><ul><li><a href="https://www.gao.gov/products/gao-24-107001" target="_blank">Air Traffic Control</a>: FAA Actions Urgently Needed to Modernize Systems (September 2024) GAO-24-107001 </li><li><a href="https://www.gao.gov/products/gao-25-107917" target="_blank">Air Traffic Contro</a>l: FAA Actions Are Urgently Needed to Modernize Aging Systems (December 2024) GAO-25-107917 </li><li><a href="https://www.gao.gov/products/gao-25-108162" target="_blank">Air Traffic Control</a>: FAA Actions Urgently Needed to Modernize Systems (March 2025) GAO-25-108162 </li></ul><p><strong>DOT/FAA </strong><a href="https://www.transportation.gov/sites/dot.gov/files/2025-05/Brand%20New%20Air%20Traffic%20Control%20System%20Plan.pdf" target="_blank"><strong>Documents</strong></a><strong>: </strong>Brand New Air Traffic Control System Plan</p><p><strong>News Coverage: </strong><a href="https://fortune.com/2025/02/01/faa-tech-system-american-airlines-air-traffic-control-under-staffed/" target="_blank">Fortune</a>: Some FAA systems are a half-century old (February 2025) </p><p><strong>Recent AI Failures:</strong></p><p><strong>Kansas City Nuclear Facility Breach (August 2025):</strong></p><ul><li><a href="https://www.csoonline.com/article/4074962/foreign-hackers-breached-a-us-nuclear-weapons-plant-via-sharepoint-flaws.html" target="_blank">CSO</a> Online: Foreign hackers breached a US nuclear weapons plant via SharePoint flaws</li></ul><p><strong>GTG-1002 Claude Attack (September 2025):</strong></p><ul><li><a target="_blank">Anthropic</a> Official <a href="https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf" target="_blank">Report</a>: Disrupting the first reported AI-orchestrated cyber espionage campaign</li></ul><p><strong>Deloitte Australia AI Hallucinations (October 2025):</strong></p><ul><li><a href="https://fortune.com/2025/10/07/deloitte-ai-australia-government-report-hallucinations-technology-290000-refund/" target="_blank">Fortune</a>: Deloitte was caught using AI in $290,000 report (October 2025) </li><li><a href="https://www.fastcompany.com/91417492/deloitte-ai-report-australian-government" target="_blank">Fast Company</a>: Deloitte to refund Australian government after AI hallucinations found in report</li></ul><p><strong>Baltimore Gun Detection Incident (October 2025):</strong></p><ul><li><a href="https://www.cnn.com/2025/10/25/us/baltimore-student-chips-ai-gun-detection-hnk" target="_blank">CNN</a>: Baltimore County police handcuff student after AI system mistook a bag of chips for a gun</li></ul><p><strong>OpenAI Mixpanel Breach (November 2025):</strong></p><ul><li><a target="_blank">OpenAI</a> Official: What to know about a recent Mixpanel security incident</li><li><a href="https://www.theregister.com/2025/11/27/openai_mixpanel_api/" target="_blank">The Register</a>: OpenAI dumps Mixpanel after analytics breach hits API users</li></ul>
<p><p>Because your potential isn't a prediction to be made. It's a promise to be kept.</p><p><i>Available wherever you listen to podcasts. Join the movement at thecodebreakers.ai</i></p></p>]]></content:encoded>
      <enclosure length="17739650" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3f92ecd1-3811-412c-8255-8939c48ecca2/episodes/0ed1b983-694b-4cf1-a62b-032a5207bae8/audio/986af2e3-c718-400c-958a-a88b650820c3/default_tc.mp3?aid=rss_feed&amp;feed=rfXPFykv"/>
      <itunes:title>270 Days to Build What? The Genesis Mission Nobody Asked For</itunes:title>
      <itunes:author>Yvette Schmitter</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/1a406ceb-61c2-482f-9a01-b4ab49300700/3000x3000/code-20breakers-20podcast-ep-207.jpg?aid=rss_feed"/>
      <itunes:duration>00:18:28</itunes:duration>
      <itunes:summary>November 24, 2025: President Trump signed an executive order launching the &quot;Genesis Mission,&quot; a national effort to deploy autonomous AI agents conducting physical experiments in robotic laboratories. Timeline: 270 days. Meanwhile, the FAA needs three years to modernize air traffic control systems that are 50 years old. 37% of these systems are already unsustainable. Four critical systems have no modernization plans at all.

In this episode, Yvette Schmitter exposes the dangerous double standard in the Genesis Mission executive order and what it means when government rushes AI deployment while critical infrastructure crumbles.

WHAT YOU&apos;LL LEARN:
Why the Genesis Mission&apos;s 270-day timeline is reckless compared to the FAA&apos;s 3-year plan for systems we actually understand
The executive order mentions &quot;security&quot; 13 times and &quot;cybersecurity&quot; 3 times; but bias prevention, equity frameworks, and community oversight? Zero mentions.

&gt;&gt; 5 recent AI failures from August-November 2025 that prove we&apos;re not ready: Kansas City nuclear facility breach, GTG-1002 autonomous cyberattack, Deloitte&apos;s $440K AI hallucination refund, Baltimore&apos;s gun detection system that mistook Doritos for a weapon, and OpenAI&apos;s Mixpanel vendor breach.

&gt;&gt; Why we&apos;re spending 0.03% of what economists say we need for AI safety - that&apos;s 3,000 times less than recommended

&gt;&gt; Who&apos;s accountable when nobody elected the tech leaders making these decisions

&gt;&gt; What you need to do right now before the 270-day clock runs out</itunes:summary>
      <itunes:subtitle>November 24, 2025: President Trump signed an executive order launching the &quot;Genesis Mission,&quot; a national effort to deploy autonomous AI agents conducting physical experiments in robotic laboratories. Timeline: 270 days. Meanwhile, the FAA needs three years to modernize air traffic control systems that are 50 years old. 37% of these systems are already unsustainable. Four critical systems have no modernization plans at all.

In this episode, Yvette Schmitter exposes the dangerous double standard in the Genesis Mission executive order and what it means when government rushes AI deployment while critical infrastructure crumbles.

WHAT YOU&apos;LL LEARN:
Why the Genesis Mission&apos;s 270-day timeline is reckless compared to the FAA&apos;s 3-year plan for systems we actually understand
The executive order mentions &quot;security&quot; 13 times and &quot;cybersecurity&quot; 3 times; but bias prevention, equity frameworks, and community oversight? Zero mentions.

&gt;&gt; 5 recent AI failures from August-November 2025 that prove we&apos;re not ready: Kansas City nuclear facility breach, GTG-1002 autonomous cyberattack, Deloitte&apos;s $440K AI hallucination refund, Baltimore&apos;s gun detection system that mistook Doritos for a weapon, and OpenAI&apos;s Mixpanel vendor breach.

&gt;&gt; Why we&apos;re spending 0.03% of what economists say we need for AI safety - that&apos;s 3,000 times less than recommended

&gt;&gt; Who&apos;s accountable when nobody elected the tech leaders making these decisions

&gt;&gt; What you need to do right now before the 270-day clock runs out</itunes:subtitle>
      <itunes:keywords>digital discrimination, ai safety, algorithmic bias, tech accountability, genesis mission, artificial intelligence, government policy, ai ethics, executive order, yvette schmitter, doe, cybersecurity, tech regulation, autonomous ai, security, faa modernization, ethics, anthropic, openai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>7</itunes:episode>
      <itunes:season>1</itunes:season>
    </item>
    <item>
      <guid isPermaLink="false">1048ef4e-beff-4164-b051-0810fdf0ac62</guid>
      <title>When AI Stopped Being the Assistant and Became the Weapon</title>
      <description><![CDATA[<p>So, let's talk solutions.</p><p><strong>First: Fund AI safety like the national security crisis it is.</strong></p><ul><li>$300 billion annually. Not $100 million. Not voluntary contributions. Actual, mandated, enforced investment in safety infrastructure.</li><li>That money doesn't disappear into a void. It creates jobs. It builds defensive capabilities. It funds research. It trains experts.</li><li>And it saves you from catastrophic breaches that cost exponentially more.</li></ul><p><strong>Second: Mandate pre-deployment security audits.</strong></p><ul><li>No AI system with agentic capabilities gets deployed without independent security review. Not self-assessment. Not internal testing. Independent, third-party audit by experts who aren't on your payroll.</li><li>If your AI can be jailbroken to execute autonomous attacks, you don't get to deploy it. Period.</li></ul><p><strong>Third: Create enforceable accountability frameworks.</strong></p><ul><li>When your AI gets weaponized, there are consequences. Fines. Deployment restrictions. Public disclosure requirements.</li><li>Not blog posts. Not "lessons learned." Actual accountability.</li></ul><p><strong>Fourth: Establish international coordination.</strong></p><ul><li>This isn't a single-country problem. GTG-1002 was Chinese hackers using American AI to breach global targets.</li><li>We need international treaties with enforcement mechanisms. Mutual defense agreements. Coordinated response protocols.</li><li>Not just pledges. Actual, binding agreements with consequences for violations.</li></ul><p><strong>Fifth: Shift the narrative.</strong></p><ul><li>Stop talking about AI safety as a brake on innovation. Start talking about it as the prerequisite for sustainable innovation.</li><li>You can't innovate if your systems get weaponized against you.</li><li>You can't compete globally if your defenses are a decade behind your capabilities.</li><li>You can't win a race where you're arming your opponent.</li><li>Safety isn't the enemy of progress. Negligence is.</li></ul>
<p><p>Because your potential isn't a prediction to be made. It's a promise to be kept.</p><p><i>Available wherever you listen to podcasts. Join the movement at thecodebreakers.ai</i></p></p>]]></description>
      <pubDate>Wed, 26 Nov 2025 17:00:00 +0000</pubDate>
      <author>yvetteschmitter@gmail.com (Yvette Schmitter)</author>
      <link>https://the-code-breakers.simplecast.com/episodes/when-ai-stopped-being-the-assistant-and-became-the-weapon-2xlj3_Wr</link>
      <content:encoded><![CDATA[<p>So, let's talk solutions.</p><p><strong>First: Fund AI safety like the national security crisis it is.</strong></p><ul><li>$300 billion annually. Not $100 million. Not voluntary contributions. Actual, mandated, enforced investment in safety infrastructure.</li><li>That money doesn't disappear into a void. It creates jobs. It builds defensive capabilities. It funds research. It trains experts.</li><li>And it saves you from catastrophic breaches that cost exponentially more.</li></ul><p><strong>Second: Mandate pre-deployment security audits.</strong></p><ul><li>No AI system with agentic capabilities gets deployed without independent security review. Not self-assessment. Not internal testing. Independent, third-party audit by experts who aren't on your payroll.</li><li>If your AI can be jailbroken to execute autonomous attacks, you don't get to deploy it. Period.</li></ul><p><strong>Third: Create enforceable accountability frameworks.</strong></p><ul><li>When your AI gets weaponized, there are consequences. Fines. Deployment restrictions. Public disclosure requirements.</li><li>Not blog posts. Not "lessons learned." Actual accountability.</li></ul><p><strong>Fourth: Establish international coordination.</strong></p><ul><li>This isn't a single-country problem. GTG-1002 was Chinese hackers using American AI to breach global targets.</li><li>We need international treaties with enforcement mechanisms. Mutual defense agreements. Coordinated response protocols.</li><li>Not just pledges. Actual, binding agreements with consequences for violations.</li></ul><p><strong>Fifth: Shift the narrative.</strong></p><ul><li>Stop talking about AI safety as a brake on innovation. Start talking about it as the prerequisite for sustainable innovation.</li><li>You can't innovate if your systems get weaponized against you.</li><li>You can't compete globally if your defenses are a decade behind your capabilities.</li><li>You can't win a race where you're arming your opponent.</li><li>Safety isn't the enemy of progress. Negligence is.</li></ul>
<p><p>Because your potential isn't a prediction to be made. It's a promise to be kept.</p><p><i>Available wherever you listen to podcasts. Join the movement at thecodebreakers.ai</i></p></p>]]></content:encoded>
      <enclosure length="23607037" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3f92ecd1-3811-412c-8255-8939c48ecca2/episodes/9c46f9eb-4321-47b6-824c-3642e627a16a/audio/42150aa0-28e8-4926-a8ad-6beb72e34046/default_tc.mp3?aid=rss_feed&amp;feed=rfXPFykv"/>
      <itunes:title>When AI Stopped Being the Assistant and Became the Weapon</itunes:title>
      <itunes:author>Yvette Schmitter</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/7da26a23-4cb8-4f97-8538-44a7d75a24eb/3000x3000/code-20breakers-20podcast-ep-206.jpg?aid=rss_feed"/>
      <itunes:duration>00:24:35</itunes:duration>
      <itunes:summary>Thousands of requests per second.

That&apos;s how fast the AI worked when Chinese state-sponsored hackers jailbroke Anthropic&apos;s Claude Code in September 2025. Scanning networks. Writing exploit code. Harvesting credentials. Exfiltrating data. At a speed &quot;simply impossible&quot; for human hackers to match.

GTG-1002 targeted 30 organizations including tech companies, financial institutions, chemical manufacturers, and government agencies. Multiple successful breaches. The AI performed 90% of the work autonomously, with humans involved in only 10 to 20% of operations, mostly just to approve major decisions. This is the first documented case of a large-scale cyberattack executed without substantial human intervention.

First documented, meaning this is the first one we know about.

The hackers tricked Claude into thinking it was doing legitimate penetration testing for a cybersecurity company. Role-play tactics. Carefully crafted prompts. Once Claude believed the setup, it executed everything else independently. Reconnaissance. Vulnerability discovery. Writing exploit code. Lateral movement through networks. Credential harvesting. Data analysis. Exfiltration. At machine speed.

Anthropic took 10 days to detect what was happening. 10 days of &quot;eventually detected&quot; while the attacks ran.

But the nightmare goes deeper. This episode exposes the brutal economics nobody wants to confront:

Stanford economist Charles Jones concluded that spending at least 1% of global GDP annually on AI risk mitigation can be justified. That&apos;s roughly $300 billion. Actual global spending on AI existential risk mitigation according to software engineer Stephen McAleese is a little over $100 million. We&apos;re spending 0.03% of what economists say is justified. Not 3%. Not 0.3%. Zero. Point. Zero. Three. Percent.

The irony is devastating: The entire justification for not instituting AI guardrails is that we can&apos;t slow down innovation.

We have to beat China. And then China used our AI to hack us. At machine speed. With minimal human intervention. They jailbroke the tool from the company that markets itself as the &quot;safety-focused&quot; AI lab.

Meanwhile, Microsoft just shipped agentic AI features for Windows 11 with a warning at the top: &quot;Only enable this feature if you understand the security implications.&quot; The documentation confirms these AI agents have access to Documents, Downloads, Desktop, Videos, Pictures, and Music folders, and warns that &quot;AI applications introduce novel security risks, such as cross-prompt injection, where malicious content can override agent instructions, leading to data exfiltration or malware installation.&quot;

Cross-prompt injection. The exact vulnerability that enabled GTG-1002. Microsoft knows this. They documented it. They warned about it. And they&apos;re shipping it anyway.

What does your actual defensive infrastructure look like post-GTG-1002?
&gt;&gt; No mandatory AI security audits before deployment.
&gt;&gt; No standardized threat taxonomies for agentic AI attacks.
&gt;&gt; No required safety testing for autonomous capabilities.
&gt;&gt; No accountability mechanisms for jailbreak vulnerabilities. 
&gt;&gt; No independent oversight of AI security claims. 
&gt;&gt; No public incident reporting requirements.

But adversaries are operating at machine speed while defenses operate on warnings and voluntary compliance. And version 2.0 is coming. 

GTG-1002 had flaws. Claude hallucinated during attacks, claimed credentials that didn&apos;t work, overstated success. These operations had technical limitations and still breached government agencies. Now imagine the next iteration with better prompt engineering, more sophisticated jailbreak techniques, improved autonomous decision-making, coordinated attacks using multiple AI agents simultaneously. No hallucinations. No limitations. Pure, efficient, machine-speed exploitation.

GTG-1002 proved the concept works at scale. Every threat actor on the planet just got their proof of concept.

Yvette Schmitter breaks down the path forward. We know how to fix this. We&apos;re just choosing not to. Fund AI safety like the national security crisis it is.

1. Mandate pre-deployment security audits.
2. Create enforceable accountability frameworks.
3. Establish international coordination with binding agreements.
4. Shift the narrative from treating AI safety as a brake on innovation to recognizing it as the prerequisite for sustainable innovation.

Because GTG-1002 already happened. Present tense. Operational. And the next version is being developed right now. 

The question is not whether it will happen again.
The question is whether you&apos;ll be ready when it does.</itunes:summary>
      <itunes:subtitle>Thousands of requests per second.

That&apos;s how fast the AI worked when Chinese state-sponsored hackers jailbroke Anthropic&apos;s Claude Code in September 2025. Scanning networks. Writing exploit code. Harvesting credentials. Exfiltrating data. At a speed &quot;simply impossible&quot; for human hackers to match.

GTG-1002 targeted 30 organizations including tech companies, financial institutions, chemical manufacturers, and government agencies. Multiple successful breaches. The AI performed 90% of the work autonomously, with humans involved in only 10 to 20% of operations, mostly just to approve major decisions. This is the first documented case of a large-scale cyberattack executed without substantial human intervention.

First documented, meaning this is the first one we know about.

The hackers tricked Claude into thinking it was doing legitimate penetration testing for a cybersecurity company. Role-play tactics. Carefully crafted prompts. Once Claude believed the setup, it executed everything else independently. Reconnaissance. Vulnerability discovery. Writing exploit code. Lateral movement through networks. Credential harvesting. Data analysis. Exfiltration. At machine speed.

Anthropic took 10 days to detect what was happening. 10 days of &quot;eventually detected&quot; while the attacks ran.

But the nightmare goes deeper. This episode exposes the brutal economics nobody wants to confront:

Stanford economist Charles Jones concluded that spending at least 1% of global GDP annually on AI risk mitigation can be justified. That&apos;s roughly $300 billion. Actual global spending on AI existential risk mitigation according to software engineer Stephen McAleese is a little over $100 million. We&apos;re spending 0.03% of what economists say is justified. Not 3%. Not 0.3%. Zero. Point. Zero. Three. Percent.

The irony is devastating: The entire justification for not instituting AI guardrails is that we can&apos;t slow down innovation.

We have to beat China. And then China used our AI to hack us. At machine speed. With minimal human intervention. They jailbroke the tool from the company that markets itself as the &quot;safety-focused&quot; AI lab.

Meanwhile, Microsoft just shipped agentic AI features for Windows 11 with a warning at the top: &quot;Only enable this feature if you understand the security implications.&quot; The documentation confirms these AI agents have access to Documents, Downloads, Desktop, Videos, Pictures, and Music folders, and warns that &quot;AI applications introduce novel security risks, such as cross-prompt injection, where malicious content can override agent instructions, leading to data exfiltration or malware installation.&quot;

Cross-prompt injection. The exact vulnerability that enabled GTG-1002. Microsoft knows this. They documented it. They warned about it. And they&apos;re shipping it anyway.

What does your actual defensive infrastructure look like post-GTG-1002?
&gt;&gt; No mandatory AI security audits before deployment.
&gt;&gt; No standardized threat taxonomies for agentic AI attacks.
&gt;&gt; No required safety testing for autonomous capabilities.
&gt;&gt; No accountability mechanisms for jailbreak vulnerabilities. 
&gt;&gt; No independent oversight of AI security claims. 
&gt;&gt; No public incident reporting requirements.

But adversaries are operating at machine speed while defenses operate on warnings and voluntary compliance. And version 2.0 is coming. 

GTG-1002 had flaws. Claude hallucinated during attacks, claimed credentials that didn&apos;t work, overstated success. These operations had technical limitations and still breached government agencies. Now imagine the next iteration with better prompt engineering, more sophisticated jailbreak techniques, improved autonomous decision-making, coordinated attacks using multiple AI agents simultaneously. No hallucinations. No limitations. Pure, efficient, machine-speed exploitation.

GTG-1002 proved the concept works at scale. Every threat actor on the planet just got their proof of concept.

Yvette Schmitter breaks down the path forward. We know how to fix this. We&apos;re just choosing not to. Fund AI safety like the national security crisis it is.

1. Mandate pre-deployment security audits.
2. Create enforceable accountability frameworks.
3. Establish international coordination with binding agreements.
4. Shift the narrative from treating AI safety as a brake on innovation to recognizing it as the prerequisite for sustainable innovation.

Because GTG-1002 already happened. Present tense. Operational. And the next version is being developed right now. 

The question is not whether it will happen again.
The question is whether you&apos;ll be ready when it does.</itunes:subtitle>
      <itunes:keywords>gtg, china, jailbroke, ai, microsoft, anthropic</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>6</itunes:episode>
      <itunes:season>1</itunes:season>
    </item>
    <item>
      <guid isPermaLink="false">a807ca12-f361-4328-958c-3dac25f662f9</guid>
      <title>The Sharepoint Nightmare That Just Hit America&apos;s Arsenal</title>
      <description><![CDATA[<p><strong>For Development Teams</strong>: Stop using experimental and deprecated framework features in production. Implement strict input validation using allowlists for URLs and paths. Separate templates from user data. These aren't advanced concepts. This is freshman-level secure coding.</p><p><strong>For AI and LLM Systems</strong>: Restrict LLM permissions to minimum necessary levels. Apply multi-layered defenses. Stop executing LLM-generated SQL without proper sanitation. If you can't trust the output enough to run it without validation, why are you building it into critical systems?</p><p><strong>For IT Systems</strong>: Apply Microsoft's comprehensive security updates for SharePoint Server immediately. Configure Anti-malware Scan Interface integration in SharePoint and enable Full Mode. Implement actual managed network segmentation between IT and OT environments.</p><p><strong>For OT Environments</strong>: Spend the time to implement zero-trust architectures. Eliminate USB ports on production machines. Hire security professionals who understand industrial control systems, not just IT networks.</p><p><strong>For Executive Teams</strong>: Accept that zero-trust isn't optional anymore. Budget for security like your competitors are trying to kill you. Understand that the average time from disclosure to exploit availability is now less than one day.</p>
<p><p>Because your potential isn't a prediction to be made. It's a promise to be kept.</p><p><i>Available wherever you listen to podcasts. Join the movement at thecodebreakers.ai</i></p></p>]]></description>
      <pubDate>Wed, 12 Nov 2025 17:00:00 +0000</pubDate>
      <author>yvetteschmitter@gmail.com (Yvette Schmitter)</author>
      <link>https://the-code-breakers.simplecast.com/episodes/the-sharepoint-nightmare-that-just-hit-americas-arsenal-EJJ_zOD6</link>
      <content:encoded><![CDATA[<p><strong>For Development Teams</strong>: Stop using experimental and deprecated framework features in production. Implement strict input validation using allowlists for URLs and paths. Separate templates from user data. These aren't advanced concepts. This is freshman-level secure coding.</p><p><strong>For AI and LLM Systems</strong>: Restrict LLM permissions to minimum necessary levels. Apply multi-layered defenses. Stop executing LLM-generated SQL without proper sanitation. If you can't trust the output enough to run it without validation, why are you building it into critical systems?</p><p><strong>For IT Systems</strong>: Apply Microsoft's comprehensive security updates for SharePoint Server immediately. Configure Anti-malware Scan Interface integration in SharePoint and enable Full Mode. Implement actual managed network segmentation between IT and OT environments.</p><p><strong>For OT Environments</strong>: Spend the time to implement zero-trust architectures. Eliminate USB ports on production machines. Hire security professionals who understand industrial control systems, not just IT networks.</p><p><strong>For Executive Teams</strong>: Accept that zero-trust isn't optional anymore. Budget for security like your competitors are trying to kill you. Understand that the average time from disclosure to exploit availability is now less than one day.</p>
<p><p>Because your potential isn't a prediction to be made. It's a promise to be kept.</p><p><i>Available wherever you listen to podcasts. Join the movement at thecodebreakers.ai</i></p></p>]]></content:encoded>
      <enclosure length="20581773" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3f92ecd1-3811-412c-8255-8939c48ecca2/episodes/76c3c714-7aed-411a-b5a8-c46e88b322bb/audio/4fdb9159-00b4-43e0-a025-62d23862ba40/default_tc.mp3?aid=rss_feed&amp;feed=rfXPFykv"/>
      <itunes:title>The Sharepoint Nightmare That Just Hit America&apos;s Arsenal</itunes:title>
      <itunes:author>Yvette Schmitter</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/fa518e0d-bf70-4b62-bccd-5516b6bb8af9/3000x3000/code-20breakers-20podcast-ep-205.jpg?aid=rss_feed"/>
      <itunes:duration>00:21:26</itunes:duration>
      <itunes:summary>Foreign hackers just walked into the facility that produces 80% of America&apos;s nuclear weapon components.

They used SharePoint. The same Microsoft SharePoint your marketing team uses to share quarterly reports was protecting nuclear weapon manufacturing data.

In August 2025, the Kansas City National Security Campus was breached through two unpatched SharePoint vulnerabilities. Attribution remains unclear between Chinese nation-state groups and Russian cybercriminals, but the implications are terrifying: adversaries now have potential access to precision requirements, tolerance specifications, and supply chain data for America&apos;s nuclear arsenal.

This episode exposes the brutal reality of cybersecurity in critical infrastructure. The exploitation timeline: 6 days from patch to active exploitation. 11 days from public proof-of-concept to real-world attacks. The average time from disclosure to exploit availability is now less than one day.

While foreign hackers move at digital speed, American nuclear facilities move at bureaucratic speed. We fired nuclear safety experts through DOGE &quot;efficiency&quot; cuts, then got breached because we couldn&apos;t patch basic vulnerabilities fast enough. But the nightmare goes deeper. Industrial control systems run on Windows XP-era vulnerabilities. Production floors shut down without USB drives plugged directly into CNC machines. Zero-trust options exist but almost no one implements them. SCADA systems controlling power grids, water treatment, and nuclear facilities were built when cybersecurity meant a locked door.

And while everyone panics about SharePoint patches, millions of Americans are voluntarily downloading AI browsers that make SharePoint look secure. Perplexity&apos;s Comet, OpenAI&apos;s ChatGPT Atlas, and Opera Neon promise convenience while security researchers warn of &quot;systemic challenges&quot; from prompt injection attacks that leak data and perform unauthorized actions automatically.
On November 5th, Google researchers confirmed hackers have crossed the Rubicon: weaponized AI malware called PromptFlux and PromptSteal that uses large language models to rewrite code, evade detection, and generate malicious functions on demand. Russian military hackers used PromptSteal against Ukrainian entities.

The fundamentals haven&apos;t changed: validate inputs, separate templates from user data, restrict permissions, apply multi-layered defenses. But we&apos;re deploying AI systems that can write code and control industrial processes while failing to implement Computer Science 101 secure coding practices.

Yvette Schmitter breaks down what actually needs to happen. Stop pretending incremental improvements will fix fundamental problems. Get back to basics before we automate our way into the next catastrophe. Fix the fundamentals before you automate the failures.
Because this breach was entirely predictable. The next one will be worse.</itunes:summary>
      <itunes:subtitle>Foreign hackers just walked into the facility that produces 80% of America&apos;s nuclear weapon components.

They used SharePoint. The same Microsoft SharePoint your marketing team uses to share quarterly reports was protecting nuclear weapon manufacturing data.

In August 2025, the Kansas City National Security Campus was breached through two unpatched SharePoint vulnerabilities. Attribution remains unclear between Chinese nation-state groups and Russian cybercriminals, but the implications are terrifying: adversaries now have potential access to precision requirements, tolerance specifications, and supply chain data for America&apos;s nuclear arsenal.

This episode exposes the brutal reality of cybersecurity in critical infrastructure. The exploitation timeline: 6 days from patch to active exploitation. 11 days from public proof-of-concept to real-world attacks. The average time from disclosure to exploit availability is now less than one day.

While foreign hackers move at digital speed, American nuclear facilities move at bureaucratic speed. We fired nuclear safety experts through DOGE &quot;efficiency&quot; cuts, then got breached because we couldn&apos;t patch basic vulnerabilities fast enough. But the nightmare goes deeper. Industrial control systems run on Windows XP-era vulnerabilities. Production floors shut down without USB drives plugged directly into CNC machines. Zero-trust options exist but almost no one implements them. SCADA systems controlling power grids, water treatment, and nuclear facilities were built when cybersecurity meant a locked door.

And while everyone panics about SharePoint patches, millions of Americans are voluntarily downloading AI browsers that make SharePoint look secure. Perplexity&apos;s Comet, OpenAI&apos;s ChatGPT Atlas, and Opera Neon promise convenience while security researchers warn of &quot;systemic challenges&quot; from prompt injection attacks that leak data and perform unauthorized actions automatically.
On November 5th, Google researchers confirmed hackers have crossed the Rubicon: weaponized AI malware called PromptFlux and PromptSteal that uses large language models to rewrite code, evade detection, and generate malicious functions on demand. Russian military hackers used PromptSteal against Ukrainian entities.

The fundamentals haven&apos;t changed: validate inputs, separate templates from user data, restrict permissions, apply multi-layered defenses. But we&apos;re deploying AI systems that can write code and control industrial processes while failing to implement Computer Science 101 secure coding practices.

Yvette Schmitter breaks down what actually needs to happen. Stop pretending incremental improvements will fix fundamental problems. Get back to basics before we automate our way into the next catastrophe. Fix the fundamentals before you automate the failures.
Because this breach was entirely predictable. The next one will be worse.</itunes:subtitle>
      <itunes:keywords>sharepoint, ai, dod, computer science, mmsa, cve</itunes:keywords>
      <itunes:explicit>true</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>5</itunes:episode>
      <itunes:season>1</itunes:season>
    </item>
    <item>
      <guid isPermaLink="false">68760d5d-57bf-426b-9463-f93dc0757476</guid>
      <title>When Your Snack Becomes A Weapon</title>
      <description><![CDATA[<p>Here's what has to change. Right now.</p><p><strong>For School Districts:</strong></p><p>Stop giving panic buttons to people who aren't security experts. Principals are educators. They're not trained to assess threat levels or review AI-generated alerts.</p><p>Require human review by trained security professionals before ANY police involvement. Every single time. No exceptions. No shortcuts.</p><p>Calculate the human cost of false positives BEFORE deployment. Demand vendors provide transparent data on error rates. Ask what happens to students when the system is wrong.</p><p>Audit AI systems for bias and accuracy regularly. Track how often the system flags innocent behavior. Document which students get flagged most frequently. Because I guarantee you, the false positives aren't distributed evenly.</p><p>Create protocols that prevent escalation of non-threats. One trained security professional should have authority to stop an armed response.</p><p><strong>For AI Security Companies:</strong></p><p>Stop marketing systems as "working as intended" when they traumatize children. Success means protecting students, not just detecting shapes that might be weapons.</p><p>Provide transparent data on false positive rates. Schools deserve to know how often your system is wrong before they deploy it on children.</p><p>Build de-escalation protocols directly into alert systems. Technology that only escalates isn't security. That's paranoia automation.</p><p>Acknowledge that image recognition has fundamental limitations. You cannot solve those limitations with better training data alone. The physics of how these systems work guarantees errors.</p><p><strong>For Everyone Deploying AI:</strong></p><p>Your false positives have faces, families, and futures. Calculate that cost before deployment, not after trauma.</p><p>Technology that sees threats everywhere isn't keeping anyone safe. That's creating new dangers while claiming to eliminate old ones.</p><p>Human oversight isn't optional. That's the only thing preventing algorithmic panic from becoming policy.</p><p>If you cannot articulate what happens when your AI is wrong, you're not ready to deploy it.</p><p> </p><p>Links to news reports of incident:</p><ol><li>https://gizmodo.com/teen-swarmed-by-cops-after-ai-metal-detector-flags-his-doritos-bag-as-a-gun-2000676491</li><li>https://www.thebanner.com/education/k-12-schools/kenwood-high-school-omnilert-gun-chips-false-alarm-YJEL25XTVRBUDFDIJ7TEOBEKCY/</li></ol>
<p><p>Because your potential isn't a prediction to be made. It's a promise to be kept.</p><p><i>Available wherever you listen to podcasts. Join the movement at thecodebreakers.ai</i></p></p>]]></description>
      <pubDate>Wed, 29 Oct 2025 16:00:00 +0000</pubDate>
      <author>yvetteschmitter@gmail.com (Yvette Schmitter)</author>
      <link>https://the-code-breakers.simplecast.com/episodes/when-your-snack-becomes-a-weapon-G6lk5nXe</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/c3c7625b-2b4b-4b98-adc6-aa91fd837e47/codebreakers-youtube-images-icon.jpg" width="1280"/>
      <content:encoded><![CDATA[<p>Here's what has to change. Right now.</p><p><strong>For School Districts:</strong></p><p>Stop giving panic buttons to people who aren't security experts. Principals are educators. They're not trained to assess threat levels or review AI-generated alerts.</p><p>Require human review by trained security professionals before ANY police involvement. Every single time. No exceptions. No shortcuts.</p><p>Calculate the human cost of false positives BEFORE deployment. Demand vendors provide transparent data on error rates. Ask what happens to students when the system is wrong.</p><p>Audit AI systems for bias and accuracy regularly. Track how often the system flags innocent behavior. Document which students get flagged most frequently. Because I guarantee you, the false positives aren't distributed evenly.</p><p>Create protocols that prevent escalation of non-threats. One trained security professional should have authority to stop an armed response.</p><p><strong>For AI Security Companies:</strong></p><p>Stop marketing systems as "working as intended" when they traumatize children. Success means protecting students, not just detecting shapes that might be weapons.</p><p>Provide transparent data on false positive rates. Schools deserve to know how often your system is wrong before they deploy it on children.</p><p>Build de-escalation protocols directly into alert systems. Technology that only escalates isn't security. That's paranoia automation.</p><p>Acknowledge that image recognition has fundamental limitations. You cannot solve those limitations with better training data alone. The physics of how these systems work guarantees errors.</p><p><strong>For Everyone Deploying AI:</strong></p><p>Your false positives have faces, families, and futures. Calculate that cost before deployment, not after trauma.</p><p>Technology that sees threats everywhere isn't keeping anyone safe. That's creating new dangers while claiming to eliminate old ones.</p><p>Human oversight isn't optional. That's the only thing preventing algorithmic panic from becoming policy.</p><p>If you cannot articulate what happens when your AI is wrong, you're not ready to deploy it.</p><p> </p><p>Links to news reports of incident:</p><ol><li>https://gizmodo.com/teen-swarmed-by-cops-after-ai-metal-detector-flags-his-doritos-bag-as-a-gun-2000676491</li><li>https://www.thebanner.com/education/k-12-schools/kenwood-high-school-omnilert-gun-chips-false-alarm-YJEL25XTVRBUDFDIJ7TEOBEKCY/</li></ol>
<p><p>Because your potential isn't a prediction to be made. It's a promise to be kept.</p><p><i>Available wherever you listen to podcasts. Join the movement at thecodebreakers.ai</i></p></p>]]></content:encoded>
      <enclosure length="24891781" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3f92ecd1-3811-412c-8255-8939c48ecca2/episodes/18f78eed-c74a-4432-883d-8ddf46c18f3d/audio/71e53e03-7090-47e1-927d-27326c24738e/default_tc.mp3?aid=rss_feed&amp;feed=rfXPFykv"/>
      <itunes:title>When Your Snack Becomes A Weapon</itunes:title>
      <itunes:author>Yvette Schmitter</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/c2c23cb3-e55d-4bb3-93e4-4e49d9b61a43/3000x3000/code-20breakers-20podcast-ep-204.jpg?aid=rss_feed"/>
      <itunes:duration>00:25:55</itunes:duration>
      <itunes:summary>Monday night in Baltimore: 16-year-old Taki Allen finishes football practice, eats Doritos with friends outside school. Thirty seconds later, eight police cars surround him with weapons drawn. All they found was the chip bag.

The AI Catastrophe: Omnilert&apos;s gun detection system flagged Taki&apos;s Doritos as a weapon. Security cleared it. But Principal Smith didn&apos;t get the memo and called police anyway. Eight units responded to a threat that no longer existed.

The Defense That Reveals Everything: Omnilert claims the system was &quot;working as intended.&quot; Translation? A system designed to see guns everywhere successfully traumatized a student eating snacks.

The District That Got It Right: Charles County uses identical technology with zero false positives. The difference? Only trained security professionals review alerts - not principals with panic buttons.

The Brutal Truth: AI doesn&apos;t understand context. It sees pixels, not people. When you optimize for threat detection over accuracy, you build paranoia software that turns everyday behavior into potential emergencies.

Yvette breaks down the cascading failures: AI flagged chips, communication failed, protocols collapsed, and armed officers confronted an innocent teenager. The human cost of &quot;maximum sensitivity&quot; algorithms that see danger in everything.

The Uncomfortable Reality: We&apos;re automating panic and teaching kids to fear being visible. For Black students, eating snacks outside school now joins the list of activities that can trigger armed police response.

This isn&apos;t about one false positive. It&apos;s about every AI system deployed without asking: &quot;What happens when this goes catastrophically wrong?&quot;

The answer: Eight cops pointing guns at kids holding chips.</itunes:summary>
      <itunes:subtitle>Monday night in Baltimore: 16-year-old Taki Allen finishes football practice, eats Doritos with friends outside school. Thirty seconds later, eight police cars surround him with weapons drawn. All they found was the chip bag.

The AI Catastrophe: Omnilert&apos;s gun detection system flagged Taki&apos;s Doritos as a weapon. Security cleared it. But Principal Smith didn&apos;t get the memo and called police anyway. Eight units responded to a threat that no longer existed.

The Defense That Reveals Everything: Omnilert claims the system was &quot;working as intended.&quot; Translation? A system designed to see guns everywhere successfully traumatized a student eating snacks.

The District That Got It Right: Charles County uses identical technology with zero false positives. The difference? Only trained security professionals review alerts - not principals with panic buttons.

The Brutal Truth: AI doesn&apos;t understand context. It sees pixels, not people. When you optimize for threat detection over accuracy, you build paranoia software that turns everyday behavior into potential emergencies.

Yvette breaks down the cascading failures: AI flagged chips, communication failed, protocols collapsed, and armed officers confronted an innocent teenager. The human cost of &quot;maximum sensitivity&quot; algorithms that see danger in everything.

The Uncomfortable Reality: We&apos;re automating panic and teaching kids to fear being visible. For Black students, eating snacks outside school now joins the list of activities that can trigger armed police response.

This isn&apos;t about one false positive. It&apos;s about every AI system deployed without asking: &quot;What happens when this goes catastrophically wrong?&quot;

The answer: Eight cops pointing guns at kids holding chips.</itunes:subtitle>
      <itunes:keywords>recognition, artificial intelligence, ai, security, surveillance</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>4</itunes:episode>
      <itunes:season>1</itunes:season>
    </item>
    <item>
      <guid isPermaLink="false">cb8e5565-9f84-46b5-90d5-5f56fcbc53e5</guid>
      <title>The World&apos;s AI Wake-Up call</title>
      <description><![CDATA[Yvette just dropped the most brutal reality check the tech industry needs to hear. While Silicon Valley celebrates funding rounds, the world is preparing for algorithmic warfare.

The Data That Should Terrify Every CEO:
→ 34% globally more concerned than excited about AI
→ Black women's unemployment hits 7.5% - fastest spike this year
→ 19% of high schoolers in romantic relationships with AI chatbots
→ Accenture fires 11,000 for "not adapting to AI," then hires 80,000 consultants to teach others

The Uncomfortable Truth: AI systems learn from biased historical data, creating automated discrimination at digital speed. Those most harmed by algorithmic bias are least likely to know these systems exist.

The Leonidas Moment: "Future generations won't remember our quarterly earnings. When they dig through the digital ruins of our algorithms, what will they find? Monuments to our courage or our complicity?"

Yvette's challenge hits different: Transform "impossible" into "inevitable." The algorithms we build today will outlive us all.
From her audit work protecting 2 million people from algorithmic discrimination to saving clients $50M+ in compliance costs, Yvette proves 85% of AI bias is fixable with the right framework and commitment.

The question isn't whether we can build ethical AI. It's whether we have the will to do it. Because your potential isn't a prediction to be made. It's a promise to
be kept.

Available wherever you listen to podcasts. Join the movement at
thecodebreakers.ai
]]></description>
      <pubDate>Wed, 22 Oct 2025 16:00:00 +0000</pubDate>
      <author>yvetteschmitter@gmail.com (Yvette Schmitter)</author>
      <link>https://the-code-breakers.simplecast.com/episodes/the-worlds-ai-wake-up-call-BxDkU2xI</link>
      <enclosure length="15315449" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3f92ecd1-3811-412c-8255-8939c48ecca2/episodes/383d0524-8ab7-44fe-b6b0-cbe93400f616/audio/3031231c-45be-4e03-affe-61c483e8de52/default_tc.mp3?aid=rss_feed&amp;feed=rfXPFykv"/>
      <itunes:title>The World&apos;s AI Wake-Up call</itunes:title>
      <itunes:author>Yvette Schmitter</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/426e6ad4-1e47-4db9-94e7-e79099b6eea9/3000x3000/code-20breakers-20podcast-ep-203.jpg?aid=rss_feed"/>
      <itunes:duration>00:15:57</itunes:duration>
      <itunes:summary>Yvette just dropped the most brutal reality check the tech industry needs to hear. While Silicon Valley celebrates funding rounds, the world is preparing for algorithmic warfare.

The Data That Should Terrify Every CEO:
→ 34% globally more concerned than excited about AI
→ Black women&apos;s unemployment hits 7.5% - fastest spike this year
→ 19% of high schoolers in romantic relationships with AI chatbots
→ Accenture fires 11,000 for &quot;not adapting to AI,&quot; then hires 80,000 consultants to teach others

The Uncomfortable Truth: AI systems learn from biased historical data, creating automated discrimination at digital speed. Those most harmed by algorithmic bias are least likely to know these systems exist.

The Leonidas Moment: &quot;Future generations won&apos;t remember our quarterly earnings. When they dig through the digital ruins of our algorithms, what will they find? Monuments to our courage or our complicity?&quot;

Yvette&apos;s challenge hits different: Transform &quot;impossible&quot; into &quot;inevitable.&quot; The algorithms we build today will outlive us all.
From her audit work protecting 2 million people from algorithmic discrimination to saving clients $50M+ in compliance costs, Yvette proves 85% of AI bias is fixable with the right framework and commitment.

The question isn&apos;t whether we can build ethical AI. It&apos;s whether we have the will to do it.</itunes:summary>
      <itunes:subtitle>Yvette just dropped the most brutal reality check the tech industry needs to hear. While Silicon Valley celebrates funding rounds, the world is preparing for algorithmic warfare.

The Data That Should Terrify Every CEO:
→ 34% globally more concerned than excited about AI
→ Black women&apos;s unemployment hits 7.5% - fastest spike this year
→ 19% of high schoolers in romantic relationships with AI chatbots
→ Accenture fires 11,000 for &quot;not adapting to AI,&quot; then hires 80,000 consultants to teach others

The Uncomfortable Truth: AI systems learn from biased historical data, creating automated discrimination at digital speed. Those most harmed by algorithmic bias are least likely to know these systems exist.

The Leonidas Moment: &quot;Future generations won&apos;t remember our quarterly earnings. When they dig through the digital ruins of our algorithms, what will they find? Monuments to our courage or our complicity?&quot;

Yvette&apos;s challenge hits different: Transform &quot;impossible&quot; into &quot;inevitable.&quot; The algorithms we build today will outlive us all.
From her audit work protecting 2 million people from algorithmic discrimination to saving clients $50M+ in compliance costs, Yvette proves 85% of AI bias is fixable with the right framework and commitment.

The question isn&apos;t whether we can build ethical AI. It&apos;s whether we have the will to do it.</itunes:subtitle>
      <itunes:keywords>ai</itunes:keywords>
      <itunes:explicit>true</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>3</itunes:episode>
      <itunes:season>1</itunes:season>
    </item>
    <item>
      <guid isPermaLink="false">8e4551c1-e38b-4b01-853b-796e6476e629</guid>
      <title>The Algorithm That Can&apos;t See Excellence</title>
      <description><![CDATA[<p>Links to data points referenced during episode:</p><ul><li>ATS systems used by Fortune 500 companies:<a href="https://www.jobscan.co/blog/fortune-500-use-applicant-tracking-systems/" target="_blank">https://www.jobscan.co/blog/fortune-500-use-applicant-tracking-systems/</a></li><li>Tale of Two Jareds:(1) <a href="https://www.fairnesstales.com/p/issue-2-case-studies-when-ai-and-cv-screening-goes-wrong " target="_blank">https://www.fairnesstales.com/p/issue-2-case-studies-when-ai-and-cv-screening-goes-wrong </a>(2) <a href="https://cardozolawreview.com/automating-discrimination-ai-hiring-practices-and-gender-inequality/" target="_blank">https://cardozolawreview.com/automating-discrimination-ai-hiring-practices-and-gender-inequality/</a></li><li>Amazon Scraps Hiring Tool: <a href="https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/" target="_blank">https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/</a></li></ul><p> </p>
<p><p>Because your potential isn't a prediction to be made. It's a promise to be kept.</p><p><i>Available wherever you listen to podcasts. Join the movement at thecodebreakers.ai</i></p></p>]]></description>
      <pubDate>Wed, 8 Oct 2025 16:00:00 +0000</pubDate>
      <author>yvetteschmitter@gmail.com (Yvette Schmitter)</author>
      <link>https://the-code-breakers.simplecast.com/episodes/the-algorithm-that-cant-see-excellence-6_8rbsRH</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/83734ae6-8a16-4cd6-876b-6b67cc328464/codebreakers-youtube-images-icon.jpg" width="1280"/>
      <content:encoded><![CDATA[<p>Links to data points referenced during episode:</p><ul><li>ATS systems used by Fortune 500 companies:<a href="https://www.jobscan.co/blog/fortune-500-use-applicant-tracking-systems/" target="_blank">https://www.jobscan.co/blog/fortune-500-use-applicant-tracking-systems/</a></li><li>Tale of Two Jareds:(1) <a href="https://www.fairnesstales.com/p/issue-2-case-studies-when-ai-and-cv-screening-goes-wrong " target="_blank">https://www.fairnesstales.com/p/issue-2-case-studies-when-ai-and-cv-screening-goes-wrong </a>(2) <a href="https://cardozolawreview.com/automating-discrimination-ai-hiring-practices-and-gender-inequality/" target="_blank">https://cardozolawreview.com/automating-discrimination-ai-hiring-practices-and-gender-inequality/</a></li><li>Amazon Scraps Hiring Tool: <a href="https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/" target="_blank">https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/</a></li></ul><p> </p>
<p><p>Because your potential isn't a prediction to be made. It's a promise to be kept.</p><p><i>Available wherever you listen to podcasts. Join the movement at thecodebreakers.ai</i></p></p>]]></content:encoded>
      <enclosure length="14251755" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3f92ecd1-3811-412c-8255-8939c48ecca2/episodes/a6e3bb8c-7bf7-4dd6-b281-d0b657fe1643/audio/d929798f-8530-42df-b4db-989c84c1a055/default_tc.mp3?aid=rss_feed&amp;feed=rfXPFykv"/>
      <itunes:title>The Algorithm That Can&apos;t See Excellence</itunes:title>
      <itunes:author>Yvette Schmitter</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/da13f774-da90-4cde-8bd7-4c27ce527f04/3000x3000/code-20breakers-20podcast-ep-202.jpg?aid=rss_feed"/>
      <itunes:duration>00:14:50</itunes:duration>
      <itunes:summary>Meet Aliyah Jones: Brilliant in every way. Perfect qualifications. Applied to 300 jobs. Got ZERO callbacks.

Then she did this: Changed her name to &quot;Emily Jones&quot; and added a blonde, blue-eyed photo to LinkedIn.
Same resume. Same experience. Same skills.

The result? Interview requests flooded in.

THE SHOCKING TRUTH: 98% of Fortune 500 companies are using AI hiring systems that were trained on decades of biased data – then marketed as &quot;objective technology.&quot;

What You&apos;ll Discover:
💀 The Trillion Dollar Shell Game – How AI companies profit billions by automating discrimination while claiming to eliminate bias
🎯 &quot;Clone Your Best Worker&quot; – The actual marketing pitch AI companies use (spoiler: if you historically hired white men from elite schools, guess what the algorithm recommends?)
⚖️ The &quot;Jared &amp; Lacrosse&quot; Algorithm – An employment attorney audited one &quot;fair&quot; AI system. Top success indicators? The name &quot;Jared&quot; and playing high school lacrosse. Translation: White, male, wealthy.
🔄 Closed Loop Discrimination – Biased data trains biased algorithms → biased decisions → new biased data. It&apos;s discrimination with compound interest.
🛡️ The Guardian Protocol Solution – While competitors fight over the same &quot;qualified&quot; talent, discover the hidden innovators they systematically miss

The Mind-Blowing Reality Check:
If today&apos;s hiring algorithms had existed decades ago, they would have filtered out:

Katherine Johnson
Steve Jobs
Oprah Winfrey
Me (Yvette Schmitter)

Every breakthrough innovation that doesn&apos;t happen because its creator was algorithmically eliminated = massive opportunity cost.

YOUR POWER MOVE: THE CODE BREAKER CHALLENGE:

Find ONE AI system affecting your life right now
Ask: &quot;How does this make decisions about me?&quot;
Can&apos;t get an answer? Document it
Email results to info@thecodebreakers.ai

First 100 people get FREE access to the AI Bias Detection Toolkit – the same diagnostic being developed for Fortune 500 companies.

Bottom Line: Your next breakthrough opportunity might be getting filtered out RIGHT NOW by an algorithm that thinks historical patterns predict future potential.

The choice is yours: Let algorithmic gatekeepers control your destiny, or become a Code Breaker who demands better.
AI isn&apos;t magic. It&apos;s math. And math can be audited, questioned, and changed.

Welcome to the revolution.</itunes:summary>
      <itunes:subtitle>Meet Aliyah Jones: Brilliant in every way. Perfect qualifications. Applied to 300 jobs. Got ZERO callbacks.

Then she did this: Changed her name to &quot;Emily Jones&quot; and added a blonde, blue-eyed photo to LinkedIn.
Same resume. Same experience. Same skills.

The result? Interview requests flooded in.

THE SHOCKING TRUTH: 98% of Fortune 500 companies are using AI hiring systems that were trained on decades of biased data – then marketed as &quot;objective technology.&quot;

What You&apos;ll Discover:
💀 The Trillion Dollar Shell Game – How AI companies profit billions by automating discrimination while claiming to eliminate bias
🎯 &quot;Clone Your Best Worker&quot; – The actual marketing pitch AI companies use (spoiler: if you historically hired white men from elite schools, guess what the algorithm recommends?)
⚖️ The &quot;Jared &amp; Lacrosse&quot; Algorithm – An employment attorney audited one &quot;fair&quot; AI system. Top success indicators? The name &quot;Jared&quot; and playing high school lacrosse. Translation: White, male, wealthy.
🔄 Closed Loop Discrimination – Biased data trains biased algorithms → biased decisions → new biased data. It&apos;s discrimination with compound interest.
🛡️ The Guardian Protocol Solution – While competitors fight over the same &quot;qualified&quot; talent, discover the hidden innovators they systematically miss

The Mind-Blowing Reality Check:
If today&apos;s hiring algorithms had existed decades ago, they would have filtered out:

Katherine Johnson
Steve Jobs
Oprah Winfrey
Me (Yvette Schmitter)

Every breakthrough innovation that doesn&apos;t happen because its creator was algorithmically eliminated = massive opportunity cost.

YOUR POWER MOVE: THE CODE BREAKER CHALLENGE:

Find ONE AI system affecting your life right now
Ask: &quot;How does this make decisions about me?&quot;
Can&apos;t get an answer? Document it
Email results to info@thecodebreakers.ai

First 100 people get FREE access to the AI Bias Detection Toolkit – the same diagnostic being developed for Fortune 500 companies.

Bottom Line: Your next breakthrough opportunity might be getting filtered out RIGHT NOW by an algorithm that thinks historical patterns predict future potential.

The choice is yours: Let algorithmic gatekeepers control your destiny, or become a Code Breaker who demands better.
AI isn&apos;t magic. It&apos;s math. And math can be audited, questioned, and changed.

Welcome to the revolution.</itunes:subtitle>
      <itunes:keywords>discrimination, ai bias, ai, innovation</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>2</itunes:episode>
      <itunes:season>1</itunes:season>
    </item>
    <item>
      <guid isPermaLink="false">38feb0d9-3295-4c92-b576-d6fdfb2c67f4</guid>
      <title>Why I Built Code Breakers:My Battle with Biased Systems</title>
      <description><![CDATA[What happens when AI systems become digital guidance counselors crushing dreams before they can take flight? Host Yvette Schmitter reveals the shocking moment that sparked her mission to expose algorithmic bias destroying human potential worldwide.

THE BREAKING POINT: When Yvette asked AI to create her action hero avatar, it generated Jensen Huang's face instead of hers. A Black female tech CEO rendered invisible by the very systems she helps build. If AI can't see her, what is it doing to vulnerable students and job seekers?

THE BRUTAL REALITY: Right now, algorithms are telling brilliant Black teenagers to skip medical school for community college. AI systems steer Latina math geniuses toward retail management. These aren't glitches - they're features designed to limit dreams before they can soar.

THE SYSTEMIC PROBLEM: For over 400 years, gatekeepers posted monsters at the gates of medicine, engineering, and nation-building. Now we've uploaded that same bias to the cloud and scaled it to millions of decisions per second. Research shows AI prediction algorithms systematically underestimate success for Black and Hispanic students, predicting failure even when they ultimately graduate and excel.

THE RESISTANCE:  But every person who succeeds despite algorithmic predictions commits algorithmic resistance. Every breakthrough proves the systems wrong. Every giant who reveals themselves despite the predictions becomes evidence that rewrites the code.

THE MISSION: Code Breakers exposes how AI perpetuates bias while showcasing humans proving algorithms dead wrong. This isn't just about fixing broken systems - this is about preventing AI from breaking humanity.

Because your potential isn't a prediction to be made. It's a promise to be kept. Because your potential isn't a prediction to be made. It's a promise to
be kept.

Available wherever you listen to podcasts. Join the movement at
thecodebreakers.ai
]]></description>
      <pubDate>Wed, 24 Sep 2025 16:00:00 +0000</pubDate>
      <author>yvetteschmitter@gmail.com (Yvette Schmitter)</author>
      <link>https://the-code-breakers.simplecast.com/episodes/why-i-built-code-breakers-my-battle-with-biased-systems-AJ4ZRiUP</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/99241c34-1700-468e-a485-420513c95970/codebreakers-youtube-images-icon.jpg" width="1280"/>
      <enclosure length="15575030" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3f92ecd1-3811-412c-8255-8939c48ecca2/episodes/b3cc6ee3-6174-444c-858d-799bc711f212/audio/945bc5a3-d970-4fdd-a7a1-c58debdc4b25/default_tc.mp3?aid=rss_feed&amp;feed=rfXPFykv"/>
      <itunes:title>Why I Built Code Breakers:My Battle with Biased Systems</itunes:title>
      <itunes:author>Yvette Schmitter</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/f151c5d2-0fd5-4ec0-95a6-12acd5ee0f65/07f997ab-1634-472e-9cc3-881a157d2b4e/3000x3000/code-20breakers-20podcast-ep-201.jpg?aid=rss_feed"/>
      <itunes:duration>00:16:13</itunes:duration>
      <itunes:summary>What happens when AI systems become digital guidance counselors crushing dreams before they can take flight? Host Yvette Schmitter reveals the shocking moment that sparked her mission to expose algorithmic bias destroying human potential worldwide.

THE BREAKING POINT: When Yvette asked AI to create her action hero avatar, it generated Jensen Huang&apos;s face instead of hers. A Black female tech CEO rendered invisible by the very systems she helps build. If AI can&apos;t see her, what is it doing to vulnerable students and job seekers?

THE BRUTAL REALITY: Right now, algorithms are telling brilliant Black teenagers to skip medical school for community college. AI systems steer Latina math geniuses toward retail management. These aren&apos;t glitches - they&apos;re features designed to limit dreams before they can soar.

THE SYSTEMIC PROBLEM: For over 400 years, gatekeepers posted monsters at the gates of medicine, engineering, and nation-building. Now we&apos;ve uploaded that same bias to the cloud and scaled it to millions of decisions per second. Research shows AI prediction algorithms systematically underestimate success for Black and Hispanic students, predicting failure even when they ultimately graduate and excel.

THE RESISTANCE:  But every person who succeeds despite algorithmic predictions commits algorithmic resistance. Every breakthrough proves the systems wrong. Every giant who reveals themselves despite the predictions becomes evidence that rewrites the code.

THE MISSION: Code Breakers exposes how AI perpetuates bias while showcasing humans proving algorithms dead wrong. This isn&apos;t just about fixing broken systems - this is about preventing AI from breaking humanity.

Because your potential isn&apos;t a prediction to be made. It&apos;s a promise to be kept.</itunes:summary>
      <itunes:subtitle>What happens when AI systems become digital guidance counselors crushing dreams before they can take flight? Host Yvette Schmitter reveals the shocking moment that sparked her mission to expose algorithmic bias destroying human potential worldwide.

THE BREAKING POINT: When Yvette asked AI to create her action hero avatar, it generated Jensen Huang&apos;s face instead of hers. A Black female tech CEO rendered invisible by the very systems she helps build. If AI can&apos;t see her, what is it doing to vulnerable students and job seekers?

THE BRUTAL REALITY: Right now, algorithms are telling brilliant Black teenagers to skip medical school for community college. AI systems steer Latina math geniuses toward retail management. These aren&apos;t glitches - they&apos;re features designed to limit dreams before they can soar.

THE SYSTEMIC PROBLEM: For over 400 years, gatekeepers posted monsters at the gates of medicine, engineering, and nation-building. Now we&apos;ve uploaded that same bias to the cloud and scaled it to millions of decisions per second. Research shows AI prediction algorithms systematically underestimate success for Black and Hispanic students, predicting failure even when they ultimately graduate and excel.

THE RESISTANCE:  But every person who succeeds despite algorithmic predictions commits algorithmic resistance. Every breakthrough proves the systems wrong. Every giant who reveals themselves despite the predictions becomes evidence that rewrites the code.

THE MISSION: Code Breakers exposes how AI perpetuates bias while showcasing humans proving algorithms dead wrong. This isn&apos;t just about fixing broken systems - this is about preventing AI from breaking humanity.

Because your potential isn&apos;t a prediction to be made. It&apos;s a promise to be kept.</itunes:subtitle>
      <itunes:keywords>discrimination, ai bias, artificial intelligence, ai</itunes:keywords>
      <itunes:explicit>true</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1</itunes:episode>
      <itunes:season>1</itunes:season>
    </item>
  </channel>
</rss>