<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:media="http://search.yahoo.com/mrss/" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link href="https://feeds.simplecast.com/EndWK70X" rel="self" title="MP3 Audio" type="application/rss+xml"/>
    <atom:link href="https://simplecast.superfeedr.com" rel="hub" xmlns="http://www.w3.org/2005/Atom"/>
    <generator>https://simplecast.com</generator>
    <title>Lunchtime BABLing with Dr. Shea Brown</title>
    <description>Presented by Babl AI, this podcast discusses all issues related to algorithmic bias, algorithmic auditing, algorithmic governance, and the ethics of artificial intelligence and autonomous systems.</description>
    <copyright>2022 Lunchtime BABLing</copyright>
    <language>en</language>
    <pubDate>Mon, 23 Mar 2026 06:00:00 +0000</pubDate>
    <lastBuildDate>Mon, 23 Mar 2026 06:00:11 +0000</lastBuildDate>
    <image>
      <link>https://babl.ai</link>
      <title>Lunchtime BABLing with Dr. Shea Brown</title>
      <url>https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/29a6e7da-4750-4dc9-a72a-c833cea09a8f/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed</url>
    </image>
    <link>https://babl.ai</link>
    <itunes:type>episodic</itunes:type>
    <itunes:summary>Presented by Babl AI, this podcast discusses all issues related to algorithmic bias, algorithmic auditing, algorithmic governance, and the ethics of artificial intelligence and autonomous systems.</itunes:summary>
    <itunes:author>Babl AI, Jeffery Recker, Shea Brown</itunes:author>
    <itunes:explicit>false</itunes:explicit>
    <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/29a6e7da-4750-4dc9-a72a-c833cea09a8f/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
    <itunes:new-feed-url>https://feeds.simplecast.com/EndWK70X</itunes:new-feed-url>
    <itunes:keywords>ai, data analysis, data trust, ai ethics, ai governance, nlp, ai research, python, algorithm, data privacy, algorithm bias law, algorithmic auditing, shea brown, algorithmic bias, data analytics, philosophy, artificial intelligence, autonomous systems, babl, jeffery recker, babl ai, benjamin lange, research consultant, big data, data science, consultant, consulting, ethics, cyber security, data, digital ethics, digital policy, digitalization, research, education, ethics education, ethical ai, machine learning ethics, ethical consultants, ethics of artificial intelligence, ethics research, gdpr, governance of artificial intelligence, khoa lam, machine learning, machine learning algorithms, ml ethics, privacy</itunes:keywords>
    <itunes:owner>
      <itunes:name>Babl AI</itunes:name>
      <itunes:email>jeffery-recker@bablai.com</itunes:email>
    </itunes:owner>
    <itunes:category text="Technology"/>
    <itunes:category text="Business">
      <itunes:category text="Management"/>
    </itunes:category>
    <itunes:category text="Education"/>
    <item>
      <guid isPermaLink="false">4fb267c6-b441-4a6d-8591-bd4798289595</guid>
      <title>Model Drift to Bias and Discrimination: The Many Risks of AI: Part 2</title>
      <description><![CDATA[In Part 2 of this Lunchtime BABLing series on AI risk, Dr. Shea Brown, CEO of BABL AI, is joined again by Jeffery Recker to continue their lightning-round exploration of the real challenges organizations face when deploying AI.

This episode dives deeper into critical concepts such as model drift, bias vs. discrimination, and growing explainability gaps in modern AI systems — especially as organizations increasingly rely on large language models and automated decision-making tools.

Together, they discuss:

- What model drift is and how organizations can detect and manage it
- Why users (not just developers) should understand performance drift in AI systems
- The important distinction between statistical bias and illegal discrimination
- How bias can emerge even when demographic data isn’t explicitly used
- The role of diversity of thought and structured risk assessments in uncovering AI risks
- Why explainability is becoming harder as AI models grow more complex
- The trade-offs between performance, trust, fairness, and regulatory compliance

The conversation also explores broader questions around how AI is being used today, the limitations of “black-box” systems, and why validation, testing, and governance are becoming essential capabilities for organizations adopting AI at scale.

Check out the babl.ai website for more on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 23 Mar 2026 06:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Jeffery Recker)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/58db60e6-4066-47d2-8096-ef375e2510c7/lunchtime_babling_youtube_48.png" width="1280"/>
      <enclosure length="36618476" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/77c8f044-8775-4c4b-9260-c333a28e2b6d/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/audio/group/16a25fb2-dbe0-4ec9-b2e7-7c9645ae79e6/group-item/5984d4fb-6a03-4e8b-92d8-bf1e5422e923/128_default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Model Drift to Bias and Discrimination: The Many Risks of AI: Part 2</itunes:title>
      <itunes:author>Shea Brown, Jeffery Recker</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/70d3411d-2f46-4af7-9c53-3ce421e3fd3c/3000x3000/lunchtime_babling_logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:35:16</itunes:duration>
      <itunes:summary>In Part 2 of this Lunchtime BABLing series on AI risk, Dr. Shea Brown, CEO of BABL AI, is joined again by Jeffery Recker to continue their lightning-round exploration of the real challenges organizations face when deploying AI.

This episode dives deeper into critical concepts such as model drift, bias vs. discrimination, and growing explainability gaps in modern AI systems — especially as organizations increasingly rely on large language models and automated decision-making tools.

Together, they discuss:

- What model drift is and how organizations can detect and manage it
- Why users (not just developers) should understand performance drift in AI systems
- The important distinction between statistical bias and illegal discrimination
- How bias can emerge even when demographic data isn’t explicitly used
- The role of diversity of thought and structured risk assessments in uncovering AI risks
- Why explainability is becoming harder as AI models grow more complex
- The trade-offs between performance, trust, fairness, and regulatory compliance

The conversation also explores broader questions around how AI is being used today, the limitations of “black-box” systems, and why validation, testing, and governance are becoming essential capabilities for organizations adopting AI at scale.</itunes:summary>
      <itunes:subtitle>In Part 2 of this Lunchtime BABLing series on AI risk, Dr. Shea Brown, CEO of BABL AI, is joined again by Jeffery Recker to continue their lightning-round exploration of the real challenges organizations face when deploying AI.</itunes:subtitle>
      <itunes:keywords>model drift, discrimination, artificial intelligence, bias, data analytics, trust, responsible ai, fairness, ai compliance, ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>73</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">639faa21-8d68-4b5a-910d-f9859a15f2fc</guid>
      <title>Data Poisoning to Hallucinations: The Many Risks of AI Part 1</title>
      <description><![CDATA[Data Poisoning to Hallucinations: The Many Risks of AI | Part 1

In this episode of Lunchtime BABLing, Dr. Shea Brown, CEO of BABL AI, is joined by Jeffery Recker for a fast-paced, unscripted deep dive into the real risks behind today’s AI systems.

From data poisoning and model inversion to prompt injection, membership inference, and AI hallucinations, this lightning-round conversation breaks down the security, governance, and reliability challenges organizations must understand before deploying AI at scale.

But this episode doesn’t stop at definitions.

Shea and Jeffery also explore:

- The difference between direct vs. indirect prompt injection
- Whether AI hallucinations can ever truly be “solved”
- Why AI isn’t a truth machine
- Whether we’re using AI the wrong way
- What responsible validation should look like in enterprise AI deployment

As AI systems move from experimentation into real-world decision-making, understanding these risks isn’t optional — it’s foundational.

If you're working in AI governance, assurance, compliance, risk, or deploying AI inside your organization, this conversation will help you think more critically about how these systems actually behave.

🎯 Take the FREE assessment here: https://shea-1mb3pmep.scoreapp.com/

Check out the babl.ai website for more on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 09 Mar 2026 06:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Jeffery Recker)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/fbd3562e-86ed-403e-b131-2b9f467f7104/lunchtime_babling_youtube_46.png" width="1280"/>
      <enclosure length="35761660" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/77c8f044-8775-4c4b-9260-c333a28e2b6d/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/audio/group/f853d016-6407-47fa-a1f3-2ac0aba47aae/group-item/0f776afc-17e6-4986-921f-b80e2dfe6a95/128_default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Data Poisoning to Hallucinations: The Many Risks of AI Part 1</itunes:title>
      <itunes:author>Shea Brown, Jeffery Recker</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/2844a07a-be29-48fd-a59e-4f6ae7ca2312/3000x3000/lunchtime_babling_logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:34:23</itunes:duration>
      <itunes:summary>Data Poisoning to Hallucinations: The Many Risks of AI | Part 1

In this episode of Lunchtime BABLing, Dr. Shea Brown, CEO of BABL AI, is joined by Jeffery Recker for a fast-paced, unscripted deep dive into the real risks behind today’s AI systems.

From data poisoning and model inversion to prompt injection, membership inference, and AI hallucinations, this lightning-round conversation breaks down the security, governance, and reliability challenges organizations must understand before deploying AI at scale.

But this episode doesn’t stop at definitions.

Shea and Jeffery also explore:

- The difference between direct vs. indirect prompt injection
- Whether AI hallucinations can ever truly be “solved”
- Why AI isn’t a truth machine
- Whether we’re using AI the wrong way
- What responsible validation should look like in enterprise AI deployment

As AI systems move from experimentation into real-world decision-making, understanding these risks isn’t optional — it’s foundational.

If you&apos;re working in AI governance, assurance, compliance, risk, or deploying AI inside your organization, this conversation will help you think more critically about how these systems actually behave.

🎯 Take the FREE assessment here: https://shea-1mb3pmep.scoreapp.com/</itunes:summary>
      <itunes:subtitle>In this episode of Lunchtime BABLing, Dr. Shea Brown, CEO of BABL AI, is joined by Jeffery Recker for a fast-paced, unscripted deep dive into the real risks behind today’s AI systems.</itunes:subtitle>
      <itunes:keywords>ai governance, ai auditing, ai hallucinations, ai risk, prompt injection, data poisoning, ai, risk management</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>72</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a7a85229-b91b-4bc4-83cf-3824a6ae5b1b</guid>
      <title>AI Test, Evaluation, &amp; Red Teaming Specialist Bootcamp</title>
      <description><![CDATA[In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown introduces the new AI Test, Evaluation, & Red Teaming Specialist Bootcamp—a hands-on, technical program designed to train the next generation of AI assurance professionals.

Drawing directly from BABL AI’s internal methodologies used to audit and evaluate high-risk AI systems across industries, this bootcamp addresses one of the most critical gaps in the AI ecosystem: the lack of practical training in how to design, execute, and interpret rigorous AI testing and red teaming in real-world contexts. 

Dr. Brown explains:

- Why AI testing, evaluation, and red teaming are essential for high-risk AI systems

- How BABL AI developed its internal, risk-driven testing and assurance frameworks

- The difference between auditing AI systems and directly evaluating and validating them

- What participants will learn during the five-week, hands-on bootcamp

- The prerequisites, structure, and technical depth of the program

- How this bootcamp will evolve into BABL’s new AI Test, Evaluation, & Red Teaming Specialist Certification

This exclusive early adopter cohort is limited to approximately 30 participants and is designed for professionals with foundational knowledge in AI auditing, governance, or assurance who want to develop practical technical capabilities in AI evaluation and red teaming.

Participants will learn how to move systematically from an AI use case to defensible test results—building real test plans, executing evaluations, and developing assurance-relevant conclusions using BABL’s proven frameworks.

Take the test to see if you are a good candidate for the AI Test, Evaluation, & Red Teaming Specialist Bootcamp: https://zfrmz.eu/RBroC4VLZ9I41ihKl1XV

Learn more about BABL AI Certifications: www.babl.ai

About Lunchtime BABLing:

Lunchtime BABLing is hosted by Dr. Shea Brown, CEO of BABL AI, an independent AI assurance firm that audits algorithms for bias, risk, and governance. The podcast explores AI auditing, governance, regulation, and technical assurance practices shaping the future of trustworthy AI.

Check out the babl.ai website for more on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 23 Feb 2026 07:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/dbf65471-ef21-4dc9-ae83-d89bdb8d83ce/lunchtime_babling_youtube_42.png" width="1280"/>
      <enclosure length="29910649" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/77c8f044-8775-4c4b-9260-c333a28e2b6d/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/audio/group/a7ec59b1-9d38-4381-b6b7-e735754dd782/group-item/ac49d92a-db28-4a07-b0bb-6daeea3b5a99/128_default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>AI Test, Evaluation, &amp; Red Teaming Specialist Bootcamp</itunes:title>
      <itunes:author>Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/e9b182af-8781-4feb-a6b3-50f4f96b5654/3000x3000/lunchtime_babling_logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:28:17</itunes:duration>
      <itunes:summary>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown introduces the new AI Test, Evaluation, &amp; Red Teaming Specialist Bootcamp—a hands-on, technical program designed to train the next generation of AI assurance professionals.

Drawing directly from BABL AI’s internal methodologies used to audit and evaluate high-risk AI systems across industries, this bootcamp addresses one of the most critical gaps in the AI ecosystem: the lack of practical training in how to design, execute, and interpret rigorous AI testing and red teaming in real-world contexts. 

Dr. Brown explains:

- Why AI testing, evaluation, and red teaming are essential for high-risk AI systems

- How BABL AI developed its internal, risk-driven testing and assurance frameworks

- The difference between auditing AI systems and directly evaluating and validating them

- What participants will learn during the five-week, hands-on bootcamp

- The prerequisites, structure, and technical depth of the program

- How this bootcamp will evolve into BABL’s new AI Test, Evaluation, &amp; Red Teaming Specialist Certification

This exclusive early adopter cohort is limited to approximately 30 participants and is designed for professionals with foundational knowledge in AI auditing, governance, or assurance who want to develop practical technical capabilities in AI evaluation and red teaming.

Participants will learn how to move systematically from an AI use case to defensible test results—building real test plans, executing evaluations, and developing assurance-relevant conclusions using BABL’s proven frameworks.

Take the test to see if you are a good candidate for the AI Test, Evaluation, &amp; Red Teaming Specialist Bootcamp: https://zfrmz.eu/RBroC4VLZ9I41ihKl1XV

Learn more about BABL AI Certifications: www.babl.ai

About Lunchtime BABLing:

Lunchtime BABLing is hosted by Dr. Shea Brown, CEO of BABL AI, an independent AI assurance firm that audits algorithms for bias, risk, and governance. The podcast explores AI auditing, governance, regulation, and technical assurance practices shaping the future of trustworthy AI.</itunes:summary>
      <itunes:subtitle>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown introduces the new AI Test, Evaluation, &amp; Red Teaming Specialist Bootcamp—a hands-on, technical program designed to train the next generation of AI assurance professionals.</itunes:subtitle>
      <itunes:keywords>online courses, ai governance, ai auditing, online training, computer science, ai, risk management, training</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>71</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ea54d1e7-3a8e-46c4-b1ff-a4694be685d5</guid>
      <title>An Interview with Mert Çuhadaroğlu</title>
      <description><![CDATA[In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with Mert Çuhadaroğlu, Program Manager of BABL AI’s AI & Algorithm Auditor Certification Program, for an in-depth conversation about careers in AI governance, responsible AI, and what it really takes to become an AI auditor.
Mert shares his unique professional journey — from banking and finance, to career coaching and publishing, to becoming a leading figure in AI ethics and auditing. Now based in Istanbul, Mert plays a critical role in guiding and evaluating BABL AI certification students, including reviewing capstone projects and supporting professionals from a wide range of backgrounds.
Together, Shea and Mert discuss:

- What makes BABL AI’s AI & Algorithm Auditor Certification different from other AI governance programs
- Whether you need a technical background to succeed in AI auditing
- The real-world demand for AI auditors and AI governance professionals
- Common career paths for certification graduates
- What students actually do in the capstone project (including LLM and generative AI use cases)
- How BABL AI’s certifications compare to other industry credentials
- An overview of BABL AI’s additional certification programs, including EU AI Act Quality Management Systems, AI Governance for Business Professionals, and AI for Legal Professionals

This episode is both a behind-the-scenes look at BABL AI’s training philosophy and a practical guide for anyone considering a career in AI assurance, audit, or governance.

Check out the babl.ai website for more on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 22 Dec 2025 11:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Mert Çuhadaroğlu, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/55a52a87-7282-42bc-a762-f3b3e9836a0c/lunchtime-20babling-20youtube-41.jpg" width="1280"/>
      <enclosure length="36202189" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/74491e63-c9e1-47ed-83f2-78976fde02f0/audio/8767f1a3-2ac3-4274-93a4-537621ece42b/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>An Interview with Mert Çuhadaroğlu</itunes:title>
      <itunes:author>Mert Çuhadaroğlu, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/d5c47bec-ea31-482d-a5b5-e169a8ca9cae/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:34:50</itunes:duration>
      <itunes:summary>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with Mert Çuhadaroğlu, Program Manager of BABL AI’s AI &amp; Algorithm Auditor Certification Program, for an in-depth conversation about careers in AI governance, responsible AI, and what it really takes to become an AI auditor.
Mert shares his unique professional journey — from banking and finance, to career coaching and publishing, to becoming a leading figure in AI ethics and auditing. Now based in Istanbul, Mert plays a critical role in guiding and evaluating BABL AI certification students, including reviewing capstone projects and supporting professionals from a wide range of backgrounds.
Together, Shea and Mert discuss:

- What makes BABL AI’s AI &amp; Algorithm Auditor Certification different from other AI governance programs
- Whether you need a technical background to succeed in AI auditing
- The real-world demand for AI auditors and AI governance professionals
- Common career paths for certification graduates
- What students actually do in the capstone project (including LLM and generative AI use cases)
- How BABL AI’s certifications compare to other industry credentials
- An overview of BABL AI’s additional certification programs, including EU AI Act Quality Management Systems, AI Governance for Business Professionals, and AI for Legal Professionals

This episode is both a behind-the-scenes look at BABL AI’s training philosophy and a practical guide for anyone considering a career in AI assurance, audit, or governance.</itunes:summary>
      <itunes:subtitle>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with Mert Çuhadaroğlu, Program Manager of BABL AI’s AI &amp; Algorithm Auditor Certification Program, for an in-depth conversation about careers in AI governance, responsible AI, and what it really takes to become an AI auditor.</itunes:subtitle>
      <itunes:keywords>ai governance, cyber security, ai auditing, ai education, ai compliance, ai, risk management</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>70</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f3592f35-7e72-4e84-9b95-5f6644b65ed0</guid>
      <title>Diving into the AI Compliance Officer</title>
      <description><![CDATA[What does a Chief AI Compliance Officer actually do—and does your organization secretly need one already? 🤔

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by co-hosts Jeffery Recker and Bryan Ilg to unpack what it really takes to own AI risk, compliance, and governance inside a modern organization. Drawing on BABL AI’s AI Compliance Officer Program and years of audit work, they break down the real pain points leaders are facing and how to move from confusion to a concrete plan.
Whether you’ve just been handed “AI compliance” on top of your day job, or you’re building AI products and worried about regulations, this one’s for you.


In this episode, they discuss:

What a Chief AI Compliance Officer role looks like in practice
– Why it often lands on general counsel, chief compliance officers, or chief AI officers
– Why this work can’t be owned by one person alone

The 3-part structure of BABL AI’s AI Compliance Officer Program

AI foundations – Governance, AI management systems, policies, procedures, and documentation

Fractional AI Compliance Officer support – Access to BABL’s research and audit team on an ongoing basis

Continuous monitoring & measurement – Keeping up with self-learning, changing AI systems over time

How to build an AI system inventory and triage risk

– Simple rubric for identifying high, medium, and low-risk AI systems
– When to treat a system as “high risk” by default
– Why simplicity is the antidote to feeling overwhelmed

Key AI risks every organization should know about

– Data poisoning and how malicious instructions can sneak into your systems
– Shadow AI (employees using unapproved tools like personal ChatGPT accounts)
– Model & data drift and why “it worked when we launched it” isn’t good enough
– How these risks connect to reputation, regulatory exposure, and business strategy

Why governance, risk & compliance (GRC) is not a “brake” on innovation

– How good governance actually lets you move faster and more confidently
– The value of a “SWAT team” style AI compliance function vs. going it alone

Who should watch/listen?

General counsel, chief compliance officers, chief risk officers
Chief AI / data / technology leaders

Product owners building AI-powered tools

Anyone who’s just been told: “You’re now responsible for AI compliance.” 🫠

Check out the babl.ai website for more on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 8 Dec 2025 11:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Bryan Ilg, Shea Brown, Jeffery Recker)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/61c56bf6-1cf2-4e3c-864c-aa8e9d2ffe68/lunchtime-20babling-20youtube-40.jpg" width="1280"/>
      <enclosure length="43632667" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/fda0a556-e6b9-4c1e-bdad-911c21d9a6d7/audio/146d7e14-126a-4dbf-8c75-47bf1fff8f35/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Diving into the AI Compliance Officer</itunes:title>
      <itunes:author>Bryan Ilg, Shea Brown, Jeffery Recker</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/98e85b68-a385-4b29-b9f1-d748a2a13c30/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:42:35</itunes:duration>
      <itunes:summary>What does a Chief AI Compliance Officer actually do—and does your organization secretly need one already? 🤔

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by co-hosts Jeffery Recker and Bryan Ilg to unpack what it really takes to own AI risk, compliance, and governance inside a modern organization. Drawing on BABL AI’s AI Compliance Officer Program and years of audit work, they break down the real pain points leaders are facing and how to move from confusion to a concrete plan.
Whether you’ve just been handed “AI compliance” on top of your day job, or you’re building AI products and worried about regulations, this one’s for you.


In this episode, they discuss:

What a Chief AI Compliance Officer role looks like in practice
– Why it often lands on general counsel, chief compliance officers, or chief AI officers
– Why this work can’t be owned by one person alone

The 3-part structure of BABL AI’s AI Compliance Officer Program

AI foundations – Governance, AI management systems, policies, procedures, and documentation

Fractional AI Compliance Officer support – Access to BABL’s research and audit team on an ongoing basis

Continuous monitoring &amp; measurement – Keeping up with self-learning, changing AI systems over time

How to build an AI system inventory and triage risk

– Simple rubric for identifying high, medium, and low-risk AI systems
– When to treat a system as “high risk” by default
– Why simplicity is the antidote to feeling overwhelmed

Key AI risks every organization should know about

– Data poisoning and how malicious instructions can sneak into your systems
– Shadow AI (employees using unapproved tools like personal ChatGPT accounts)
– Model &amp; data drift and why “it worked when we launched it” isn’t good enough
– How these risks connect to reputation, regulatory exposure, and business strategy

Why governance, risk &amp; compliance (GRC) is not a “brake” on innovation

– How good governance actually lets you move faster and more confidently
– The value of a “SWAT team” style AI compliance function vs. going it alone

Who should watch/listen?

General counsel, chief compliance officers, chief risk officers
Chief AI / data / technology leaders

Product owners building AI-powered tools

Anyone who’s just been told: “You’re now responsible for AI compliance.” 🫠</itunes:summary>
      <itunes:subtitle>What does a Chief AI Compliance Officer actually do—and does your organization secretly need one already? 🤔

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by co-hosts Jeffery Recker and Bryan Ilg to unpack what it really takes to own AI risk, compliance, and governance inside a modern organization. Drawing on BABL AI’s AI Compliance Officer Program and years of audit work, they break down the real pain points leaders are facing and how to move from confusion to a concrete plan.
Whether you’ve just been handed “AI compliance” on top of your day job, or you’re building AI products and worried about regulations, this one’s for you.


In this episode, they discuss:

What a Chief AI Compliance Officer role looks like in practice
– Why it often lands on general counsel, chief compliance officers, or chief AI officers
– Why this work can’t be owned by one person alone

The 3-part structure of BABL AI’s AI Compliance Officer Program

AI foundations – Governance, AI management systems, policies, procedures, and documentation

Fractional AI Compliance Officer support – Access to BABL’s research and audit team on an ongoing basis

Continuous monitoring &amp; measurement – Keeping up with self-learning, changing AI systems over time

How to build an AI system inventory and triage risk

– Simple rubric for identifying high, medium, and low-risk AI systems
– When to treat a system as “high risk” by default
– Why simplicity is the antidote to feeling overwhelmed

Key AI risks every organization should know about

– Data poisoning and how malicious instructions can sneak into your systems
– Shadow AI (employees using unapproved tools like personal ChatGPT accounts)
– Model &amp; data drift and why “it worked when we launched it” isn’t good enough
– How these risks connect to reputation, regulatory exposure, and business strategy

Why governance, risk &amp; compliance (GRC) is not a “brake” on innovation

– How good governance actually lets you move faster and more confidently
– The value of a “SWAT team” style AI compliance function vs. going it alone

Who should watch/listen?

General counsel, chief compliance officers, chief risk officers
Chief AI / data / technology leaders

Product owners building AI-powered tools

Anyone who’s just been told: “You’re now responsible for AI compliance.” 🫠</itunes:subtitle>
      <itunes:keywords>consulting, ai regulation, executive, ai risk, ai compliance, ai, compliance, risk management</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>69</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">911dcdbc-62ad-4050-82d3-07c4ff2a9c42</guid>
      <title>Implementing AI into Your Career</title>
      <description><![CDATA[In this follow-up to our episode on AI, training, and the job market, BABL AI CEO Dr. Shea Brown is joined again by COO Jeffery Recker and Chief of Staff Emily Brown to get practical about one big question:

How do you actually implement AI into your career… without losing yourself (or your job) in the process?
Whether you’re secure in your role, worried about layoffs, or actively changing careers, this episode focuses on tactical, realistic steps you can start taking this week.

🎧 In this episode, we cover:

How to start using large language models (LLMs) and agents in your day-to-day work
Concrete examples for roles like lawyers, accountants, marketers, operations, HR, teachers, and journalists
What to do if your manager or organization is afraid of AI (data leaks, reputation risk, etc.)
How to avoid “AI slop” and become the person who provides clear, minimal, high-value outputs
A practical plan if you’ve been laid off or see layoffs coming: dual-track job search + AI pivot
Using AI ethically for resumes, ATS filters, and video interviews—without fabricating experience
Why you should make an “AI inventory” of tools already in your life (spoiler: it’s more than you think)
How to set boundaries with AI so it augments your work, not your identity or mental health
Mindset shifts for people who don’t feel “technical” but still need to adapt Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Mon, 24 Nov 2025 11:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Jeffery Recker, Shea Brown, Emily Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/e562487c-f0df-471c-8603-77fd38df25c0/lunchtime-20babling-20youtube-36.jpg" width="1280"/>
      <enclosure length="48251534" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/f2be7b04-5109-4792-bd47-8bb50a511763/audio/16dac38f-6478-4507-9c43-07a56569e90c/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Implementing AI into Your Career</itunes:title>
      <itunes:author>Jeffery Recker, Shea Brown, Emily Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/5bf3fbd5-1b5d-4f17-9ace-71d2c38f0b9f/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:47:23</itunes:duration>
      <itunes:summary>In this follow-up to our episode on AI, training, and the job market, BABL AI CEO Dr. Shea Brown is joined again by COO Jeffery Recker and Chief of Staff Emily Brown to get practical about one big question:

How do you actually implement AI into your career… without losing yourself (or your job) in the process?
Whether you’re secure in your role, worried about layoffs, or actively changing careers, this episode focuses on tactical, realistic steps you can start taking this week.

🎧 In this episode, we cover:

How to start using large language models (LLMs) and agents in your day-to-day work
Concrete examples for roles like lawyers, accountants, marketers, operations, HR, teachers, and journalists
What to do if your manager or organization is afraid of AI (data leaks, reputation risk, etc.)
How to avoid “AI slop” and become the person who provides clear, minimal, high-value outputs
A practical plan if you’ve been laid off or see layoffs coming: dual-track job search + AI pivot
Using AI ethically for resumes, ATS filters, and video interviews—without fabricating experience
Why you should make an “AI inventory” of tools already in your life (spoiler: it’s more than you think)
How to set boundaries with AI so it augments your work, not your identity or mental health
Mindset shifts for people who don’t feel “technical” but still need to adapt</itunes:summary>
      <itunes:subtitle>In this follow-up to our episode on AI, training, and the job market, BABL AI CEO Dr. Shea Brown is joined again by COO Jeffery Recker and Chief of Staff Emily Brown to get practical about one big question:

How do you actually implement AI into your career… without losing yourself (or your job) in the process?
Whether you’re secure in your role, worried about layoffs, or actively changing careers, this episode focuses on tactical, realistic steps you can start taking this week.

🎧 In this episode, we cover:

How to start using large language models (LLMs) and agents in your day-to-day work
Concrete examples for roles like lawyers, accountants, marketers, operations, HR, teachers, and journalists
What to do if your manager or organization is afraid of AI (data leaks, reputation risk, etc.)
How to avoid “AI slop” and become the person who provides clear, minimal, high-value outputs
A practical plan if you’ve been laid off or see layoffs coming: dual-track job search + AI pivot
Using AI ethically for resumes, ATS filters, and video interviews—without fabricating experience
Why you should make an “AI inventory” of tools already in your life (spoiler: it’s more than you think)
How to set boundaries with AI so it augments your work, not your identity or mental health
Mindset shifts for people who don’t feel “technical” but still need to adapt</itunes:subtitle>
      <itunes:keywords>ai jobs, ai, jobs</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>68</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">35e29ac3-33a4-414f-99e4-38fccb83c442</guid>
      <title>AI, Training &amp; the Job Market</title>
      <description><![CDATA[In this latest episode of Lunchtime BABLing, hosted by BABL AI CEO Dr. Shea Brown with COO Jeffery Recker and—making her first appearance—Chief of Staff Emily Brown, we dig into what today’s AI-shaped job market really means for knowledge workers, how to build durable skills, and why “human in the loop” still matters—especially in marketing, ops, and hiring.

🎧 What you’ll learn

Why AI anxiety is spiking—and how to respond with deliberate upskilling
The #1 meta-skill: building a strong filter (concise, expert-informed outputs > AI slop)
How AI literacy translates to any role (marketing, people ops, compliance, product)
Practical ways to pivot toward Responsible AI / AI assurance / AI auditing
Why specialization beats chasing every trend (go narrow, go deep, then pivot)
The value of community: mentorship, peer feedback, and portfolio/capstone work Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Mon, 10 Nov 2025 14:07:04 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Emily Brown, Jeffery Recker, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/6d6ee13c-ebf6-468d-8aa2-80ef8c828231/lunchtime-20babling-20youtube-34.jpg" width="1280"/>
      <enclosure length="46042620" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/e101fb95-5eb1-443b-b672-88d315cd15ed/audio/6a89e974-e004-4261-b108-2ee3a239137c/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>AI, Training &amp; the Job Market</itunes:title>
      <itunes:author>Emily Brown, Jeffery Recker, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/d223d8ec-a5ad-43d4-a11a-cc3a56092913/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:45:05</itunes:duration>
      <itunes:summary>In this latest episode of Lunchtime BABLing, hosted by BABL AI CEO Dr. Shea Brown with COO Jeffery Recker and—making her first appearance—Chief of Staff Emily Brown, we dig into what today’s AI-shaped job market really means for knowledge workers, how to build durable skills, and why “human in the loop” still matters—especially in marketing, ops, and hiring.

🎧 What you’ll learn

Why AI anxiety is spiking—and how to respond with deliberate upskilling
The #1 meta-skill: building a strong filter (concise, expert-informed outputs &gt; AI slop)
How AI literacy translates to any role (marketing, people ops, compliance, product)
Practical ways to pivot toward Responsible AI / AI assurance / AI auditing
Why specialization beats chasing every trend (go narrow, go deep, then pivot)
The value of community: mentorship, peer feedback, and portfolio/capstone work</itunes:summary>
      <itunes:subtitle>In this latest episode of Lunchtime BABLing, hosted by BABL AI CEO Dr. Shea Brown with COO Jeffery Recker and—making her first appearance—Chief of Staff Emily Brown, we dig into what today’s AI-shaped job market really means for knowledge workers, how to build durable skills, and why “human in the loop” still matters—especially in marketing, ops, and hiring.

🎧 What you’ll learn

Why AI anxiety is spiking—and how to respond with deliberate upskilling
The #1 meta-skill: building a strong filter (concise, expert-informed outputs &gt; AI slop)
How AI literacy translates to any role (marketing, people ops, compliance, product)
Practical ways to pivot toward Responsible AI / AI assurance / AI auditing
Why specialization beats chasing every trend (go narrow, go deep, then pivot)
The value of community: mentorship, peer feedback, and portfolio/capstone work</itunes:subtitle>
      <itunes:keywords>ai, training, jobs</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>67</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">46a989de-e201-4c1a-abef-05ef1207f536</guid>
      <title>AI and Scheduling Optimization with Leon Ingelse</title>
      <description><![CDATA[From lesson-planning to long-haul trucking, good schedules make the world run—literally. In this episode, BABL AI CEO Dr. Shea Brown sits down with Leon Ingelse, writer-researcher at Croatian optimization studio Dots & Lines, to unpack the hidden math, ethics, and human stories behind modern scheduling and routing.

🔑 What we cover

Hard vs. soft constraints – why “can’t” and “prefer not to” need different math

Digital twins – building a virtual copy of a business before you touch the real one

Fairness & “karma” scheduling – balancing preferences over weeks, months, years

Transparency & compliance – explaining a timetable (and the laws baked into it)

Human-in-the-loop vs. full automation – when you still want a person pressing “publish”

Optimization ≠ LLMs – where stochastic AI falls short and formal models shine

The future of Dots & Lines and why bespoke solutions often beat off-the-shelf products Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Mon, 14 Jul 2025 06:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Leon Ingelse, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/603b17f5-97e6-482e-91d2-d4924ebf97c6/lunchtime-20babling-20youtube-32.jpg" width="1280"/>
      <enclosure length="42166884" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/53391f3a-dbc1-496f-b579-1aa2c6054ee9/audio/4cb1ad17-b117-44bb-88c4-5707f79da803/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>AI and Scheduling Optimization with Leon Ingelse</itunes:title>
      <itunes:author>Leon Ingelse, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/a7d9051e-1546-4a5b-9cb9-e25b84c495dd/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:41:03</itunes:duration>
      <itunes:summary>From lesson-planning to long-haul trucking, good schedules make the world run—literally. In this episode, BABL AI CEO Dr. Shea Brown sits down with Leon Ingelse, writer-researcher at Croatian optimization studio Dots &amp; Lines, to unpack the hidden math, ethics, and human stories behind modern scheduling and routing.

🔑 What we cover

Hard vs. soft constraints – why “can’t” and “prefer not to” need different math

Digital twins – building a virtual copy of a business before you touch the real one

Fairness &amp; “karma” scheduling – balancing preferences over weeks, months, years

Transparency &amp; compliance – explaining a timetable (and the laws baked into it)

Human-in-the-loop vs. full automation – when you still want a person pressing “publish”

Optimization ≠ LLMs – where stochastic AI falls short and formal models shine

The future of Dots &amp; Lines and why bespoke solutions often beat off-the-shelf products</itunes:summary>
      <itunes:subtitle>From lesson-planning to long-haul trucking, good schedules make the world run—literally. In this episode, BABL AI CEO Dr. Shea Brown sits down with Leon Ingelse, writer-researcher at Croatian optimization studio Dots &amp; Lines, to unpack the hidden math, ethics, and human stories behind modern scheduling and routing.

🔑 What we cover

Hard vs. soft constraints – why “can’t” and “prefer not to” need different math

Digital twins – building a virtual copy of a business before you touch the real one

Fairness &amp; “karma” scheduling – balancing preferences over weeks, months, years

Transparency &amp; compliance – explaining a timetable (and the laws baked into it)

Human-in-the-loop vs. full automation – when you still want a person pressing “publish”

Optimization ≠ LLMs – where stochastic AI falls short and formal models shine

The future of Dots &amp; Lines and why bespoke solutions often beat off-the-shelf products</itunes:subtitle>
      <itunes:keywords>ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>66</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7e1b421d-d2e2-4eac-9836-4780ded3a285</guid>
      <title>How to Break Into AI Governance?</title>
      <description><![CDATA[Ever wondered how to start a career in AI Governance, Responsible AI, or AI Risk Management? In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg for a no-nonsense, practical conversation about how to actually break into this fast-growing, high-demand field.

🌟 What you'll learn in this episode

✅ What AI governance really is (and why it matters in every business using AI)
✅ The 3 main career paths into AI governance:
Dedicated governance roles
Expanding your current role to include AI oversight
Building something new as an entrepreneur/intrapreneur
✅ Do you need to be technical? How much?
✅ The real skills hiring managers want
✅ How to transition from zero experience to credible candidate
✅ Why governance is essential for scaling AI safely and responsibly

🧭 Key themes

Hands-on learning: You have to use AI to govern AI
Systems thinking: Understanding how decisions get made at scale
Risk awareness: The #1 thing employers want
Building your profile: Projects, credentials, volunteering, networking
Niche strategy: Why specializing beats general buzzwords
Marathon mindset: This is not a quick certification cash-in Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Mon, 30 Jun 2025 06:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Bryan Ilg, Jeffery Recker, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/4cade137-3174-4847-8e39-835d9b46ae66/lunchtime-20babling-20youtube-33.jpg" width="1280"/>
      <enclosure length="49139697" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/d5adb442-f7b8-4179-95d5-0f0f1bee2a89/audio/d1aafb3a-e59b-4eb6-9b23-9af969635214/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>How to Break Into AI Governance?</itunes:title>
      <itunes:author>Bryan Ilg, Jeffery Recker, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/cd869975-04a9-41cd-8b16-d29a2cb94b49/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:48:19</itunes:duration>
      <itunes:summary>Ever wondered how to start a career in AI Governance, Responsible AI, or AI Risk Management? In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg for a no-nonsense, practical conversation about how to actually break into this fast-growing, high-demand field.

🌟 What you&apos;ll learn in this episode

✅ What AI governance really is (and why it matters in every business using AI)
✅ The 3 main career paths into AI governance:
Dedicated governance roles
Expanding your current role to include AI oversight
Building something new as an entrepreneur/intrapreneur
✅ Do you need to be technical? How much?
✅ The real skills hiring managers want
✅ How to transition from zero experience to credible candidate
✅ Why governance is essential for scaling AI safely and responsibly

🧭 Key themes

Hands-on learning: You have to use AI to govern AI
Systems thinking: Understanding how decisions get made at scale
Risk awareness: The #1 thing employers want
Building your profile: Projects, credentials, volunteering, networking
Niche strategy: Why specializing beats general buzzwords
Marathon mindset: This is not a quick certification cash-in</itunes:summary>
      <itunes:subtitle>Ever wondered how to start a career in AI Governance, Responsible AI, or AI Risk Management? In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg for a no-nonsense, practical conversation about how to actually break into this fast-growing, high-demand field.

🌟 What you&apos;ll learn in this episode

✅ What AI governance really is (and why it matters in every business using AI)
✅ The 3 main career paths into AI governance:
Dedicated governance roles
Expanding your current role to include AI oversight
Building something new as an entrepreneur/intrapreneur
✅ Do you need to be technical? How much?
✅ The real skills hiring managers want
✅ How to transition from zero experience to credible candidate
✅ Why governance is essential for scaling AI safely and responsibly

🧭 Key themes

Hands-on learning: You have to use AI to govern AI
Systems thinking: Understanding how decisions get made at scale
Risk awareness: The #1 thing employers want
Building your profile: Projects, credentials, volunteering, networking
Niche strategy: Why specializing beats general buzzwords
Marathon mindset: This is not a quick certification cash-in</itunes:subtitle>
      <itunes:keywords>ai governance, ai education, ai jobs, ai courses, ai, jobs</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>65</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f7850a9f-7abd-4382-8b7b-099aa87cda82</guid>
      <title>AI Ethicist Reacts to Different Uses of AI</title>
      <description><![CDATA[In this fun and thought-provoking episode of Lunchtime BABLing, BABL AI CEO and AI ethicist Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg for a rapid-fire discussion on some of the most surprising, bizarre, and controversial uses of AI circulating online.

From jailbreaking legal loopholes with ChatGPT, to AI-generated testimony from the deceased, to digital therapy bots and AI relationships—no use case is off-limits. The trio explores the ethical, legal, and emotional implications of everyday AI encounters, reacting in real-time with humor, insight, and a healthy dose of skepticism.

🎧 Topics include:

Can AI help someone get out of jail?
Is it ethical to use AI-generated avatars in court?
Talking to an AI version of a dead loved one—grief or avoidance?
Should AI replace your therapist?
Professors using ChatGPT to grade student essays
AI as your relationship coach (or third wheel)
Confirmation bias and the future of learning in the AI age

💬 This episode steps away from regulation and compliance to explore how AI is quietly reshaping human behavior—and whether we’re ready for it. Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Mon, 16 Jun 2025 10:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Bryan Ilg, Jeffery Recker)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/4ca8d127-6487-4738-a7f4-02681e59024e/lunchtime-20babling-20youtube-29.jpg" width="1280"/>
      <enclosure length="39219854" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/c33de5f9-cfeb-42e1-b525-19c9146185b0/audio/1e5c288c-ebc6-4f86-8ce8-8605f18d8f1a/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>AI Ethicist Reacts to Different Uses of AI</itunes:title>
      <itunes:author>Shea Brown, Bryan Ilg, Jeffery Recker</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/46312c35-c962-4228-b32e-487a29baa44c/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:37:59</itunes:duration>
      <itunes:summary>In this fun and thought-provoking episode of Lunchtime BABLing, BABL AI CEO and AI ethicist Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg for a rapid-fire discussion on some of the most surprising, bizarre, and controversial uses of AI circulating online.

From jailbreaking legal loopholes with ChatGPT, to AI-generated testimony from the deceased, to digital therapy bots and AI relationships—no use case is off-limits. The trio explores the ethical, legal, and emotional implications of everyday AI encounters, reacting in real-time with humor, insight, and a healthy dose of skepticism.

🎧 Topics include:

Can AI help someone get out of jail?
Is it ethical to use AI-generated avatars in court?
Talking to an AI version of a dead loved one—grief or avoidance?
Should AI replace your therapist?
Professors using ChatGPT to grade student essays
AI as your relationship coach (or third wheel)
Confirmation bias and the future of learning in the AI age

💬 This episode steps away from regulation and compliance to explore how AI is quietly reshaping human behavior—and whether we’re ready for it.</itunes:summary>
      <itunes:subtitle>In this fun and thought-provoking episode of Lunchtime BABLing, BABL AI CEO and AI ethicist Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg for a rapid-fire discussion on some of the most surprising, bizarre, and controversial uses of AI circulating online.

From jailbreaking legal loopholes with ChatGPT, to AI-generated testimony from the deceased, to digital therapy bots and AI relationships—no use case is off-limits. The trio explores the ethical, legal, and emotional implications of everyday AI encounters, reacting in real-time with humor, insight, and a healthy dose of skepticism.

🎧 Topics include:

Can AI help someone get out of jail?
Is it ethical to use AI-generated avatars in court?
Talking to an AI version of a dead loved one—grief or avoidance?
Should AI replace your therapist?
Professors using ChatGPT to grade student essays
AI as your relationship coach (or third wheel)
Confirmation bias and the future of learning in the AI age

💬 This episode steps away from regulation and compliance to explore how AI is quietly reshaping human behavior—and whether we’re ready for it.</itunes:subtitle>
      <itunes:keywords>ai governance, ai ethics, responsible ai, ai risk, ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>64</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1266cd99-338b-4595-bd9d-4efcaebcd8e8</guid>
      <title>What is ISO 42001?</title>
      <description><![CDATA[In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker to break down ISO/IEC 42001 — the first international standard for AI management systems.

Whether you're leading an AI team, navigating AI risk, or just starting your Responsible AI journey, this high-level introduction will help you understand:

What ISO 42001 is and why it matters

How it fits into global AI governance (including the EU AI Act and U.S. regulations)

Key components of the standard — from leadership, risk assessments, and operations to monitoring and continual improvement

Common challenges organizations face when adopting it

Practical first steps for implementation, even for startups and resource-limited teams

💡 ISO 42001 is quickly becoming the North Star for organizations aiming to demonstrate trustworthy and responsible AI practices — especially in today’s fast-moving regulatory environment. Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Mon, 02 Jun 2025 10:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Jeffery Recker, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/59c5494d-5739-47dc-aa94-8e462e15dcb8/lunchtime-20babling-20youtube-28.jpg" width="1280"/>
      <enclosure length="28152087" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/1119fb8b-39b6-4d23-9945-db1ca966fe90/audio/b16c5753-e1f2-41ce-bd3c-453d20f20836/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>What is ISO 42001?</itunes:title>
      <itunes:author>Jeffery Recker, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/e853a31a-34b5-4737-b936-2ef9a9074f66/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:26:31</itunes:duration>
      <itunes:summary>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker to break down ISO/IEC 42001 — the first international standard for AI management systems.

Whether you&apos;re leading an AI team, navigating AI risk, or just starting your Responsible AI journey, this high-level introduction will help you understand:

What ISO 42001 is and why it matters

How it fits into global AI governance (including the EU AI Act and U.S. regulations)

Key components of the standard — from leadership, risk assessments, and operations to monitoring and continual improvement

Common challenges organizations face when adopting it

Practical first steps for implementation, even for startups and resource-limited teams

💡 ISO 42001 is quickly becoming the North Star for organizations aiming to demonstrate trustworthy and responsible AI practices — especially in today’s fast-moving regulatory environment.</itunes:summary>
      <itunes:subtitle>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker to break down ISO/IEC 42001 — the first international standard for AI management systems.

Whether you&apos;re leading an AI team, navigating AI risk, or just starting your Responsible AI journey, this high-level introduction will help you understand:

What ISO 42001 is and why it matters

How it fits into global AI governance (including the EU AI Act and U.S. regulations)

Key components of the standard — from leadership, risk assessments, and operations to monitoring and continual improvement

Common challenges organizations face when adopting it

Practical first steps for implementation, even for startups and resource-limited teams

💡 ISO 42001 is quickly becoming the North Star for organizations aiming to demonstrate trustworthy and responsible AI practices — especially in today’s fast-moving regulatory environment.</itunes:subtitle>
      <itunes:keywords>ai governance, iso 42001, ai risk, iso, ai, risk management</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>63</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ceba3d59-1a52-4c2e-97c8-cb0717d1673b</guid>
      <title>A New Framework to Assess the Business VALUE of AI</title>
      <description><![CDATA[In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown unveils a powerful new framework to assess business value when implementing AI—shifting the conversation from “Which tool should I use?” to “What value do I want to create?”

Joined by CSO Bryan Ilg and COO Jeffery Recker, the trio dives into the origin, design, and real-world application of the AI VALUE Framework:

- Visualize your operations
- Ask the right questions
- Link to AI capabilities
- Understand feasibility & risk
- Experiment & evaluate

This episode is packed with insights for business leaders, innovation teams, and AI professionals navigating the hype, risk, and opportunity of artificial intelligence. The framework—originally developed for BABL AI’s upcoming certification for business professionals—is meant to reduce AI project failure and help organizations do it right, not fast.

💡 Key topics:

- The difference between asking about tools vs. asking about value

- Why most AI projects fail—and how to avoid it

- How AI governance can create value, not just mitigate risk

- The importance of metrics, pilot testing, and customer focus

- Why being proactive beats being reactive in AI implementation

Check out the babl.ai website for more on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 19 May 2025 10:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Bryan Ilg, Shea Brown, Jeffery Recker)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/e132e8aa-1b80-4577-ae6b-cac4fe80230d/lunchtime-20babling-20youtube-27.jpg" width="1280"/>
      <enclosure length="33759217" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/2b5b28a6-25f0-4ba4-8ff8-e9882df9f579/audio/bf2e0d36-15b5-471f-a1f2-d96c00e9c282/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>A New Framework to Assess the Business VALUE of AI</itunes:title>
      <itunes:author>Bryan Ilg, Shea Brown, Jeffery Recker</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/1e5b58cd-b082-4270-a5d3-62a844edc8a2/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:32:17</itunes:duration>
      <itunes:summary>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown unveils a powerful new framework to assess business value when implementing AI—shifting the conversation from “Which tool should I use?” to “What value do I want to create?”

Joined by CSO Bryan Ilg and COO Jeffery Recker, the trio dives into the origin, design, and real-world application of the AI VALUE Framework:

- Visualize your operations
- Ask the right questions
- Link to AI capabilities
- Understand feasibility &amp; risk
- Experiment &amp; evaluate

This episode is packed with insights for business leaders, innovation teams, and AI professionals navigating the hype, risk, and opportunity of artificial intelligence. The framework—originally developed for BABL AI’s upcoming certification for business professionals—is meant to reduce AI project failure and help organizations do it right, not fast.

💡 Key topics:

- The difference between asking about tools vs. asking about value

- Why most AI projects fail—and how to avoid it

- How AI governance can create value, not just mitigate risk

- The importance of metrics, pilot testing, and customer focus

- Why being proactive beats being reactive in AI implementation</itunes:summary>
      <itunes:subtitle>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown unveils a powerful new framework to assess business value when implementing AI—shifting the conversation from “Which tool should I use?” to “What value do I want to create?”

Joined by CSO Bryan Ilg and COO Jeffery Recker, the trio dives into the origin, design, and real-world application of the AI VALUE Framework:

- Visualize your operations
- Ask the right questions
- Link to AI capabilities
- Understand feasibility &amp; risk
- Experiment &amp; evaluate

This episode is packed with insights for business leaders, innovation teams, and AI professionals navigating the hype, risk, and opportunity of artificial intelligence. The framework—originally developed for BABL AI’s upcoming certification for business professionals—is meant to reduce AI project failure and help organizations do it right, not fast.

💡 Key topics:

- The difference between asking about tools vs. asking about value

- Why most AI projects fail—and how to avoid it

- How AI governance can create value, not just mitigate risk

- The importance of metrics, pilot testing, and customer focus

- Why being proactive beats being reactive in AI implementation</itunes:subtitle>
      <itunes:keywords>framework, ai business, business ai, ai framework, ai, risk management</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>62</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8e50ec8f-bb11-4d63-bed1-791726d47825</guid>
      <title>The Importance of AI Governance</title>
      <description><![CDATA[In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with BABL AI Chief Sales Officer Bryan Ilg to explore why AI governance is becoming critical for businesses of all sizes. Bryan shares insights from a recent speech he gave to a nonprofit in Richmond, Virginia, highlighting the real business value of strong AI governance practices — not just for ethical reasons, but as a competitive advantage.

They dive into key topics like the importance of early planning (with a great rocket ship analogy!), how AI governance ties into business success, practical steps organizations can take to get started, and why AI governance is not just about risk mitigation but about driving real business outcomes.

Shea and Bryan also discuss trends in AI governance roles, challenges organizations face, and BABL AI's new Foundations of AI Governance for Business Professionals certification program designed to equip non-technical leaders with essential AI governance skills.

If you're interested in responsible AI, business strategy, or understanding how to make AI work for your organization, this episode is packed with actionable insights!

Check out the babl.ai website for more on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 28 Apr 2025 06:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Bryan Ilg, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/cc5a82f4-a8a4-4934-989c-8217ba2550c5/lunchtime-20babling-20youtube-26.jpg" width="1280"/>
      <enclosure length="41782780" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/28038420-2191-49a7-98ff-80770a21d982/audio/90d0700f-9f7e-47b0-91c6-bc7ca7bd16ef/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>The Importance of AI Governance</itunes:title>
      <itunes:author>Bryan Ilg, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/abf48f33-4ef2-4686-906b-fa1520eed722/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:40:39</itunes:duration>
      <itunes:summary>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with BABL AI Chief Sales Officer Bryan Ilg to explore why AI governance is becoming critical for businesses of all sizes. Bryan shares insights from a recent speech he gave to a nonprofit in Richmond, Virginia, highlighting the real business value of strong AI governance practices — not just for ethical reasons, but as a competitive advantage.

They dive into key topics like the importance of early planning (with a great rocket ship analogy!), how AI governance ties into business success, practical steps organizations can take to get started, and why AI governance is not just about risk mitigation but about driving real business outcomes.

Shea and Bryan also discuss trends in AI governance roles, challenges organizations face, and BABL AI&apos;s new Foundations of AI Governance for Business Professionals certification program designed to equip non-technical leaders with essential AI governance skills.

If you&apos;re interested in responsible AI, business strategy, or understanding how to make AI work for your organization, this episode is packed with actionable insights!</itunes:summary>
      <itunes:subtitle>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with BABL AI Chief Sales Officer Bryan Ilg to explore why AI governance is becoming critical for businesses of all sizes. Bryan shares insights from a recent speech he gave to a nonprofit in Richmond, Virginia, highlighting the real business value of strong AI governance practices — not just for ethical reasons, but as a competitive advantage.

They dive into key topics like the importance of early planning (with a great rocket ship analogy!), how AI governance ties into business success, practical steps organizations can take to get started, and why AI governance is not just about risk mitigation but about driving real business outcomes.

Shea and Bryan also discuss trends in AI governance roles, challenges organizations face, and BABL AI&apos;s new Foundations of AI Governance for Business Professionals certification program designed to equip non-technical leaders with essential AI governance skills.

If you&apos;re interested in responsible AI, business strategy, or understanding how to make AI work for your organization, this episode is packed with actionable insights!</itunes:subtitle>
      <itunes:keywords>ai governance, ai risk management, ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>61</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b079231b-697c-402c-b506-b54b289bbb29</guid>
      <title>Ensuring LLM Safety</title>
      <description><![CDATA[In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown dives deep into one of the most pressing questions in AI governance today: how do we ensure the safety of Large Language Models (LLMs)?

With new regulations like the EU AI Act, Colorado’s AI law, and emerging state-level requirements in places like California and New York, organizations developing or deploying LLM-powered systems face increasing pressure to evaluate risk, ensure compliance, and document everything.

🎯 What you'll learn:

Why evaluations are essential for mitigating risk and supporting compliance

How to adopt a socio-technical mindset and think in terms of parameter spaces

What auditors (like BABL AI) look for when assessing LLM-powered systems

A practical, first-principles approach to building and documenting LLM test suites

How to connect risk assessments to specific LLM behaviors and evaluations

The importance of contextualizing evaluations to your use case—not just relying on generic benchmarks

Shea also introduces BABL AI’s CIDA framework (Context, Input, Decision, Action) and shows how it forms the foundation for meaningful risk analysis and test coverage.

Whether you're an AI developer, auditor, policymaker, or just trying to keep up with fast-moving AI regulations, this episode is packed with insights you can use right now.

📌 Don’t wait for a perfect standard to tell you what to do—learn how to build a solid, use-case-driven evaluation strategy today.

Check out the babl.ai website for more on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 7 Apr 2025 10:51:30 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/cd8d400a-8ad4-4be9-b410-fc4ab8f823d9/lunchtime-20babling-20youtube-24.jpg" width="1280"/>
      <enclosure length="29607629" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/4dda42ad-bb12-4ac2-a64b-d41bb55c20ce/audio/cbfaddb6-f5dc-4455-8e2c-a050d3ad6fdd/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Ensuring LLM Safety</itunes:title>
      <itunes:author>Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/8affb7cd-e121-4791-91b8-e3dbb4de377e/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:27:58</itunes:duration>
      <itunes:summary>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown dives deep into one of the most pressing questions in AI governance today: how do we ensure the safety of Large Language Models (LLMs)?

With new regulations like the EU AI Act, Colorado’s AI law, and emerging state-level requirements in places like California and New York, organizations developing or deploying LLM-powered systems face increasing pressure to evaluate risk, ensure compliance, and document everything.

🎯 What you&apos;ll learn:

Why evaluations are essential for mitigating risk and supporting compliance

How to adopt a socio-technical mindset and think in terms of parameter spaces

What auditors (like BABL AI) look for when assessing LLM-powered systems

A practical, first-principles approach to building and documenting LLM test suites

How to connect risk assessments to specific LLM behaviors and evaluations

The importance of contextualizing evaluations to your use case—not just relying on generic benchmarks

Shea also introduces BABL AI’s CIDA framework (Context, Input, Decision, Action) and shows how it forms the foundation for meaningful risk analysis and test coverage.

Whether you&apos;re an AI developer, auditor, policymaker, or just trying to keep up with fast-moving AI regulations, this episode is packed with insights you can use right now.

📌 Don’t wait for a perfect standard to tell you what to do—learn how to build a solid, use-case-driven evaluation strategy today.</itunes:summary>
      <itunes:subtitle>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown dives deep into one of the most pressing questions in AI governance today: how do we ensure the safety of Large Language Models (LLMs)?

With new regulations like the EU AI Act, Colorado’s AI law, and emerging state-level requirements in places like California and New York, organizations developing or deploying LLM-powered systems face increasing pressure to evaluate risk, ensure compliance, and document everything.

🎯 What you&apos;ll learn:

Why evaluations are essential for mitigating risk and supporting compliance

How to adopt a socio-technical mindset and think in terms of parameter spaces

What auditors (like BABL AI) look for when assessing LLM-powered systems

A practical, first-principles approach to building and documenting LLM test suites

How to connect risk assessments to specific LLM behaviors and evaluations

The importance of contextualizing evaluations to your use case—not just relying on generic benchmarks

Shea also introduces BABL AI’s CIDA framework (Context, Input, Decision, Action) and shows how it forms the foundation for meaningful risk analysis and test coverage.

Whether you&apos;re an AI developer, auditor, policymaker, or just trying to keep up with fast-moving AI regulations, this episode is packed with insights you can use right now.

📌 Don’t wait for a perfect standard to tell you what to do—learn how to build a solid, use-case-driven evaluation strategy today.</itunes:subtitle>
      <itunes:keywords>llm, large language model, ai, risk management</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>60</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">02e5774b-f41d-46d4-84c2-a2b92b8470ab</guid>
      <title>Explainability of AI</title>
      <description><![CDATA[What does it really mean for AI to be explainable? Can we trust AI systems to tell us why they do what they do—and should the average person even care?

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by regular guests Jeffery Recker and Bryan Ilg to unpack the messy world of AI explainability—and why it matters more than you might think.

🔍 From recommender systems to large language models, we explore:

- The difference between explainability and interpretability
- Why even humans struggle to explain their decisions
- What should be considered a “good enough” explanation
- The importance of stakeholder context in defining "useful" explanations
- Why AI literacy and trust go hand-in-hand
- How concepts from cybersecurity, like zero trust, could inform responsible AI oversight

Plus, hear about the latest report from the Center for Security and Emerging Technology calling for stronger explainability standards, and what it means for AI developers, regulators, and everyday users.

Mentioned in this episode:

🔗 Link to BABL AI's Article: https://babl.ai/report-finds-gaps-in-ai-explainability-testing-calls-for-stronger-evaluation-standards/

🔗 Link to "Putting Explainable AI to the Test" paper: https://cset.georgetown.edu/publication/putting-explainable-ai-to-the-test-a-critical-look-at-ai-evaluation-approaches/?utm_source=ai-week-in-review.beehiiv.com&utm_medium=referral&utm_campaign=ai-week-in-review-3-8-25

🔗 Link to BABL AI's "The Algorithm Audit" paper: https://babl.ai/algorithm-auditing-framework/

Check out the babl.ai website for more on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 31 Mar 2025 05:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Bryan Ilg, Shea Brown, Jeffery Recker)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/17f16323-29c6-43d8-b93e-1c7e49c4fab7/lunchtime-20babling-20youtube-22.jpg" width="1280"/>
      <enclosure length="35708579" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/8c30bb24-11d5-437d-8cba-be1f0a01f627/audio/fff2b39c-6ca8-4473-b97c-6f36f47444f4/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Explainability of AI</itunes:title>
      <itunes:author>Bryan Ilg, Shea Brown, Jeffery Recker</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/771e5b61-46ae-4b1b-b781-92bfc4c9fce7/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:34:19</itunes:duration>
      <itunes:summary>What does it really mean for AI to be explainable? Can we trust AI systems to tell us why they do what they do—and should the average person even care?

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by regular guests Jeffery Recker and Bryan Ilg to unpack the messy world of AI explainability—and why it matters more than you might think.

🔍 From recommender systems to large language models, we explore:

- The difference between explainability and interpretability
- Why even humans struggle to explain their decisions
- What should be considered a “good enough” explanation
- The importance of stakeholder context in defining &quot;useful&quot; explanations
- Why AI literacy and trust go hand-in-hand
- How concepts from cybersecurity, like zero trust, could inform responsible AI oversight

Plus, hear about the latest report from the Center for Security and Emerging Technology calling for stronger explainability standards, and what it means for AI developers, regulators, and everyday users.

Mentioned in this episode:

🔗 Link to BABL AI&apos;s Article: https://babl.ai/report-finds-gaps-in-ai-explainability-testing-calls-for-stronger-evaluation-standards/

🔗 Link to &quot;Putting Explainable AI to the Test&quot; paper: https://cset.georgetown.edu/publication/putting-explainable-ai-to-the-test-a-critical-look-at-ai-evaluation-approaches/?utm_source=ai-week-in-review.beehiiv.com&amp;utm_medium=referral&amp;utm_campaign=ai-week-in-review-3-8-25

🔗 Link to BABL AI&apos;s &quot;The Algorithm Audit&quot; paper: https://babl.ai/algorithm-auditing-framework/</itunes:summary>
      <itunes:subtitle>What does it really mean for AI to be explainable? Can we trust AI systems to tell us why they do what they do—and should the average person even care?

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by regular guests Jeffery Recker and Bryan Ilg to unpack the messy world of AI explainability—and why it matters more than you might think.

🔍 From recommender systems to large language models, we explore:

- The difference between explainability and interpretability
- Why even humans struggle to explain their decisions
- What should be considered a “good enough” explanation
- The importance of stakeholder context in defining &quot;useful&quot; explanations
- Why AI literacy and trust go hand-in-hand
- How concepts from cybersecurity, like zero trust, could inform responsible AI oversight

Plus, hear about the latest report from the Center for Security and Emerging Technology calling for stronger explainability standards, and what it means for AI developers, regulators, and everyday users.

Mentioned in this episode:

🔗 Link to BABL AI&apos;s Article: https://babl.ai/report-finds-gaps-in-ai-explainability-testing-calls-for-stronger-evaluation-standards/

🔗 Link to &quot;Putting Explainable AI to the Test&quot; paper: https://cset.georgetown.edu/publication/putting-explainable-ai-to-the-test-a-critical-look-at-ai-evaluation-approaches/?utm_source=ai-week-in-review.beehiiv.com&amp;utm_medium=referral&amp;utm_campaign=ai-week-in-review-3-8-25

🔗 Link to BABL AI&apos;s &quot;The Algorithm Audit&quot; paper: https://babl.ai/algorithm-auditing-framework/</itunes:subtitle>
      <itunes:keywords>ai governance, ai regulation, responsible ai, explainable ai, ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>59</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9e5837b0-f032-4b35-911b-5dddbc19cc9f</guid>
      <title>AI’s Impact on Democracy</title>
      <description><![CDATA[In this thought-provoking episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with Jeffery Recker and Bryan Ilg to unpack one of the most pressing topics of our time: AI’s impact on democracy. From algorithm-driven echo chambers and misinformation to the role of social media in shaping political discourse, the trio explores how AI is quietly—and sometimes loudly—reshaping our democratic systems.

- What happens when personalized content becomes political propaganda?
- Is YouTube the new social media without us realizing it?
- Can regulations keep up with AI’s accelerating influence?
- And are we already too far gone—or is there still time to rethink, regulate, and reclaim our democratic integrity?

This episode dives into:

- The unintended consequences of algorithmic curation
- The collapse of objective reality in the digital age
- AI-driven misinformation in elections
- The tension between regulation and free speech
- Global responses—from Finland’s education system to the EU AI Act
- What society can (and should) do to fight back

Whether you’re in tech, policy, or just trying to make sense of the chaos online, this is a conversation you won’t want to miss.

🔗 Jeffery’s free course, Intro to the EU AI Act, is available now! Get your Credly badge and learn how to start your compliance journey → https://babl.ai/introduction-to-the-eu-ai-act/

Check out the babl.ai website for more on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 24 Mar 2025 05:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Bryan Ilg, Jeffery Recker)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/16b842f3-cd65-4ff4-8108-e57734558ef7/lunchtime-20babling-20youtube-21.jpg" width="1280"/>
      <enclosure length="46776556" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/46733480-d94a-4db2-bad5-b55e3f27c05b/audio/54a02446-2f2b-4f75-b33b-52900a49f497/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>AI’s Impact on Democracy</itunes:title>
      <itunes:author>Shea Brown, Bryan Ilg, Jeffery Recker</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/9d5c73c9-b94d-4aaf-ad3d-7634adde5a85/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:45:51</itunes:duration>
      <itunes:summary>In this thought-provoking episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with Jeffery Recker and Bryan Ilg to unpack one of the most pressing topics of our time: AI’s impact on democracy. From algorithm-driven echo chambers and misinformation to the role of social media in shaping political discourse, the trio explores how AI is quietly—and sometimes loudly—reshaping our democratic systems.

- What happens when personalized content becomes political propaganda?
- Is YouTube the new social media without us realizing it?
- Can regulations keep up with AI’s accelerating influence?
- And are we already too far gone—or is there still time to rethink, regulate, and reclaim our democratic integrity?

This episode dives into:

- The unintended consequences of algorithmic curation
- The collapse of objective reality in the digital age
- AI-driven misinformation in elections
- The tension between regulation and free speech
- Global responses—from Finland’s education system to the EU AI Act
- What society can (and should) do to fight back

Whether you’re in tech, policy, or just trying to make sense of the chaos online, this is a conversation you won’t want to miss.

🔗 Jeffery’s free course, Intro to the EU AI Act, is available now! Get your Credly badge and learn how to start your compliance journey → https://babl.ai/introduction-to-the-eu-ai-act/</itunes:summary>
      <itunes:subtitle>In this thought-provoking episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with Jeffery Recker and Bryan Ilg to unpack one of the most pressing topics of our time: AI’s impact on democracy. From algorithm-driven echo chambers and misinformation to the role of social media in shaping political discourse, the trio explores how AI is quietly—and sometimes loudly—reshaping our democratic systems.

- What happens when personalized content becomes political propaganda?
- Is YouTube the new social media without us realizing it?
- Can regulations keep up with AI’s accelerating influence?
- And are we already too far gone—or is there still time to rethink, regulate, and reclaim our democratic integrity?

This episode dives into:

- The unintended consequences of algorithmic curation
- The collapse of objective reality in the digital age
- AI-driven misinformation in elections
- The tension between regulation and free speech
- Global responses—from Finland’s education system to the EU AI Act
- What society can (and should) do to fight back

Whether you’re in tech, policy, or just trying to make sense of the chaos online, this is a conversation you won’t want to miss.

🔗 Jeffery’s free course, Intro to the EU AI Act, is available now! Get your Credly badge and learn how to start your compliance journey → https://babl.ai/introduction-to-the-eu-ai-act/</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>58</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">cb6eb04e-c5a0-4847-a3b4-89f0ff1cc72d</guid>
      <title>AI Literacy</title>
      <description><![CDATA[In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by Jeffery Recker and Bryan Ilg to discuss the growing importance of AI literacy—what it means, why it matters, and how individuals and businesses can stay ahead in an AI-driven world.

Topics covered:

The evolution of AI education and BABL AI’s new subscription model for training & certifications.

Why AI auditing skills are becoming essential for professionals across industries.

How AI governance roles will shape the future of business leadership.

The impact of AI on workforce transition and how individuals can future-proof their careers.

The EU AI Act’s new AI literacy requirements—what they mean for organizations.

Want to level up your AI knowledge? Check out BABL AI’s courses & certifications!

🚀 Subscribe to our courses: https://courses.babl.ai/p/the-algorithmic-bias-lab-membership

👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".

Check out the babl.ai website for more on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 17 Mar 2025 16:55:56 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Jeffery Recker, Shea Brown, Bryan Ilg)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/6543f46e-8a0f-4fbe-8295-974e14c21fe7/lunchtime-20babling-20youtube-19.jpg" width="1280"/>
      <enclosure length="22832929" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/ee01618a-ec2d-46c5-9949-f5c075b9dac6/audio/381a2cc3-24ed-4dce-8b61-3339766e2d36/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>AI Literacy</itunes:title>
      <itunes:author>Jeffery Recker, Shea Brown, Bryan Ilg</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/fddf9267-3f90-4738-8368-39c184616c7e/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:20:55</itunes:duration>
      <itunes:summary>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by Jeffery Recker and Bryan Ilg to discuss the growing importance of AI literacy—what it means, why it matters, and how individuals and businesses can stay ahead in an AI-driven world.

Topics covered:

The evolution of AI education and BABL AI’s new subscription model for training &amp; certifications.

Why AI auditing skills are becoming essential for professionals across industries.

How AI governance roles will shape the future of business leadership.

The impact of AI on workforce transition and how individuals can future-proof their careers.

The EU AI Act’s new AI literacy requirements—what they mean for organizations.

Want to level up your AI knowledge? Check out BABL AI’s courses &amp; certifications!

🚀 Subscribe to our courses: https://courses.babl.ai/p/the-algorithmic-bias-lab-membership

👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code &quot;BABLING20&quot;.</itunes:summary>
      <itunes:subtitle>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by Jeffery Recker and Bryan Ilg to discuss the growing importance of AI literacy—what it means, why it matters, and how individuals and businesses can stay ahead in an AI-driven world.

Topics covered:

The evolution of AI education and BABL AI’s new subscription model for training &amp; certifications.

Why AI auditing skills are becoming essential for professionals across industries.

How AI governance roles will shape the future of business leadership.

The impact of AI on workforce transition and how individuals can future-proof their careers.

The EU AI Act’s new AI literacy requirements—what they mean for organizations.

Want to level up your AI knowledge? Check out BABL AI’s courses &amp; certifications!

🚀 Subscribe to our courses: https://courses.babl.ai/p/the-algorithmic-bias-lab-membership

👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code &quot;BABLING20&quot;.</itunes:subtitle>
      <itunes:keywords>eu ai act, online courses, education, ai literacy, ai ethics, responsible ai, ai, training</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>57</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a1aaac78-d711-481b-8db5-d97dc352ff48</guid>
      <title>Shea Visits RightsCon 2025</title>
      <description><![CDATA[In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown joins us live from RightsCon 2025 in Taipei to break down the latest conversations at the intersection of AI, human rights, and global policy. He’s joined by BABL AI COO Jeffery Recker and CSO Bryan Ilg as they dive into the big takeaways from the conference and what it means for the future of AI governance.

What’s in this episode?

✅ RightsCon Recap – How AI has taken over the human rights agenda
✅ AI Auditing & Accountability – Why organizations need to prove AI compliance
✅ Investors Are Paying Attention – Why AI risk management is becoming a priority
✅ The Role of Education – Why AI literacy is the key to ethical and responsible AI
✅ The International Association of Algorithmic Auditors – A new professional field is emerging

🚀 If you're passionate about AI, governance, and accountability, this episode is packed with insights you don’t want to miss.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 03 Mar 2025 06:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Jeffery Recker, Bryan Ilg)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/faf17917-6dfc-4de2-a2e7-c7cd3c50cc22/lunchtime-20babling-20youtube-18.jpg" width="1280"/>
      <enclosure length="26043273" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/9919fdae-899b-4d6f-b1a0-3ed19b20fa5f/audio/1185f628-884f-4719-a61c-90e8de497dfc/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Shea Visits RightsCon 2025</itunes:title>
      <itunes:author>Shea Brown, Jeffery Recker, Bryan Ilg</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/c7f37ccd-a317-45e4-b402-c27df0837021/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:24:15</itunes:duration>
      <itunes:summary>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown joins us live from RightsCon 2025 in Taipei to break down the latest conversations at the intersection of AI, human rights, and global policy. He’s joined by BABL AI COO Jeffery Recker and CSO Bryan Ilg as they dive into the big takeaways from the conference and what it means for the future of AI governance.

What’s in this episode?

✅ RightsCon Recap – How AI has taken over the human rights agenda
✅ AI Auditing &amp; Accountability – Why organizations need to prove AI compliance
✅ Investors Are Paying Attention – Why AI risk management is becoming a priority
✅ The Role of Education – Why AI literacy is the key to ethical and responsible AI
✅ The International Association of Algorithmic Auditors – A new professional field is emerging

🚀 If you&apos;re passionate about AI, governance, and accountability, this episode is packed with insights you don’t want to miss.</itunes:summary>
      <itunes:subtitle>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown joins us live from RightsCon 2025 in Taipei to break down the latest conversations at the intersection of AI, human rights, and global policy. He’s joined by BABL AI COO Jeffery Recker and CSO Bryan Ilg as they dive into the big takeaways from the conference and what it means for the future of AI governance.

What’s in this episode?

✅ RightsCon Recap – How AI has taken over the human rights agenda
✅ AI Auditing &amp; Accountability – Why organizations need to prove AI compliance
✅ Investors Are Paying Attention – Why AI risk management is becoming a priority
✅ The Role of Education – Why AI literacy is the key to ethical and responsible AI
✅ The International Association of Algorithmic Auditors – A new professional field is emerging

🚀 If you&apos;re passionate about AI, governance, and accountability, this episode is packed with insights you don’t want to miss.</itunes:subtitle>
      <itunes:keywords>ai auditing, ai ethics, responsible ai, ai, risk management, human rights</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>56</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d8e23f4a-9ba0-4718-a7cd-9f2559111895</guid>
      <title>A Conversation with Ezra Schwartz on UX Design</title>
      <description><![CDATA[Join BABL AI CEO Dr. Shea Brown on Lunchtime BABLing as he sits down with UX Consultant Ezra Schwartz for an in-depth conversation about the evolving world of user experience—and how it intersects with responsible AI.

In this episode, you'll discover:

• Ezra’s Journey: From being a student in our AI & Algorithm Auditor Certification Program to becoming a seasoned UX consultant specializing in age tech.

• Beyond UI Design: Ezra breaks down the true essence of UX, explaining how it’s not just about pretty interfaces, but about creating intuitive, accessible, and human-centered experiences that build trust and drive user satisfaction.

• The Role of UX in AI: Learn how thoughtful UX design is essential in managing AI risks, facilitating cross-department collaboration, and ensuring that digital products truly serve their users.

• Age Tech Insights: Explore how innovative solutions, from fall detection systems to digital caregiving tools, are reshaping life for our aging population—and the importance of balancing technology with privacy and ethical considerations.

If you’re passionate about design, responsible AI, or just curious about the human side of technology, this episode is a must-listen.

👉 Connect with Ezra Schwartz:

Website: https://www.artandtech.com

LinkedIn: https://www.linkedin.com/in/ezraschwartz

Responsible AgeTech Conference Ezra is organizing: https://responsible-agetech.org

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 24 Feb 2025 06:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Ezra Schwartz, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/26c8cdd7-ec24-42ba-ad36-990499f228bf/lunchtime-20babling-20youtube-17.jpg" width="1280"/>
      <enclosure length="34793249" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/89cf60a8-dc7a-473d-bb0a-9e165192c3f8/audio/3b860697-b513-4f4b-9e18-ecc8ba05bc44/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>A Conversation with Ezra Schwartz on UX Design</itunes:title>
      <itunes:author>Ezra Schwartz, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/18fa9220-1df7-4621-b318-89f0cb15e801/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:33:22</itunes:duration>
      <itunes:summary>Join BABL AI CEO Dr. Shea Brown on Lunchtime BABLing as he sits down with UX Consultant Ezra Schwartz for an in-depth conversation about the evolving world of user experience—and how it intersects with responsible AI.

In this episode, you&apos;ll discover:

• Ezra’s Journey: From being a student in our AI &amp; Algorithm Auditor Certification Program to becoming a seasoned UX consultant specializing in age tech.

• Beyond UI Design: Ezra breaks down the true essence of UX, explaining how it’s not just about pretty interfaces, but about creating intuitive, accessible, and human-centered experiences that build trust and drive user satisfaction.

• The Role of UX in AI: Learn how thoughtful UX design is essential in managing AI risks, facilitating cross-department collaboration, and ensuring that digital products truly serve their users.

• Age Tech Insights: Explore how innovative solutions, from fall detection systems to digital caregiving tools, are reshaping life for our aging population—and the importance of balancing technology with privacy and ethical considerations.

If you’re passionate about design, responsible AI, or just curious about the human side of technology, this episode is a must-listen.

👉 Connect with Ezra Schwartz:

Website: https://www.artandtech.com

LinkedIn: https://www.linkedin.com/in/ezraschwartz

Responsible AgeTech Conference Ezra is organizing: https://responsible-agetech.org</itunes:summary>
      <itunes:subtitle>Join BABL AI CEO Dr. Shea Brown on Lunchtime BABLing as he sits down with UX Consultant Ezra Schwartz for an in-depth conversation about the evolving world of user experience—and how it intersects with responsible AI.

In this episode, you&apos;ll discover:

• Ezra’s Journey: From being a student in our AI &amp; Algorithm Auditor Certification Program to becoming a seasoned UX consultant specializing in age tech.

• Beyond UI Design: Ezra breaks down the true essence of UX, explaining how it’s not just about pretty interfaces, but about creating intuitive, accessible, and human-centered experiences that build trust and drive user satisfaction.

• The Role of UX in AI: Learn how thoughtful UX design is essential in managing AI risks, facilitating cross-department collaboration, and ensuring that digital products truly serve their users.

• Age Tech Insights: Explore how innovative solutions, from fall detection systems to digital caregiving tools, are reshaping life for our aging population—and the importance of balancing technology with privacy and ethical considerations.

If you’re passionate about design, responsible AI, or just curious about the human side of technology, this episode is a must-listen.

👉 Connect with Ezra Schwartz:

Website: https://www.artandtech.com

LinkedIn: https://www.linkedin.com/in/ezraschwartz

Responsible AgeTech Conference Ezra is organizing: https://responsible-agetech.org</itunes:subtitle>
      <itunes:keywords>ai ethics, responsible ai, ux design, ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>55</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d9925eb1-02c1-4815-a4eb-ecc487f89868</guid>
      <title>Interview with Mahesh Chandra Mukkamala from Quantpi</title>
      <description><![CDATA[🇩🇪 People can join Quantpi's "RAI in Action" event series kicking off in Germany in March: 

👉 https://www.quantpi.com/resources/events 

🇺🇸 U.S.-based folks can join Quantpi's GTC session on March 20th called "A scalable approach toward trustworthy AI":

👉 https://www.nvidia.com/gtc/session-catalog/?ncid=so-link-241456&linkId=100000328230011&tab.catalogallsessionstab=16566177511100015Kus&search=antoine#/session/1726160038299001jn0f

👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".   

📚 Sign up for our courses today: https://babl.ai/courses/ 

🔗 Follow us for more: https://linktr.ee/babl.ai

🎙️ Lunchtime BABLing: An Interview with Mahesh Chandra Mukkamala from Quantpi 🎙️

In this episode of Lunchtime BABLing, host Dr. Shea Brown, CEO of BABL AI, sits down with Mahesh Chandra Mukkamala, a data scientist from Quantpi, to discuss the complexities of black box AI testing, AI risk assessment, and compliance in the age of evolving AI regulations.

💡 Topics Covered:

✔️ What is black box AI testing, and why is it crucial?

✔️ How Quantpi ensures model robustness and fairness across different AI systems

✔️ The role of AI risk assessment in EU AI Act compliance and enterprise AI governance

✔️ Challenges businesses face in AI model evaluation and best practices for testing

✔️ Career insights for aspiring AI governance professionals

With increasing regulatory pressure from laws like the EU AI Act, companies need to test their AI models rigorously. Whether you’re an AI professional, compliance officer, or just curious about AI governance, this conversation is packed with valuable insights on ensuring AI systems are trustworthy, fair, and reliable.

🔔 Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest AI governance insights from BABL AI!

📢 Listen to the podcast on all major podcast streaming platforms

📩 Connect with Mahesh on LinkedIn: https://www.linkedin.com/in/maheshchandra/

📌 Follow Quantpi for more AI insights: https://www.quantpi.com

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 17 Feb 2025 06:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Mahesh Chandra Mukkamala, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/903bce2e-9c7a-4712-8309-46d419c68fa6/lunchtime-20babling-20youtube-16.jpg" width="1280"/>
      <enclosure length="29353092" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/6b9a5377-6714-4ef8-a02b-91876e587376/audio/c39fd2dd-9eec-4b79-a345-260afb72d8d0/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Interview with Mahesh Chandra Mukkamala from Quantpi</itunes:title>
      <itunes:author>Mahesh Chandra Mukkamala, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/1af05a3e-9ade-40b8-886a-ed6228305a22/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:27:42</itunes:duration>
      <itunes:summary>🇩🇪 People can join Quantpi&apos;s &quot;RAI in Action&quot; event series kicking off in Germany in March: 

👉 https://www.quantpi.com/resources/events 

🇺🇸 U.S.-based folks can join Quantpi&apos;s GTC session on March 20th called &quot;A scalable approach toward trustworthy AI&quot;:

👉 https://www.nvidia.com/gtc/session-catalog/?ncid=so-link-241456&amp;linkId=100000328230011&amp;tab.catalogallsessionstab=16566177511100015Kus&amp;search=antoine#/session/1726160038299001jn0f

👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code &quot;BABLING20&quot;.   

📚 Sign up for our courses today: https://babl.ai/courses/ 

🔗 Follow us for more: https://linktr.ee/babl.ai

🎙️ Lunchtime BABLing: An Interview with Mahesh Chandra Mukkamala from Quantpi 🎙️

In this episode of Lunchtime BABLing, host Dr. Shea Brown, CEO of BABL AI, sits down with Mahesh Chandra Mukkamala, a data scientist from Quantpi, to discuss the complexities of black box AI testing, AI risk assessment, and compliance in the age of evolving AI regulations.

💡 Topics Covered:

✔️ What is black box AI testing, and why is it crucial?

✔️ How Quantpi ensures model robustness and fairness across different AI systems

✔️ The role of AI risk assessment in EU AI Act compliance and enterprise AI governance

✔️ Challenges businesses face in AI model evaluation and best practices for testing

✔️ Career insights for aspiring AI governance professionals

With increasing regulatory pressure from laws like the EU AI Act, companies need to test their AI models rigorously. Whether you’re an AI professional, compliance officer, or just curious about AI governance, this conversation is packed with valuable insights on ensuring AI systems are trustworthy, fair, and reliable.

🔔 Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest AI governance insights from BABL AI!

📢 Listen to the podcast on all major podcast streaming platforms

📩 Connect with Mahesh on LinkedIn: https://www.linkedin.com/in/maheshchandra/

📌 Follow Quantpi for more AI insights: https://www.quantpi.com</itunes:summary>
      <itunes:subtitle>🇩🇪 People can join Quantpi&apos;s &quot;RAI in Action&quot; event series kicking off in Germany in March: 

👉 https://www.quantpi.com/resources/events 

🇺🇸 U.S.-based folks can join Quantpi&apos;s GTC session on March 20th called &quot;A scalable approach toward trustworthy AI&quot;:

👉 https://www.nvidia.com/gtc/session-catalog/?ncid=so-link-241456&amp;linkId=100000328230011&amp;tab.catalogallsessionstab=16566177511100015Kus&amp;search=antoine#/session/1726160038299001jn0f

👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code &quot;BABLING20&quot;.   

📚 Sign up for our courses today: https://babl.ai/courses/ 

🔗 Follow us for more: https://linktr.ee/babl.ai

🎙️ Lunchtime BABLing: An Interview with Mahesh Chandra Mukkamala from Quantpi 🎙️

In this episode of Lunchtime BABLing, host Dr. Shea Brown, CEO of BABL AI, sits down with Mahesh Chandra Mukkamala, a data scientist from Quantpi, to discuss the complexities of black box AI testing, AI risk assessment, and compliance in the age of evolving AI regulations.

💡 Topics Covered:

✔️ What is black box AI testing, and why is it crucial?

✔️ How Quantpi ensures model robustness and fairness across different AI systems

✔️ The role of AI risk assessment in EU AI Act compliance and enterprise AI governance

✔️ Challenges businesses face in AI model evaluation and best practices for testing

✔️ Career insights for aspiring AI governance professionals

With increasing regulatory pressure from laws like the EU AI Act, companies need to test their AI models rigorously. Whether you’re an AI professional, compliance officer, or just curious about AI governance, this conversation is packed with valuable insights on ensuring AI systems are trustworthy, fair, and reliable.

🔔 Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest AI governance insights from BABL AI!

📢 Listen to the podcast on all major podcast streaming platforms

📩 Connect with Mahesh on LinkedIn: https://www.linkedin.com/in/maheshchandra/

📌 Follow Quantpi for more AI insights: https://www.quantpi.com</itunes:subtitle>
      <itunes:keywords>eu ai act, ai regulation, responsible ai, machine learning, quantpi, data science, ai compliance, ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>54</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">4d98b612-5cd0-4095-a136-87e97e85f890</guid>
      <title>EU AI Act Comes Into Effect and the Regulatory Uncertainty of North America</title>
      <description><![CDATA[Join host Dr. Shea Brown (CEO of BABL AI) along with guest speakers COO Jeffery Recker and CSO Bryan Ilg for an in-depth discussion on the rapidly evolving world of AI regulation. In this episode, our panel unpacks:

The EU AI Act in Action: Learn about the new obligations now in force under the EU AI Act—including the crucial requirements of AI literacy (Article 4) and the prohibition of unacceptable-risk AI practices (Article 5).

Compliance Timelines & What’s Next: Get the lowdown on the phased rollout, with upcoming standards and enforcement deadlines on the horizon, and discover practical steps companies should take to prepare.

North American Regulatory Landscape: Explore the contrasting regulatory approaches in North America, from the shifting federal stance in the US to state-specific laws (like Colorado’s AI Act and New York City’s Local Law 144), and why this uncertainty matters for businesses.

Risk, Ethics & the Future of AI in Business: Delve into the importance of risk management, AI literacy training, and human-centered design. Our guests share insights on why responsible AI isn’t just about compliance—it’s also a competitive advantage in today’s fast-paced market.

Whether you’re a business leader, technologist, or policy enthusiast, this episode offers valuable perspectives on how organizations can navigate the complex, global landscape of AI governance while protecting their customers and staying ahead of regulatory demands.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 10 Feb 2025 07:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Bryan Ilg, Shea Brown, Jeffery Recker)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/bc061141-d2a4-4a69-b648-babc3f20984f/lunchtime-20babling-20youtube-13.jpg" width="1280"/>
      <enclosure length="52123926" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/10fc44cb-aa99-44c0-8f2d-c993bd26eaa9/audio/7f29263e-8692-41c1-93ce-fd9e5c8a30ce/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>EU AI Act Comes Into Effect and the Regulatory Uncertainty of North America</itunes:title>
      <itunes:author>Bryan Ilg, Shea Brown, Jeffery Recker</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/92062a83-6655-421c-a58d-c5aa35b7413f/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:51:25</itunes:duration>
      <itunes:summary>Join host Dr. Shea Brown (CEO of BABL AI) along with guest speakers COO Jeffery Recker and CSO Bryan Ilg for an in-depth discussion on the rapidly evolving world of AI regulation. In this episode, our panel unpacks:

The EU AI Act in Action: Learn about the new obligations now in force under the EU AI Act—including the crucial requirements of AI literacy (Article 4) and the prohibition of unacceptable-risk AI practices (Article 5).

Compliance Timelines &amp; What’s Next: Get the lowdown on the phased rollout, with upcoming standards and enforcement deadlines on the horizon, and discover practical steps companies should take to prepare.

North American Regulatory Landscape: Explore the contrasting regulatory approaches in North America, from the shifting federal stance in the US to state-specific laws (like Colorado’s AI Act and New York City’s Local Law 144), and why this uncertainty matters for businesses.

Risk, Ethics &amp; the Future of AI in Business: Delve into the importance of risk management, AI literacy training, and human-centered design. Our guests share insights on why responsible AI isn’t just about compliance—it’s also a competitive advantage in today’s fast-paced market.

Whether you’re a business leader, technologist, or policy enthusiast, this episode offers valuable perspectives on how organizations can navigate the complex, global landscape of AI governance while protecting their customers and staying ahead of regulatory demands.</itunes:summary>
      <itunes:subtitle>Join host Dr. Shea Brown (CEO of BABL AI) along with guest speakers COO Jeffery Recker and CSO Bryan Ilg for an in-depth discussion on the rapidly evolving world of AI regulation. In this episode, our panel unpacks:

The EU AI Act in Action: Learn about the new obligations now in force under the EU AI Act—including the crucial requirements of AI literacy (Article 4) and the prohibition of unacceptable-risk AI practices (Article 5).

Compliance Timelines &amp; What’s Next: Get the lowdown on the phased rollout, with upcoming standards and enforcement deadlines on the horizon, and discover practical steps companies should take to prepare.

North American Regulatory Landscape: Explore the contrasting regulatory approaches in North America, from the shifting federal stance in the US to state-specific laws (like Colorado’s AI Act and New York City’s Local Law 144), and why this uncertainty matters for businesses.

Risk, Ethics &amp; the Future of AI in Business: Delve into the importance of risk management, AI literacy training, and human-centered design. Our guests share insights on why responsible AI isn’t just about compliance—it’s also a competitive advantage in today’s fast-paced market.

Whether you’re a business leader, technologist, or policy enthusiast, this episode offers valuable perspectives on how organizations can navigate the complex, global landscape of AI governance while protecting their customers and staying ahead of regulatory demands.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>53</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1bd4f486-3320-47e9-b76d-3294a12a7d93</guid>
      <title>Interview with Abhi Sanka</title>
      <description><![CDATA[🎙️ Lunchtime BABLing: Interview with Abhi Sanka 🎙️

Join BABL AI CEO Dr. Shea Brown as he chats with Abhi Sanka, a dynamic leader in responsible AI and a graduate of BABL AI's inaugural Algorithm Auditor Certificate Program. In this episode, Abhi reflects on his unique journey—from studying the ethics of the Human Genome Project at Duke University to shaping science and technology policy for the U.S. government, to now helping drive innovation at Microsoft.

Explore Abhi's insights on the parallels between the Human Genome Project and the current AI revolution, the challenges of governing agentic AI systems, and the importance of building trust through responsible design. They also discuss the evolving landscape of AI assurance and the critical need for collaboration between industry, policymakers, and civil society.

📌 Highlights:

Abhi’s academic and professional path to responsible AI.

The challenges of auditing agentic AI and aligning governance frameworks.

The importance of community and collaboration in advancing responsible AI.

Abhi’s goals for 2025 and his passion for staying connected to the wider AI ethics community.

Don’t miss this thought-provoking conversation packed with wisdom for anyone passionate about AI governance, policy, and innovation!

🔗 Abhi's LinkedIn: https://www.linkedin.com/in/abhisanka/

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 27 Jan 2025 11:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Abhi Sanka, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/771b6eee-1b27-4f1e-ba7c-e7aca70a63fb/lunchtime-20babling-20youtube-11.jpg" width="1280"/>
      <enclosure length="34770261" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/994f786b-ae88-4ebc-b05c-1fb3edcf7ed7/audio/bca538b6-fd5e-441f-beb4-c014ae8effdc/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Interview with Abhi Sanka</itunes:title>
      <itunes:author>Abhi Sanka, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/35a92a74-3a1f-4a4e-88d2-f58e83baad70/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:33:21</itunes:duration>
      <itunes:summary>🎙️ Lunchtime BABLing: Interview with Abhi Sanka 🎙️

Join BABL AI CEO Dr. Shea Brown as he chats with Abhi Sanka, a dynamic leader in responsible AI and a graduate of BABL AI&apos;s inaugural Algorithm Auditor Certificate Program. In this episode, Abhi reflects on his unique journey—from studying the ethics of the Human Genome Project at Duke University to shaping science and technology policy for the U.S. government, to now helping drive innovation at Microsoft.

Explore Abhi&apos;s insights on the parallels between the Human Genome Project and the current AI revolution, the challenges of governing agentic AI systems, and the importance of building trust through responsible design. They also discuss the evolving landscape of AI assurance and the critical need for collaboration between industry, policymakers, and civil society.

📌 Highlights:

Abhi’s academic and professional path to responsible AI.

The challenges of auditing agentic AI and aligning governance frameworks.

The importance of community and collaboration in advancing responsible AI.

Abhi’s goals for 2025 and his passion for staying connected to the wider AI ethics community.

Don’t miss this thought-provoking conversation packed with wisdom for anyone passionate about AI governance, policy, and innovation!

🔗 Abhi&apos;s LinkedIn: https://www.linkedin.com/in/abhisanka/</itunes:summary>
      <itunes:subtitle>🎙️ Lunchtime BABLing: Interview with Abhi Sanka 🎙️

Join BABL AI CEO Dr. Shea Brown as he chats with Abhi Sanka, a dynamic leader in responsible AI and a graduate of BABL AI&apos;s inaugural Algorithm Auditor Certificate Program. In this episode, Abhi reflects on his unique journey—from studying the ethics of the Human Genome Project at Duke University to shaping science and technology policy for the U.S. government, to now helping drive innovation at Microsoft.

Explore Abhi&apos;s insights on the parallels between the Human Genome Project and the current AI revolution, the challenges of governing agentic AI systems, and the importance of building trust through responsible design. They also discuss the evolving landscape of AI assurance and the critical need for collaboration between industry, policymakers, and civil society.

📌 Highlights:

Abhi’s academic and professional path to responsible AI.

The challenges of auditing agentic AI and aligning governance frameworks.

The importance of community and collaboration in advancing responsible AI.

Abhi’s goals for 2025 and his passion for staying connected to the wider AI ethics community.

Don’t miss this thought-provoking conversation packed with wisdom for anyone passionate about AI governance, policy, and innovation!

🔗 Abhi&apos;s LinkedIn: https://www.linkedin.com/in/abhisanka/</itunes:subtitle>
      <itunes:keywords>artificial intelligence, ai ethics, responsible ai, ai, ethical ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>52</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">0fa0df9a-3bec-44d8-ae9b-537bee62d72b</guid>
      <title>An Interview with Soribel Feliz</title>
      <description><![CDATA[🎙️ In this engaging episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with special guest Soribel Feliz, a former US diplomat turned AI governance expert. Soribel shares her fascinating career journey from the State Department to big tech roles at Meta and Microsoft, and now as an AI governance and compliance specialist at DHS. 🚀

From her early experiences moderating content algorithms at Meta to advising on AI policy in the US Senate, Soribel discusses the evolution of AI, its ethical challenges, and the crucial importance of data privacy and workforce impacts. She also opens up about transitioning into the tech world, overcoming technical learning curves, and her dedication to helping others navigate career uncertainties in the AI-driven future. 🌍✨

🔑 Key Highlights:

Soribel's career leap from diplomacy to tech and AI policy.
The ethical dilemmas and societal impacts of AI she’s witnessed firsthand.
Her thoughts on AI literacy gaps and the need for growth mindset education.
Practical advice for those transitioning into AI or confronting job uncertainties.

🌟 This episode is packed with wisdom, optimism, and actionable insights for young professionals, career changers, and anyone passionate about responsible AI.

📌 Follow Soribel Feliz for more on AI governance, career guidance, and navigating uncertainty in a rapidly evolving world. Links to her website and newsletter are in the description below.

LinkedIn: https://www.linkedin.com/in/soribel-f-b5242b14/

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 13 Jan 2025 06:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Soribel Feliz, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/d89fa18d-d614-4508-b34f-bf4671d9613c/lunchtime-20babling-20youtube-9.jpg" width="1280"/>
      <enclosure length="26100115" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/738676d6-4cda-4fb0-baf9-9a0fbe46acd9/audio/f3d6570a-95c3-402a-95d8-1bbba5f8a1f5/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>An Interview with Soribel Feliz</itunes:title>
      <itunes:author>Soribel Feliz, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/980042e3-58c3-484e-b739-adffe2e76290/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:24:19</itunes:duration>
      <itunes:summary>🎙️ In this engaging episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with special guest Soribel Feliz, a former US diplomat turned AI governance expert. Soribel shares her fascinating career journey from the State Department to big tech roles at Meta and Microsoft, and now as an AI governance and compliance specialist at DHS. 🚀

From her early experiences moderating content algorithms at Meta to advising on AI policy in the US Senate, Soribel discusses the evolution of AI, its ethical challenges, and the crucial importance of data privacy and workforce impacts. She also opens up about transitioning into the tech world, overcoming technical learning curves, and her dedication to helping others navigate career uncertainties in the AI-driven future. 🌍✨

🔑 Key Highlights:

Soribel&apos;s career leap from diplomacy to tech and AI policy.
The ethical dilemmas and societal impacts of AI she’s witnessed firsthand.
Her thoughts on AI literacy gaps and the need for growth mindset education.
Practical advice for those transitioning into AI or confronting job uncertainties.

🌟 This episode is packed with wisdom, optimism, and actionable insights for young professionals, career changers, and anyone passionate about responsible AI.

📌 Follow Soribel Feliz for more on AI governance, career guidance, and navigating uncertainty in a rapidly evolving world. Links to her website and newsletter are in the description below.

LinkedIn: https://www.linkedin.com/in/soribel-f-b5242b14/</itunes:summary>
      <itunes:subtitle>🎙️ In this engaging episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with special guest Soribel Feliz, a former US diplomat turned AI governance expert. Soribel shares her fascinating career journey from the State Department to big tech roles at Meta and Microsoft, and now as an AI governance and compliance specialist at DHS. 🚀

From her early experiences moderating content algorithms at Meta to advising on AI policy in the US Senate, Soribel discusses the evolution of AI, its ethical challenges, and the crucial importance of data privacy and workforce impacts. She also opens up about transitioning into the tech world, overcoming technical learning curves, and her dedication to helping others navigate career uncertainties in the AI-driven future. 🌍✨

🔑 Key Highlights:

Soribel&apos;s career leap from diplomacy to tech and AI policy.
The ethical dilemmas and societal impacts of AI she’s witnessed firsthand.
Her thoughts on AI literacy gaps and the need for growth mindset education.
Practical advice for those transitioning into AI or confronting job uncertainties.

🌟 This episode is packed with wisdom, optimism, and actionable insights for young professionals, career changers, and anyone passionate about responsible AI.

📌 Follow Soribel Feliz for more on AI governance, career guidance, and navigating uncertainty in a rapidly evolving world. Links to her website and newsletter are in the description below.

LinkedIn: https://www.linkedin.com/in/soribel-f-b5242b14/</itunes:subtitle>
      <itunes:keywords>ai governance, education, ai education, ai jobs, careers, careers in ai, governance, ai, law</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>51</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">bf3fb88a-cb70-42b2-bbd8-84e978652e1c</guid>
      <title>2024: An AI Year in Review</title>
      <description><![CDATA[🎙️ Lunchtime BABLing: 2024 - An AI Year in Review 🎙️

Join Shea Brown (CEO, BABL AI), Jeffery Recker (COO, BABL AI), and Bryan Ilg (CSO, BABL AI) as they reflect on an extraordinary year in AI!

In this final episode of the year, the trio dives into:

🌟 The rapid growth of Responsible AI and algorithmic auditing in 2024.

📈 How large language models are redefining audits and operational workflows.

🌍 The global wave of AI regulations, including the EU AI Act, Colorado AI Act, and emerging laws worldwide.

📚 The rise of AI literacy and the "race for competency" in businesses and society.

🤖 Exciting (and risky!) trends like AI agents and their potential for transformation in 2025.

Jeffery also shares an exciting update about his free online course, Introduction to Responsible AI, available until January 13th, 2025. Don’t miss this opportunity to earn a certification badge and join a live Q&A session!

🎉 Looking Ahead to 2025

What’s next for AI governance, standards like ISO 42001, and the evolving role of education in shaping the future of AI? The team shares predictions, insights, and hopes for the year ahead.

📌 Key Takeaways:

AI is maturing rapidly, with businesses adopting governance frameworks and grappling with new regulations.

Education and competency-building are essential to navigating the changing AI landscape.

The global regulatory response is reshaping how AI is developed, deployed, and audited.

Link to Raymond Sun's Techie Ray Global AI Regulation Tracker: https://www.techieray.com/GlobalAIRegulationTracker

💡 Don’t miss this thought-provoking recap of 2024 and the exciting roadmap for 2025!

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 30 Dec 2024 06:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Bryan Ilg, Jeffery Recker)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/6e0825e4-6699-467a-946e-3a4cb983c0b4/lunchtime-20babling-20youtube-8.jpg" width="1280"/>
      <enclosure length="41370254" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/dad4c993-d8bf-4ff5-b358-5a7e1c42df61/audio/140c7ced-d8ed-41d1-8ba6-06798a411a32/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>2024: An AI Year in Review</itunes:title>
      <itunes:author>Shea Brown, Bryan Ilg, Jeffery Recker</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/d8a42911-59c4-49bb-84d0-67777ef31aec/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:40:13</itunes:duration>
      <itunes:summary>🎙️ Lunchtime BABLing: 2024 - An AI Year in Review 🎙️

Join Shea Brown (CEO, BABL AI), Jeffery Recker (COO, BABL AI), and Bryan Ilg (CSO, BABL AI) as they reflect on an extraordinary year in AI!

In this final episode of the year, the trio dives into:

🌟 The rapid growth of Responsible AI and algorithmic auditing in 2024.

📈 How large language models are redefining audits and operational workflows.

🌍 The global wave of AI regulations, including the EU AI Act, Colorado AI Act, and emerging laws worldwide.

📚 The rise of AI literacy and the &quot;race for competency&quot; in businesses and society.

🤖 Exciting (and risky!) trends like AI agents and their potential for transformation in 2025.

Jeffery also shares an exciting update about his free online course, Introduction to Responsible AI, available until January 13th, 2025. Don’t miss this opportunity to earn a certification badge and join a live Q&amp;A session!

🎉 Looking Ahead to 2025

What’s next for AI governance, standards like ISO 42001, and the evolving role of education in shaping the future of AI? The team shares predictions, insights, and hopes for the year ahead.

📌 Key Takeaways:

AI is maturing rapidly, with businesses adopting governance frameworks and grappling with new regulations.

Education and competency-building are essential to navigating the changing AI landscape.

The global regulatory response is reshaping how AI is developed, deployed, and audited.

Link to Raymond Sun&apos;s Techie Ray Global AI Regulation Tracker: https://www.techieray.com/GlobalAIRegulationTracker

💡 Don’t miss this thought-provoking recap of 2024 and the exciting roadmap for 2025!</itunes:summary>
      <itunes:subtitle>🎙️ Lunchtime BABLing: 2024 - An AI Year in Review 🎙️

Join Shea Brown (CEO, BABL AI), Jeffery Recker (COO, BABL AI), and Bryan Ilg (CSO, BABL AI) as they reflect on an extraordinary year in AI!

In this final episode of the year, the trio dives into:

🌟 The rapid growth of Responsible AI and algorithmic auditing in 2024.

📈 How large language models are redefining audits and operational workflows.

🌍 The global wave of AI regulations, including the EU AI Act, Colorado AI Act, and emerging laws worldwide.

📚 The rise of AI literacy and the &quot;race for competency&quot; in businesses and society.

🤖 Exciting (and risky!) trends like AI agents and their potential for transformation in 2025.

Jeffery also shares an exciting update about his free online course, Introduction to Responsible AI, available until January 13th, 2025. Don’t miss this opportunity to earn a certification badge and join a live Q&amp;A session!

🎉 Looking Ahead to 2025

What’s next for AI governance, standards like ISO 42001, and the evolving role of education in shaping the future of AI? The team shares predictions, insights, and hopes for the year ahead.

📌 Key Takeaways:

AI is maturing rapidly, with businesses adopting governance frameworks and grappling with new regulations.

Education and competency-building are essential to navigating the changing AI landscape.

The global regulatory response is reshaping how AI is developed, deployed, and audited.

Link to Raymond Sun&apos;s Techie Ray Global AI Regulation Tracker: https://www.techieray.com/GlobalAIRegulationTracker

💡 Don’t miss this thought-provoking recap of 2024 and the exciting roadmap for 2025!</itunes:subtitle>
      <itunes:keywords>chatgpt, ai governance, gen ai, ai regulation, responsible ai, generative ai, ai, babl ai, risk management</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>50</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1f628fad-aef9-487d-b732-5d37e0fabfb7</guid>
      <title>An Interview with Aleksandr Tiulkanov</title>
      <description><![CDATA[In this episode, BABL AI CEO Dr. Shea Brown interviews Aleksandr Tiulkanov, an expert in AI compliance and digital policy. Aleksandr shares his fascinating journey from being a commercial contracts lawyer to becoming a leader in AI policy at Deloitte and the Council of Europe. 🚀

🔍 What’s in this episode?

The transition from legal tech to AI compliance.

Key differences between the Council of Europe’s Framework Convention on AI and the EU AI Act.

How the EU AI Act fits into Europe’s product safety legislation.

The challenges and confusion around conformity assessments and AI literacy requirements.

Insights into Aleksandr’s courses designed for governance, risk, and compliance professionals.

🛠️ Aleksandr also dives into practical advice for preparing for the EU AI Act, even in the absence of finalized standards, and the role of frameworks like ISO 42001.

📚 Learn more about Aleksandr’s courses: https://aia.tiulkanov.info
🤝 Follow Aleksandr on LinkedIn: https://www.linkedin.com/in/tyulkanov/

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 16 Dec 2024 06:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Aleksandr Tiulkanov, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/fd6a742a-849e-44c6-8270-c4649be5b970/lunchtime-20babling-20youtube-9.jpg" width="1280"/>
      <enclosure length="44216766" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/736ea742-d6eb-4c5f-9dd7-e0d61fa47783/audio/267a4ccb-363b-4f40-97df-c9bdbcd82735/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>An Interview with Aleksandr Tiulkanov</itunes:title>
      <itunes:author>Aleksandr Tiulkanov, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/f1d139d6-6ccb-42b5-8f15-37b0c92798de/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:43:15</itunes:duration>
      <itunes:summary>In this episode, BABL AI CEO Dr. Shea Brown interviews Aleksandr Tiulkanov, an expert in AI compliance and digital policy. Aleksandr shares his fascinating journey from being a commercial contracts lawyer to becoming a leader in AI policy at Deloitte and the Council of Europe. 🚀

🔍 What’s in this episode?

The transition from legal tech to AI compliance.

Key differences between the Council of Europe’s Framework Convention on AI and the EU AI Act.

How the EU AI Act fits into Europe’s product safety legislation.

The challenges and confusion around conformity assessments and AI literacy requirements.

Insights into Aleksandr’s courses designed for governance, risk, and compliance professionals.

🛠️ Aleksandr also dives into practical advice for preparing for the EU AI Act, even in the absence of finalized standards, and the role of frameworks like ISO 42001.

📚 Learn more about Aleksandr’s courses: https://aia.tiulkanov.info
🤝 Follow Aleksandr on LinkedIn: https://www.linkedin.com/in/tyulkanov/</itunes:summary>
      <itunes:subtitle>In this episode, BABL AI CEO Dr. Shea Brown interviews Aleksandr Tiulkanov, an expert in AI compliance and digital policy. Aleksandr shares his fascinating journey from being a commercial contracts lawyer to becoming a leader in AI policy at Deloitte and the Council of Europe. 🚀

🔍 What’s in this episode?

The transition from legal tech to AI compliance.

Key differences between the Council of Europe’s Framework Convention on AI and the EU AI Act.

How the EU AI Act fits into Europe’s product safety legislation.

The challenges and confusion around conformity assessments and AI literacy requirements.

Insights into Aleksandr’s courses designed for governance, risk, and compliance professionals.

🛠️ Aleksandr also dives into practical advice for preparing for the EU AI Act, even in the absence of finalized standards, and the role of frameworks like ISO 42001.

📚 Learn more about Aleksandr’s courses: https://aia.tiulkanov.info
🤝 Follow Aleksandr on LinkedIn: https://www.linkedin.com/in/tyulkanov/</itunes:subtitle>
      <itunes:keywords>eu ai act, regulation, education, ai regulation, ai law, legal, ai, law</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>49</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6cda766a-a441-4c4a-a0cd-914c38be163e</guid>
      <title>The Future of Jobs with AI</title>
      <description><![CDATA[In this episode of Lunchtime BABLing, Dr. Shea Brown, CEO of BABL AI, is joined by Jeffery Recker and Bryan Ilg to tackle one of the most pressing questions of our time: How will AI impact the future of work?

From fears of job displacement to the rise of entirely new roles, the trio explores:

🔹 How AI will reshape industries and automate parts of our jobs.
🔹 The importance of upskilling to stay competitive in an AI-driven world.
🔹 Emerging career paths in responsible AI, compliance, and risk management.
🔹 The delicate balance between technological disruption and human creativity.

📌 Whether you're a seasoned professional, a student planning your career, or just curious about the future, this episode has something for you.

👉 Don’t miss this insightful conversation about navigating the rapidly changing job market and preparing for a future where AI is a part of nearly every role.

🎧 Listen on your favorite podcast platform or watch the full discussion here. Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest AI trends and insights!

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 02 Dec 2024 07:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Bryan Ilg, Jeffery Recker, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/01943b7e-5d68-4f22-9b64-b172f7ba8453/lunchtime-20babling-20youtube-7.jpg" width="1280"/>
      <enclosure length="36148064" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/78a7251d-d6da-4d02-8892-2b0c054a05fa/audio/b5908450-d7e7-43e1-97f5-fc6bef99fdb3/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>The Future of Jobs with AI</itunes:title>
      <itunes:author>Bryan Ilg, Jeffery Recker, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/00880b5a-3124-4676-b1d0-d55e44bd3656/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:34:51</itunes:duration>
      <itunes:summary>In this episode of Lunchtime BABLing, Dr. Shea Brown, CEO of BABL AI, is joined by Jeffery Recker and Bryan Ilg to tackle one of the most pressing questions of our time: How will AI impact the future of work?

From fears of job displacement to the rise of entirely new roles, the trio explores:

🔹 How AI will reshape industries and automate parts of our jobs.
🔹 The importance of upskilling to stay competitive in an AI-driven world.
🔹 Emerging career paths in responsible AI, compliance, and risk management.
🔹 The delicate balance between technological disruption and human creativity.

📌 Whether you&apos;re a seasoned professional, a student planning your career, or just curious about the future, this episode has something for you.

👉 Don’t miss this insightful conversation about navigating the rapidly changing job market and preparing for a future where AI is a part of nearly every role.

🎧 Listen on your favorite podcast platform or watch the full discussion here. Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest AI trends and insights!</itunes:summary>
      <itunes:subtitle>In this episode of Lunchtime BABLing, Dr. Shea Brown, CEO of BABL AI, is joined by Jeffery Recker and Bryan Ilg to tackle one of the most pressing questions of our time: How will AI impact the future of work?

From fears of job displacement to the rise of entirely new roles, the trio explores:

🔹 How AI will reshape industries and automate parts of our jobs.
🔹 The importance of upskilling to stay competitive in an AI-driven world.
🔹 Emerging career paths in responsible AI, compliance, and risk management.
🔹 The delicate balance between technological disruption and human creativity.

📌 Whether you&apos;re a seasoned professional, a student planning your career, or just curious about the future, this episode has something for you.

👉 Don’t miss this insightful conversation about navigating the rapidly changing job market and preparing for a future where AI is a part of nearly every role.

🎧 Listen on your favorite podcast platform or watch the full discussion here. Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest AI trends and insights!</itunes:subtitle>
      <itunes:keywords>job, ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>48</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">46ceaf93-e19a-4b84-a0e2-755c3b683031</guid>
      <title>How Will a Trump Presidency Impact AI Regulation?</title>
      <description><![CDATA[🎙️ Lunchtime BABLing Podcast: What Will a Trump Presidency Mean for AI Regulations?

In this thought-provoking episode, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg to explore the potential impact of a Trump presidency on the landscape of AI regulation. 🚨🤖

Key topics include:

Federal deregulation and the push for state-level AI governance.

The potential repeal of Biden's executive order on AI.

Implications for organizations navigating a fragmented compliance framework.

The role of global AI policies, such as the EU AI Act, in shaping U.S. corporate strategies.

How deregulation might affect innovation, litigation, and risk management in AI development.

This is NOT a political podcast—we focus solely on the implications for AI governance and the tech landscape in the U.S. and beyond. Whether you're an industry professional, policymaker, or tech enthusiast, this episode offers essential insights into the evolving world of AI regulation.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 18 Nov 2024 10:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Jeffery Recker, Bryan Ilg)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/ec997874-14a4-4f46-86d0-f9a8ee444848/lunchtime-20babling-20youtube-5.jpg" width="1280"/>
      <enclosure length="38115398" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/e8c1dbed-def4-41dc-8847-d811a38a66a9/audio/46268c5b-2073-4be3-a8cb-e4f25f28bd07/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>How Will a Trump Presidency Impact AI Regulation?</itunes:title>
      <itunes:author>Shea Brown, Jeffery Recker, Bryan Ilg</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/632059e4-b1f3-43bc-afee-75176387833a/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:36:54</itunes:duration>
      <itunes:summary>🎙️ Lunchtime BABLing Podcast: What Will a Trump Presidency Mean for AI Regulations?

In this thought-provoking episode, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg to explore the potential impact of a Trump presidency on the landscape of AI regulation. 🚨🤖

Key topics include:

Federal deregulation and the push for state-level AI governance.

The potential repeal of Biden&apos;s executive order on AI.

Implications for organizations navigating a fragmented compliance framework.

The role of global AI policies, such as the EU AI Act, in shaping U.S. corporate strategies.

How deregulation might affect innovation, litigation, and risk management in AI development.

This is NOT a political podcast—we focus solely on the implications for AI governance and the tech landscape in the U.S. and beyond. Whether you&apos;re an industry professional, policymaker, or tech enthusiast, this episode offers essential insights into the evolving world of AI regulation.</itunes:summary>
      <itunes:subtitle>🎙️ Lunchtime BABLing Podcast: What Will a Trump Presidency Mean for AI Regulations?

In this thought-provoking episode, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg to explore the potential impact of a Trump presidency on the landscape of AI regulation. 🚨🤖

Key topics include:

Federal deregulation and the push for state-level AI governance.

The potential repeal of Biden&apos;s executive order on AI.

Implications for organizations navigating a fragmented compliance framework.

The role of global AI policies, such as the EU AI Act, in shaping U.S. corporate strategies.

How deregulation might affect innovation, litigation, and risk management in AI development.

This is NOT a political podcast—we focus solely on the implications for AI governance and the tech landscape in the U.S. and beyond. Whether you&apos;re an industry professional, policymaker, or tech enthusiast, this episode offers essential insights into the evolving world of AI regulation.</itunes:subtitle>
      <itunes:keywords>ai governance, ai regulation, responsible ai, election, trump, ai, compliance, risk management</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>47</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9e7216a8-1e16-413d-ab9e-267afd0852ac</guid>
      <title>A BABL Deep Dive</title>
      <description><![CDATA[Welcome to a special Lunchtime BABLing episode, BABL Deep Dive, hosted by BABL AI CEO Dr. Shea Brown and Chief Sales Officer Bryan Ilg. This in-depth discussion explores the fundamentals and nuances of AI assurance—what it is, why it's crucial for modern enterprises, and how it works in practice.

Dr. Brown breaks down the concept of AI assurance, highlighting its role in mitigating risks, ensuring regulatory compliance, and building trust with stakeholders. Bryan Ilg shares key insights from his conversations with clients, addressing common questions and challenges that arise when organizations seek to audit and assure their AI systems.

This episode features a detailed presentation from a recent risk conference, offering a behind-the-scenes look at how BABL AI conducts independent AI audits and assurance engagements. If you're a current or prospective client, an executive curious about AI compliance, or someone exploring careers in AI governance, this episode is packed with valuable information on frameworks, criteria, and best practices for AI risk management.

Watch now to learn how AI assurance can protect your organization from potential pitfalls and enhance your reputation as a responsible, forward-thinking entity in the age of AI!

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 04 Nov 2024 07:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Bryan Ilg)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/5cb7c991-dfb9-419e-9218-8c47cd602f69/lunchtime-20babling-20youtube-5.jpg" width="1280"/>
      <enclosure length="51915156" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/1be3200f-b4f4-4ddb-a217-a6ffe37b4800/audio/60447203-fb7e-4fc1-a89d-f70e0e5d53b9/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>A BABL Deep Dive</itunes:title>
      <itunes:author>Shea Brown, Bryan Ilg</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/2440c85c-1581-444d-a092-a991a8d072cb/3000x3000/lunchtime-20babling-20logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:51:16</itunes:duration>
      <itunes:summary>Welcome to a special Lunchtime BABLing episode, BABL Deep Dive, hosted by BABL AI CEO Dr. Shea Brown and Chief Sales Officer Bryan Ilg. This in-depth discussion explores the fundamentals and nuances of AI assurance—what it is, why it&apos;s crucial for modern enterprises, and how it works in practice.

Dr. Brown breaks down the concept of AI assurance, highlighting its role in mitigating risks, ensuring regulatory compliance, and building trust with stakeholders. Bryan Ilg shares key insights from his conversations with clients, addressing common questions and challenges that arise when organizations seek to audit and assure their AI systems.

This episode features a detailed presentation from a recent risk conference, offering a behind-the-scenes look at how BABL AI conducts independent AI audits and assurance engagements. If you&apos;re a current or prospective client, an executive curious about AI compliance, or someone exploring careers in AI governance, this episode is packed with valuable information on frameworks, criteria, and best practices for AI risk management.

Watch now to learn how AI assurance can protect your organization from potential pitfalls and enhance your reputation as a responsible, forward-thinking entity in the age of AI!</itunes:summary>
      <itunes:subtitle>Welcome to a special Lunchtime BABLing episode, BABL Deep Dive, hosted by BABL AI CEO Dr. Shea Brown and Chief Sales Officer Bryan Ilg. This in-depth discussion explores the fundamentals and nuances of AI assurance—what it is, why it&apos;s crucial for modern enterprises, and how it works in practice.

Dr. Brown breaks down the concept of AI assurance, highlighting its role in mitigating risks, ensuring regulatory compliance, and building trust with stakeholders. Bryan Ilg shares key insights from his conversations with clients, addressing common questions and challenges that arise when organizations seek to audit and assure their AI systems.

This episode features a detailed presentation from a recent risk conference, offering a behind-the-scenes look at how BABL AI conducts independent AI audits and assurance engagements. If you&apos;re a current or prospective client, an executive curious about AI compliance, or someone exploring careers in AI governance, this episode is packed with valuable information on frameworks, criteria, and best practices for AI risk management.

Watch now to learn how AI assurance can protect your organization from potential pitfalls and enhance your reputation as a responsible, forward-thinking entity in the age of AI!</itunes:subtitle>
      <itunes:keywords>ai governance, ai auditing, ai regulation, responsible ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>46</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">954f6996-52da-469d-a48f-a9bb55b3f695</guid>
      <title>AI Literacy Requirements of the EU AI Act</title>
      <description><![CDATA[👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20". 


📚 Courses Mentioned:


1️⃣ AI Literacy Requirements Course: https://courses.babl.ai/p/ai-literacy-for-eu-ai-act-general-workforce

2️⃣ EU AI Act - Conformity Requirements for High-Risk AI Systems Course: https://courses.babl.ai/p/eu-ai-act-conformity-requirements-for-high-risk-ai-systems

3️⃣ EU AI Act - Quality Management System Certification: https://courses.babl.ai/p/eu-ai-act-quality-management-system-oversight-certification
 
4️⃣ BABL AI Course Catalog: https://babl.ai/courses/ 


🔗 Follow us for more: https://linktr.ee/babl.ai


In this episode of Lunchtime BABLing, CEO Dr. Shea Brown dives into the "AI Literacy Requirements of the EU AI Act," focusing on the upcoming compliance obligations set to take effect on February 2, 2025. Dr. Brown explains the significance of Article 4 and discusses what "AI literacy" means for companies that provide or deploy AI systems, offering practical insights into how organizations can meet these new regulatory requirements.

Throughout the episode, Dr. Brown covers:

AI literacy obligations for providers and deployers under the EU AI Act.

The importance of AI literacy in ensuring compliance.

An overview of BABL AI’s upcoming courses, including the AI Literacy Training for the general workforce, launching November 4.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 21 Oct 2024 06:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/0cc6eb07-7e6b-4064-96bf-6ce7fe9bf912/lunchtime-babling-youtube-45.jpg" width="1280"/>
      <enclosure length="21995966" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/299c6fa2-9d50-48ee-aa70-dfa452cba90b/audio/1eb4226f-3437-41ce-a7d8-959becf8a3cd/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>AI Literacy Requirements of the EU AI Act</itunes:title>
      <itunes:author>Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/69b3cde3-be30-4317-9408-01e94edbc49f/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:20:06</itunes:duration>
      <itunes:summary>👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code &quot;BABLING20&quot;. 


📚 Courses Mentioned:


1️⃣ AI Literacy Requirements Course: https://courses.babl.ai/p/ai-literacy-for-eu-ai-act-general-workforce

2️⃣ EU AI Act - Conformity Requirements for High-Risk AI Systems Course: https://courses.babl.ai/p/eu-ai-act-conformity-requirements-for-high-risk-ai-systems

3️⃣ EU AI Act - Quality Management System Certification: https://courses.babl.ai/p/eu-ai-act-quality-management-system-oversight-certification
 
4️⃣ BABL AI Course Catalog: https://babl.ai/courses/ 


🔗 Follow us for more: https://linktr.ee/babl.ai


In this episode of Lunchtime BABLing, CEO Dr. Shea Brown dives into the &quot;AI Literacy Requirements of the EU AI Act,&quot; focusing on the upcoming compliance obligations set to take effect on February 2, 2025. Dr. Brown explains the significance of Article 4 and discusses what &quot;AI literacy&quot; means for companies that provide or deploy AI systems, offering practical insights into how organizations can meet these new regulatory requirements.

Throughout the episode, Dr. Brown covers:

AI literacy obligations for providers and deployers under the EU AI Act.

The importance of AI literacy in ensuring compliance.

An overview of BABL AI’s upcoming courses, including the AI Literacy Training for the general workforce, launching November 4.</itunes:summary>
      <itunes:subtitle>👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code &quot;BABLING20&quot;. 


📚 Courses Mentioned:


1️⃣ AI Literacy Requirements Course: https://courses.babl.ai/p/ai-literacy-for-eu-ai-act-general-workforce

2️⃣ EU AI Act - Conformity Requirements for High-Risk AI Systems Course: https://courses.babl.ai/p/eu-ai-act-conformity-requirements-for-high-risk-ai-systems

3️⃣ EU AI Act - Quality Management System Certification: https://courses.babl.ai/p/eu-ai-act-quality-management-system-oversight-certification
 
4️⃣ BABL AI Course Catalog: https://babl.ai/courses/ 


🔗 Follow us for more: https://linktr.ee/babl.ai


In this episode of Lunchtime BABLing, CEO Dr. Shea Brown dives into the &quot;AI Literacy Requirements of the EU AI Act,&quot; focusing on the upcoming compliance obligations set to take effect on February 2, 2025. Dr. Brown explains the significance of Article 4 and discusses what &quot;AI literacy&quot; means for companies that provide or deploy AI systems, offering practical insights into how organizations can meet these new regulatory requirements.

Throughout the episode, Dr. Brown covers:

AI literacy obligations for providers and deployers under the EU AI Act.

The importance of AI literacy in ensuring compliance.

An overview of BABL AI’s upcoming courses, including the AI Literacy Training for the general workforce, launching November 4.</itunes:subtitle>
      <itunes:keywords>eu ai act, ai governance, education, ai literacy, artificial intelligence, ai ethics, responsible ai, legal, regulations, ai, law</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>45</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">74a4dc79-0fc6-4df9-8856-b5e3a6de6310</guid>
      <title>AI Frenzy: Will It Really Replace Our Jobs?</title>
      <description><![CDATA[In this episode of Lunchtime BABLing, hosted by Dr. Shea Brown, CEO of BABL AI, we're joined by frequent guest Jeffery Recker, Co-Founder and Chief Operating Officer of BABL AI. Together, they dive into an interesting question in the AI world today: Will AI really replace our jobs?

Drawing insights from a recent interview with MIT economist Daron Acemoglu, Shea and Jeffery discuss the projected economic impact of AI and what they believe the hype surrounding AI-driven job loss will actually look like. With only 5% of jobs expected to be heavily impacted by AI, is the AI revolution really what everyone thinks it is?

They explore themes such as the overcorrection in AI investment, the role of responsible AI governance, and how strategic implementation of AI can create competitive advantages for companies. Tune in for an honest and insightful conversation on what AI will mean for the future of work, the economy, and beyond.

If you enjoy this episode, don't forget to like and subscribe for more discussions on AI, ethics, and technology!

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 07 Oct 2024 05:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Jeffery Recker, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/8dc10090-0ee2-4160-8903-f4ad04dcc2f8/lunchtime-babling-youtube-2.jpg" width="1280"/>
      <enclosure length="18955313" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/467ce964-dffc-4d23-a0a4-ad97b70c6974/audio/dff172ce-7804-4038-9eda-33757b9e4fb2/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>AI Frenzy: Will It Really Replace Our Jobs?</itunes:title>
      <itunes:author>Jeffery Recker, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/0709a1e7-aaaa-41d8-9822-02ddd53f6e59/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:16:56</itunes:duration>
      <itunes:summary>In this episode of Lunchtime BABLing, hosted by Dr. Shea Brown, CEO of BABL AI, we&apos;re joined by frequent guest Jeffery Recker, Co-Founder and Chief Operating Officer of BABL AI. Together, they dive into an interesting question in the AI world today: Will AI really replace our jobs?

Drawing insights from a recent interview with MIT economist Daron Acemoglu, Shea and Jeffery discuss the projected economic impact of AI and what they believe the hype surrounding AI-driven job loss will actually look like. With only 5% of jobs expected to be heavily impacted by AI, is the AI revolution really what everyone thinks it is?

They explore themes such as the overcorrection in AI investment, the role of responsible AI governance, and how strategic implementation of AI can create competitive advantages for companies. Tune in for an honest and insightful conversation on what AI will mean for the future of work, the economy, and beyond.

If you enjoy this episode, don&apos;t forget to like and subscribe for more discussions on AI, ethics, and technology!</itunes:summary>
      <itunes:subtitle>In this episode of Lunchtime BABLing, hosted by Dr. Shea Brown, CEO of BABL AI, we&apos;re joined by frequent guest Jeffery Recker, Co-Founder and Chief Operating Officer of BABL AI. Together, they dive into an interesting question in the AI world today: Will AI really replace our jobs?

Drawing insights from a recent interview with MIT economist Daron Acemoglu, Shea and Jeffery discuss the projected economic impact of AI and what they believe the hype surrounding AI-driven job loss will actually look like. With only 5% of jobs expected to be heavily impacted by AI, is the AI revolution really what everyone thinks it is?

They explore themes such as the overcorrection in AI investment, the role of responsible AI governance, and how strategic implementation of AI can create competitive advantages for companies. Tune in for an honest and insightful conversation on what AI will mean for the future of work, the economy, and beyond.

If you enjoy this episode, don&apos;t forget to like and subscribe for more discussions on AI, ethics, and technology!</itunes:subtitle>
      <itunes:keywords>work, responsible ai, ai, ethical ai, jobs</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>44</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">606a2753-4a3e-4a14-bf74-11636059f289</guid>
      <title>How NIST Might Help Deloitte With the FTC</title>
      <description><![CDATA[Welcome back to another insightful episode of Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker dive into a fascinating discussion on how the NIST AI Risk Management Framework could play a crucial role in guiding companies like Deloitte through Federal Trade Commission (FTC) investigations.

In this episode, Shea and Jeffery discuss a recent complaint filed against Deloitte regarding its automated decision system for Medicaid eligibility in Texas, and how adherence to established frameworks could have mitigated the issues at hand.

📍 Topics discussed:

Deloitte’s Medicaid eligibility system in Texas

The role of the FTC and the NIST AI Risk Management Framework

How AI governance can safeguard against unintentional harm

Why proactive risk management is key, even for non-AI systems

What companies can learn from this case to improve compliance and oversight

Tune in now and stay ahead of the curve! 🔊✨

👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 23 Sep 2024 13:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Jeffery Recker)</author>
      <link>https://babl.ai</link>
      <enclosure length="33737273" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/277620fd-de12-48fe-b9ec-819ab2f933df/audio/b6787ce3-30e9-4493-809d-1225b2378f0e/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>How NIST Might Help Deloitte With the FTC</itunes:title>
      <itunes:author>Shea Brown, Jeffery Recker</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/dc723775-76a5-4db7-bab8-86a5190bbdab/4602273b-d11d-4ef3-bcab-e4fa1cea6e39/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:32:26</itunes:duration>
      <itunes:summary>Welcome back to another insightful episode of Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker dive into a fascinating discussion on how the NIST AI Risk Management Framework could play a crucial role in guiding companies like Deloitte through Federal Trade Commission (FTC) investigations.

In this episode, Shea and Jeffery discuss a recent complaint filed against Deloitte regarding its automated decision system for Medicaid eligibility in Texas, and how adherence to established frameworks could have mitigated the issues at hand.

📍 Topics discussed:

Deloitte’s Medicaid eligibility system in Texas

The role of the FTC and the NIST AI Risk Management Framework

How AI governance can safeguard against unintentional harm

Why proactive risk management is key, even for non-AI systems

What companies can learn from this case to improve compliance and oversight

Tune in now and stay ahead of the curve! 🔊✨

👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.</itunes:summary>
      <itunes:subtitle>Welcome back to another insightful episode of Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker dive into a fascinating discussion on how the NIST AI Risk Management Framework could play a crucial role in guiding companies like Deloitte through Federal Trade Commission (FTC) investigations.

In this episode, Shea and Jeffery discuss a recent complaint filed against Deloitte regarding its automated decision system for Medicaid eligibility in Texas, and how adherence to established frameworks could have mitigated the issues at hand.

📍 Topics discussed:

Deloitte’s Medicaid eligibility system in Texas

The role of the FTC and the NIST AI Risk Management Framework

How AI governance can safeguard against unintentional harm

Why proactive risk management is key, even for non-AI systems

What companies can learn from this case to improve compliance and oversight

Tune in now and stay ahead of the curve! 🔊✨

👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.</itunes:subtitle>
      <itunes:keywords>artificial intelligence, ftc, legal compliance, risk management, nist</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>43</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">85ef2a03-773e-4606-820e-b67e21a50484</guid>
      <title>&apos;The Regulatory Landscape for AI in Insurance&apos;</title>
      <description><![CDATA[<p>Check out the babl.ai website for more stuff on AI Governance and Responsible AI!</p>]]></description>
      <pubDate>Mon, 02 Sep 2024 06:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Jeffery Recker)</author>
      <link>https://babl.ai</link>
      <content:encoded><![CDATA[<p>Check out the babl.ai website for more stuff on AI Governance and Responsible AI!</p>]]></content:encoded>
      <enclosure length="35675349" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/30054f65-b75c-4983-8299-a1b304f2937d/audio/f14e62de-2c59-45b3-8c30-c3271531c9b3/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>&apos;The Regulatory Landscape for AI in Insurance&apos;</itunes:title>
      <itunes:author>Shea Brown, Jeffery Recker</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/dc723775-76a5-4db7-bab8-86a5190bbdab/7a724dea-6633-42aa-9248-d78f0d89f7ac/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:34:27</itunes:duration>
      <itunes:summary>Welcome back to another insightful episode of Lunchtime BABLing! In this latest episode, our CEO Shea Brown is joined by guest host Jeffery Recker to dive deep into the rapidly evolving regulatory landscape surrounding AI in the insurance industry. 

In this episode, Shea and Jeffery explore:

The Key Regulations: Discover the major regulations affecting AI in insurance including New York State Circular Letter No. 7, and Colorado Regulation 10-1-1. Learn how these laws are shaping the industry and what insurers need to be aware of. 📜💼

Risk Management &amp; Governance: Understand the essentials of integrating AI into existing risk management frameworks and the importance of bias testing and mitigation. 📊🔎

Third-Party Vendor Oversight: Learn about the increased due diligence required for third-party vendors and how insurance companies can ensure compliance with these new regulations. 🕵️‍♂️🔗

Transparency &amp; Documentation: Get to grips with the requirements for transparency in AI use and the critical documentation needed to stay compliant. 📚🔒

Shea and Jeffery provide practical advice and best practices for navigating this complex regulatory environment, offering valuable insights for insurance professionals and compliance officers. Whether you&apos;re new to AI regulations or looking to refine your compliance strategy, this episode is packed with actionable information.

Tune in now and stay ahead of the curve! 🔊✨

👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.
</itunes:summary>
      <itunes:subtitle>Welcome back to another insightful episode of Lunchtime BABLing! In this latest episode, our CEO Shea Brown is joined by guest host Jeffery Recker to dive deep into the rapidly evolving regulatory landscape surrounding AI in the insurance industry. 

In this episode, Shea and Jeffery explore:

The Key Regulations: Discover the major regulations affecting AI in insurance including New York State Circular Letter No. 7, and Colorado Regulation 10-1-1. Learn how these laws are shaping the industry and what insurers need to be aware of. 📜💼

Risk Management &amp; Governance: Understand the essentials of integrating AI into existing risk management frameworks and the importance of bias testing and mitigation. 📊🔎

Third-Party Vendor Oversight: Learn about the increased due diligence required for third-party vendors and how insurance companies can ensure compliance with these new regulations. 🕵️‍♂️🔗

Transparency &amp; Documentation: Get to grips with the requirements for transparency in AI use and the critical documentation needed to stay compliant. 📚🔒

Shea and Jeffery provide practical advice and best practices for navigating this complex regulatory environment, offering valuable insights for insurance professionals and compliance officers. Whether you&apos;re new to AI regulations or looking to refine your compliance strategy, this episode is packed with actionable information.

Tune in now and stay ahead of the curve! 🔊✨

👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.
</itunes:subtitle>
      <itunes:keywords>audit and assurance, insurance, artificial intelligence, regulations, ai regulations</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>42</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f67a9c66-08e3-4c60-97eb-58522b8ed349</guid>
      <title>Where to Get Started with the EU AI Act: Part Two</title>
      <description><![CDATA[In the second part of our in-depth discussion on the EU AI Act, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker continue to explore the essential steps organizations need to take to comply with this groundbreaking regulation. If you missed Part One, be sure to check it out, as this episode builds on the foundational insights shared there.

In this episode, titled "Where to Get Started with the EU AI Act: Part Two," Dr. Brown and Mr. Recker dive deeper into the practical aspects of compliance, including:

Documentation & Transparency: Understanding the extensive documentation and transparency measures required to demonstrate compliance and maintain up-to-date records.

Challenges for Different Organizations: A look at how compliance challenges differ for small and medium-sized enterprises compared to larger organizations, and what proactive steps can be taken.

Global Compliance Considerations: Discussing the merits of pursuing global compliance strategies and the implications of the EU AI Act on businesses operating outside the EU.

Enforcement & Penalties: Insight into how the EU AI Act will be enforced, the bodies responsible for oversight, and the significant penalties for non-compliance.

Balancing Innovation with Regulation: How the EU AI Act aims to foster innovation while ensuring that AI systems are human-centric and trustworthy.

Whether you're a startup navigating the complexities of AI governance or a large enterprise seeking to align with global standards, this episode offers valuable guidance on how to approach the EU AI Act and ensure your AI systems are compliant, trustworthy, and ready for the future.

🔗 Key Topics Discussed:

What documentation and transparency measures are required to demonstrate compliance? 

How can businesses effectively maintain and update these records?

How will the EU AI Act be enforced, and which bodies are responsible for its oversight and implementation?

What are the biggest challenges you foresee in complying with the EU AI Act?

What resources or support mechanisms are being provided to businesses to help them comply with the new regulations?

How does the EU AI Act balance the need for regulation with the need to foster innovation and competitiveness in the AI sector?

What are the penalties for non-compliance, and how will they be determined and applied?

What guidelines should entities follow to ensure their AI systems are human-centric and trustworthy?

What proactive measures can entities take to ensure their AI systems remain compliant as technology and regulations evolve?

How do you see the EU AI Act evolving in the future, and what additional measures or amendments might be necessary?


👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 12 Aug 2024 06:30:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Jeffery Recker, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/cc716a1a-9745-45da-92e1-d35140cb66d3/25-15.jpg" width="1280"/>
      <enclosure length="46927646" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/5177e37b-19fe-4c80-b633-cc4b2eff482d/audio/6dd6d278-f830-4494-bf59-290327139f72/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Where to Get Started with the EU AI Act: Part Two</itunes:title>
      <itunes:author>Jeffery Recker, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/12c27770-b170-49ea-8998-834a36cd6d5a/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:46:10</itunes:duration>
      <itunes:summary>In the second part of our in-depth discussion on the EU AI Act, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker continue to explore the essential steps organizations need to take to comply with this groundbreaking regulation. If you missed Part One, be sure to check it out, as this episode builds on the foundational insights shared there.

In this episode, titled &quot;Where to Get Started with the EU AI Act: Part Two,&quot; Dr. Brown and Mr. Recker dive deeper into the practical aspects of compliance, including:

Documentation &amp; Transparency: Understanding the extensive documentation and transparency measures required to demonstrate compliance and maintain up-to-date records.

Challenges for Different Organizations: A look at how compliance challenges differ for small and medium-sized enterprises compared to larger organizations, and what proactive steps can be taken.

Global Compliance Considerations: Discussing the merits of pursuing global compliance strategies and the implications of the EU AI Act on businesses operating outside the EU.

Enforcement &amp; Penalties: Insight into how the EU AI Act will be enforced, the bodies responsible for oversight, and the significant penalties for non-compliance.

Balancing Innovation with Regulation: How the EU AI Act aims to foster innovation while ensuring that AI systems are human-centric and trustworthy.

Whether you&apos;re a startup navigating the complexities of AI governance or a large enterprise seeking to align with global standards, this episode offers valuable guidance on how to approach the EU AI Act and ensure your AI systems are compliant, trustworthy, and ready for the future.

🔗 Key Topics Discussed:

What documentation and transparency measures are required to demonstrate compliance? 

How can businesses effectively maintain and update these records?

How will the EU AI Act be enforced, and which bodies are responsible for its oversight and implementation?

What are the biggest challenges you foresee in complying with the EU AI Act?

What resources or support mechanisms are being provided to businesses to help them comply with the new regulations?

How does the EU AI Act balance the need for regulation with the need to foster innovation and competitiveness in the AI sector?

What are the penalties for non-compliance, and how will they be determined and applied?

What guidelines should entities follow to ensure their AI systems are human-centric and trustworthy?

What proactive measures can entities take to ensure their AI systems remain compliant as technology and regulations evolve?

How do you see the EU AI Act evolving in the future, and what additional measures or amendments might be necessary?


👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.</itunes:summary>
      <itunes:subtitle>In the second part of our in-depth discussion on the EU AI Act, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker continue to explore the essential steps organizations need to take to comply with this groundbreaking regulation. If you missed Part One, be sure to check it out, as this episode builds on the foundational insights shared there.

In this episode, titled &quot;Where to Get Started with the EU AI Act: Part Two,&quot; Dr. Brown and Mr. Recker dive deeper into the practical aspects of compliance, including:

Documentation &amp; Transparency: Understanding the extensive documentation and transparency measures required to demonstrate compliance and maintain up-to-date records.

Challenges for Different Organizations: A look at how compliance challenges differ for small and medium-sized enterprises compared to larger organizations, and what proactive steps can be taken.

Global Compliance Considerations: Discussing the merits of pursuing global compliance strategies and the implications of the EU AI Act on businesses operating outside the EU.

Enforcement &amp; Penalties: Insight into how the EU AI Act will be enforced, the bodies responsible for oversight, and the significant penalties for non-compliance.

Balancing Innovation with Regulation: How the EU AI Act aims to foster innovation while ensuring that AI systems are human-centric and trustworthy.

Whether you&apos;re a startup navigating the complexities of AI governance or a large enterprise seeking to align with global standards, this episode offers valuable guidance on how to approach the EU AI Act and ensure your AI systems are compliant, trustworthy, and ready for the future.

🔗 Key Topics Discussed:

What documentation and transparency measures are required to demonstrate compliance? 

How can businesses effectively maintain and update these records?

How will the EU AI Act be enforced, and which bodies are responsible for its oversight and implementation?

What are the biggest challenges you foresee in complying with the EU AI Act?

What resources or support mechanisms are being provided to businesses to help them comply with the new regulations?

How does the EU AI Act balance the need for regulation with the need to foster innovation and competitiveness in the AI sector?

What are the penalties for non-compliance, and how will they be determined and applied?

What guidelines should entities follow to ensure their AI systems are human-centric and trustworthy?

What proactive measures can entities take to ensure their AI systems remain compliant as technology and regulations evolve?

How do you see the EU AI Act evolving in the future, and what additional measures or amendments might be necessary?


👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.</itunes:subtitle>
      <itunes:keywords>legal, legal compliance, ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>41</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">66990bab-b24f-446e-bf85-54e2cc4555d7</guid>
      <title>Where to Get Started with the EU AI Act: Part One</title>
      <description><![CDATA[In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker to kick off a deep dive into the EU AI Act. Titled "Where to Get Started with the EU AI Act: Part One," this episode is designed for organizations navigating the complexities of the new regulations.

With the EU AI Act officially in place, the discussion centers on what businesses and AI developers need to do to prepare. Dr. Brown and Mr. Recker cover crucial topics including the primary objectives of the Act, the specific aspects of AI systems that will be audited, and the high-risk AI systems requiring special attention under the new regulations.

The episode also tackles practical questions, such as how often audits should be conducted to ensure ongoing compliance and how much of the process can realistically be automated. Whether you're just starting out with compliance or looking to refine your approach, this episode offers valuable insights into aligning your AI practices with the requirements of the EU AI Act.

Don't miss this informative session to ensure your organization is ready for the changes ahead!

🔗 Key Topics Discussed:

What are the primary objectives of the EU AI Act, and how does it aim to regulate AI technologies within the EU? 

What impact will this have outside the EU?

What specific aspects of AI systems will need conformity assessments for compliance with the EU AI Act?

Are there any particular high-risk AI systems that require special attention under the new regulations?

How do you assess and manage the risks associated with AI systems?

What are the key provisions and requirements of the Act that businesses and AI developers need to be aware of? 

How are we ensuring that our AI systems comply with GDPR and other relevant data protection regulations?

How often should these conformity assessments be conducted to ensure ongoing compliance with the EU AI Act? 


📌 Stay tuned for Part Two where we continue this discussion with more in-depth analysis and practical tips!

👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.

#AI #EUAIACT #ArtificialIntelligence #Compliance #TechRegulation #AIAudit #LunchtimeBABLing #BABLAI

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 12 Aug 2024 06:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Jeffery Recker)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/e01a04f0-bf89-461c-a263-cfc77bdb5c0e/25-5.jpg" width="1280"/>
      <enclosure length="23177534" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/dd1d392e-b618-4d29-9a7d-8640deba62a6/audio/586fd5eb-fe57-4f6d-96ae-e6ad5c021a12/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Where to Get Started with the EU AI Act: Part One</itunes:title>
      <itunes:author>Shea Brown, Jeffery Recker</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/5a90446e-e689-4d93-87d6-4e129649b10a/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:21:26</itunes:duration>
      <itunes:summary>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker to kick off a deep dive into the EU AI Act. Titled &quot;Where to Get Started with the EU AI Act: Part One,&quot; this episode is designed for organizations navigating the complexities of the new regulations.

With the EU AI Act officially in place, the discussion centers on what businesses and AI developers need to do to prepare. Dr. Brown and Mr. Recker cover crucial topics including the primary objectives of the Act, the specific aspects of AI systems that will be audited, and the high-risk AI systems requiring special attention under the new regulations.

The episode also tackles practical questions, such as how often audits should be conducted to ensure ongoing compliance and how much of the process can realistically be automated. Whether you&apos;re just starting out with compliance or looking to refine your approach, this episode offers valuable insights into aligning your AI practices with the requirements of the EU AI Act.

Don&apos;t miss this informative session to ensure your organization is ready for the changes ahead!

🔗 Key Topics Discussed:

What are the primary objectives of the EU AI Act, and how does it aim to regulate AI technologies within the EU? 

What impact will this have outside the EU?

What specific aspects of AI systems will need conformity assessments for compliance with the EU AI Act?

Are there any particular high-risk AI systems that require special attention under the new regulations?

How do you assess and manage the risks associated with AI systems?

What are the key provisions and requirements of the Act that businesses and AI developers need to be aware of? 

How are we ensuring that our AI systems comply with GDPR and other relevant data protection regulations?

How often should these conformity assessments be conducted to ensure ongoing compliance with the EU AI Act? 


📌 Stay tuned for Part Two where we continue this discussion with more in-depth analysis and practical tips!

👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.

#AI #EUAIACT #ArtificialIntelligence #Compliance #TechRegulation #AIAudit #LunchtimeBABLing #BABLAI</itunes:summary>
      <itunes:subtitle>In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker to kick off a deep dive into the EU AI Act. Titled &quot;Where to Get Started with the EU AI Act: Part One,&quot; this episode is designed for organizations navigating the complexities of the new regulations.

With the EU AI Act officially in place, the discussion centers on what businesses and AI developers need to do to prepare. Dr. Brown and Mr. Recker cover crucial topics including the primary objectives of the Act, the specific aspects of AI systems that will be audited, and the high-risk AI systems requiring special attention under the new regulations.

The episode also tackles practical questions, such as how often audits should be conducted to ensure ongoing compliance and how much of the process can realistically be automated. Whether you&apos;re just starting out with compliance or looking to refine your approach, this episode offers valuable insights into aligning your AI practices with the requirements of the EU AI Act.

Don&apos;t miss this informative session to ensure your organization is ready for the changes ahead!

🔗 Key Topics Discussed:

What are the primary objectives of the EU AI Act, and how does it aim to regulate AI technologies within the EU? 

What impact will this have outside the EU?

What specific aspects of AI systems will need conformity assessments for compliance with the EU AI Act?

Are there any particular high-risk AI systems that require special attention under the new regulations?

How do you assess and manage the risks associated with AI systems?

What are the key provisions and requirements of the Act that businesses and AI developers need to be aware of? 

How are we ensuring that our AI systems comply with GDPR and other relevant data protection regulations?

How often should these conformity assessments be conducted to ensure ongoing compliance with the EU AI Act? 


📌 Stay tuned for Part Two where we continue this discussion with more in-depth analysis and practical tips!

👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.

#AI #EUAIACT #ArtificialIntelligence #Compliance #TechRegulation #AIAudit #LunchtimeBABLing #BABLAI</itunes:subtitle>
      <itunes:keywords>legal, legal compliance, ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>40</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d7e72a67-c106-4b80-b2a1-044a074fe91b</guid>
      <title>Building Trust in AI</title>
      <description><![CDATA[Welcome back to Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and Bryan Ilg delve into the crucial topic of "Building Trust in AI."

Episode Highlights:

Trust Survey Insights: Bryan shares findings from a recent PwC trust survey, highlighting the importance of trust between businesses and their stakeholders, including consumers, employees, and investors.

AI's Role in Trust: Discussion on how AI adoption impacts trust and the bottom line for organizations.

Internal vs. External Trust: Insights into the significance of building both internal (employee) and external (consumer) trust.

Responsible AI: Exploring the need for responsible AI strategies, data privacy, bias and fairness, and the importance of transparency and accountability.

Practical Steps: Tips for businesses on how to bridge the trust gap and effectively communicate their AI governance and responsible practices.

Join us as we explore how businesses can build a trustworthy AI ecosystem, ensuring ethical practices and fostering a strong relationship with all stakeholders.

If you enjoyed this episode, please like, subscribe, and share your thoughts in the comments below!

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 08 Jul 2024 06:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Bryan Ilg, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/5996f6b8-21b5-4f42-8bff-85986210f46a/25-4.jpg" width="1280"/>
      <enclosure length="31936704" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/6af4f302-34db-464f-8a67-f19375291087/audio/0c336a29-230c-4b93-88cd-aadb85bc9a32/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Building Trust in AI</itunes:title>
      <itunes:author>Bryan Ilg, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/1774a29d-ce26-4a66-a80a-4aca71357c4b/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:30:33</itunes:duration>
      <itunes:summary>Welcome back to Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and Bryan Ilg delve into the crucial topic of &quot;Building Trust in AI.&quot;

Episode Highlights:

Trust Survey Insights: Bryan shares findings from a recent PwC trust survey, highlighting the importance of trust between businesses and their stakeholders, including consumers, employees, and investors.

AI&apos;s Role in Trust: Discussion on how AI adoption impacts trust and the bottom line for organizations.

Internal vs. External Trust: Insights into the significance of building both internal (employee) and external (consumer) trust.

Responsible AI: Exploring the need for responsible AI strategies, data privacy, bias and fairness, and the importance of transparency and accountability.

Practical Steps: Tips for businesses on how to bridge the trust gap and effectively communicate their AI governance and responsible practices.

Join us as we explore how businesses can build a trustworthy AI ecosystem, ensuring ethical practices and fostering a strong relationship with all stakeholders.

If you enjoyed this episode, please like, subscribe, and share your thoughts in the comments below!</itunes:summary>
      <itunes:subtitle>Welcome back to Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and Bryan Ilg delve into the crucial topic of &quot;Building Trust in AI.&quot;

Episode Highlights:

Trust Survey Insights: Bryan shares findings from a recent PwC trust survey, highlighting the importance of trust between businesses and their stakeholders, including consumers, employees, and investors.

AI&apos;s Role in Trust: Discussion on how AI adoption impacts trust and the bottom line for organizations.

Internal vs. External Trust: Insights into the significance of building both internal (employee) and external (consumer) trust.

Responsible AI: Exploring the need for responsible AI strategies, data privacy, bias and fairness, and the importance of transparency and accountability.

Practical Steps: Tips for businesses on how to bridge the trust gap and effectively communicate their AI governance and responsible practices.

Join us as we explore how businesses can build a trustworthy AI ecosystem, ensuring ethical practices and fostering a strong relationship with all stakeholders.

If you enjoyed this episode, please like, subscribe, and share your thoughts in the comments below!</itunes:subtitle>
      <itunes:keywords>data trust, risk assessment, machine learning ethics, trust, responsible artificial intelligence, governance of artificial intelligence, building trust, branding, ai, babl ai, law</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>39</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">80c50d4b-af20-44e9-ae80-2deee7d67238</guid>
      <title>NYC AI Bias Law: One Year In and What to Consider</title>
      <description><![CDATA[Join us for an insightful episode of "Lunchtime BABLing" as BABL AI CEO Shea Brown and VP of Sales Bryan Ilg dive deep into New York City's Local Law 144, a year after its implementation. This law mandates the auditing of AI tools used in hiring for bias, ensuring fair and equitable practices in the workplace.

Episode Highlights:

Understanding Local Law 144: A breakdown of what the law entails, its goals, and its impact on employers and AI tool providers.

Year One Insights: What has been learned from the first year of compliance, including common challenges and successes.

Preparing for Year Two: Key considerations for organizations as they navigate the second year of compliance. Learn about the nuances of data sharing, audit requirements, and maintaining compliance.

Data Types and Testing: Detailed explanation of historical data vs. test data, and their roles in bias audits.

Practical Advice: Decision trees and strategic advice for employers on how to handle their data and audit needs effectively.

This episode is packed with valuable information for employers, HR professionals, and AI tool providers to ensure compliance with New York City's AI bias audit requirements. Stay informed and ahead of the curve with expert insights from Shea and Bryan.

🔗 Don't forget to like, subscribe, and share! If you're watching on YouTube, hit the like button and subscribe to stay updated with our latest episodes. If you're tuning in via podcast, thank you for listening! See you next week on Lunchtime BABLing.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 01 Jul 2024 12:02:36 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Bryan Ilg, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/952ac9a9-48ac-432a-88f0-580bc1c93d0a/25-15.jpg" width="1280"/>
      <enclosure length="22248828" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/d19a8240-ea26-4122-8acb-faeb9f6347c2/audio/f7fd9e5c-25ad-4da6-9525-599a9e587a3a/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>NYC AI Bias Law: One Year In and What to Consider</itunes:title>
      <itunes:author>Bryan Ilg, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/d1c5f4b0-3a40-47f7-a0ef-db2c0668b194/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:20:28</itunes:duration>
      <itunes:summary>Join us for an insightful episode of &quot;Lunchtime BABLing&quot; as BABL AI CEO Shea Brown and VP of Sales Bryan Ilg dive deep into New York City&apos;s Local Law 144, a year after its implementation. This law mandates the auditing of AI tools used in hiring for bias, ensuring fair and equitable practices in the workplace.

Episode Highlights:

Understanding Local Law 144: A breakdown of what the law entails, its goals, and its impact on employers and AI tool providers.

Year One Insights: What has been learned from the first year of compliance, including common challenges and successes.

Preparing for Year Two: Key considerations for organizations as they navigate the second year of compliance. Learn about the nuances of data sharing, audit requirements, and maintaining compliance.

Data Types and Testing: Detailed explanation of historical data vs. test data, and their roles in bias audits.

Practical Advice: Decision trees and strategic advice for employers on how to handle their data and audit needs effectively.

This episode is packed with valuable information for employers, HR professionals, and AI tool providers to ensure compliance with New York City&apos;s AI bias audit requirements. Stay informed and ahead of the curve with expert insights from Shea and Bryan.

🔗 Don&apos;t forget to like, subscribe, and share! If you&apos;re watching on YouTube, hit the like button and subscribe to stay updated with our latest episodes. If you&apos;re tuning in via podcast, thank you for listening! See you next week on Lunchtime BABLing.</itunes:summary>
      <itunes:subtitle>Join us for an insightful episode of &quot;Lunchtime BABLing&quot; as BABL AI CEO Shea Brown and VP of Sales Bryan Ilg dive deep into New York City&apos;s Local Law 144, a year after its implementation. This law mandates the auditing of AI tools used in hiring for bias, ensuring fair and equitable practices in the workplace.

Episode Highlights:

Understanding Local Law 144: A breakdown of what the law entails, its goals, and its impact on employers and AI tool providers.

Year One Insights: What has been learned from the first year of compliance, including common challenges and successes.

Preparing for Year Two: Key considerations for organizations as they navigate the second year of compliance. Learn about the nuances of data sharing, audit requirements, and maintaining compliance.

Data Types and Testing: Detailed explanation of historical data vs. test data, and their roles in bias audits.

Practical Advice: Decision trees and strategic advice for employers on how to handle their data and audit needs effectively.

This episode is packed with valuable information for employers, HR professionals, and AI tool providers to ensure compliance with New York City&apos;s AI bias audit requirements. Stay informed and ahead of the curve with expert insights from Shea and Bryan.

🔗 Don&apos;t forget to like, subscribe, and share! If you&apos;re watching on YouTube, hit the like button and subscribe to stay updated with our latest episodes. If you&apos;re tuning in via podcast, thank you for listening! See you next week on Lunchtime BABLing.</itunes:subtitle>
      <itunes:keywords>nyc bias law, machine learning ethics, responsible artificial intelligence, algorithm bias law, human resources, nyc algorithm hiring law, hr, ai, babl ai, risk management</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>38</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a150e68d-e1a8-4c56-a628-078913222404</guid>
      <title>Understanding Colorado&apos;s New AI Consumer Protection Law</title>
      <description><![CDATA[In this insightful episode of Lunchtime BABLing, BABL AI CEO Shea Brown and COO Jeffery Recker dive deep into Colorado's pioneering AI Consumer Protection Law. This legislation marks a significant move at the state level to regulate artificial intelligence, aiming to protect consumers from algorithmic discrimination. 

Shea and Jeffery discuss the implications for developers and deployers of AI systems, emphasizing the need for robust risk assessments, documentation, and compliance strategies. They explore how this law parallels the EU AI Act, focusing particularly on discrimination and the responsibilities laid out for both AI developers and deployers.

Listeners, don't miss the chance to enhance your understanding of AI governance with a special offer from BABL AI: Enjoy 20% off all courses using the coupon code "BABLING20." 

Explore our courses here: https://courses.babl.ai/ 

For a deeper dive into Colorado's AI law, check out our detailed blog post: "Colorado's Comprehensive AI Regulation: A Closer Look at the New AI Consumer Protection Law". Don't forget to subscribe to our newsletter at the bottom of the page for the latest updates and insights.

Link to the blog here: https://babl.ai/colorados-comprehensive-ai-regulation-a-closer-look-at-the-new-ai-consumer-protection-law/

Timestamps:
00:21 - Welcome and Introductions
00:43 - Overview of Colorado's AI Consumer Protection Law
01:52 - State vs. Federal Initiatives in AI Regulation
04:00 - Detailed Discussion on the Law's Provisions
07:02 - Risk Management and Compliance Techniques
09:51 - Importance of Proper Documentation
12:21 - Developer and Deployer Obligations
17:12 - Strategies for Public Disclosure and Risk Notification
20:48 - Annual Impact Assessments
22:44 - Transparency in AI Decision-Making
24:05 - Consumer Rights in AI Decisions
26:03 - Public Disclosure Requirements
28:36 - Final Thoughts and Takeaways

Remember to like, subscribe, and comment with your thoughts or questions. Your interaction helps us bring more valuable content to you!

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 03 Jun 2024 09:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Jeffery Recker)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/cd494cd3-efab-46e5-b4b4-ccb05e3002d9/25-14.jpg" width="1280"/>
      <enclosure length="32435748" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/e113ecca-6fb7-4e25-bce2-9086c463f093/audio/5745279b-9e70-4c1f-b03d-f271e19698fb/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Understanding Colorado&apos;s New AI Consumer Protection Law</itunes:title>
      <itunes:author>Shea Brown, Jeffery Recker</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/77fc0cfa-c9c5-4e0e-bd54-8d9762f10308/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:31:04</itunes:duration>
      <itunes:summary>In this insightful episode of Lunchtime BABLing, BABL AI CEO Shea Brown and COO Jeffery Recker dive deep into Colorado&apos;s pioneering AI Consumer Protection Law. This legislation marks a significant move at the state level to regulate artificial intelligence, aiming to protect consumers from algorithmic discrimination. 

Shea and Jeffery discuss the implications for developers and deployers of AI systems, emphasizing the need for robust risk assessments, documentation, and compliance strategies. They explore how this law parallels the EU AI Act, focusing particularly on discrimination and the responsibilities laid out for both AI developers and deployers.

Listeners, don&apos;t miss the chance to enhance your understanding of AI governance with a special offer from BABL AI: Enjoy 20% off all courses using the coupon code &quot;BABLING20.&quot; 

Explore our courses here: https://courses.babl.ai/ 

For a deeper dive into Colorado&apos;s AI law, check out our detailed blog post: &quot;Colorado&apos;s Comprehensive AI Regulation: A Closer Look at the New AI Consumer Protection Law&quot;. Don&apos;t forget to subscribe to our newsletter at the bottom of the page for the latest updates and insights.

Link to the blog here: https://babl.ai/colorados-comprehensive-ai-regulation-a-closer-look-at-the-new-ai-consumer-protection-law/

Timestamps:
00:21 - Welcome and Introductions
00:43 - Overview of Colorado&apos;s AI Consumer Protection Law
01:52 - State vs. Federal Initiatives in AI Regulation
04:00 - Detailed Discussion on the Law&apos;s Provisions
07:02 - Risk Management and Compliance Techniques
09:51 - Importance of Proper Documentation
12:21 - Developer and Deployer Obligations
17:12 - Strategies for Public Disclosure and Risk Notification
20:48 - Annual Impact Assessments
22:44 - Transparency in AI Decision-Making
24:05 - Consumer Rights in AI Decisions
26:03 - Public Disclosure Requirements
28:36 - Final Thoughts and Takeaways

Remember to like, subscribe, and comment with your thoughts or questions. Your interaction helps us bring more valuable content to you!</itunes:summary>
      <itunes:subtitle>In this insightful episode of Lunchtime BABLing, BABL AI CEO Shea Brown and COO Jeffery Recker dive deep into Colorado&apos;s pioneering AI Consumer Protection Law. This legislation marks a significant move at the state level to regulate artificial intelligence, aiming to protect consumers from algorithmic discrimination. 

Shea and Jeffery discuss the implications for developers and deployers of AI systems, emphasizing the need for robust risk assessments, documentation, and compliance strategies. They explore how this law parallels the EU AI Act, focusing particularly on discrimination and the responsibilities laid out for both AI developers and deployers.

Listeners, don&apos;t miss the chance to enhance your understanding of AI governance with a special offer from BABL AI: Enjoy 20% off all courses using the coupon code &quot;BABLING20.&quot; 

Explore our courses here: https://courses.babl.ai/ 

For a deeper dive into Colorado&apos;s AI law, check out our detailed blog post: &quot;Colorado&apos;s Comprehensive AI Regulation: A Closer Look at the New AI Consumer Protection Law&quot;. Don&apos;t forget to subscribe to our newsletter at the bottom of the page for the latest updates and insights.

Link to the blog here: https://babl.ai/colorados-comprehensive-ai-regulation-a-closer-look-at-the-new-ai-consumer-protection-law/

Timestamps:
00:21 - Welcome and Introductions
00:43 - Overview of Colorado&apos;s AI Consumer Protection Law
01:52 - State vs. Federal Initiatives in AI Regulation
04:00 - Detailed Discussion on the Law&apos;s Provisions
07:02 - Risk Management and Compliance Techniques
09:51 - Importance of Proper Documentation
12:21 - Developer and Deployer Obligations
17:12 - Strategies for Public Disclosure and Risk Notification
20:48 - Annual Impact Assessments
22:44 - Transparency in AI Decision-Making
24:05 - Consumer Rights in AI Decisions
26:03 - Public Disclosure Requirements
28:36 - Final Thoughts and Takeaways

Remember to like, subscribe, and comment with your thoughts or questions. Your interaction helps us bring more valuable content to you!</itunes:subtitle>
      <itunes:keywords>risk assessment, nyc ai hiring law, machine learning ethics, responsible artificial intelligence, algorithm bias law, regulations, legal compliance, ai, babl ai, colorado, law</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>37</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6786e80d-48b1-4f5d-858a-93ce88ae9ea7</guid>
      <title>NIST AI Risk Management Framework &amp; Generative AI Profile</title>
      <description><![CDATA[🎙️ Welcome back to Lunchtime BABLing, where we bring you the latest insights into the rapidly evolving world of AI ethics and governance! In this episode, BABL AI CEO Shea Brown and VP of Sales Bryan Ilg delve into the intricacies of the newly released NIST AI Risk Management Framework, with a specific focus on its implications for generative AI technologies.

🔍 The conversation kicks off with Shea and Bryan providing an overview of the NIST framework, highlighting its significance as a voluntary guideline for governing AI systems. They discuss how the framework's "govern, map, measure, manage" functions serve as a roadmap for organizations to navigate the complex landscape of AI risk management.

📑 Titled "NIST AI Risk Management Framework: Generative AI Profile," this episode delves deep into the companion document that focuses specifically on generative AI. Shea and Bryan explore the unique challenges posed by generative AI in terms of information integrity, human-AI interactions, and automation bias.

🧠 Shea provides valuable insights into the distinctions between AI, machine learning, and generative AI, shedding light on the nuanced risks associated with generative AI's ability to create content autonomously. The discussion delves into the implications of misinformation and disinformation campaigns fueled by generative AI technologies.

🔒 As the conversation unfolds, Shea and Bryan discuss the voluntary nature of the NIST framework and explore strategies for driving industry-wide adoption. They examine the role of certifications and standards in building trust and credibility in AI systems, emphasizing the importance of transparent and accountable AI governance practices.

🌐 Join Shea and Bryan as they navigate the complex terrain of AI risk management, offering valuable insights into the evolving landscape of AI ethics and governance. Whether you're a seasoned AI practitioner or simply curious about the ethical implications of AI technologies, this episode is packed with actionable takeaways and thought-provoking discussions.

🎧 Tune in now to stay informed and engaged with the latest advancements in AI ethics and governance, and join the conversation on responsible AI development and deployment!

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 06 May 2024 11:14:22 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Bryan Ilg, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/ad482c60-8e3d-44b2-a08a-04cdfce87811/25-14.jpg" width="1280"/>
      <enclosure length="44952790" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/9a805e2e-c04f-4745-a168-00be80862175/audio/f865cb9e-3325-4780-a63a-6c899ca850d2/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>NIST AI Risk Management Framework &amp; Generative AI Profile</itunes:title>
      <itunes:author>Bryan Ilg, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/b039f36c-1125-429d-ba3e-7a7f07e08269/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:44:07</itunes:duration>
      <itunes:summary>🎙️ Welcome back to Lunchtime BABLing, where we bring you the latest insights into the rapidly evolving world of AI ethics and governance! In this episode, BABL AI CEO Shea Brown and VP of Sales Bryan Ilg delve into the intricacies of the newly released NIST AI Risk Management Framework, with a specific focus on its implications for generative AI technologies.

🔍 The conversation kicks off with Shea and Bryan providing an overview of the NIST framework, highlighting its significance as a voluntary guideline for governing AI systems. They discuss how the framework&apos;s &quot;govern, map, measure, manage&quot; functions serve as a roadmap for organizations to navigate the complex landscape of AI risk management.

📑 Titled &quot;NIST AI Risk Management Framework: Generative AI Profile,&quot; this episode delves deep into the companion document that focuses specifically on generative AI. Shea and Bryan explore the unique challenges posed by generative AI in terms of information integrity, human-AI interactions, and automation bias.

🧠 Shea provides valuable insights into the distinctions between AI, machine learning, and generative AI, shedding light on the nuanced risks associated with generative AI&apos;s ability to create content autonomously. The discussion delves into the implications of misinformation and disinformation campaigns fueled by generative AI technologies.

🔒 As the conversation unfolds, Shea and Bryan discuss the voluntary nature of the NIST framework and explore strategies for driving industry-wide adoption. They examine the role of certifications and standards in building trust and credibility in AI systems, emphasizing the importance of transparent and accountable AI governance practices.

🌐 Join Shea and Bryan as they navigate the complex terrain of AI risk management, offering valuable insights into the evolving landscape of AI ethics and governance. Whether you&apos;re a seasoned AI practitioner or simply curious about the ethical implications of AI technologies, this episode is packed with actionable takeaways and thought-provoking discussions.

🎧 Tune in now to stay informed and engaged with the latest advancements in AI ethics and governance, and join the conversation on responsible AI development and deployment!</itunes:summary>
      <itunes:subtitle>🎙️ Welcome back to Lunchtime BABLing, where we bring you the latest insights into the rapidly evolving world of AI ethics and governance! In this episode, BABL AI CEO Shea Brown and VP of Sales Bryan Ilg delve into the intricacies of the newly released NIST AI Risk Management Framework, with a specific focus on its implications for generative AI technologies.

🔍 The conversation kicks off with Shea and Bryan providing an overview of the NIST framework, highlighting its significance as a voluntary guideline for governing AI systems. They discuss how the framework&apos;s &quot;govern, map, measure, manage&quot; functions serve as a roadmap for organizations to navigate the complex landscape of AI risk management.

📑 Titled &quot;NIST AI Risk Management Framework: Generative AI Profile,&quot; this episode delves deep into the companion document that focuses specifically on generative AI. Shea and Bryan explore the unique challenges posed by generative AI in terms of information integrity, human-AI interactions, and automation bias.

🧠 Shea provides valuable insights into the distinctions between AI, machine learning, and generative AI, shedding light on the nuanced risks associated with generative AI&apos;s ability to create content autonomously. The discussion delves into the implications of misinformation and disinformation campaigns fueled by generative AI technologies.

🔒 As the conversation unfolds, Shea and Bryan discuss the voluntary nature of the NIST framework and explore strategies for driving industry-wide adoption. They examine the role of certifications and standards in building trust and credibility in AI systems, emphasizing the importance of transparent and accountable AI governance practices.

🌐 Join Shea and Bryan as they navigate the complex terrain of AI risk management, offering valuable insights into the evolving landscape of AI ethics and governance. Whether you&apos;re a seasoned AI practitioner or simply curious about the ethical implications of AI technologies, this episode is packed with actionable takeaways and thought-provoking discussions.

🎧 Tune in now to stay informed and engaged with the latest advancements in AI ethics and governance, and join the conversation on responsible AI development and deployment!</itunes:subtitle>
      <itunes:keywords>ai risk management framework, risk assessment, nist ai rmf genai, nist ai risk management framework, genai, ai act, generative ai, ai, risk management, nist</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>36</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1e1e5806-39d4-4a24-b728-46a611d86039</guid>
      <title>The EU AI Act: Prohibited and High-Risk Systems and why you should care</title>
      <description><![CDATA[In this episode of the Lunchtime BABLing Podcast, Dr. Shea Brown, CEO of BABL AI, dives into the intricacies of the EU AI Act alongside Jeffery Recker, the COO of BABL AI. Titled "The EU AI Act: Prohibited and High-Risk Systems and why you should care," this conversation sheds light on the recent passing of the EU AI Act by the parliament and its implications for businesses and individuals alike.

Dr. Brown and Jeffery explore the journey of the EU AI Act, from its proposal to its finalization, outlining the key milestones and upcoming steps. They delve into the categorization of AI systems into prohibited and high-risk categories, discussing the significance of compliance and the potential impacts on businesses operating within the EU.

The conversation extends to the importance of understanding biases in AI algorithms, the complexities surrounding compliance, and the value of getting ahead of the curve in implementing necessary measures. Dr. Brown offers insights into how BABL AI assists organizations in navigating the regulatory landscape, emphasizing the importance of building trust and quality products in the AI ecosystem.

Key Topics Covered:

Overview of the EU AI Act and its journey to enactment
Differentiating prohibited and high-risk AI systems
Understanding biases in AI algorithms and their implications
Compliance challenges and the importance of early action
How BABL AI supports organizations in achieving compliance and building trust

Why You Should Tune In:

Whether you're a business operating within the EU or an individual interested in the impact of AI regulation, this episode provides valuable insights into the evolving regulatory landscape and its implications. Dr. Shea Brown and Jeffery Recker offer expert perspectives on navigating compliance challenges and the importance of ethical AI governance.

Don't Miss Out:
Subscribe to the Lunchtime BABLing Podcast for more thought-provoking discussions on AI, ethics, and governance. Stay tuned for upcoming episodes and join the conversation on critical topics shaping the future of technology. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 08 Apr 2024 13:10:02 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Jeffery Recker, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/f22b932a-088a-4fd3-9471-0a010cfcd92d/25-13.jpg" width="1280"/>
      <enclosure length="24110966" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/fd58e113-a744-498c-bff7-a6567933d5b4/audio/90f2d06c-11ae-49b8-ad58-56279a2269b7/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>The EU AI Act: Prohibited and High-Risk Systems and why you should care</itunes:title>
      <itunes:author>Jeffery Recker, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/08895b19-126d-4e3a-a7c8-2a2bfedc5312/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:25:06</itunes:duration>
      <itunes:summary>In this episode of the Lunchtime BABLing Podcast, Dr. Shea Brown, CEO of BABL AI, dives into the intricacies of the EU AI Act alongside Jeffery Recker, the COO of BABL AI. Titled &quot;The EU AI Act: Prohibited and High-Risk Systems and why you should care,&quot; this conversation sheds light on the recent passing of the EU AI Act by the parliament and its implications for businesses and individuals alike.

Dr. Brown and Jeffery explore the journey of the EU AI Act, from its proposal to its finalization, outlining the key milestones and upcoming steps. They delve into the categorization of AI systems into prohibited and high-risk categories, discussing the significance of compliance and the potential impacts on businesses operating within the EU.

The conversation extends to the importance of understanding biases in AI algorithms, the complexities surrounding compliance, and the value of getting ahead of the curve in implementing necessary measures. Dr. Brown offers insights into how BABL AI assists organizations in navigating the regulatory landscape, emphasizing the importance of building trust and quality products in the AI ecosystem.

Key Topics Covered:

Overview of the EU AI Act and its journey to enactment
Differentiating prohibited and high-risk AI systems
Understanding biases in AI algorithms and their implications
Compliance challenges and the importance of early action
How BABL AI supports organizations in achieving compliance and building trust

Why You Should Tune In:

Whether you&apos;re a business operating within the EU or an individual interested in the impact of AI regulation, this episode provides valuable insights into the evolving regulatory landscape and its implications. Dr. Shea Brown and Jeffery Recker offer expert perspectives on navigating compliance challenges and the importance of ethical AI governance.

Don&apos;t Miss Out:
Subscribe to the Lunchtime BABLing Podcast for more thought-provoking discussions on AI, ethics, and governance. Stay tuned for upcoming episodes and join the conversation on critical topics shaping the future of technology.</itunes:summary>
      <itunes:subtitle>In this episode of the Lunchtime BABLing Podcast, Dr. Shea Brown, CEO of BABL AI, dives into the intricacies of the EU AI Act alongside Jeffery Recker, the COO of BABL AI. Titled &quot;The EU AI Act: Prohibited and High-Risk Systems and why you should care,&quot; this conversation sheds light on the recent passing of the EU AI Act by the parliament and its implications for businesses and individuals alike.

Dr. Brown and Jeffery explore the journey of the EU AI Act, from its proposal to its finalization, outlining the key milestones and upcoming steps. They delve into the categorization of AI systems into prohibited and high-risk categories, discussing the significance of compliance and the potential impacts on businesses operating within the EU.

The conversation extends to the importance of understanding biases in AI algorithms, the complexities surrounding compliance, and the value of getting ahead of the curve in implementing necessary measures. Dr. Brown offers insights into how BABL AI assists organizations in navigating the regulatory landscape, emphasizing the importance of building trust and quality products in the AI ecosystem.

Key Topics Covered:

Overview of the EU AI Act and its journey to enactment
Differentiating prohibited and high-risk AI systems
Understanding biases in AI algorithms and their implications
Compliance challenges and the importance of early action
How BABL AI supports organizations in achieving compliance and building trust

Why You Should Tune In:

Whether you&apos;re a business operating within the EU or an individual interested in the impact of AI regulation, this episode provides valuable insights into the evolving regulatory landscape and its implications. Dr. Shea Brown and Jeffery Recker offer expert perspectives on navigating compliance challenges and the importance of ethical AI governance.

Don&apos;t Miss Out:
Subscribe to the Lunchtime BABLing Podcast for more thought-provoking discussions on AI, ethics, and governance. Stay tuned for upcoming episodes and join the conversation on critical topics shaping the future of technology.</itunes:subtitle>
      <itunes:keywords>risk assessment, consultant, machine learning ethics, ai ethics, legal, legal compliance, ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>35</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6033b509-3d37-4e49-96d3-c6adb956024b</guid>
      <title>Live Webinar Q&amp;A Recording: Finding Your Place in AI Ethics Consulting</title>
      <description><![CDATA[Join us in this latest episode of the Lunchtime BABLing Podcast, where Shea Brown, CEO of BABL AI, shares invaluable insights from a live webinar Q&A session on carving out a niche in AI Ethics Consulting. Dive deep into the world of AI ethics, algorithm auditing, and the journey of building a boutique firm focused on ethical risk, bias, and effective governance in AI technologies.

In This Episode:

Introduction to AI Ethics Consulting: Shea Brown introduces the session, providing a backdrop for his journey and the birth of BABL AI.

Journey of BABL AI: Discover the challenges and milestones in creating and growing an AI ethics consulting firm.

Insights from the Field: Shea shares his experiences and learnings from auditing algorithms for ethical risks and navigating the evolving landscape of AI ethics.

Live Q&A Highlights: Audience questions range from enrolling in AI ethics courses, the role of lawyers in AI audits, to the importance of philosophy in AI ethics consulting.

Advice on Career Pivoting: Shea offers advice on pivoting into AI ethics consulting, highlighting the importance of understanding regulatory requirements and finding one’s niche.

Auditing Process Explained: Get a high-level overview of the auditing process, including the distinction between assessments and formal audits.

Building a Career in AI Ethics: Discussion on the demand for AI ethics consulting, networking strategies, and the interdisciplinary nature of audit teams.

Key Takeaways:

The essential blend of skills needed in AI ethics consulting.
Insights into the challenges and opportunities in the field of AI ethics.
Practical advice for individuals looking to enter or pivot into AI ethics consulting.

Don’t miss this opportunity to learn from one of the pioneers in AI ethics consulting. Whether you’re new to the field or looking to deepen your knowledge, this episode is packed with insights, experiences, and advice to guide you on your journey.

Listeners can use coupon code "FREEFEB" to get our "Finding Your Place in AI Ethics Consulting" course for free. Link on our Website. 

Lunchtime BABLing listeners can use coupon code "BABLING" to save 20% on all our course offerings. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 18 Mar 2024 04:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/dfcf4df8-1e5d-49cd-8b4b-8022c46f62b5/25-12.jpg" width="1280"/>
      <enclosure length="59590557" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/19e3e184-9cd7-419b-b1f3-99a067c56090/audio/dc6104f0-ce35-483d-8fad-e8220e2050e2/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Live Webinar Q&amp;A Recording: Finding Your Place in AI Ethics Consulting</itunes:title>
      <itunes:author>Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/3b1227a7-4421-4024-89ac-b29b0f6ee9aa/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:59:22</itunes:duration>
      <itunes:summary>Join us in this latest episode of the Lunchtime BABLing Podcast, where Shea Brown, CEO of BABL AI, shares invaluable insights from a live webinar Q&amp;A session on carving out a niche in AI Ethics Consulting. Dive deep into the world of AI ethics, algorithm auditing, and the journey of building a boutique firm focused on ethical risk, bias, and effective governance in AI technologies.

In This Episode:

Introduction to AI Ethics Consulting: Shea Brown introduces the session, providing a backdrop for his journey and the birth of BABL AI.

Journey of BABL AI: Discover the challenges and milestones in creating and growing an AI ethics consulting firm.

Insights from the Field: Shea shares his experiences and learnings from auditing algorithms for ethical risks and navigating the evolving landscape of AI ethics.

Live Q&amp;A Highlights: Audience questions range from enrolling in AI ethics courses, the role of lawyers in AI audits, to the importance of philosophy in AI ethics consulting.

Advice on Career Pivoting: Shea offers advice on pivoting into AI ethics consulting, highlighting the importance of understanding regulatory requirements and finding one’s niche.

Auditing Process Explained: Get a high-level overview of the auditing process, including the distinction between assessments and formal audits.

Building a Career in AI Ethics: Discussion on the demand for AI ethics consulting, networking strategies, and the interdisciplinary nature of audit teams.

Key Takeaways:

The essential blend of skills needed in AI ethics consulting.
Insights into the challenges and opportunities in the field of AI ethics.
Practical advice for individuals looking to enter or pivot into AI ethics consulting.

Don’t miss this opportunity to learn from one of the pioneers in AI ethics consulting. Whether you’re new to the field or looking to deepen your knowledge, this episode is packed with insights, experiences, and advice to guide you on your journey.

Listeners can use coupon code &quot;FREEFEB&quot; to get our &quot;Finding Your Place in AI Ethics Consulting&quot; course for free. Link on our Website. 

Lunchtime BABLing listeners can use coupon code &quot;BABLING&quot; to save 20% on all our course offerings. </itunes:summary>
      <itunes:subtitle>Join us in this latest episode of the Lunchtime BABLing Podcast, where Shea Brown, CEO of BABL AI, shares invaluable insights from a live webinar Q&amp;A session on carving out a niche in AI Ethics Consulting. Dive deep into the world of AI ethics, algorithm auditing, and the journey of building a boutique firm focused on ethical risk, bias, and effective governance in AI technologies.

In This Episode:

Introduction to AI Ethics Consulting: Shea Brown introduces the session, providing a backdrop for his journey and the birth of BABL AI.

Journey of BABL AI: Discover the challenges and milestones in creating and growing an AI ethics consulting firm.

Insights from the Field: Shea shares his experiences and learnings from auditing algorithms for ethical risks and navigating the evolving landscape of AI ethics.

Live Q&amp;A Highlights: Audience questions range from enrolling in AI ethics courses, the role of lawyers in AI audits, to the importance of philosophy in AI ethics consulting.

Advice on Career Pivoting: Shea offers advice on pivoting into AI ethics consulting, highlighting the importance of understanding regulatory requirements and finding one’s niche.

Auditing Process Explained: Get a high-level overview of the auditing process, including the distinction between assessments and formal audits.

Building a Career in AI Ethics: Discussion on the demand for AI ethics consulting, networking strategies, and the interdisciplinary nature of audit teams.

Key Takeaways:

The essential blend of skills needed in AI ethics consulting.
Insights into the challenges and opportunities in the field of AI ethics.
Practical advice for individuals looking to enter or pivot into AI ethics consulting.

Don’t miss this opportunity to learn from one of the pioneers in AI ethics consulting. Whether you’re new to the field or looking to deepen your knowledge, this episode is packed with insights, experiences, and advice to guide you on your journey.

Listeners can use coupon code &quot;FREEFEB&quot; to get our &quot;Finding Your Place in AI Ethics Consulting&quot; course for free. Link on our Website. 

Lunchtime BABLing listeners can use coupon code &quot;BABLING&quot; to save 20% on all our course offerings. </itunes:subtitle>
      <itunes:keywords>online courses, consulting, consultant, babl ai, ethical ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>34</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d423c42e-0a36-4eb1-893f-34e67e250e23</guid>
      <title>NIST, ISO 42001, and BABL AI online courses</title>
      <description><![CDATA[Welcome to another enlightening episode of Lunchtime BABLing, proudly presented by BABL AI, where we dive deep into the evolving world of artificial intelligence and its governance. In this episode, Shea is thrilled to bring you a series of exciting updates and educational insights that are shaping the future of AI.

What's Inside:

1. BABL AI Joins the NIST Consortium: We kick off with the groundbreaking announcement that BABL AI has officially become a part of the prestigious NIST consortium. Discover what this means for the future of AI development and governance, and how this collaboration is set to elevate the standards of AI technologies and applications.

2. Introducing ISO 42001: Next, Shea delves into the newly announced ISO 42001, a comprehensive governance framework that promises to redefine AI governance. Join Shea as he explores the high-level components of this auditable framework, shedding light on its significance and the impact it's poised to have on the AI industry.

3. Aligning Education with Innovation: We also explore how BABL AI’s online courses are perfectly aligned with the NIST AI framework, ISO 42001, and other pivotal regulations and frameworks. Learn how our educational offerings are designed to empower you with the competencies needed to navigate and excel in the complex landscape of AI governance. Whether you're a professional looking to enhance your skills or a student eager to enter the AI field, our courses offer invaluable insights and knowledge that align with the latest standards and frameworks. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 19 Feb 2024 05:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Babl AI)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/845957a2-3c74-4c05-87a8-26dd64bc323b/25-11.jpg" width="1280"/>
      <enclosure length="12996467" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/c75d406a-98b4-4c91-92ca-4e7c4e4b1918/audio/5d264166-f417-4bc2-897a-03ca1e19484c/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>NIST, ISO 42001, and BABL AI online courses</itunes:title>
      <itunes:author>Babl AI</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/3ef553d8-0439-43db-88bd-ffe88340b92c/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:10:49</itunes:duration>
      <itunes:summary>Welcome to another enlightening episode of Lunchtime BABLing, proudly presented by BABL AI, where we dive deep into the evolving world of artificial intelligence and its governance. In this episode, Shea is thrilled to bring you a series of exciting updates and educational insights that are shaping the future of AI.

What&apos;s Inside:

1. BABL AI Joins the NIST Consortium: We kick off with the groundbreaking announcement that BABL AI has officially become a part of the prestigious NIST consortium. Discover what this means for the future of AI development and governance, and how this collaboration is set to elevate the standards of AI technologies and applications.

2. Introducing ISO 42001: Next, Shea delves into the newly announced ISO 42001, a comprehensive governance framework that promises to redefine AI governance. Join Shea as he explores the high-level components of this auditable framework, shedding light on its significance and the impact it&apos;s poised to have on the AI industry.

3. Aligning Education with Innovation: We also explore how BABL AI’s online courses are perfectly aligned with the NIST AI framework, ISO 42001, and other pivotal regulations and frameworks. Learn how our educational offerings are designed to empower you with the competencies needed to navigate and excel in the complex landscape of AI governance. Whether you&apos;re a professional looking to enhance your skills or a student eager to enter the AI field, our courses offer invaluable insights and knowledge that align with the latest standards and frameworks.</itunes:summary>
      <itunes:subtitle>Welcome to another enlightening episode of Lunchtime BABLing, proudly presented by BABL AI, where we dive deep into the evolving world of artificial intelligence and its governance. In this episode, Shea is thrilled to bring you a series of exciting updates and educational insights that are shaping the future of AI.

What&apos;s Inside:

1. BABL AI Joins the NIST Consortium: We kick off with the groundbreaking announcement that BABL AI has officially become a part of the prestigious NIST consortium. Discover what this means for the future of AI development and governance, and how this collaboration is set to elevate the standards of AI technologies and applications.

2. Introducing ISO 42001: Next, Shea delves into the newly announced ISO 42001, a comprehensive governance framework that promises to redefine AI governance. Join Shea as he explores the high-level components of this auditable framework, shedding light on its significance and the impact it&apos;s poised to have on the AI industry.

3. Aligning Education with Innovation: We also explore how BABL AI’s online courses are perfectly aligned with the NIST AI framework, ISO 42001, and other pivotal regulations and frameworks. Learn how our educational offerings are designed to empower you with the competencies needed to navigate and excel in the complex landscape of AI governance. Whether you&apos;re a professional looking to enhance your skills or a student eager to enter the AI field, our courses offer invaluable insights and knowledge that align with the latest standards and frameworks.</itunes:subtitle>
      <itunes:keywords>risk assessment, machine learning ethics, iso 42001, nist ai risk management framework, ai research, governance of artificial intelligence, iso, babl ai, risk management</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>33</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3ccf6039-2e7d-46a5-8d3a-ac5f03a2df44</guid>
      <title>Navigating Global AI Regulatory Compliance</title>
      <description><![CDATA[Sign up for Free to our online course "Finding your place in AI Ethics Consulting," during the month of February 2024. 

🌍 In this episode of Lunchtime BABLing, Shea dives deep into the complex world of AI regulatory compliance on a global scale. As the digital frontier expands, understanding and adhering to AI regulations becomes crucial for businesses and technologists alike. This episode offers a high-level guide on what to consider for AI regulatory compliance globally.

🔍 Highlights of This Episode:

EU AI Act: Your Compliance Compass - Discover how the European Union's AI Act serves as a holistic framework that can guide you through 95% of global AI compliance challenges. 

Common Ground in Global AI Laws - Shea explores the shared foundations across various AI regulations, highlighting the common themes in global regulatory requirements.

Proactive Mindset Shift - The importance of shifting corporate mindsets towards proactive risk management in AI cannot be overstated. We discuss why companies must start establishing Key Performance Indicators (KPIs) now to identify and mitigate risks before facing legal consequences. 

NIST's Role in Measuring AI Risk - Get insights into how the National Institute of Standards and Technology (NIST) is developing methodologies to quantify risk in AI systems, and what this means for the future of AI. 

🚀 Takeaway:

This episode is a must-listen for anyone involved in AI development, deployment, or governance. Whether you're a startup or a multinational corporation, aligning with global AI regulations is imperative. Lunchtime BABLing will provide you with the knowledge and strategies to navigate this complex landscape effectively, ensuring your AI solutions are not only innovative but also compliant and ethical.

👉 Subscribe to our channel for more insights into AI technology and its global impact. Don't forget to hit the like button if you find this episode valuable and share it with your network to spread the knowledge.

#AICompliance #EUAIAct #AIRegulation #RiskManagement #TechnologyPodcast #AIethics #GlobalAI #ArtificialIntelligence Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Mon, 05 Feb 2024 05:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/abcfb155-d407-4cdb-95ff-23d595e58d73/25-10.jpg" width="1280"/>
      <enclosure length="13878779" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/c99cab6f-896c-4b2a-8780-65de296fe3fa/audio/7b762e6c-50f4-4dcc-9101-2563da7d88a2/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Navigating Global AI Regulatory Compliance</itunes:title>
      <itunes:author>Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/191795db-22c5-452c-a583-c22b3dcb8b33/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:11:45</itunes:duration>
      <itunes:summary>Sign up for Free to our online course &quot;Finding your place in AI Ethics Consulting,&quot; during the month of February 2024. 

🌍 In this episode of Lunchtime BABLing, Shea dives deep into the complex world of AI regulatory compliance on a global scale. As the digital frontier expands, understanding and adhering to AI regulations becomes crucial for businesses and technologists alike. This episode offers a high-level guide on what to consider for AI regulatory compliance globally.

🔍 Highlights of This Episode:

EU AI Act: Your Compliance Compass - Discover how the European Union&apos;s AI Act serves as a holistic framework that can guide you through 95% of global AI compliance challenges. 

Common Ground in Global AI Laws - Shea explores the shared foundations across various AI regulations, highlighting the common themes in global regulatory requirements.

Proactive Mindset Shift - The importance of shifting corporate mindsets towards proactive risk management in AI cannot be overstated. We discuss why companies must start establishing Key Performance Indicators (KPIs) now to identify and mitigate risks before facing legal consequences. 

NIST&apos;s Role in Measuring AI Risk - Get insights into how the National Institute of Standards and Technology (NIST) is developing methodologies to quantify risk in AI systems, and what this means for the future of AI. 

🚀 Takeaway:

This episode is a must-listen for anyone involved in AI development, deployment, or governance. Whether you&apos;re a startup or a multinational corporation, aligning with global AI regulations is imperative. Lunchtime BABLing will provide you with the knowledge and strategies to navigate this complex landscape effectively, ensuring your AI solutions are not only innovative but also compliant and ethical.

👉 Subscribe to our channel for more insights into AI technology and its global impact. Don&apos;t forget to hit the like button if you find this episode valuable and share it with your network to spread the knowledge.

#AICompliance #EUAIAct #AIRegulation #RiskManagement #TechnologyPodcast #AIethics #GlobalAI #ArtificialIntelligence</itunes:summary>
      <itunes:subtitle>Sign up for free for our online course &quot;Finding your place in AI Ethics Consulting&quot; during the month of February 2024.

🌍 In this new episode of Lunchtime BABLing, Shea dives deep into the complex world of AI regulatory compliance on a global scale. As the digital frontier expands, understanding and adhering to AI regulations becomes crucial for businesses and technologists alike. This episode offers a high-level guide to what to consider for global AI regulatory compliance.

🔍 Highlights of This Episode:

EU AI Act: Your Compliance Compass - Discover how the European Union&apos;s AI Act serves as a holistic framework that can guide you through 95% of global AI compliance challenges. 

Common Ground in Global AI Laws - Shea explores the shared foundations across various AI regulations, highlighting the common themes in global regulatory requirements.

Proactive Mindset Shift - The importance of shifting corporate mindsets towards proactive risk management in AI cannot be overstated. We discuss why companies must start establishing Key Performance Indicators (KPIs) now to identify and mitigate risks before facing legal consequences. 

NIST&apos;s Role in Measuring AI Risk - Get insights into how the National Institute of Standards and Technology (NIST) is developing methodologies to quantify risk in AI systems, and what this means for the future of AI. 

🚀 Takeaway:

This episode is a must-listen for anyone involved in AI development, deployment, or governance. Whether you&apos;re a startup or a multinational corporation, aligning with global AI regulations is imperative. Lunchtime BABLing will provide you with the knowledge and strategies to navigate this complex landscape effectively, ensuring your AI solutions are not only innovative but also compliant and ethical.

👉 Subscribe to our channel for more insights into AI technology and its global impact. Don&apos;t forget to hit the like button if you find this episode valuable and share it with your network to spread the knowledge.

#AICompliance #EUAIAct #AIRegulation #RiskManagement #TechnologyPodcast #AIethics #GlobalAI #ArtificialIntelligence</itunes:subtitle>
      <itunes:keywords>ai governance, risk assessment, nist ai risk management framework, responsible artificial intelligence, governance, ai, babl ai, risk management, ethical ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>32</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8fa9eb7f-813d-4bc0-a936-ae0685e8c901</guid>
      <title>Exploring the socio-technical side of AI Ethics (Re-uploaded) | Lunchtime BABLing .07</title>
      <description><![CDATA[Sign up for free during the month of February for our online course "Finding your place in AI Ethics Consulting."

Link here: https://courses.babl.ai/p/finding-your-place-ai-ethics-consulting

Lunchtime BABLing listeners can save 20% off all our online courses by using coupon code "BABLING." 

Link here: https://babl.ai/courses/

🤖 Welcome to another engaging episode of Lunchtime BABLing! In this episode, we delve into the intricate world of AI ethics with a special focus on its socio-technical aspects.

🎙️ Join our host, Shea Brown, as they welcome a distinguished guest, Borhane Blili-Hamelin, PhD. Together, they explore some thought-provoking parallels between implementing AI ethics in industry and research environments. This discussion promises to shed light on the challenges and nuances of applying ethical principles in the fast-evolving field of artificial intelligence.

🔍 The conversation is not just theoretical but is grounded in ongoing research. Borhane Blili-Hamelin and Leif Hancox-Li's joint work, which was a highlight at the NeurIPS 2022 workshop, forms the basis of this insightful discussion. The workshop, held on November 28 and December 5, 2022, provided a platform for presenting their findings and perspectives.

Link to paper here: https://arxiv.org/abs/2209.00692

💡 Whether you're a professional in the field, a student, or just someone intrigued by the ethical dimensions of AI, this episode is a must-watch! So, grab your lunch, sit back, and let's BABL about the socio-technical side of AI ethics.

👍 Don't forget to like, share, and subscribe for more insightful episodes of Lunchtime BABLing. Your support helps us continue to bring fascinating topics and expert insights to your screen.

📢 We love hearing from you! Share your thoughts on this episode in the comments below. What are your views on AI ethics in industry versus research? Let's keep the conversation going!

🔔 Stay tuned for more episodes by hitting the bell icon to get notified about our latest uploads.

#LunchtimeBABLing #AIethics #SocioTechnical #ArtificialIntelligence #EthicsInAI #NeurIPS2022 #AIResearch #IndustryVsResearch #TechEthics Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Mon, 29 Jan 2024 05:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Borhane Blili-Hamelin, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/e833f623-44da-494d-9526-e792c757f388/25-9.jpg" width="1280"/>
      <enclosure length="48078417" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/eb19ff00-4d7c-48d2-89fa-cb9ef69f2552/audio/025bae0d-90aa-42f8-bb4e-6b33ffaf16bc/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Exploring the socio-technical side of AI Ethics (Re-uploaded) | Lunchtime BABLing .07</itunes:title>
      <itunes:author>Borhane Blili-Hamelin, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/03b3a1d5-cc56-4c60-afbe-0ae0dac3180a/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:50:04</itunes:duration>
      <itunes:summary>Sign up for free during the month of February for our online course &quot;Finding your place in AI Ethics Consulting.&quot;

Link here: https://courses.babl.ai/p/finding-your-place-ai-ethics-consulting

Lunchtime BABLing listeners can save 20% off all our online courses by using coupon code &quot;BABLING.&quot; 

Link here: https://babl.ai/courses/

🤖 Welcome to another engaging episode of Lunchtime BABLing! In this episode, we delve into the intricate world of AI ethics with a special focus on its socio-technical aspects.

🎙️ Join our host, Shea Brown, as they welcome a distinguished guest, Borhane Blili-Hamelin, PhD. Together, they explore some thought-provoking parallels between implementing AI ethics in industry and research environments. This discussion promises to shed light on the challenges and nuances of applying ethical principles in the fast-evolving field of artificial intelligence.

🔍 The conversation is not just theoretical but is grounded in ongoing research. Borhane Blili-Hamelin and Leif Hancox-Li&apos;s joint work, which was a highlight at the NeurIPS 2022 workshop, forms the basis of this insightful discussion. The workshop, held on November 28 and December 5, 2022, provided a platform for presenting their findings and perspectives.

Link to paper here: https://arxiv.org/abs/2209.00692

💡 Whether you&apos;re a professional in the field, a student, or just someone intrigued by the ethical dimensions of AI, this episode is a must-watch! So, grab your lunch, sit back, and let&apos;s BABL about the socio-technical side of AI ethics.

👍 Don&apos;t forget to like, share, and subscribe for more insightful episodes of Lunchtime BABLing. Your support helps us continue to bring fascinating topics and expert insights to your screen.

📢 We love hearing from you! Share your thoughts on this episode in the comments below. What are your views on AI ethics in industry versus research? Let&apos;s keep the conversation going!

🔔 Stay tuned for more episodes by hitting the bell icon to get notified about our latest uploads.

#LunchtimeBABLing #AIethics #SocioTechnical #ArtificialIntelligence #EthicsInAI #NeurIPS2022 #AIResearch #IndustryVsResearch #TechEthics</itunes:summary>
      <itunes:subtitle>Sign up for free during the month of February for our online course &quot;Finding your place in AI Ethics Consulting.&quot;

Link here: https://courses.babl.ai/p/finding-your-place-ai-ethics-consulting

Lunchtime BABLing listeners can save 20% off all our online courses by using coupon code &quot;BABLING.&quot; 

Link here: https://babl.ai/courses/

🤖 Welcome to another engaging episode of Lunchtime BABLing! In this episode, we delve into the intricate world of AI ethics with a special focus on its socio-technical aspects.

🎙️ Join our host, Shea Brown, as they welcome a distinguished guest, Borhane Blili-Hamelin, PhD. Together, they explore some thought-provoking parallels between implementing AI ethics in industry and research environments. This discussion promises to shed light on the challenges and nuances of applying ethical principles in the fast-evolving field of artificial intelligence.

🔍 The conversation is not just theoretical but is grounded in ongoing research. Borhane Blili-Hamelin and Leif Hancox-Li&apos;s joint work, which was a highlight at the NeurIPS 2022 workshop, forms the basis of this insightful discussion. The workshop, held on November 28 and December 5, 2022, provided a platform for presenting their findings and perspectives.

Link to paper here: https://arxiv.org/abs/2209.00692

💡 Whether you&apos;re a professional in the field, a student, or just someone intrigued by the ethical dimensions of AI, this episode is a must-watch! So, grab your lunch, sit back, and let&apos;s BABL about the socio-technical side of AI ethics.

👍 Don&apos;t forget to like, share, and subscribe for more insightful episodes of Lunchtime BABLing. Your support helps us continue to bring fascinating topics and expert insights to your screen.

📢 We love hearing from you! Share your thoughts on this episode in the comments below. What are your views on AI ethics in industry versus research? Let&apos;s keep the conversation going!

🔔 Stay tuned for more episodes by hitting the bell icon to get notified about our latest uploads.

#LunchtimeBABLing #AIethics #SocioTechnical #ArtificialIntelligence #EthicsInAI #NeurIPS2022 #AIResearch #IndustryVsResearch #TechEthics</itunes:subtitle>
      <itunes:keywords>ai governance, machine learning ethics, ai research, responsible artificial intelligence, algorithmic auditing, academia, research, ai, babl ai, ai audits</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>31</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e3062726-8c22-4a90-b612-48c264d859c1</guid>
      <title>What Companies Need To Consider When Implementing AI</title>
      <description><![CDATA[📺 About This Episode:
Join us on a riveting journey into the heart of AI integration in the business world in our latest episode of Lunchtime BABLing, where we talk about "What Things Should Companies Consider When Implementing AI." Host Shea Brown, CEO of BABL AI, teams up with Bryan Ilg, our VP of Sales, to unravel the complexities and opportunities presented by AI in the modern business landscape.

In this episode, we dive deep into the nuances of AI implementation, shedding light on often-overlooked aspects such as reputational and regulatory risks, and the paramount importance of trust and effective governance. Shea and Bryan offer their expert insights into the criticality of establishing robust AI governance frameworks and enhancing existing strategies to stay ahead in this rapidly evolving domain.

Whether you're a business owner, an executive, or simply intrigued by the ethical and practical dimensions of AI in business, this episode is packed with valuable insights and actionable advice.

🔗 Stay Connected:

Hit that like and subscribe button for more enlightening episodes.
Tune into our podcast across various platforms for your on-the-go AI insights.

👋 Thank you for joining us on Lunchtime BABLing as we explore the intricate dance of AI, business, and ethics. Can't wait to share more in our upcoming episodes! Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Mon, 22 Jan 2024 05:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Bryan Ilg, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/750a74d1-013f-43ca-ade5-c40077dfc775/25-8.jpg" width="1280"/>
      <enclosure length="34124722" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/58d6b9b3-448b-4e98-af18-607a2674ad4e/audio/150b1830-b849-48ed-8665-8c9f97042699/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>What Companies Need To Consider When Implementing AI</itunes:title>
      <itunes:author>Bryan Ilg, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/9dc185f6-2187-4f8c-ba1d-c91a0080bfe6/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:32:50</itunes:duration>
      <itunes:summary>📺 About This Episode:
Join us on a riveting journey into the heart of AI integration in the business world in our latest episode of Lunchtime BABLing, where we talk about &quot;What Things Should Companies Consider When Implementing AI.&quot; Host Shea Brown, CEO of BABL AI, teams up with Bryan Ilg, our VP of Sales, to unravel the complexities and opportunities presented by AI in the modern business landscape.

In this episode, we dive deep into the nuances of AI implementation, shedding light on often-overlooked aspects such as reputational and regulatory risks, and the paramount importance of trust and effective governance. Shea and Bryan offer their expert insights into the criticality of establishing robust AI governance frameworks and enhancing existing strategies to stay ahead in this rapidly evolving domain.

Whether you&apos;re a business owner, an executive, or simply intrigued by the ethical and practical dimensions of AI in business, this episode is packed with valuable insights and actionable advice.

🔗 Stay Connected:

Hit that like and subscribe button for more enlightening episodes.
Tune into our podcast across various platforms for your on-the-go AI insights.

👋 Thank you for joining us on Lunchtime BABLing as we explore the intricate dance of AI, business, and ethics. Can&apos;t wait to share more in our upcoming episodes!</itunes:summary>
      <itunes:subtitle>📺 About This Episode:
Join us on a riveting journey into the heart of AI integration in the business world in our latest episode of Lunchtime BABLing, where we talk about &quot;What Things Should Companies Consider When Implementing AI.&quot; Host Shea Brown, CEO of BABL AI, teams up with Bryan Ilg, our VP of Sales, to unravel the complexities and opportunities presented by AI in the modern business landscape.

In this episode, we dive deep into the nuances of AI implementation, shedding light on often-overlooked aspects such as reputational and regulatory risks, and the paramount importance of trust and effective governance. Shea and Bryan offer their expert insights into the criticality of establishing robust AI governance frameworks and enhancing existing strategies to stay ahead in this rapidly evolving domain.

Whether you&apos;re a business owner, an executive, or simply intrigued by the ethical and practical dimensions of AI in business, this episode is packed with valuable insights and actionable advice.

🔗 Stay Connected:

Hit that like and subscribe button for more enlightening episodes.
Tune into our podcast across various platforms for your on-the-go AI insights.

👋 Thank you for joining us on Lunchtime BABLing as we explore the intricate dance of AI, business, and ethics. Can&apos;t wait to share more in our upcoming episodes!</itunes:subtitle>
      <itunes:keywords>ai governance, risk assessment, ai, ethical ai, ai strategy</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>30</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">cc06120f-c9a7-41ee-b9f0-93f3fba19a1c</guid>
      <title>Key Takeaways of the EU AI Act | Lunchtime BABLing</title>
      <description><![CDATA[Description:
🔊 Welcome to another episode of Lunchtime BABLing, where we dive deep into the world of AI and its impact on our lives. In this episode, "Key Takeaways of the EU AI Act," join our hosts, Shea Brown, CEO of BABL AI, and Jeffery Recker, for a comprehensive analysis of the recently agreed-upon EU AI Act.

🌍 The EU AI Act is making waves as a global law that regulates the use of artificial intelligence. It's comparable to how GDPR reshaped privacy laws, and now the EU AI Act is set to do the same for AI. This episode breaks down the Act's implications, its potential effects on companies and individuals, and what the future of AI governance might look like under this new regulation.

🔑 Highlights of the episode include:

A detailed explanation of what the EU AI Act entails and why it's a game-changer.
Insights into who will be affected by the Act and how it extends beyond European borders.
The classification of AI systems under the Act based on risk levels, including prohibited and high-risk categories.
A look into the conformity assessment process and the compliance requirements for organizations.
Practical steps organizations should take to prepare for compliance.
🤔 Whether you're a tech enthusiast, an AI professional, or just curious about how AI laws impact our world, this episode offers valuable insights. Join us as we unravel the complexities of the EU AI Act and its far-reaching consequences.

📣 Do you have specific questions about the EU AI Act or AI governance? Leave your comments below or reach out to us! Don't forget to like and subscribe if you're watching on YouTube, or thank you for listening if you're tuning in via podcast. Stay informed and ahead in the world of AI with Lunchtime BABLing!

#EUAIAct #ArtificialIntelligence #AILaw #TechGovernance #BABLAI #Podcast Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Mon, 15 Jan 2024 05:55:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Jeffery Recker)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/3c0c7a6e-55c7-479d-8380-84e37cfa3309/25-5.jpg" width="1280"/>
      <enclosure length="24790150" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/255ecbb4-57bc-47a3-8dc9-fa9bf1555527/audio/dc574ec5-e5fa-40c3-a118-9a57eb650e4b/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>Key Takeaways of the EU AI Act | Lunchtime BABLing</itunes:title>
      <itunes:author>Shea Brown, Jeffery Recker</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/21966d58-3b71-4223-bad3-074a52b3cec5/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:25:49</itunes:duration>
      <itunes:summary>Description:
🔊 Welcome to another episode of Lunchtime BABLing, where we dive deep into the world of AI and its impact on our lives. In this episode, &quot;Key Takeaways of the EU AI Act,&quot; join our hosts, Shea Brown, CEO of BABL AI, and Jeffery Recker, for a comprehensive analysis of the recently agreed-upon EU AI Act.

🌍 The EU AI Act is making waves as a global law that regulates the use of artificial intelligence. It&apos;s comparable to how GDPR reshaped privacy laws, and now the EU AI Act is set to do the same for AI. This episode breaks down the Act&apos;s implications, its potential effects on companies and individuals, and what the future of AI governance might look like under this new regulation.

🔑 Highlights of the episode include:

A detailed explanation of what the EU AI Act entails and why it&apos;s a game-changer.
Insights into who will be affected by the Act and how it extends beyond European borders.
The classification of AI systems under the Act based on risk levels, including prohibited and high-risk categories.
A look into the conformity assessment process and the compliance requirements for organizations.
Practical steps organizations should take to prepare for compliance.
🤔 Whether you&apos;re a tech enthusiast, an AI professional, or just curious about how AI laws impact our world, this episode offers valuable insights. Join us as we unravel the complexities of the EU AI Act and its far-reaching consequences.

📣 Do you have specific questions about the EU AI Act or AI governance? Leave your comments below or reach out to us! Don&apos;t forget to like and subscribe if you&apos;re watching on YouTube, or thank you for listening if you&apos;re tuning in via podcast. Stay informed and ahead in the world of AI with Lunchtime BABLing!

#EUAIAct #ArtificialIntelligence #AILaw #TechGovernance #BABLAI #Podcast</itunes:summary>
      <itunes:subtitle>Description:
🔊 Welcome to another episode of Lunchtime BABLing, where we dive deep into the world of AI and its impact on our lives. In this episode, &quot;Key Takeaways of the EU AI Act,&quot; join our hosts, Shea Brown, CEO of BABL AI, and Jeffery Recker, for a comprehensive analysis of the recently agreed-upon EU AI Act.

🌍 The EU AI Act is making waves as a global law that regulates the use of artificial intelligence. It&apos;s comparable to how GDPR reshaped privacy laws, and now the EU AI Act is set to do the same for AI. This episode breaks down the Act&apos;s implications, its potential effects on companies and individuals, and what the future of AI governance might look like under this new regulation.

🔑 Highlights of the episode include:

A detailed explanation of what the EU AI Act entails and why it&apos;s a game-changer.
Insights into who will be affected by the Act and how it extends beyond European borders.
The classification of AI systems under the Act based on risk levels, including prohibited and high-risk categories.
A look into the conformity assessment process and the compliance requirements for organizations.
Practical steps organizations should take to prepare for compliance.
🤔 Whether you&apos;re a tech enthusiast, an AI professional, or just curious about how AI laws impact our world, this episode offers valuable insights. Join us as we unravel the complexities of the EU AI Act and its far-reaching consequences.

📣 Do you have specific questions about the EU AI Act or AI governance? Leave your comments below or reach out to us! Don&apos;t forget to like and subscribe if you&apos;re watching on YouTube, or thank you for listening if you&apos;re tuning in via podcast. Stay informed and ahead in the world of AI with Lunchtime BABLing!

#EUAIAct #ArtificialIntelligence #AILaw #TechGovernance #BABLAI #Podcast</itunes:subtitle>
      <itunes:keywords>ai governance, ai research, ai ethics, responsible artificial intelligence, ai act, babl, ai, babl ai, ethical ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>29</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7c24a821-1272-435b-a324-d862c364e279</guid>
      <title>028. International Association of Algorithmic Auditors</title>
      <description><![CDATA[Lunchtime BABLing listeners can use coupon code "BABLING" to save 20% off all BABL AI courses.

Courses: https://courses.babl.ai/p/ai-and-algorithm-auditor-certification

Description:
Welcome back to another episode of Lunchtime BABLing! In this episode, Shea Brown, CEO of BABL AI, joins forces with Jeffery Recker, our COO, to delve into an intriguing topic: the newly formed International Association of Algorithmic Auditors (IAAA).

Throughout the episode, Shea and Jeffery unpack the crucial role of the IAAA in shaping the landscape of AI and algorithm auditing. They discuss the association's goals, its distinction from existing organizations, and its significance in ensuring that algorithms are audited for compliance, ethical standards, and the prevention of potential harm to individuals and society.

The discussion also highlights the challenges and complexities involved in algorithmic auditing, the importance of professional conduct in the field, and the emerging regulations like the EU AI Act. Moreover, they explore the different types of algorithmic audits and the vital role of transparency in the auditing process.

As one of the key founding members of the IAAA, Shea provides insights into the formation of this organization, its mission, and the importance of fostering a professional community among AI and algorithm auditors.

Whether you're a professional in the field, someone interested in the ethical aspects of AI, or simply curious about the future of technology governance, this episode offers valuable perspectives and critical discussions on the evolving world of algorithmic auditing.

IAAA website: https://iaaa-algorithmicauditors.org 

🎙️ Listen to the full episode to understand the significance of algorithmic audits, the role of IAAA in shaping the industry, and the future of AI governance. Don't forget to like and subscribe for more insightful discussions on Lunchtime BABLing!

#AI #AlgorithmicAuditing #IAAA #TechnologyEthics #LunchtimeBABLing Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Mon, 8 Jan 2024 05:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Jeffery Recker, Shea Brown)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/af25e5d0-4ae9-47b6-bb44-f68ac205b054/25-4.jpg" width="1280"/>
      <enclosure length="30003773" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/89301321-dfd3-4287-9740-6ef3f708fe33/audio/0f01321c-fc67-4881-a2ae-903427be6698/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>028. International Association of Algorithmic Auditors</itunes:title>
      <itunes:author>Jeffery Recker, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/b7185e66-2e8d-48de-89f5-cc4167c5a1bb/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:31:15</itunes:duration>
      <itunes:summary>Lunchtime BABLing listeners can use coupon code &quot;BABLING&quot; to save 20% off all BABL AI courses.

Courses: https://courses.babl.ai/p/ai-and-algorithm-auditor-certification

Description:
Welcome back to another episode of Lunchtime BABLing! In this episode, Shea Brown, CEO of BABL AI, joins forces with Jeffery Recker, our COO, to delve into an intriguing topic: the newly formed International Association of Algorithmic Auditors (IAAA).

Throughout the episode, Shea and Jeffery unpack the crucial role of the IAAA in shaping the landscape of AI and algorithm auditing. They discuss the association&apos;s goals, its distinction from existing organizations, and its significance in ensuring that algorithms are audited for compliance, ethical standards, and the prevention of potential harm to individuals and society.

The discussion also highlights the challenges and complexities involved in algorithmic auditing, the importance of professional conduct in the field, and the emerging regulations like the EU AI Act. Moreover, they explore the different types of algorithmic audits and the vital role of transparency in the auditing process.

As one of the key founding members of the IAAA, Shea provides insights into the formation of this organization, its mission, and the importance of fostering a professional community among AI and algorithm auditors.

Whether you&apos;re a professional in the field, someone interested in the ethical aspects of AI, or simply curious about the future of technology governance, this episode offers valuable perspectives and critical discussions on the evolving world of algorithmic auditing.

IAAA website: https://iaaa-algorithmicauditors.org 

🎙️ Listen to the full episode to understand the significance of algorithmic audits, the role of IAAA in shaping the industry, and the future of AI governance. Don&apos;t forget to like and subscribe for more insightful discussions on Lunchtime BABLing!

#AI #AlgorithmicAuditing #IAAA #TechnologyEthics #LunchtimeBABLing</itunes:summary>
      <itunes:subtitle>Lunchtime BABLing listeners can use coupon code &quot;BABLING&quot; to save 20% off all BABL AI courses.

Courses: https://courses.babl.ai/p/ai-and-algorithm-auditor-certification

Description:
Welcome back to another episode of Lunchtime BABLing! In this episode, Shea Brown, CEO of BABL AI, joins forces with Jeffery Recker, our COO, to delve into an intriguing topic: the newly formed International Association of Algorithmic Auditors (IAAA).

Throughout the episode, Shea and Jeffery unpack the crucial role of the IAAA in shaping the landscape of AI and algorithm auditing. They discuss the association&apos;s goals, its distinction from existing organizations, and its significance in ensuring that algorithms are audited for compliance, ethical standards, and the prevention of potential harm to individuals and society.

The discussion also highlights the challenges and complexities involved in algorithmic auditing, the importance of professional conduct in the field, and the emerging regulations like the EU AI Act. Moreover, they explore the different types of algorithmic audits and the vital role of transparency in the auditing process.

As one of the key founding members of the IAAA, Shea provides insights into the formation of this organization, its mission, and the importance of fostering a professional community among AI and algorithm auditors.

Whether you&apos;re a professional in the field, someone interested in the ethical aspects of AI, or simply curious about the future of technology governance, this episode offers valuable perspectives and critical discussions on the evolving world of algorithmic auditing.

IAAA website: https://iaaa-algorithmicauditors.org 

🎙️ Listen to the full episode to understand the significance of algorithmic audits, the role of IAAA in shaping the industry, and the future of AI governance. Don&apos;t forget to like and subscribe for more insightful discussions on Lunchtime BABLing!

#AI #AlgorithmicAuditing #IAAA #TechnologyEthics #LunchtimeBABLing</itunes:subtitle>
      <itunes:keywords>ai governance, regulation, machine learning ethics, ai systems, international association of algorithmic auditors, algorithmic bias, ai research, ai ethics, responsible artificial intelligence, responsible ai, machine learning, regulations, ai audit, ai regulations, ai consulting, iaaa, machine learning algorithms, babl, algorithmic auditing, ai compliance, ai, babl ai, ethical ai, ai audits, auditing ai, law</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>28</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">31ffee06-3cd4-4e4c-a290-3b0f550cbde5</guid>
      <title>027. Understanding Fundamental Rights Impact Assessments in the EU AI Act</title>
      <description><![CDATA[Understanding the EU AI Act: Fundamental Rights Impact Assessments Explained

Description:
Join us in this eye-opening episode of the Lunchtime BABLing Podcast where Shea Brown, our host and the CEO of BABL AI, teams up with Jeffery Recker, our COO, to delve deep into the recent developments in AI regulation, particularly focusing on the EU AI Act. This episode, "Understanding Fundamental Rights Impact Assessments in the EU AI Act," is a must-listen for anyone interested in the intersection of AI, regulation, and human rights.

Key Discussion Points:

Introduction to the EU AI Act: Gain insights into the EU AI Act's passing and its significance in shaping the future of AI regulation.
Role of Fundamental Rights Impact Assessments: Understand what these assessments are, their importance, and how they differ from traditional impact assessments.

Impact on Businesses and AI Deployers: Learn about the new obligations for companies, especially those deploying high-risk AI systems.
Practical Steps for Compliance: Shea Brown breaks down complex regulatory requirements into actionable steps for businesses of all sizes.
Future of AI and Trust: Discover how compliance with these regulations can build trust and pave the way for responsible AI innovation.
Episode Highlights:

Expert Insights: Jeffery Recker shares his firsthand experience with the increasing interest in AI regulations and the challenges faced by businesses.
Detailed Breakdown: Shea Brown offers a comprehensive analysis of the Fundamental Rights Impact Assessments, their implications, and the overall impact of the EU AI Act on the AI landscape.
Interactive Discussions: Engaging conversation between Shea and Jeffery, providing a nuanced understanding of the subject. Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Mon, 18 Dec 2023 05:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Jeffery Recker)</author>
      <link>https://babl.ai</link>
      <media:thumbnail height="720" url="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/1f47f946-edb9-454e-bbfa-083c2c400183/25-3.jpg" width="1280"/>
      <enclosure length="36687776" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/45d36a80-f15c-4910-8e85-6b2c2a902435/audio/99cdd531-061a-46d2-ab35-a87d91911b0e/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>027. Understanding Fundamental Rights Impact Assessments in the EU AI Act</itunes:title>
      <itunes:author>Shea Brown, Jeffery Recker</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/a90da2ae-290c-4b52-bf37-a326a2d843d4/3000x3000/lunchtime-babling-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:38:12</itunes:duration>
      <itunes:summary>Understanding the EU AI Act: Fundamental Rights Impact Assessments Explained

Description:
Join us in this eye-opening episode of the Lunchtime BABLing Podcast where Shea Brown, our host and the CEO of BABL AI, teams up with Jeffery Recker, our COO, to delve deep into the recent developments in AI regulation, particularly focusing on the EU AI Act. This episode, &quot;Understanding Fundamental Rights Impact Assessments in the EU AI Act,&quot; is a must-listen for anyone interested in the intersection of AI, regulation, and human rights.

Key Discussion Points:

Introduction to the EU AI Act: Gain insights into the EU AI Act&apos;s passing and its significance in shaping the future of AI regulation.
Role of Fundamental Rights Impact Assessments: Understand what these assessments are, their importance, and how they differ from traditional impact assessments.

Impact on Businesses and AI Deployers: Learn about the new obligations for companies, especially those deploying high-risk AI systems.
Practical Steps for Compliance: Shea Brown breaks down complex regulatory requirements into actionable steps for businesses of all sizes.
Future of AI and Trust: Discover how compliance with these regulations can build trust and pave the way for responsible AI innovation.
Episode Highlights:

Expert Insights: Jeffery Recker shares his firsthand experience with the increasing interest in AI regulations and the challenges faced by businesses.
Detailed Breakdown: Shea Brown offers a comprehensive analysis of the Fundamental Rights Impact Assessments, their implications, and the overall impact of the EU AI Act on the AI landscape.
Interactive Discussions: Engaging conversation between Shea and Jeffery, providing a nuanced understanding of the subject.</itunes:summary>
      <itunes:subtitle>Understanding the EU AI Act: Fundamental Rights Impact Assessments Explained

Description:
Join us in this eye-opening episode of the Lunchtime BABLing Podcast where Shea Brown, our host and the CEO of BABL AI, teams up with Jeffery Recker, our COO, to delve deep into the recent developments in AI regulation, particularly focusing on the EU AI Act. This episode, &quot;Understanding Fundamental Rights Impact Assessments in the EU AI Act,&quot; is a must-listen for anyone interested in the intersection of AI, regulation, and human rights.

Key Discussion Points:

Introduction to the EU AI Act: Gain insights into the EU AI Act&apos;s passing and its significance in shaping the future of AI regulation.
Role of Fundamental Rights Impact Assessments: Understand what these assessments are, their importance, and how they differ from traditional impact assessments.

Impact on Businesses and AI Deployers: Learn about the new obligations for companies, especially those deploying high-risk AI systems.
Practical Steps for Compliance: Shea Brown breaks down complex regulatory requirements into actionable steps for businesses of all sizes.
Future of AI and Trust: Discover how compliance with these regulations can build trust and pave the way for responsible AI innovation.
Episode Highlights:

Expert Insights: Jeffery Recker shares his firsthand experience with the increasing interest in AI regulations and the challenges faced by businesses.
Detailed Breakdown: Shea Brown offers a comprehensive analysis of the Fundamental Rights Impact Assessments, their implications, and the overall impact of the EU AI Act on the AI landscape.
Interactive Discussions: Engaging conversation between Shea and Jeffery, providing a nuanced understanding of the subject.</itunes:subtitle>
      <itunes:keywords>eu, eu ai act, ai governance, ai auditing, risk assessment, ai ethics, responsible ai, fundamental rights, fundamental rights impact assessment, ai audit, impact assessment, ai, risk management, risk and impact assessment, european union</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>27</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">bbb15a98-b82b-4e91-94a7-16f776ba35fa</guid>
      <title>026. National Conference on AI Law, Ethics, and Compliance</title>
      <description><![CDATA[🔹 New Episode: National Conference on AI Law, Ethics, and Compliance

In this latest installment of Lunchtime BABLing, Shea unpacks the developments from a major conference in Washington D.C., focusing on AI law, ethics, and compliance. He shares valuable insights from their workshop and interactions with legal experts in the field of AI governance.

Key Discussions:

-Understanding AI and the risks involved.
-Governance frameworks for AI deployment.
-The implications of the recent U.S. Executive Order on AI.
-Global initiatives for AI safety and governance.

Industry Spotlight:

-The surge of generative AI in corporate strategy.
-The evolving landscape of AI policy, privacy concerns, and intellectual property.

Engage with Us:

Lunchtime BABLing viewers/listeners can use the coupon code below to receive 20% off all our online courses:

Coupon Code: “BABLING”

Link to the full AI and Algorithm Auditing Certificate Program is here:

https://courses.babl.ai/p/ai-and-algorithm-auditor-certification Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Mon, 6 Nov 2023 12:09:53 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="13069216" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/1bdf6d7c-6120-408a-86b3-bd3d6550cec9/audio/c449ca3b-8b23-47a7-8a0f-54616ee7e84c/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>026. National Conference on AI Law, Ethics, and Compliance</itunes:title>
      <itunes:author>Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/34e7e2dd-eee2-410b-a930-3ca686589c27/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:10:52</itunes:duration>
      <itunes:summary>🔹 New Episode: National Conference on AI Law, Ethics, and Compliance

In this latest installment of Lunchtime BABLing, Shea unpacks the developments from a major conference in Washington D.C., focusing on AI law, ethics, and compliance. He shares valuable insights from their workshop and interactions with legal experts in the field of AI governance.

Key Discussions:

-Understanding AI and the risks involved.
-Governance frameworks for AI deployment.
-The implications of the recent U.S. Executive Order on AI.
-Global initiatives for AI safety and governance.

Industry Spotlight:

-The surge of generative AI in corporate strategy.
-The evolving landscape of AI policy, privacy concerns, and intellectual property.

Engage with Us:

Lunchtime BABLing viewers/listeners can use the coupon code below to receive 20% off all our online courses:

Coupon Code: “BABLING”

Link to the full AI and Algorithm Auditing Certificate Program is here:

https://courses.babl.ai/p/ai-and-algorithm-auditor-certification</itunes:summary>
      <itunes:subtitle>🔹 New Episode: National Conference on AI Law, Ethics, and Compliance

In this latest installment of Lunchtime BABLing, Shea unpacks the developments from a major conference in Washington D.C., focusing on AI law, ethics, and compliance. He shares valuable insights from their workshop and interactions with legal experts in the field of AI governance.

Key Discussions:

-Understanding AI and the risks involved.
-Governance frameworks for AI deployment.
-The implications of the recent U.S. Executive Order on AI.
-Global initiatives for AI safety and governance.

Industry Spotlight:

-The surge of generative AI in corporate strategy.
-The evolving landscape of AI policy, privacy concerns, and intellectual property.

Engage with Us:

Lunchtime BABLing viewers/listeners can use the coupon code below to receive 20% off all our online courses:

Coupon Code: “BABLING”

Link to the full AI and Algorithm Auditing Certificate Program is here:

https://courses.babl.ai/p/ai-and-algorithm-auditor-certification</itunes:subtitle>
      <itunes:keywords>ai auditing, ai law, ai ethics, responsible ai, babl, ai compliance, ai, babl ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>26</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">78e2132c-2cac-44fc-9bf1-6bf0c26aa2c0</guid>
      <title>025. AI and Algorithm Auditing Certificate</title>
      <description><![CDATA[Lunchtime BABLing is back with a new season! 

In this episode, Shea briefly talks about what to expect in the upcoming weeks for Lunchtime BABLing, as well as diving into some detail about our AI and Algorithm Auditing Certification Program. 

Lunchtime BABLing viewers/listeners can use the coupon code below to receive 20% off all our online courses: 

Coupon Code: "BABLING"

Link to the full AI and Algorithm Auditing Certificate Program is here: 

https://courses.babl.ai/p/ai-and-algorithm-auditor-certification

For more information about BABL AI and our services, as well as the latest news in AI Auditing and AI Governance, check out our website: 

https://babl.ai/ Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Tue, 24 Oct 2023 14:49:37 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="10386319" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/a8a3facf-a063-402e-9e4f-9c159f43ff48/audio/a3b0af83-5fd7-4e78-be78-ff1369b0495b/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>025. AI and Algorithm Auditing Certificate</itunes:title>
      <itunes:author>Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/d920b815-3615-4537-be35-62c8bfaa8c88/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:08:06</itunes:duration>
      <itunes:summary>Lunchtime BABLing is back with a new season! 

In this episode, Shea briefly talks about what to expect in the upcoming weeks for Lunchtime BABLing, as well as diving into some detail about our AI and Algorithm Auditing Certification Program. 

Lunchtime BABLing viewers/listeners can use the coupon code below to receive 20% off all our online courses: 

Coupon Code: &quot;BABLING&quot;

Link to the full AI and Algorithm Auditing Certificate Program is here: 

https://courses.babl.ai/p/ai-and-algorithm-auditor-certification

For more information about BABL AI and our services, as well as the latest news in AI Auditing and AI Governance, check out our website: 

https://babl.ai/</itunes:summary>
      <itunes:subtitle>Lunchtime BABLing is back with a new season! 

In this episode, Shea briefly talks about what to expect in the upcoming weeks for Lunchtime BABLing, as well as diving into some detail about our AI and Algorithm Auditing Certification Program. 

Lunchtime BABLing viewers/listeners can use the coupon code below to receive 20% off all our online courses: 

Coupon Code: &quot;BABLING&quot;

Link to the full AI and Algorithm Auditing Certificate Program is here: 

https://courses.babl.ai/p/ai-and-algorithm-auditor-certification

For more information about BABL AI and our services, as well as the latest news in AI Auditing and AI Governance, check out our website: 

https://babl.ai/</itunes:subtitle>
      <itunes:keywords>learning, eu ai act, online courses, audit and assurance, education, ai auditing, llm, online education, legal, online training, ai audit, babl, audit assurance, algorithm, auditing, legal compliance, algorithms, ai, babl ai, risk management, upskilling, training, digital services act, algorithm auditing, jobs, ml</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>25</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">32aec036-9dc1-41a3-a06d-7ab62f7827f3</guid>
      <title>024. Interview with Khoa Lam on AI Auditing</title>
      <description><![CDATA[On this week's Lunchtime BABLing, Shea talks with BABL AI auditor and technical expert, Khoa Lam. 

They discuss a wide range of topics including: 

1: How Khoa got into the field of Responsible AI 
2: His work at AI Incident Database
3: His thoughts on generative AI and large language models 
4: The technical aspects of AI and Algorithmic Auditing 

Khoa Lam Linkedin: https://www.linkedin.com/in/khoalklam/
AI Incident Database: https://incidentdatabase.ai

BABL AI
Courses: https://courses.babl.ai/
Website: https://babl.ai/
Linkedin: https://www.linkedin.com/company/babl-ai/ Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Mon, 8 May 2023 18:39:10 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Khoa Lam, Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="56187728" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/08aabac2-46ea-4827-b20a-ae1b8477fc64/audio/cc144929-8e66-470a-874e-0706a81fc2bb/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>024. Interview with Khoa Lam on AI Auditing</itunes:title>
      <itunes:author>Khoa Lam, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/11484bb3-820b-4e6f-b2e4-553018440eae/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:55:49</itunes:duration>
      <itunes:summary>On this week&apos;s Lunchtime BABLing, Shea talks with BABL AI auditor and technical expert, Khoa Lam. 

They discuss a wide range of topics including: 

1: How Khoa got into the field of Responsible AI 
2: His work at AI Incident Database
3: His thoughts on generative AI and large language models 
4: The technical aspects of AI and Algorithmic Auditing 

Khoa Lam Linkedin: https://www.linkedin.com/in/khoalklam/
AI Incident Database: https://incidentdatabase.ai

BABL AI
Courses: https://courses.babl.ai/
Website: https://babl.ai/
Linkedin: https://www.linkedin.com/company/babl-ai/</itunes:summary>
      <itunes:subtitle>On this week&apos;s Lunchtime BABLing, Shea talks with BABL AI auditor and technical expert, Khoa Lam. 

They discuss a wide range of topics including: 

1: How Khoa got into the field of Responsible AI 
2: His work at AI Incident Database
3: His thoughts on generative AI and large language models 
4: The technical aspects of AI and Algorithmic Auditing 

Khoa Lam Linkedin: https://www.linkedin.com/in/khoalklam/
AI Incident Database: https://incidentdatabase.ai

BABL AI
Courses: https://courses.babl.ai/
Website: https://babl.ai/
Linkedin: https://www.linkedin.com/company/babl-ai/</itunes:subtitle>
      <itunes:keywords>ai auditing, risk assessment, ai audit, babl, generative ai, ai, babl ai, algorithm auditing</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>24</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">083eb82c-3b3d-4641-9beb-ec592ce3e8fb</guid>
      <title>023. AI Auditing with Jiahao Chen</title>
      <description><![CDATA[We welcomed back AI auditor and consultant Jiahao Chen to discuss all things responsible AI! 

Check out Jiahao at: https://responsibleai.tech/ Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Wed, 26 Apr 2023 14:36:33 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Jiahao Chen, Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="43321206" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/c87ad9f6-3011-44a5-9585-47f0aa8f5147/audio/e0ab7464-47fd-4958-9d6c-2cc93fae5d15/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>023. AI Auditing with Jiahao Chen</itunes:title>
      <itunes:author>Jiahao Chen, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/839cfc97-29b6-4dfe-8dec-49d742b7285e/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:45:07</itunes:duration>
      <itunes:summary>We welcomed back AI auditor and consultant Jiahao Chen to discuss all things responsible AI! 

Check out Jiahao at: https://responsibleai.tech/</itunes:summary>
      <itunes:subtitle>We welcomed back AI auditor and consultant Jiahao Chen to discuss all things responsible AI! 

Check out Jiahao at: https://responsibleai.tech/</itunes:subtitle>
      <itunes:keywords>ai governance, artificial intelligence, ai audit, human resources, algorithmic auditing, auditing, ai, risk management</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>23</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">cf022d2e-19f3-44d6-b069-3ab1c0b304d2</guid>
      <title>022. Final Rules for NYC Local Law</title>
      <description><![CDATA[In this episode Shea reviews the new rules for NYC's Local Law No. 144, which requires bias audits of automated employment decision tools. 

The date for enforcement has been pushed back to July 5th, 2023 to give time for companies to seek independent auditors (which is still a requirement).

Sign up for our new "AI & Algorithm Auditor Certification Program" starting May 8th! 
https://courses.babl.ai/p/ai-and-algorithm-auditor-certification?affcode=616760_7ts3gujl  Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Fri, 21 Apr 2023 14:41:31 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Dr Shea Brown, Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="42970290" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/7772461a-35b4-403c-8188-285a09ab8bdf/audio/6c4d16a8-7b30-4219-bcdd-01d03ae6abd9/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>022. Final Rules for NYC Local Law</itunes:title>
      <itunes:author>Dr Shea Brown, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/2abb5989-31d0-49e4-8ce6-308d0e06f6c4/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:44:45</itunes:duration>
      <itunes:summary>In this episode Shea reviews the new rules for NYC&apos;s Local Law No. 144, which requires bias audits of automated employment decision tools. 

The date for enforcement has been pushed back to July 5th, 2023 to give time for companies to seek independent auditors (which is still a requirement).

Sign up for our new &quot;AI &amp; Algorithm Auditor Certification Program&quot; starting May 8th! 
https://courses.babl.ai/p/ai-and-algorithm-auditor-certification?affcode=616760_7ts3gujl </itunes:summary>
      <itunes:subtitle>In this episode Shea reviews the new rules for NYC&apos;s Local Law No. 144, which requires bias audits of automated employment decision tools. 

The date for enforcement has been pushed back to July 5th, 2023 to give time for companies to seek independent auditors (which is still a requirement).

Sign up for our new &quot;AI &amp; Algorithm Auditor Certification Program&quot; starting May 8th! 
https://courses.babl.ai/p/ai-and-algorithm-auditor-certification?affcode=616760_7ts3gujl </itunes:subtitle>
      <itunes:keywords>ai governance, hiring, ai audit, babl, human resources, hr, ai, algorithm audit, babl ai, risk management</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>22</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">613a8b78-f672-4b90-9d1f-76388449c29b</guid>
      <title>021. Large Language Models, Open Letter Moratorium on AI, NIST&apos;s AI Risk Management Framework, and Algorithmic Bias Lab</title>
      <description><![CDATA[This week on Lunchtime BABLing, we discuss: 

1: The power, hype, and dangers of large language models like ChatGPT.

2: The recent open letter asking for a moratorium on AI research. 

3: In-context learning in large language models and the problems it poses for auditing. 

4: NIST's AI Risk Management Framework and its influence on public policy like California's ASSEMBLY BILL NO. 331. 

5: Updates on The Algorithmic Bias Lab's new training program for AI auditors. 

https://babl.ai
https://courses.babl.ai/?affcode=616760_7ts3gujl Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Sun, 2 Apr 2023 15:11:43 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Dr Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="52181843" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/8317fa83-2fb4-42fe-af97-a8f1bd352067/audio/8a5226f0-8efc-461f-ad75-cb190244d247/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>021. Large Language Models, Open Letter Moratorium on AI, NIST&apos;s AI Risk Management Framework, and Algorithmic Bias Lab</itunes:title>
      <itunes:author>Shea Brown, Dr Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/6512ac8b-64a7-4276-bb25-40054cd322f6/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:54:03</itunes:duration>
      <itunes:summary>This week on Lunchtime BABLing, we discuss: 

1: The power, hype, and dangers of large language models like ChatGPT.

2: The recent open letter asking for a moratorium on AI research. 

3: In-context learning in large language models and the problems it poses for auditing. 

4: NIST&apos;s AI Risk Management Framework and its influence on public policy like California&apos;s ASSEMBLY BILL NO. 331. 

5: Updates on The Algorithmic Bias Lab&apos;s new training program for AI auditors. 

https://babl.ai
https://courses.babl.ai/?affcode=616760_7ts3gujl</itunes:summary>
      <itunes:subtitle>This week on Lunchtime BABLing, we discuss: 

1: The power, hype, and dangers of large language models like ChatGPT.

2: The recent open letter asking for a moratorium on AI research. 

3: In-context learning in large language models and the problems it poses for auditing. 

4: NIST&apos;s AI Risk Management Framework and its influence on public policy like California&apos;s ASSEMBLY BILL NO. 331. 

5: Updates on The Algorithmic Bias Lab&apos;s new training program for AI auditors. 

https://babl.ai
https://courses.babl.ai/?affcode=616760_7ts3gujl</itunes:subtitle>
      <itunes:keywords>eu ai act, ai risk management framework, chatgpt, regulation, artificial intelligence, llm, ai research, legal, ai audit, audit, babl, human resources, tech, large language models, ai, algorithm audit, babl ai, risk management, law, california&apos;s assembly bill no. 331, nist</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>21</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1098abe2-e4b1-4193-9f62-7dee5da88434</guid>
      <title>020. AI Governance Report &amp; Auditor Training</title>
      <description><![CDATA[This week we discuss our recent report "The Current State of AI Governance", which is the culmination of a year-long research project looking into the effectiveness of AI governance controls. 

Full report here: https://babl.ai/wp-content/uploads/2023/03/AI-Governance-Report.pdf

We also discuss our new training program, the "AI & Algorithm Auditor Certificate Program", which starts in May 2023. This program has courses and certifications in 5 key areas necessary for AI auditing and Responsible AI in general: 

1: Algorithms, AI, & Machine Learning
2: Algorithmic Risk & Impact Assessments
3: AI Governance & Risk Management
4: Bias, Accuracy, & the Statistics of AI Testing
5: Algorithm Auditing & Assurance

Early pricing can be found here: https://courses.babl.ai/?affcode=616760_7ts3gujl

BABL AI: https://babl.ai Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Thu, 30 Mar 2023 09:43:36 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Dr Shea Brown, Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="56419702" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/bb0c93d1-818a-4756-9299-4a88b2b303e3/audio/ce045eeb-7bd1-4a04-928c-f15bbd2a1f4e/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>020. AI Governance Report &amp; Auditor Training</itunes:title>
      <itunes:author>Dr Shea Brown, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/bdc1e8fb-109e-4fbb-872d-164c2a5ca564/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:58:14</itunes:duration>
      <itunes:summary>This week we discuss our recent report &quot;The Current State of AI Governance&quot;, which is the culmination of a year-long research project looking into the effectiveness of AI governance controls. 

Full report here: https://babl.ai/wp-content/uploads/2023/03/AI-Governance-Report.pdf

We also discuss our new training program, the &quot;AI &amp; Algorithm Auditor Certificate Program&quot;, which starts in May 2023. This program has courses and certifications in 5 key areas necessary for AI auditing and Responsible AI in general: 

1: Algorithms, AI, &amp; Machine Learning
2: Algorithmic Risk &amp; Impact Assessments
3: AI Governance &amp; Risk Management
4: Bias, Accuracy, &amp; the Statistics of AI Testing
5: Algorithm Auditing &amp; Assurance

Early pricing can be found here: https://courses.babl.ai/?affcode=616760_7ts3gujl

BABL AI: https://babl.ai</itunes:summary>
      <itunes:subtitle>This week we discuss our recent report &quot;The Current State of AI Governance&quot;, which is the culmination of a year-long research project looking into the effectiveness of AI governance controls. 

Full report here: https://babl.ai/wp-content/uploads/2023/03/AI-Governance-Report.pdf

We also discuss our new training program, the &quot;AI &amp; Algorithm Auditor Certificate Program&quot;, which starts in May 2023. This program has courses and certifications in 5 key areas necessary for AI auditing and Responsible AI in general: 

1: Algorithms, AI, &amp; Machine Learning
2: Algorithmic Risk &amp; Impact Assessments
3: AI Governance &amp; Risk Management
4: Bias, Accuracy, &amp; the Statistics of AI Testing
5: Algorithm Auditing &amp; Assurance

Early pricing can be found here: https://courses.babl.ai/?affcode=616760_7ts3gujl

BABL AI: https://babl.ai</itunes:subtitle>
      <itunes:keywords>eu ai act, ai governance, algorithmic risk, ai audit, impact assessment, ai, dsa, risk management, digital services act, algorithm auditing</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>20</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">db25f518-0265-4f69-9bef-ccf8ec097e03</guid>
      <title>019. Interrogating Large Language Models with Jiahao Chen</title>
      <description><![CDATA[On this week's Lunchtime BABLing (#19) we talk with Jiahao Chen; data scientist, researcher, and founder of Responsible Artificial Intelligence LLC. 

We discuss the evolving debate around large language models (LLMs) and their derivatives (ChatGPT, Bard, Bing AI Chatbot, etc.), including:

1: Do systems like ChatGPT reason?
2: How do businesses know whether LLMs are useful (and safe) for them to use in a product or business process? 
3: What kinds of guardrails are needed for the ethical use of LLMs (including prompt engineering). 
4: Black-box vs. white-box testing of LLMs for algorithm auditing. 
5: Classical assessments of intelligence and their applicability to LLMs. 
6: Re-thinking education and assessment in the age of AI. 

Jiahao Chen Twitter: https://twitter.com/acidflask 
Responsible AI LLC: https://responsibleai.tech/  Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Mon, 20 Mar 2023 09:37:56 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Jiahao Chen, Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="54928335" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/65fd929d-997a-44ec-94c5-4df348fd5283/audio/470f8067-c37f-4f33-8de2-c7eb2de15327/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>019. Interrogating Large Language Models with Jiahao Chen</itunes:title>
      <itunes:author>Jiahao Chen, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/e0546e08-31cf-4da6-a1ce-70fad4affc21/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:57:13</itunes:duration>
      <itunes:summary>On this week&apos;s Lunchtime BABLing (#19) we talk with Jiahao Chen: data scientist, researcher, and founder of Responsible Artificial Intelligence LLC. 

We discuss the evolving debate around large language models (LLMs) and their derivatives (ChatGPT, Bard, Bing AI Chatbot, etc.), including:

1: Do systems like ChatGPT reason?
2: How do businesses know whether LLMs are useful (and safe) for them to use in a product or business process? 
3: What kinds of guardrails are needed for the ethical use of LLMs (including prompt engineering). 
4: Black-box vs. white-box testing of LLMs for algorithm auditing. 
5: Classical assessments of intelligence and their applicability to LLMs. 
6: Re-thinking education and assessment in the age of AI. 

Jiahao Chen Twitter: https://twitter.com/acidflask 
Responsible AI LLC: https://responsibleai.tech/ </itunes:summary>
      <itunes:subtitle>On this week&apos;s Lunchtime BABLing (#19) we talk with Jiahao Chen: data scientist, researcher, and founder of Responsible Artificial Intelligence LLC. 

We discuss the evolving debate around large language models (LLMs) and their derivatives (ChatGPT, Bard, Bing AI Chatbot, etc.), including:

1: Do systems like ChatGPT reason?
2: How do businesses know whether LLMs are useful (and safe) for them to use in a product or business process? 
3: What kinds of guardrails are needed for the ethical use of LLMs (including prompt engineering). 
4: Black-box vs. white-box testing of LLMs for algorithm auditing. 
5: Classical assessments of intelligence and their applicability to LLMs. 
6: Re-thinking education and assessment in the age of AI. 

Jiahao Chen Twitter: https://twitter.com/acidflask 
Responsible AI LLC: https://responsibleai.tech/ </itunes:subtitle>
      <itunes:keywords>eu ai act, chatgpt, ai governance, ai auditing, responsible ai, regulations, algorithmic auditing, algorithms, ai, risk management, ethical ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>19</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e7f129df-8a81-4913-8c4d-6af005f4b9d1</guid>
      <title>018. The 5 Skills you NEED for AI Auditing</title>
      <description><![CDATA[You need way more than "five skills" to be an AI auditor, but there are five areas of study that auditors need basic competency in if they want to do the kinds of audits that BABL AI performs. 

This is part of our weekly webinar/podcast that ran long, so we've cut out much of the Q&A, which covered questions we'll address in future videos, like: 

What kind of training do I need to become an AI or algorithm auditor? 
Do I need technical knowledge of machine learning to do AI ethics?  Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Wed, 15 Mar 2023 11:59:11 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Dr Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="52121925" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/8fa46d19-af45-4e2d-9aee-bb766b344186/audio/bf9fa379-1fbe-4d67-b24f-f8ced6285279/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>018. The 5 Skills you NEED for AI Auditing</itunes:title>
      <itunes:author>Shea Brown, Dr Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/c41fdaa8-6ae8-4659-8e1a-e787acda03c6/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:54:17</itunes:duration>
      <itunes:summary>You need way more than &quot;five skills&quot; to be an AI auditor, but there are five areas of study that auditors need basic competency in if they want to do the kinds of audits that BABL AI performs. 

This is part of our weekly webinar/podcast that ran long, so we&apos;ve cut out much of the Q&amp;A, which covered questions we&apos;ll address in future videos, like: 

What kind of training do I need to become an AI or algorithm auditor? 
Do I need technical knowledge of machine learning to do AI ethics? </itunes:summary>
      <itunes:subtitle>You need way more than &quot;five skills&quot; to be an AI auditor, but there are five areas of study that auditors need basic competency in if they want to do the kinds of audits that BABL AI performs. 

This is part of our weekly webinar/podcast that ran long, so we&apos;ve cut out much of the Q&amp;A, which covered questions we&apos;ll address in future videos, like: 

What kind of training do I need to become an AI or algorithm auditor? 
Do I need technical knowledge of machine learning to do AI ethics? </itunes:subtitle>
      <itunes:keywords>eu ai act, ai governance, nyc bias law, ai auditing, insurance, nyc local law 144, ai ethics, auditing algorithms, responsible ai, hiring, babl, human resources, algorithms, ai, babl ai, shea brown, dsa, dr shea brown, risk management, auditing ai, algorithm auditing</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>18</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1cd90b57-91ef-4359-9f34-6e73f4739f70</guid>
      <title>017. Criteria-Based Bias Audit</title>
      <description><![CDATA[On this week's Lunchtime BABLing, Shea goes over the difference between a direct engagement audit vs. an attestation engagement audit and gives examples from our criteria-based attestation audit for NYC Local Law No. 144.  Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Sun, 5 Mar 2023 13:09:41 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Dr Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="62815965" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/daf3085c-8fee-4343-ac17-b4f66b1f7eef/audio/5b5c8720-8258-476d-955c-f8f97e5d2e23/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>017. Criteria-Based Bias Audit</itunes:title>
      <itunes:author>Shea Brown, Dr Shea Brown</itunes:author>
      <itunes:duration>01:05:25</itunes:duration>
      <itunes:summary>On this week&apos;s Lunchtime BABLing, Shea goes over the difference between a direct engagement audit vs. an attestation engagement audit and gives examples from our criteria-based attestation audit for NYC Local Law No. 144. </itunes:summary>
      <itunes:subtitle>On this week&apos;s Lunchtime BABLing, Shea goes over the difference between a direct engagement audit vs. an attestation engagement audit and gives examples from our criteria-based attestation audit for NYC Local Law No. 144. </itunes:subtitle>
      <itunes:keywords>ai governance, ai auditing, artificial intelligence, ai ethics, responsible ai, ai audit, ai, algorithm audit, criteria based audit</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>17</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">064d2c97-5d26-4e99-bc76-926532232ac5</guid>
      <title>016. Breaking into AI Ethics (Part 2)</title>
      <description><![CDATA[In this Q&A session, Shea talks about strategies for applying the skills you already have to the emerging field of AI ethics, governance, and policy consulting. This is a follow-up to our first webinar on the topic. Questions include: 

1. Do I need an advanced degree to work in responsible AI?

2. How do I know what topics to focus on? 

3. Do I need programming skills to work in responsible AI? 

4. Where can I find training in AI ethics?  Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Sun, 26 Feb 2023 15:02:37 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Dr Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="44489154" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/3b71d821-c426-4185-85a0-fe0b8daee4e3/audio/2b5f8b11-84a0-4d72-826b-40dd75d2cdb1/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>016. Breaking into AI Ethics (Part 2)</itunes:title>
      <itunes:author>Shea Brown, Dr Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/f11cd240-9e10-4fcd-a084-08d8b42cdfaf/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:46:20</itunes:duration>
      <itunes:summary>In this Q&amp;A session, Shea talks about strategies for applying the skills you already have to the emerging field of AI ethics, governance, and policy consulting. This is a follow-up to our first webinar on the topic. Questions include: 

1. Do I need an advanced degree to work in responsible AI?

2. How do I know what topics to focus on? 

3. Do I need programming skills to work in responsible AI? 

4. Where can I find training in AI ethics? </itunes:summary>
      <itunes:subtitle>In this Q&amp;A session, Shea talks about strategies for applying the skills you already have to the emerging field of AI ethics, governance, and policy consulting. This is a follow-up to our first webinar on the topic. Questions include: 

1. Do I need an advanced degree to work in responsible AI?

2. How do I know what topics to focus on? 

3. Do I need programming skills to work in responsible AI? 

4. Where can I find training in AI ethics? </itunes:subtitle>
      <itunes:keywords>ai governance, ai auditing, ai ethics, responsible ai, ai audit, babl, algorithmic auditing, algorithm, algorithms, ai, algorithm audit, babl ai, shea brown, dr shea brown</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>16</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">54d23cdb-bb60-4671-9c2c-9e6a3c350037</guid>
      <title>015. AI Risk Management Standards</title>
      <description><![CDATA[In this episode of Lunchtime BABLing, we discuss the emergence of new standards for AI Risk Management, as well as regulatory requirements involving AI risk management, including: 

1: NIST AI Risk Management Framework 

2: ISO/IEC 23894:2023 - Information technology — Artificial intelligence — Guidance on risk management 

3: Colorado's SB21-169 - Protecting Consumers from Unfair Discrimination in Insurance Practices 

4: Risk Management in the [EU] Artificial Intelligence Act 

5: BABL AI Cheat Sheet on AI Governance  Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Sun, 19 Feb 2023 10:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Dr Shea Brown, Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="50860243" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/b254ebcd-ab53-4733-8fcc-94d30c1c3a86/audio/52d7045f-e04e-4255-a9b5-a66ce0688ed2/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>015. AI Risk Management Standards</itunes:title>
      <itunes:author>Dr Shea Brown, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/f4352241-cb6e-4533-9abc-8874f703d0fb/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:52:58</itunes:duration>
      <itunes:summary>In this episode of Lunchtime BABLing, we discuss the emergence of new standards for AI Risk Management, as well as regulatory requirements involving AI risk management, including: 

1: NIST AI Risk Management Framework 

2: ISO/IEC 23894:2023 - Information technology — Artificial intelligence — Guidance on risk management 

3: Colorado&apos;s SB21-169 - Protecting Consumers from Unfair Discrimination in Insurance Practices 

4: Risk Management in the [EU] Artificial Intelligence Act 

5: BABL AI Cheat Sheet on AI Governance </itunes:summary>
      <itunes:subtitle>In this episode of Lunchtime BABLing, we discuss the emergence of new standards for AI Risk Management, as well as regulatory requirements involving AI risk management, including: 

1: NIST AI Risk Management Framework 

2: ISO/IEC 23894:2023 - Information technology — Artificial intelligence — Guidance on risk management 

3: Colorado&apos;s SB21-169 - Protecting Consumers from Unfair Discrimination in Insurance Practices 

4: Risk Management in the [EU] Artificial Intelligence Act 

5: BABL AI Cheat Sheet on AI Governance </itunes:subtitle>
      <itunes:keywords>risk assessment, artificial intelligence, nist ai risk management framework, ai audit, audit, human resources, ai, algorithm audit, risk management, information technology, nist</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>15</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">87005ddf-f35a-4e20-acc8-3b10b3a529fd</guid>
      <title>014. The Use and Regulation of AI in Hiring with Dr. Frida Polli</title>
      <description><![CDATA[Today on Lunchtime BABLing, Shea talks with the Chief Data Science Officer at Harver, Dr. Frida Polli. Before joining Harver, Frida was the founder and CEO of pymetrics. We talk about: 

✅ How AI is being used in hiring, 

✅ What it takes to use it responsibly, 

✅ Her own journey through this space, and

✅ Reflections on upcoming regulations, including the recent New York City Local Law 144 and the EU AI Act. 

Frida: https://www.linkedin.com/in/frida-polli-phd-03a1855/
Harver: https://harver.com/  Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Sun, 12 Feb 2023 10:48:50 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Dr Frida Polli, Dr Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="41303051" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/dfa4f952-6820-41da-a403-801482de48dd/audio/286cbb89-d1cf-46b1-9525-ed53a40f1e00/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>014. The Use and Regulation of AI in Hiring with Dr. Frida Polli</itunes:title>
      <itunes:author>Dr Frida Polli, Dr Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/04694e4d-dc06-48ed-8735-36517171c31e/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:43:01</itunes:duration>
      <itunes:summary>Today on Lunchtime BABLing, Shea talks with the Chief Data Science Officer at Harver, Dr. Frida Polli. Before joining Harver, Frida was the founder and CEO of pymetrics. We talk about: 

✅ How AI is being used in hiring, 

✅ What it takes to use it responsibly, 

✅ Her own journey through this space, and

✅ Reflections on upcoming regulations, including the recent New York City Local Law 144 and the EU AI Act. 

Frida: https://www.linkedin.com/in/frida-polli-phd-03a1855/
Harver: https://harver.com/ </itunes:summary>
      <itunes:subtitle>Today on Lunchtime BABLing, Shea talks with the Chief Data Science Officer at Harver, Dr. Frida Polli. Before joining Harver, Frida was the founder and CEO of pymetrics. We talk about: 

✅ How AI is being used in hiring, 

✅ What it takes to use it responsibly, 

✅ Her own journey through this space, and

✅ Reflections on upcoming regulations, including the recent New York City Local Law 144 and the EU AI Act. 

Frida: https://www.linkedin.com/in/frida-polli-phd-03a1855/
Harver: https://harver.com/ </itunes:subtitle>
      <itunes:keywords>eu ai act, harver, ceo, new york city local law 144, dr frida polli, frida polli, ai ethics, responsible ai, legal, laws, regulations, ai regulations, babl, human resources, legal compliance, hr, ai, babl ai, shea brown, dr shea brown, risk management, pymetrics</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>14</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">96209db1-a4c5-4905-9e67-da1fdd82592f</guid>
      <title>013. Why Algorithm Auditing Needs Standards</title>
      <description><![CDATA[Today on Lunchtime BABLing, Shea talks with the Executive Director of the non-profit ForHumanity, Ryan Carrier, FHCA. 

Ryan discusses: 

✅ What role ForHumanity plays in the AI & Algorithm Auditing ecosystem

✅ Recent laws and activities that are relevant to AI auditing, including the DSA, NYC Bias Audit Law (Local Law 144), the EU AI Act, EEOC guidelines, and more

✅ Ways for people to get involved in the ecosystem. 
 
ForHumanity: https://forhumanity.center/ 
Website: https://babl.ai/
Linkedin: https://www.linkedin.com/company/babl-ai/
Courses: https://courses.babl.ai/ Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Sun, 5 Feb 2023 11:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Ryan Carrier, Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="47842023" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/986eb4eb-d416-4266-8834-82b95b7ba709/audio/c33c5fa6-1874-40b6-b28d-75f52e8a85ef/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>013. Why Algorithm Auditing Needs Standards</itunes:title>
      <itunes:author>Ryan Carrier, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/63adb31f-575b-4f71-a3d1-b7ece7475135/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:49:50</itunes:duration>
      <itunes:summary>Today on Lunchtime BABLing, Shea talks with the Executive Director of the non-profit ForHumanity, Ryan Carrier, FHCA. 

Ryan discusses: 

✅ What role ForHumanity plays in the AI &amp; Algorithm Auditing ecosystem

✅ Recent laws and activities that are relevant to AI auditing, including the DSA, NYC Bias Audit Law (Local Law 144), the EU AI Act, EEOC guidelines, and more

✅ Ways for people to get involved in the ecosystem. 
 
ForHumanity: https://forhumanity.center/ 
Website: https://babl.ai/
Linkedin: https://www.linkedin.com/company/babl-ai/
Courses: https://courses.babl.ai/</itunes:summary>
      <itunes:subtitle>Today on Lunchtime BABLing, Shea talks with the Executive Director of the non-profit ForHumanity, Ryan Carrier, FHCA. 

Ryan discusses: 

✅ What role ForHumanity plays in the AI &amp; Algorithm Auditing ecosystem

✅ Recent laws and activities that are relevant to AI auditing, including the DSA, NYC Bias Audit Law (Local Law 144), the EU AI Act, EEOC guidelines, and more

✅ Ways for people to get involved in the ecosystem. 
 
ForHumanity: https://forhumanity.center/ 
Website: https://babl.ai/
Linkedin: https://www.linkedin.com/company/babl-ai/
Courses: https://courses.babl.ai/</itunes:subtitle>
      <itunes:keywords>forhumanity, eu ai act, ai governance, nyc bias law, ai auditing, legislation, algorithmic audit, ai ethics, responsible ai, laws, nyc ai bias law, regulations, ai audit, ai regulations, ryan carrier, babl, human resources, algorithmic auditing, nyc bias, auditing, governance, hr, ai, algorithm audit, babl ai, shea brown, risk management, ethical ai, law, algorithm auditing</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>13</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2fc17409-400b-40e7-91f8-b7b6af3af3f9</guid>
      <title>012. What&apos;s in Store for AI Auditing in 2023?</title>
      <description><![CDATA[Today on Lunchtime BABLing, Shea reflects on recent meetings, events, and announcements, including:

✅ Public hearing for NYC Local Law No. 144

✅ European Commission Workshop on auditing for the DSA

✅ New AI laws and guidelines (e.g. NIST AI RMF, NJ, and NY laws)

Shea follows it up with his thoughts on the training needed for AI auditing, and why 2023 is when AI and Algorithm Auditing goes mainstream. 
 
Website: https://babl.ai/
https://www.linkedin.com/company/babl-ai/
Courses: https://courses.babl.ai/ Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Sun, 29 Jan 2023 11:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="40946532" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/d3dacd30-97b5-4c66-bdea-638a44e954b8/audio/63fdbeb9-066b-4fbd-ba64-6f390819efd4/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>012. What&apos;s in Store for AI Auditing in 2023?</itunes:title>
      <itunes:author>Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/ec20e463-04b5-48ce-8e75-e7dec87cbd3c/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:42:39</itunes:duration>
      <itunes:summary>Today on Lunchtime BABLing, Shea reflects on recent meetings, events, and announcements, including:

✅ Public hearing for NYC Local Law No. 144

✅ European Commission Workshop on auditing for the DSA

✅ New AI laws and guidelines (e.g. NIST AI RMF, NJ, and NY laws)

Shea follows it up with his thoughts on the training needed for AI auditing, and why 2023 is when AI and Algorithm Auditing goes mainstream. 
 
Website: https://babl.ai/
https://www.linkedin.com/company/babl-ai/
Courses: https://courses.babl.ai/</itunes:summary>
      <itunes:subtitle>Today on Lunchtime BABLing, Shea reflects on recent meetings, events, and announcements, including:

✅ Public hearing for NYC Local Law No. 144

✅ European Commission Workshop on auditing for the DSA

✅ New AI laws and guidelines (e.g. NIST AI RMF, NJ, and NY laws)

Shea follows it up with his thoughts on the training needed for AI auditing, and why 2023 is when AI and Algorithm Auditing goes mainstream. 
 
Website: https://babl.ai/
https://www.linkedin.com/company/babl-ai/
Courses: https://courses.babl.ai/</itunes:subtitle>
      <itunes:keywords>eu ai act, nyc bias law, ai auditing, nyc, consulting, algorithmic audit, auditing algorithms, legal, laws, european commission, babl, ai, babl ai, dsa, law, nist</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>12</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">bec3f456-1eff-46c0-a920-e53241cdce11</guid>
      <title>011. AI Audit &amp; Assurance</title>
      <description><![CDATA[On this episode of Lunchtime BABLing, Shea talks about AI Audit & Assurance, and where it fits into the emerging regulatory landscape.

✅ What laws, regulations, and guidelines are driving the need for AI audit and assurance?

✅ What does the ecosystem look like, and where is it headed (Shea might mention your company here)?

✅ What is different about algorithm auditing as compared to other types of audit and assurance?

Courses: https://courses.babl.ai/  Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Sun, 22 Jan 2023 11:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Dr Shea Brown, Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="43398280" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/c9a47b2c-9caa-49d8-bae3-e8453c39aafa/audio/6c733af2-0cfa-474a-88da-25710f8d083f/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>011. AI Audit &amp; Assurance</itunes:title>
      <itunes:author>Dr Shea Brown, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/b9d9b543-1396-4880-82c2-087e0ad7e8b5/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:45:12</itunes:duration>
      <itunes:summary>On this episode of Lunchtime BABLing, Shea talks about AI Audit &amp; Assurance, and where it fits into the emerging regulatory landscape.

✅ What laws, regulations, and guidelines are driving the need for AI audit and assurance?

✅ What does the ecosystem look like, and where is it headed (Shea might mention your company here)?

✅ What is different about algorithm auditing as compared to other types of audit and assurance?

Courses: https://courses.babl.ai/ </itunes:summary>
      <itunes:subtitle>On this episode of Lunchtime BABLing, Shea talks about AI Audit &amp; Assurance, and where it fits into the emerging regulatory landscape.

✅ What laws, regulations, and guidelines are driving the need for AI audit and assurance?

✅ What does the ecosystem look like, and where is it headed (Shea might mention your company here)?

✅ What is different about algorithm auditing as compared to other types of audit and assurance?

Courses: https://courses.babl.ai/ </itunes:subtitle>
      <itunes:keywords>audit and assurance, ai governance, nyc bias law, ai auditing, artificial intelligence, algorithmic bias law, algorithmic bias, ai ethics, auditing algorithms, responsible ai, laws, regulations, human resources, algorithmic auditing, governance, hr, ai, algorithm audit, shea brown, nyc algorithm law, ethical ai, auditing algorithm, algorithm auditing</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>11</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a8fa341f-b58b-4cf5-bb82-8086e1d60a13</guid>
      <title>010. Breaking into AI Ethics Consulting</title>
      <description><![CDATA[How can you apply the skills you already have to the emerging field of AI ethics, governance, and policy consulting? In this edition of Lunchtime BABLing, Shea Brown talks about his experience and thoughts on finding your unique niche in the industry. 
 Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Sun, 15 Jan 2023 13:30:15 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="37651787" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/4414abe5-a0b7-4c4c-b012-2cc574b302af/audio/6ec691fe-8412-438f-ab8b-9416ad74ce46/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>010. Breaking into AI Ethics Consulting</itunes:title>
      <itunes:author>Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/a0077d9d-1e1b-4f05-b2a1-bef53bbbfcbe/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:39:00</itunes:duration>
      <itunes:summary>How can you apply the skills you already have to the emerging field of AI ethics, governance, and policy consulting? In this edition of Lunchtime BABLing, Shea Brown talks about his experience and thoughts on finding your unique niche in the industry. 
</itunes:summary>
      <itunes:subtitle>How can you apply the skills you already have to the emerging field of AI ethics, governance, and policy consulting? In this edition of Lunchtime BABLing, Shea Brown talks about his experience and thoughts on finding your unique niche in the industry. 
</itunes:subtitle>
      <itunes:keywords>careers in artificial intelligence, consulting, artificial intelligence, ai ethics, responsible artificial intelligence, responsible ai, careers, careers in ai, machine learning, ai consulting, ethics, human resources, new careers, hr, ai, shea brown, dr shea brown, business consulting, social sciences</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>10</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">40a233f2-b755-4502-9779-02134e44f229</guid>
      <title>009. The Future of AI Regulation in the US &amp; Europe with Merve Hickok</title>
      <description><![CDATA[On this week's Lunchtime BABLing, we're talking with Merve Hickok, a leading voice in AI policy and regulation. We discuss the future of AI regulation, especially the EU AI Act. Topics include: 

1: How can regulations best protect fundamental rights?
2: What will regulations require of companies and governments?
3: Why are responsible AI practices crucial for businesses?
4: What can companies do now to ensure they're on the right path?

Merve's LinkedIn: https://www.linkedin.com/in/mervehickok/ 
Free resources for Responsible AI: https://www.aiethicist.org/  Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!
]]></description>
      <pubDate>Sun, 8 Jan 2023 11:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Babl AI)</author>
      <link>https://babl.ai</link>
      <enclosure length="54142757" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/ceeecda4-0186-4e50-b749-4980dc42b82b/audio/7364a81e-9646-410d-b653-54ee26b945cc/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>009. The Future of AI Regulation in the US &amp; Europe with Merve Hickok</itunes:title>
      <itunes:author>Babl AI</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/d0e4854a-8538-4402-9f4f-f4c2a16fffdc/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:56:23</itunes:duration>
      <itunes:summary>On this week&apos;s Lunchtime BABLing, we&apos;re talking with Merve Hickok, a leading voice in AI policy and regulation. We discuss the future of AI regulation, especially the EU AI Act. Topics include: 

1: How can regulations best protect fundamental rights?
2: What will regulations require of companies and governments?
3: Why are responsible AI practices crucial for businesses?
4: What can companies do now to ensure they&apos;re on the right path?

Merve&apos;s LinkedIn: https://www.linkedin.com/in/mervehickok/ 
Free resources for Responsible AI: https://www.aiethicist.org/ </itunes:summary>
      <itunes:subtitle>On this week&apos;s Lunchtime BABLing, we&apos;re talking with Merve Hickok, a leading voice in AI policy and regulation. We discuss the future of AI regulation, especially the EU AI Act. Topics include: 

1: How can regulations best protect fundamental rights?
2: What will regulations require of companies and governments?
3: Why are responsible AI practices crucial for businesses?
4: What can companies do now to ensure they&apos;re on the right path?

Merve&apos;s LinkedIn: https://www.linkedin.com/in/mervehickok/ 
Free resources for Responsible AI: https://www.aiethicist.org/ </itunes:subtitle>
      <itunes:keywords>ai auditing, equality, artificial intelligence, ai ethics, responsible ai, legal, regulations, human resources, merve hickok, hr, ai, shea brown, dr shea brown, human rights, law</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>9</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">167ff371-53ee-4e8b-9d8a-0a7eb8c0ff18</guid>
      <title>008. New York City’s Automated Employment Decision Tool (AEDT) law - December 2022 Update</title>
<description><![CDATA[Last week the latest updates to New York City’s Bias Audit Law for Automated Employment Decision Tools (AEDT), also known as NYC Local Law 144, were released. Join this week’s episode of Lunchtime BABLing with BABL AI CEO, Dr. Shea Brown, as he breaks down these latest changes to the NYC Bias Audit Law.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Sun, 1 Jan 2023 11:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown, Dr Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="44703149" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/baf50530-b6b1-426b-8a60-b5f9187e244d/audio/4aa51ed4-6a28-4ce4-ac4d-12a1b8410cc4/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>008. New York City’s Automated Employment Decision Tool (AEDT) law - December 2022 Update</itunes:title>
      <itunes:author>Shea Brown, Dr Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/a773ce54-f590-4409-8c1a-a79ae620beaa/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:46:33</itunes:duration>
      <itunes:summary>Last week the latest updates to New York City&#8217;s Bias Audit Law for Automated Employment Decision Tools (AEDT), also known as NYC Local Law 144, were released. Join this week&#8217;s episode of Lunchtime BABLing with BABL AI CEO, Dr. Shea Brown, as he breaks down these latest changes to the NYC Bias Audit Law.</itunes:summary>
      <itunes:subtitle>Last week the latest updates to New York City&#8217;s Bias Audit Law for Automated Employment Decision Tools (AEDT), also known as NYC Local Law 144, were released. Join this week&#8217;s episode of Lunchtime BABLing with BABL AI CEO, Dr. Shea Brown, as he breaks down these latest changes to the NYC Bias Audit Law.</itunes:subtitle>
      <itunes:keywords>automated employment decision tool, nyc bias law, nyc, consulting, aedt, ceo, nyc local law 144, babl, nyc ai law, auditing, dr. shea brown, babl ai, shea brown, bias audit, law</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>8</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e4fef125-8506-4129-8ec9-e342cc3f1772</guid>
      <title>007. A conversation exploring the socio-technical side of AI Ethics with special guest Borhane Blili-Hamelin</title>
<description><![CDATA[In today's episode of Lunchtime BABLing, Shea Brown invites Borhane Blili-Hamelin, PhD to discuss some surprising parallels between the challenge of putting AI ethics into practice in industry versus research settings!

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Sun, 11 Dec 2022 14:34:27 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Borhane Blili-Hamelin, Dr Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="47080919" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/ab4e0e3a-864a-472c-af18-39d6529c79c0/audio/7a925d02-2920-49c0-88d5-4243a1148e13/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>007. A conversation exploring the socio-technical side of AI Ethics with special guest Borhane Blili-Hamelin</itunes:title>
      <itunes:author>Borhane Blili-Hamelin, Dr Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/203bb065-b120-4ca4-b78c-7d221a34140a/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:49:02</itunes:duration>
      <itunes:summary>In today&apos;s episode of Lunchtime BABLing, Shea Brown invites Borhane Blili-Hamelin, PhD to discuss some surprising parallels between the challenge of putting AI ethics into practice in industry versus research settings!</itunes:summary>
      <itunes:subtitle>In today&apos;s episode of Lunchtime BABLing, Shea Brown invites Borhane Blili-Hamelin, PhD to discuss some surprising parallels between the challenge of putting AI ethics into practice in industry versus research settings!</itunes:subtitle>
      <itunes:keywords>ai auditing, tech ethics, phd, ai ethics, research, algorithms, ai, socio-technical, shea brown, dr shea brown, borhane blili-hamelin</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>7</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">45a65457-4f1a-4dc1-b36d-4b8467c504c2</guid>
      <title>006. Process Audit for Disparate Impact Testing</title>
<description><![CDATA[In this week’s episode, BABL AI’s CEO Shea Brown discusses what a process audit is, and how it can be used to verify disparate impact testing conducted by employers and vendors.

New York City’s Local Law 144 requires independent bias audits for automated employment decision tools (AEDT) used to substantially assist or replace decisions in hiring or promotion. The law comes into effect on Jan 1, 2023.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Fri, 18 Nov 2022 10:00:00 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="39163100" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/76bbd39d-cdd7-45b0-89a3-d1970468fe1d/audio/599c3716-6ec4-4f08-983c-55a6a7dd96a4/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>006. Process Audit for Disparate Impact Testing</itunes:title>
      <itunes:author>Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/425fd3fc-851a-42f6-b227-33b61b47ac21/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:40:47</itunes:duration>
      <itunes:summary>In this week’s episode, BABL AI’s CEO Shea Brown discusses what a process audit is, and how it can be used to verify disparate impact testing conducted by employers and vendors.

New York City’s Local Law 144 requires independent bias audits for automated employment decision tools (AEDT) used to substantially assist or replace decisions in hiring or promotion. The law comes into effect on Jan 1, 2023. </itunes:summary>
      <itunes:subtitle>In this week’s episode, BABL AI’s CEO Shea Brown discusses what a process audit is, and how it can be used to verify disparate impact testing conducted by employers and vendors.

New York City’s Local Law 144 requires independent bias audits for automated employment decision tools (AEDT) used to substantially assist or replace decisions in hiring or promotion. The law comes into effect on Jan 1, 2023. </itunes:subtitle>
      <itunes:keywords>nyc bias law, algorithmic audit, nyc local law 144, ai ethics, auditing algorithms, regulations, ethics, babl, human resources, auditing, new york city bias law, ai, babl ai, shea brown, dr shea brown, process audit, bias audit, law</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>6</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">badcb168-11a2-4e0f-8614-f4212c9d434a</guid>
      <title>005. Algorithmic Auditing International Conference by Eticas</title>
<description><![CDATA[This week CEO Dr. Shea Brown reflects on his time at the Algorithmic Auditing International Conference by Eticas.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Sun, 13 Nov 2022 16:21:10 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="20490344" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/68a60381-3305-43c9-8d12-2f7435b75aab/audio/127fc58e-f3e4-482c-b805-22b2d4fe9fe4/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>005. Algorithmic Auditing International Conference by Eticas</itunes:title>
      <itunes:author>Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/fb4cf795-e27e-414d-b6ae-7f19f836e902/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:21:20</itunes:duration>
      <itunes:summary>This week CEO Dr. Shea Brown reflects on his time at the Algorithmic Auditing International Conference by Eticas. </itunes:summary>
      <itunes:subtitle>This week CEO Dr. Shea Brown reflects on his time at the Algorithmic Auditing International Conference by Eticas. </itunes:subtitle>
      <itunes:keywords>artificial intelligence act, nyc bias law, consulting, artificial intelligence, ceo, ai ethics, conference, ethics, ai act, babl, algorithmic auditing, tech, eticas, auditing, ai, babl ai, shea brown, european union</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>5</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">4f71a996-1dcf-455a-b420-8cbf5a837ca7</guid>
      <title>004. Ethical Risk &amp; Impact Assessments for AI Systems</title>
      <description><![CDATA[A number of forthcoming laws and regulations that will govern the use and development of AI will require mandatory risk or impact assessments. 

This week BABL AI’s CEO Shea Brown discusses what an ethical risk assessment is, and how your organization can implement them today.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Fri, 4 Nov 2022 10:51:17 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Dr Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="30817279" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/3afc26d7-7017-49d0-8da9-c083249b0dd9/audio/a709923e-4355-4cba-a878-6e4fbdd09f40/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>004. Ethical Risk &amp; Impact Assessments for AI Systems</itunes:title>
      <itunes:author>Dr Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/93c296a7-2a8c-4bf0-a974-4ad2c280311a/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:32:06</itunes:duration>
      <itunes:summary>A number of forthcoming laws and regulations that will govern the use and development of AI will require mandatory risk or impact assessments. 

This week BABL AI’s CEO Shea Brown discusses what an ethical risk assessment is, and how your organization can implement them today.</itunes:summary>
      <itunes:subtitle>A number of forthcoming laws and regulations that will govern the use and development of AI will require mandatory risk or impact assessments. 

This week BABL AI’s CEO Shea Brown discusses what an ethical risk assessment is, and how your organization can implement them today.</itunes:subtitle>
      <itunes:keywords>autonomous systems, nyc bias law, risk assessment, ai systems, ai ethics, ethical risk assessment, ethics, babl, impact assessment, ai, babl ai, shea brown, law</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>4</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">261c2713-7de2-4ab6-be5c-4ec178f60f32</guid>
      <title>003. What is an Algorithmic Bias Audit?</title>
      <description><![CDATA[New York City’s Local Law 144 requires independent bias audits for automated employment decision tools (AEDT) used to substantially assist or replace decisions in hiring or promotion. Despite the law coming into effect on Jan 1, 2023, several aspects of the bias audit have yet to be clarified.

In our second weekly mini-webinar series, BABL AI’s CEO Shea Brown discusses what an algorithmic bias audit is, including:

1. Bias audit basics
2. Differences between employer and vendor audits
3. Open Q&A session

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Thu, 27 Oct 2022 21:16:30 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Babl AI)</author>
      <link>https://babl.ai</link>
      <enclosure length="37166091" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/f3969874-fd08-40e2-af6f-9f9bfd340c26/audio/fe1ab000-a19a-477c-8246-b07ae04744db/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>003. What is an Algorithmic Bias Audit?</itunes:title>
      <itunes:author>Babl AI</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/d82cd171-8d1e-4d23-8e78-566d75244d24/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:38:42</itunes:duration>
      <itunes:summary>New York City’s Local Law 144 requires independent bias audits for automated employment decision tools (AEDT) used to substantially assist or replace decisions in hiring or promotion. Despite the law coming into effect on Jan 1, 2023, several aspects of the bias audit have yet to be clarified.

In our second weekly mini-webinar series, BABL AI’s CEO Shea Brown discusses what an algorithmic bias audit is, including:

1. Bias audit basics
2. Differences between employer and vendor audits
3. Open Q&amp;A session</itunes:summary>
      <itunes:subtitle>New York City’s Local Law 144 requires independent bias audits for automated employment decision tools (AEDT) used to substantially assist or replace decisions in hiring or promotion. Despite the law coming into effect on Jan 1, 2023, several aspects of the bias audit have yet to be clarified.

In our second weekly mini-webinar series, BABL AI’s CEO Shea Brown discusses what an algorithmic bias audit is, including:

1. Bias audit basics
2. Differences between employer and vendor audits
3. Open Q&amp;A session</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>3</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">dbf9dd6d-f2da-41f3-a555-8005611ec828</guid>
      <title>002. NYC Algorithm Hiring Law - Update</title>
      <description><![CDATA[New York City’s Local Law 144 requires independent bias audits for automated employment decision tools (AEDT) used to substantially assist or replace decisions in hiring or promotion.

Despite the law coming into effect on Jan 1, 2023, several aspects of the bias audit have yet to be clarified.

This week we’re discussing the amendments to the upcoming NYC hiring law and what they might entail for vendors, employers, and more.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
]]></description>
      <pubDate>Thu, 20 Oct 2022 17:26:50 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Dr Shea Brown, Shea Brown)</author>
      <link>https://babl.ai</link>
      <enclosure length="39744887" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/2e7b6dca-c4f4-43db-9a0d-38e6045914ab/audio/1fef2af5-c193-4a82-bd7e-66ef36950813/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>002. NYC Algorithm Hiring Law - Update</itunes:title>
      <itunes:author>Dr Shea Brown, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/0a8760a7-3458-43e9-8112-1cc363c1ece9/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:41:24</itunes:duration>
      <itunes:summary>New York City’s Local Law 144 requires independent bias audits for automated employment decision tools (AEDT) used to substantially assist or replace decisions in hiring or promotion.

Despite the law coming into effect on Jan 1, 2023, several aspects of the bias audit have yet to be clarified.

This week we’re discussing the amendments to the upcoming NYC hiring law and what they might entail for vendors, employers, and more.</itunes:summary>
      <itunes:subtitle>New York City’s Local Law 144 requires independent bias audits for automated employment decision tools (AEDT) used to substantially assist or replace decisions in hiring or promotion.

Despite the law coming into effect on Jan 1, 2023, several aspects of the bias audit have yet to be clarified.

This week we’re discussing the amendments to the upcoming NYC hiring law and what they might entail for vendors, employers, and more.</itunes:subtitle>
      <itunes:keywords>autonomous systems, local law 144, nyc bias law, nyc ai hiring law, nyc, nyc hiring law, independent audit of ai systems, algorithmic bias, new york city local law 144, ai ethics, auditing algorithms, regulations, law 144, ethics, babl, new york hiring law, human resources, auditing of algorithms, nyc bias audit, hr, ai, babl ai, new york city hiring law, ai audits, law</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>2</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">44f83b3c-462d-4513-955e-966eba4c8ebc</guid>
      <title>001. Understanding the NYC Algorithm Hiring Law</title>
      <description><![CDATA[<p>The Algorithmic Bias Podcast, presented by Babl AI, covers everything related to algorithmic bias, auditing, governance, ethics and more. You can find a new episode uploaded every week on all major platforms, as well as video recordings on the Babl AI YouTube channel. Please follow us on social media to stay up to date with new episodes.</p><p>Babl AI is a leading boutique consultancy that focuses on responsible AI governance, algorithm risk and impact assessments, algorithmic bias assessments and audits, and corporate training on responsible AI.</p><p>We combine leading research expertise and extensive practitioner experience in AI and organizational ethics to drive impactful change at the frontier of technology for our clients.</p><p><a href="https://babl.ai">babl.ai</a></p><p><a href="https://www.youtube.com/channel/UCabVe81x_XHoGGDXTIWQSJQ">https://www.youtube.com/channel/UCabVe81x_XHoGGDXTIWQSJQ</a></p><p><a href="https://www.linkedin.com/company/babl-ai/">https://www.linkedin.com/company/babl-ai/</a></p>
<p>Check out the babl.ai website for more stuff on AI Governance and Responsible AI!</p>]]></description>
      <pubDate>Sat, 10 Sep 2022 15:33:31 +0000</pubDate>
      <author>jeffery-recker@bablai.com (Jeffery Recker, Shea Brown)</author>
      <link>https://babl.ai</link>
      <content:encoded><![CDATA[<p>The Algorithmic Bias Podcast, presented by Babl AI, covers everything related to algorithmic bias, auditing, governance, ethics and more. You can find a new episode uploaded every week on all major platforms, as well as video recordings on the Babl AI YouTube channel. Please follow us on social media to stay up to date with new episodes.</p><p>Babl AI is a leading boutique consultancy that focuses on responsible AI governance, algorithm risk and impact assessments, algorithmic bias assessments and audits, and corporate training on responsible AI.</p><p>We combine leading research expertise and extensive practitioner experience in AI and organizational ethics to drive impactful change at the frontier of technology for our clients.</p><p><a href="https://babl.ai">babl.ai</a></p><p><a href="https://www.youtube.com/channel/UCabVe81x_XHoGGDXTIWQSJQ">https://www.youtube.com/channel/UCabVe81x_XHoGGDXTIWQSJQ</a></p><p><a href="https://www.linkedin.com/company/babl-ai/">https://www.linkedin.com/company/babl-ai/</a></p>
<p>Check out the babl.ai website for more stuff on AI Governance and Responsible AI!</p>]]></content:encoded>
      <enclosure length="16979080" type="audio/mpeg" url="https://cdn.simplecast.com/audio/d77bc28c-0a2d-4ce6-968f-3f6b067d51dc/episodes/b41adfac-70fa-440d-81ea-dedb928b3fa5/audio/59142526-bfc4-4ed3-9d73-89fc296bbe80/default_tc.mp3?aid=rss_feed&amp;feed=EndWK70X"/>
      <itunes:title>001. Understanding the NYC Algorithm Hiring Law</itunes:title>
      <itunes:author>Jeffery Recker, Shea Brown</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/18f349b5-c775-4529-9a83-159fc5e0ebc8/020fb3be-43a3-4a55-8cf3-7fe3ee8ad966/3000x3000/colorful-modern-microphone-illustrations-podcast-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:17:41</itunes:duration>
      <itunes:summary>In this episode, Jeffery Recker interviews the CEO of Babl AI, Dr. Shea Brown, on the basics of New York City’s Local Law 144, otherwise known as the NYC Algorithmic Hiring Law, the NYC Bias Law, or the NYC AI Hiring Law. Shea explains the basics of this new law and what companies need to understand before it takes effect January 1, 2023.</itunes:summary>
      <itunes:subtitle>In this episode, Jeffery Recker interviews the CEO of Babl AI, Dr. Shea Brown, on the basics of New York City’s Local Law 144, otherwise known as the NYC Algorithmic Hiring Law, the NYC Bias Law, or the NYC AI Hiring Law. Shea explains the basics of this new law and what companies need to understand before it takes effect January 1, 2023.</itunes:subtitle>
      <itunes:keywords>autonomous systems, nyc bias law, nyc ai hiring law, nyc, nyc hiring law, algorithmic bias, ai ethics, auditing algorithms, regulations, ethics, babl, human resources, nyc algorithm hiring law, new york city, hr, ai, babl ai, ai audits, new york city’s local law 144, law</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1</itunes:episode>
    </item>
  </channel>
</rss>