<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:media="http://search.yahoo.com/mrss/" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link href="https://feeds.simplecast.com/LhobkclV" rel="self" title="MP3 Audio" type="application/rss+xml"/>
    <atom:link href="https://simplecast.superfeedr.com" rel="hub" xmlns="http://www.w3.org/2005/Atom"/>
    <generator>https://simplecast.com</generator>
    <title>People and AI: Explore the Latest Advances, Research, and Risks of Artificial Intelligence</title>
    <description>Embark on an intellectual journey with &quot;People and AI,&quot; a podcast that opens the gateway to the forefront of artificial intelligence. Hosted by Karthik Ramakrishnan and powered by Armilla AI, this series goes beyond headlines, exploring the intricacies of AI development. Join Karthik and the Armilla AI experts as they dissect AI advances, applications, risks, trends, and research, offering an unparalleled glimpse into the inner workings of the technologies shaping our future.

In each episode, we navigate the evolving landscape of ethical AI, contemplating the balance between innovation and responsibility. Explore the ethical quandaries that arise during AI development as Karthik Ramakrishnan engages with industry leaders and experts to shed light on the moral considerations embedded in the very core of artificial intelligence. With its cutting-edge capabilities, Armilla AI is the driving force behind these discussions, emphasizing the imperative need for ethical decision-making in the expanding realm of technology.

As we unravel the complete development cycles of AI, discover how &quot;People and AI&quot; serves as a hub for the latest insights from the minds at the forefront of the field. Armilla AI&apos;s expertise fuels the discourse on tech ethics, providing listeners with a comprehensive understanding of the ethical dimensions of creating and deploying AI technologies. Karthik Ramakrishnan steers these conversations, ensuring that each episode informs and inspires a thoughtful consideration of the responsible development and application of artificial intelligence.

Tune in to &quot;People and AI&quot; for an immersive experience where Karthik Ramakrishnan and Armilla AI collaborate to demystify the complexities of AI, offering a unique perspective on the intersection of AI design principles, ethical considerations, and the limitless potential of technology in our rapidly evolving world.</description>
    <copyright>2024 Armilla AI</copyright>
    <language>en</language>
    <pubDate>Thu, 19 Sep 2024 10:00:00 +0000</pubDate>
    <lastBuildDate>Thu, 19 Sep 2024 10:00:10 +0000</lastBuildDate>
    <image>
      <link>https://people-ai.simplecast.com</link>
      <title>People and AI: Explore the Latest Advances, Research, and Risks of Artificial Intelligence</title>
      <url>https://image.simplecastcdn.com/images/d4a672f1-93ae-4a28-a2f0-663dad101e56/c4b95efc-3186-4628-8bbb-f3879ebd19eb/3000x3000/podcast-cover-1.jpg?aid=rss_feed</url>
    </image>
    <link>https://people-ai.simplecast.com</link>
    <itunes:type>episodic</itunes:type>
    <itunes:summary>Embark on an intellectual journey with &quot;People and AI,&quot; a podcast that opens the gateway to the forefront of artificial intelligence. Hosted by Karthik Ramakrishnan and powered by Armilla AI, this series goes beyond headlines, exploring the intricacies of AI development. Join Karthik and the Armilla AI experts as they dissect AI advances, applications, risks, trends, and research, offering an unparalleled glimpse into the inner workings of the technologies shaping our future.

In each episode, we navigate the evolving landscape of ethical AI, contemplating the balance between innovation and responsibility. Explore the ethical quandaries that arise during AI development as Karthik Ramakrishnan engages with industry leaders and experts to shed light on the moral considerations embedded in the very core of artificial intelligence. With its cutting-edge capabilities, Armilla AI is the driving force behind these discussions, emphasizing the imperative need for ethical decision-making in the expanding realm of technology.

As we unravel the complete development cycles of AI, discover how &quot;People and AI&quot; serves as a hub for the latest insights from the minds at the forefront of the field. Armilla AI&apos;s expertise fuels the discourse on tech ethics, providing listeners with a comprehensive understanding of the ethical dimensions of creating and deploying AI technologies. Karthik Ramakrishnan steers these conversations, ensuring that each episode informs and inspires a thoughtful consideration of the responsible development and application of artificial intelligence.

Tune in to &quot;People and AI&quot; for an immersive experience where Karthik Ramakrishnan and Armilla AI collaborate to demystify the complexities of AI, offering a unique perspective on the intersection of AI design principles, ethical considerations, and the limitless potential of technology in our rapidly evolving world.</itunes:summary>
    <itunes:author>Karthik Ramakrishnan, Griffin Wahl</itunes:author>
    <itunes:explicit>false</itunes:explicit>
    <itunes:image href="https://image.simplecastcdn.com/images/d4a672f1-93ae-4a28-a2f0-663dad101e56/c4b95efc-3186-4628-8bbb-f3879ebd19eb/3000x3000/podcast-cover-1.jpg?aid=rss_feed"/>
    <itunes:new-feed-url>https://feeds.simplecast.com/LhobkclV</itunes:new-feed-url>
    <itunes:keywords>ai, artificial intelligence, machine learning, ml, product development, entrepreneurship</itunes:keywords>
    <itunes:owner>
      <itunes:name>Armilla AI</itunes:name>
      <itunes:email>griffin@armilla.ai</itunes:email>
    </itunes:owner>
    <itunes:category text="Technology"/>
    <itunes:category text="Science">
      <itunes:category text="Mathematics"/>
      <itunes:category text="Physics"/>
    </itunes:category>
    <item>
      <guid isPermaLink="false">fdbea814-3853-4147-98ca-0593f99ebcaf</guid>
      <title>Operationalizing AI Ethics: A Guide for Companies with Benjamin Roome</title>
      <description><![CDATA[<p>In this episode of "People and AI," host Karthik Ramakrishnan interviews Dr. Benjamin Roome, founder of Ethical Resolve and CEO of Badge List, to discuss the intersection of AI, ethics, and innovation. Dr. Roome shares insights on operationalizing ethical AI practices, exploring issues like disparate impact, the four-fifths rule, and the importance of continuous improvement beyond legal compliance. The conversation covers real-world cases such as RealPage's antitrust investigation and Mednition’s responsible AI in healthcare, along with strategies for mitigating risks, fostering transparency, and using digital credentialing to bridge workforce gaps.</p>
]]></description>
      <pubDate>Thu, 19 Sep 2024 10:00:00 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/ben-roome-txpqAPMb</link>
      <content:encoded><![CDATA[<p>In this episode of "People and AI," host Karthik Ramakrishnan interviews Dr. Benjamin Roome, founder of Ethical Resolve and CEO of Badge List, to discuss the intersection of AI, ethics, and innovation. Dr. Roome shares insights on operationalizing ethical AI practices, exploring issues like disparate impact, the four-fifths rule, and the importance of continuous improvement beyond legal compliance. The conversation covers real-world cases such as RealPage's antitrust investigation and Mednition’s responsible AI in healthcare, along with strategies for mitigating risks, fostering transparency, and using digital credentialing to bridge workforce gaps.</p>
]]></content:encoded>
      <enclosure length="42461695" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/110c035a-426d-4332-9cac-2a41a389fffc/audio/7d981ebd-4445-405f-9408-14c14babe023/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Operationalizing AI Ethics: A Guide for Companies with Benjamin Roome</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:44:11</itunes:duration>
      <itunes:summary>In this episode of &quot;People and AI,&quot; host Karthik Ramakrishnan interviews Dr. Benjamin Roome, founder of Ethical Resolve and CEO of Badge List, to discuss the intersection of AI, ethics, and innovation. </itunes:summary>
      <itunes:subtitle>In this episode of &quot;People and AI,&quot; host Karthik Ramakrishnan interviews Dr. Benjamin Roome, founder of Ethical Resolve and CEO of Badge List, to discuss the intersection of AI, ethics, and innovation. </itunes:subtitle>
      <itunes:keywords>real estate optimization, employment practices, digital credentialing, disparate impact, four-fifths rule, economic impact, arbitrary thresholds, algorithmic collusion, accountability in ai, ai governance, ethical debt, ai ethics, continuous improvement, skills taxonomies, ai in education, fairness in ai, social impact, risk mitigation, ai bias, paperclip problem, stakeholder impact mapping, artificial general intelligence (agi), operationalizing ethics, ai-generated false information, workforce development, sepsis prediction models, transparency in ai, ai regulations, legal challenges, ethical ai policies</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>20</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5b424f27-8b22-489e-955d-25f966d95752</guid>
      <title>Examining Media Bias: How It Shapes Our World with Dr. Federica Fornaciari</title>
      <description><![CDATA[<p>In this episode, host Karthik Ramakrishnan sits down with Dr. Federica Fornaciari, a distinguished professor and academic program director at the National University. Dr. Federica dives deep into her fascinating work exploring media studies, political communication, and the profound implications of privacy in our digital age.</p><p>The dialogue touches on how media frames shape public perception and the critical importance of equitable media representation and privacy protection in fostering a fairer society.</p>
]]></description>
      <pubDate>Thu, 05 Sep 2024 13:39:24 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/federica-39aHE2rh</link>
      <content:encoded><![CDATA[<p>In this episode, host Karthik Ramakrishnan sits down with Dr. Federica Fornaciari, a distinguished professor and academic program director at the National University. Dr. Federica dives deep into her fascinating work exploring media studies, political communication, and the profound implications of privacy in our digital age.</p><p>The dialogue touches on how media frames shape public perception and the critical importance of equitable media representation and privacy protection in fostering a fairer society.</p>
]]></content:encoded>
      <enclosure length="47249954" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/3a06af16-1212-4fd4-ad3c-a3dc3c220c49/audio/976d0e58-a7d2-40aa-9594-7b400fe16b39/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Examining Media Bias: How It Shapes Our World with Dr. Federica Fornaciari</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:49:09</itunes:duration>
      <itunes:summary>In this episode, host Karthik Ramakrishnan sits down with Dr. Federica Fornaciari, a distinguished professor and academic program director at the National University. Dr. Federica dives deep into her fascinating work exploring media studies, political communication, and the profound implications of privacy in our digital age.
</itunes:summary>
      <itunes:subtitle>In this episode, host Karthik Ramakrishnan sits down with Dr. Federica Fornaciari, a distinguished professor and academic program director at the National University. Dr. Federica dives deep into her fascinating work exploring media studies, political communication, and the profound implications of privacy in our digital age.
</itunes:subtitle>
      <itunes:keywords>dr. federica fornaciari, misinformation, critical thinking, equitable media representation, gdpr, disinformation, ai and digital technologies, ai algorithms, generative ai, social justice, national university, social media platforms, data ethics, media studies, project nightingale, cambridge analytica, female political candidates, deepfakes, ethical use of media, commodification of personal data, privacy protection, technology, ai-generated content, karthik ramakrishnan, media narratives, telegraph, interdisciplinary studies, political communication, consumer data, public perception, privacy in the digital age, people and ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>19</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">af7df935-b1d7-40b0-8615-5d9a78f620c2</guid>
      <title>Birago Jones: Revolutionizing AI with Structured Prompts</title>
      <description><![CDATA[<p>In this episode, we sit down with Birago Jones, the visionary founder and CEO of Pienso, to explore the innovative intersection of artificial intelligence (AI) and human expertise. Discover how Pienso’s platform redefines AI applications by incorporating subject matter experts into the model training process.</p><p> </p><p>Birago shares the origins of Pienso, starting from research projects at MIT focused on mitigating cyberbullying, to building a full-fledged AI company. He discusses the core challenges enterprises face, such as GPU costs, data residency, and privacy issues, and explains how Pienso adapts to the customer’s preferred environment.</p><p> </p>
]]></description>
      <pubDate>Thu, 22 Aug 2024 14:00:00 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/birago-jones-revolutionizing-ai-with-structured-prompts-emNtXmTM</link>
      <content:encoded><![CDATA[<p>In this episode, we sit down with Birago Jones, the visionary founder and CEO of Pienso, to explore the innovative intersection of artificial intelligence (AI) and human expertise. Discover how Pienso’s platform redefines AI applications by incorporating subject matter experts into the model training process.</p><p> </p><p>Birago shares the origins of Pienso, starting from research projects at MIT focused on mitigating cyberbullying, to building a full-fledged AI company. He discusses the core challenges enterprises face, such as GPU costs, data residency, and privacy issues, and explains how Pienso adapts to the customer’s preferred environment.</p><p> </p>
]]></content:encoded>
      <enclosure length="43606304" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/f006495d-8cab-43c3-ba9e-4e978e33a1fc/audio/b1f5acd9-ce9b-4e0d-94f5-7734f6d02133/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Birago Jones: Revolutionizing AI with Structured Prompts</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:45:22</itunes:duration>
      <itunes:summary>In this episode, we sit down with Birago Jones, the visionary founder and CEO of Pienso, to explore the innovative intersection of artificial intelligence (AI) and human expertise. Discover how Pienso’s platform redefines AI applications by incorporating subject matter experts into the model training process.</itunes:summary>
      <itunes:subtitle>In this episode, we sit down with Birago Jones, the visionary founder and CEO of Pienso, to explore the innovative intersection of artificial intelligence (AI) and human expertise. Discover how Pienso’s platform redefines AI applications by incorporating subject matter experts into the model training process.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>18</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">98b0e737-ab0d-4b62-b147-aebacdab472c</guid>
      <title>Radical Ventures: Investing Billions in the Future of Enterprise AI with Salim Teja</title>
      <description><![CDATA[<p>In this episode, we explore AI investing, entrepreneurship, and the future of the industry with Salim Teja, Partner at Radical Ventures, with over 30 years of experience in the tech sector.</p><p>Salim and our hosts delve into the differentiation of Cohere in the large language model (LLM) market, focusing on enterprise needs and cybersecurity. They also discuss the criteria for funding AI-inventing companies, emphasizing the importance of technical strength, product vision, and go-to-market skills.</p>
]]></description>
      <pubDate>Thu, 08 Aug 2024 04:00:00 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/radical-ventures-investing-billions-in-the-future-of-enterprise-ai-with-salim-teja-tGicMjbg</link>
      <content:encoded><![CDATA[<p>In this episode, we explore AI investing, entrepreneurship, and the future of the industry with Salim Teja, Partner at Radical Ventures, with over 30 years of experience in the tech sector.</p><p>Salim and our hosts delve into the differentiation of Cohere in the large language model (LLM) market, focusing on enterprise needs and cybersecurity. They also discuss the criteria for funding AI-inventing companies, emphasizing the importance of technical strength, product vision, and go-to-market skills.</p>
]]></content:encoded>
      <enclosure length="44575129" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/8cb19b40-447c-4800-86fb-2a9bc6691a65/audio/f279c417-cd01-40e9-8065-177d45d448f8/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Radical Ventures: Investing Billions in the Future of Enterprise AI with Salim Teja</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:46:24</itunes:duration>
      <itunes:summary>In this episode, we explore AI investing, entrepreneurship, and the future of the industry with Salim Teja, Partner at Radical Ventures, with over 30 years of experience in the tech sector.</itunes:summary>
      <itunes:subtitle>In this episode, we explore AI investing, entrepreneurship, and the future of the industry with Salim Teja, Partner at Radical Ventures, with over 30 years of experience in the tech sector.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>17</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">92377a34-bbee-4ad0-91d5-609bd1c40d5b</guid>
      <title>Transforming Enterprise AI with Alex Panait of Lazarus AI</title>
      <description><![CDATA[In this episode, we explore AI's transformative potential and challenges in the enterprise landscape with Alex Panait, VP of Strategy at Lazarus AI. Alex shares his journey from Romania to tech, emphasizing the importance of enterprise adoption for growth. We discuss proving AI's ROI, overcoming resistance, and practical advice on experimenting with AI and redesigning processes. Key innovations include Lazarus AI’s document understanding using positional embeddings and lightweight, industry-specific models. The episode also covers regulatory considerations, transparency, and real-world use cases like OCR and data extraction. Tune in for insights on AI's impact and strategies to stay competitive in the AI-driven market!
]]></description>
      <pubDate>Thu, 25 Jul 2024 09:00:00 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/transforming-enterprise-ai-with-alex-panait-of-lazarus-ai-9gJP04lm</link>
      <enclosure length="49956733" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/f65b665d-df22-4f72-a695-d6cb809ed52b/audio/87a701d5-f8b3-4366-81bd-1f88d62382fe/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Transforming Enterprise AI with Alex Panait of Lazarus AI</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:51:59</itunes:duration>
      <itunes:summary>In this episode, we explore AI&apos;s transformative potential and challenges in the enterprise landscape with Alex Panait, VP of Strategy at Lazarus AI. Alex shares his journey from Romania to tech, emphasizing the importance of enterprise adoption for growth. We discuss proving AI&apos;s ROI, overcoming resistance, and practical advice on experimenting with AI and redesigning processes. Key innovations include Lazarus AI’s document understanding using positional embeddings and lightweight, industry-specific models. The episode also covers regulatory considerations, transparency, and real-world use cases like OCR and data extraction. Tune in for insights on AI&apos;s impact and strategies to stay competitive in the AI-driven market!</itunes:summary>
      <itunes:subtitle>In this episode, we explore AI&apos;s transformative potential and challenges in the enterprise landscape with Alex Panait, VP of Strategy at Lazarus AI. Alex shares his journey from Romania to tech, emphasizing the importance of enterprise adoption for growth. We discuss proving AI&apos;s ROI, overcoming resistance, and practical advice on experimenting with AI and redesigning processes. Key innovations include Lazarus AI’s document understanding using positional embeddings and lightweight, industry-specific models. The episode also covers regulatory considerations, transparency, and real-world use cases like OCR and data extraction. Tune in for insights on AI&apos;s impact and strategies to stay competitive in the AI-driven market!</itunes:subtitle>
      <itunes:keywords>process redesign, critical business processes, model accuracy, ai errors, enterprise adoption, context window, revenue generation, competitive advantage, process change, company growth, technology experimentation, factual reasoning, ai technology, positional embeddings, transformation, capital commitment, resistance to change, sustainability, ocr documents, bounding box coordinates, potential risks, data extraction, return on investment, roi, hallucination ratio, document structure, content classification, language model, people change, greenfield operations</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>15</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c41f2fb9-f4f1-4a2d-9cb7-2b799ba424a7</guid>
      <title>Revolutionizing Video Generation with Genmo AI</title>
      <description><![CDATA[<p>In this episode, host Karthik Ramakrishnan sits down with the minds behind Genmo AI, Paras Jain and Ajay Jain. Join us as we delve into the groundbreaking advancements in video AI generation, including the ability to detect and respond to harmful content, orchestrate GPU resources for real-time streaming, and leverage diffusion models to create high-resolution videos. Discover how Genmo AI is poised to revolutionize the content creation industry, empowering users and developers. This episode is packed with insights, future projections, and the innovative spirit driving Genmo AI forward.<br /> </p>
]]></description>
      <pubDate>Thu, 11 Jul 2024 09:00:00 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/revolutionizing-video-generation-with-genmo-ai-x3EMMvJ_</link>
      <content:encoded><![CDATA[<p>In this episode, host Karthik Ramakrishnan sits down with the minds behind Genmo AI, Paras Jain and Ajay Jain. Join us as we delve into the groundbreaking advancements in video AI generation, including the ability to detect and respond to harmful content, orchestrate GPU resources for real-time streaming, and leverage diffusion models to create high-resolution videos. Discover how Genmo AI is poised to revolutionize the content creation industry, empowering users and developers. This episode is packed with insights, future projections, and the innovative spirit driving Genmo AI forward.<br /> </p>
]]></content:encoded>
      <enclosure length="41352629" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/0265966b-ba3a-47ef-b2f4-0b76d53fffca/audio/b3173526-aba9-4929-bc7b-6697d20d6e9b/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Revolutionizing Video Generation with Genmo AI</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:43:04</itunes:duration>
      <itunes:summary>In this episode, host Karthik Ramakrishnan sits down with the minds behind Genmo AI, Paras Jain and Ajay Jain. Join us as we delve into the groundbreaking advancements in video AI generation, including the ability to detect and respond to harmful content, orchestrate GPU resources for real-time streaming, and leverage diffusion models to create high-resolution videos. Discover how Genmo AI is poised to revolutionize the content creation industry, empowering users and developers. This episode is packed with insights, future projections, and the innovative spirit driving Genmo AI forward.</itunes:summary>
      <itunes:subtitle>In this episode, host Karthik Ramakrishnan sits down with the minds behind Genmo AI, Paras Jain and Ajay Jain. Join us as we delve into the groundbreaking advancements in video AI generation, including the ability to detect and respond to harmful content, orchestrate GPU resources for real-time streaming, and leverage diffusion models to create high-resolution videos. Discover how Genmo AI is poised to revolutionize the content creation industry, empowering users and developers. This episode is packed with insights, future projections, and the innovative spirit driving Genmo AI forward.</itunes:subtitle>
      <itunes:keywords>diffusion models, content creation, ethical considerations, ai content creation, real-time streaming, video rendering time, copyright violations, video quality, technology safeguards, video uploads, gpu resources, innovation, generative ai, deep learning, compute power, input filters, red teaming, deepfakes, mitigating risks, ai model watermarking, video generation, ai industry expansion, startup advice, community standards, output filters, memory constraints, user engagement, r&amp;d investment, academic to commercial transition, ai competition, technical expertise</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>14</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6021f6a0-3b4f-4054-a5c7-74059deb868d</guid>
      <title>Avatars, Digital Companions, and Ethical Considerations with Darren Wilson</title>
      <description><![CDATA[<p>In this episode of "People & AI," host Karthik Ramakrishnan engages in a thought-provoking discussion with Darren Wilson, the chief product officer at Soul Machines. With over two decades of experience in product design and development, Darren dives into the transformative potential of avatar technology, highlighting its applications from language practice and personal coaching to enterprise and film industry uses. Together, they explore the ethical considerations and guardrails needed in creating digital companions and therapy bots, share insights on the future of AI in improving workflows, and emphasize the importance of speed, agility, and customer value in the fast-paced startup world. Join Karthik and Darren as they balance groundbreaking innovation and empathetic leadership in the evolving landscape of AI and digital avatars.</p>
]]></description>
      <pubDate>Thu, 20 Jun 2024 09:00:00 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/avatars-digital-companions-and-ethical-considerations-with-darren-wilson-Z__buYx1</link>
      <content:encoded><![CDATA[<p>In this episode of "People & AI," host Karthik Ramakrishnan engages in a thought-provoking discussion with Darren Wilson, the chief product officer at Soul Machines. With over two decades of experience in product design and development, Darren dives into the transformative potential of avatar technology, highlighting its applications from language practice and personal coaching to enterprise and film industry uses. Together, they explore the ethical considerations and guardrails needed in creating digital companions and therapy bots, share insights on the future of AI in improving workflows, and emphasize the importance of speed, agility, and customer value in the fast-paced startup world. Join Karthik and Darren as they balance groundbreaking innovation and empathetic leadership in the evolving landscape of AI and digital avatars.</p>
]]></content:encoded>
      <enclosure length="39360208" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/2e8dcbfa-4b7e-479c-b469-025a1c384837/audio/0d118c24-2c15-4bc8-bcb7-c3dd688c3b1c/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Avatars, Digital Companions, and Ethical Considerations with Darren Wilson</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:keywords>avatar technology, digital companions, conversational content, chatgpt technology, language practice partners, personal coaching, enterprise applications, movie industry, consumer use cases, ai technology, scarlett johansson, product adaptability, startup industry, innovation, customer value, end-user needs, soul machines studio, ai workflows, whatsapp user base, personal assistants, engaging content, ethical concerns, digital avatars, deceased loved ones, therapist bot, industry guardrails, empathetic leadership, meaningful innovation, career development, tech-driven environments, fjord</itunes:keywords>
      <itunes:duration>00:40:59</itunes:duration>
      <itunes:summary>In this episode of &quot;People &amp; AI,&quot; host Karthik Ramakrishnan engages in a thought-provoking discussion with Darren Wilson, the chief product officer at Soul Machines, on the transformative potential of avatar technology and the ethical guardrails needed for digital companions.</itunes:summary>
      <itunes:subtitle>In this episode of &quot;People &amp; AI,&quot; host Karthik Ramakrishnan engages in a thought-provoking discussion with Darren Wilson, the chief product officer at Soul Machines, on the transformative potential of avatar technology and the ethical guardrails needed for digital companions.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>13</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b7ff7420-bad6-412d-a29a-7980cc6c73f3</guid>
      <title>Bridging AI and Pathology with Julianna Ianni</title>
      <description><![CDATA[<p>In this episode, Julianna Ianni from Proscia discusses the transformative power of AI in medical imaging and pathology with host Karthik Ramakrishnan. They explore the importance of diverse data for model development, the global applicability of AI models, and the exciting future of AI in accelerating diagnosis and improving accuracy. Julianna emphasizes the need to understand business aspects and improve communication skills for researchers. They also discuss Proscia's innovations in digitizing pathology slides, automated quality control, and balancing innovation with practicality. The conversation highlights regulatory challenges, the impact on healthcare accessibility, and advice for women in the AI industry. The podcast concludes with Julianna's career journey and key breakthroughs in AI, including DeepMind's AlphaFold.</p>
]]></description>
      <pubDate>Thu, 06 Jun 2024 09:00:00 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/bridging-ai-and-pathology-with-julianna-ianni-E09UtUkE</link>
      <content:encoded><![CDATA[<p>In this episode, Julianna Ianni from Proscia discusses the transformative power of AI in medical imaging and pathology with host Karthik Ramakrishnan. They explore the importance of diverse data for model development, the global applicability of AI models, and the exciting future of AI in accelerating diagnosis and improving accuracy. Julianna emphasizes the need to understand business aspects and improve communication skills for researchers. They also discuss Proscia's innovations in digitizing pathology slides, automated quality control, and balancing innovation with practicality. The conversation highlights regulatory challenges, the impact on healthcare accessibility, and advice for women in the AI industry. The podcast concludes with Julianna's career journey and key breakthroughs in AI, including DeepMind's AlphaFold.</p>
]]></content:encoded>
      <enclosure length="34091550" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/3c723a6c-00c4-485f-a893-c1bc80207927/audio/f96b7666-4a1e-4185-ab08-ecf18d5bde39/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Bridging AI and Pathology with Julianna Ianni</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:40:35</itunes:duration>
      <itunes:summary>In this episode, Julianna Ianni from Proscia discusses with host Karthik Ramakrishnan the transformative power of AI in medical imaging and pathology, emphasizing the importance of diverse data, global applicability, and the future of AI in diagnosis. They cover Proscia’s innovations, regulatory challenges, and offer advice for women in AI, concluding with Julianna’s career journey and breakthroughs like DeepMind’s AlphaFold.</itunes:summary>
      <itunes:subtitle>In this episode, Julianna Ianni from Proscia discusses with host Karthik Ramakrishnan the transformative power of AI in medical imaging and pathology, emphasizing the importance of diverse data, global applicability, and the future of AI in diagnosis. They cover Proscia’s innovations, regulatory challenges, and offer advice for women in AI, concluding with Julianna’s career journey and breakthroughs like DeepMind’s AlphaFold.</itunes:subtitle>
      <itunes:keywords>diverse data, deepmind&apos;s alphafold, automated quality control, overcoming imposter syndrome, us model applicability, chatgpt, diversity in data, balancing innovation, generative ai, vision models, digitizing pathology slides, generative ai challenges, fda regulation, communication skills, commercial side of research, best practices for ai, ai in medical applications, women in ai industry, productizing ai models, diagnostic accuracy, subjective ground truth, developer tools, foundation models, regulatory challenges, language models, commercial aspect of ai, proscia automated quality control, accessibility of medical care, medical imaging, clinical environment, drug development regulation</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>12</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">41abb869-f756-40d1-981f-3e3c25ba082d</guid>
      <title>Remote Work and the Future of Teams with Dan Chuparkoff</title>
      <description><![CDATA[<p>In this episode of 'People + AI,' host Karthik Ramakrishnan converses with AI and innovation expert Dan Chuparkoff. They dive into the misconceptions and realities of AI's role in the future of work, emphasizing the irreplaceable value of human creativity and the necessity for adaptability. Chuparkoff provides a nuanced discussion on the limitations of AI, the importance of emotional connection and autonomy in teams, and the significant shift required in leadership mindsets to navigate AI transformation successfully. Through anecdotes from his impactful career at Google, McKinsey, and Atlassian, Chuparkoff shares insights on digital transformation, the democratization of information, and the evolving landscape of remote work and decision-making. Tune in to explore the interplay of human intellect and artificial intelligence in shaping the future workplace.</p>
]]></description>
      <pubDate>Thu, 23 May 2024 09:00:00 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/remote-work-and-the-future-of-teams-with-dan-chuparkoff-RTC6bpfJ</link>
      <content:encoded><![CDATA[<p>In this episode of 'People + AI,' host Karthik Ramakrishnan converses with AI and innovation expert Dan Chuparkoff. They dive into the misconceptions and realities of AI's role in the future of work, emphasizing the irreplaceable value of human creativity and the necessity for adaptability. Chuparkoff provides a nuanced discussion on the limitations of AI, the importance of emotional connection and autonomy in teams, and the significant shift required in leadership mindsets to navigate AI transformation successfully. Through anecdotes from his impactful career at Google, McKinsey, and Atlassian, Chuparkoff shares insights on digital transformation, the democratization of information, and the evolving landscape of remote work and decision-making. Tune in to explore the interplay of human intellect and artificial intelligence in shaping the future workplace.</p>
]]></content:encoded>
      <enclosure length="43693221" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/2ed91504-86d9-40e3-bb6b-5cf3ff6b3f1b/audio/9d5675dc-1f1d-4e90-9169-2f355dbfaa99/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Remote Work and the Future of Teams with Dan Chuparkoff</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:45:30</itunes:duration>
      <itunes:summary></itunes:summary>
      <itunes:subtitle></itunes:subtitle>
      <itunes:keywords>human intervention in ai, ai democratizing information, mentorship programs, future of work, emotional connection at work, autonomy in teams, ai adoption readiness, leadership in ai, decision-making in ai, digital transformation, technical specialists, learning in remote environment, workplace adaptability, innovation, ai transformation, traditional industries, ai limitations, productivity without meetings, reducing meeting times, conversational customer service, real-time translation, human creativity, ai agents, ai collaboration, team decision-making, non-english speakers translation, roles and responsibilities, ai in meetings, self-driving organization, remote work, ai advancements</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>11</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9a87d219-30c6-4979-b6de-99181229ee88</guid>
      <title>Dr. Yoshua Bengio on Ethical AI and Navigating Moral Machines</title>
      <description><![CDATA[<p>In this episode of "People + AI," which first aired in February 2022, host Karthik Ramakrishnan sits down with renowned AI luminary Dr. Yoshua Bengio to delve into the cutting edge of artificial intelligence research and its practical applications. Dr. Bengio, a pivotal figure in developing deep learning technologies, shares his profound insights on the necessity of aligning AI systems with human ethics, the challenges in achieving out-of-distribution generalization, and the critical need for responsible AI governance globally. Highlighting his groundbreaking work on system one and system two thinking integration and modular neural networks, Dr. Bengio also discusses the societal and ethical implications of AI, including the mitigation of biases and the potential to significantly impact fields such as healthcare, environmental sustainability, and global security. This episode not only illuminates Dr. Bengio’s contributions to AI but also explores the broader landscape of AI development, invoking a thoughtful reflection on how this technology should evolve in harmony with human values and global needs.</p>
]]></description>
      <pubDate>Thu, 09 May 2024 09:00:00 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/dr-yoshua-bengio-on-ethical-ai-and-navigating-moral-machines-JPlAuvDB</link>
      <content:encoded><![CDATA[<p>In this episode of "People + AI," which first aired in February 2022, host Karthik Ramakrishnan sits down with renowned AI luminary Dr. Yoshua Bengio to delve into the cutting edge of artificial intelligence research and its practical applications. Dr. Bengio, a pivotal figure in developing deep learning technologies, shares his profound insights on the necessity of aligning AI systems with human ethics, the challenges in achieving out-of-distribution generalization, and the critical need for responsible AI governance globally. Highlighting his groundbreaking work on system one and system two thinking integration and modular neural networks, Dr. Bengio also discusses the societal and ethical implications of AI, including the mitigation of biases and the potential to significantly impact fields such as healthcare, environmental sustainability, and global security. This episode not only illuminates Dr. Bengio’s contributions to AI but also explores the broader landscape of AI development, invoking a thoughtful reflection on how this technology should evolve in harmony with human values and global needs.</p>
]]></content:encoded>
      <enclosure length="37475842" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/5c57a46e-59b0-41e0-b520-beb1a928d1c8/audio/1a92ce5f-4ec7-4cb5-8e4a-3ae447be9c52/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Dr. Yoshua Bengio on Ethical AI and Navigating Moral Machines</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:44:36</itunes:duration>
      <itunes:summary></itunes:summary>
      <itunes:subtitle></itunes:subtitle>
      <itunes:keywords>system one and two thinking, out-of-distribution generalization, modeling causality, ai movies, global partnership on ai, modular neural networks, moral machines, ethical ai, g flownets, consciousness priors, societal norms, open research, ai regulation, racial bias in ai, neural networks, deep learning, healthcare ai, democracy and human rights, ai for climate change, intuitive understanding of physics, legal systems, dr. yoshua bengio, universal moral instincts, environmental ai, verbal language understanding, artificial intelligence, antimicrobial resistance, data selection, transfer learning, ai competition</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>10</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">52a23b2c-1470-4963-b01e-eae2fd106d9f</guid>
      <title>AI Enhancing Customer Interactions with George Davis</title>
      <description><![CDATA[<p>Karthik Ramakrishnan sits down with George Davis, the visionary founder and CEO of Frame AI. George shares his journey from academia at Carnegie Mellon to revolutionizing customer support with AI through his company, Frame AI. The episode delves into how Frame AI leverages generative AI and real-time analysis to transform unstructured data from customer interactions into actionable insights, vastly improving customer relationships and reducing operational costs.</p>
]]></description>
      <pubDate>Thu, 25 Apr 2024 09:00:00 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/ai-enhancing-customer-interactions-with-george-davis-Ex7Jdu1i</link>
      <content:encoded><![CDATA[<p>Karthik Ramakrishnan sits down with George Davis, the visionary founder and CEO of Frame AI. George shares his journey from academia at Carnegie Mellon to revolutionizing customer support with AI through his company, Frame AI. The episode delves into how Frame AI leverages generative AI and real-time analysis to transform unstructured data from customer interactions into actionable insights, vastly improving customer relationships and reducing operational costs.</p>
]]></content:encoded>
      <enclosure length="49968441" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/45dffa69-e903-45f4-9c60-f124b5dbd110/audio/c1ce8fad-26c1-4b97-8f1b-48185bd2cacd/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>AI Enhancing Customer Interactions with George Davis</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:52:02</itunes:duration>
      <itunes:summary>Karthik Ramakrishnan sits down with George Davis, the visionary founder and CEO of Frame AI. George shares his journey from academia at Carnegie Mellon to revolutionizing customer support with AI through his company, Frame AI. The episode delves into how Frame AI leverages generative AI and real-time analysis to transform unstructured data from customer interactions into actionable insights, vastly improving customer relationships and reducing operational costs.</itunes:summary>
      <itunes:subtitle>Karthik Ramakrishnan sits down with George Davis, the visionary founder and CEO of Frame AI. George shares his journey from academia at Carnegie Mellon to revolutionizing customer support with AI through his company, Frame AI. The episode delves into how Frame AI leverages generative AI and real-time analysis to transform unstructured data from customer interactions into actionable insights, vastly improving customer relationships and reducing operational costs.</itunes:subtitle>
      <itunes:keywords>ai automation, real-time data analysis, data governance, csat prediction, ai customization, consent environments, ai deployment, ai ethics, data curation, bert-based models, generative ai, job opportunities in ai, customer effort estimation, unstructured data, frame ai, streaming data, proactive insights, rag systems, machine learning, cloud architecture, customer support, financial services, enterprise ai applications, cross-sell identification, attention management, natural language processing (nlp), health industry, transformer architectures, domain knowledge in machine learning</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>9</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9d4139bf-7973-465c-89be-b0cbd4719e13</guid>
      <title>Decoding AI Ethics and Governance with Var Shankar</title>
      <description><![CDATA[<p>In this episode of People + AI, host Karthik Ramakrishnan engages with Var Shankar, Executive Director of the Responsible AI Institute. Together, they dissect the complexities and necessities of advocating for ethical artificial intelligence – a critical conversation in today's advancing tech world. Var brings knowledge from diverse arenas, including international policymaking and legal academia, to illuminate the operational challenges and international standards shaping responsible AI. Listen in as they delve into the intersection of law and AI governance, AI implementation in enterprises, and decode policy instruments like the G7 code of conduct and ISO 42001. This episode is a must-listen for anyone passionate about the implications and evolution of AI ethics and governance.</p>
]]></description>
      <pubDate>Thu, 11 Apr 2024 08:00:00 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/decoding-ai-ethics-and-governance-with-var-shankar-qKRr69YJ</link>
      <content:encoded><![CDATA[<p>In this episode of People + AI, host Karthik Ramakrishnan engages with Var Shankar, Executive Director of the Responsible AI Institute. Together, they dissect the complexities and necessities of advocating for ethical artificial intelligence – a critical conversation in today's advancing tech world. Var brings knowledge from diverse arenas, including international policymaking and legal academia, to illuminate the operational challenges and international standards shaping responsible AI. Listen in as they delve into the intersection of law and AI governance, AI implementation in enterprises, and decode policy instruments like the G7 code of conduct and ISO 42001. This episode is a must-listen for anyone passionate about the implications and evolution of AI ethics and governance.</p>
]]></content:encoded>
      <enclosure length="34950319" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/014b8162-97de-4db7-946a-7ccd211c6104/audio/032e2cff-b242-4985-9b9e-bb595bef8be8/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Decoding AI Ethics and Governance with Var Shankar</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:36:24</itunes:duration>
      <itunes:summary>This episode of &apos;People + AI&apos; features Var Shankar, Executive Director of the Responsible AI Institute, discussing the landscape of AI ethics, governance, challenges in operationalization, international standards&apos; impact on national policies, and offering insights into the future of responsible AI development.
</itunes:summary>
      <itunes:subtitle>This episode of &apos;People + AI&apos; features Var Shankar, Executive Director of the Responsible AI Institute, discussing the landscape of AI ethics, governance, challenges in operationalization, international standards&apos; impact on national policies, and offering insights into the future of responsible AI development.
</itunes:subtitle>
      <itunes:keywords>ai explainability, ai use cases, podcast success, ai upskilling and training, ai adoption challenges, regulatory compliance, ai technological change, ai risk assessments, responsible ai institute, ai ethics, ai industry impact, ai legal standards, linkedin networking, high-quality data access, ai program assessments, ai in customer support, ai privacy issues, ai budget efficiency, responsible ai, ai ethical governance, ai investment skepticism, ai in creative industries, generative ai hype, ai governance ecosystem, canada ai funding, ai adoption trends, responsible ai certification, ai accuracy, ai policy instruments, ai governance models</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>8</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">734828ec-120d-49e5-9eac-cf8a803c870d</guid>
      <title>Navigating Privacy in AI with Patricia Thaine</title>
      <description><![CDATA[<p>In this episode of People + AI, join host Karthik Ramakrishnan and his guest, Patricia Thaine, CEO and co-founder of Private AI, as they unravel the complexities of AI in a privacy-centric world. Patricia illuminates the risks of creating embeddings with personal information and shares essential advice on selling technical products to large enterprises. The duo discusses the nuances of deploying AI models, the intricacies of data privacy regulations, and the evolving business landscapes in the post-GPT era. With a rich linguistics and computer science background, Patricia offers unique insights into the importance of privacy in innovation, revealing how Private AI's technology is pioneering data anonymization. Do not miss this insightful conversation filled with expert analysis, personal anecdotes, and a shared admiration for the intersection of AI and privacy.</p>
]]></description>
      <pubDate>Thu, 28 Mar 2024 09:00:00 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/navigating-privacy-in-ai-with-patricia-thaine-VHSRtozf</link>
      <content:encoded><![CDATA[<p>In this episode of People + AI, join host Karthik Ramakrishnan and his guest, Patricia Thaine, CEO and co-founder of Private AI, as they unravel the complexities of AI in a privacy-centric world. Patricia illuminates the risks of creating embeddings with personal information and shares essential advice on selling technical products to large enterprises. The duo discusses the nuances of deploying AI models, the intricacies of data privacy regulations, and the evolving business landscapes in the post-GPT era. With a rich linguistics and computer science background, Patricia offers unique insights into the importance of privacy in innovation, revealing how Private AI's technology is pioneering data anonymization. Do not miss this insightful conversation filled with expert analysis, personal anecdotes, and a shared admiration for the intersection of AI and privacy.</p>
]]></content:encoded>
      <enclosure length="31329354" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/c88b14ff-cba0-47aa-b496-3a9d84c5ba3d/audio/dee04416-b3df-4c13-9bc0-a36f1fefd44a/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Navigating Privacy in AI with Patricia Thaine</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:37:17</itunes:duration>
      <itunes:summary>In this episode of People + AI, join host Karthik Ramakrishnan and his guest, Patricia Thaine, CEO and co-founder of Private AI, as they unravel the complexities of AI in a privacy-centric world. Patricia illuminates the risks of creating embeddings with personal information and shares essential advice on selling technical products to large enterprises. The duo discusses the nuances of deploying AI models, the intricacies of data privacy regulations, and the evolving business landscapes in the post-GPT era. With a rich linguistics and computer science background, Patricia offers unique insights into the importance of privacy in innovation, revealing how Private AI&apos;s technology is pioneering data anonymization. Do not miss this insightful conversation filled with expert analysis, personal anecdotes, and a shared admiration for the intersection of AI and privacy.</itunes:summary>
      <itunes:subtitle>In this episode of People + AI, join host Karthik Ramakrishnan and his guest, Patricia Thaine, CEO and co-founder of Private AI, as they unravel the complexities of AI in a privacy-centric world. Patricia illuminates the risks of creating embeddings with personal information and shares essential advice on selling technical products to large enterprises. The duo discusses the nuances of deploying AI models, the intricacies of data privacy regulations, and the evolving business landscapes in the post-GPT era. With a rich linguistics and computer science background, Patricia offers unique insights into the importance of privacy in innovation, revealing how Private AI&apos;s technology is pioneering data anonymization. Do not miss this insightful conversation filled with expert analysis, personal anecdotes, and a shared admiration for the intersection of AI and privacy.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>7</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8af20e49-334b-4679-b39b-7fb729f1d539</guid>
      <title>AI, Risk Management, and ISO Standards with Marta Janczarski</title>
      <description><![CDATA[<p>On this episode of People & AI, host Karthik Ramakrishnan welcomes Marta Janczarski and Phil Dawson for an in-depth discussion on ISO standards, regulatory compliance, and AI systems. Marta, an expert in AI governance, delves into the intricate world of ISO standards she helped create, like 9001 and 27001, and their adaptability for organizations of all sizes, alongside the newly introduced ISO 42005 for impact assessment. Phil queries the practicality of these standards in the face of evolving regulations, sparking a comprehensive conversation on how ISO certifications serve as a foundation for risk management and play a pivotal role in the burgeoning relationship between AI standards and the insurance industry. The trio also explores the development of industry-specific handbooks and the necessity for standards to evolve with AI technology, reinforcing the importance of a common language across stakeholders in AI governance and risk mitigation.</p>
]]></description>
      <pubDate>Thu, 14 Mar 2024 09:00:00 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/ai-risk-management-and-iso-standards-with-marta-janczarski-HOvvmToo</link>
      <content:encoded><![CDATA[<p>On this episode of People & AI, host Karthik Ramakrishnan welcomes Marta Janczarski and Phil Dawson for an in-depth discussion on ISO standards, regulatory compliance, and AI systems. Marta, an expert in AI governance, delves into the intricate world of ISO standards she helped create, like 9001 and 27001, and their adaptability for organizations of all sizes, alongside the newly introduced ISO 42005 for impact assessment. Phil queries the practicality of these standards in the face of evolving regulations, sparking a comprehensive conversation on how ISO certifications serve as a foundation for risk management and play a pivotal role in the burgeoning relationship between AI standards and the insurance industry. The trio also explores the development of industry-specific handbooks and the necessity for standards to evolve with AI technology, reinforcing the importance of a common language across stakeholders in AI governance and risk mitigation.</p>
]]></content:encoded>
      <enclosure length="34900550" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/4b6714b4-6c14-4b91-b3ec-2573822bceff/audio/ce591784-3c7b-468f-9cf2-cd8f5b06d5c1/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>AI, Risk Management, and ISO Standards with Marta Janczarski</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:41:32</itunes:duration>
      <itunes:summary></itunes:summary>
      <itunes:subtitle></itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>6</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">30b4d569-b2bb-493d-80f3-b0e121ee64cf</guid>
      <title>Future of AI: 2024 Prediction on Regulation, Risk, and Innovation</title>
      <description><![CDATA[<p>After a long break, People + AI is back! Host Karthik Ramakrishnan welcomes Armilla AI co-founders Phil Dawson and Dan Adamson for a deep dive into the pivotal advancements and regulatory landscapes surrounding AI in the coming year. The trio dissects the transformative effects of generative AI in enterprise environments, particularly in light of the IMF's reports on labor impacts and the surging interest from companies like Anthropic. They navigate the complex discussions of AI governance with a lens on the EU's forthcoming AI Act and the active regulatory debates in North America. The episode showcases the evolution of AI's capabilities, notably in generative AI models, and their predictions for open-source model disruption in the enterprise sector. Join the conversation as they address AI's precarious intersection with copyright, privacy, and standards and ponder the industry's strides toward equitable and safe AI deployment. From high-stakes regulation to in-house AI development, People + AI promises a comprehensive discourse on the state and future of artificial intelligence.</p>
]]></description>
      <pubDate>Thu, 29 Feb 2024 09:00:00 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/future-of-ai-2024-prediction-on-regulation-risk-and-innovation-Mec1GFxr</link>
      <content:encoded><![CDATA[<p>After a long break, People + AI is back! Host Karthik Ramakrishnan welcomes Armilla AI co-founders Phil Dawson and Dan Adamson for a deep dive into the pivotal advancements and regulatory landscapes surrounding AI in the coming year. The trio dissects the transformative effects of generative AI in enterprise environments, particularly in light of the IMF's reports on labor impacts and the surging interest from companies like Anthropic. They navigate the complex discussions of AI governance with a lens on the EU's forthcoming AI Act and the active regulatory debates in North America. The episode showcases the evolution of AI's capabilities, notably in generative AI models, and their predictions for open-source model disruption in the enterprise sector. Join the conversation as they address AI's precarious intersection with copyright, privacy, and standards and ponder the industry's strides toward equitable and safe AI deployment. From high-stakes regulation to in-house AI development, People + AI promises a comprehensive discourse on the state and future of artificial intelligence.</p>
]]></content:encoded>
      <enclosure length="36636636" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/d5a78240-832d-4577-abd4-9895394b16f4/audio/1c526793-e454-4021-8dcb-be6f2c98a03a/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Future of AI: 2024 Prediction on Regulation, Risk, and Innovation</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:43:36</itunes:duration>
      <itunes:summary>In this episode of People + AI, the founders of Armilla AI discuss significant AI developments in 2023, focusing on the impact of generative AI on the workforce, changes in global AI regulation, and predictions for the forthcoming year. 
</itunes:summary>
      <itunes:subtitle>In this episode of People + AI, the founders of Armilla AI discuss significant AI developments in 2023, focusing on the impact of generative AI on the workforce, changes in global AI regulation, and predictions for the forthcoming year. 
</itunes:subtitle>
      <itunes:keywords>in-house ai models, ai in creative professions, general-purpose ai models, ai laws in us, ethical ai, enterprise adoption, ai governance, eu ai act, people and ai podcast, labor force, ai deployment risks, risk-based approach, ai laws in canada, generative ai, ai technology, ai regulation, imf report, revenue forecast 2024, ai accountability, closed source ai models, ai spending, copyright in ai, open source ai models, privacy in ai, responsible ai development, ai risk management, executive orders on ai, ai in white-collar professions, ai standards, ai safety</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>5</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2d2717e0-d21f-4024-bd46-805987992cdb</guid>
      <title>Barclays CDO &amp; CIO Sheetal Patole on Data</title>
      <description><![CDATA[<p>The use of AI in the context of enterprise is commonly misunderstood, but today’s guest is an expert in this field, and she joins us today to dispel some pervasive myths. Sheetal Patole is currently the CIO for customer engagement, data, insights, and operations at Barclays UK. Her prior range of experience spans from healthcare to mining, but she has always been focused on introducing new technology to transform organizations. In this episode, Sheetal shares some of the use cases that she has worked on during her career, and how she uses a combination of data, hardware, and software to solve real problems that businesses are experiencing. She also explains what is needed to implement AI at scale, why you shouldn’t wed yourself to particular models and techniques, what can be done about the problem of talent acquisition being faced by the AI industry, the importance of AI governance in real-time, and why, despite what you may have heard, AI really does need people! 
</p><p>Key Points From This Episode:</p><ul><li>The breadth of experience and field of expertise of today’s guest, Sheetal Patole.</li><li>Sheetal explains the purpose of AI in the context of an enterprise, which is commonly misunderstood.</li><li>Examples of the different ways that AI is used in different industries.</li><li>Benefits of the autonomous trucks that are being used on mines.</li><li>Sheetal shares how she and her team used AI to solve a transportation challenge.</li><li>How to determine whether a business challenge can be solved using AI.</li><li>Dos and don’ts when it comes to putting together a strategy for solving a business problem.</li><li>The importance of having a coherent and mature strategy across an entire organization before implementing AI at scale.</li><li>How to cultivate a level of maturity in an organization.</li><li>Sheetal’s recommendation for navigating inter-team dynamics in an organization.</li><li>A challenge that the AI industry is currently facing, and Sheetal’s thoughts on how it can be combated.</li><li>AI governance in real-time; Sheetal shares her thoughts on why this is necessary, and how it should be implemented.</li><li>What Sheetal wishes she could do differently, advice to her younger self, and an element of the AI realm that she overestimated.</li></ul><p>Tweetables:</p><p>“Using data intelligently and moving away from a system of record to a system of intelligence that allows your organization to learn from this information and data sets to drive change.” — Sheetal Patole [0:03:18]</p><p>“[AI] has never been something just purely to make money, it’s been to transform the organization from where it is, and it’s always gone to somewhere better.” — Sheetal Patole [0:08:36]</p><p>“If you want to do data analytics and AI at scale, you need to build a level of maturity in your organization to be able to do it. 
The way you build that maturity is first to centralize the team.” — Sheetal Patole [0:27:57]</p><p>“AI is not a mythical creature or black box, it requires human beings.” — Sheetal Patole [0:38:35]</p><p>Links Mentioned in Today’s Episode:</p><p><a href="https://www.linkedin.com/in/sheetal-patole-1323622/">Sheetal Patole on LinkedIn</a></p><p><a href="https://www.barclays.co.uk/">Barclays UK</a></p>
]]></description>
      <pubDate>Thu, 7 Apr 2022 19:58:08 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/barclays-cdo-cio-data-sheetal-patole-tAlsHf9q</link>
      <content:encoded><![CDATA[<p>The use of AI in the context of enterprise is commonly misunderstood, but today’s guest is an expert in this field, and she joins us to dispel some pervasive myths. Sheetal Patole is currently the CIO for customer engagement, data, insights, and operations at Barclays UK. Her prior experience spans industries from healthcare to mining, and she has always focused on introducing new technology to transform organizations. In this episode, Sheetal shares some of the use cases she has worked on during her career, and how she uses a combination of data, hardware, and software to solve real problems that businesses are experiencing. She also explains what is needed to implement AI at scale, why you shouldn’t wed yourself to particular models and techniques, what can be done about the talent acquisition problem facing the AI industry, the importance of real-time AI governance, and why, despite what you may have heard, AI really does need people!
</p><p>Key Points From This Episode:</p><ul><li>The breadth of experience and field of expertise of today’s guest, Sheetal Patole.</li><li>Sheetal explains the purpose of AI in the context of an enterprise, which is commonly misunderstood.</li><li>Examples of the different ways that AI is used in different industries.</li><li>Benefits of the autonomous trucks that are being used on mines.</li><li>Sheetal shares how she and her team used AI to solve a transportation challenge.</li><li>How to determine whether a business challenge can be solved using AI.</li><li>Dos and don’ts when it comes to putting together a strategy for solving a business problem.</li><li>The importance of having a coherent and mature strategy across an entire organization before implementing AI at scale.</li><li>How to cultivate a level of maturity in an organization.</li><li>Sheetal’s recommendation for navigating inter-team dynamics in an organization.</li><li>A challenge that the AI industry is currently facing, and Sheetal’s thoughts on how it can be combated.</li><li>AI governance in real-time; Sheetal shares her thoughts on why this is necessary, and how it should be implemented.</li><li>What Sheetal wishes she could do differently, advice to her younger self, and an element of the AI realm that she overestimated.</li></ul><p>Tweetables:</p><p>“Using data intelligently and moving away from a system of record to a system of intelligence that allows your organization to learn from this information and data sets to drive change.” — Sheetal Patole [0:03:18]</p><p>“[AI] has never been something just purely to make money, it’s been to transform the organization from where it is, and it’s always gone to somewhere better.” — Sheetal Patole [0:08:36]</p><p>“If you want to do data analytics and AI at scale, you need to build a level of maturity in your organization to be able to do it. 
The way you build that maturity is first to centralize the team.” — Sheetal Patole [0:27:57]</p><p>“AI is not a mythical creature or black box, it requires human beings.” — Sheetal Patole [0:38:35]</p><p>Links Mentioned in Today’s Episode:</p><p><a href="https://www.linkedin.com/in/sheetal-patole-1323622/">Sheetal Patole on LinkedIn</a></p><p><a href="https://www.barclays.co.uk/">Barclays UK</a></p>
]]></content:encoded>
      <enclosure length="40005094" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/e58871ef-953d-45ad-aa41-d47955a9c461/audio/5c00caed-8470-4292-a404-0193a87a30ae/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Barclays CDO &amp; CIO Sheetal Patole</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:44:03</itunes:duration>
      <itunes:summary>Sheetal Patole is the CDO &amp; CIO for customer engagement, data, insights, and operations at Barclays UK. Her range of experience spans from healthcare to mining, and she has always been focused on introducing new technology to transform organizations. In this episode, Sheetal shares some of the use cases that she has worked on during her career, and how AI is used in combination with data, hardware, and software to solve real business problems.</itunes:summary>
      <itunes:subtitle>Sheetal Patole is the CDO &amp; CIO for customer engagement, data, insights, and operations at Barclays UK. Her range of experience spans from healthcare to mining, and she has always been focused on introducing new technology to transform organizations. In this episode, Sheetal shares some of the use cases that she has worked on during her career, and how AI is used in combination with data, hardware, and software to solve real business problems.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>4</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ff40c9b6-c668-4807-b8fd-ac574578a55a</guid>
      <title>Philippe Beaudoin - Building Empathetic AI</title>
      <description><![CDATA[<p>Philippe Beaudoin, CEO and Co-Founder at Waverly, shares his thoughts on how to bridge the gap between academics and industry professionals, the immense challenge of balancing company optimization strategies with the best interests of the users, and why empathy should be a much bigger part of the AI conversation.</p><p>Key Points From This Episode:</p><ul><li>Phil shares the exciting career journey that led him to where he is currently.</li><li>The changes that have taken place in the AI world since Phil entered it.</li><li>Why a large percentage of AI in enterprise still fails.</li><li>A major difference between the academics researching AI and those applying that research.</li><li>Phil’s thoughts on how to bridge the gap between academia and enterprise.</li><li>An explanation of one of the lesser-known threats of AI.</li><li>The inspiration behind Phil’s app, Waverly, and how it works.</li><li>Why companies are hesitant to change the way their recommendation engines are built.</li><li>Challenges of balancing company optimization with the best interests of the user.</li><li>Problems that Phil sees with the implementation of policies and regulations around AI technology.</li><li>The empathy component of AI technology that Phil thinks we should be focusing on.</li><li>Advice for researchers and practitioners for dealing with the unintended consequences of working in the AI field.</li></ul><p>Tweetables:</p><p>“[Waverly] is my take on how to build better AI systems, and for me, better means AI systems that care more about the user they are trying to help than we are used to seeing.” — <a href="https://twitter.com/philbeaudoin">@PhilBeaudoin</a> [0:02:43]</p><p>“Vision has improved a lot, natural language understanding, speech recognition, our ability to find patterns, all of that has improved a lot.
It’s not AGI but it has improved quite a bit.” — <a href="https://twitter.com/philbeaudoin">@PhilBeaudoin</a> [0:04:46]</p><p>“Bring an open mind, bring a lot of respect, and try to see the importance of the other party.” — <a href="https://twitter.com/philbeaudoin">@PhilBeaudoin</a> [0:13:05]</p><p>“Most people have aspirations, most people have a direction they want to grow into, and when they go about their everyday life they are super happy to have assistance, but this assistance should be at the service of our aspirations.” — <a href="https://twitter.com/philbeaudoin">@PhilBeaudoin</a> [0:18:50]</p><p>Links Mentioned in Today’s Episode:</p><p><a href="https://philbeaudoin.com/">Phil Beaudoin</a></p><p><a href="https://www.elementai.com/">Element AI</a></p><p><a href="http://mywaverly.com/">Waverly</a></p>
]]></description>
      <pubDate>Wed, 16 Mar 2022 19:03:05 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/philippe-beaudoin-building-empathetic-ai-Z2qlRRaz</link>
      <content:encoded><![CDATA[<p>Philippe Beaudoin, CEO and Co-Founder at Waverly, shares his thoughts on how to bridge the gap between academics and industry professionals, the immense challenge of balancing company optimization strategies with the best interests of the users, and why empathy should be a much bigger part of the AI conversation.</p><p>Key Points From This Episode:</p><ul><li>Phil shares the exciting career journey that led him to where he is currently.</li><li>The changes that have taken place in the AI world since Phil entered it.</li><li>Why a large percentage of AI in enterprise still fails.</li><li>A major difference between the academics researching AI and those applying that research.</li><li>Phil’s thoughts on how to bridge the gap between academia and enterprise.</li><li>An explanation of one of the lesser-known threats of AI.</li><li>The inspiration behind Phil’s app, Waverly, and how it works.</li><li>Why companies are hesitant to change the way their recommendation engines are built.</li><li>Challenges of balancing company optimization with the best interests of the user.</li><li>Problems that Phil sees with the implementation of policies and regulations around AI technology.</li><li>The empathy component of AI technology that Phil thinks we should be focusing on.</li><li>Advice for researchers and practitioners for dealing with the unintended consequences of working in the AI field.</li></ul><p>Tweetables:</p><p>“[Waverly] is my take on how to build better AI systems, and for me, better means AI systems that care more about the user they are trying to help than we are used to seeing.” — <a href="https://twitter.com/philbeaudoin">@PhilBeaudoin</a> [0:02:43]</p><p>“Vision has improved a lot, natural language understanding, speech recognition, our ability to find patterns, all of that has improved a lot.
It’s not AGI but it has improved quite a bit.” — <a href="https://twitter.com/philbeaudoin">@PhilBeaudoin</a> [0:04:46]</p><p>“Bring an open mind, bring a lot of respect, and try to see the importance of the other party.” — <a href="https://twitter.com/philbeaudoin">@PhilBeaudoin</a> [0:13:05]</p><p>“Most people have aspirations, most people have a direction they want to grow into, and when they go about their everyday life they are super happy to have assistance, but this assistance should be at the service of our aspirations.” — <a href="https://twitter.com/philbeaudoin">@PhilBeaudoin</a> [0:18:50]</p><p>Links Mentioned in Today’s Episode:</p><p><a href="https://philbeaudoin.com/">Phil Beaudoin</a></p><p><a href="https://www.elementai.com/">Element AI</a></p><p><a href="http://mywaverly.com/">Waverly</a></p>
]]></content:encoded>
      <enclosure length="36178234" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/4d3c1c9b-5437-4fe4-b3be-ef677d8a295a/audio/21dccc69-5bc7-44f0-9b62-01cd9cfdc7af/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Philippe Beaudoin - Building Empathetic AI</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:37:42</itunes:duration>
      <itunes:summary>A lesser-known challenge in the development of AI technologies is the harm it can cause to our ability to develop and grow optimally as humans. Philippe Beaudoin’s experience in the AI field spans from academia to enterprise, and through his latest venture, Waverly, he aims to hand agency and autonomy back to humans.</itunes:summary>
      <itunes:subtitle>A lesser-known challenge in the development of AI technologies is the harm it can cause to our ability to develop and grow optimally as humans. Philippe Beaudoin’s experience in the AI field spans from academia to enterprise, and through his latest venture, Waverly, he aims to hand agency and autonomy back to humans.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>3</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c555caa7-7862-4855-97e3-744fd7293dfc</guid>
      <title>Dr. Gillian Hadfield - Explainable vs. Justifiable AI</title>
      <description><![CDATA[<p>A broad thinker from an unusual background, Dr. Gillian Hadfield shares a different take on building these models from the general norm, as well as how to incorporate transparency into justifiable systems, and the hypothesis of building a system where decisions are attached back to a person responsible. We also talk about the need for safe, consistent, and up-to-date regulatory structures, and the effects of not having this, before closing with some powerful advice around the work we have to do going forward in this sector! We hope you can join us for this hugely insightful conversation.</p><p>Key Points From This Episode:</p><ul><li>Introducing Dr. Gillian Hadfield and what drew her to the space of law and globalization.</li><li>How the challenges of AI align with the challenges of economics.</li><li>The need for people in social sciences and humanities to engage in design and building.</li><li>The objective of the Schwartz Reisman Institute for Technology and Society.</li><li>Defining AI governance to address the alignment problem.</li><li>Comparing AI with conventional programming and the difficulties with test sets.</li><li>The difference between AI explainability and justifiability.</li><li>Talking about the GDPR and what they are really looking for.</li><li>A legal analogy on incorporating transparency into justifiable systems.</li><li>Discussing the chicken-and-egg confusion that regulators are feeling.</li><li>Why we haven't seen the growth of AI we would expect.</li><li>How regulatory regimes haven't kept up with the speed of globalization and digitization.</li><li>The balance of having the right kind of regulation.</li><li>A walk through the current landscape of AI regulation.</li><li>What regulatory technologies look like.</li><li>The focus on fairness and algorithmic bias and AI's capacity in all domains.</li><li>Dr. 
Hadfield's advice for people who are looking for AI integration in their practices.</li></ul><p>Tweetables:</p><p>“There's no one solution to how you align AI.” — <a href="https://twitter.com/ghadfield">@ghadfield</a> [0:10:19]</p><p>“We have the alignment problem everywhere. How do you get a corporation to do what you want it to do, how do you get governments to do what you want them to do?” — <a href="https://twitter.com/ghadfield">@ghadfield</a> [0:23:52]</p><p>“AI is a general-purpose technology, it's a way of solving problems, it's a way of coming up with new ideas. It's going to be everywhere. I prefer to think of the regulatory challenge as, how is AI changing your capacity to achieve your regulatory goals, in any domain?” — <a href="https://twitter.com/ghadfield">@ghadfield</a> [0:37:41]</p><p>“We need way more people who are not engineers, deeply engaged in the process of building our systems.” — <a href="https://twitter.com/ghadfield">@ghadfield</a> [0:42:13]</p><p>Links Mentioned in Today’s Episode:</p><p><a href="https://www.linkedin.com/in/gillian-k-hadfield-1773987/?originalSubdomain=ca">Gillian Hadfield on LinkedIn</a></p><p><a href="https://twitter.com/ghadfield">Gillian Hadfield on Twitter</a></p><p><a href="https://vectorinstitute.ai/">The Vector Institute</a></p><p><a href="https://srinstitute.utoronto.ca/">Schwartz Reisman Institute for Technology and Society </a></p><p><a href="https://gdpr-info.eu/">GDPR</a></p><p><a href="https://mila.quebec/en/">Mila</a></p>
]]></description>
      <pubDate>Wed, 9 Mar 2022 21:18:26 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/dr-gillian-hadfield-explainable-vs-justifiable-ai-9w8Zthk9</link>
      <content:encoded><![CDATA[<p>A broad thinker from an unusual background, Dr. Gillian Hadfield shares a different take on building these models from the general norm, as well as how to incorporate transparency into justifiable systems, and the hypothesis of building a system where decisions are attached back to a person responsible. We also talk about the need for safe, consistent, and up-to-date regulatory structures, and the effects of not having this, before closing with some powerful advice around the work we have to do going forward in this sector! We hope you can join us for this hugely insightful conversation.</p><p>Key Points From This Episode:</p><ul><li>Introducing Dr. Gillian Hadfield and what drew her to the space of law and globalization.</li><li>How the challenges of AI align with the challenges of economics.</li><li>The need for people in social sciences and humanities to engage in design and building.</li><li>The objective of the Schwartz Reisman Institute for Technology and Society.</li><li>Defining AI governance to address the alignment problem.</li><li>Comparing AI with conventional programming and the difficulties with test sets.</li><li>The difference between AI explainability and justifiability.</li><li>Talking about the GDPR and what they are really looking for.</li><li>A legal analogy on incorporating transparency into justifiable systems.</li><li>Discussing the chicken-and-egg confusion that regulators are feeling.</li><li>Why we haven't seen the growth of AI we would expect.</li><li>How regulatory regimes haven't kept up with the speed of globalization and digitization.</li><li>The balance of having the right kind of regulation.</li><li>A walk through the current landscape of AI regulation.</li><li>What regulatory technologies look like.</li><li>The focus on fairness and algorithmic bias and AI's capacity in all domains.</li><li>Dr. 
Hadfield's advice for people who are looking for AI integration in their practices.</li></ul><p>Tweetables:</p><p>“There's no one solution to how you align AI.” — <a href="https://twitter.com/ghadfield">@ghadfield</a> [0:10:19]</p><p>“We have the alignment problem everywhere. How do you get a corporation to do what you want it to do, how do you get governments to do what you want them to do?” — <a href="https://twitter.com/ghadfield">@ghadfield</a> [0:23:52]</p><p>“AI is a general-purpose technology, it's a way of solving problems, it's a way of coming up with new ideas. It's going to be everywhere. I prefer to think of the regulatory challenge as, how is AI changing your capacity to achieve your regulatory goals, in any domain?” — <a href="https://twitter.com/ghadfield">@ghadfield</a> [0:37:41]</p><p>“We need way more people who are not engineers, deeply engaged in the process of building our systems.” — <a href="https://twitter.com/ghadfield">@ghadfield</a> [0:42:13]</p><p>Links Mentioned in Today’s Episode:</p><p><a href="https://www.linkedin.com/in/gillian-k-hadfield-1773987/?originalSubdomain=ca">Gillian Hadfield on LinkedIn</a></p><p><a href="https://twitter.com/ghadfield">Gillian Hadfield on Twitter</a></p><p><a href="https://vectorinstitute.ai/">The Vector Institute</a></p><p><a href="https://srinstitute.utoronto.ca/">Schwartz Reisman Institute for Technology and Society </a></p><p><a href="https://gdpr-info.eu/">GDPR</a></p><p><a href="https://mila.quebec/en/">Mila</a></p>
]]></content:encoded>
      <enclosure length="39284112" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/5bfb455d-b5fc-4673-9a6a-11d4bc22a7d5/audio/351ead7a-f2a0-4afd-8595-3e4cb7e49ccf/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Dr. Gillian Hadfield - Explainable vs. Justifiable AI</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:45:55</itunes:duration>
      <itunes:summary>Our guest today is Dr. Gillian Hadfield from the Schwartz Reisman Institute for Technology and Society, who is here to bring her extensive experience and acute insight to issues around AI governance, regulation, and the challenge of alignment. The conversation covers the difference between AI justifiability and explainability, and how to build the legal and economic environment for AI that builds value. </itunes:summary>
      <itunes:subtitle>Our guest today is Dr. Gillian Hadfield from the Schwartz Reisman Institute for Technology and Society, who is here to bring her extensive experience and acute insight to issues around AI governance, regulation, and the challenge of alignment. The conversation covers the difference between AI justifiability and explainability, and how to build the legal and economic environment for AI that builds value. </itunes:subtitle>
      <itunes:keywords>ai, artificial intelligence, explainable ai, machine learning, justifiable ai, ml</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>2</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b277a48a-20dd-48f3-9d34-86cde4277934</guid>
      <title>Dr. Yoshua Bengio</title>
      <description><![CDATA[<p>Today’s esteemed guest is one of the world’s best-recognized AI experts and most-cited computer scientists. Dr. Yoshua Bengio began his AI journey in the field of neural networks, following which he spent many years focusing on deep learning. He is currently working towards bridging the gap between human intelligence and state-of-the-art machine learning technologies. In this episode, we discuss system one versus system two thinking, and how understanding these systems can contribute to building more moral machines. Dr. Bengio also explains the positive impacts that AI can have on people and the planet, and how the risks of AI can be mitigated through a variety of approaches. </p><p>Key Points From This Episode:</p><ul><li>An introduction to today’s esteemed guest, Yoshua Bengio.</li><li>Yoshua briefly shares his thoughts on the crisis that is currently taking place in Ukraine. </li><li>Origins of Yoshua’s journey in the AI world.</li><li>The research field that Yoshua is most excited about at the moment. </li><li>Yoshua explains the concept of Consciousness Prior.</li><li>System one versus system two thinking. </li><li>How neural networks solved the problem of ‘search’ in AI. </li><li>Ways of mitigating the risks of AI, and some of the organizations that are working on this. </li><li>How our inductive biases can be problematic.</li><li>Lack of regulation in the computing industry, and why this needs to change.</li><li>The global threats that AI has the potential to solve. </li><li>The importance of knowledge sharing.</li><li>Advice from Yoshua for all scientific researchers.</li><li>Yoshua’s favorite AI movie.
</li></ul><p>Tweetables:</p><p>“The idea that there would be general principles that could explain intelligence, both ours, the intelligence of animals, and would allow us to build intelligent machines, I found that so exciting, and I’ve been riding that wave since then.” — Yoshua Bengio </p><p>“I really believe in the importance of a diversity of research paths and research directions.” — Yoshua Bengio </p><p>“The system one, system two division is a path towards making more moral machines.” — Yoshua Bengio </p><p>“The area of healthcare is one where AI has the greatest potential of touching human beings positively in the coming years, and really saving a lot of lives.” — Yoshua Bengio</p><p>Links Mentioned in Today’s Episode:</p><p><a href="https://yoshuabengio.org/">Yoshua Bengio</a></p><p><a href="https://www.linkedin.com/in/yoshuabengio/?originalSubdomain=ca">Yoshua Bengio on LinkedIn</a></p><p><a href="https://www.researchgate.net/publication/320033021_The_Consciousness_Prior">The Consciousness Prior</a></p><p><a href="https://mila.quebec/en/">Mila</a></p><p><a href="https://www.umontreal.ca/en/">University of Montreal</a></p><p><a href="https://amturing.acm.org/">A.M. Turing Award</a></p><p><a href="https://www.imdb.com/title/tt0062622/"><i>2001: A Space Odyssey</i></a></p>
]]></description>
      <pubDate>Mon, 7 Mar 2022 20:53:12 +0000</pubDate>
      <author>griffin@armilla.ai (Yoshua Bengio, Karthik Ramakrishnan)</author>
      <link>https://people-ai.simplecast.com/episodes/dr-yoshua-bengio-xdC8qVwz</link>
      <content:encoded><![CDATA[<p>Today’s esteemed guest is one of the world’s best-recognized AI experts and most-cited computer scientists. Dr. Yoshua Bengio began his AI journey in the field of neural networks, following which he spent many years focusing on deep learning. He is currently working towards bridging the gap between human intelligence and state-of-the-art machine learning technologies. In this episode, we discuss system one versus system two thinking, and how understanding these systems can contribute to building more moral machines. Dr. Bengio also explains the positive impacts that AI can have on people and the planet, and how the risks of AI can be mitigated through a variety of approaches. </p><p>Key Points From This Episode:</p><ul><li>An introduction to today’s esteemed guest, Yoshua Bengio.</li><li>Yoshua briefly shares his thoughts on the crisis that is currently taking place in Ukraine. </li><li>Origins of Yoshua’s journey in the AI world.</li><li>The research field that Yoshua is most excited about at the moment. </li><li>Yoshua explains the concept of Consciousness Prior.</li><li>System one versus system two thinking. </li><li>How neural networks solved the problem of ‘search’ in AI. </li><li>Ways of mitigating the risks of AI, and some of the organizations that are working on this. </li><li>How our inductive biases can be problematic.</li><li>Lack of regulation in the computing industry, and why this needs to change.</li><li>The global threats that AI has the potential to solve. </li><li>The importance of knowledge sharing.</li><li>Advice from Yoshua for all scientific researchers.</li><li>Yoshua’s favorite AI movie.
</li></ul><p>Tweetables:</p><p>“The idea that there would be general principles that could explain intelligence, both ours, the intelligence of animals, and would allow us to build intelligent machines, I found that so exciting, and I’ve been riding that wave since then.” — Yoshua Bengio </p><p>“I really believe in the importance of a diversity of research paths and research directions.” — Yoshua Bengio </p><p>“The system one, system two division is a path towards making more moral machines.” — Yoshua Bengio </p><p>“The area of healthcare is one where AI has the greatest potential of touching human beings positively in the coming years, and really saving a lot of lives.” — Yoshua Bengio</p><p>Links Mentioned in Today’s Episode:</p><p><a href="https://yoshuabengio.org/">Yoshua Bengio</a></p><p><a href="https://www.linkedin.com/in/yoshuabengio/?originalSubdomain=ca">Yoshua Bengio on LinkedIn</a></p><p><a href="https://www.researchgate.net/publication/320033021_The_Consciousness_Prior">The Consciousness Prior</a></p><p><a href="https://mila.quebec/en/">Mila</a></p><p><a href="https://www.umontreal.ca/en/">University of Montreal</a></p><p><a href="https://amturing.acm.org/">A.M. Turing Award</a></p><p><a href="https://www.imdb.com/title/tt0062622/"><i>2001: A Space Odyssey</i></a></p>
]]></content:encoded>
      <enclosure length="42594439" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/db103ba1-7995-4638-bc54-372feee15b42/audio/5e3a5d73-bd16-406e-a58d-b29feea2fbe9/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Dr. Yoshua Bengio</itunes:title>
      <itunes:author>Yoshua Bengio, Karthik Ramakrishnan</itunes:author>
      <itunes:duration>00:44:23</itunes:duration>
      <itunes:summary>Today’s esteemed guest is one of the world’s best-recognized AI experts and most-cited computer scientists. Dr. Yoshua Bengio began his AI journey in the field of neural networks, following which he spent many years focusing on deep learning. He is currently working towards bridging the gap between human intelligence and state-of-the-art machine learning technologies.</itunes:summary>
      <itunes:subtitle>Today’s esteemed guest is one of the world’s best-recognized AI experts and most-cited computer scientists. Dr. Yoshua Bengio began his AI journey in the field of neural networks, following which he spent many years focusing on deep learning. He is currently working towards bridging the gap between human intelligence and state-of-the-art machine learning technologies.</itunes:subtitle>
      <itunes:keywords>ai, artificial intelligence, machine learning, ml</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2ceefe5a-287d-4a20-995d-e01c76450754</guid>
      <title>Introducing People + AI</title>
      <description><![CDATA[People + AI gives you access to great minds in the field of artificial intelligence. We're exploring the key design principles, ethical quandaries, and full development cycles responsible for inventing the technologies of the future, today. Powered by Armilla AI.
]]></description>
      <pubDate>Fri, 4 Mar 2022 17:57:58 +0000</pubDate>
      <author>griffin@armilla.ai (Armilla AI)</author>
      <link>https://people-ai.simplecast.com/episodes/introducing-people-ai-LhEKyF8g</link>
      <enclosure length="1453045" type="audio/mpeg" url="https://cdn.simplecast.com/audio/04e81a69-02b3-4ddd-a5fb-ab8be33b2587/episodes/6912ee6a-6faa-4412-b015-7ecc4a47d4de/audio/e0010431-a007-4f10-b34c-9d647198372f/default_tc.mp3?aid=rss_feed&amp;feed=LhobkclV"/>
      <itunes:title>Introducing People + AI</itunes:title>
      <itunes:author>Armilla AI</itunes:author>
      <itunes:duration>00:01:31</itunes:duration>
      <itunes:summary>People + AI gives you access to great minds in the field of artificial intelligence. We&apos;re exploring the key design principles, ethical quandaries, and full development cycles responsible for inventing the technologies of the future, today. Powered by Armilla AI.</itunes:summary>
      <itunes:subtitle>People + AI gives you access to great minds in the field of artificial intelligence. We&apos;re exploring the key design principles, ethical quandaries, and full development cycles responsible for inventing the technologies of the future, today. Powered by Armilla AI.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>trailer</itunes:episodeType>
    </item>
  </channel>
</rss>