<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:media="http://search.yahoo.com/mrss/" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link href="https://feeds.simplecast.com/c1PFREqr" rel="self" title="MP3 Audio" type="application/rss+xml"/>
    <atom:link href="https://simplecast.superfeedr.com" rel="hub" xmlns="http://www.w3.org/2005/Atom"/>
    <generator>https://simplecast.com</generator>
    <title>No Math AI</title>
    <description>No Math AI is a monthly podcast for the AI community and enthusiasts to better understand how AI research translates into real-world business impact—without the complex equations.

Hosted by Dr. Akash Srivastava, Chief Architect at Red Hat AI Innovation, and Isha Puri, AI PhD student at MIT, No Math AI distills critical AI concepts into practical takeaways. Each episode unpacks cutting-edge research and translates it into actionable insights that help practitioners, businesses, and curious minds accelerate their adoption of AI with confidence.</description>
    <copyright>Isha Puri and Akash Srivastava</copyright>
    <language>en</language>
    <pubDate>Mon, 16 Jun 2025 10:52:20 +0000</pubDate>
    <lastBuildDate>Mon, 16 Jun 2025 10:52:31 +0000</lastBuildDate>
    <image>
      <link>https://podcasters.spotify.com/pod/show/isha028</link>
      <title>No Math AI</title>
      <url>https://image.simplecastcdn.com/images/06b725f1-0678-41cb-8c56-beecf51db1e8/899d13c6-9eec-487c-9c9f-993077328f0b/3000x3000/no-20math-20ai.jpg?aid=rss_feed</url>
    </image>
    <link>https://podcasters.spotify.com/pod/show/isha028</link>
    <itunes:type>episodic</itunes:type>
    <itunes:summary>No Math AI is a monthly podcast for the AI community and enthusiasts to better understand how AI research translates into real-world business impact—without the complex equations.

Hosted by Dr. Akash Srivastava, Chief Architect at Red Hat AI Innovation, and Isha Puri, AI PhD student at MIT, No Math AI distills critical AI concepts into practical takeaways. Each episode unpacks cutting-edge research and translates it into actionable insights that help practitioners, businesses, and curious minds accelerate their adoption of AI with confidence.</itunes:summary>
    <itunes:author>Isha Puri and Akash Srivastava</itunes:author>
    <itunes:explicit>false</itunes:explicit>
    <itunes:image href="https://image.simplecastcdn.com/images/06b725f1-0678-41cb-8c56-beecf51db1e8/899d13c6-9eec-487c-9c9f-993077328f0b/3000x3000/no-20math-20ai.jpg?aid=rss_feed"/>
    <itunes:new-feed-url>https://feeds.simplecast.com/c1PFREqr</itunes:new-feed-url>
    <itunes:owner>
      <itunes:name>Isha Puri and Akash Srivastava</itunes:name>
    </itunes:owner>
    <itunes:category text="News">
      <itunes:category text="Tech News"/>
    </itunes:category>
    <item>
      <guid isPermaLink="false">50436a28-4210-4920-bb10-545aea976504</guid>
      <title>Inference Time Scaling for Enterprises</title>
      <description><![CDATA[In Episode 3 of No Math AI, Red Hat CEO Matt Hicks and CTO Chris Wright join hosts Akash Srivastava and Isha Puri to explore what it really takes to scale large language model inference in production. From cost concerns and platform orchestration to the launch of llm-d, they break down the transition from static models to dynamic, reasoning-heavy applications and how open source collaboration is making scalable AI a reality for enterprise teams.
]]></description>
      <pubDate>Mon, 16 Jun 2025 10:52:20 +0000</pubDate>
      <author>Chris Wright, Matt Hicks, Akash Srivastava, Isha Puri</author>
      <link>https://podcasters.spotify.com/pod/show/isha028</link>
      <enclosure length="9019696" type="audio/mpeg" url="https://www.claritaspod.com/measure/cdn.simplecast.com/audio/f23eff61-48a8-420c-b872-cfc8e3b40c09/episodes/5dc4d119-0a28-4309-8083-04b2c39fccbe/audio/806269bd-997d-41e9-88d6-865967018a3d/default_tc.mp3?aid=rss_feed&amp;feed=c1PFREqr"/>
      <itunes:title>Inference Time Scaling for Enterprises</itunes:title>
      <itunes:author>Chris Wright, Matt Hicks, Akash Srivastava, Isha Puri</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/06b725f1-0678-41cb-8c56-beecf51db1e8/57e2a9de-3c1e-4792-bcce-a364bffbb7da/3000x3000/inference-time-20scaling-1.jpg?aid=rss_feed"/>
      <itunes:duration>00:09:23</itunes:duration>
      <itunes:summary>In Episode 3 of No Math AI, Red Hat CEO Matt Hicks and CTO Chris Wright join hosts Akash Srivastava and Isha Puri to explore what it really takes to scale large language model inference in production. From cost concerns and platform orchestration to the launch of llm-d, they break down the transition from static models to dynamic, reasoning-heavy applications and how open source collaboration is making scalable AI a reality for enterprise teams.</itunes:summary>
      <itunes:subtitle>In Episode 3 of No Math AI, Red Hat CEO Matt Hicks and CTO Chris Wright join hosts Akash Srivastava and Isha Puri to explore what it really takes to scale large language model inference in production. From cost concerns and platform orchestration to the launch of llm-d, they break down the transition from static models to dynamic, reasoning-heavy applications and how open source collaboration is making scalable AI a reality for enterprise teams.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>3</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3b0cab04-e2d8-4fed-8c84-16f3e35f6a20</guid>
      <title>Generative Optimization</title>
      <description><![CDATA[In this episode of No Math AI, we're joined by Dr. Faez Ahmed, a professor at MIT and leader of the Design Computation and Digital Engineering Lab. He works at the fascinating intersection of generative AI, optimization, and engineering design, where he's redefining how we create everything from bicycles to next-generation aerospace systems. Together, Isha, Akash, and Faez discuss the future of engineering work, harnessing "generative optimization" to automate engineering design, balancing the needs for precision and creativity, and more. 
]]></description>
      <pubDate>Wed, 23 Apr 2025 15:11:20 +0000</pubDate>
      <author>Isha Puri, Akash Srivastava, Faez Ahmed</author>
      <link>https://podcasters.spotify.com/pod/show/isha028</link>
      <enclosure length="24553567" type="audio/mpeg" url="https://www.claritaspod.com/measure/cdn.simplecast.com/audio/f23eff61-48a8-420c-b872-cfc8e3b40c09/episodes/e7788da3-1817-45ca-9815-f58975b09614/audio/951d4490-31f2-4fde-9b76-684e8c0efe9c/default_tc.mp3?aid=rss_feed&amp;feed=c1PFREqr"/>
      <itunes:title>Generative Optimization</itunes:title>
      <itunes:author>Isha Puri, Akash Srivastava, Faez Ahmed</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/06b725f1-0678-41cb-8c56-beecf51db1e8/425f76ab-7080-4a0f-b34a-8fb4c19d98ba/3000x3000/inference-time-20scaling-2.jpg?aid=rss_feed"/>
      <itunes:duration>00:25:34</itunes:duration>
      <itunes:summary>In this episode of No Math AI, we&apos;re joined by Dr. Faez Ahmed, a professor at MIT and leader of the Design Computation and Digital Engineering Lab. He works at the fascinating intersection of generative AI, optimization, and engineering design, where he&apos;s redefining how we create everything from bicycles to next-generation aerospace systems. Together, Isha, Akash, and Faez discuss the future of engineering work, harnessing &quot;generative optimization&quot; to automate engineering design, balancing the needs for precision and creativity, and more. </itunes:summary>
      <itunes:subtitle>In this episode of No Math AI, we&apos;re joined by Dr. Faez Ahmed, a professor at MIT and leader of the Design Computation and Digital Engineering Lab. He works at the fascinating intersection of generative AI, optimization, and engineering design, where he&apos;s redefining how we create everything from bicycles to next-generation aerospace systems. Together, Isha, Akash, and Faez discuss the future of engineering work, harnessing &quot;generative optimization&quot; to automate engineering design, balancing the needs for precision and creativity, and more. </itunes:subtitle>
      <itunes:keywords>generative ai, ai, llm, red hat</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>2</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f359d8c7-66d0-4720-9f93-a5fc07ad096a</guid>
      <title>Why Inference-Time Scaling?</title>
      <description><![CDATA[<p>In our first episode of No Math AI, Akash and Isha are joined by guest research engineers Shivchander Sudalairaj, GX Xu, and Kai Xu to discuss a crucial topic that’s making waves in AI performance: inference-time scaling.</p><p>Simply put, inference-time scaling is a cost-effective method for improving AI model performance. Discover how this technique enhances reasoning in smaller language models, powers agentic AI, and ensures higher accuracy in mission-critical applications where precision is key.</p><p>The discussion covers how inference-time scaling boosts model performance and decision-making in AI systems. Our guests also highlight a groundbreaking research paper that unveils how a probabilistic approach to selecting the best answers in reasoning models can significantly enhance accuracy.<br /><br />Read the research paper: <a href="https://probabilistic-inference-scaling.github.io/">https://probabilistic-inference-scaling.github.io/</a></p><p>Guests:</p><ul><li>Shivchander Sudalairaj</li><li>GX Xu</li><li>Kai Xu</li></ul>
]]></description>
      <pubDate>Tue, 18 Mar 2025 03:12:08 +0000</pubDate>
      <author>Shivchander Sudalairaj, GX Xu, Kai Xu</author>
      <link>https://podcasters.spotify.com/pod/show/isha028</link>
      <content:encoded><![CDATA[<p>In our first episode of No Math AI, Akash and Isha are joined by guest research engineers Shivchander Sudalairaj, GX Xu, and Kai Xu to discuss a crucial topic that’s making waves in AI performance: inference-time scaling.</p><p>Simply put, inference-time scaling is a cost-effective method for improving AI model performance. Discover how this technique enhances reasoning in smaller language models, powers agentic AI, and ensures higher accuracy in mission-critical applications where precision is key.</p><p>The discussion covers how inference-time scaling boosts model performance and decision-making in AI systems. Our guests also highlight a groundbreaking research paper that unveils how a probabilistic approach to selecting the best answers in reasoning models can significantly enhance accuracy.<br /><br />Read the research paper: <a href="https://probabilistic-inference-scaling.github.io/">https://probabilistic-inference-scaling.github.io/</a></p><p>Guests:</p><ul><li>Shivchander Sudalairaj</li><li>GX Xu</li><li>Kai Xu</li></ul>
]]></content:encoded>
      <enclosure length="22754158" type="audio/mpeg" url="https://www.claritaspod.com/measure/cdn.simplecast.com/audio/f23eff61-48a8-420c-b872-cfc8e3b40c09/episodes/2ddecf8a-9876-40f1-aa55-d5a8d4d12ce3/audio/2eb85455-5a0f-4fc2-a833-2e0a5867d25a/default_tc.mp3?aid=rss_feed&amp;feed=c1PFREqr"/>
      <itunes:title>Why Inference-Time Scaling?</itunes:title>
      <itunes:author>Shivchander Sudalairaj, GX Xu, Kai Xu</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/06b725f1-0678-41cb-8c56-beecf51db1e8/249f05bc-bb5c-4556-94ae-b4127c69446a/3000x3000/inference-time-20scaling.jpg?aid=rss_feed"/>
      <itunes:duration>00:23:42</itunes:duration>
      <itunes:summary>Akash and Isha are joined by guest research engineers Shivchander Sudalairaj, GX Xu, and Kai Xu to discuss a crucial topic that’s making waves in AI performance: inference-time scaling.</itunes:summary>
      <itunes:subtitle>Akash and Isha are joined by guest research engineers Shivchander Sudalairaj, GX Xu, and Kai Xu to discuss a crucial topic that’s making waves in AI performance: inference-time scaling.</itunes:subtitle>
      <itunes:keywords>ai, inference-time scaling</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1</itunes:episode>
    </item>
  </channel>
</rss>