<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:media="http://search.yahoo.com/mrss/" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link href="https://feeds.simplecast.com/dOSE_bdP" rel="self" title="MP3 Audio" type="application/rss+xml"/>
    <atom:link href="https://simplecast.superfeedr.com" rel="hub"/>
    <generator>https://simplecast.com</generator>
    <title>Unsupervised Learning with Jacob Effron</title>
    <description>We probe the sharpest minds in AI in search of the truth about what’s real today, what will be real in the future, and what it all means for businesses and the world. If you’re a builder, researcher, or investor navigating the AI world, this podcast will help you deconstruct and understand the most important breakthroughs and see a clearer picture of reality. Follow this show and consider enabling notifications to stay up to date on our latest episodes. 

Unsupervised Learning is a podcast by Redpoint Ventures, an early-stage venture capital fund that has invested in companies like Snowflake, Stripe, and Mistral.  

Hosted by Redpoint investor Jacob Effron alongside Patrick Chase, Jordan Segall and Erica Brescia.</description>
    <copyright>2025</copyright>
    <language>en</language>
    <pubDate>Thu, 2 Apr 2026 14:05:15 +0000</pubDate>
    <lastBuildDate>Thu, 2 Apr 2026 15:04:30 +0000</lastBuildDate>
    <image>
      <link>https://unsupervised-learning.simplecast.com</link>
      <title>Unsupervised Learning with Jacob Effron</title>
      <url>https://image.simplecastcdn.com/images/ff0dbf2f-5711-4964-9172-807c39ca4824/be7895b2-4fcc-4f70-9864-584710596b1d/3000x3000/redpoint-unsupervised-learning-logo.jpg?aid=rss_feed</url>
    </image>
    <link>https://unsupervised-learning.simplecast.com</link>
    <itunes:type>episodic</itunes:type>
    <itunes:summary>We probe the sharpest minds in AI in search of the truth about what’s real today, what will be real in the future, and what it all means for businesses and the world. If you’re a builder, researcher, or investor navigating the AI world, this podcast will help you deconstruct and understand the most important breakthroughs and see a clearer picture of reality. Follow this show and consider enabling notifications to stay up to date on our latest episodes. 

Unsupervised Learning is a podcast by Redpoint Ventures, an early-stage venture capital fund that has invested in companies like Snowflake, Stripe, and Mistral.  

Hosted by Redpoint investor Jacob Effron alongside Patrick Chase, Jordan Segall and Erica Brescia.</itunes:summary>
    <itunes:author>Redpoint Ventures</itunes:author>
    <itunes:explicit>false</itunes:explicit>
    <itunes:image href="https://image.simplecastcdn.com/images/ff0dbf2f-5711-4964-9172-807c39ca4824/be7895b2-4fcc-4f70-9864-584710596b1d/3000x3000/redpoint-unsupervised-learning-logo.jpg?aid=rss_feed"/>
    <itunes:new-feed-url>https://feeds.simplecast.com/dOSE_bdP</itunes:new-feed-url>
    <itunes:keywords>ai, artificial intelligence</itunes:keywords>
    <itunes:owner>
      <itunes:name>Redpoint Ventures</itunes:name>
      <itunes:email>jeffron@redpoint.com</itunes:email>
    </itunes:owner>
    <itunes:applepodcastsverify>8072bd30-279c-11f0-9cc4-150527c8437a</itunes:applepodcastsverify>
    <itunes:category text="Technology"/>
    <item>
      <guid isPermaLink="false">a500d697-6224-4847-86c7-0f5881056733</guid>
      <title>Ep 83: Owning the System of Record, AI-Native Org Charts, &amp; Why ITSM is The Most Vulnerable Legacy Category</title>
      <description><![CDATA[<p>Serval is one of the fastest-growing AI-native enterprise software companies right now, and this episode is a rare inside look at the deliberate architectural, go-to-market, and talent decisions behind that growth. Jake Stauch breaks down why he made the contrarian bet to build a full system of record rather than layer on top of existing tools, why ITSM is more vulnerable to AI disruption than CRM, ERP, or HRIS, and how Serval is winning Fortune 500 deals against a $14B incumbent with a fraction of the resources. Beyond the product, Jake gets into the organizational decisions that underpin Serval's velocity — why recruiting is the #1 job of every employee, how to prevent talent bar decay as you scale from 8 to 200 people, and how the role of the manager is shifting as ICs own more scope than ever. Threading it all together is a founder's honest account of what it means to build a horizontal software company when the models are improving, the infrastructure is shifting, and the window to displace a legacy incumbent is open but won't stay open forever.</p>
<p> </p>
<p>(0:00) Intro<br>
 (1:25) What is Serval?<br>
 (4:51) Early Doubts and Strategy<br>
 (6:34) AI Tailwinds in ITSM<br>
 (8:04) Competing with ServiceNow<br>
 (9:41) Why ITSM Is Vulnerable<br>
 (11:52) Automation via Codegen<br>
 (16:27) Critical Guardrails<br>
 (28:32) Internal Support Complexity<br>
 (30:24) Hiring as the Moat<br>
 (31:44) Dream Team Recruiting<br>
 (33:49) Managers vs Super ICs<br>
 (36:44) Junior Engineers and AI Native Workflows<br>
 (43:13) Quickfire</p>
<p> </p>
<p>With your co-hosts: </p>
<p>@jacobeffron </p>
<p>- Partner at Redpoint, Former PM Flatiron Health </p>
<p>@patrickachase </p>
<p>- Partner at Redpoint, Former ML Engineer LinkedIn </p>
<p>@ericabrescia </p>
<p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p>
<p>@jordan_segall </p>
<p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Thu, 2 Apr 2026 14:05:15 +0000</pubDate>
      <author>jeffron@redpoint.com (Jacob Effron, Patrick Chase, Jake Stauch)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-83-owning-the-system-of-record-ai-native-org-charts-why-itsm-is-the-most-vulnerable-legacy-category-fjqR1Bpt</link>
      <content:encoded><![CDATA[<p>Serval is one of the fastest-growing AI-native enterprise software companies right now, and this episode is a rare inside look at the deliberate architectural, go-to-market, and talent decisions behind that growth. Jake Stauch breaks down why he made the contrarian bet to build a full system of record rather than layer on top of existing tools, why ITSM is more vulnerable to AI disruption than CRM, ERP, or HRIS, and how Serval is winning Fortune 500 deals against a $14B incumbent with a fraction of the resources. Beyond the product, Jake gets into the organizational decisions that underpin Serval's velocity — why recruiting is the #1 job of every employee, how to prevent talent bar decay as you scale from 8 to 200 people, and how the role of the manager is shifting as ICs own more scope than ever. Threading it all together is a founder's honest account of what it means to build a horizontal software company when the models are improving, the infrastructure is shifting, and the window to displace a legacy incumbent is open but won't stay open forever.</p>
<p> </p>
<p>(0:00) Intro<br>
 (1:25) What is Serval?<br>
 (4:51) Early Doubts and Strategy<br>
 (6:34) AI Tailwinds in ITSM<br>
 (8:04) Competing with ServiceNow<br>
 (9:41) Why ITSM Is Vulnerable<br>
 (11:52) Automation via Codegen<br>
 (16:27) Critical Guardrails<br>
 (28:32) Internal Support Complexity<br>
 (30:24) Hiring as the Moat<br>
 (31:44) Dream Team Recruiting<br>
 (33:49) Managers vs Super ICs<br>
 (36:44) Junior Engineers and AI Native Workflows<br>
 (43:13) Quickfire</p>
<p> </p>
<p>With your co-hosts: </p>
<p>@jacobeffron </p>
<p>- Partner at Redpoint, Former PM Flatiron Health </p>
<p>@patrickachase </p>
<p>- Partner at Redpoint, Former ML Engineer LinkedIn </p>
<p>@ericabrescia </p>
<p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p>
<p>@jordan_segall </p>
<p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="51925296" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/b3414ac6-61c8-4752-8722-491e1457c3bf/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/audio/group/b46ee0b2-3d1a-4d16-8184-e05bd010bbd7/group-item/2fc0e6c4-e9a9-47a9-932d-16a57aee4ff8/128_default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 83: Owning the System of Record, AI-Native Org Charts, &amp; Why ITSM is The Most Vulnerable Legacy Category</itunes:title>
      <itunes:author>Jacob Effron, Patrick Chase, Jake Stauch</itunes:author>
      <itunes:duration>00:54:05</itunes:duration>
      <itunes:summary>Serval is one of the fastest-growing AI-native enterprise software companies right now, and this episode is a rare inside look at the deliberate architectural, go-to-market, and talent decisions behind that growth. Jake Stauch breaks down why he made the contrarian bet to build a full system of record rather than layer on top of existing tools, why ITSM is more vulnerable to AI disruption than CRM, ERP, or HRIS, and how Serval is winning Fortune 500 deals against a $14B incumbent with a fraction of the resources. Beyond the product, Jake gets into the organizational decisions that underpin Serval&apos;s velocity — why recruiting is the #1 job of every employee, how to prevent talent bar decay as you scale from 8 to 200 people, and how the role of the manager is shifting as ICs own more scope than ever. Threading it all together is a founder&apos;s honest account of what it means to build a horizontal software company when the models are improving, the infrastructure is shifting, and the window to displace a legacy incumbent is open but won&apos;t stay open forever.</itunes:summary>
      <itunes:subtitle>Serval is one of the fastest-growing AI-native enterprise software companies right now, and this episode is a rare inside look at the deliberate architectural, go-to-market, and talent decisions behind that growth. Jake Stauch breaks down why he made the contrarian bet to build a full system of record rather than layer on top of existing tools, why ITSM is more vulnerable to AI disruption than CRM, ERP, or HRIS, and how Serval is winning Fortune 500 deals against a $14B incumbent with a fraction of the resources. Beyond the product, Jake gets into the organizational decisions that underpin Serval&apos;s velocity — why recruiting is the #1 job of every employee, how to prevent talent bar decay as you scale from 8 to 200 people, and how the role of the manager is shifting as ICs own more scope than ever. Threading it all together is a founder&apos;s honest account of what it means to build a horizontal software company when the models are improving, the infrastructure is shifting, and the window to displace a legacy incumbent is open but won&apos;t stay open forever.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>83</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">4492db3f-f8ed-41c3-b2c1-d963dee4ea7e</guid>
      <title>Ep 82: Behind Legora&apos;s $550M Raise, Model Competition, Doubling Revenue Every Quarter, &amp; US Expansion</title>
      <description><![CDATA[<p>Max Jungestål, CEO of Legora, joins Jacob Effron and Logan Bartlett to discuss the company's $550M Series D and share a candid account of what building an AI-native company at speed actually looks like from the inside.</p>
<p>Max argues that the AI application layer requires a fundamentally different operating model than traditional SaaS, one built on low ego, constant reinvention, and a willingness to watch nine months of work get washed away by a model update. He walks through how step-function improvements in the underlying models, particularly Opus 4.5 and 4.6, have repeatedly forced Legora to rebuild core product features from scratch, and why he sees that as a feature, not a bug.</p>
<p>On the legal industry, Max offers a ground-level view of how AI is actually diffusing through law firms, less through top-down mandates and more through competitive pressure between firms and, increasingly, from enterprise clients demanding efficiency from their outside counsel. He pushes back on the viability of AI-native law firms, dismisses outcome-based pricing as harder than it looks, and makes the case for why foundation model competition creates tailwinds rather than threats for a company with Legora's depth.</p>
<p>The episode closes with a detailed look at the US expansion strategy, including the deliberate cultural decisions, like flying all New York hires to Stockholm for onboarding, that Max believes are the real source of Legora's compounding advantage.</p>
<p> </p>
<p>[0:00] Intro</p>
<p>[1:16] Legora's Series D Story</p>
<p>[3:24] Why You Need Low Ego to Build in AI</p>
<p>[5:58] From 60% to 100% Accuracy in One Summer</p>
<p>[7:04] Law Firm Economics Shift</p>
<p>[14:09] Pricing Seats Vs Outcomes</p>
<p>[18:31] Why Foundation Models Entering Legal Helps Legora</p>
<p>[30:10] Convincing a 75-Year-Old Partner to Go All In</p>
<p>[33:02] Hiring Legal Engineers</p>
<p>[34:32] Running an AI-Native Company</p>
<p>[35:57] The Opus 4.5 Christmas Breakthrough</p>
<p>[40:02] Building With Customers</p>
<p>[44:01] All In On US Expansion</p>
<p>[51:22] Stockholm Startup DNA</p>
<p> </p>
<p>With your co-hosts: </p>
<p>@jacobeffron </p>
<p>- Partner at Redpoint, Former PM Flatiron Health </p>
<p>@patrickachase </p>
<p>- Partner at Redpoint, Former ML Engineer LinkedIn </p>
<p>@ericabrescia </p>
<p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p>
<p>@jordan_segall </p>
<p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 11 Mar 2026 13:47:34 +0000</pubDate>
      <author>jeffron@redpoint.com (Jacob Effron, Logan Bartlett, Max Jungestål)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-82-behind-legoras-550m-raise-model-competition-doubling-revenue-every-quarter-us-expansion-NZdtRFa6</link>
      <content:encoded><![CDATA[<p>Max Jungestål, CEO of Legora, joins Jacob Effron and Logan Bartlett to discuss the company's $550M Series D and share a candid account of what building an AI-native company at speed actually looks like from the inside.</p>
<p>Max argues that the AI application layer requires a fundamentally different operating model than traditional SaaS, one built on low ego, constant reinvention, and a willingness to watch nine months of work get washed away by a model update. He walks through how step-function improvements in the underlying models, particularly Opus 4.5 and 4.6, have repeatedly forced Legora to rebuild core product features from scratch, and why he sees that as a feature, not a bug.</p>
<p>On the legal industry, Max offers a ground-level view of how AI is actually diffusing through law firms, less through top-down mandates and more through competitive pressure between firms and, increasingly, from enterprise clients demanding efficiency from their outside counsel. He pushes back on the viability of AI-native law firms, dismisses outcome-based pricing as harder than it looks, and makes the case for why foundation model competition creates tailwinds rather than threats for a company with Legora's depth.</p>
<p>The episode closes with a detailed look at the US expansion strategy, including the deliberate cultural decisions, like flying all New York hires to Stockholm for onboarding, that Max believes are the real source of Legora's compounding advantage.</p>
<p> </p>
<p>[0:00] Intro</p>
<p>[1:16] Legora's Series D Story</p>
<p>[3:24] Why You Need Low Ego to Build in AI</p>
<p>[5:58] From 60% to 100% Accuracy in One Summer</p>
<p>[7:04] Law Firm Economics Shift</p>
<p>[14:09] Pricing Seats Vs Outcomes</p>
<p>[18:31] Why Foundation Models Entering Legal Helps Legora</p>
<p>[30:10] Convincing a 75-Year-Old Partner to Go All In</p>
<p>[33:02] Hiring Legal Engineers</p>
<p>[34:32] Running an AI-Native Company</p>
<p>[35:57] The Opus 4.5 Christmas Breakthrough</p>
<p>[40:02] Building With Customers</p>
<p>[44:01] All In On US Expansion</p>
<p>[51:22] Stockholm Startup DNA</p>
<p> </p>
<p>With your co-hosts: </p>
<p>@jacobeffron </p>
<p>- Partner at Redpoint, Former PM Flatiron Health </p>
<p>@patrickachase </p>
<p>- Partner at Redpoint, Former ML Engineer LinkedIn </p>
<p>@ericabrescia </p>
<p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p>
<p>@jordan_segall </p>
<p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="52316877" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/b3414ac6-61c8-4752-8722-491e1457c3bf/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/audio/group/c8ab69a1-93c2-4743-a97f-1cb1d271a433/group-item/0cd4b084-ae1a-427d-87dd-bececcd09b57/128_default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 82: Behind Legora&apos;s $550M Raise, Model Competition, Doubling Revenue Every Quarter, &amp; US Expansion</itunes:title>
      <itunes:author>Jacob Effron, Logan Bartlett, Max Jungestål</itunes:author>
      <itunes:duration>00:54:29</itunes:duration>
      <itunes:summary>Max Jungestål, CEO of Legora, joins Jacob Effron and Logan Bartlett to discuss the company&apos;s $550M Series D and share a candid account of what building an AI-native company at speed actually looks like from the inside.

Max argues that the AI application layer requires a fundamentally different operating model than traditional SaaS, one built on low ego, constant reinvention, and a willingness to watch nine months of work get washed away by a model update. He walks through how step-function improvements in the underlying models, particularly Opus 4.5 and 4.6, have repeatedly forced Legora to rebuild core product features from scratch, and why he sees that as a feature, not a bug.

On the legal industry, Max offers a ground-level view of how AI is actually diffusing through law firms, less through top-down mandates and more through competitive pressure between firms and, increasingly, from enterprise clients demanding efficiency from their outside counsel. He pushes back on the viability of AI-native law firms, dismisses outcome-based pricing as harder than it looks, and makes the case for why foundation model competition creates tailwinds rather than threats for a company with Legora&apos;s depth.

The episode closes with a detailed look at the US expansion strategy, including the deliberate cultural decisions, like flying all New York hires to Stockholm for onboarding, that Max believes are the real source of Legora&apos;s compounding advantage.</itunes:summary>
      <itunes:subtitle>Max Jungestål, CEO of Legora, joins Jacob Effron and Logan Bartlett to discuss the company&apos;s $550M Series D and share a candid account of what building an AI-native company at speed actually looks like from the inside.

Max argues that the AI application layer requires a fundamentally different operating model than traditional SaaS, one built on low ego, constant reinvention, and a willingness to watch nine months of work get washed away by a model update. He walks through how step-function improvements in the underlying models, particularly Opus 4.5 and 4.6, have repeatedly forced Legora to rebuild core product features from scratch, and why he sees that as a feature, not a bug.

On the legal industry, Max offers a ground-level view of how AI is actually diffusing through law firms, less through top-down mandates and more through competitive pressure between firms and, increasingly, from enterprise clients demanding efficiency from their outside counsel. He pushes back on the viability of AI-native law firms, dismisses outcome-based pricing as harder than it looks, and makes the case for why foundation model competition creates tailwinds rather than threats for a company with Legora&apos;s depth.

The episode closes with a detailed look at the US expansion strategy, including the deliberate cultural decisions, like flying all New York hires to Stockholm for onboarding, that Max believes are the real source of Legora&apos;s compounding advantage.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>82</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1219879d-79bf-4e5b-8206-95051d4eebe8</guid>
      <title>Ep 81: Ex-OpenAI Researcher On Why He Left, His Honest AGI Timeline, &amp; The Limits of Scaling RL</title>
      <description><![CDATA[<p>This episode features Jerry Tworek, a key architect behind OpenAI's breakthrough reasoning models (o1, o3) and Codex, discussing the current state and future of AI. Jerry explores the real limits and promise of scaling pre-training and reinforcement learning, arguing that while these paradigms deliver predictable improvements, they're fundamentally constrained by data availability and struggle with generalization beyond their training objectives. He reveals his updated belief that continual learning—the ability for models to update themselves based on failure and work through problems autonomously—is necessary for AGI, as current models hit walls and become "hopeless" when stuck. Jerry discusses the convergence of major labs toward similar approaches driven by economic forces, the tension between exploration and exploitation in research, and why he left OpenAI to pursue new research directions. He offers candid insights on the competitive dynamics between labs, the focus required to win in specific domains like coding, what makes great AI researchers, and his surprisingly near-term predictions for robotics (2-3 years) while warning about the societal implications of widespread work automation that we're not adequately preparing for.</p><p> </p><p>(0:00) Intro<br />(1:26) Scaling Paradigms in AI<br />(3:36) Challenges in Reinforcement Learning<br />(11:48) AGI Timelines<br />(18:36) Converging Labs<br />(25:05) Jerry’s Departure from OpenAI<br />(31:18) Pivotal Decisions in OpenAI’s Journey<br />(35:06) Balancing Research and Product Development<br />(38:42) The Future of AI Coding<br />(41:33) Specialization vs. Generalization in AI<br />(48:47) Hiring and Building Research Teams<br />(55:21) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Thu, 29 Jan 2026 15:13:20 +0000</pubDate>
      <author>jeffron@redpoint.com (Jerry Tworek, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-81-ex-openai-researcher-on-why-he-left-his-honest-agi-timeline-the-limits-of-scaling-rl-v8n6gexx-fZAknHYv</link>
      <content:encoded><![CDATA[<p>This episode features Jerry Tworek, a key architect behind OpenAI's breakthrough reasoning models (o1, o3) and Codex, discussing the current state and future of AI. Jerry explores the real limits and promise of scaling pre-training and reinforcement learning, arguing that while these paradigms deliver predictable improvements, they're fundamentally constrained by data availability and struggle with generalization beyond their training objectives. He reveals his updated belief that continual learning—the ability for models to update themselves based on failure and work through problems autonomously—is necessary for AGI, as current models hit walls and become "hopeless" when stuck. Jerry discusses the convergence of major labs toward similar approaches driven by economic forces, the tension between exploration and exploitation in research, and why he left OpenAI to pursue new research directions. He offers candid insights on the competitive dynamics between labs, the focus required to win in specific domains like coding, what makes great AI researchers, and his surprisingly near-term predictions for robotics (2-3 years) while warning about the societal implications of widespread work automation that we're not adequately preparing for.</p><p> </p><p>(0:00) Intro<br />(1:26) Scaling Paradigms in AI<br />(3:36) Challenges in Reinforcement Learning<br />(11:48) AGI Timelines<br />(18:36) Converging Labs<br />(25:05) Jerry’s Departure from OpenAI<br />(31:18) Pivotal Decisions in OpenAI’s Journey<br />(35:06) Balancing Research and Product Development<br />(38:42) The Future of AI Coding<br />(41:33) Specialization vs. Generalization in AI<br />(48:47) Hiring and Building Research Teams<br />(55:21) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="60354697" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/4b81d86c-802e-49d5-82c2-8973e1b0ca79/audio/f0fa8cb7-9fe5-488a-bea3-c4ef87cfbc47/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 81: Ex-OpenAI Researcher On Why He Left, His Honest AGI Timeline, &amp; The Limits of Scaling RL</itunes:title>
      <itunes:author>Jerry Tworek, Jacob Effron</itunes:author>
      <itunes:duration>01:02:52</itunes:duration>
      <itunes:summary>This episode features Jerry Tworek, a key architect behind OpenAI&apos;s breakthrough reasoning models (o1, o3) and Codex, discussing the current state and future of AI. Jerry explores the real limits and promise of scaling pre-training and reinforcement learning, arguing that while these paradigms deliver predictable improvements, they&apos;re fundamentally constrained by data availability and struggle with generalization beyond their training objectives. He reveals his updated belief that continual learning—the ability for models to update themselves based on failure and work through problems autonomously—is necessary for AGI, as current models hit walls and become &quot;hopeless&quot; when stuck. Jerry discusses the convergence of major labs toward similar approaches driven by economic forces, the tension between exploration and exploitation in research, and why he left OpenAI to pursue new research directions. He offers candid insights on the competitive dynamics between labs, the focus required to win in specific domains like coding, what makes great AI researchers, and his surprisingly near-term predictions for robotics (2-3 years) while warning about the societal implications of widespread work automation that we&apos;re not adequately preparing for.</itunes:summary>
      <itunes:subtitle>This episode features Jerry Tworek, a key architect behind OpenAI&apos;s breakthrough reasoning models (o1, o3) and Codex, discussing the current state and future of AI. Jerry explores the real limits and promise of scaling pre-training and reinforcement learning, arguing that while these paradigms deliver predictable improvements, they&apos;re fundamentally constrained by data availability and struggle with generalization beyond their training objectives. He reveals his updated belief that continual learning—the ability for models to update themselves based on failure and work through problems autonomously—is necessary for AGI, as current models hit walls and become &quot;hopeless&quot; when stuck. Jerry discusses the convergence of major labs toward similar approaches driven by economic forces, the tension between exploration and exploitation in research, and why he left OpenAI to pursue new research directions. He offers candid insights on the competitive dynamics between labs, the focus required to win in specific domains like coding, what makes great AI researchers, and his surprisingly near-term predictions for robotics (2-3 years) while warning about the societal implications of widespread work automation that we&apos;re not adequately preparing for.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>81</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">4ddfe5c3-038d-4b0c-bcda-33588ac705d9</guid>
      <title>AI Vibe Check: The Actual Bottleneck In Research, SSI’s Mystique, &amp; Spicy 2026 Predictions</title>
      <description><![CDATA[<p>Ari Morcos and Rob Toews return for their spiciest conversation yet. Fresh from NeurIPS, they debate whether models are truly plateauing or if we're just myopically focused on LLMs while breakthroughs happen in other modalities.</p><p>They reveal why infinite capital at labs may actually constrain innovation, explain the narrow "Goldilocks zone" where RL actually works, and argue why U.S. chip restrictions may have backfired catastrophically—accelerating China's path to self-sufficiency by a decade. The conversation covers OpenAI's code red moment and structural vulnerabilities, the mystique surrounding SSI and Ilya's "two words," and why the real bottleneck in AI research is compute, not ideas.</p><p>The episode closes with bold 2026 predictions: Rob forecasts Sam Altman won't be OpenAI's CEO by year-end, while Ari gives 50%+ odds a Chinese open-source model will be the world's best at least once next year.</p><p> </p><p>(0:00) Intro<br />(1:51) Reflections on NeurIPS Conference<br />(5:14) Are AI Models Plateauing?<br />(11:12) Reinforcement Learning and Enterprise Adoption<br />(16:16) Future Research Vectors in AI<br />(28:40) The Role of Neo Labs<br />(39:35) The Myth of the Great Man Theory in Science<br />(41:47) OpenAI's Code Red and Market Position<br />(47:19) Disney and OpenAI's Strategic Partnership<br />(51:28) Meta's Super Intelligence Team Challenges<br />(54:33) US-China AI Chip Dynamics<br />(1:00:54) Amazon's Nova Forge and Enterprise AI<br />(1:03:38) End of Year Reflections and Predictions</p><p> </p><p>With your co-hosts:</p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health</p><p>@patrickachase  </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@ericabrescia  </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)</p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Thu, 18 Dec 2025 16:57:44 +0000</pubDate>
      <author>jeffron@redpoint.com (Rob Toews, Ari Morcos, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ai-vibe-check-the-actual-bottleneck-in-research-ssis-mystique-spicy-2026-predictions-OsWTj5OH</link>
      <content:encoded><![CDATA[<p>Ari Morcos and Rob Toews return for their spiciest conversation yet. Fresh from NeurIPS, they debate whether models are truly plateauing or if we're just myopically focused on LLMs while breakthroughs happen in other modalities.</p><p>They reveal why infinite capital at labs may actually constrain innovation, explain the narrow "Goldilocks zone" where RL actually works, and argue why U.S. chip restrictions may have backfired catastrophically—accelerating China's path to self-sufficiency by a decade. The conversation covers OpenAI's code red moment and structural vulnerabilities, the mystique surrounding SSI and Ilya's "two words," and why the real bottleneck in AI research is compute, not ideas.</p><p>The episode closes with bold 2026 predictions: Rob forecasts Sam Altman won't be OpenAI's CEO by year-end, while Ari gives 50%+ odds a Chinese open-source model will be the world's best at least once next year.</p><p> </p><p>(0:00) Intro<br />(1:51) Reflections on NeurIPS Conference<br />(5:14) Are AI Models Plateauing?<br />(11:12) Reinforcement Learning and Enterprise Adoption<br />(16:16) Future Research Vectors in AI<br />(28:40) The Role of Neo Labs<br />(39:35) The Myth of the Great Man Theory in Science<br />(41:47) OpenAI's Code Red and Market Position<br />(47:19) Disney and OpenAI's Strategic Partnership<br />(51:28) Meta's Super Intelligence Team Challenges<br />(54:33) US-China AI Chip Dynamics<br />(1:00:54) Amazon's Nova Forge and Enterprise AI<br />(1:03:38) End of Year Reflections and Predictions</p><p> </p><p>With your co-hosts:</p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health</p><p>@patrickachase  </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@ericabrescia  </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)</p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="74959026" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/62a361aa-5980-4a95-a00f-2868d388477f/audio/299ff012-46a2-4bf2-a04a-22e0553ab7f7/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>AI Vibe Check: The Actual Bottleneck In Research, SSI’s Mystique, &amp; Spicy 2026 Predictions</itunes:title>
      <itunes:author>Rob Toews, Ari Morcos, Jacob Effron</itunes:author>
      <itunes:duration>01:18:04</itunes:duration>
      <itunes:summary>Ari Morcos and Rob Toews return for their spiciest conversation yet. Fresh from NeurIPS, they debate whether models are truly plateauing or if we&apos;re just myopically focused on LLMs while breakthroughs happen in other modalities.

They reveal why infinite capital at labs may actually constrain innovation, explain the narrow &quot;Goldilocks zone&quot; where RL actually works, and argue why U.S. chip restrictions may have backfired catastrophically—accelerating China&apos;s path to self-sufficiency by a decade. The conversation covers OpenAI&apos;s code red moment and structural vulnerabilities, the mystique surrounding SSI and Ilya&apos;s &quot;two words,&quot; and why the real bottleneck in AI research is compute, not ideas.

The episode closes with bold 2026 predictions: Rob forecasts Sam Altman won&apos;t be OpenAI&apos;s CEO by year-end, while Ari gives 50%+ odds a Chinese open-source model will be the world&apos;s best at least once next year.</itunes:summary>
      <itunes:subtitle>Ari Morcos and Rob Toews return for their spiciest conversation yet. Fresh from NeurIPS, they debate whether models are truly plateauing or if we&apos;re just myopically focused on LLMs while breakthroughs happen in other modalities.

They reveal why infinite capital at labs may actually constrain innovation, explain the narrow &quot;Goldilocks zone&quot; where RL actually works, and argue why U.S. chip restrictions may have backfired catastrophically—accelerating China&apos;s path to self-sufficiency by a decade. The conversation covers OpenAI&apos;s code red moment and structural vulnerabilities, the mystique surrounding SSI and Ilya&apos;s &quot;two words,&quot; and why the real bottleneck in AI research is compute, not ideas.

The episode closes with bold 2026 predictions: Rob forecasts Sam Altman won&apos;t be OpenAI&apos;s CEO by year-end, while Ari gives 50%+ odds a Chinese open-source model will be the world&apos;s best at least once next year.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>bonus</itunes:episodeType>
    </item>
    <item>
      <guid isPermaLink="false">4168a2ba-9b7a-4d95-9d99-6d9373b4cec6</guid>
      <title>Ep 80: CEO of Surge AI Edwin Chen on Why Frontier Labs Are Diverging, RL Environments &amp; Developing Model Taste</title>
      <description><![CDATA[<p>Edwin Chen is the founder and CEO of Surge AI, the data infrastructure company behind nearly every major frontier model. Surge works with OpenAI, Anthropic, Meta, and Google, providing the high-quality data and evaluation infrastructure that powers their models. </p><p> </p><p>Edwin reveals why optimizing for popular benchmarks like LMArena is "basically optimizing for clickbait," how one frontier lab's models regressed for 6-12 months without anyone knowing, and why the industry's approach to measurement is fundamentally broken. Jacob and Edwin discuss what actually makes elite AI evaluators, why "there's never going to be a one size fits all solution" for AI models, and how frontier labs are taking surprisingly divergent paths to AGI.</p><p> </p><p>(0:00) Intro<br />(0:56) The Pitfalls of Optimizing for LMArena<br />(4:34) Issues with Data Quality and Measurement<br />(9:44) The Importance of Human Evaluations<br />(13:40) The Rise of RL Environments<br />(17:21) Challenges and Lessons in Model Training<br />(19:59) Silicon Valley's Pivot Culture<br />(23:06) Technology-Driven Approach<br />(24:18) Quality Beyond Credentials<br />(27:51) Impact of Scale Acquisition<br />(28:35) Hiring for Research Culture<br />(30:48) Divergence in AI Training Paradigms<br />(34:16) Future of AI Models<br />(39:32) Multimodal AI and Quality<br />(43:44) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Mon, 15 Dec 2025 13:50:20 +0000</pubDate>
      <author>jeffron@redpoint.com (Edwin Chen, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-80-ceo-of-surge-ai-edwin-chen-on-why-frontier-labs-are-diverging-rl-environments-developing-model-taste-_RAZbWeP</link>
      <content:encoded><![CDATA[<p>Edwin Chen is the founder and CEO of Surge AI, the data infrastructure company behind nearly every major frontier model. Surge works with OpenAI, Anthropic, Meta, and Google, providing the high-quality data and evaluation infrastructure that powers their models. </p><p> </p><p>Edwin reveals why optimizing for popular benchmarks like LMArena is "basically optimizing for clickbait," how one frontier lab's models regressed for 6-12 months without anyone knowing, and why the industry's approach to measurement is fundamentally broken. Jacob and Edwin discuss what actually makes elite AI evaluators, why "there's never going to be a one size fits all solution" for AI models, and how frontier labs are taking surprisingly divergent paths to AGI.</p><p> </p><p>(0:00) Intro<br />(0:56) The Pitfalls of Optimizing for LMArena<br />(4:34) Issues with Data Quality and Measurement<br />(9:44) The Importance of Human Evaluations<br />(13:40) The Rise of RL Environments<br />(17:21) Challenges and Lessons in Model Training<br />(19:59) Silicon Valley's Pivot Culture<br />(23:06) Technology-Driven Approach<br />(24:18) Quality Beyond Credentials<br />(27:51) Impact of Scale Acquisition<br />(28:35) Hiring for Research Culture<br />(30:48) Divergence in AI Training Paradigms<br />(34:16) Future of AI Models<br />(39:32) Multimodal AI and Quality<br />(43:44) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="46103921" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/90a74f7d-fd57-41fa-8f5d-f56c0ae9c860/audio/11d3a2e1-6726-4865-9c4f-245894b4f3b9/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 80: CEO of Surge AI Edwin Chen on Why Frontier Labs Are Diverging, RL Environments &amp; Developing Model Taste</itunes:title>
      <itunes:author>Edwin Chen, Jacob Effron</itunes:author>
      <itunes:duration>00:48:01</itunes:duration>
      <itunes:summary>Edwin Chen is the founder and CEO of Surge AI, the data infrastructure company behind nearly every major frontier model. Surge works with OpenAI, Anthropic, Meta, and Google, providing the high-quality data and evaluation infrastructure that powers their models.

Edwin reveals why optimizing for popular benchmarks like LMArena is &quot;basically optimizing for clickbait,&quot; how one frontier lab&apos;s models regressed for 6-12 months without anyone knowing, and why the industry&apos;s approach to measurement is fundamentally broken. Jacob and Edwin discuss what actually makes elite AI evaluators, why &quot;there&apos;s never going to be a one size fits all solution&quot; for AI models, and how frontier labs are taking surprisingly divergent paths to AGI.</itunes:summary>
      <itunes:subtitle>Edwin Chen is the founder and CEO of Surge AI, the data infrastructure company behind nearly every major frontier model. Surge works with OpenAI, Anthropic, Meta, and Google, providing the high-quality data and evaluation infrastructure that powers their models.

Edwin reveals why optimizing for popular benchmarks like LMArena is &quot;basically optimizing for clickbait,&quot; how one frontier lab&apos;s models regressed for 6-12 months without anyone knowing, and why the industry&apos;s approach to measurement is fundamentally broken. Jacob and Edwin discuss what actually makes elite AI evaluators, why &quot;there&apos;s never going to be a one size fits all solution&quot; for AI models, and how frontier labs are taking surprisingly divergent paths to AGI.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>80</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">51a70a67-1df4-4fef-9cb5-16c1c237fff8</guid>
      <title>Ep 79: OpenAI&apos;s Head of Product on How the Best Teams Build, Ship and Scale AI Products</title>
      <description><![CDATA[<p>This episode features Olivier Godement, Head of Product for Business Products at OpenAI, discussing the current state and future of AI adoption in enterprises, with a particular focus on the recent releases of GPT-5.1 and Codex. The conversation explores how these models are achieving meaningful automation in specific domains like coding, customer support, and life sciences, where companies like Amgen are using AI to accelerate drug development timelines from months to weeks through automated regulatory documentation. Olivier reveals that while complete job automation remains challenging and requires substantial scaffolding, harnesses, and evaluation frameworks, certain use cases like coding are reaching a tipping point where engineers would "riot" if AI tools were taken away. The discussion covers the importance of cost reduction in unlocking new use cases, the emerging significance of reinforcement fine-tuning (RFT) for frontier customers, and OpenAI's philosophy of providing not just models but reference architectures and harnesses to maximize developer success.</p><p> </p><p>(0:00) Intro<br />(1:46) Discussing GPT-5.1<br />(2:57) Adoption and Impact of Codex<br />(4:09) Scientific Community's Use of GPT-5.1<br />(6:37) Challenges in AI Automation<br />(8:19) AI in Life Sciences and Pharma<br />(11:48) Enterprise AI Adoption and Ecosystem<br />(16:04) Future of AI Models and Continuous Learning<br />(24:20) Cost and Efficiency in AI Deployment<br />(27:10) Reinforcement Learning and Enterprise Use Cases<br />(31:17) Key Factors Influencing Model Choice<br />(34:21) Challenges in Model Deployment and Adaptation<br />(38:29) Voice Technology: The Next Frontier<br />(41:08) The Rise of AI in Software Engineering<br />(52:09) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 10 Dec 2025 13:56:48 +0000</pubDate>
      <author>jeffron@redpoint.com (Olivier Godement, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-79-openais-head-of-product-on-how-the-best-teams-build-ship-and-scale-ai-products-q_iDa_wK</link>
      <content:encoded><![CDATA[<p>This episode features Olivier Godement, Head of Product for Business Products at OpenAI, discussing the current state and future of AI adoption in enterprises, with a particular focus on the recent releases of GPT-5.1 and Codex. The conversation explores how these models are achieving meaningful automation in specific domains like coding, customer support, and life sciences, where companies like Amgen are using AI to accelerate drug development timelines from months to weeks through automated regulatory documentation. Olivier reveals that while complete job automation remains challenging and requires substantial scaffolding, harnesses, and evaluation frameworks, certain use cases like coding are reaching a tipping point where engineers would "riot" if AI tools were taken away. The discussion covers the importance of cost reduction in unlocking new use cases, the emerging significance of reinforcement fine-tuning (RFT) for frontier customers, and OpenAI's philosophy of providing not just models but reference architectures and harnesses to maximize developer success.</p><p> </p><p>(0:00) Intro<br />(1:46) Discussing GPT-5.1<br />(2:57) Adoption and Impact of Codex<br />(4:09) Scientific Community's Use of GPT-5.1<br />(6:37) Challenges in AI Automation<br />(8:19) AI in Life Sciences and Pharma<br />(11:48) Enterprise AI Adoption and Ecosystem<br />(16:04) Future of AI Models and Continuous Learning<br />(24:20) Cost and Efficiency in AI Deployment<br />(27:10) Reinforcement Learning and Enterprise Use Cases<br />(31:17) Key Factors Influencing Model Choice<br />(34:21) Challenges in Model Deployment and Adaptation<br />(38:29) Voice Technology: The Next Frontier<br />(41:08) The Rise of AI in Software Engineering<br />(52:09) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="54026338" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/8f4f1836-cefe-405c-8a9d-e8129adc2f6b/audio/2830e39c-c511-4de4-8f7d-0a64daa20c4e/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 79: OpenAI&apos;s Head of Product on How the Best Teams Build, Ship and Scale AI Products</itunes:title>
      <itunes:author>Olivier Godement, Jacob Effron</itunes:author>
      <itunes:duration>00:56:16</itunes:duration>
      <itunes:summary>This episode features Olivier Godement, Head of Product for Business Products at OpenAI, discussing the current state and future of AI adoption in enterprises, with a particular focus on the recent releases of GPT-5.1 and Codex. The conversation explores how these models are achieving meaningful automation in specific domains like coding, customer support, and life sciences, where companies like Amgen are using AI to accelerate drug development timelines from months to weeks through automated regulatory documentation. Olivier reveals that while complete job automation remains challenging and requires substantial scaffolding, harnesses, and evaluation frameworks, certain use cases like coding are reaching a tipping point where engineers would &quot;riot&quot; if AI tools were taken away. The discussion covers the importance of cost reduction in unlocking new use cases, the emerging significance of reinforcement fine-tuning (RFT) for frontier customers, and OpenAI&apos;s philosophy of providing not just models but reference architectures and harnesses to maximize developer success.</itunes:summary>
      <itunes:subtitle>This episode features Olivier Godement, Head of Product for Business Products at OpenAI, discussing the current state and future of AI adoption in enterprises, with a particular focus on the recent releases of GPT-5.1 and Codex. The conversation explores how these models are achieving meaningful automation in specific domains like coding, customer support, and life sciences, where companies like Amgen are using AI to accelerate drug development timelines from months to weeks through automated regulatory documentation. Olivier reveals that while complete job automation remains challenging and requires substantial scaffolding, harnesses, and evaluation frameworks, certain use cases like coding are reaching a tipping point where engineers would &quot;riot&quot; if AI tools were taken away. The discussion covers the importance of cost reduction in unlocking new use cases, the emerging significance of reinforcement fine-tuning (RFT) for frontier customers, and OpenAI&apos;s philosophy of providing not just models but reference architectures and harnesses to maximize developer success.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>79</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d9bea12f-a204-48de-931c-bc917c1432ff</guid>
      <title>Ep 78: Jordan Schneider, Host of China Talk, on AI Race, Key Policy Decisions &amp; Unpacking Geopolitical Chip Tension</title>
      <description><![CDATA[<p>This week on Unsupervised Learning, Jacob Effron is joined by Jordan Schneider, host of China Talk, who challenges widespread assumptions about US-China AI competition. China's AI development is driven by private capital and market competition—not central government planning—with companies like DeepSeek, Alibaba, and ByteDance operating more like Silicon Valley startups than state projects. The critical bottleneck is compute: the West maintains a 10-15x advantage in advanced chips, and US export controls implemented one month before ChatGPT created a structural edge favoring America for years. Chinese companies aggressively open-source models from strategic necessity—they couldn't establish a quality gap justifying paid access like OpenAI. Jordan explains why the "Goldilocks strategy" of controlled chip dependency fails, why expert consensus opposes selling advanced semiconductors to China despite Nvidia's lobbying, and how Taiwan's invasion risk is driven more by domestic politics than AGI scenarios. 
China's real advantage may emerge in robotics manufacturing at scale, where they're already deploying while the US debates strategy.</p><p> </p><p>Inside the Politburo's AI Study Session: https://www.chinatalk.media/p/xi-takes-an-ai-masterclass</p><p>Submit your questions to Jacob here: https://docs.google.com/forms/d/1vHBYv0bTT_EgFWTjbKnLr_sn3pZnFmcFGWYVTltKEco/edit</p><p> </p><p>(0:00) Intro<br />(1:45) The Chinese AI Ecosystem: Pre and Post ChatGPT<br />(3:45) Government Influence and Private Sector Dynamics<br />(6:40) Venture Funding and Major Players<br />(8:36) Talent and International Collaboration<br />(11:25) Open Source Models and Market Dynamics<br />(15:24) What Role Does The Chinese Government Play?<br />(31:17) US-China AI Policy and Strategic Competition<br />(36:18) The Argument for Selling AI Accelerators<br />(37:02) Risks of Not Selling to China<br />(43:34) Technological Constraints and Huawei's Challenges<br />(51:18) US-China Relations and Taiwan<br />(1:02:46) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Fri, 5 Dec 2025 18:52:26 +0000</pubDate>
      <author>jeffron@redpoint.com (Jordan Schneider, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-78-jordan-schneider-host-of-china-talk-on-ai-race-key-policy-decisions-unpacking-geopolitical-chip-tension-ZpR9EnsU</link>
      <content:encoded><![CDATA[<p>This week on Unsupervised Learning, Jacob Effron is joined by Jordan Schneider, host of China Talk, who challenges widespread assumptions about US-China AI competition. China's AI development is driven by private capital and market competition—not central government planning—with companies like DeepSeek, Alibaba, and ByteDance operating more like Silicon Valley startups than state projects. The critical bottleneck is compute: the West maintains a 10-15x advantage in advanced chips, and US export controls implemented one month before ChatGPT created a structural edge favoring America for years. Chinese companies aggressively open-source models from strategic necessity—they couldn't establish a quality gap justifying paid access like OpenAI. Jordan explains why the "Goldilocks strategy" of controlled chip dependency fails, why expert consensus opposes selling advanced semiconductors to China despite Nvidia's lobbying, and how Taiwan's invasion risk is driven more by domestic politics than AGI scenarios. 
China's real advantage may emerge in robotics manufacturing at scale, where they're already deploying while the US debates strategy.</p><p> </p><p>Inside the Politburo's AI Study Session: https://www.chinatalk.media/p/xi-takes-an-ai-masterclass</p><p>Submit your questions to Jacob here: https://docs.google.com/forms/d/1vHBYv0bTT_EgFWTjbKnLr_sn3pZnFmcFGWYVTltKEco/edit</p><p> </p><p>(0:00) Intro<br />(1:45) The Chinese AI Ecosystem: Pre and Post ChatGPT<br />(3:45) Government Influence and Private Sector Dynamics<br />(6:40) Venture Funding and Major Players<br />(8:36) Talent and International Collaboration<br />(11:25) Open Source Models and Market Dynamics<br />(15:24) What Role Does The Chinese Government Play?<br />(31:17) US-China AI Policy and Strategic Competition<br />(36:18) The Argument for Selling AI Accelerators<br />(37:02) Risks of Not Selling to China<br />(43:34) Technological Constraints and Huawei's Challenges<br />(51:18) US-China Relations and Taiwan<br />(1:02:46) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="70443777" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/f56a3b36-3a5e-4ec5-9ee1-3b7d07c2c421/audio/88363033-00f2-4d72-8352-851f2131fa43/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 78: Jordan Schneider, Host of China Talk, on AI Race, Key Policy Decisions &amp; Unpacking Geopolitical Chip Tension</itunes:title>
      <itunes:author>Jordan Schneider, Jacob Effron</itunes:author>
      <itunes:duration>01:13:22</itunes:duration>
      <itunes:summary>This week on Unsupervised Learning, Jacob Effron is joined by Jordan Schneider, host of China Talk, who challenges widespread assumptions about US-China AI competition. China&apos;s AI development is driven by private capital and market competition—not central government planning—with companies like DeepSeek, Alibaba, and ByteDance operating more like Silicon Valley startups than state projects. The critical bottleneck is compute: the West maintains a 10-15x advantage in advanced chips, and US export controls implemented one month before ChatGPT created a structural edge favoring America for years. Chinese companies aggressively open-source models from strategic necessity—they couldn&apos;t establish a quality gap justifying paid access like OpenAI. Jordan explains why the &quot;Goldilocks strategy&quot; of controlled chip dependency fails, why expert consensus opposes selling advanced semiconductors to China despite Nvidia&apos;s lobbying, and how Taiwan&apos;s invasion risk is driven more by domestic politics than AGI scenarios. China&apos;s real advantage may emerge in robotics manufacturing at scale, where they&apos;re already deploying while the US debates strategy.</itunes:summary>
      <itunes:subtitle>This week on Unsupervised Learning, Jacob Effron is joined by Jordan Schneider, host of China Talk, who challenges widespread assumptions about US-China AI competition. China&apos;s AI development is driven by private capital and market competition—not central government planning—with companies like DeepSeek, Alibaba, and ByteDance operating more like Silicon Valley startups than state projects. The critical bottleneck is compute: the West maintains a 10-15x advantage in advanced chips, and US export controls implemented one month before ChatGPT created a structural edge favoring America for years. Chinese companies aggressively open-source models from strategic necessity—they couldn&apos;t establish a quality gap justifying paid access like OpenAI. Jordan explains why the &quot;Goldilocks strategy&quot; of controlled chip dependency fails, why expert consensus opposes selling advanced semiconductors to China despite Nvidia&apos;s lobbying, and how Taiwan&apos;s invasion risk is driven more by domestic politics than AGI scenarios. China&apos;s real advantage may emerge in robotics manufacturing at scale, where they&apos;re already deploying while the US debates strategy.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>78</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">63ab986f-88b1-4825-9878-f9a834b76db6</guid>
      <title>Ep 77: Anthropic’s Dianne Na Penn on Opus 4.5, Rethinking Model Scaffolding &amp; Safety as a Competitive Advantage</title>
      <description><![CDATA[<p>This episode features Dianne Na Penn, a senior product leader at Anthropic, discussing the launch of Claude Opus 4.5 and the evolution of frontier AI models. The conversation explores how Anthropic approaches model development—balancing ambitious capability roadmaps with user feedback, making strategic bets on areas like agentic coding and computer use while deliberately avoiding others like image generation. Dianne shares insights on the shifting nature of AI evaluation (moving beyond saturated benchmarks like SWE-bench toward more open-ended measures), the evolution of scaffolding from "training wheels" to intelligence amplifiers, and why she believes we're closer to transformative long-running AI than most people think. She also discusses Anthropic's distinctive culture of authenticity, the underappreciated benefits of model alignment for producing independent-thinking AI, and why the real bottleneck to AI agents isn't model capability anymore but product innovation.</p><p> </p><p>(0:00) Intro</p><p>(0:57) Starting the Work on Opus 4.5</p><p>(2:04) Model Capabilities and Surprises</p><p>(5:59) Computer Use and Practical Applications</p><p>(7:21) Pricing and Positioning</p><p>(10:02) Customer Feedback and Early Access</p><p>(16:44) The Reality of Enterprise Agents</p><p>(18:47) Future of AI and Long-Running Intelligence</p><p>(28:06) Anthropic's Culture and Decision Making</p><p>(30:31) Key Decisions and Fun Moments</p><p>(33:45) Quickfire</p><p><br /> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 2 Dec 2025 15:19:11 +0000</pubDate>
      <author>jeffron@redpoint.com (Dianne Na Penn, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-77-anthropics-dianne-na-penn-on-opus-45-rethinking-model-scaffolding-safety-as-a-competitive-advantage-ByQsRs77</link>
      <content:encoded><![CDATA[<p>This episode features Dianne Na Penn, a senior product leader at Anthropic, discussing the launch of Claude Opus 4.5 and the evolution of frontier AI models. The conversation explores how Anthropic approaches model development—balancing ambitious capability roadmaps with user feedback, making strategic bets on areas like agentic coding and computer use while deliberately avoiding others like image generation. Dianne shares insights on the shifting nature of AI evaluation (moving beyond saturated benchmarks like SWE-bench toward more open-ended measures), the evolution of scaffolding from "training wheels" to intelligence amplifiers, and why she believes we're closer to transformative long-running AI than most people think. She also discusses Anthropic's distinctive culture of authenticity, the underappreciated benefits of model alignment for producing independent-thinking AI, and why the real bottleneck to AI agents isn't model capability anymore but product innovation.</p><p> </p><p>(0:00) Intro</p><p>(0:57) Starting the Work on Opus 4.5</p><p>(2:04) Model Capabilities and Surprises</p><p>(5:59) Computer Use and Practical Applications</p><p>(7:21) Pricing and Positioning</p><p>(10:02) Customer Feedback and Early Access</p><p>(16:44) The Reality of Enterprise Agents</p><p>(18:47) Future of AI and Long-Running Intelligence</p><p>(28:06) Anthropic's Culture and Decision Making</p><p>(30:31) Key Decisions and Fun Moments</p><p>(33:45) Quickfire</p><p><br /> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="40369466" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/7ecad26e-9b38-4494-bd02-0d8c78a9c7b8/audio/5fb0074a-ad7d-4122-9c86-bf52e0188405/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 77: Anthropic’s Dianne Na Penn on Opus 4.5, Rethinking Model Scaffolding &amp; Safety as a Competitive Advantage</itunes:title>
      <itunes:author>Dianne Na Penn, Jacob Effron</itunes:author>
      <itunes:duration>00:42:03</itunes:duration>
      <itunes:summary>This episode features Dianne Na Penn, a senior product leader at Anthropic, discussing the launch of Claude Opus 4.5 and the evolution of frontier AI models. The conversation explores how Anthropic approaches model development—balancing ambitious capability roadmaps with user feedback, making strategic bets on areas like agentic coding and computer use while deliberately avoiding others like image generation. Dianne shares insights on the shifting nature of AI evaluation (moving beyond saturated benchmarks like SWE-bench toward more open-ended measures), the evolution of scaffolding from &quot;training wheels&quot; to intelligence amplifiers, and why she believes we&apos;re closer to transformative long-running AI than most people think. She also discusses Anthropic&apos;s distinctive culture of authenticity, the underappreciated benefits of model alignment for producing independent-thinking AI, and why the real bottleneck to AI agents isn&apos;t model capability anymore but product innovation.</itunes:summary>
      <itunes:subtitle>This episode features Dianne Na Penn, a senior product leader at Anthropic, discussing the launch of Claude Opus 4.5 and the evolution of frontier AI models. The conversation explores how Anthropic approaches model development—balancing ambitious capability roadmaps with user feedback, making strategic bets on areas like agentic coding and computer use while deliberately avoiding others like image generation. Dianne shares insights on the shifting nature of AI evaluation (moving beyond saturated benchmarks like SWE-bench toward more open-ended measures), the evolution of scaffolding from &quot;training wheels&quot; to intelligence amplifiers, and why she believes we&apos;re closer to transformative long-running AI than most people think. She also discusses Anthropic&apos;s distinctive culture of authenticity, the underappreciated benefits of model alignment for producing independent-thinking AI, and why the real bottleneck to AI agents isn&apos;t model capability anymore but product innovation.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>77</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c4c5512a-4d56-4aeb-b42f-b95d56ad6e0b</guid>
      <title>Ep 76: Sora Creators Bill Peebles, Rohan Sahai &amp; Thomas Dimson on Their Unexpected Viral Success</title>
      <description><![CDATA[<p>This episode features the core team behind Sora, OpenAI's groundbreaking video generation platform that became the #1 app in the App Store. Bill Peebles (research lead), Rohan Sahai (product lead), and Thomas Dimson (engineering/product lead with Instagram background) discuss the unexpected viral success of Sora's launch, the product journey that led to the breakthrough "cameo" feature (putting yourself in AI-generated videos), and their philosophy of building a creator-first social network that prioritizes human creativity over passive consumption. They reveal the technical milestones in video generation, their small team size (under 50 people total at launch), navigation of content moderation challenges, early monetization strategy, and their ambitious vision for video models as world simulators that could eventually contribute to scientific breakthroughs by 2028. The conversation captures both the tactical product decisions and strategic philosophy that made Sora a cultural phenomenon.</p><p> </p><p>(0:00) Intro<br />(1:35) Unexpected Success of ChatGPT and Sora<br />(3:55) Sora as an Independent App<br />(5:38) Sora Prototypes and Evolution<br />(8:07) User Creativity and Surprising Use Cases<br />(14:46) Celebrity Engagement and Rights Management<br />(17:58) Competition and Future of AI Video Models<br />(25:42) Empowering Creators<br />(31:21) The Evolution of Image Generation<br />(33:36) How Do Models Need to Improve?<br />(42:10) Monetization of Sora<br />(45:54) Global Reach and Cultural Impact<br />(48:38) Moderation and Safety Challenges<br />(50:09) Integration with Other OpenAI Products<br />(52:07) How do Models Learn Physics?<br />(55:16) Quickfire</p><p> </p><p>With your co-hosts:  </p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health  </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn  </p><p>@ericabrescia  </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)  </p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Mon, 3 Nov 2025 14:42:44 +0000</pubDate>
      <author>jeffron@redpoint.com (Bill Peebles, Rohan Sahai, Thomas Dimson)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-76-sora-creators-bill-peebles-rohan-sahai-thomas-dimson-on-their-unexpected-viral-success-hlfcz1HK</link>
      <content:encoded><![CDATA[<p>This episode features the core team behind Sora, OpenAI's groundbreaking video generation platform that became the #1 app in the App Store. Bill Peebles (research lead), Rohan Sahai (product lead), and Thomas Dimson (engineering/product lead with Instagram background) discuss the unexpected viral success of Sora's launch, the product journey that led to the breakthrough "cameo" feature (putting yourself in AI-generated videos), and their philosophy of building a creator-first social network that prioritizes human creativity over passive consumption. They reveal the technical milestones in video generation, their small team size (under 50 people total at launch), navigation of content moderation challenges, early monetization strategy, and their ambitious vision for video models as world simulators that could eventually contribute to scientific breakthroughs by 2028. The conversation captures both the tactical product decisions and strategic philosophy that made Sora a cultural phenomenon.</p><p> </p><p>(0:00) Intro<br />(1:35) Unexpected Success of ChatGPT and Sora<br />(3:55) Sora as an Independent App<br />(5:38) Sora Prototypes and Evolution<br />(8:07) User Creativity and Surprising Use Cases<br />(14:46) Celebrity Engagement and Rights Management<br />(17:58) Competition and Future of AI Video Models<br />(25:42) Empowering Creators<br />(31:21) The Evolution of Image Generation<br />(33:36) How Do Models Need to Improve?<br />(42:10) Monetization of Sora<br />(45:54) Global Reach and Cultural Impact<br />(48:38) Moderation and Safety Challenges<br />(50:09) Integration with Other OpenAI Products<br />(52:07) How do Models Learn Physics?<br />(55:16) Quickfire</p><p> </p><p>With your co-hosts:  </p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health  </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn  </p><p>@ericabrescia  </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)  </p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="60856666" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/012c3370-f738-4ca2-8e9d-5fc50c5b1f3a/audio/d463b5f6-1d55-4771-93de-00fbe9e45b76/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 76: Sora Creators Bill Peebles, Rohan Sahai &amp; Thomas Dimson on Their Unexpected Viral Success</itunes:title>
      <itunes:author>Bill Peebles, Rohan Sahai, Thomas Dimson</itunes:author>
      <itunes:duration>01:03:23</itunes:duration>
      <itunes:summary>This episode features the core team behind Sora, OpenAI&apos;s groundbreaking video generation platform that became the #1 app in the App Store. Bill Peebles (research lead), Rohan Sahai (product lead), and Thomas Dimson (engineering/product lead with Instagram background) discuss the unexpected viral success of Sora&apos;s launch, the product journey that led to the breakthrough &quot;cameo&quot; feature (putting yourself in AI-generated videos), and their philosophy of building a creator-first social network that prioritizes human creativity over passive consumption. They reveal the technical milestones in video generation, their small team size (under 50 people total at launch), navigation of content moderation challenges, early monetization strategy, and their ambitious vision for video models as world simulators that could eventually contribute to scientific breakthroughs by 2028. The conversation captures both the tactical product decisions and strategic philosophy that made Sora a cultural phenomenon.</itunes:summary>
      <itunes:subtitle>This episode features the core team behind Sora, OpenAI&apos;s groundbreaking video generation platform that became the #1 app in the App Store. Bill Peebles (research lead), Rohan Sahai (product lead), and Thomas Dimson (engineering/product lead with Instagram background) discuss the unexpected viral success of Sora&apos;s launch, the product journey that led to the breakthrough &quot;cameo&quot; feature (putting yourself in AI-generated videos), and their philosophy of building a creator-first social network that prioritizes human creativity over passive consumption. They reveal the technical milestones in video generation, their small team size (under 50 people total at launch), navigation of content moderation challenges, early monetization strategy, and their ambitious vision for video models as world simulators that could eventually contribute to scientific breakthroughs by 2028. The conversation captures both the tactical product decisions and strategic philosophy that made Sora a cultural phenomenon.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>76</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">be47ef22-3265-4dbf-ae0f-dd2d36f46092</guid>
      <title>AI Round Up: Ari Morcos from Datology AI and Rob Toews from Radical VC on Karpathy Reactions, OpenAI’s Dealmaking, &amp; Bubble Reality Check</title>
      <description><![CDATA[<p>This episode features Rob Toews from Radical Ventures and Ari Morcos, Head of Research at Datology AI, reacting to Andrej Karpathy's recent statement that AGI is at least a decade away and that current AI capabilities are "slop." The discussion explores whether we're in an AI bubble, with both guests pushing back on overly bearish narratives while acknowledging legitimate concerns about hype and excessive CapEx spending. They debate the sustainability of AI scaling, examining whether continued progress will come from massive compute increases or from efficiency gains through better data quality, architectural innovations, and post-training techniques like reinforcement learning. The conversation also tackles which companies truly need frontier models versus those that can succeed with slightly-behind-the-curve alternatives, the surprisingly static landscape of AI application categories (coding, healthcare, and legal remain dominant), and emerging opportunities from brain-computer interfaces to more efficient scaling methods.</p><p> </p><p>(0:00) Intro<br />(1:04) Debating the AI Bubble<br />(1:50) Over-Hyping AI: Realities and Misconceptions<br />(3:21) Enterprise AI and Data Center Investments<br />(7:46) Consumer Adoption and Monetization Challenges<br />(8:55) AI in Browsers and the Future of Internet Use<br />(14:37) Deepfakes and Ethical Concerns<br />(26:29) AI's Impact on Job Markets and Training<br />(31:38) Google and Anthropic: Strategic Partnerships<br />(34:51) OpenAI's Strategic Deals and Future Prospects<br />(37:12) The Evolution of Vibe Coding<br />(44:35) AI Outside of San Francisco<br />(48:09) Data Moats in AI Startups<br />(50:38) Comparing AI to the Human Brain<br />(56:07) The Role of Physical Infrastructure in AI<br />(56:55) The Potential of Chinese AI Models<br />(1:03:15) Apple's AI Strategy<br />(1:12:35) The Future of AI Applications</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Fri, 24 Oct 2025 12:56:17 +0000</pubDate>
      <author>jeffron@redpoint.com (Ari Morcos, Rob Toews, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-77-ari-morcos-from-datalogy-ai-and-rob-toews-from-radical-vc-on-ai-round-up-karpathy-reactions-openais-dealmaking-bubble-reality-check-usBQrhMj</link>
      <content:encoded><![CDATA[<p>This episode features Rob Toews from Radical Ventures and Ari Morcos, Head of Research at Datology AI, reacting to Andrej Karpathy's recent statement that AGI is at least a decade away and that current AI capabilities are "slop." The discussion explores whether we're in an AI bubble, with both guests pushing back on overly bearish narratives while acknowledging legitimate concerns about hype and excessive CapEx spending. They debate the sustainability of AI scaling, examining whether continued progress will come from massive compute increases or from efficiency gains through better data quality, architectural innovations, and post-training techniques like reinforcement learning. The conversation also tackles which companies truly need frontier models versus those that can succeed with slightly-behind-the-curve alternatives, the surprisingly static landscape of AI application categories (coding, healthcare, and legal remain dominant), and emerging opportunities from brain-computer interfaces to more efficient scaling methods.</p><p> </p><p>(0:00) Intro<br />(1:04) Debating the AI Bubble<br />(1:50) Over-Hyping AI: Realities and Misconceptions<br />(3:21) Enterprise AI and Data Center Investments<br />(7:46) Consumer Adoption and Monetization Challenges<br />(8:55) AI in Browsers and the Future of Internet Use<br />(14:37) Deepfakes and Ethical Concerns<br />(26:29) AI's Impact on Job Markets and Training<br />(31:38) Google and Anthropic: Strategic Partnerships<br />(34:51) OpenAI's Strategic Deals and Future Prospects<br />(37:12) The Evolution of Vibe Coding<br />(44:35) AI Outside of San Francisco<br />(48:09) Data Moats in AI Startups<br />(50:38) Comparing AI to the Human Brain<br />(56:07) The Role of Physical Infrastructure in AI<br />(56:55) The Potential of Chinese AI Models<br />(1:03:15) Apple's AI Strategy<br />(1:12:35) The Future of AI Applications</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="73814236" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/7b41d5be-bc65-408d-a357-966da6ff2dd3/audio/3fe8826b-3e5e-448c-a6b7-4d06c208ad6d/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>AI Round Up: Ari Morcos from Datology AI and Rob Toews from Radical VC on Karpathy Reactions, OpenAI’s Dealmaking, &amp; Bubble Reality Check</itunes:title>
      <itunes:author>Ari Morcos, Rob Toews, Jacob Effron</itunes:author>
      <itunes:duration>01:16:53</itunes:duration>
      <itunes:summary>This episode features Rob Toews from Radical Ventures and Ari Morcos, Head of Research at Datology AI, reacting to Andrej Karpathy&apos;s recent statement that AGI is at least a decade away and that current AI capabilities are &quot;slop.&quot; The discussion explores whether we&apos;re in an AI bubble, with both guests pushing back on overly bearish narratives while acknowledging legitimate concerns about hype and excessive CapEx spending. They debate the sustainability of AI scaling, examining whether continued progress will come from massive compute increases or from efficiency gains through better data quality, architectural innovations, and post-training techniques like reinforcement learning. The conversation also tackles which companies truly need frontier models versus those that can succeed with slightly-behind-the-curve alternatives, the surprisingly static landscape of AI application categories (coding, healthcare, and legal remain dominant), and emerging opportunities from brain-computer interfaces to more efficient scaling methods.</itunes:summary>
      <itunes:subtitle>This episode features Rob Toews from Radical Ventures and Ari Morcos, Head of Research at Datology AI, reacting to Andrej Karpathy&apos;s recent statement that AGI is at least a decade away and that current AI capabilities are &quot;slop.&quot; The discussion explores whether we&apos;re in an AI bubble, with both guests pushing back on overly bearish narratives while acknowledging legitimate concerns about hype and excessive CapEx spending. They debate the sustainability of AI scaling, examining whether continued progress will come from massive compute increases or from efficiency gains through better data quality, architectural innovations, and post-training techniques like reinforcement learning. The conversation also tackles which companies truly need frontier models versus those that can succeed with slightly-behind-the-curve alternatives, the surprisingly static landscape of AI application categories (coding, healthcare, and legal remain dominant), and emerging opportunities from brain-computer interfaces to more efficient scaling methods.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>bonus</itunes:episodeType>
    </item>
    <item>
      <guid isPermaLink="false">062052e8-22eb-4f03-a639-1e5ad7535f51</guid>
      <title>AI Round Up: Ari Morcos from Datology AI and Rob Toews from Radical VC on AI Talent Wars, xAI’s $200B Valuation, &amp; Google’s Comeback</title>
      <description><![CDATA[<p>This episode features a deep dive into the current state of AI model progress with Ari Morcos (CEO of Datology AI and former DeepMind/Meta researcher) and Rob Toews (partner at Radical Ventures). The conversation tackles whether model progress is genuinely slowing down or simply shifting into new paradigms, exploring the role of reinforcement learning in scaling capabilities beyond traditional pre-training. They examine the talent wars reshaping AI labs, Google's resurgence with Gemini, the sustainability of massive valuations for companies like OpenAI and Anthropic, and the infrastructure ecosystem supporting this rapid evolution. The discussion weaves together technical insights on data quality, synthetic data generation, and RL environments with strategic perspectives on acquisitions, regulatory challenges, and the future intersection of AI with physical robotics and brain-computer interfaces.</p><p> </p><p>(0:00) Intro<br />(2:59) Debate on Model Progress<br />(8:03) Challenges in AI Generalization and RL Environments<br />(15:44) Enterprise AI and Custom Models<br />(20:27) Google's AI Ascent and Market Impact<br />(24:30) Valuations and Future of AI Companies<br />(27:55) Evaluating xAI's Position in the AI Landscape<br />(30:31) The Talent War in AI Research<br />(35:45) The Impact of Acquihires on Startups<br />(42:35) The Future of AI Infrastructure<br />(48:28) The Potential of Brain-Computer Interfaces<br />(54:45) The Evolution of AI and Robotics<br />(1:00:50) The Importance of Data in AI Research</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 24 Sep 2025 12:58:52 +0000</pubDate>
      <author>jeffron@redpoint.com (Rob Toews, Jacob Effron, Ari Morcos)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-76-ari-marcos-from-datalogy-ai-and-rob-toews-from-radical-vc-on-ai-talent-wars-xais-200b-valuation-googles-comeback-8F45rfzE</link>
      <content:encoded><![CDATA[<p>This episode features a deep dive into the current state of AI model progress with Ari Morcos (CEO of Datology AI and former DeepMind/Meta researcher) and Rob Toews (partner at Radical Ventures). The conversation tackles whether model progress is genuinely slowing down or simply shifting into new paradigms, exploring the role of reinforcement learning in scaling capabilities beyond traditional pre-training. They examine the talent wars reshaping AI labs, Google's resurgence with Gemini, the sustainability of massive valuations for companies like OpenAI and Anthropic, and the infrastructure ecosystem supporting this rapid evolution. The discussion weaves together technical insights on data quality, synthetic data generation, and RL environments with strategic perspectives on acquisitions, regulatory challenges, and the future intersection of AI with physical robotics and brain-computer interfaces.</p><p> </p><p>(0:00) Intro<br />(2:59) Debate on Model Progress<br />(8:03) Challenges in AI Generalization and RL Environments<br />(15:44) Enterprise AI and Custom Models<br />(20:27) Google's AI Ascent and Market Impact<br />(24:30) Valuations and Future of AI Companies<br />(27:55) Evaluating xAI's Position in the AI Landscape<br />(30:31) The Talent War in AI Research<br />(35:45) The Impact of Acquihires on Startups<br />(42:35) The Future of AI Infrastructure<br />(48:28) The Potential of Brain-Computer Interfaces<br />(54:45) The Evolution of AI and Robotics<br />(1:00:50) The Importance of Data in AI Research</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="60387297" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/c68baa40-5782-49a8-83dc-5206f0aac88a/audio/8d5e1f74-b277-4dbc-8b13-86a97d07f658/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>AI Round Up: Ari Morcos from Datology AI and Rob Toews from Radical VC on AI Talent Wars, xAI’s $200B Valuation, &amp; Google’s Comeback</itunes:title>
      <itunes:author>Rob Toews, Jacob Effron, Ari Morcos</itunes:author>
      <itunes:duration>01:02:54</itunes:duration>
      <itunes:summary>This episode features a deep dive into the current state of AI model progress with Ari Morcos (CEO of Datology AI and former DeepMind/Meta researcher) and Rob Toews (partner at Radical Ventures). The conversation tackles whether model progress is genuinely slowing down or simply shifting into new paradigms, exploring the role of reinforcement learning in scaling capabilities beyond traditional pre-training. They examine the talent wars reshaping AI labs, Google&apos;s resurgence with Gemini, the sustainability of massive valuations for companies like OpenAI and Anthropic, and the infrastructure ecosystem supporting this rapid evolution. The discussion weaves together technical insights on data quality, synthetic data generation, and RL environments with strategic perspectives on acquisitions, regulatory challenges, and the future intersection of AI with physical robotics and brain-computer interfaces.</itunes:summary>
      <itunes:subtitle>This episode features a deep dive into the current state of AI model progress with Ari Morcos (CEO of Datology AI and former DeepMind/Meta researcher) and Rob Toews (partner at Radical Ventures). The conversation tackles whether model progress is genuinely slowing down or simply shifting into new paradigms, exploring the role of reinforcement learning in scaling capabilities beyond traditional pre-training. They examine the talent wars reshaping AI labs, Google&apos;s resurgence with Gemini, the sustainability of massive valuations for companies like OpenAI and Anthropic, and the infrastructure ecosystem supporting this rapid evolution. The discussion weaves together technical insights on data quality, synthetic data generation, and RL environments with strategic perspectives on acquisitions, regulatory challenges, and the future intersection of AI with physical robotics and brain-computer interfaces.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>bonus</itunes:episodeType>
    </item>
    <item>
      <guid isPermaLink="false">e55024e7-d95c-47b9-9e5f-2bc09ac9cf14</guid>
      <title>Ep 75: Nano Banana’s Oliver Wang and Nicole Brichtova - Behind the Breakthrough as Gemini Tops the Charts</title>
      <description><![CDATA[<p>Fill out this short listener survey to help us improve the show: https://forms.gle/bbcRiPTRwKoG2tJx8</p><p>This week on Unsupervised Learning, Jacob sits down with Nicole Brichtova and Oliver Wang, the Google researchers behind "Nano Banana" - the breakthrough AI image model that achieved unprecedented character consistency and took over social media.</p><p>The conversation covers how their model fits into creative workflows, why we're still in the early innings of image AI development despite impressive current capabilities, and how image and video generation are converging toward unified models. They also share honest perspectives on current limitations, safety approaches, and why the expectation of going from prompt to production-ready content is fundamentally overhyped.</p><p>(0:00) Intro<br />(1:42) Early Nano Banana Use Cases and Character Consistency<br />(3:05) Popular Features and User Requests<br />(3:54) Future Frontiers in Image Models<br />(5:26) Personalization and Aesthetic Models<br />(7:39) Model Success and User Engagement<br />(10:59) Product Design for Different Users<br />(19:30) Advanced Use Cases and Future Workflows<br />(23:14) Editing Workflows and Chatbots<br />(25:14) Google's Image Model Applications<br />(27:12) Milestones in Image Generation<br />(29:30) Midjourney's Success<br />(30:54) Future of Image Models<br />(33:55) Image Models vs. Video Models<br />(36:35) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 17 Sep 2025 13:00:24 +0000</pubDate>
      <author>jeffron@redpoint.com (Oliver Wang, Nicole Brichtova, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-75-nano-bananas-oliver-wang-and-nicole-brichtova-behind-the-breakthrough-as-gemini-tops-the-charts-DUZv8L2o</link>
      <content:encoded><![CDATA[<p>Fill out this short listener survey to help us improve the show: https://forms.gle/bbcRiPTRwKoG2tJx8</p><p>This week on Unsupervised Learning, Jacob sits down with Nicole Brichtova and Oliver Wang, the Google researchers behind "Nano Banana" - the breakthrough AI image model that achieved unprecedented character consistency and took over social media.</p><p>The conversation covers how their model fits into creative workflows, why we're still in the early innings of image AI development despite impressive current capabilities, and how image and video generation are converging toward unified models. They also share honest perspectives on current limitations, safety approaches, and why the expectation of going from prompt to production-ready content is fundamentally overhyped.</p><p>(0:00) Intro<br />(1:42) Early Nano Banana Use Cases and Character Consistency<br />(3:05) Popular Features and User Requests<br />(3:54) Future Frontiers in Image Models<br />(5:26) Personalization and Aesthetic Models<br />(7:39) Model Success and User Engagement<br />(10:59) Product Design for Different Users<br />(19:30) Advanced Use Cases and Future Workflows<br />(23:14) Editing Workflows and Chatbots<br />(25:14) Google's Image Model Applications<br />(27:12) Milestones in Image Generation<br />(29:30) Midjourney's Success<br />(30:54) Future of Image Models<br />(33:55) Image Models vs. Video Models<br />(36:35) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="39435798" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/2d863667-8f6b-4e0f-9b57-a6d8c5ca5d26/audio/95bbe419-3bac-4f97-a84d-e3d18d53e59e/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 75: Nano Banana’s Oliver Wang and Nicole Brichtova - Behind the Breakthrough as Gemini Tops the Charts</itunes:title>
      <itunes:author>Oliver Wang, Nicole Brichtova, Jacob Effron</itunes:author>
      <itunes:duration>00:41:04</itunes:duration>
      <itunes:summary>This week on Unsupervised Learning, Jacob sits down with Nicole Brichtova and Oliver Wang, the Google researchers behind &quot;Nano Banana&quot; - the breakthrough AI image model that achieved unprecedented character consistency and took over social media.

The conversation covers how their model fits into creative workflows, why we&apos;re still in the early innings of image AI development despite impressive current capabilities, and how image and video generation are converging toward unified models. They also share honest perspectives on current limitations, safety approaches, and why the expectation of going from prompt to production-ready content is fundamentally overhyped.</itunes:summary>
      <itunes:subtitle>This week on Unsupervised Learning, Jacob sits down with Nicole Brichtova and Oliver Wang, the Google researchers behind &quot;Nano Banana&quot; - the breakthrough AI image model that achieved unprecedented character consistency and took over social media.

The conversation covers how their model fits into creative workflows, why we&apos;re still in the early innings of image AI development despite impressive current capabilities, and how image and video generation are converging toward unified models. They also share honest perspectives on current limitations, safety approaches, and why the expectation of going from prompt to production-ready content is fundamentally overhyped.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>75</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">265f7733-8815-47fc-ba23-0df5161d01c0</guid>
      <title>Ep 74: Chief Scientist of Together.AI Tri Dao On The End of Nvidia&apos;s Dominance, Why Inference Costs Fell &amp; The Next 10X in Speed</title>
      <description><![CDATA[<p>Fill out this short listener survey to help us improve the show: https://forms.gle/bbcRiPTRwKoG2tJx8</p><p> </p><p>Tri Dao, Chief Scientist at Together AI and Princeton professor who created Flash Attention and Mamba, discusses how inference optimization has driven costs down 100x since ChatGPT's launch through memory optimization, sparsity advances, and hardware-software co-design. He predicts the AI hardware landscape will shift from Nvidia's current 90% dominance to a more diversified ecosystem within 2-3 years, as specialized chips emerge for distinct workload categories: low-latency agentic systems, high-throughput batch processing, and interactive chatbots. Dao shares his surprise at AI models becoming genuinely useful for expert-level work, making him 1.5x more productive at GPU kernel optimization through tools like Claude Code and O1. The conversation explores whether current transformer architectures can reach expert-level AI performance or if approaches like mixture of experts and state space models are necessary to achieve AGI at reasonable costs. Looking ahead, Dao sees another 10x cost reduction coming from continued hardware specialization, improved kernels, and architectural advances like ultra-sparse models, while emphasizing that the biggest challenge remains generating expert-level training data for domains lacking extensive internet coverage.</p><p> </p><p>(0:00) Intro</p><p>(1:58) Nvidia's Dominance and Competitors</p><p>(4:01) Challenges in Chip Design</p><p>(6:26) Innovations in AI Hardware</p><p>(9:21) The Role of AI in Chip Optimization</p><p>(11:38) Future of AI and Hardware Abstractions</p><p>(16:46) Inference Optimization Techniques</p><p>(33:10) Specialization in AI Inference</p><p>(35:18) Deep Work Preferences and Low Latency Workloads</p><p>(38:19) Fleet Level Optimization and Batch Inference</p><p>(39:34) Evolving AI Workloads and Open Source Tooling</p><p>(41:15) Future of AI: Agentic Workloads and Real-Time Video Generation</p><p>(44:35) Architectural Innovations and AI Expert Level</p><p>(50:10) Robotics and Multi-Resolution Processing</p><p>(52:26) Balancing Academia and Industry in AI Research</p><p>(57:37) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 10 Sep 2025 12:50:19 +0000</pubDate>
      <author>jeffron@redpoint.com (Tri Dao, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-74-chief-scientist-of-togetherai-tri-dao-on-ai-super-researcher-the-end-of-nvidias-dominance-why-inference-costs-fell-the-next-10x-in-speed-hqhgrAYK</link>
      <content:encoded><![CDATA[<p>Fill out this short listener survey to help us improve the show: https://forms.gle/bbcRiPTRwKoG2tJx8</p><p> </p><p>Tri Dao, Chief Scientist at Together AI and Princeton professor who created Flash Attention and Mamba, discusses how inference optimization has driven costs down 100x since ChatGPT's launch through memory optimization, sparsity advances, and hardware-software co-design. He predicts the AI hardware landscape will shift from Nvidia's current 90% dominance to a more diversified ecosystem within 2-3 years, as specialized chips emerge for distinct workload categories: low-latency agentic systems, high-throughput batch processing, and interactive chatbots. Dao shares his surprise at AI models becoming genuinely useful for expert-level work, making him 1.5x more productive at GPU kernel optimization through tools like Claude Code and O1. The conversation explores whether current transformer architectures can reach expert-level AI performance or if approaches like mixture of experts and state space models are necessary to achieve AGI at reasonable costs. 
Looking ahead, Dao sees another 10x cost reduction coming from continued hardware specialization, improved kernels, and architectural advances like ultra-sparse models, while emphasizing that the biggest challenge remains generating expert-level training data for domains lacking extensive internet coverage.</p><p> </p><p>(0:00) Intro</p><p>(1:58) Nvidia's Dominance and Competitors</p><p>(4:01) Challenges in Chip Design</p><p>(6:26) Innovations in AI Hardware</p><p>(9:21) The Role of AI in Chip Optimization</p><p>(11:38) Future of AI and Hardware Abstractions</p><p>(16:46) Inference Optimization Techniques</p><p>(33:10) Specialization in AI Inference</p><p>(35:18) Deep Work Preferences and Low Latency Workloads</p><p>(38:19) Fleet Level Optimization and Batch Inference</p><p>(39:34) Evolving AI Workloads and Open Source Tooling</p><p>(41:15) Future of AI: Agentic Workloads and Real-Time Video Generation</p><p>(44:35) Architectural Innovations and AI Expert Level</p><p>(50:10) Robotics and Multi-Resolution Processing</p><p>(52:26) Balancing Academia and Industry in AI Research</p><p>(57:37) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="56275739" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/4e1ea3f1-9c21-4670-9ee8-75a1d636a67a/audio/8ea58360-ea33-4b72-9c64-14a609d832b2/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 74: Chief Scientist of Together.AI Tri Dao On The End of Nvidia&apos;s Dominance, Why Inference Costs Fell &amp; The Next 10X in Speed</itunes:title>
      <itunes:author>Tri Dao, Jacob Effron</itunes:author>
      <itunes:duration>00:58:37</itunes:duration>
      <itunes:summary>Tri Dao, Chief Scientist at Together AI and Princeton professor who created Flash Attention and Mamba, discusses how inference optimization has driven costs down 100x since ChatGPT&apos;s launch through memory optimization, sparsity advances, and hardware-software co-design. He predicts the AI hardware landscape will shift from Nvidia&apos;s current 90% dominance to a more diversified ecosystem within 2-3 years, as specialized chips emerge for distinct workload categories: low-latency agentic systems, high-throughput batch processing, and interactive chatbots. Dao shares his surprise at AI models becoming genuinely useful for expert-level work, making him 1.5x more productive at GPU kernel optimization through tools like Claude Code and O1. The conversation explores whether current transformer architectures can reach expert-level AI performance or if approaches like mixture of experts and state space models are necessary to achieve AGI at reasonable costs. Looking ahead, Dao sees another 10x cost reduction coming from continued hardware specialization, improved kernels, and architectural advances like ultra-sparse models, while emphasizing that the biggest challenge remains generating expert-level training data for domains lacking extensive internet coverage.</itunes:summary>
      <itunes:subtitle>Tri Dao, Chief Scientist at Together AI and Princeton professor who created Flash Attention and Mamba, discusses how inference optimization has driven costs down 100x since ChatGPT&apos;s launch through memory optimization, sparsity advances, and hardware-software co-design. He predicts the AI hardware landscape will shift from Nvidia&apos;s current 90% dominance to a more diversified ecosystem within 2-3 years, as specialized chips emerge for distinct workload categories: low-latency agentic systems, high-throughput batch processing, and interactive chatbots. Dao shares his surprise at AI models becoming genuinely useful for expert-level work, making him 1.5x more productive at GPU kernel optimization through tools like Claude Code and O1. The conversation explores whether current transformer architectures can reach expert-level AI performance or if approaches like mixture of experts and state space models are necessary to achieve AGI at reasonable costs. Looking ahead, Dao sees another 10x cost reduction coming from continued hardware specialization, improved kernels, and architectural advances like ultra-sparse models, while emphasizing that the biggest challenge remains generating expert-level training data for domains lacking extensive internet coverage.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>74</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">89a071f0-6a7d-4bbb-8174-3e9386ac15d6</guid>
      <title>Ep 73: General Partner of Felicis Peter Deng on AI Pricing Tactics, Reaction to GPT-5 &amp; Why Voice is Underrated</title>
      <description><![CDATA[<p>In this episode, Jacob sits down with Peter Deng, General Partner at Felicis and former Product Leader at OpenAI, Facebook, and Uber. Peter shares his insider perspective on building ChatGPT Enterprise in just seven weeks and leading voice mode development at OpenAI. The conversation covers everything from why traditional SaaS pricing models are broken for AI products to how evals became the new product specs, the "AI under your fingernails" test for founding teams, and why current agents are massively overhyped.</p><p>They also explore how consumer AI will fragment across multiple winners rather than consolidate into a single super app, the coming integration between ChatGPT and apps like Uber, and why voice AI will unlock entirely new categories of applications. Plus, insights on the changing dynamics between foundation models and startups, and what it really takes to build defensible AI companies. It's a comprehensive look at AI product strategy from someone who's been at the center of the industry's biggest breakthroughs.</p><p> </p><p>(0:00) Intro<br />(1:17) AI Business Models and Pricing Strategies<br />(7:48) Product Development in AI Companies<br />(18:36) The Role of Product Managers in AI<br />(23:06) Voice Interaction and AI<br />(26:43) AI in Education<br />(30:39) Consumer and Enterprise Adoption of AI<br />(33:36) The Impact of AI on Salaries and HR<br />(40:37) The Role of Unique Data in AI Development<br />(49:03) Challenges and Strategies for AI Companies<br />(52:58) The Future of AI and Its Impact on Society<br />(57:31) Reflections on OpenAI<br />(58:38) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 26 Aug 2025 12:45:27 +0000</pubDate>
      <author>jeffron@redpoint.com (Peter Deng, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-73-general-partner-of-felicis-peter-deng-on-on-ai-pricing-tactics-reaction-to-gpt-5-why-voice-is-underrated-7PUrQHa_</link>
      <content:encoded><![CDATA[<p>In this episode, Jacob sits down with Peter Deng, General Partner at Felicis and former Product Leader at OpenAI, Facebook, and Uber. Peter shares his insider perspective on building ChatGPT Enterprise in just seven weeks and leading voice mode development at OpenAI. The conversation covers everything from why traditional SaaS pricing models are broken for AI products to how evals became the new product specs, the "AI under your fingernails" test for founding teams, and why current agents are massively overhyped.</p><p>They also explore how consumer AI will fragment across multiple winners rather than consolidate into a single super app, the coming integration between ChatGPT and apps like Uber, and why voice AI will unlock entirely new categories of applications. Plus, insights on the changing dynamics between foundation models and startups, and what it really takes to build defensible AI companies. It's a comprehensive look at AI product strategy from someone who's been at the center of the industry's biggest breakthroughs.</p><p> </p><p>(0:00) Intro<br />(1:17) AI Business Models and Pricing Strategies<br />(7:48) Product Development in AI Companies<br />(18:36) The Role of Product Managers in AI<br />(23:06) Voice Interaction and AI<br />(26:43) AI in Education<br />(30:39) Consumer and Enterprise Adoption of AI<br />(33:36) The Impact of AI on Salaries and HR<br />(40:37) The Role of Unique Data in AI Development<br />(49:03) Challenges and Strategies for AI Companies<br />(52:58) The Future of AI and Its Impact on Society<br />(57:31) Reflections on OpenAI<br />(58:38) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="61669974" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/73dcc448-b8cd-41e9-b78f-993942a63e07/audio/dc5a9435-14fe-45c9-bcbf-4828fd912ad5/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 73: General Partner of Felicis Peter Deng on AI Pricing Tactics, Reaction to GPT-5 &amp; Why Voice is Underrated</itunes:title>
      <itunes:author>Peter Deng, Jacob Effron</itunes:author>
      <itunes:duration>01:04:14</itunes:duration>
      <itunes:summary>In this episode, Jacob sits down with Peter Deng, General Partner at Felicis and former Product Leader at OpenAI, Facebook, and Uber. Peter shares his insider perspective on building ChatGPT Enterprise in just seven weeks and leading voice mode development at OpenAI. The conversation covers everything from why traditional SaaS pricing models are broken for AI products to how evals became the new product specs, the &quot;AI under your fingernails&quot; test for founding teams, and why current agents are massively overhyped.

They also explore how consumer AI will fragment across multiple winners rather than consolidate into a single super app, the coming integration between ChatGPT and apps like Uber, and why voice AI will unlock entirely new categories of applications. Plus, insights on the changing dynamics between foundation models and startups, and what it really takes to build defensible AI companies. It&apos;s a comprehensive look at AI product strategy from someone who&apos;s been at the center of the industry&apos;s biggest breakthroughs.</itunes:summary>
      <itunes:subtitle>In this episode, Jacob sits down with Peter Deng, General Partner at Felicis and former Product Leader at OpenAI, Facebook, and Uber. Peter shares his insider perspective on building ChatGPT Enterprise in just seven weeks and leading voice mode development at OpenAI. The conversation covers everything from why traditional SaaS pricing models are broken for AI products to how evals became the new product specs, the &quot;AI under your fingernails&quot; test for founding teams, and why current agents are massively overhyped.

They also explore how consumer AI will fragment across multiple winners rather than consolidate into a single super app, the coming integration between ChatGPT and apps like Uber, and why voice AI will unlock entirely new categories of applications. Plus, insights on the changing dynamics between foundation models and startups, and what it really takes to build defensible AI companies. It&apos;s a comprehensive look at AI product strategy from someone who&apos;s been at the center of the industry&apos;s biggest breakthroughs.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>73</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7ae8bca7-3b27-4d5c-a338-5387c1fbc8ea</guid>
      <title>Ep 72: Co-Founder of Chai Discovery Joshua Meier on 99% Faster Drug Discovery, BioTech’s AlphaGo Moment, Building Photoshop for Molecules</title>
      <description><![CDATA[<p>In this episode, Jacob sits down with Joshua Meier, co-founder of Chai Discovery and former Chief AI Officer at Absci, to explore the breakthrough moment happening in AI drug discovery. They discuss how the field has evolved through three distinct waves, with the current generation of companies finally achieving success rates that seemed impossible just years ago. </p><p> </p><p>The conversation covers everything from moving drug discovery out of the lab and into computers, to why AI models think differently than human chemists, to the strategic decisions around open sourcing foundational models while keeping design capabilities proprietary. It's an in-depth look at how AI is fundamentally changing pharmaceutical innovation and what it means for the future of medicine.</p><p> </p><p>Check out the full Chai-2 Zero-Shot Antibody report linked here: https://www.biorxiv.org/content/10.1101/2025.07.05.663018v1.full.pdf</p><p> </p><p>(0:00) Intro<br />(2:10) The Evolution of AI in Drug Discovery<br />(6:09) Current State and Future of AI in Biotech<br />(11:15) Challenges and Modalities in Therapeutics<br />(15:19) Data Generation and Model Training<br />(23:59) Open Source and Model Development at Chai<br />(28:35) Protein Structure Prediction and Diffusion Models<br />(30:57) Open Source Models and Their Impact<br />(35:41) How Should Chai-2 Be Used?<br />(39:34) The Future of AI in Pharma and Biotech<br />(43:51) Key Milestones and Metrics in AI-Driven Drug Discovery<br />(48:24) Critiques and Hesitation<br />(55:06) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 13 Aug 2025 12:45:57 +0000</pubDate>
      <author>jeffron@redpoint.com (Joshua Meier, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-72-co-founder-of-chai-discovery-joshua-meier-on-99-faster-drug-discovery-biotechs-alphago-moment-building-photoshop-for-molecules-l2bPtBY2</link>
      <content:encoded><![CDATA[<p>In this episode, Jacob sits down with Joshua Meier, co-founder of Chai Discovery and former Chief AI Officer at Absci, to explore the breakthrough moment happening in AI drug discovery. They discuss how the field has evolved through three distinct waves, with the current generation of companies finally achieving success rates that seemed impossible just years ago. </p><p> </p><p>The conversation covers everything from moving drug discovery out of the lab and into computers, to why AI models think differently than human chemists, to the strategic decisions around open sourcing foundational models while keeping design capabilities proprietary. It's an in-depth look at how AI is fundamentally changing pharmaceutical innovation and what it means for the future of medicine.</p><p> </p><p>Check out the full Chai-2 Zero-Shot Antibody report linked here: https://www.biorxiv.org/content/10.1101/2025.07.05.663018v1.full.pdf</p><p> </p><p>(0:00) Intro<br />(2:10) The Evolution of AI in Drug Discovery<br />(6:09) Current State and Future of AI in Biotech<br />(11:15) Challenges and Modalities in Therapeutics<br />(15:19) Data Generation and Model Training<br />(23:59) Open Source and Model Development at Chai<br />(28:35) Protein Structure Prediction and Diffusion Models<br />(30:57) Open Source Models and Their Impact<br />(35:41) How Should Chai-2 Be Used?<br />(39:34) The Future of AI in Pharma and Biotech<br />(43:51) Key Milestones and Metrics in AI-Driven Drug Discovery<br />(48:24) Critiques and Hesitation<br />(55:06) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="54969616" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/32b6bac9-6b55-4ac7-8e16-23ea50b13110/audio/95184f9e-d739-4ae9-b08b-313052fd0f11/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 72: Co-Founder of Chai Discovery Joshua Meier on 99% Faster Drug Discovery, BioTech’s AlphaGo Moment, Building Photoshop for Molecules</itunes:title>
      <itunes:author>Joshua Meier, Jacob Effron</itunes:author>
      <itunes:duration>00:57:15</itunes:duration>
      <itunes:summary>In this episode, Jacob sits down with Joshua Meier, co-founder of Chai Discovery and former Chief AI Officer at Absci, to explore the breakthrough moment happening in AI drug discovery. They discuss how the field has evolved through three distinct waves, with the current generation of companies finally achieving success rates that seemed impossible just years ago. 

The conversation covers everything from moving drug discovery out of the lab and into computers, to why AI models think differently than human chemists, to the strategic decisions around open sourcing foundational models while keeping design capabilities proprietary. It&apos;s an in-depth look at how AI is fundamentally changing pharmaceutical innovation and what it means for the future of medicine.</itunes:summary>
      <itunes:subtitle>In this episode, Jacob sits down with Joshua Meier, co-founder of Chai Discovery and former Chief AI Officer at Absci, to explore the breakthrough moment happening in AI drug discovery. They discuss how the field has evolved through three distinct waves, with the current generation of companies finally achieving success rates that seemed impossible just years ago. 

The conversation covers everything from moving drug discovery out of the lab and into computers, to why AI models think differently than human chemists, to the strategic decisions around open sourcing foundational models while keeping design capabilities proprietary. It&apos;s an in-depth look at how AI is fundamentally changing pharmaceutical innovation and what it means for the future of medicine.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>72</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b6475c26-55c3-48a0-934f-6a62da5cfdca</guid>
      <title>Ep 71: CEO of TurboPuffer Simon Eskildsen on Building Smarter Retrieval, AI App Must-Have Features &amp; Current State of Vector DBs</title>
      <description><![CDATA[<p>Fill out this short listener survey to help us improve the show: https://forms.gle/bbcRiPTRwKoG2tJx8</p><p>In this episode, Simon Eskildsen, co-founder and CEO of TurboPuffer, lays out a compelling vision for how AI-native infrastructure needs to evolve in an era where every application wants to connect massive amounts of context to large language models. He breaks down why traditional databases and even large context windows fall short—especially at scale—and why object-storage-native search is the inevitable next step. Drawing on his experience from Shopify and Readwise, Simon introduces the SCRAP framework to explain the limits of context stuffing and makes a clear case for why cost, recall, performance, and access control drive the need for smarter retrieval systems. From practical lessons in building highly reliable infra to hard technical problems in vector indexing, this conversation distills the future of AI infra into first principles—with clarity and depth.</p><p> </p><p>(0:00) Intro<br />(0:49) The Evolution of AI Context Windows<br />(2:32) Challenges in AI Data Integration<br />(3:56) SCRAP: Scale, Cost, Recall, ACLs, and Performance<br />(9:21) The Rise of Object-Oriented Storage<br />(16:47) Turbo Puffer Use Cases<br />(22:32) Challenges in Vector Search<br />(27:02) Challenges in Query Planning and Data Filtering<br />(27:53) Focusing on Core Problems and Simplicity<br />(28:28) Customer Feedback and Future Directions<br />(29:11) Reliability and Simplicity in Design<br />(30:39) Evaluating Embedding Models and Search Performance<br />(32:17) The Role of Vectors in Search Engines<br />(34:16) Balancing Focus and Expansion<br />(35:57) AI Infrastructure and Market Trends<br />(38:36) The Future of Memory in AI<br />(43:01) Table Stakes for AI in SaaS Applications<br />(45:55) Multimodal Data and Market Observations<br />(46:57) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, 
Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 22 Jul 2025 12:50:04 +0000</pubDate>
      <author>jeffron@redpoint.com (Simon Eskildsen, Jacob Effron, Jordan Segall)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-71-ceo-of-turbopuffer-simon-eskildsen-on-building-smarter-retrieval-ai-app-must-have-features-current-state-of-vector-dbs-wDVoriM7</link>
      <content:encoded><![CDATA[<p>Fill out this short listener survey to help us improve the show: https://forms.gle/bbcRiPTRwKoG2tJx8</p><p>In this episode, Simon Eskildsen, co-founder and CEO of TurboPuffer, lays out a compelling vision for how AI-native infrastructure needs to evolve in an era where every application wants to connect massive amounts of context to large language models. He breaks down why traditional databases and even large context windows fall short—especially at scale—and why object-storage-native search is the inevitable next step. Drawing on his experience from Shopify and Readwise, Simon introduces the SCRAP framework to explain the limits of context stuffing and makes a clear case for why cost, recall, performance, and access control drive the need for smarter retrieval systems. From practical lessons in building highly reliable infra to hard technical problems in vector indexing, this conversation distills the future of AI infra into first principles—with clarity and depth.</p><p> </p><p>(0:00) Intro<br />(0:49) The Evolution of AI Context Windows<br />(2:32) Challenges in AI Data Integration<br />(3:56) SCRAP: Scale, Cost, Recall, ACLs, and Performance<br />(9:21) The Rise of Object-Oriented Storage<br />(16:47) Turbo Puffer Use Cases<br />(22:32) Challenges in Vector Search<br />(27:02) Challenges in Query Planning and Data Filtering<br />(27:53) Focusing on Core Problems and Simplicity<br />(28:28) Customer Feedback and Future Directions<br />(29:11) Reliability and Simplicity in Design<br />(30:39) Evaluating Embedding Models and Search Performance<br />(32:17) The Role of Vectors in Search Engines<br />(34:16) Balancing Focus and Expansion<br />(35:57) AI Infrastructure and Market Trends<br />(38:36) The Future of Memory in AI<br />(43:01) Table Stakes for AI in SaaS Applications<br />(45:55) Multimodal Data and Market Observations<br />(46:57) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at 
Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="49096872" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/22a5fb80-e8c7-4d71-80ad-f46f42e99a3a/audio/7a562e0e-fc92-45b7-9cf2-de7f52fd5f44/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 71: CEO of TurboPuffer Simon Eskildsen on Building Smarter Retrieval, AI App Must-Have Features &amp; Current State of Vector DBs</itunes:title>
      <itunes:author>Simon Eskildsen, Jacob Effron, Jordan Segall</itunes:author>
      <itunes:duration>00:51:08</itunes:duration>
      <itunes:summary>In this episode, Simon Eskildsen, co-founder and CEO of TurboPuffer, lays out a compelling vision for how AI-native infrastructure needs to evolve in an era where every application wants to connect massive amounts of context to large language models. He breaks down why traditional databases and even large context windows fall short—especially at scale—and why object-storage-native search is the inevitable next step. Drawing on his experience from Shopify and Readwise, Simon introduces the SCRAP framework to explain the limits of context stuffing and makes a clear case for why cost, recall, performance, and access control drive the need for smarter retrieval systems. From practical lessons in building highly reliable infra to hard technical problems in vector indexing, this conversation distills the future of AI infra into first principles—with clarity and depth.</itunes:summary>
      <itunes:subtitle>In this episode, Simon Eskildsen, co-founder and CEO of TurboPuffer, lays out a compelling vision for how AI-native infrastructure needs to evolve in an era where every application wants to connect massive amounts of context to large language models. He breaks down why traditional databases and even large context windows fall short—especially at scale—and why object-storage-native search is the inevitable next step. Drawing on his experience from Shopify and Readwise, Simon introduces the SCRAP framework to explain the limits of context stuffing and makes a clear case for why cost, recall, performance, and access control drive the need for smarter retrieval systems. From practical lessons in building highly reliable infra to hard technical problems in vector indexing, this conversation distills the future of AI infra into first principles—with clarity and depth.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>71</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">efeac2ce-47a3-4a03-aa30-046fe990cabb</guid>
      <title>Ep 70: Karol Hausman and Danny Driess (Physical Intelligence) Unpack the Most Recent Breakthroughs &amp; Path to Generalist Robots</title>
      <description><![CDATA[<p>In this episode, Jacob sits down with Karol Hausman (Co-Founder) and Danny Driess (Research Scientist) from Physical Intelligence, two of the minds behind some of the most exciting advances in robotics. They unpack the last decade of progress in AI robotics, from early skepticism to the breakthroughs powering today’s generalist robot models. </p><p> </p><p>The conversation covers everything from folding laundry with robots to building scalable data pipelines, the limits of simulation, and what it’ll take to bring robot assistants into everyday homes. It's a wide-ranging and thoughtful look at where robotics is headed, as well as how fast we might get there.</p><p> </p><p>(0:00) Intro<br />(1:31) Early Days in Robotics<br />(2:08) Shift to Learning-Based Robotics<br />(4:50) Challenges and Breakthroughs<br />(8:45) Google's Role and Spin-Out Decision<br />(15:08) Comparing Robotics to Self-Driving Cars<br />(19:18) Hardware and Intelligence<br />(21:05) Future Milestones and Scaling Challenges<br />(33:23) Data Collection and Infrastructure Needs<br />(35:49) Choosing and Tackling Complex Tasks<br />(38:49) Evaluating Model Performance<br />(41:28) The Role of Simulation in Robotics<br />(44:27) Research Strategies and Hiring<br />(48:16) Open Source and Community Impact<br />(52:27) Advancements in Training and Model Efficiency<br />(58:45) Future of Robotics and AI<br />(1:01:16) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 8 Jul 2025 12:52:03 +0000</pubDate>
      <author>jeffron@redpoint.com (Karol Hausman, Danny Driess, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-70-karol-hausman-and-danny-driess-physical-intelligence-unpack-the-most-recent-breakthroughs-path-to-generalist-robots-nzrPFaRT</link>
      <content:encoded><![CDATA[<p>In this episode, Jacob sits down with Karol Hausman (Co-Founder) and Danny Driess (Research Scientist) from Physical Intelligence, two of the minds behind some of the most exciting advances in robotics. They unpack the last decade of progress in AI robotics, from early skepticism to the breakthroughs powering today’s generalist robot models. </p><p> </p><p>The conversation covers everything from folding laundry with robots to building scalable data pipelines, the limits of simulation, and what it’ll take to bring robot assistants into everyday homes. It's a wide-ranging and thoughtful look at where robotics is headed, as well as how fast we might get there.</p><p> </p><p>(0:00) Intro<br />(1:31) Early Days in Robotics<br />(2:08) Shift to Learning-Based Robotics<br />(4:50) Challenges and Breakthroughs<br />(8:45) Google's Role and Spin-Out Decision<br />(15:08) Comparing Robotics to Self-Driving Cars<br />(19:18) Hardware and Intelligence<br />(21:05) Future Milestones and Scaling Challenges<br />(33:23) Data Collection and Infrastructure Needs<br />(35:49) Choosing and Tackling Complex Tasks<br />(38:49) Evaluating Model Performance<br />(41:28) The Role of Simulation in Robotics<br />(44:27) Research Strategies and Hiring<br />(48:16) Open Source and Community Impact<br />(52:27) Advancements in Training and Model Efficiency<br />(58:45) Future of Robotics and AI<br />(1:01:16) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="67161904" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/61285ff2-b8df-443d-8e13-43ad7a50e069/audio/3d4ea975-aa7f-48c7-9c8d-5237a8e31b59/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 70: Karol Hausman and Danny Driess (Physical Intelligence) Unpack the Most Recent Breakthroughs &amp; Path to Generalist Robots</itunes:title>
      <itunes:author>Karol Hausman, Danny Driess, Jacob Effron</itunes:author>
      <itunes:duration>01:09:57</itunes:duration>
      <itunes:summary>In this episode, Jacob sits down with Karol Hausman (Co-Founder) and Danny Driess (Research Scientist) from Physical Intelligence, two of the minds behind some of the most exciting advances in robotics. They unpack the last decade of progress in AI robotics, from early skepticism to the breakthroughs powering today’s generalist robot models. The conversation covers everything from folding laundry with robots to building scalable data pipelines, the limits of simulation, and what it’ll take to bring robot assistants into everyday homes. It&apos;s a wide-ranging and thoughtful look at where robotics is headed, as well as how fast we might get there.</itunes:summary>
      <itunes:subtitle>In this episode, Jacob sits down with Karol Hausman (Co-Founder) and Danny Driess (Research Scientist) from Physical Intelligence, two of the minds behind some of the most exciting advances in robotics. They unpack the last decade of progress in AI robotics, from early skepticism to the breakthroughs powering today’s generalist robot models. The conversation covers everything from folding laundry with robots to building scalable data pipelines, the limits of simulation, and what it’ll take to bring robot assistants into everyday homes. It&apos;s a wide-ranging and thoughtful look at where robotics is headed, as well as how fast we might get there.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>70</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">eb4f57ea-e2bc-48cb-a968-8be88edf5313</guid>
      <title>Ep 69: Co-Founder of Databricks &amp; LMArena on Current Eval Limitations, Why China is Winning Open Source and Future of AI Infrastructure</title>
      <description><![CDATA[<p>Ion Stoica helped define the modern data stack. Now he’s coming for AI evaluation. From co-founding Databricks and Anyscale to launching LMArena, Ion has shaped the infrastructure underlying some of the biggest shifts in computing. In this conversation, he unpacks what most people get wrong about model evaluation, the infrastructure challenges ahead for agents and heterogeneous compute, and why he believes the U.S. is structurally disadvantaged in open-source AI compared to China.</p><p> </p><p>(0:00) Intro<br />(0:49) Launching a New Startup: LMArena<br />(1:01) The Origin of the Vicuna Model<br />(1:54) Challenges in Model Evaluation<br />(6:33) Becoming a Company<br />(7:47) Expanding Evaluation Capabilities<br />(13:48) The Importance of Human-Based Evaluations<br />(18:56) Open Source vs. Proprietary Models<br />(23:05) Infrastructure and Collaboration Challenges<br />(28:22) China's Strategic Advantages in Technology<br />(29:54) Opportunities in AI Infrastructure<br />(31:50) Challenges in AI Model Optimization<br />(35:49) The Role of Data in AI Enterprises<br />(39:31) Reflections on AI Progress and Predictions<br />(50:40) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 17 Jun 2025 12:56:23 +0000</pubDate>
      <author>jeffron@redpoint.com (Ion Stoica, Jacob Effron, Rob Toews)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-69-co-founder-of-databricks-lmarena-on-current-eval-limitations-why-china-is-winning-open-source-and-future-of-ai-infrastructure-I9ANfUP9</link>
      <content:encoded><![CDATA[<p>Ion Stoica helped define the modern data stack. Now he’s coming for AI evaluation. From co-founding Databricks and Anyscale to launching LMArena, Ion has shaped the infrastructure underlying some of the biggest shifts in computing. In this conversation, he unpacks what most people get wrong about model evaluation, the infrastructure challenges ahead for agents and heterogeneous compute, and why he believes the U.S. is structurally disadvantaged in open-source AI compared to China.</p><p> </p><p>(0:00) Intro<br />(0:49) Launching a New Startup: LMArena<br />(1:01) The Origin of the Vicuna Model<br />(1:54) Challenges in Model Evaluation<br />(6:33) Becoming a Company<br />(7:47) Expanding Evaluation Capabilities<br />(13:48) The Importance of Human-Based Evaluations<br />(18:56) Open Source vs. Proprietary Models<br />(23:05) Infrastructure and Collaboration Challenges<br />(28:22) China's Strategic Advantages in Technology<br />(29:54) Opportunities in AI Infrastructure<br />(31:50) Challenges in AI Model Optimization<br />(35:49) The Role of Data in AI Enterprises<br />(39:31) Reflections on AI Progress and Predictions<br />(50:40) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="52759448" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/12204783-5123-4771-a61d-172a2577a1df/audio/14df9619-685d-4c97-8fb4-b969e25ed2ca/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 69: Co-Founder of Databricks &amp; LMArena on Current Eval Limitations, Why China is Winning Open Source and Future of AI Infrastructure</itunes:title>
      <itunes:author>Ion Stoica, Jacob Effron, Rob Toews</itunes:author>
      <itunes:duration>00:54:57</itunes:duration>
      <itunes:summary>Ion Stoica helped define the modern data stack. Now he’s coming for AI evaluation. From co-founding Databricks and Anyscale to launching LMArena, Ion has shaped the infrastructure underlying some of the biggest shifts in computing. In this conversation, he unpacks what most people get wrong about model evaluation, the infrastructure challenges ahead for agents and heterogeneous compute, and why he believes the U.S. is structurally disadvantaged in open-source AI compared to China.</itunes:summary>
      <itunes:subtitle>Ion Stoica helped define the modern data stack. Now he’s coming for AI evaluation. From co-founding Databricks and Anyscale to launching LMArena, Ion has shaped the infrastructure underlying some of the biggest shifts in computing. In this conversation, he unpacks what most people get wrong about model evaluation, the infrastructure challenges ahead for agents and heterogeneous compute, and why he believes the U.S. is structurally disadvantaged in open-source AI compared to China.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>69</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7f37f04d-792a-4948-8e2f-a9f2fa4c5194</guid>
      <title>Ep 68: CEO of Mercor Brendan Foody on Evals Replacing Knowledge Work, AI x Hiring Today &amp; the Future of Data Labeling</title>
      <description><![CDATA[<p>Brendan Foody is the co-founder and CEO of Mercor, a company building the infrastructure for AI-native labor markets. Mercor’s platform is already used by top AI labs to label data, evaluate human and AI candidates, and make performance-driven hiring decisions. </p><p> </p><p>They’re operating at the intersection of recruiting, evals, and foundation model development—helping companies shift from intuition to measurable prediction. Brendan and his team recently raised $100M and are working with some of the most advanced players in the AI ecosystem today.</p><p> </p><p>(0:00) Intro</p><p>(1:17) State of AI in Talent Evaluation</p><p>(1:54) Improvements in AI Models</p><p>(4:07) Mercor Background and Mission</p><p>(5:09) AI Use Cases in Hiring</p><p>(13:43) Data Labeling Landscape</p><p>(16:48) Expanding Beyond Coding</p><p>(18:39) Company Vision and Market Strategy</p><p>(21:11) Meeting with xAI</p><p>(23:47) Does Mercor Use Their Own Product?</p><p>(25:41) Exploring Multimodal Capabilities</p><p>(28:03) Skills for the Future: Embracing AI</p><p>(29:29) The Demand for Software Engineers</p><p>(34:55) Foundation Model Landscape</p><p>(38:42) AI Regulations</p><p>(39:57) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health  </p><p>@patrickachase  </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn  </p><p>@ericabrescia  </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)  </p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 4 Jun 2025 12:58:50 +0000</pubDate>
      <author>jeffron@redpoint.com (Brendan Foody, Patrick Chase, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-68-ceo-of-mercor-brendan-foody-on-evals-replacing-knowledge-work-ai-x-hiring-today-the-future-of-data-labeling-Aag6jNVf</link>
      <content:encoded><![CDATA[<p>Brendan Foody is the co-founder and CEO of Mercor, a company building the infrastructure for AI-native labor markets. Mercor’s platform is already used by top AI labs to label data, evaluate human and AI candidates, and make performance-driven hiring decisions. </p><p> </p><p>They’re operating at the intersection of recruiting, evals, and foundation model development—helping companies shift from intuition to measurable prediction. Brendan and his team recently raised $100M and are working with some of the most advanced players in the AI ecosystem today.</p><p> </p><p>(0:00) Intro</p><p>(1:17) State of AI in Talent Evaluation</p><p>(1:54) Improvements in AI Models</p><p>(4:07) Mercor Background and Mission</p><p>(5:09) AI Use Cases in Hiring</p><p>(13:43) Data Labeling Landscape</p><p>(16:48) Expanding Beyond Coding</p><p>(18:39) Company Vision and Market Strategy</p><p>(21:11) Meeting with xAI</p><p>(23:47) Does Mercor Use Their Own Product?</p><p>(25:41) Exploring Multimodal Capabilities</p><p>(28:03) Skills for the Future: Embracing AI</p><p>(29:29) The Demand for Software Engineers</p><p>(34:55) Foundation Model Landscape</p><p>(38:42) AI Regulations</p><p>(39:57) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health  </p><p>@patrickachase  </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn  </p><p>@ericabrescia  </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)  </p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="42291661" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/79a16bc9-2c12-4db0-99a4-a929014d80fc/audio/a9855f42-7a4b-4dc3-a42b-59987c587e47/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 68: CEO of Mercor Brendan Foody on Evals Replacing Knowledge Work, AI x Hiring Today &amp; the Future of Data Labeling</itunes:title>
      <itunes:author>Brendan Foody, Patrick Chase, Jacob Effron</itunes:author>
      <itunes:duration>00:44:03</itunes:duration>
      <itunes:summary>Brendan Foody is the co-founder and CEO of Mercor, a company building the infrastructure for AI-native labor markets. Mercor’s platform is already used by top AI labs to label data, evaluate human and AI candidates, and make performance-driven hiring decisions. They’re operating at the intersection of recruiting, evals, and foundation model development—helping companies shift from intuition to measurable prediction. Brendan and his team recently raised $100M and are working with some of the most advanced players in the AI ecosystem today.</itunes:summary>
      <itunes:subtitle>Brendan Foody is the co-founder and CEO of Mercor, a company building the infrastructure for AI-native labor markets. Mercor’s platform is already used by top AI labs to label data, evaluate human and AI candidates, and make performance-driven hiring decisions. They’re operating at the intersection of recruiting, evals, and foundation model development—helping companies shift from intuition to measurable prediction. Brendan and his team recently raised $100M and are working with some of the most advanced players in the AI ecosystem today.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>68</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a1ea83cb-4385-4e79-835b-7f81be5b77d6</guid>
      <title>Ep 67: Max Junestrand (CEO, Legora) on Differentiating and Pricing AI Apps &amp; How the Legal Industry Will Evolve</title>
      <description><![CDATA[<p>Jacob and Logan sit down with Max Junestrand, founder and CEO of Legora - a rapidly growing legal AI platform (and Redpoint portfolio company). After announcing their Series B last week, Max joined the show to discuss why law is uniquely suited for AI, what it takes to scale an enterprise-ready product across global markets, and a few crazy moments from Legora’s journey so far. They dig into product strategy, lessons on evolving alongside foundational models, and how AI is reshaping the future of law firms. Whether you're building in AI or just curious how it’s being applied in complex industries, this one’s packed with practical insights.</p><p> </p><p>(0:00) Intro</p><p>(1:30) The Evolution of AI in Law</p><p>(2:43) AI's Impact on Legal Processes</p><p>(8:28) Advantages Over Other Players in the AI Law Space</p><p>(12:19) Challenges in Educating Users</p><p>(17:28) The Hardest Part of Building Legora</p><p>(18:46) Pricing Models and Cost Management</p><p>(25:42) YC Experience and Commercial Focus</p><p>(28:11) Being Patient When Releasing Products</p><p>(30:58) Maintaining a Fast-Paced Work Culture</p><p>(33:24) Rapid Growth and Market Penetration</p><p>(36:59) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 27 May 2025 13:41:46 +0000</pubDate>
      <author>jeffron@redpoint.com (Max Junestrand, Jacob Effron, Logan Bartlett)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-67-max-junestrand-ceo-legora-on-replacing-billable-hours-pricing-ai-apps-evolving-legal-teams-k94KadSo</link>
      <content:encoded><![CDATA[<p>Jacob and Logan sit down with Max Junestrand, founder and CEO of Legora - a rapidly growing legal AI platform (and Redpoint portfolio company). After announcing their Series B last week, Max joined the show to discuss why law is uniquely suited for AI, what it takes to scale an enterprise-ready product across global markets, and a few crazy moments from Legora’s journey so far. They dig into product strategy, lessons on evolving alongside foundational models, and how AI is reshaping the future of law firms. Whether you're building in AI or just curious how it’s being applied in complex industries, this one’s packed with practical insights.</p><p> </p><p>(0:00) Intro</p><p>(1:30) The Evolution of AI in Law</p><p>(2:43) AI's Impact on Legal Processes</p><p>(8:28) Advantages Over Other Players in the AI Law Space</p><p>(12:19) Challenges in Educating Users</p><p>(17:28) The Hardest Part of Building Legora</p><p>(18:46) Pricing Models and Cost Management</p><p>(25:42) YC Experience and Commercial Focus</p><p>(28:11) Being Patient When Releasing Products</p><p>(30:58) Maintaining a Fast-Paced Work Culture</p><p>(33:24) Rapid Growth and Market Penetration</p><p>(36:59) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="42391553" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/fcb747e3-fedb-4203-90f5-3ee930fc17eb/audio/d180430a-c8ae-4671-abbd-78268764c688/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 67: Max Junestrand (CEO, Legora) on Differentiating and Pricing AI Apps &amp; How the Legal Industry Will Evolve</itunes:title>
      <itunes:author>Max Junestrand, Jacob Effron, Logan Bartlett</itunes:author>
      <itunes:duration>00:44:09</itunes:duration>
      <itunes:summary>Jacob and Logan sit down with Max Junestrand, founder and CEO of Legora - a rapidly growing legal AI platform (and Redpoint portfolio company). After announcing their Series B last week, Max joined the show to discuss why law is uniquely suited for AI, what it takes to scale an enterprise-ready product across global markets, and a few crazy moments from Legora’s journey so far. They dig into product strategy, lessons on evolving alongside foundational models, and how AI is reshaping the future of law firms. Whether you&apos;re building in AI or just curious how it’s being applied in complex industries, this one’s packed with practical insights.</itunes:summary>
      <itunes:subtitle>Jacob and Logan sit down with Max Junestrand, founder and CEO of Legora - a rapidly growing legal AI platform (and Redpoint portfolio company). After announcing their Series B last week, Max joined the show to discuss why law is uniquely suited for AI, what it takes to scale an enterprise-ready product across global markets, and a few crazy moments from Legora’s journey so far. They dig into product strategy, lessons on evolving alongside foundational models, and how AI is reshaping the future of law firms. Whether you&apos;re building in AI or just curious how it’s being applied in complex industries, this one’s packed with practical insights.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>67</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">53f982f2-f3d0-43c0-bd93-4ae755dbc980</guid>
      <title>Ep 66: Member of Technical Staff at Anthropic Sholto Douglas on Claude 4, Next Phase for AI Coding, and the Path to AI Coworkers</title>
      <description><![CDATA[<p>Sholto Douglas, a Member of Technical Staff at Anthropic, joined Unsupervised Learning to break down why coding is the clearest early signal of model progress, how AI agents are already accelerating research, and what it’ll take to unlock real-world breakthroughs in fields like biology and robotics.</p><p> </p><p>(0:00) Intro<br />(0:48) Claude 4<br />(1:30) Capabilities and Improvements<br />(2:29) Practical Applications and Advice<br />(3:04) Future of AI in Coding<br />(4:38) Managing Multiple AI Models<br />(11:20) The Barrier to Agents is Reliability<br />(16:35) Agents Conducting Research<br />(19:54) Impact of Models on World GDP<br />(25:14) Most Important Metrics in Model Improvement<br />(29:53) Stories of Model Creativity<br />(32:45) How Often Will New Models Be Shipped in the Future?<br />(39:51) Day-to-Day Work of AI Researchers<br />(46:46) The Future of AI and Society<br />(51:26) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Thu, 22 May 2025 16:57:11 +0000</pubDate>
      <author>jeffron@redpoint.com (Sholto Douglas, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-66-member-of-technical-staff-at-anthropic-sholto-douglas-on-claude-4-next-phase-for-ai-coding-and-the-path-to-ai-coworker-jZX0gaQW</link>
      <content:encoded><![CDATA[<p>Sholto Douglas, a Member of Technical Staff at Anthropic, joined Unsupervised Learning to break down why coding is the clearest early signal of model progress, how AI agents are already accelerating research, and what it’ll take to unlock real-world breakthroughs in fields like biology and robotics.</p><p> </p><p>(0:00) Intro<br />(0:48) Claude 4<br />(1:30) Capabilities and Improvements<br />(2:29) Practical Applications and Advice<br />(3:04) Future of AI in Coding<br />(4:38) Managing Multiple AI Models<br />(11:20) The Barrier to Agents is Reliability<br />(16:35) Agents Conducting Research<br />(19:54) Impact of Models on World GDP<br />(25:14) Most Important Metrics in Model Improvement<br />(29:53) Stories of Model Creativity<br />(32:45) How Often Will New Models Be Shipped in the Future?<br />(39:51) Day-to-Day Work of AI Researchers<br />(46:46) The Future of AI and Society<br />(51:26) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="55454867" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/7e1a72d4-eb58-44c5-ac8f-7c14158138c7/audio/1386ab2a-a129-4be3-9897-7380b4f29d0d/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 66: Member of Technical Staff at Anthropic Sholto Douglas on Claude 4, Next Phase for AI Coding, and the Path to AI Coworkers</itunes:title>
      <itunes:author>Sholto Douglas, Jacob Effron</itunes:author>
      <itunes:duration>00:57:45</itunes:duration>
      <itunes:summary>Sholto Douglas, a Member of Technical Staff at Anthropic, joined Unsupervised Learning to break down why coding is the clearest early signal of model progress, how AI agents are already accelerating research, and what it’ll take to unlock real-world breakthroughs in fields like biology and robotics. </itunes:summary>
      <itunes:subtitle>Sholto Douglas, a Member of Technical Staff at Anthropic, joined Unsupervised Learning to break down why coding is the clearest early signal of model progress, how AI agents are already accelerating research, and what it’ll take to unlock real-world breakthroughs in fields like biology and robotics. </itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>66</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a0f950b1-8b45-4e70-89b3-ad83b951e8d8</guid>
      <title>Ep 65: Co-Authors of AI-2027 Daniel Kokotajlo and Thomas Larsen On Their Detailed AI Predictions for the Coming Years</title>
      <description><![CDATA[<p>The recent <i>AI 2027</i> report sparked widespread discussion with its stark warnings about the near-term risks of unaligned AI.</p><p>Authors Daniel Kokotajlo (former OpenAI researcher now focused full-time on alignment through his nonprofit, AI Futures, and one of TIME’s 100 most influential people in AI) and Thomas Larsen joined the show to unpack their findings.</p><p>We talk through the key takeaways from the report, its policy implications, and what they believe it will take to build safer, more aligned models.</p><p> </p><p>(0:00) Intro<br />(1:15) Overview of AI 2027<br />(2:32) AI Development Timeline<br />(4:10) Race and Slowdown Branches<br />(12:52) US vs China<br />(18:09) Potential AI Misalignment<br />(31:06) Getting Serious About the Threat of AI<br />(47:23) Predictions for AI Development by 2027<br />(48:33) Public and Government Reactions to AI Concerns<br />(49:27) Policy Recommendations for AI Safety<br />(52:22) Diverging Views on AI Alignment Timelines<br />(1:01:30) The Role of Public Awareness in AI Safety<br />(1:02:38) Reflections on Insider vs. Outsider Strategies<br />(1:10:53) Future Research and Scenario Planning<br />(1:14:01) Best and Worst Case Outcomes for AI<br />(1:17:02) Final Thoughts and Hopes for the Future</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 14 May 2025 12:55:06 +0000</pubDate>
      <author>jeffron@redpoint.com (Daniel Kokotajlo, Thomas Larsen, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-65-co-authors-of-ai-2027-daniel-kokotajlo-and-thomas-larsen-on-their-detailed-ai-predictions-for-the-coming-years-eZ7itP0k</link>
      <content:encoded><![CDATA[<p>The recent <i>AI 2027</i> report sparked widespread discussion with its stark warnings about the near-term risks of unaligned AI.</p><p>Authors Daniel Kokotajlo (former OpenAI researcher now focused full-time on alignment through his nonprofit, AI Futures, and one of TIME’s 100 most influential people in AI) and Thomas Larsen joined the show to unpack their findings.</p><p>We talk through the key takeaways from the report, its policy implications, and what they believe it will take to build safer, more aligned models.</p><p> </p><p>(0:00) Intro<br />(1:15) Overview of AI 2027<br />(2:32) AI Development Timeline<br />(4:10) Race and Slowdown Branches<br />(12:52) US vs China<br />(18:09) Potential AI Misalignment<br />(31:06) Getting Serious About the Threat of AI<br />(47:23) Predictions for AI Development by 2027<br />(48:33) Public and Government Reactions to AI Concerns<br />(49:27) Policy Recommendations for AI Safety<br />(52:22) Diverging Views on AI Alignment Timelines<br />(1:01:30) The Role of Public Awareness in AI Safety<br />(1:02:38) Reflections on Insider vs. Outsider Strategies<br />(1:10:53) Future Research and Scenario Planning<br />(1:14:01) Best and Worst Case Outcomes for AI<br />(1:17:02) Final Thoughts and Hopes for the Future</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="80119150" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/e1006d24-6105-43ff-86ce-7afb72b8e86a/audio/55d9468a-59d2-4633-927d-ef932a258d20/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 65: Co-Authors of AI-2027 Daniel Kokotajlo and Thomas Larsen On Their Detailed AI Predictions for the Coming Years</itunes:title>
      <itunes:author>Daniel Kokotajlo, Thomas Larsen, Jacob Effron</itunes:author>
      <itunes:duration>01:23:27</itunes:duration>
      <itunes:summary>The recent AI 2027 report sparked widespread discussion with its stark warnings about the near-term risks of unaligned AI. Authors Daniel Kokotajlo (former OpenAI researcher now focused full-time on alignment through his nonprofit, AI Futures, and one of TIME’s 100 most influential people in AI) and Thomas Larsen joined the show to unpack their findings. We talk through the key takeaways from the report, its policy implications, and what they believe it will take to build safer, more aligned models.</itunes:summary>
      <itunes:subtitle>The recent AI 2027 report sparked widespread discussion with its stark warnings about the near-term risks of unaligned AI. Authors Daniel Kokotajlo (former OpenAI researcher now focused full-time on alignment through his nonprofit, AI Futures, and one of TIME’s 100 most influential people in AI) and Thomas Larsen joined the show to unpack their findings. We talk through the key takeaways from the report, its policy implications, and what they believe it will take to build safer, more aligned models.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>65</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">87a80e9d-f0b6-4d84-9d04-c26fc9039a7a</guid>
      <title>Ep 64: GPT 4.1 Lead at OpenAI Michelle Pokrass: RFT Launch, How OpenAI Improves Its Models &amp; the State of AI Agents Today</title>
      <description><![CDATA[<p>In this episode, I sit down with Michelle Pokrass, who leads a research team at OpenAI within post-training focused on improving models for power users: developers using OpenAI models in the API and power users in ChatGPT. We unpack how OpenAI prioritized instruction-following and long context, why evals have a 3-month shelf life, what separates successful AI startups, and how the best teams are fine-tuning to push past the current frontier.</p><p>If you’ve ever wondered how OpenAI really decides what to build, and how it affects what you should build, this one’s for you.</p><p> </p><p>(0:00) Intro</p><p>(1:03) Deep Dive into GPT-4.1 Development</p><p>(2:23) User Feedback and Model Evaluation</p><p>(4:01) Challenges and Improvements in Model Training</p><p>(5:54) Advancements in AI Coding Capabilities</p><p>(9:11) Future of AI Models and Fine-Tuning</p><p>(20:44) Multimodal Capabilities</p><p>(22:59) Deep Tech Applications and Data Efficiency</p><p>(24:14) Preference Fine Tuning vs. RFT</p><p>(26:29) Choosing the Right Model for Your Needs</p><p>(28:18) Prompting Techniques and Model Improvements</p><p>(32:10) Future Research and Model Enhancements</p><p>(39:14) Power Users and Personalization</p><p>(40:22) Personal Journey and Organizational Growth</p><p>(43:37) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Thu, 8 May 2025 17:50:48 +0000</pubDate>
      <author>jeffron@redpoint.com (Michelle Pokrass, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-64-gpt-41-lead-at-openai-michelle-pokrass-rft-launch-how-openai-improves-its-models-the-state-of-ai-agents-today-lvbC57zk</link>
      <content:encoded><![CDATA[<p>In this episode, I sit down with Michelle Pokrass, who leads a research team at OpenAI within post-training focused on improving models for power users: developers using OpenAI models in the API and power users in ChatGPT. We unpack how OpenAI prioritized instruction-following and long context, why evals have a 3-month shelf life, what separates successful AI startups, and how the best teams are fine-tuning to push past the current frontier.</p><p>If you’ve ever wondered how OpenAI really decides what to build, and how it affects what you should build, this one’s for you.</p><p> </p><p>(0:00) Intro</p><p>(1:03) Deep Dive into GPT-4.1 Development</p><p>(2:23) User Feedback and Model Evaluation</p><p>(4:01) Challenges and Improvements in Model Training</p><p>(5:54) Advancements in AI Coding Capabilities</p><p>(9:11) Future of AI Models and Fine-Tuning</p><p>(20:44) Multimodal Capabilities</p><p>(22:59) Deep Tech Applications and Data Efficiency</p><p>(24:14) Preference Fine Tuning vs. RFT</p><p>(26:29) Choosing the Right Model for Your Needs</p><p>(28:18) Prompting Techniques and Model Improvements</p><p>(32:10) Future Research and Model Enhancements</p><p>(39:14) Power Users and Personalization</p><p>(40:22) Personal Journey and Organizational Growth</p><p>(43:37) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="45314759" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/d91eebb6-4e9a-4961-b9cb-62aa2ee43243/audio/7d8e9275-7a19-4d5f-89fc-4dbbe625fd44/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 64: GPT 4.1 Lead at OpenAI Michelle Pokrass: RFT Launch, How OpenAI Improves Its Models &amp; the State of AI Agents Today</itunes:title>
      <itunes:author>Michelle Pokrass, Jacob Effron</itunes:author>
      <itunes:duration>00:47:12</itunes:duration>
      <itunes:summary>In this episode, I sit down with Michelle Pokrass, who leads a research team at OpenAI within post-training focused on improving models for power users: developers using OpenAI models in the API and power users in ChatGPT. We unpack how OpenAI prioritized instruction-following and long context, why evals have a 3-month shelf life, what separates successful AI startups, and how the best teams are fine-tuning to push past the current frontier. If you’ve ever wondered how OpenAI really decides what to build, and how it affects what you should build, this one’s for you.</itunes:summary>
      <itunes:subtitle>In this episode, I sit down with Michelle Pokrass, who leads a research team at OpenAI within post-training focused on improving models for power users: developers using OpenAI models in the API and power users in ChatGPT. We unpack how OpenAI prioritized instruction-following and long context, why evals have a 3-month shelf life, what separates successful AI startups, and how the best teams are fine-tuning to push past the current frontier. If you’ve ever wondered how OpenAI really decides what to build, and how it affects what you should build, this one’s for you.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>64</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a378b549-1e4a-4efb-b6ae-d14c7a0712f0</guid>
      <title>Ep 63: Khan Academy Founder/CEO Salman Khan on Classrooms in 20 years, Rolling out to 1.4M Users &amp; Sal’s Hopes for AI Education</title>
      <description><![CDATA[<p>When Khan Academy launched Khanmigo, Salman Khan thought they might reach 100k users by 2025. Today, they’re at 1.4 million. </p><p>🎧 Sal joined us on Unsupervised Learning to talk about AI's role in education— from the vantage point of someone deploying it at scale. As founder of Khan Academy, he’s overseen the rollout of AI tools to over a million teachers and students, giving him a front-row seat to what’s actually working in classrooms.</p><p> </p><p>(0:00) Intro<br />(1:17) The Vision for Future Classrooms<br />(4:28) Khan Academy's AI Initiatives<br />(7:06) Proactive AI and Engagement<br />(10:33) Teacher and Student Experiences<br />(18:24) District-Level Adoption and Policy<br />(25:56) Gamification and Engagement<br />(27:36) Evaluating AI Models<br />(31:03) Global Impact and Future Prospects<br />(37:43) Challenges and Innovations in AI Education<br />(44:19) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 29 Apr 2025 13:00:48 +0000</pubDate>
      <author>jeffron@redpoint.com (Salman Khan, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-63-khan-academy-founder-ceo-on-salman-khan-on-classrooms-in-20-years-rolling-out-to-14m-users-sals-hopes-for-ai-education-6U445CUu</link>
      <content:encoded><![CDATA[<p>When Khan Academy launched Khanmigo, Salman Khan thought they might reach 100k users by 2025. Today, they’re at 1.4 million. </p><p>🎧 Sal joined us on Unsupervised Learning to talk about AI's role in education— from the vantage point of someone deploying it at scale. As founder of Khan Academy, he’s overseen the rollout of AI tools to over a million teachers and students, giving him a front-row seat to what’s actually working in classrooms.</p><p> </p><p>(0:00) Intro<br />(1:17) The Vision for Future Classrooms<br />(4:28) Khan Academy's AI Initiatives<br />(7:06) Proactive AI and Engagement<br />(10:33) Teacher and Student Experiences<br />(18:24) District-Level Adoption and Policy<br />(25:56) Gamification and Engagement<br />(27:36) Evaluating AI Models<br />(31:03) Global Impact and Future Prospects<br />(37:43) Challenges and Innovations in AI Education<br />(44:19) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="48852784" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/c168cdf0-f3d7-4e0a-9f2d-af118465a630/audio/4ff5fa1d-8f54-4fd8-99d4-b5ef244e61b8/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 63: Khan Academy Founder/CEO Salman Khan on Classrooms in 20 years, Rolling out to 1.4M Users &amp; Sal’s Hopes for AI Education</itunes:title>
      <itunes:author>Salman Khan, Jacob Effron</itunes:author>
      <itunes:duration>00:50:53</itunes:duration>
      <itunes:summary>When Khan Academy launched Khanmigo, Salman Khan thought they might reach 100k users by 2025. Today, they’re at 1.4 million. 

🎧 Sal joined us on Unsupervised Learning to talk about AI&apos;s role in education— from the vantage point of someone deploying it at scale. As founder of Khan Academy, he’s overseen the rollout of AI tools to over a million teachers and students, giving him a front-row seat to what’s actually working in classrooms.
</itunes:summary>
      <itunes:subtitle>When Khan Academy launched Khanmigo, Salman Khan thought they might reach 100k users by 2025. Today, they’re at 1.4 million. 

🎧 Sal joined us on Unsupervised Learning to talk about AI&apos;s role in education— from the vantage point of someone deploying it at scale. As founder of Khan Academy, he’s overseen the rollout of AI tools to over a million teachers and students, giving him a front-row seat to what’s actually working in classrooms.
</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>63</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">93c920b7-7f45-49bc-aaaf-2f1364e36196</guid>
      <title>Ep 62: CEO of Cohere Aidan Gomez on Scaling Limits Emerging, AI Use-cases with PMF &amp; Life After Transformers</title>
      <description><![CDATA[<p>Aidan joined this week’s Unsupervised Learning for a wide-ranging conversation on model architectures, enterprise adoption, and what’s breaking in the foundation model stack. </p><p>If you’re building or investing in AI infrastructure, Aidan is worth listening to. He co-authored the original Transformer paper, leads one of the most advanced model labs outside of the hyperscalers, and is now building for real-world enterprise deployment with Cohere’s agent platform, North. Cohere serves thousands of customers across sectors like finance, telco, and healthcare — and they’ve made a name for themselves by staying model-agnostic, privacy-forward, and deeply international (with major bets in Japan and Korea).</p><p> </p><p>(0:00) Intro<br />(0:32) Enterprise AI<br />(3:23) Custom Integrations and Future of AI Agents<br />(4:33) Enterprise Use Cases for Gen AI<br />(7:02) The Importance of Reasoning in AI Models<br />(10:38) Custom Models and Synthetic Data<br />(17:48) Cohere's Approach to AI Applications<br />(23:24) Future Use Cases and Market Fit<br />(27:11) Building a Unified Automation Platform<br />(27:34) Strategic Decisions in the AI Journey<br />(29:19) International Partnerships and Language Models<br />(31:05) Future of Foundation Models<br />(32:27) AI in Specialized Domains<br />(34:40) Challenges in Data Integration<br />(35:06) Emerging Foundation Model Companies<br />(35:31) Technological Frontiers and Architectures<br />(37:29) Scaling Hypothesis and Model Capabilities<br />(42:26) AI Research Culture and Team Building<br />(44:39) Future of AI and Societal Impact<br />(48:31) Addressing AI Risks</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 15 Apr 2025 13:00:05 +0000</pubDate>
      <author>jeffron@redpoint.com (Aidan Gomez, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-62-ceo-of-cohere-aidan-gomez-on-scaling-limits-emerging-ai-use-cases-with-pmf-life-after-transformers-cq4ILxTh</link>
      <content:encoded><![CDATA[<p>Aidan joined this week’s Unsupervised Learning for a wide-ranging conversation on model architectures, enterprise adoption, and what’s breaking in the foundation model stack. </p><p>If you’re building or investing in AI infrastructure, Aidan is worth listening to. He co-authored the original Transformer paper, leads one of the most advanced model labs outside of the hyperscalers, and is now building for real-world enterprise deployment with Cohere’s agent platform, North. Cohere serves thousands of customers across sectors like finance, telco, and healthcare — and they’ve made a name for themselves by staying model-agnostic, privacy-forward, and deeply international (with major bets in Japan and Korea).</p><p> </p><p>(0:00) Intro<br />(0:32) Enterprise AI<br />(3:23) Custom Integrations and Future of AI Agents<br />(4:33) Enterprise Use Cases for Gen AI<br />(7:02) The Importance of Reasoning in AI Models<br />(10:38) Custom Models and Synthetic Data<br />(17:48) Cohere's Approach to AI Applications<br />(23:24) Future Use Cases and Market Fit<br />(27:11) Building a Unified Automation Platform<br />(27:34) Strategic Decisions in the AI Journey<br />(29:19) International Partnerships and Language Models<br />(31:05) Future of Foundation Models<br />(32:27) AI in Specialized Domains<br />(34:40) Challenges in Data Integration<br />(35:06) Emerging Foundation Model Companies<br />(35:31) Technological Frontiers and Architectures<br />(37:29) Scaling Hypothesis and Model Capabilities<br />(42:26) AI Research Culture and Team Building<br />(44:39) Future of AI and Societal Impact<br />(48:31) Addressing AI Risks</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="48719455" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/3d3dfedf-b084-487b-8110-66d5b4f31fb1/audio/68f6bc4e-a6ef-448f-a5b7-c5e143c16d71/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 62: CEO of Cohere Aidan Gomez on Scaling Limits Emerging, AI Use-cases with PMF &amp; Life After Transformers</itunes:title>
      <itunes:author>Aidan Gomez, Jacob Effron</itunes:author>
      <itunes:duration>00:50:44</itunes:duration>
      <itunes:summary>Aidan joined this week’s Unsupervised Learning for a wide-ranging conversation on model architectures, enterprise adoption, and what’s breaking in the foundation model stack.

If you’re building or investing in AI infrastructure, Aidan is worth listening to. He co-authored the original Transformer paper, leads one of the most advanced model labs outside of the hyperscalers, and is now building for real-world enterprise deployment with Cohere’s agent platform, North. Cohere serves thousands of customers across sectors like finance, telco, and healthcare — and they’ve made a name for themselves by staying model-agnostic, privacy-forward, and deeply international (with major bets in Japan and Korea).</itunes:summary>
      <itunes:subtitle>Aidan joined this week’s Unsupervised Learning for a wide-ranging conversation on model architectures, enterprise adoption, and what’s breaking in the foundation model stack.

If you’re building or investing in AI infrastructure, Aidan is worth listening to. He co-authored the original Transformer paper, leads one of the most advanced model labs outside of the hyperscalers, and is now building for real-world enterprise deployment with Cohere’s agent platform, North. Cohere serves thousands of customers across sectors like finance, telco, and healthcare — and they’ve made a name for themselves by staying model-agnostic, privacy-forward, and deeply international (with major bets in Japan and Korea).</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>62</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">0bee293d-9757-4060-bbf7-2d22156fd053</guid>
      <title>Ep 61: Redpoint’s AI Investors Break Down What Separates Enduring AI Companies from the Hype</title>
      <description><![CDATA[<p>At Redpoint’s annual meeting with investors, Redpoint partners Scott Raney, Alex Bard, Patrick Chase and Jacob Effron shared unfiltered thoughts on some of the most topical questions in AI today: where value will accrue, which industries are best positioned for defensibility at the application layer, and more.</p><p> </p><p>(0:00) Intro<br />(0:39) AI Investment Landscape<br />(1:30) Market Projections<br />(3:25) Strategic Imperatives in AI<br />(6:04) AI Model Layer Insights<br />(10:06) Infrastructure Layer<br />(11:46) Application Layer<br />(15:04) Vertical AI SaaS Opportunities<br />(21:45) Evaluating Early-Stage Founders<br />(23:39) First Mover Advantage and Competitive Dynamics<br />(25:28) Domain Expertise vs. AI Expertise<br />(26:58) Why AI Startups Fail<br />(29:41) Startups vs. Incumbents<br />(35:15) Navigating High AI Valuations</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 9 Apr 2025 13:00:28 +0000</pubDate>
      <author>jeffron@redpoint.com (Alex Bard, Jacob Effron, Patrick Chase, Scott Raney)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ul-live-draft-mMaz5D73</link>
      <content:encoded><![CDATA[<p>At Redpoint’s annual meeting with investors, Redpoint partners Scott Raney, Alex Bard, Patrick Chase and Jacob Effron shared unfiltered thoughts on some of the most topical questions in AI today: where value will accrue, which industries are best positioned for defensibility at the application layer, and more.</p><p> </p><p>(0:00) Intro<br />(0:39) AI Investment Landscape<br />(1:30) Market Projections<br />(3:25) Strategic Imperatives in AI<br />(6:04) AI Model Layer Insights<br />(10:06) Infrastructure Layer<br />(11:46) Application Layer<br />(15:04) Vertical AI SaaS Opportunities<br />(21:45) Evaluating Early-Stage Founders<br />(23:39) First Mover Advantage and Competitive Dynamics<br />(25:28) Domain Expertise vs. AI Expertise<br />(26:58) Why AI Startups Fail<br />(29:41) Startups vs. Incumbents<br />(35:15) Navigating High AI Valuations</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="41739118" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/8f4d0a3a-9b36-44e6-808c-60b56f6cbc50/audio/64b929c5-dcbd-4faa-a7f3-890f522dec8d/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 61: Redpoint’s AI Investors Break Down What Separates Enduring AI Companies from the Hype</itunes:title>
      <itunes:author>Alex Bard, Jacob Effron, Patrick Chase, Scott Raney</itunes:author>
      <itunes:duration>00:43:28</itunes:duration>
      <itunes:summary>At Redpoint’s annual meeting with investors, Redpoint partners Scott Raney, Alex Bard, Patrick Chase and Jacob Effron shared unfiltered thoughts on some of the most topical questions in AI today: where value will accrue, which industries are best positioned for defensibility at the application layer, and more.</itunes:summary>
      <itunes:subtitle>At Redpoint’s annual meeting with investors, Redpoint partners Scott Raney, Alex Bard, Patrick Chase and Jacob Effron shared unfiltered thoughts on some of the most topical questions in AI today: where value will accrue, which industries are best positioned for defensibility at the application layer, and more.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>61</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">52760d85-fe3f-4dcd-bd44-ecdbb7d61aa7</guid>
      <title>Ep 60: Swyx and Alessio (Latent Space) on What has PMF Today, Google is Cooking &amp; GPT Wrappers are Winning</title>
      <description><![CDATA[<p>To unpack some of the most topical questions in AI, I’m joined by two fellow AI podcasters: Swyx and Alessio Fanelli, co-hosts of the Latent Space podcast. We’ve been wanting to do a cross-over episode for a while and finally made it happen.</p><p>Swyx brings deep experience from his time at AWS, Temporal, and Airbyte, and is now focused on AI agents and dev tools. Alessio is an investor at Decibel, where he’s been backing early technical teams pushing the boundaries of infrastructure and applied AI. Together they run Latent Space, a technical newsletter and podcast by and for AI engineers.</p><p>To subscribe or learn more about Latent Space, click here: <a href="https://www.google.com/url?q=https://www.latent.space/&sa=D&source=docs&ust=1743172281702054&usg=AOvVaw0xPxSnQ_bDmXKzIffRWLzC" target="_blank">https://www.latent.space/</a></p><p> </p><p>(0:00) Intro<br />(1:08) Reflecting on AI Surprises of the Past Year<br />(2:24) Open Source Models and Their Adoption<br />(6:48) The Rise of GPT Wrappers<br />(7:49) Challenges in AI Model Training<br />(10:33) Over-hyped and Under-hyped AI Trends<br />(24:00) The Future of AI Product Market Fit<br />(30:27) Google's Momentum and Customer Support Insights<br />(33:16) Emerging AI Applications and Market Trends<br />(35:13) Challenges and Opportunities in AI Development<br />(39:02) Defensibility in AI Applications<br />(42:42) Infrastructure and Security in AI<br />(50:04) Future of AI and Unanswered Questions<br />(55:34) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Fri, 28 Mar 2025 13:02:05 +0000</pubDate>
      <author>jeffron@redpoint.com (Swyx, Alessio Fanelli, Jordan Segall, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-60-swyx-and-alessio-latent-space-on-what-has-pmf-today-google-is-cooking-gpt-wrappers-are-winning-Rj1dd8OQ</link>
      <content:encoded><![CDATA[<p>To unpack some of the most topical questions in AI, I’m joined by two fellow AI podcasters: Swyx and Alessio Fanelli, co-hosts of the Latent Space podcast. We’ve been wanting to do a cross-over episode for a while and finally made it happen.</p><p>Swyx brings deep experience from his time at AWS, Temporal, and Airbyte, and is now focused on AI agents and dev tools. Alessio is an investor at Decibel, where he’s been backing early technical teams pushing the boundaries of infrastructure and applied AI. Together they run Latent Space, a technical newsletter and podcast by and for AI engineers.</p><p>To subscribe or learn more about Latent Space, click here: <a href="https://www.google.com/url?q=https://www.latent.space/&sa=D&source=docs&ust=1743172281702054&usg=AOvVaw0xPxSnQ_bDmXKzIffRWLzC" target="_blank">https://www.latent.space/</a></p><p> </p><p>(0:00) Intro<br />(1:08) Reflecting on AI Surprises of the Past Year<br />(2:24) Open Source Models and Their Adoption<br />(6:48) The Rise of GPT Wrappers<br />(7:49) Challenges in AI Model Training<br />(10:33) Over-hyped and Under-hyped AI Trends<br />(24:00) The Future of AI Product Market Fit<br />(30:27) Google's Momentum and Customer Support Insights<br />(33:16) Emerging AI Applications and Market Trends<br />(35:13) Challenges and Opportunities in AI Development<br />(39:02) Defensibility in AI Applications<br />(42:42) Infrastructure and Security in AI<br />(50:04) Future of AI and Unanswered Questions<br />(55:34) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="59050570" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/a0bf687d-e42a-44c0-b487-1736d6cae585/audio/a0be20ee-54c3-4b6d-a06c-8bd4bc6fbb13/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 60: Swyx and Alessio (Latent Space) on What has PMF Today, Google is Cooking &amp; GPT Wrappers are Winning</itunes:title>
      <itunes:author>Swyx, Alessio Fanelli, Jordan Segall, Jacob Effron</itunes:author>
      <itunes:duration>01:01:30</itunes:duration>
      <itunes:summary>To unpack some of the most topical questions in AI, I’m joined by two fellow AI podcasters: Swyx and Alessio Fanelli, co-hosts of the Latent Space podcast. We’ve been wanting to do a cross-over episode for a while and finally made it happen. Swyx brings deep experience from his time at AWS, Temporal, and Airbyte, and is now focused on AI agents and dev tools. Alessio is an investor at Decibel, where he’s been backing early technical teams pushing the boundaries of infrastructure and applied AI. Together they run Latent Space, a technical newsletter and podcast by and for AI engineers.</itunes:summary>
      <itunes:subtitle>To unpack some of the most topical questions in AI, I’m joined by two fellow AI podcasters: Swyx and Alessio Fanelli, co-hosts of the Latent Space podcast. We’ve been wanting to do a cross-over episode for a while and finally made it happen. Swyx brings deep experience from his time at AWS, Temporal, and Airbyte, and is now focused on AI agents and dev tools. Alessio is an investor at Decibel, where he’s been backing early technical teams pushing the boundaries of infrastructure and applied AI. Together they run Latent Space, a technical newsletter and podcast by and for AI engineers.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>60</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d1c75f88-2863-45b5-88f1-7c41b156edb5</guid>
      <title>Ep 59: OpenAI Product &amp; Eng Leads Nikunj Handa and Steve Coffey on OpenAI’s New Agent Development Tools</title>
      <description><![CDATA[<p>Two weeks ago, OpenAI released its set of tools to help developers build agentic systems. Today on Unsupervised Learning, Nikunj Handa (Product Lead) and Steve Coffey (Eng Lead) answer some of the biggest questions around how developers should be thinking about building in the agentic paradigm in 2025.</p><p> </p><p>(0:00) Intro<br />(0:53) OpenAI’s Vision for Consumer Interaction<br />(4:51) Building Multi-Agent Systems for Business Solutions<br />(6:53) Challenges and Innovations in AI Fine-Tuning<br />(13:20) Exploring Computer Use Cases and Applications<br />(17:20) Advanced Use Cases and Developer Insights<br />(25:29) Challenges with Context Storage and Chat Completions<br />(26:09) Introducing the Responses API and MCP<br />(27:16) AI Infrastructure Companies and Their Role<br />(29:35) Building the Tools Ecosystem<br />(30:17) Exploring Computer Use Models<br />(31:47) The Future of AI and Developer Tools<br />(38:36) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 25 Mar 2025 13:35:37 +0000</pubDate>
      <author>jeffron@redpoint.com (Nikunj Handa, Jacob Effron, Steve Coffey)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-59-openai-product-eng-leads-nikunj-handa-and-steven-heidel-on-openais-new-agent-development-tools-x9_i37Gl</link>
      <content:encoded><![CDATA[<p>Two weeks ago, OpenAI released its set of tools to help developers build agentic systems. Today on Unsupervised Learning, Nikunj Handa (Product Lead) and Steve Coffey (Eng Lead) answer some of the biggest questions around how developers should be thinking about building in the agentic paradigm in 2025.</p><p> </p><p>(0:00) Intro<br />(0:53) OpenAI’s Vision for Consumer Interaction<br />(4:51) Building Multi-Agent Systems for Business Solutions<br />(6:53) Challenges and Innovations in AI Fine-Tuning<br />(13:20) Exploring Computer Use Cases and Applications<br />(17:20) Advanced Use Cases and Developer Insights<br />(25:29) Challenges with Context Storage and Chat Completions<br />(26:09) Introducing the Responses API and MCP<br />(27:16) AI Infrastructure Companies and Their Role<br />(29:35) Building the Tools Ecosystem<br />(30:17) Exploring Computer Use Models<br />(31:47) The Future of AI and Developer Tools<br />(38:36) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="42840023" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/d0d9b967-628b-4a2d-a1c7-cdd371978874/audio/0162de24-ca44-44b6-b9ef-5b447dd52305/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 59: OpenAI Product &amp; Eng Leads Nikunj Handa and Steve Coffey on OpenAI’s New Agent Development Tools</itunes:title>
      <itunes:author>Nikunj Handa, Jacob Effron, Steve Coffey</itunes:author>
      <itunes:duration>00:44:37</itunes:duration>
      <itunes:summary>Two weeks ago, OpenAI released its set of tools to help developers build agentic systems. Today on Unsupervised Learning, Nikunj Handa (Product Lead) and Steve Coffey (Eng Lead) answer some of the biggest questions around how developers should be thinking about building in the agentic paradigm in 2025.</itunes:summary>
      <itunes:subtitle>Two weeks ago, OpenAI released its set of tools to help developers build agentic systems. Today on Unsupervised Learning, Nikunj Handa (Product Lead) and Steve Coffey (Eng Lead) answer some of the biggest questions around how developers should be thinking about building in the agentic paradigm in 2025.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>59</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">51198dfb-5e03-41b8-9245-bfc9e0dff0a6</guid>
      <title>Ep 58: Google Researchers Noam Shazeer and Jack Rae on Scaling Test-time Compute, Reactions to Ilya &amp; AGI</title>
      <description><![CDATA[<p>On the latest episode of Unsupervised Learning, Jacob is joined by two of the most influential minds in AI today. </p><p>🔹 Noam Shazeer, co-inventor of the Transformer</p><p>🔹 Jack Rae, Research Director at DeepMind and one of the leads behind Gemini’s Flash Thinking</p><p>We got to ask them all of the top-of-mind questions in AI today about where we are, where we’re headed and what it means for businesses and the world. Some key take-aways:</p><p> </p><p>(0:00) Intro<br />(1:30) Exploring Gemini 2.0<br />(4:04) Challenges with Evals and Benchmarks<br />(6:14) Reinforcement Loops and AI Productivity<br />(8:15) Agentic Coding and AI in Development<br />(13:02) Multimodal Capabilities and Applications<br />(15:21) Future of AI: Complexity and Reliability<br />(19:02) Test Time Compute and Data Efficiency<br />(31:20) AI Research Culture and Breakthroughs<br />(38:59) Reflecting on Large Language Models<br />(39:37) The Rise of Vision-Based Models<br />(41:01) Native Image Generation and General Purpose Models<br />(41:35) AI in Healthcare and Specialized Models<br />(43:32) Shifting Timelines and Rapid Adoption<br />(46:48) Open Source Models and Competitive Edge<br />(49:30) AI's Impact on Education and Personal Lives<br />(55:10) AGI Risks and Safety Considerations<br />(57:33) Future of AI Companions<br />(1:02:17) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Mon, 17 Mar 2025 15:25:21 +0000</pubDate>
      <author>jeffron@redpoint.com (Jack Rae, Noam Shazeer, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-58-google-researchers-noam-shazeer-and-jack-rae-on-scaling-test-time-compute-reactions-to-ilya-agi-xxqHoqkX</link>
      <content:encoded><![CDATA[<p>On the latest episode of Unsupervised Learning, Jacob is joined by two of the most influential minds in AI today. </p><p>🔹 Noam Shazeer, co-inventor of the Transformer</p><p>🔹 Jack Rae, Research Director at DeepMind and one of the leads behind Gemini’s Flash Thinking</p><p>We got to ask them all of the top-of-mind questions in AI today about where we are, where we’re headed and what it means for businesses and the world. Some key take-aways:</p><p> </p><p>(0:00) Intro<br />(1:30) Exploring Gemini 2.0<br />(4:04) Challenges with Evals and Benchmarks<br />(6:14) Reinforcement Loops and AI Productivity<br />(8:15) Agentic Coding and AI in Development<br />(13:02) Multimodal Capabilities and Applications<br />(15:21) Future of AI: Complexity and Reliability<br />(19:02) Test Time Compute and Data Efficiency<br />(31:20) AI Research Culture and Breakthroughs<br />(38:59) Reflecting on Large Language Models<br />(39:37) The Rise of Vision-Based Models<br />(41:01) Native Image Generation and General Purpose Models<br />(41:35) AI in Healthcare and Specialized Models<br />(43:32) Shifting Timelines and Rapid Adoption<br />(46:48) Open Source Models and Competitive Edge<br />(49:30) AI's Impact on Education and Personal Lives<br />(55:10) AGI Risks and Safety Considerations<br />(57:33) Future of AI Companions<br />(1:02:17) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="66702985" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/44b59c5c-fcc0-4bc0-b768-12b258d29e79/audio/d91cc671-8f8c-46d6-9eda-a6f915ad729b/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 58: Google Researchers Noam Shazeer and Jack Rae on Scaling Test-time Compute, Reactions to Ilya &amp; AGI</itunes:title>
      <itunes:author>Jack Rae, Noam Shazeer, Jacob Effron</itunes:author>
      <itunes:duration>01:09:28</itunes:duration>
      <itunes:summary>On the latest episode of Unsupervised Learning, Jacob is joined by two of the most influential minds in AI today. 

🔹 Noam Shazeer, co-inventor of the Transformer
🔹 Jack Rae, Research Director at DeepMind and one of the leads behind Gemini’s Flash Thinking

We got to ask them all of the top-of-mind questions in AI today about where we are, where we’re headed and what it means for businesses and the world. </itunes:summary>
      <itunes:subtitle>On the latest episode of Unsupervised Learning, Jacob is joined by two of the most influential minds in AI today. 

🔹 Noam Shazeer, co-inventor of the Transformer
🔹 Jack Rae, Research Director at DeepMind and one of the leads behind Gemini’s Flash Thinking

We got to ask them all of the top-of-mind questions in AI today about where we are, where we’re headed and what it means for businesses and the world. </itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>58</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">102dc080-00d9-4255-927a-b89978bf0668</guid>
      <title>Ep 57: Former CTO of Meta Mike Schroepfer on the Path to Powering the AI Revolution</title>
      <description><![CDATA[<p>On today’s Unsupervised Learning, <a href="https://www.linkedin.com/in/schrep/">Mike Schroepfer</a> (ex-CTO of Meta and founder of <a href="https://www.linkedin.com/company/gigascale/">Gigascale Capital</a>) reveals why energy is a key bottleneck holding AI progress back. Mike discusses how we can scale energy production to democratize AI globally and explores AI’s role in climate change. He also reflects on a decade as Meta’s CTO and how AI coding is transforming the CTO role. Finally, he offers predictions on the future of AI developer tools, VR, and open-source models.</p><p> </p><p>(0:00) Intro<br />(0:43) AI's Role in Energy and Climate Change<br />(4:32) Innovative Energy Solutions<br />(14:50) Open Source and AI Development<br />(22:35) Challenges in Chip Design<br />(24:04) Balancing Data Center Capacity<br />(25:55) The Future of VR and AI Integration<br />(29:41) AI's Role in Climate Solutions<br />(31:41) AI in Material Science and Beyond<br />(34:31) Personal AI Assistants and Their Impact<br />(38:47) Reflections on AI and Future Predictions<br />(41:23) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 5 Mar 2025 14:00:25 +0000</pubDate>
      <author>jeffron@redpoint.com (Mike Schroepfer, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-57-former-cto-of-meta-mike-schroepfer-on-the-path-to-powering-the-ai-revolution-KAQ_cq4Q</link>
      <content:encoded><![CDATA[<p>On today’s Unsupervised Learning, <a href="https://www.linkedin.com/in/schrep/">Mike Schroepfer</a> (ex-CTO of Meta and founder of <a href="https://www.linkedin.com/company/gigascale/">Gigascale Capital</a>) reveals why energy is a key bottleneck holding AI progress back. Mike discusses how we can scale energy production to democratize AI globally and explores AI’s role in climate change. He also reflects on a decade as Meta’s CTO and how AI coding is transforming the CTO role. Finally, he offers predictions on the future of AI developer tools, VR, and open-source models.</p><p> </p><p>(0:00) Intro<br />(0:43) AI's Role in Energy and Climate Change<br />(4:32) Innovative Energy Solutions<br />(14:50) Open Source and AI Development<br />(22:35) Challenges in Chip Design<br />(24:04) Balancing Data Center Capacity<br />(25:55) The Future of VR and AI Integration<br />(29:41) AI's Role in Climate Solutions<br />(31:41) AI in Material Science and Beyond<br />(34:31) Personal AI Assistants and Their Impact<br />(38:47) Reflections on AI and Future Predictions<br />(41:23) Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="42998430" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/f901a5fd-1f93-4787-99bf-0dda35df25cb/audio/179c365c-ab3f-4984-9d60-659450f88e39/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 57: Former CTO of Meta Mike Schroepfer on the Path to Powering the AI Revolution</itunes:title>
      <itunes:author>Mike Schroepfer, Jacob Effron</itunes:author>
      <itunes:duration>00:44:47</itunes:duration>
      <itunes:summary>On today’s Unsupervised Learning, Mike Schroepfer (ex-CTO of Meta and founder of Gigascale Capital) reveals why energy is a key bottleneck holding AI progress back. Mike discusses how we can scale energy production to democratize AI globally and explores AI’s role in climate change. He also reflects on a decade as Meta’s CTO and how AI coding is transforming the CTO role. Finally, he offers predictions on the future of AI developer tools, VR, and open-source models.</itunes:summary>
      <itunes:subtitle>On today’s Unsupervised Learning, Mike Schroepfer (ex-CTO of Meta and founder of Gigascale Capital) reveals why energy is a key bottleneck holding AI progress back. Mike discusses how we can scale energy production to democratize AI globally and explores AI’s role in climate change. He also reflects on a decade as Meta’s CTO and how AI coding is transforming the CTO role. Finally, he offers predictions on the future of AI developer tools, VR, and open-source models.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>57</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">38a8c550-eaca-437d-a085-327d7a0d698c</guid>
      <title>Ep 56: Distinguished Engineer at Waymo Vincent Vanhoucke Unpacks the Breakthroughs and Bottlenecks of Self-Driving</title>
      <description><![CDATA[<p>Waymo is an autonomous driving technology company with the mission to be the world's most trusted driver. The company operates a 24/7 public ride-hail service and provides over 150,000 trips each week across San Francisco, Los Angeles, Phoenix, and Austin, making mobility more accessible, sustainable, and safer for everyone.</p><p>In this week’s episode of Unsupervised Learning, we dive deep into the frontier where AI meets hardware — and there’s no better guide than Vincent Vanhoucke, Distinguished Engineer at Waymo and former Head of Robotics at DeepMind.</p><p> </p><p>[0:00] Intro</p><p>[0:50] Waymo's Technological Evolution</p><p>[2:40] The Role of LLMs in Autonomous Driving</p><p>[6:02] Vincent's Journey to Waymo</p><p>[9:17] Challenges in Autonomous Driving</p><p>[11:58] Simulation and World Models</p><p>[27:44] Future Milestones and Expansion</p><p>[30:10] Broader Robotics and AI</p><p>[36:12] Future of General Robotics Models</p><p>[38:14] Hardware vs. Software Approaches in Robotics</p><p>[40:19] Challenges in Robotic Data Acquisition</p><p>[40:38] Simulation vs. Real-World Data in Robotics</p><p>[43:02] Human-Robot Interaction for Data Collection</p><p>[45:03] Advancements in Multimodal Models</p><p>[47:08] Unanswered Questions in Robotics</p><p>[52:02] Reasoning Capabilities in AI</p><p>[54:57] Future of Robotics and AI</p><p>[1:00:51] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 26 Feb 2025 14:00:23 +0000</pubDate>
      <author>jeffron@redpoint.com (Vincent Vanhoucke, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-56-distinguished-engineer-waymo-vincent-vanhoucke-unpacks-the-breakthroughs-and-bottlenecks-of-self-driving-MtM_szxN</link>
      <content:encoded><![CDATA[<p>Waymo is an autonomous driving technology company with the mission to be the world's most trusted driver. The company operates a 24/7 public ride-hail service and provides over 150,000 trips each week across San Francisco, Los Angeles, Phoenix, and Austin, making mobility more accessible, sustainable, and safer for everyone.</p><p>In this week’s episode of Unsupervised Learning, we dive deep into the frontier where AI meets hardware — and there’s no better guide than Vincent Vanhoucke, Distinguished Engineer at Waymo and former Head of Robotics at DeepMind.</p><p> </p><p>[0:00] Intro</p><p>[0:50] Waymo's Technological Evolution</p><p>[2:40] The Role of LLMs in Autonomous Driving</p><p>[6:02] Vincent's Journey to Waymo</p><p>[9:17] Challenges in Autonomous Driving</p><p>[11:58] Simulation and World Models</p><p>[27:44] Future Milestones and Expansion</p><p>[30:10] Broader Robotics and AI</p><p>[36:12] Future of General Robotics Models</p><p>[38:14] Hardware vs. Software Approaches in Robotics</p><p>[40:19] Challenges in Robotic Data Acquisition</p><p>[40:38] Simulation vs. Real-World Data in Robotics</p><p>[43:02] Human-Robot Interaction for Data Collection</p><p>[45:03] Advancements in Multimodal Models</p><p>[47:08] Unanswered Questions in Robotics</p><p>[52:02] Reasoning Capabilities in AI</p><p>[54:57] Future of Robotics and AI</p><p>[1:00:51] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="70109352" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/8b679269-97ae-4f2b-8c90-785556271d64/audio/c080309f-9643-4c82-9b91-73c2b3a7ada5/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 56: Distinguished Engineer at Waymo Vincent Vanhoucke Unpacks the Breakthroughs and Bottlenecks of Self-Driving</itunes:title>
      <itunes:author>Vincent Vanhoucke, Jacob Effron</itunes:author>
      <itunes:duration>01:13:01</itunes:duration>
      <itunes:summary>Waymo is an autonomous driving technology company with the mission to be the world&apos;s most trusted driver. The company operates a 24/7 public ride-hail service and provides over 150,000 trips each week across San Francisco, Los Angeles, Phoenix, and Austin, making mobility more accessible, sustainable, and safer for everyone. In this week’s episode of Unsupervised Learning, we dive deep into the frontier where AI meets hardware — and there’s no better guide than Vincent Vanhoucke, Distinguished Engineer at Waymo and former Head of Robotics at DeepMind.</itunes:summary>
      <itunes:subtitle>Waymo is an autonomous driving technology company with the mission to be the world&apos;s most trusted driver. The company operates a 24/7 public ride-hail service and provides over 150,000 trips each week across San Francisco, Los Angeles, Phoenix, and Austin, making mobility more accessible, sustainable, and safer for everyone. In this week’s episode of Unsupervised Learning, we dive deep into the frontier where AI meets hardware — and there’s no better guide than Vincent Vanhoucke, Distinguished Engineer at Waymo and former Head of Robotics at DeepMind.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>56</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ded266d3-e4bc-447a-9444-05601b39498f</guid>
      <title>Ep 55: Head of Amazon AGI Lab David Luan on DeepSeek’s Significance, What’s Next for Agents &amp; Lessons from OpenAI</title>
      <description><![CDATA[<p>David is an OG in AI who has been at the forefront of many of the major breakthroughs of the past decade. His resume: VP of Engineering at OpenAI, a key contributor to Google Brain, co-founder of Adept, and now leading Amazon’s SF AGI Lab. In this episode, we focused on how far test-time compute gets us, the real implications of DeepSeek, what agent milestones he’s looking for and more.</p><p>[0:00] Intro<br />[1:14] DeepSeek Reactions and Market Implications<br />[2:44] AI Models and Efficiency<br />[4:11] Challenges in Building AGI<br />[7:58] Research Problems in AI Development<br />[11:17] The Future of AI Agents<br />[15:12] Engineering Challenges and Innovations<br />[19:45] The Path to Reliable AI Agents<br />[21:48] Defining AGI and Its Impact<br />[22:47] Challenges and Gating Factors<br />[24:05] Future Human-Computer Interaction<br />[25:00] Specialized Models and Policy<br />[25:58] Technical Challenges and Model Evaluation<br />[28:36] Amazon's Role in AGI Development<br />[30:33] Data Labeling and Team Building<br />[36:37] Reflections on OpenAI<br />[42:12] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 19 Feb 2025 14:41:29 +0000</pubDate>
      <author>jeffron@redpoint.com (David Luan, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-55-head-of-amazon-agi-lab-david-luan-on-deepseeks-significance-whats-next-for-agents-lessons-from-openai-Um2qv0AW</link>
      <content:encoded><![CDATA[<p>David is an OG in AI who has been at the forefront of many of the major breakthroughs of the past decade. His resume: VP of Engineering at OpenAI, a key contributor to Google Brain, co-founder of Adept, and now leading Amazon’s SF AGI Lab. In this episode, we focused on how far test-time compute gets us, the real implications of DeepSeek, what agent milestones he’s looking for and more.</p><p>[0:00] Intro<br />[1:14] DeepSeek Reactions and Market Implications<br />[2:44] AI Models and Efficiency<br />[4:11] Challenges in Building AGI<br />[7:58] Research Problems in AI Development<br />[11:17] The Future of AI Agents<br />[15:12] Engineering Challenges and Innovations<br />[19:45] The Path to Reliable AI Agents<br />[21:48] Defining AGI and Its Impact<br />[22:47] Challenges and Gating Factors<br />[24:05] Future Human-Computer Interaction<br />[25:00] Specialized Models and Policy<br />[25:58] Technical Challenges and Model Evaluation<br />[28:36] Amazon's Role in AGI Development<br />[30:33] Data Labeling and Team Building<br />[36:37] Reflections on OpenAI<br />[42:12] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="42075158" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/51d1ca3b-d732-418e-97e7-df287afdaff1/audio/261e4c5b-2d35-4e13-8ef2-5c3684f7e3e0/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 55: Head of Amazon AGI Lab David Luan on DeepSeek’s Significance, What’s Next for Agents &amp; Lessons from OpenAI</itunes:title>
      <itunes:author>David Luan, Jacob Effron</itunes:author>
      <itunes:duration>00:43:49</itunes:duration>
      <itunes:summary>David is an OG in AI who has been at the forefront of many of the major breakthroughs of the past decade. His resume: VP of Engineering at OpenAI, a key contributor to Google Brain, co-founder of Adept, and now leading Amazon’s SF AGI Lab. 

In this episode, we focused on how far test-time compute gets us, the real implications of DeepSeek, what agent milestones he’s looking for and more.</itunes:summary>
      <itunes:subtitle>David is an OG in AI who has been at the forefront of many of the major breakthroughs of the past decade. His resume: VP of Engineering at OpenAI, a key contributor to Google Brain, co-founder of Adept, and now leading Amazon’s SF AGI Lab. 

In this episode, we focused on how far test-time compute gets us, the real implications of DeepSeek, what agent milestones he’s looking for and more.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>55</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1c055be6-d9c3-4e65-b2ae-f644db713725</guid>
      <title>Ep 54: Princeton Researcher Arvind Narayanan on the Limitations of Agent Evals, AI’s Societal Impact &amp; Important Lessons from History</title>
      <description><![CDATA[<p>Arvind Narayanan is one of the leading voices in AI when it comes to cutting through the hype. As a Princeton professor and co-author of AI Snake Oil, he cautions against both unfounded fears and overblown promises in AI. In this episode, Arvind dissects the future of AI in education, its parallels to past tech revolutions, and how our jobs are already shifting toward managing these powerful tools. Some of our favorite takeaways:</p><p> </p><p>[0:00] Intro<br />[0:46] Reasoning Models and Their Uneven Progress<br />[2:46] Challenges in AI Benchmarks and Real-World Applications<br />[5:03] Inference Scaling and Verifier Imperfections<br />[7:33] Agentic AI: Tools vs. Autonomous Actions<br />[12:07] Future of AI in Everyday Life<br />[15:34] Evaluating AI Agents and Collaboration<br />[24:49] Regulatory and Policy Implications of AI<br />[27:49] Analyzing Generative AI Adoption Rates<br />[29:17] Educational Policies and Generative AI<br />[30:09] Flaws in Predictive AI Models<br />[31:31] Regulation and Safety in AI<br />[33:47] Academia's Role in AI Development<br />[36:13] AI in Scientific Research<br />[38:22] AI and Human Minds<br />[46:04] Economic Impacts of AI<br />[49:42] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Thu, 30 Jan 2025 15:37:10 +0000</pubDate>
      <author>jeffron@redpoint.com (Arvind Narayanan, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-54-princeton-researcher-arvind-narayanan-on-the-limitations-of-agent-evals-ais-societal-impact-important-lessons-from-history-tLlMytNX</link>
      <content:encoded><![CDATA[<p>Arvind Narayanan is one of the leading voices in AI when it comes to cutting through the hype. As a Princeton professor and co-author of AI Snake Oil, he cautions against both unfounded fears and overblown promises in AI. In this episode, Arvind dissects the future of AI in education, its parallels to past tech revolutions, and how our jobs are already shifting toward managing these powerful tools. Some of our favorite takeaways:</p><p> </p><p>[0:00] Intro<br />[0:46] Reasoning Models and Their Uneven Progress<br />[2:46] Challenges in AI Benchmarks and Real-World Applications<br />[5:03] Inference Scaling and Verifier Imperfections<br />[7:33] Agentic AI: Tools vs. Autonomous Actions<br />[12:07] Future of AI in Everyday Life<br />[15:34] Evaluating AI Agents and Collaboration<br />[24:49] Regulatory and Policy Implications of AI<br />[27:49] Analyzing Generative AI Adoption Rates<br />[29:17] Educational Policies and Generative AI<br />[30:09] Flaws in Predictive AI Models<br />[31:31] Regulation and Safety in AI<br />[33:47] Academia's Role in AI Development<br />[36:13] AI in Scientific Research<br />[38:22] AI and Human Minds<br />[46:04] Economic Impacts of AI<br />[49:42] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="54871396" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/8fde3219-5be6-48ee-9f82-d886e0f7b036/audio/c3faf821-14ca-471e-bd7e-5fdf102d509d/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 54: Princeton Researcher Arvind Narayanan on the Limitations of Agent Evals, AI’s Societal Impact &amp; Important Lessons from History</itunes:title>
      <itunes:author>Arvind Narayanan, Jacob Effron</itunes:author>
      <itunes:duration>00:57:09</itunes:duration>
      <itunes:summary>Arvind Narayanan is one of the leading voices in AI when it comes to cutting through the hype. As a Princeton professor and co-author of AI Snake Oil, he cautions against both unfounded fears and overblown promises in AI. In this episode, Arvind dissects the future of AI in education, its parallels to past tech revolutions, and how our jobs are already shifting toward managing these powerful tools.</itunes:summary>
      <itunes:subtitle>Arvind Narayanan is one of the leading voices in AI when it comes to cutting through the hype. As a Princeton professor and co-author of AI Snake Oil, he cautions against both unfounded fears and overblown promises in AI. In this episode, Arvind dissects the future of AI in education, its parallels to past tech revolutions, and how our jobs are already shifting toward managing these powerful tools.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>54</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">13920b54-b1aa-4eb1-8acf-c766aec110a1</guid>
      <title>Ep 53: SemiAnalysis Founder Dylan Patel on New AI Regulations, Future of Chinese AI &amp; xAI’s Scrappy Surge to Hyperscale</title>
      <description><![CDATA[<p>In this episode of Unsupervised Learning, we sit down with Dylan Patel, Chief Analyst at SemiAnalysis, to break down what the sweeping new US AI export regulations really mean. From how they consolidate power among Big Tech to China's narrowing options for AI dominance, we unpacked the impact of this regulatory shift.</p><p>Follow SemiAnalysis: https://semianalysis.com/</p><p> </p><p>[0:00] Intro<br />[1:07] Grading the AI Diffusion Rule<br />[3:48] What Will Happen to the Malaysian Data Centers?<br />[7:23] How Do the Regulations Favor Giant Tech Companies?<br />[9:07] Pre-Regulation AI Landscape<br />[13:00] Where Does Chinese AI Go From Here?<br />[22:00] The Goldilocks Approach to Regulation<br />[24:16] Size of Cluster Buildouts Today<br />[37:47] How Big Will Cluster Buildouts Get?<br />[43:00] Are Open-Source Models Falling Behind?<br />[47:51] Questions Dylan Wants the Answer To<br />[51:30] Hardware Startups<br />[1:01:05] The Future of Enterprise AI<br />[1:05:10] What Made CoreWeave So Successful?<br />[1:19:28] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 21 Jan 2025 14:01:05 +0000</pubDate>
      <author>jeffron@redpoint.com (Dylan Patel, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-53-semianalysis-founder-dylan-patel-on-new-ai-regulations-future-of-chinese-ai-xais-scrappy-surge-to-hyperscale-gKqlz4lT</link>
      <content:encoded><![CDATA[<p>In this episode of Unsupervised Learning, we sit down with Dylan Patel, Chief Analyst at SemiAnalysis, to break down what the sweeping new US AI export regulations really mean. From how they consolidate power among Big Tech to China's narrowing options for AI dominance, we unpacked the impact of this regulatory shift.</p><p>Follow SemiAnalysis: https://semianalysis.com/</p><p> </p><p>[0:00] Intro<br />[1:07] Grading the AI Diffusion Rule<br />[3:48] What Will Happen to the Malaysian Data Centers?<br />[7:23] How Do the Regulations Favor Giant Tech Companies?<br />[9:07] Pre-Regulation AI Landscape<br />[13:00] Where Does Chinese AI Go From Here?<br />[22:00] The Goldilocks Approach to Regulation<br />[24:16] Size of Cluster Buildouts Today<br />[37:47] How Big Will Cluster Buildouts Get?<br />[43:00] Are Open-Source Models Falling Behind?<br />[47:51] Questions Dylan Wants the Answer To<br />[51:30] Hardware Startups<br />[1:01:05] The Future of Enterprise AI<br />[1:05:10] What Made CoreWeave So Successful?<br />[1:19:28] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="80892699" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/82ecc1c4-03ab-47e4-bfc7-7f58ee5e8f1a/audio/43609863-a0ce-4254-90d2-4a7a56748a48/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 53: SemiAnalysis Founder Dylan Patel on New AI Regulations, Future of Chinese AI &amp; xAI’s Scrappy Surge to Hyperscale</itunes:title>
      <itunes:author>Dylan Patel, Jacob Effron</itunes:author>
      <itunes:duration>01:24:15</itunes:duration>
      <itunes:summary>In this episode of Unsupervised Learning, we sit down with Dylan Patel, Chief Analyst at SemiAnalysis, to break down what the sweeping new US AI export regulations really mean. From how they consolidate power among Big Tech to China&apos;s narrowing options for AI dominance, we unpacked the impact of this regulatory shift.</itunes:summary>
      <itunes:subtitle>In this episode of Unsupervised Learning, we sit down with Dylan Patel, Chief Analyst at SemiAnalysis, to break down what the sweeping new US AI export regulations really mean. From how they consolidate power among Big Tech to China&apos;s narrowing options for AI dominance, we unpacked the impact of this regulatory shift.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>53</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6103495b-37ee-4a82-a487-e4b891616347</guid>
      <title>Ep 52: Marc Benioff Responds to Satya and Unpacks the Agentic Era</title>
      <description><![CDATA[<p>In this cross-over episode with The Logan Bartlett Show, Marc Benioff (CEO, Salesforce) responds to Satya Nadella’s recent predictions and shares his thoughts on the current reality of AGI. He dives into the rise of digital labor, the multi-trillion-dollar potential of agentic technology, and what the future split between software and agentic revenue might look like. Marc also discusses why CEOs need to stay grounded in delivering actionable solutions, and he emphasizes the moral obligation businesses have to retrain employees and invest in communities as AI continues to evolve.</p><p> </p><p>(00:00) Intro</p><p>(01:45) Salesforce's AI Impact on Business</p><p>(03:03) The Future of Digital Labor</p><p>(05:28) Agentic AI and Customer Success</p><p>(07:42) Salesforce's Competitive Edge</p><p>(11:48) Marc Benioff's Response to Satya Nadella</p><p>(14:16) The Role of AI in Enterprise Software</p><p>(20:14) The Balance of AI and Human Labor</p><p>(28:34) Salesforce's Philanthropic Efforts</p><p>(36:24) The Future of AI and Regulation</p><p>(40:24) Conclusion and Farewell</p>
]]></description>
      <pubDate>Fri, 10 Jan 2025 14:00:00 +0000</pubDate>
      <author>jeffron@redpoint.com (Marc Benioff, Jacob Effron, Logan Bartlett)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/cross-over-episode-marc-benioff-responds-to-satya-and-unpacks-the-agentic-era-with-the-logan-bartlett-show-jKum7knG</link>
      <content:encoded><![CDATA[<p>In this cross-over episode with The Logan Bartlett Show, Marc Benioff (CEO, Salesforce) responds to Satya Nadella’s recent predictions and shares his thoughts on the current reality of AGI. He dives into the rise of digital labor, the multi-trillion-dollar potential of agentic technology, and what the future split between software and agentic revenue might look like. Marc also discusses why CEOs need to stay grounded in delivering actionable solutions, and he emphasizes the moral obligation businesses have to retrain employees and invest in communities as AI continues to evolve.</p><p> </p><p>(00:00) Intro</p><p>(01:45) Salesforce's AI Impact on Business</p><p>(03:03) The Future of Digital Labor</p><p>(05:28) Agentic AI and Customer Success</p><p>(07:42) Salesforce's Competitive Edge</p><p>(11:48) Marc Benioff's Response to Satya Nadella</p><p>(14:16) The Role of AI in Enterprise Software</p><p>(20:14) The Balance of AI and Human Labor</p><p>(28:34) Salesforce's Philanthropic Efforts</p><p>(36:24) The Future of AI and Regulation</p><p>(40:24) Conclusion and Farewell</p>
]]></content:encoded>
      <enclosure length="38850854" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/fdbaecdc-5170-42b3-97e9-8155908eafa0/audio/700ea468-27e1-4841-a67e-29a1d56fd567/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 52: Marc Benioff Responds to Satya and Unpacks the Agentic Era</itunes:title>
      <itunes:author>Marc Benioff, Jacob Effron, Logan Bartlett</itunes:author>
      <itunes:duration>00:39:57</itunes:duration>
      <itunes:summary>In this cross-over episode with The Logan Bartlett Show, Marc Benioff (CEO, Salesforce) responds to Satya Nadella’s recent predictions and shares his thoughts on the current reality of AGI.</itunes:summary>
      <itunes:subtitle>In this cross-over episode with The Logan Bartlett Show, Marc Benioff (CEO, Salesforce) responds to Satya Nadella’s recent predictions and shares his thoughts on the current reality of AGI.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>52</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5e079548-a0d8-4e17-b9e6-3e8b3e338736</guid>
      <title>Ep 51: Former Chief Research Officer of OpenAI Bob McGrew - What Comes Next for AI?</title>
      <description><![CDATA[<p>In our new world of AI, few minds shine as brightly as Bob McGrew's. Until November, Bob was the Chief Research Officer at OpenAI, and before that he led Palantir’s engineering and product management for the first decade of its existence. He’s seen it all, and we were fortunate to get his insights and vision for the future in one of our favorite episodes of Unsupervised Learning to date:</p><p> </p><p>[0:00] Intro<br />[0:44] Debating AI Model Capabilities<br />[0:57] Inside vs Outside Perspectives on AI Progress<br />[1:39] Challenges in AI Pre-Training<br />[3:02] Reinforcement Learning and Future Models<br />[3:48] AI Progress in 2025<br />[5:58] New Form Factors for AI Models<br />[8:56] Reliability and Enterprise Integration<br />[18:14] Multimodal AI and Video Models<br />[24:05] The Future of Robotics<br />[32:46] The Complexity of Automating Jobs with AI<br />[34:08] AI in Startups: Tackling Boring Problems<br />[35:33] AI's Impact on Productivity and Consultants<br />[36:43] Traits of Top AI Researchers<br />[40:52] The Evolution of OpenAI's Mission<br />[46:57] The Challenges of Scaling AI<br />[49:16] The Future of AI and Human Agency<br />[54:47] AI in Social Sciences and Academia<br />[1:01:15] Reflections and Future Plans<br />[1:02:57] Quickfire</p><p> </p><p>With your co-hosts:  </p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health  </p><p>@patrickachase  </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn  </p><p>@ericabrescia  </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)  </p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 18 Dec 2024 15:30:58 +0000</pubDate>
      <author>jeffron@redpoint.com (Bob McGrew, Jordan Segall, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-51-former-chief-research-officer-of-openai-bob-mcgrew-what-comes-next-for-ai-AsjUYRJa</link>
      <content:encoded><![CDATA[<p>In our new world of AI, few minds shine as brightly as Bob McGrew's. Until November, Bob was the Chief Research Officer at OpenAI; before that, he led Palantir’s engineering and product management for the first decade of its existence. He’s seen it all, and we were fortunate to get his insights and vision for the future in one of my favorite episodes of Unsupervised Learning to date:</p><p> </p><p>[0:00] Intro<br />[0:44] Debating AI Model Capabilities<br />[0:57] Inside vs Outside Perspectives on AI Progress<br />[1:39] Challenges in AI Pre-Training<br />[3:02] Reinforcement Learning and Future Models<br />[3:48] AI Progress in 2025<br />[5:58] New Form Factors for AI Models<br />[8:56] Reliability and Enterprise Integration<br />[18:14] Multimodal AI and Video Models<br />[24:05] The Future of Robotics<br />[32:46] The Complexity of Automating Jobs with AI<br />[34:08] AI in Startups: Tackling Boring Problems<br />[35:33] AI's Impact on Productivity and Consultants<br />[36:43] Traits of Top AI Researchers<br />[40:52] The Evolution of OpenAI's Mission<br />[46:57] The Challenges of Scaling AI<br />[49:16] The Future of AI and Human Agency<br />[54:47] AI in Social Sciences and Academia<br />[1:01:15] Reflections and Future Plans<br />[1:02:57] Quickfire</p><p> </p><p>With your co-hosts:  </p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health  </p><p>@patrickachase  </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn  </p><p>@ericabrescia  </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)  </p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="64847664" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/62660a38-8319-47ab-9ffe-4e78e17de4e5/audio/ed447d6a-96aa-47d1-9d7f-ab40f5d4beda/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 51: Former Chief Research Officer of OpenAI Bob McGrew - What Comes Next for AI?</itunes:title>
      <itunes:author>Bob McGrew, Jordan Segall, Jacob Effron</itunes:author>
      <itunes:duration>01:07:32</itunes:duration>
      <itunes:summary>In our new world of AI, few minds shine as brightly as Bob McGrew&apos;s. Until November, Bob was the Chief Research Officer at OpenAI; before that, he led Palantir’s engineering and product management for the first decade of its existence. He’s seen it all, and we were fortunate to get his insights and vision for the future in one of my favorite episodes of Unsupervised Learning to date.</itunes:summary>
      <itunes:subtitle>In our new world of AI, few minds shine as brightly as Bob McGrew&apos;s. Until November, Bob was the Chief Research Officer at OpenAI; before that, he led Palantir’s engineering and product management for the first decade of its existence. He’s seen it all, and we were fortunate to get his insights and vision for the future in one of my favorite episodes of Unsupervised Learning to date.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>51</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5021fbfb-ce2c-4bbe-a1f3-306c8b907494</guid>
      <title>Ep 50: Fireworks CEO Lin Qiao on Why There Won’t be a Single Model, Will Hyperscalers Win Inference &amp; AI Use-cases with PMF</title>
      <description><![CDATA[<p>Lin Qiao, the co-founder of Fireworks.ai, sits down for a deep dive into the future of AI. Lin ran the PyTorch team at Meta, which developed some of the most fundamental open-source AI software in use today. Her riveting perspective on the AI landscape makes this episode a must-listen.</p><p> </p><p>[0:00] Intro</p><p>[1:06] Fireworks: Revolutionizing AI Inference</p><p>[2:12] Challenges in AI Model Development</p><p>[4:05] The Future of AI: Compound Systems</p><p>[4:32] Designing Effective AI Tools</p><p>[10:26] Customization and Fine-Tuning in AI</p><p>[14:06] Human-in-the-Loop Automation</p><p>[16:38] Evaluating AI Models</p><p>[19:18] Building Complex AI Systems</p><p>[21:18] Function Calling and AI Orchestration</p><p>[26:52] AI Infrastructure and Hardware</p><p>[31:08] Small Expert Models</p><p>[31:27] Hyperscalers and Resource Management</p><p>[32:14] Inference Systems and Scalability</p><p>[33:08] Running Models Locally: Cost and Privacy</p><p>[35:20] Open Source Models and Meta's Role</p><p>[36:41] The Evolution of AI Training and Inference</p><p>[38:04] Fireworks' Vision and Market Strategy</p><p>[40:46] The Impact of Generative AI</p><p>[45:18] AI Research and Future Trends</p><p>[46:58] Building for a Rapidly Changing AI Landscape</p><p>[49:36] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Mon, 16 Dec 2024 19:13:54 +0000</pubDate>
      <author>jeffron@redpoint.com (Lin Qiao, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-50-fireworks-ceo-lin-qiao-on-why-there-wont-be-a-single-model-will-hyperscalers-win-inference-ai-use-cases-with-pmf-447u3Ami</link>
      <content:encoded><![CDATA[<p>Lin Qiao, the co-founder of Fireworks.ai, sits down for a deep dive into the future of AI. Lin ran the PyTorch team at Meta, which developed some of the most fundamental open-source AI software in use today. Her riveting perspective on the AI landscape makes this episode a must-listen.</p><p> </p><p>[0:00] Intro</p><p>[1:06] Fireworks: Revolutionizing AI Inference</p><p>[2:12] Challenges in AI Model Development</p><p>[4:05] The Future of AI: Compound Systems</p><p>[4:32] Designing Effective AI Tools</p><p>[10:26] Customization and Fine-Tuning in AI</p><p>[14:06] Human-in-the-Loop Automation</p><p>[16:38] Evaluating AI Models</p><p>[19:18] Building Complex AI Systems</p><p>[21:18] Function Calling and AI Orchestration</p><p>[26:52] AI Infrastructure and Hardware</p><p>[31:08] Small Expert Models</p><p>[31:27] Hyperscalers and Resource Management</p><p>[32:14] Inference Systems and Scalability</p><p>[33:08] Running Models Locally: Cost and Privacy</p><p>[35:20] Open Source Models and Meta's Role</p><p>[36:41] The Evolution of AI Training and Inference</p><p>[38:04] Fireworks' Vision and Market Strategy</p><p>[40:46] The Impact of Generative AI</p><p>[45:18] AI Research and Future Trends</p><p>[46:58] Building for a Rapidly Changing AI Landscape</p><p>[49:36] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="53593277" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/c540724c-9e2a-4110-b00c-8358b96803cb/audio/155a8dfb-bb16-42a7-99a4-84271c13fae7/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 50: Fireworks CEO Lin Qiao on Why There Won’t be a Single Model, Will Hyperscalers Win Inference &amp; AI Use-cases with PMF</itunes:title>
      <itunes:author>Lin Qiao, Jacob Effron</itunes:author>
      <itunes:duration>00:55:49</itunes:duration>
      <itunes:summary>Lin Qiao, the co-founder of Fireworks.ai, sits down for a deep dive into the future of AI. Lin ran the PyTorch team at Meta, which developed some of the most fundamental open-source AI software in use today. Her riveting perspective on the AI landscape makes this episode a must-listen.</itunes:summary>
      <itunes:subtitle>Lin Qiao, the co-founder of Fireworks.ai, sits down for a deep dive into the future of AI. Lin ran the PyTorch team at Meta, which developed some of the most fundamental open-source AI software in use today. Her riveting perspective on the AI landscape makes this episode a must-listen.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>50</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">480a1e84-3031-4acb-99b0-3b775028a6f4</guid>
      <title>Ep 49: OpenAI Researcher Noam Brown Unpacks the Full Release of o1 and the Path to AGI</title>
      <description><![CDATA[<p>Noam Brown, renowned AI researcher and key figure at OpenAI, joins us for a deep dive into the o1 release. Recorded just one day before o1’s full public debut, this episode explores the groundbreaking advancements and challenges behind this innovative test-time compute model.</p><p>We discuss the technical breakthroughs that set o1 apart, its unique capabilities compared to previous models, and how it disrupts traditional paradigms in AI development. Noam also shares insights into OpenAI’s approach to innovation, the economic realities of scaling AI, and what the future holds for the field.</p><p> </p><p>[0:00] Intro</p><p>[0:50] Scaling Model Capabilities and Economic Constraints</p><p>[2:48] Excitement Around Test Time Compute</p><p>[4:50] Challenges and Future Directions in AI Research</p><p>[8:11] Noam Brown's Journey and OpenAI's Research Focus</p><p>[16:08] The Role of Specialized Models and Tools</p><p>[21:18] Unexpected Use Cases and Future Milestones</p><p>[23:44] Proof of Concept: o1's Capabilities</p><p>[24:48] The Bitter Lesson: Insights from Richard Sutton</p><p>[25:59] Scaffolding Techniques and Their Future</p><p>[27:56] Challenges in Academia and AI Research</p><p>[30:30] Evaluating AI Models: Metrics and Trends</p><p>[34:47] The Role of AI in Social Sciences</p><p>[39:39] AI Agents and Emergent Communication</p><p>[40:17] Future of AI Robotics</p><p>[41:13] Advancing Scientific Research with AI</p><p>[43:30] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Fri, 6 Dec 2024 17:31:39 +0000</pubDate>
      <author>jeffron@redpoint.com (Noam Brown, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-49-openai-researcher-noam-brown-unpacks-the-full-release-of-o1-and-the-path-to-agi-y0DQRM6m</link>
      <content:encoded><![CDATA[<p>Noam Brown, renowned AI researcher and key figure at OpenAI, joins us for a deep dive into the o1 release. Recorded just one day before o1’s full public debut, this episode explores the groundbreaking advancements and challenges behind this innovative test-time compute model.</p><p>We discuss the technical breakthroughs that set o1 apart, its unique capabilities compared to previous models, and how it disrupts traditional paradigms in AI development. Noam also shares insights into OpenAI’s approach to innovation, the economic realities of scaling AI, and what the future holds for the field.</p><p> </p><p>[0:00] Intro</p><p>[0:50] Scaling Model Capabilities and Economic Constraints</p><p>[2:48] Excitement Around Test Time Compute</p><p>[4:50] Challenges and Future Directions in AI Research</p><p>[8:11] Noam Brown's Journey and OpenAI's Research Focus</p><p>[16:08] The Role of Specialized Models and Tools</p><p>[21:18] Unexpected Use Cases and Future Milestones</p><p>[23:44] Proof of Concept: o1's Capabilities</p><p>[24:48] The Bitter Lesson: Insights from Richard Sutton</p><p>[25:59] Scaffolding Techniques and Their Future</p><p>[27:56] Challenges in Academia and AI Research</p><p>[30:30] Evaluating AI Models: Metrics and Trends</p><p>[34:47] The Role of AI in Social Sciences</p><p>[39:39] AI Agents and Emergent Communication</p><p>[40:17] Future of AI Robotics</p><p>[41:13] Advancing Scientific Research with AI</p><p>[43:30] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="45318939" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/478b3b3b-068d-42e7-ada4-570886b27ab0/audio/9a4e07f4-a43f-4de0-a030-943bc4080b37/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 49: OpenAI Researcher Noam Brown Unpacks the Full Release of o1 and the Path to AGI</itunes:title>
      <itunes:author>Noam Brown, Jacob Effron</itunes:author>
      <itunes:duration>00:47:12</itunes:duration>
      <itunes:summary>Noam Brown, renowned AI researcher and key figure at OpenAI, joins us for a deep dive into the o1 release. Recorded just one day before o1’s full public debut, this episode explores the groundbreaking advancements and challenges behind this innovative test-time compute model.

We discuss the technical breakthroughs that set o1 apart, its unique capabilities compared to previous models, and how it disrupts traditional paradigms in AI development. Noam also shares insights into OpenAI’s approach to innovation, the economic realities of scaling AI, and what the future holds for the field.
</itunes:summary>
      <itunes:subtitle>Noam Brown, renowned AI researcher and key figure at OpenAI, joins us for a deep dive into the o1 release. Recorded just one day before o1’s full public debut, this episode explores the groundbreaking advancements and challenges behind this innovative test-time compute model.

We discuss the technical breakthroughs that set o1 apart, its unique capabilities compared to previous models, and how it disrupts traditional paradigms in AI development. Noam also shares insights into OpenAI’s approach to innovation, the economic realities of scaling AI, and what the future holds for the field.
</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>49</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ed7f4a3b-e64a-43f6-83e3-16041e1b8f7d</guid>
      <title>Ep 48: Co-Founder/CEO of LiveKit Russ d&apos;Sa on the Best ChatGPT Voice Use-Cases, New UX Paradigms in AI, and When Voice Makes Sense</title>
      <description><![CDATA[<p>For this episode of Unsupervised Learning, we spoke with Russ d'Sa, co-founder of LiveKit, a company at the forefront of voice AI technology. Russ thinks of LiveKit as a “nervous system,” powering the sensory interfaces humans use to interact with AI – including the Advanced Voice feature in ChatGPT as well as applications like Character.ai, Spotify and many more. </p><p>Russ talked about when voice makes sense as an interface, the exciting new UX paradigms on the horizon, the intersection of voice and robotics, and Anthropic's Computer Use API.</p><p> </p><p>[0:00] Intro</p><p>[0:24] Using ChatGPT Voice in Daily Life</p><p>[2:26] How LiveKit Works with ChatGPT Voice</p><p>[5:16] LiveKit as the Nervous System for AI</p><p>[8:11] Future of Work and AI Interfaces</p><p>[18:31] Emerging Use Cases for Voice AI</p><p>[22:29] AI Models in Customer Support</p><p>[23:10] Latency Improvements in AI</p><p>[24:37] Challenges in System Integration</p><p>[26:01] Multimodal AI and Browser Integration</p><p>[29:40] Telephony and AI in Healthcare</p><p>[32:11] Humanoid Robotics and On-Device AI</p><p>[33:50] Cloud vs. On-Device Inference</p><p>[36:50] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 19 Nov 2024 14:03:34 +0000</pubDate>
      <author>jeffron@redpoint.com (Russ d&apos;Sa, Patrick Chase, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-48-co-founder-ceo-of-livekit-russ-dsa-on-the-best-chatgpt-voice-use-cases-new-ux-paradigms-in-ai-and-when-voice-makes-sense-GG4IpuZo</link>
      <content:encoded><![CDATA[<p>For this episode of Unsupervised Learning, we spoke with Russ d'Sa, co-founder of LiveKit, a company at the forefront of voice AI technology. Russ thinks of LiveKit as a “nervous system,” powering the sensory interfaces humans use to interact with AI – including the Advanced Voice feature in ChatGPT as well as applications like Character.ai, Spotify and many more. </p><p>Russ talked about when voice makes sense as an interface, the exciting new UX paradigms on the horizon, the intersection of voice and robotics, and Anthropic's Computer Use API.</p><p> </p><p>[0:00] Intro</p><p>[0:24] Using ChatGPT Voice in Daily Life</p><p>[2:26] How LiveKit Works with ChatGPT Voice</p><p>[5:16] LiveKit as the Nervous System for AI</p><p>[8:11] Future of Work and AI Interfaces</p><p>[18:31] Emerging Use Cases for Voice AI</p><p>[22:29] AI Models in Customer Support</p><p>[23:10] Latency Improvements in AI</p><p>[24:37] Challenges in System Integration</p><p>[26:01] Multimodal AI and Browser Integration</p><p>[29:40] Telephony and AI in Healthcare</p><p>[32:11] Humanoid Robotics and On-Device AI</p><p>[33:50] Cloud vs. On-Device Inference</p><p>[36:50] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="43660059" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/21d73dcf-8031-42e7-991e-3ba106dad872/audio/25b67e22-c736-40e4-ad93-bf6eb248f390/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 48: Co-Founder/CEO of LiveKit Russ d&apos;Sa on the Best ChatGPT Voice Use-Cases, New UX Paradigms in AI, and When Voice Makes Sense</itunes:title>
      <itunes:author>Russ d&apos;Sa, Patrick Chase, Jacob Effron</itunes:author>
      <itunes:duration>00:45:28</itunes:duration>
      <itunes:summary>For this episode of Unsupervised Learning, we spoke with Russ d&apos;Sa, co-founder of LiveKit, a company at the forefront of voice AI technology. Russ thinks of LiveKit as a “nervous system,” powering the sensory interfaces humans use to interact with AI – including the Advanced Voice feature in ChatGPT as well as applications like Character.ai, Spotify and many more. Russ talked about when voice makes sense as an interface, the exciting new UX paradigms on the horizon, the intersection of voice and robotics, and Anthropic&apos;s Computer Use API.</itunes:summary>
      <itunes:subtitle>For this episode of Unsupervised Learning, we spoke with Russ d&apos;Sa, co-founder of LiveKit, a company at the forefront of voice AI technology. Russ thinks of LiveKit as a “nervous system,” powering the sensory interfaces humans use to interact with AI – including the Advanced Voice feature in ChatGPT as well as applications like Character.ai, Spotify and many more. Russ talked about when voice makes sense as an interface, the exciting new UX paradigms on the horizon, the intersection of voice and robotics, and Anthropic&apos;s Computer Use API.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>48</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">0643417c-2d4b-4d29-81ec-65fa90147ca0</guid>
      <title>Ep 47: Chief AI Scientist of Databricks Jonathan Frankle on Why New Model Architectures are Unlikely, When to Pre-Train or Fine Tune, and Hopes for Future AI Policy</title>
      <description><![CDATA[<p>Jonathan Frankle is the Chief AI Scientist at Databricks ($43B), which he joined through the acquisition of MosaicML in July 2023. Databricks has over 12,000 customers on the cutting edge of AI; Jonathan works to anticipate their needs and offer solutions even as the tech is rapidly evolving.</p><p> </p><p>[0:00] Intro</p><p>[0:52] Incentives and Team Motivation at Databricks</p><p>[2:40] The Evolution of AI Models: Transformers vs. LSTMs</p><p>[5:27] Mosaic and Databricks: A Strategic Merger</p><p>[7:31] Guidance on AI Model Training and Fine-Tuning</p><p>[11:11] Building Effective AI Evaluations</p><p>[16:02] Domain-Specific AI Models and Their Importance</p><p>[19:37] The Future of AI: Challenges and Opportunities</p><p>[25:07] Ethical Considerations and Human-AI Interaction</p><p>[29:13] Customer Collaboration and AI Implementation</p><p>[30:45] Navigating AI Tools and Techniques</p><p>[35:41] The Role of Open Source Models</p><p>[36:46] AI Infrastructure and Partnerships</p><p>[48:27] Academia's Role in AI Research</p><p>[52:09] Ethics and Policy in AI</p><p>[57:47] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 12 Nov 2024 14:00:10 +0000</pubDate>
      <author>jeffron@redpoint.com (Jonathan Frankle, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-47-chief-ai-officer-of-databricks-jonathan-frankle-on-why-new-model-architectures-are-unlikely-when-to-pre-train-or-fine-tune-and-hopes-for-future-ai-policy-41lvcsey</link>
      <content:encoded><![CDATA[<p>Jonathan Frankle is the Chief AI Scientist at Databricks ($43B), which he joined through the acquisition of MosaicML in July 2023. Databricks has over 12,000 customers on the cutting edge of AI; Jonathan works to anticipate their needs and offer solutions even as the tech is rapidly evolving.</p><p> </p><p>[0:00] Intro</p><p>[0:52] Incentives and Team Motivation at Databricks</p><p>[2:40] The Evolution of AI Models: Transformers vs. LSTMs</p><p>[5:27] Mosaic and Databricks: A Strategic Merger</p><p>[7:31] Guidance on AI Model Training and Fine-Tuning</p><p>[11:11] Building Effective AI Evaluations</p><p>[16:02] Domain-Specific AI Models and Their Importance</p><p>[19:37] The Future of AI: Challenges and Opportunities</p><p>[25:07] Ethical Considerations and Human-AI Interaction</p><p>[29:13] Customer Collaboration and AI Implementation</p><p>[30:45] Navigating AI Tools and Techniques</p><p>[35:41] The Role of Open Source Models</p><p>[36:46] AI Infrastructure and Partnerships</p><p>[48:27] Academia's Role in AI Research</p><p>[52:09] Ethics and Policy in AI</p><p>[57:47] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="61845463" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/65dc713b-3b50-4e86-8fd9-4dfefa2e3613/audio/09d2b605-111e-4280-88ac-7627aadacdfe/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 47: Chief AI Scientist of Databricks Jonathan Frankle on Why New Model Architectures are Unlikely, When to Pre-Train or Fine Tune, and Hopes for Future AI Policy</itunes:title>
      <itunes:author>Jonathan Frankle, Jacob Effron</itunes:author>
      <itunes:duration>01:04:25</itunes:duration>
      <itunes:summary>Jonathan Frankle is the Chief AI Scientist at Databricks ($43B), which he joined through the acquisition of MosaicML in July 2023. Databricks has over 12,000 customers on the cutting edge of AI; Jonathan works to anticipate their needs and offer solutions even as the tech is rapidly evolving.</itunes:summary>
      <itunes:subtitle>Jonathan Frankle is the Chief AI Scientist at Databricks ($43B), which he joined through the acquisition of MosaicML in July 2023. Databricks has over 12,000 customers on the cutting edge of AI; Jonathan works to anticipate their needs and offer solutions even as the tech is rapidly evolving.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>47</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d01a00fd-74eb-4043-b6b4-9b75906f67a2</guid>
      <title>Ep 46: CEO of DeepL Jarek Kutylowski on Specialized vs. General Models, Beating Google and a Future with Synchronous Translation</title>
      <description><![CDATA[<p>I sat down with DeepL co-founder Jarek Kutylowski. DeepL is a comprehensive Language AI platform that enables organizations to communicate effectively across languages, cultures, and markets. Jarek shared a treasure trove of insights on the past, present, and future of AI translation. Here are some standout moments:</p><p> </p><p>[0:00] Intro<br />[0:38] The Rise of AI and DeepL's Journey<br />[1:41] DeepL's Competitive Edge and Market Impact<br />[2:41] Innovations in AI Translation<br />[4:39] DeepL's Product and Use Cases<br />[7:35] Challenges and Strategies in AI Translation<br />[14:29] Human Translators and Data Labeling<br />[24:39] Building and Scaling AI Infrastructure<br />[32:52] Evaluating AI Models: Objective vs Subjective<br />[34:05] Translation Quality Metrics<br />[35:35] The Debate on AI Moats and Specialized Models<br />[40:16] The Impact of Real-Time Translation on Business<br />[45:05] Challenges in Developing Synchronous Speaking Models<br />[46:49] Adjacent AI Technologies and Their Potential<br />[48:33] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 29 Oct 2024 13:00:35 +0000</pubDate>
      <author>jeffron@redpoint.com (Jarek Kutylowski, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-46-ceo-of-deepl-jarek-kutylowski-on-specialized-vs-general-models-beating-google-and-a-future-with-synchronous-translation-Y4mx33X7</link>
      <content:encoded><![CDATA[<p>I sat down with DeepL co-founder Jarek Kutylowski. DeepL is a comprehensive Language AI platform that enables organizations to communicate effectively across languages, cultures, and markets. Jarek shared a treasure trove of insights on the past, present, and future of AI translation. Here are some standout moments:</p><p> </p><p>[0:00] Intro<br />[0:38] The Rise of AI and DeepL's Journey<br />[1:41] DeepL's Competitive Edge and Market Impact<br />[2:41] Innovations in AI Translation<br />[4:39] DeepL's Product and Use Cases<br />[7:35] Challenges and Strategies in AI Translation<br />[14:29] Human Translators and Data Labeling<br />[24:39] Building and Scaling AI Infrastructure<br />[32:52] Evaluating AI Models: Objective vs Subjective<br />[34:05] Translation Quality Metrics<br />[35:35] The Debate on AI Moats and Specialized Models<br />[40:16] The Impact of Real-Time Translation on Business<br />[45:05] Challenges in Developing Synchronous Speaking Models<br />[46:49] Adjacent AI Technologies and Their Potential<br />[48:33] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="53929734" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/2fc92534-6223-439f-b487-dc40522adf05/audio/dbdeb51d-7eb4-41b3-b61c-8d7825990d9f/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 46: CEO of DeepL Jarek Kutylowski on Specialized vs. General Models, Beating Google and a Future with Synchronous Translation</itunes:title>
      <itunes:author>Jarek Kutylowski, Jacob Effron</itunes:author>
      <itunes:duration>00:56:10</itunes:duration>
      <itunes:summary>I sat down with DeepL cofounder Jarek Kutylowski. DeepL is a comprehensive Language AI platform that enables organizations to communicate effectively across languages, cultures, and markets. Jarek shared a treasure trove of insights on the past, present, and future of AI translation.</itunes:summary>
      <itunes:subtitle>I sat down with DeepL cofounder Jarek Kutylowski. DeepL is a comprehensive Language AI platform that enables organizations to communicate effectively across languages, cultures, and markets. Jarek shared a treasure trove of insights on the past, present, and future of AI translation.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>46</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5f30df9d-6910-4def-9187-d315cce6820e</guid>
      <title>Ep 45: Founder &amp; CEO of HeyGen Josh Xu on TikTok’s GenAI Dilemma, Trust and Safety and the Path to Interactive Avatars</title>
      <description><![CDATA[<p>Joshua Xu is the co-founder and CEO of HeyGen, the fast-growing AI video creation and translation platform. You can upload a video, or create a new one from a script using an AI avatar as your star, and HeyGen will translate it into 175 languages. HeyGen now serves over 40,000 customers and is generating $35+ million in revenue.</p><p> </p><p>[0:00] Intro  </p><p>[0:37] HeyGen's Viral Moments  </p><p>[1:23] Creating Magic with AI  </p><p>[3:30] The Future of AI in Video Production  </p><p>[9:29] HeyGen's Use Cases and Customer Base  </p><p>[13:15] AI Avatars  </p><p>[25:46] The Future of Content  </p><p>[26:43] Competing with Industry Giants  </p><p>[27:16] Innovating for New Markets  </p><p>[31:24] Enterprise Push: Lessons and Surprises  </p><p>[33:07] Trust and Safety in AI  </p><p>[37:03] Fundraising and Financial Strategies for AI Startups  </p><p>[41:22] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 15 Oct 2024 13:00:09 +0000</pubDate>
      <author>jeffron@redpoint.com (Josh Xu, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-45-founder-ceo-of-heygen-josh-xu-on-tiktoks-genai-dilemma-trust-and-safety-and-the-path-to-interactive-avatars-g0VzV40c</link>
      <content:encoded><![CDATA[<p>Joshua Xu is co-founder and CEO of HeyGen -- the fast-growing AI video creation and translation platform. You can upload a video, or create a new one from a script using an AI avatar as your star, and HeyGen will translate it into 175 languages. HeyGen now serves over 40,000 customers and is generating $35+ million in revenue.</p><p> </p><p>[0:00] Intro  </p><p>[0:37] HeyGen's Viral Moments  </p><p>[1:23] Creating Magic with AI  </p><p>[3:30] The Future of AI in Video Production  </p><p>[9:29] HeyGen's Use Cases and Customer Base  </p><p>[13:15] AI Avatars  </p><p>[25:46] The Future of Content  </p><p>[26:43] Competing with Industry Giants  </p><p>[27:16] Innovating for New Markets  </p><p>[31:24] Enterprise Push: Lessons and Surprises  </p><p>[33:07] Trust and Safety in AI  </p><p>[37:03] Fundraising and Financial Strategies for AI Startups  </p><p>[41:22] Quickfire</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="49702913" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/45091f61-c607-495f-a621-f4a4d845f109/audio/7f65eeaa-3069-4b61-97d8-e8e7da99a3bc/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 45: Founder &amp; CEO of HeyGen Josh Xu on TikTok’s GenAI Dilemma, Trust and Safety and the Path to Interactive Avatars</itunes:title>
      <itunes:author>Josh Xu, Jacob Effron</itunes:author>
      <itunes:duration>00:51:46</itunes:duration>
      <itunes:summary>Joshua Xu is co-founder and CEO of HeyGen -- the fast-growing AI video creation and translation platform. You can upload a video, or create a new one from a script using an AI avatar as your star, and HeyGen will translate it into 175 languages. HeyGen now serves over 40,000 customers and is generating $35+ million in revenue.</itunes:summary>
      <itunes:subtitle>Joshua Xu is co-founder and CEO of HeyGen -- the fast-growing AI video creation and translation platform. You can upload a video, or create a new one from a script using an AI avatar as your star, and HeyGen will translate it into 175 languages. HeyGen now serves over 40,000 customers and is generating $35+ million in revenue.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>45</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">859ddc8f-dafe-4f25-b069-19edf0ce5bc1</guid>
      <title>Ep 44: Co-Founder of Together.AI Percy Liang on What’s Next in Research, Reaction to o1 and How AI will Change Simulation</title>
      <description><![CDATA[<p>Percy Liang is a Stanford professor and co-founder of Together AI, driving some of the most critical advances in AI research. </p><p>Percy is also a trained classical pianist, which clearly influences the way he thinks about technology. We explored the evolution of AI from simple token prediction to autonomous agents capable of long-term problem-solving, the problem of interpretability, and the future of AI safety in complex, real-world systems.</p><p> </p><p>[0:00] Intro</p><p>[0:46] Discussing OpenAI's O1 Model  </p><p>[2:21] The Evolution of AI Agents  </p><p>[3:27] Challenges and Benchmarks in AI  </p><p>[4:38] Compatibility and Integration Issues  </p><p>[6:17] The Future of AI Scaffolding  </p><p>[10:05] Academia's Role in AI Research  </p><p>[15:17] AI Safety and Holistic Approaches  </p><p>[18:32] Regulation and Transparency in AI  </p><p>[21:42] Generative Agents and Social Simulations  </p><p>[29:14] The State of AI Evaluations  </p><p>[32:07] Exploring Evaluation in Language Models  </p><p>[35:13] The Challenge of Interpretability  </p><p>[39:31] Innovations in Model Architectures  </p><p>[43:18] The Future of Inference and Customization  </p><p>[46:46] Milestones in AI Research and Reasoning  </p><p>[49:43] Robotics and AI: The Road Ahead  </p><p>[52:24] AI in Music: A Harmonious Future  </p><p>[55:52] AI's Role in Education and Beyond  </p><p>[56:30] Quickfire</p><p>[59:16] Jacob and Pat Debrief</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Thu, 3 Oct 2024 13:01:17 +0000</pubDate>
      <author>jeffron@redpoint.com (Percy Liang, Patrick Chase, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-44-co-founder-of-togetherai-percy-liang-on-whats-next-in-research-reaction-to-o1-and-how-ai-will-change-simulation-HlR7mC28</link>
      <content:encoded><![CDATA[<p>Percy Liang is a Stanford professor and co-founder of Together AI, driving some of the most critical advances in AI research. </p><p>Percy is also a trained classical pianist, which clearly influences the way he thinks about technology. We explored the evolution of AI from simple token prediction to autonomous agents capable of long-term problem-solving, the problem of interpretability, and the future of AI safety in complex, real-world systems.</p><p> </p><p>[0:00] Intro</p><p>[0:46] Discussing OpenAI's O1 Model  </p><p>[2:21] The Evolution of AI Agents  </p><p>[3:27] Challenges and Benchmarks in AI  </p><p>[4:38] Compatibility and Integration Issues  </p><p>[6:17] The Future of AI Scaffolding  </p><p>[10:05] Academia's Role in AI Research  </p><p>[15:17] AI Safety and Holistic Approaches  </p><p>[18:32] Regulation and Transparency in AI  </p><p>[21:42] Generative Agents and Social Simulations  </p><p>[29:14] The State of AI Evaluations  </p><p>[32:07] Exploring Evaluation in Language Models  </p><p>[35:13] The Challenge of Interpretability  </p><p>[39:31] Innovations in Model Architectures  </p><p>[43:18] The Future of Inference and Customization  </p><p>[46:46] Milestones in AI Research and Reasoning  </p><p>[49:43] Robotics and AI: The Road Ahead  </p><p>[52:24] AI in Music: A Harmonious Future  </p><p>[55:52] AI's Role in Education and Beyond  </p><p>[56:30] Quickfire</p><p>[59:16] Jacob and Pat Debrief</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="63690617" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/c83f73f4-f443-4679-9e88-03b94f0f19aa/audio/4a4af54c-fb22-481f-8923-2e1453b08c8c/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 44: Co-Founder of Together.AI Percy Liang on What’s Next in Research, Reaction to o1 and How AI will Change Simulation</itunes:title>
      <itunes:author>Percy Liang, Patrick Chase, Jacob Effron</itunes:author>
      <itunes:duration>01:06:20</itunes:duration>
      <itunes:summary>Percy Liang is a Stanford professor and co-founder of Together AI, driving some of the most critical advances in AI research. Percy is also a trained classical pianist, which clearly influences the way he thinks about technology. We explored the evolution of AI from simple token prediction to autonomous agents capable of long-term problem-solving, the problem of interpretability, and the future of AI safety in complex, real-world systems.
</itunes:summary>
      <itunes:subtitle>Percy Liang is a Stanford professor and co-founder of Together AI, driving some of the most critical advances in AI research. Percy is also a trained classical pianist, which clearly influences the way he thinks about technology. We explored the evolution of AI from simple token prediction to autonomous agents capable of long-term problem-solving, the problem of interpretability, and the future of AI safety in complex, real-world systems.
</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>44</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ea57c2b5-0352-4ebb-acf4-e3803c963fa7</guid>
      <title>Ep 43: CEO/Co-Founder of Contextual AI Douwe Kiela on His Reaction to o1, What’s Next in Reasoning and Innovations in Post-Training</title>
      <description><![CDATA[<p>Douwe’s contributions to AI are truly a part of its bedrock foundations. He wrote the first paper on retrieval-augmented generation (RAG) and has raised over $100 million to help enterprises build contextual language models that fit their use cases. Before Contextual, he was the head of research at Hugging Face, worked on Facebook’s AI research team (FAIR, the team behind Llama) and remains a professor at Stanford. Douwe was incredibly open about his take on AI’s recent history and where he thinks it’s going.</p><p> </p><p>[0:00] Intro<br />[0:51] Exploring the Impact of Systems Thinking in AI<br />[1:49] Latency Constraints and AI Deployments<br />[2:05] Benchmarks and Real-World Applications<br />[3:27] Transition to Contextual and Company Vision<br />[5:12] Challenges and Innovations in Enterprise AI<br />[8:51] The Evolution and Future of RAG<br />[15:26] Alignment and Reinforcement Learning in AI<br />[23:52] Collaborations and the Role of Academia<br />[29:15] The Evolving Role of AI Developers<br />[30:19] Changing Perspectives in AI Research<br />[30:44] Synthetic Data and Agentic Workflows<br />[33:47] The Future of Multimodal Data<br />[35:31] Reasoning Capabilities in AI Models<br />[42:56] The Rise of Multi-Agent Systems<br />[45:24] Hugging Face and the AI Ecosystem<br />[46:59] Building Contextual and AI Startups<br />[49:51] The Future of AI and Personalized Entertainment<br />[50:41] Quickfire Round: Overhyped and Underhyped AI<br />[56:25] Final Thoughts and Parting Words</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 18 Sep 2024 13:02:10 +0000</pubDate>
      <author>jeffron@redpoint.com (Douwe Kiela, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-43-ceo-co-founder-of-contextual-ai-douwe-kiela-reaction-to-o1-whats-next-in-reasoning-and-innovations-in-post-training-_2QjQOU5</link>
      <content:encoded><![CDATA[<p>Douwe’s contributions to AI are truly a part of its bedrock foundations. He wrote the first paper on retrieval-augmented generation (RAG) and has raised over $100 million to help enterprises build contextual language models that fit their use cases. Before Contextual, he was the head of research at Hugging Face, worked on Facebook’s AI research team (FAIR, the team behind Llama) and remains a professor at Stanford. Douwe was incredibly open about his take on AI’s recent history and where he thinks it’s going.</p><p> </p><p>[0:00] Intro<br />[0:51] Exploring the Impact of Systems Thinking in AI<br />[1:49] Latency Constraints and AI Deployments<br />[2:05] Benchmarks and Real-World Applications<br />[3:27] Transition to Contextual and Company Vision<br />[5:12] Challenges and Innovations in Enterprise AI<br />[8:51] The Evolution and Future of RAG<br />[15:26] Alignment and Reinforcement Learning in AI<br />[23:52] Collaborations and the Role of Academia<br />[29:15] The Evolving Role of AI Developers<br />[30:19] Changing Perspectives in AI Research<br />[30:44] Synthetic Data and Agentic Workflows<br />[33:47] The Future of Multimodal Data<br />[35:31] Reasoning Capabilities in AI Models<br />[42:56] The Rise of Multi-Agent Systems<br />[45:24] Hugging Face and the AI Ecosystem<br />[46:59] Building Contextual and AI Startups<br />[49:51] The Future of AI and Personalized Entertainment<br />[50:41] Quickfire Round: Overhyped and Underhyped AI<br />[56:25] Final Thoughts and Parting Words</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="55101274" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/1571bcc7-ce67-4be4-b197-3ecccbf829f1/audio/618b4dfc-b18c-411f-80d8-e29a13cd045a/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 43: CEO/Co-Founder of Contextual AI Douwe Kiela on His Reaction to o1, What’s Next in Reasoning and Innovations in Post-Training</itunes:title>
      <itunes:author>Douwe Kiela, Jacob Effron</itunes:author>
      <itunes:duration>00:57:23</itunes:duration>
      <itunes:summary>Douwe’s contributions to AI are truly a part of its bedrock foundations. He wrote the first paper on retrieval-augmented generation (RAG) and has raised over $100 million to help enterprises build contextual language models that fit their use cases. Before Contextual, he was the head of research at Hugging Face, worked on Facebook’s AI research team (FAIR, the team behind Llama) and remains a professor at Stanford. Douwe was incredibly open about his take on AI’s recent history and where he thinks it’s going. </itunes:summary>
      <itunes:subtitle>Douwe’s contributions to AI are truly a part of its bedrock foundations. He wrote the first paper on retrieval-augmented generation (RAG) and has raised over $100 million to help enterprises build contextual language models that fit their use cases. Before Contextual, he was the head of research at Hugging Face, worked on Facebook’s AI research team (FAIR, the team behind Llama) and remains a professor at Stanford. Douwe was incredibly open about his take on AI’s recent history and where he thinks it’s going. </itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>43</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">40542c2e-9906-4f25-be25-b2553b6ac713</guid>
      <title>Ep 42: CEO of Grammarly Rahul Roy-Chowdhury on the Future of Communication, Impact of LLMs and How Grammarly Does Eval</title>
      <description><![CDATA[<p>If you don’t know Grammarly, it’s a personalized AI assistant for writing that has over 30 million daily active users. Grammarly has been building AI productivity tooling long before the most recent GenAI wave and has raised over $400M, with a current valuation of $13B. Rahul believes AI will enable everyone to focus on more meaningful, creative interactions by automating the "drudgery" of daily tasks. It was interesting to hear him talk about how he thinks about competition and his longer-term perspective on how AI will be adopted by the enterprise.</p><p> </p><p>[0:00] Intro<br />[1:03] The Future of AI in Human Communication<br />[3:47] Grammarly's Evolution and Product Overview<br />[8:21] Limitations of LLMs<br />[15:31] The Impact of ChatGPT and Future Prospects<br />[23:52] Fine-Tuning AI for User Needs<br />[30:16] Competitive Landscape and Differentiators<br />[39:14] AI in Education<br />[46:25] Over-hyped/Under-hyped<br />[49:57] Most Exciting AI Startups</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Fri, 6 Sep 2024 13:00:18 +0000</pubDate>
      <author>jeffron@redpoint.com (Rahul Roy-Chowdhury, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-42-ceo-of-grammarly-rahul-roy-chowdhury-on-the-future-of-communication-impact-of-llms-and-how-grammarly-does-eval-JFulIcLY</link>
      <content:encoded><![CDATA[<p>If you don’t know Grammarly, it’s a personalized AI assistant for writing that has over 30 million daily active users. Grammarly has been building AI productivity tooling long before the most recent GenAI wave and has raised over $400M, with a current valuation of $13B. Rahul believes AI will enable everyone to focus on more meaningful, creative interactions by automating the "drudgery" of daily tasks. It was interesting to hear him talk about how he thinks about competition and his longer-term perspective on how AI will be adopted by the enterprise.</p><p> </p><p>[0:00] Intro<br />[1:03] The Future of AI in Human Communication<br />[3:47] Grammarly's Evolution and Product Overview<br />[8:21] Limitations of LLMs<br />[15:31] The Impact of ChatGPT and Future Prospects<br />[23:52] Fine-Tuning AI for User Needs<br />[30:16] Competitive Landscape and Differentiators<br />[39:14] AI in Education<br />[46:25] Over-hyped/Under-hyped<br />[49:57] Most Exciting AI Startups</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="49310869" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/3fcbaa75-1d18-485a-adab-380edd884fc3/audio/f9a3fe1a-d546-490e-aba3-f6cf2d7b1b9b/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 42: CEO of Grammarly Rahul Roy-Chowdhury on the Future of Communication, Impact of LLMs and How Grammarly Does Eval</itunes:title>
      <itunes:author>Rahul Roy-Chowdhury, Jacob Effron</itunes:author>
      <itunes:duration>00:51:21</itunes:duration>
      <itunes:summary>If you don’t know Grammarly, it’s a personalized AI assistant for writing that has over 30 million daily active users. Grammarly has been building AI productivity tooling long before the most recent GenAI wave and has raised over $400M, with a current valuation of $13B. 

Rahul believes AI will enable everyone to focus on more meaningful, creative interactions by automating the &quot;drudgery&quot; of daily tasks. It was interesting to hear him talk about how he thinks about competition and his longer-term perspective on how AI will be adopted by the enterprise.</itunes:summary>
      <itunes:subtitle>If you don’t know Grammarly, it’s a personalized AI assistant for writing that has over 30 million daily active users. Grammarly has been building AI productivity tooling long before the most recent GenAI wave and has raised over $400M, with a current valuation of $13B. 

Rahul believes AI will enable everyone to focus on more meaningful, creative interactions by automating the &quot;drudgery&quot; of daily tasks. It was interesting to hear him talk about how he thinks about competition and his longer-term perspective on how AI will be adopted by the enterprise.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>42</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6d580024-8c59-486d-be47-7a8bb8335b7a</guid>
      <title>Bonus Episode: Trunk Tools CEO Sarah Buchner is Building Trunk Tools to Simplify the $13T Construction Industry</title>
      <description><![CDATA[<p>Sarah Buchner is the Founder and CEO of Trunk Tools (https://trunktools.com/). She is a one-of-a-kind founder, having spent her young life as a carpenter in Austria and then working her way up the ranks of the construction industry. She’s also earned several graduate degrees, including an MS in Civil Engineering, a PhD in Data Science and an MBA from Stanford.</p><p>Trunk Tools is an AI tool for the $13T construction industry aimed at enhancing project management and addressing the skilled labor shortage.</p><p>We wanted to have Sarah on to share her experience in building a vertical AI tool - specifically: where AI will have the biggest impact on construction and a behind-the-scenes look at how they’ve built tools like TrunkText and TrunkScheduler to massively reduce the amount of rework in construction.</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 4 Sep 2024 15:00:30 +0000</pubDate>
      <author>jeffron@redpoint.com (Sarah Buchner, Meera Clark)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/bonus-episode-trunk-tools-ceo-sarah-buchner-is-building-trunk-tools-to-simplify-the-13t-construction-industry-NPGCwEwY</link>
      <content:encoded><![CDATA[<p>Sarah Buchner is the Founder and CEO of Trunk Tools (https://trunktools.com/). She is a one-of-a-kind founder, having spent her young life as a carpenter in Austria and then working her way up the ranks of the construction industry. She’s also earned several graduate degrees, including an MS in Civil Engineering, a PhD in Data Science and an MBA from Stanford.</p><p>Trunk Tools is an AI tool for the $13T construction industry aimed at enhancing project management and addressing the skilled labor shortage.</p><p>We wanted to have Sarah on to share her experience in building a vertical AI tool - specifically: where AI will have the biggest impact on construction and a behind-the-scenes look at how they’ve built tools like TrunkText and TrunkScheduler to massively reduce the amount of rework in construction.</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="20347134" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/180ddbd8-5656-484c-a047-47de9d21f5a3/audio/53f89967-907a-4dc0-b52e-8b921322e073/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Bonus Episode: Trunk Tools CEO Sarah Buchner is Building Trunk Tools to Simplify the $13T Construction Industry</itunes:title>
      <itunes:author>Sarah Buchner, Meera Clark</itunes:author>
      <itunes:duration>00:21:11</itunes:duration>
      <itunes:summary>Sarah Buchner is the Founder and CEO of Trunk Tools (https://trunktools.com/). She is a one-of-a-kind founder, having spent her young life as a carpenter in Austria and then working her way up the ranks of the construction industry. She’s also earned several graduate degrees, including an MS in Civil Engineering, a PhD in Data Science and an MBA from Stanford. 

Trunk Tools is an AI tool for the $13T construction industry aimed at enhancing project management and addressing the skilled labor shortage. 

We wanted to have Sarah on to share her experience in building a vertical AI tool - specifically: where AI will have the biggest impact on construction and a behind-the-scenes look at how they’ve built tools like TrunkText and TrunkScheduler to massively reduce the amount of rework in construction.</itunes:summary>
      <itunes:subtitle>Sarah Buchner is the Founder and CEO of Trunk Tools (https://trunktools.com/). She is a one-of-a-kind founder, having spent her young life as a carpenter in Austria and then working her way up the ranks of the construction industry. She’s also earned several graduate degrees, including an MS in Civil Engineering, a PhD in Data Science and an MBA from Stanford. 

Trunk Tools is an AI tool for the $13T construction industry aimed at enhancing project management and addressing the skilled labor shortage. 

We wanted to have Sarah on to share her experience in building a vertical AI tool - specifically: where AI will have the biggest impact on construction and a behind-the-scenes look at how they’ve built tools like TrunkText and TrunkScheduler to massively reduce the amount of rework in construction.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>bonus</itunes:episodeType>
    </item>
    <item>
      <guid isPermaLink="false">9fcafc62-1688-4b67-a67c-3b104bfdda5c</guid>
      <title>Ep 41: Head of AI at Snowflake Baris Gultekin on Why They Built Their Own LLM, Governance as a Moat, and the Most Common Enterprise Use-Cases</title>
      <description><![CDATA[<p>Snowflake sits in a unique position in the AI landscape: They enable fast, secure, and scalable proprietary data access for thousands of customers, many of whom are building AI tools. They also maintain their own suite of AI products, increasing the utility of their platform and empowering customers who may not have the resources to build their own.</p><p>That’s why it was so fascinating to speak with Baris Gultekin, Snowflake’s Head of AI, on our latest Unsupervised Learning. Baris has helped Snowflake launch several key products, including Cortex, AI Data Cloud, and even Snowflake’s own LLM, called Arctic. Baris was a founder himself — he joined Snowflake through an acquisition of his blockchain API startup, nxyz. He has a unique window into the future of AI via his role building key infrastructure. It was really fun to dive deep into this side of the AI ecosystem with Baris.</p><p> </p><p>[0:00] Intro</p><p>[0:33] Snowflake's AI Product Portfolio</p><p>[0:52] Building Arctic LLM: Challenges and Innovations</p><p>[2:57] Use Cases and Applications of Arctic LLM</p><p>[3:10] Cortex: Snowflake's Managed Service for LLMs</p><p>[4:26] Tackling BI Challenges with Cortex Analyst</p><p>[8:46] Data Extraction and Analysis with AI</p><p>[10:15] Snowflake's AI Strategy and Future Directions</p><p>[13:12] Governance and Security in AI Deployments</p><p>[22:25] Supporting and Integrating New AI Models</p><p>[24:02] Discussing the 405B AI Model</p><p>[25:02] Enterprise Use Cases and Cost Considerations</p><p>[26:09] Future of Arctic LLM and Security Concerns</p><p>[26:59] Open Source Models and Guardrails</p><p>[30:21] Internal Use of LLMs at Snowflake</p><p>[31:32] Comparing Snowflake and Databricks AI Strategies</p><p>[33:24] Opportunities for AI Startups</p><p>[34:14] Exciting AI Developments and Future Prospects</p><p>[40:34] Over-hyped/Under-hyped</p><p>[45:58] Closing Thoughts and Resources</p><p><br /> </p><p>With your co-hosts: 
</p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 28 Aug 2024 13:00:00 +0000</pubDate>
      <author>jeffron@redpoint.com (Baris Gultekin, Patrick Chase, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-41-head-of-ai-at-snowflake-baris-gultekin-on-why-they-built-their-own-llm-governance-as-a-moat-and-the-most-common-enterprise-use-cases-aLcYHz31</link>
      <content:encoded><![CDATA[<p>Snowflake sits in a unique position in the AI landscape: They enable fast, secure, and scalable proprietary data access for thousands of customers, many of whom are building AI tools. They also maintain their own suite of AI products, increasing the utility of their platform and empowering customers who may not have the resources to build their own.</p><p>That’s why it was so fascinating to speak with Baris Gultekin, Snowflake’s Head of AI, on our latest Unsupervised Learning. Baris has helped Snowflake launch several key products, including Cortex, AI Data Cloud, and even Snowflake’s own LLM, called Arctic. Baris was a founder himself — he joined Snowflake through an acquisition of his blockchain API startup, nxyz. He has a unique window into the future of AI via his role building key infrastructure. It was really fun to dive deep into this side of the AI ecosystem with Baris.</p><p> </p><p>[0:00] Intro</p><p>[0:33] Snowflake's AI Product Portfolio</p><p>[0:52] Building Arctic LLM: Challenges and Innovations</p><p>[2:57] Use Cases and Applications of Arctic LLM</p><p>[3:10] Cortex: Snowflake's Managed Service for LLMs</p><p>[4:26] Tackling BI Challenges with Cortex Analyst</p><p>[8:46] Data Extraction and Analysis with AI</p><p>[10:15] Snowflake's AI Strategy and Future Directions</p><p>[13:12] Governance and Security in AI Deployments</p><p>[22:25] Supporting and Integrating New AI Models</p><p>[24:02] Discussing the 405B AI Model</p><p>[25:02] Enterprise Use Cases and Cost Considerations</p><p>[26:09] Future of Arctic LLM and Security Concerns</p><p>[26:59] Open Source Models and Guardrails</p><p>[30:21] Internal Use of LLMs at Snowflake</p><p>[31:32] Comparing Snowflake and Databricks AI Strategies</p><p>[33:24] Opportunities for AI Startups</p><p>[34:14] Exciting AI Developments and Future Prospects</p><p>[40:34] Over-hyped/Under-hyped</p><p>[45:58] Closing Thoughts and Resources</p><p><br /> </p><p>With your co-hosts: 
</p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="44982484" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/5ee09b86-0179-4fac-acee-5e411914b571/audio/49bbb1d2-e525-42f6-8090-1041f09d7ec1/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 41: Head of AI at Snowflake Baris Gultekin on Why They Built Their Own LLM, Governance as a Moat, and the Most Common Enterprise Use-Cases</itunes:title>
      <itunes:author>Baris Gultekin, Patrick Chase, Jacob Effron</itunes:author>
      <itunes:duration>00:46:51</itunes:duration>
      <itunes:summary>Snowflake sits in a unique position in the AI landscape: They enable fast, secure, and scalable proprietary data access for thousands of customers, many of whom are building AI tools. They also maintain their own suite of AI products, increasing the utility of their platform and empowering customers who may not have the resources to build their own.

That’s why it was so fascinating to speak with Baris Gultekin, Snowflake’s Head of AI, on our latest Unsupervised Learning. Baris has helped Snowflake launch several key products, including Cortex, AI Data Cloud, and even Snowflake’s own LLM, called Arctic. Baris was a founder himself — he joined Snowflake through an acquisition of his blockchain API startup, nxyz. He has a unique window into the future of AI via his role building key infrastructure. It was really fun to dive deep into this side of the AI ecosystem with Baris.</itunes:summary>
      <itunes:subtitle>Snowflake sits in a unique position in the AI landscape: They enable fast, secure, and scalable proprietary data access for thousands of customers, many of whom are building AI tools. They also maintain their own suite of AI products, increasing the utility of their platform and empowering customers who may not have the resources to build their own.

That’s why it was so fascinating to speak with Baris Gultekin, Snowflake’s Head of AI, on our latest Unsupervised Learning. Baris has helped Snowflake launch several key products, including Cortex, AI Data Cloud, and even Snowflake’s own LLM, called Arctic. Baris was a founder himself — he joined Snowflake through an acquisition of his blockchain API startup, nxyz. He has a unique window into the future of AI via his role building key infrastructure. It was really fun to dive deep into this side of the AI ecosystem with Baris.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>41</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3c2adeef-df7e-459b-bde7-f47e44ab45de</guid>
      <title>Ep 40: CEO of Speak.com Connor Zwick on How AI Will Change the Way we Learn</title>
      <description><![CDATA[<p>We’re excited to bring you Connor Zwick, CEO and cofounder of Speak, on the podcast this week! Speak helps people learning a language have conversations with an AI speaking partner, which is critical to gaining fluency. It’s backed by OpenAI and most recently raised at a $500M valuation. Since launching in its inaugural market of South Korea in 2019, Speak has grown to over 10 million users and now has customers in more than 40 countries. </p><p>We learned so much about how language works and how Connor has built this startup. Some of our favorite bits:</p><p> </p><p>[0:00] Intro </p><p>[0:38] Connor's Entrepreneurial Journey </p><p>[3:40] Diving into AI and Language Learning </p><p>[6:07] The Evolution of Speak </p><p>[9:30] Building Specialized Models and Overcoming Challenges </p><p>[18:17] User Experience and Interface Design </p><p>[24:18] Future of AI in Language Learning </p><p>[35:38] Comparing Duolingo and Speak </p><p>[38:00] Challenges in Translation and Human Connection </p><p>[41:18] Specialized AI Models and Their Impact </p><p>[47:41] Opportunities in Professional and Personal Learning </p><p>[53:38] The Evolution of Education with AI </p><p>[59:31] Final Thoughts and Reflections</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 13 Aug 2024 14:07:20 +0000</pubDate>
      <author>jeffron@redpoint.com (Connor Zwick, Patrick Chase, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-40-ceo-of-speakcom-connor-zwick-the-ai-english-tutor-taking-the-world-by-storm-o2s3TNtl</link>
      <content:encoded><![CDATA[<p>We’re excited to bring you Connor Zwick, CEO and cofounder of Speak, on the podcast this week! Speak helps people learning a language have conversations with an AI speaking partner, which is critical to gaining fluency. It’s backed by OpenAI and most recently raised at a $500M valuation. Since launching in its inaugural market of South Korea in 2019, Speak has grown to over 10 million users and now has customers in more than 40 countries. </p><p>We learned so much about how language works and how Connor has built this startup. Some of our favorite bits:</p><p> </p><p>[0:00] Intro </p><p>[0:38] Connor's Entrepreneurial Journey </p><p>[3:40] Diving into AI and Language Learning </p><p>[6:07] The Evolution of Speak </p><p>[9:30] Building Specialized Models and Overcoming Challenges </p><p>[18:17] User Experience and Interface Design </p><p>[24:18] Future of AI in Language Learning </p><p>[35:38] Comparing Duolingo and Speak </p><p>[38:00] Challenges in Translation and Human Connection </p><p>[41:18] Specialized AI Models and Their Impact </p><p>[47:41] Opportunities in Professional and Personal Learning </p><p>[53:38] The Evolution of Education with AI </p><p>[59:31] Final Thoughts and Reflections</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="66435075" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/dae16474-c8e5-46ea-804e-93e85b7472d5/audio/191a5eef-8ec8-4af1-9120-9fdc99418b9c/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 40: CEO of Speak.com Connor Zwick on How AI Will Change the Way we Learn</itunes:title>
      <itunes:author>Connor Zwick, Patrick Chase, Jacob Effron</itunes:author>
      <itunes:duration>01:09:12</itunes:duration>
      <itunes:summary>We’re excited to bring you Connor Zwick, CEO and cofounder of Speak, on the podcast this week! Speak helps people learning a language have conversations with an AI speaking partner, which is critical to gaining fluency. It’s backed by OpenAI and most recently raised at a $500M valuation. Since launching in its inaugural market of South Korea in 2019, Speak has grown to over 10 million users and now has customers in more than 40 countries. 
</itunes:summary>
      <itunes:subtitle>We’re excited to bring you Connor Zwick, CEO and cofounder of Speak, on the podcast this week! Speak helps people learning a language have conversations with an AI speaking partner, which is critical to gaining fluency. It’s backed by OpenAI and most recently raised at a $500M valuation. Since launching in its inaugural market of South Korea in 2019, Speak has grown to over 10 million users and now has customers in more than 40 countries. 
</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>40</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2f2ae5e4-b3af-4601-be92-9fbe9b546c6c</guid>
      <title>Ep 39: Github CEO Thomas Dohmke on Building Copilot, Scaling to 1.2M Users and the Future of Code</title>
<description><![CDATA[<p>It was very special to have Github CEO Thomas Dohmke on the pod for many reasons, not the least of which is that my partner Erica Brescia was COO at Github just before joining Redpoint! Thomas has been at Github for almost 6 years, and has been CEO for almost 3 of those years. He has a pulse on what engineers around the world are looking for from the world’s leading developer platform, and the vision and empathy to make sure Github delivers it.</p><p> </p><p>[0:00] Intro</p><p>[1:08] The Magic of GitHub Copilot </p><p>[2:07] AI's Impact on Software Development </p><p>[2:49] Global Adoption and Democratization of AI </p><p>[4:20] Keeping Developers in the Creative Flow </p><p>[6:59] Future of Software Development with AI </p><p>[9:54] Challenges and Opportunities in AI-Powered Coding </p><p>[11:31] The Role of Agents in Copilot's Strategy </p><p>[17:03] AI's Influence on Open Source Ecosystem </p><p>[24:08] Fine-Tuning and Customization of Copilot </p><p>[28:22] The Rapid Evolution of Copilot </p><p>[30:26] Future Innovations and Accelerating Pace of AI </p><p>[33:09] The Future of AI in Software Development </p><p>[33:44] AI's Impact on Different Tech Stacks </p><p>[34:21] The Evolution of Media Consumption </p><p>[35:15] The Competitive Landscape of AI Models </p><p>[37:02] The Infinite Game of Business </p><p>[38:17] The Role of Multiple AI Models in Enterprises </p><p>[43:01] Advice for Founders Competing with Incumbents </p><p>[45:06] The Importance of Focus in Startups </p><p>[45:50] Expanding the GitHub Ecosystem with Extensions </p><p>[47:20] The Next Wave of Copilot </p><p>[50:57] Quick Fire Questions </p><p>[54:35] Erica and Jordan Debrief</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) 
</p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 30 Jul 2024 13:22:00 +0000</pubDate>
      <author>jeffron@redpoint.com (Erica Brescia, Jacob Effron, Jordan Segall)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-39-github-ceo-thomas-dohmke-on-building-copilot-scaling-to-12m-users-and-the-future-of-code-QNUnWIuc</link>
<content:encoded><![CDATA[<p>It was very special to have Github CEO Thomas Dohmke on the pod for many reasons, not the least of which is that my partner Erica Brescia was COO at Github just before joining Redpoint! Thomas has been at Github for almost 6 years, and has been CEO for almost 3 of those years. He has a pulse on what engineers around the world are looking for from the world’s leading developer platform, and the vision and empathy to make sure Github delivers it.</p><p> </p><p>[0:00] Intro</p><p>[1:08] The Magic of GitHub Copilot </p><p>[2:07] AI's Impact on Software Development </p><p>[2:49] Global Adoption and Democratization of AI </p><p>[4:20] Keeping Developers in the Creative Flow </p><p>[6:59] Future of Software Development with AI </p><p>[9:54] Challenges and Opportunities in AI-Powered Coding </p><p>[11:31] The Role of Agents in Copilot's Strategy </p><p>[17:03] AI's Influence on Open Source Ecosystem </p><p>[24:08] Fine-Tuning and Customization of Copilot </p><p>[28:22] The Rapid Evolution of Copilot </p><p>[30:26] Future Innovations and Accelerating Pace of AI </p><p>[33:09] The Future of AI in Software Development </p><p>[33:44] AI's Impact on Different Tech Stacks </p><p>[34:21] The Evolution of Media Consumption </p><p>[35:15] The Competitive Landscape of AI Models </p><p>[37:02] The Infinite Game of Business </p><p>[38:17] The Role of Multiple AI Models in Enterprises </p><p>[43:01] Advice for Founders Competing with Incumbents </p><p>[45:06] The Importance of Focus in Startups </p><p>[45:50] Expanding the GitHub Ecosystem with Extensions </p><p>[47:20] The Next Wave of Copilot </p><p>[50:57] Quick Fire Questions </p><p>[54:35] Erica and Jordan Debrief</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) 
</p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="61783189" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/8f24b821-ee97-4592-b339-257b95622492/audio/f0d50c9f-2c5e-4a2a-9f9a-2f967301d08b/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 39: Github CEO Thomas Dohmke on Building Copilot, Scaling to 1.2M Users and the Future of Code</itunes:title>
      <itunes:author>Erica Brescia, Jacob Effron, Jordan Segall</itunes:author>
      <itunes:duration>01:04:21</itunes:duration>
<itunes:summary>It was very special to have Github CEO Thomas Dohmke on the pod for many reasons, not the least of which is that my partner Erica Brescia was COO at Github just before joining Redpoint! Thomas has been at Github for almost 6 years, and has been CEO for almost 3 of those years. He has a pulse on what engineers around the world are looking for from the world’s leading developer platform, and the vision and empathy to make sure Github delivers it.</itunes:summary>
<itunes:subtitle>It was very special to have Github CEO Thomas Dohmke on the pod for many reasons, not the least of which is that my partner Erica Brescia was COO at Github just before joining Redpoint! Thomas has been at Github for almost 6 years, and has been CEO for almost 3 of those years. He has a pulse on what engineers around the world are looking for from the world’s leading developer platform, and the vision and empathy to make sure Github delivers it.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>39</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a906e06a-a449-421b-befa-895c14390350</guid>
      <title>Ep 38: Runway CEO Cristobal Valenzuela on the Next Frontiers for AI Media and The Role of Human Taste in AI Filmmaking</title>
<description><![CDATA[<p>Cris is the co-founder and CEO of Runway, which builds breathtakingly real video AI tools, including the incredible Gen-3 Alpha foundation model. Cris sits right at the intersection of technology and creativity, and in 2023 was named to TIME’s 100 Most Influential People in AI. Runway is reported to be in talks to raise capital at a $4 billion valuation.</p><p> </p><p>(0:00) intro<br />(0:37) how early are we in the AI for creative tools space<br />(3:01) how Cris tests new models<br />(8:21) who uses Runway?<br />(10:37) how Runway teaches new users<br />(14:08) does UI matter?<br />(29:09) what are the next frontiers for video models?<br />(32:32) allocating resources for the research team<br />(39:15) how Cris thinks about pricing<br />(42:10) one video model to rule them all?<br />(44:24) incorporating IP<br />(52:45) over-hyped/under-hyped<br />(53:30) biggest surprises while building Runway<br />(55:12) advice for art students</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 16 Jul 2024 13:11:40 +0000</pubDate>
      <author>jeffron@redpoint.com (Cristobal Valenzuela, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-38-runway-ceo-cristobal-valenzuela-on-the-next-frontiers-for-ai-media-and-the-role-of-human-taste-in-ai-filmmaking-FQI9Nu1c</link>
<content:encoded><![CDATA[<p>Cris is the co-founder and CEO of Runway, which builds breathtakingly real video AI tools, including the incredible Gen-3 Alpha foundation model. Cris sits right at the intersection of technology and creativity, and in 2023 was named to TIME’s 100 Most Influential People in AI. Runway is reported to be in talks to raise capital at a $4 billion valuation.</p><p> </p><p>(0:00) intro<br />(0:37) how early are we in the AI for creative tools space<br />(3:01) how Cris tests new models<br />(8:21) who uses Runway?<br />(10:37) how Runway teaches new users<br />(14:08) does UI matter?<br />(29:09) what are the next frontiers for video models?<br />(32:32) allocating resources for the research team<br />(39:15) how Cris thinks about pricing<br />(42:10) one video model to rule them all?<br />(44:24) incorporating IP<br />(52:45) over-hyped/under-hyped<br />(53:30) biggest surprises while building Runway<br />(55:12) advice for art students</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="55542222" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/f60c0015-0aa9-49ab-ba34-3f67401458e8/audio/965ba7e1-790e-4248-bce7-9de6936f035f/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 38: Runway CEO Cristobal Valenzuela on the Next Frontiers for AI Media and The Role of Human Taste in AI Filmmaking</itunes:title>
      <itunes:author>Cristobal Valenzuela, Jacob Effron</itunes:author>
      <itunes:duration>00:57:51</itunes:duration>
<itunes:summary>Cris is the co-founder and CEO of Runway, which builds breathtakingly real video AI tools, including the incredible Gen-3 Alpha foundation model. Cris sits right at the intersection of technology and creativity, and in 2023 was named to TIME’s 100 Most Influential People in AI. Runway is reported to be in talks to raise capital at a $4 billion valuation. </itunes:summary>
      <itunes:subtitle>Cris is the co-founder and CEO of Runway, which builds breathtakingly real video AI tools, including the incredible Gen-3 Alpha foundation model. Cris sits right at the intersection of technology and creativity, and in 2023 was named to TIME’s 100 Most Influential People in AI. Runway is reported to be in talks to raise capital at a $4 billion valuation. </itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>38</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">af85a802-5867-4ad9-b892-626c7af3dd08</guid>
      <title>Ep 37: Co-Founder and CEO of Fireflies.ai Krish Ramineni on How AI Catalyzed Fireflies to 16M Users</title>
<description><![CDATA[<p>According to Ramp’s (ramp.com) quarterly spend report, Fireflies was the 4th highest AI platform by spend. With over 300,000 customers worldwide and 16M users, Fireflies.ai operates at some of the largest scale among AI companies today. </p><p>This week on Unsupervised Learning we had Krish Ramineni, Co-founder & CEO of Fireflies, on to talk about how he sees AI changing the way we work and conduct meetings, and to share his biggest learnings about AI from building Fireflies.</p><p> </p><p>(0:00) intro</p><p>(1:01) how will AI change meetings going forward</p><p>(4:03) Fireflies’ capabilities</p><p>(8:18) how new models change Fireflies</p><p>(11:19) shortcomings of current models</p><p>(16:36) Krish’s lack of belief in fine-tuning</p><p>(26:19) dealing with high inference costs</p><p>(34:00) taking on incumbents</p><p>(40:50) what metrics matter most to Fireflies</p><p>(46:37) how is Fireflies so fast?</p><p>(54:28) over-hyped/under-hyped</p><p>(55:11) biggest surprises in building Fireflies</p><p>(1:01:15) Jacob and Rashad debrief</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Mon, 1 Jul 2024 13:00:00 +0000</pubDate>
      <author>jeffron@redpoint.com (Krish Ramineni, Jacob Effron, Rashad Assir)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-37-co-founder-and-ceo-of-firefliesai-krish-ramineni-on-how-ai-catalyzed-fireflies-to-16m-users-tmY2BHO_</link>
<content:encoded><![CDATA[<p>According to Ramp’s (ramp.com) quarterly spend report, Fireflies was the 4th highest AI platform by spend. With over 300,000 customers worldwide and 16M users, Fireflies.ai operates at some of the largest scale among AI companies today. </p><p>This week on Unsupervised Learning we had Krish Ramineni, Co-founder & CEO of Fireflies, on to talk about how he sees AI changing the way we work and conduct meetings, and to share his biggest learnings about AI from building Fireflies.</p><p> </p><p>(0:00) intro</p><p>(1:01) how will AI change meetings going forward</p><p>(4:03) Fireflies’ capabilities</p><p>(8:18) how new models change Fireflies</p><p>(11:19) shortcomings of current models</p><p>(16:36) Krish’s lack of belief in fine-tuning</p><p>(26:19) dealing with high inference costs</p><p>(34:00) taking on incumbents</p><p>(40:50) what metrics matter most to Fireflies</p><p>(46:37) how is Fireflies so fast?</p><p>(54:28) over-hyped/under-hyped</p><p>(55:11) biggest surprises in building Fireflies</p><p>(1:01:15) Jacob and Rashad debrief</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="65811898" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/a61b8a7f-5025-4290-9845-df2272a824e0/audio/e8da116c-88af-4e06-9a12-32a231843d4b/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 37: Co-Founder and CEO of Fireflies.ai Krish Ramineni on How AI Catalyzed Fireflies to 16M Users</itunes:title>
      <itunes:author>Krish Ramineni, Jacob Effron, Rashad Assir</itunes:author>
      <itunes:duration>01:08:33</itunes:duration>
<itunes:summary>According to Ramp’s (ramp.com) quarterly spend report, Fireflies was the 4th highest AI platform by spend. With over 300,000 customers worldwide and 16M users, Fireflies.ai operates at some of the largest scale among AI companies today.

This week on Unsupervised Learning we had Krish Ramineni, Co-founder &amp; CEO of Fireflies, on to talk about how he sees AI changing the way we work and conduct meetings, and to share his biggest learnings about AI from building Fireflies.</itunes:summary>
<itunes:subtitle>According to Ramp’s (ramp.com) quarterly spend report, Fireflies was the 4th highest AI platform by spend. With over 300,000 customers worldwide and 16M users, Fireflies.ai operates at some of the largest scale among AI companies today.

This week on Unsupervised Learning we had Krish Ramineni, Co-founder &amp; CEO of Fireflies, on to talk about how he sees AI changing the way we work and conduct meetings, and to share his biggest learnings about AI from building Fireflies.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>37</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">64d41601-5d80-4b8e-9da8-eb4d9fb6b30b</guid>
      <title>Ep 36: Adobe CPO Scott Belsky on How AI Will Transform Creative Workflows</title>
<description><![CDATA[<p>Scott Belsky is the founder of Behance, a creative platform sold to Adobe in 2012. He has since become Chief Product Officer at Adobe, leading design for all products across Creative Cloud, Document Cloud and the Digital Experience business. This week on Unsupervised Learning Scott shares his thoughts on the future of creative tools with AI, a future where hyper-personalization wins and what humans will do when content is commoditized.</p><p> </p><p>(0:00) intro</p><p>(2:27) Adobe’s new AI tools</p><p>(4:40) best uses of Adobe’s AI features</p><p>(7:22) educating users</p><p>(9:28) will the future have one model or thousands?</p><p>(11:01) Adobe building their own models</p><p>(15:12) goals for video generation</p><p>(19:40) hyper-personalized media</p><p>(22:11) AI music</p><p>(26:30) pricing for AI features</p><p>(28:02) biggest surprises in building AI features for Adobe</p><p>(30:58) most exciting AI startups (KoBold AI)</p><p>(32:13) where else would Scott build in the AI world</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 11 Jun 2024 13:08:54 +0000</pubDate>
      <author>jeffron@redpoint.com (Scott Belsky, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-36-behance-founder-scott-belsky-on-how-ai-will-transform-creative-workflows-ZB3EUjLo</link>
<content:encoded><![CDATA[<p>Scott Belsky is the founder of Behance, a creative platform sold to Adobe in 2012. He has since become Chief Product Officer at Adobe, leading design for all products across Creative Cloud, Document Cloud and the Digital Experience business. This week on Unsupervised Learning Scott shares his thoughts on the future of creative tools with AI, a future where hyper-personalization wins and what humans will do when content is commoditized.</p><p> </p><p>(0:00) intro</p><p>(2:27) Adobe’s new AI tools</p><p>(4:40) best uses of Adobe’s AI features</p><p>(7:22) educating users</p><p>(9:28) will the future have one model or thousands?</p><p>(11:01) Adobe building their own models</p><p>(15:12) goals for video generation</p><p>(19:40) hyper-personalized media</p><p>(22:11) AI music</p><p>(26:30) pricing for AI features</p><p>(28:02) biggest surprises in building AI features for Adobe</p><p>(30:58) most exciting AI startups (KoBold AI)</p><p>(32:13) where else would Scott build in the AI world</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="34346676" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/67d9408f-3dd3-42a4-ba90-88a1474702ae/audio/996f9318-17fb-4b8d-af1e-f13e6426e882/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 36: Adobe CPO Scott Belsky on How AI Will Transform Creative Workflows</itunes:title>
      <itunes:author>Scott Belsky, Jacob Effron</itunes:author>
      <itunes:duration>00:35:46</itunes:duration>
<itunes:summary>Scott Belsky is the founder of Behance, a creative platform sold to Adobe in 2012. He has since become Chief Product Officer at Adobe, leading design for all products across Creative Cloud, Document Cloud and the Digital Experience business. This week on Unsupervised Learning Scott shares his thoughts on the future of creative tools with AI, a future where hyper-personalization wins and what humans will do when content is commoditized.
</itunes:summary>
<itunes:subtitle>Scott Belsky is the founder of Behance, a creative platform sold to Adobe in 2012. He has since become Chief Product Officer at Adobe, leading design for all products across Creative Cloud, Document Cloud and the Digital Experience business. This week on Unsupervised Learning Scott shares his thoughts on the future of creative tools with AI, a future where hyper-personalization wins and what humans will do when content is commoditized.
</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>36</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">50d7239c-f0ba-4d96-a338-321c02a157b3</guid>
      <title>Ep 35: CEO of Suno Mikey Shulman on Future of Music with AI, Tactics for Model Eval and Solving the Blank Canvas Problem</title>
      <description><![CDATA[<p>Last week, Suno announced $125M in funding, marking a significant milestone in their journey to reshape the music creation landscape. On this week's episode of Unsupervised Learning, we caught up with Suno's founder, Mikey Shulman, to dive into their approach to multiplayer music collaboration, how they got Suno to be so fast and the future of digital concerts (and workshopped new intro music for the pod 👀).</p><p> </p><p>(0:00) intro<br />(0:31) Mikey’s favorite Suno songs<br />(5:17) who uses Suno?<br />(7:50) teaching people how to use Suno<br />(9:57) new ways to prompt models<br />(13:24) the social aspect of Suno<br />(17:45) how does Suno approach pricing?<br />(19:37) model eval<br />(23:27) how models can improve<br />(24:58) how is Suno so fast?<br />(26:50) handling usage spikes<br />(34:28) raising $125 million<br />(41:00) IP partnerships<br />(45:55) over-hyped/under-hyped<br />(46:56) biggest surprises in building Suno<br />(50:53) generating an Unsupervised Learning theme song<br />(53:26) Jacob and Rashad debrief</p><p> </p><p>With your co-hosts:  </p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health  </p><p>@patrickachase  </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn  </p><p>@ericabrescia  </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)  </p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Thu, 30 May 2024 13:42:46 +0000</pubDate>
      <author>jeffron@redpoint.com (Jacob Effron, Mikey Shulman, Rashad Assir)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-35-ceo-of-suno-mikey-shulman-on-future-of-music-with-ai-tactics-for-model-eval-and-solving-the-blank-canvas-problem-t4PdXA_V</link>
      <content:encoded><![CDATA[<p>Last week, Suno announced $125M in funding, marking a significant milestone in their journey to reshape the music creation landscape. On this week's episode of Unsupervised Learning, we caught up with Suno's founder, Mikey Shulman, to dive into their approach to multiplayer music collaboration, how they got Suno to be so fast and the future of digital concerts (and workshopped new intro music for the pod 👀).</p><p> </p><p>(0:00) intro<br />(0:31) Mikey’s favorite Suno songs<br />(5:17) who uses Suno?<br />(7:50) teaching people how to use Suno<br />(9:57) new ways to prompt models<br />(13:24) the social aspect of Suno<br />(17:45) how does Suno approach pricing?<br />(19:37) model eval<br />(23:27) how models can improve<br />(24:58) how is Suno so fast?<br />(26:50) handling usage spikes<br />(34:28) raising $125 million<br />(41:00) IP partnerships<br />(45:55) over-hyped/under-hyped<br />(46:56) biggest surprises in building Suno<br />(50:53) generating an Unsupervised Learning theme song<br />(53:26) Jacob and Rashad debrief</p><p> </p><p>With your co-hosts:  </p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health  </p><p>@patrickachase  </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn  </p><p>@ericabrescia  </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)  </p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="59521194" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/1e43038b-e00d-48da-91d0-75bfc7370110/audio/87aa7189-a607-4f98-a239-38cc2a736eb6/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 35: CEO of Suno Mikey Shulman on Future of Music with AI, Tactics for Model Eval and Solving the Blank Canvas Problem</itunes:title>
      <itunes:author>Jacob Effron, Mikey Shulman, Rashad Assir</itunes:author>
      <itunes:duration>01:02:00</itunes:duration>
      <itunes:summary>Last week, Suno announced $125M in funding, marking a significant milestone in their journey to reshape the music creation landscape. On this week&apos;s episode of Unsupervised Learning, we caught up with Suno&apos;s founder, Mikey Shulman, to dive into their approach to multiplayer music collaboration, how they got Suno to be so fast and the future of digital concerts (and workshopped new intro music for the pod 👀).</itunes:summary>
      <itunes:subtitle>Last week, Suno announced $125M in funding, marking a significant milestone in their journey to reshape the music creation landscape. On this week&apos;s episode of Unsupervised Learning, we caught up with Suno&apos;s founder, Mikey Shulman, to dive into their approach to multiplayer music collaboration, how they got Suno to be so fast and the future of digital concerts (and workshopped new intro music for the pod 👀).</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>35</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e61b2ccf-da57-441c-ac37-c68eeeaa8318</guid>
      <title>Ep 34: Eric Ries and Jeremy Howard (Answer.ai) on the Biggest Mistakes AI Founders are Making and Building the Bell Labs of AI</title>
      <description><![CDATA[<p>In this week’s episode of Unsupervised Learning, we delve into the forefront of AI innovation with Eric Ries and Jeremy Howard. Eric Ries, renowned for pioneering the Lean Startup movement, has consistently influenced modern entrepreneurial strategies with his emphasis on agile, sustainable growth through innovation. Jeremy Howard is known for his contributions to deep learning and data science, co-founding the fast.ai educational initiative that democratizes access to cutting-edge AI learning.</p><p> </p><p>Eric's new podcast and newsletter:</p><p><a href="https://www.ericriesshow.com/" target="_blank">https://www.ericriesshow.com/</a></p><p><a href="https://ericries.carrd.co/" target="_blank">https://ericries.carrd.co/</a></p><p> </p><p>(0:00) intro </p><p>(0:33) The Lean Startup </p><p>(4:34) thinking about defensibility </p><p>(9:10) best way to get caught up on AI </p><p>(11:34) starting Answer.ai </p><p>(23:48) efficient fine-tuning of Llama 3 </p><p>(38:21) AI regulations </p><p>(48:27) over-hyped/under-hyped </p><p>(48:53) most exciting AI startups </p><p>(55:37) Jacob and Jordan debrief</p><p><br /> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 22 May 2024 13:05:42 +0000</pubDate>
      <author>jeffron@redpoint.com (Eric Ries, Jeremy Howard, Jacob Effron, Jordan Segall)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-34-answerai-founders-eric-ries-and-jeremy-howard-on-the-biggest-mistakes-ai-founders-are-making-today-EoaXpsbr</link>
      <content:encoded><![CDATA[<p>In this week’s episode of Unsupervised Learning, we delve into the forefront of AI innovation with Eric Ries and Jeremy Howard. Eric Ries, renowned for pioneering the Lean Startup movement, has consistently influenced modern entrepreneurial strategies with his emphasis on agile, sustainable growth through innovation. Jeremy Howard is known for his contributions to deep learning and data science, co-founding the fast.ai educational initiative that democratizes access to cutting-edge AI learning.</p><p> </p><p>Eric's new podcast and newsletter:</p><p><a href="https://www.ericriesshow.com/" target="_blank">https://www.ericriesshow.com/</a></p><p><a href="https://ericries.carrd.co/" target="_blank">https://ericries.carrd.co/</a></p><p> </p><p>(0:00) intro </p><p>(0:33) The Lean Startup </p><p>(4:34) thinking about defensibility </p><p>(9:10) best way to get caught up on AI </p><p>(11:34) starting Answer.ai </p><p>(23:48) efficient fine-tuning of Llama 3 </p><p>(38:21) AI regulations </p><p>(48:27) over-hyped/under-hyped </p><p>(48:53) most exciting AI startups </p><p>(55:37) Jacob and Jordan debrief</p><p><br /> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="57608193" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/0783b3ce-39c4-447b-a365-a164bf099fd3/audio/13695ce3-33c1-4b8b-ad6d-d43b3000bc0f/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 34: Eric Ries and Jeremy Howard (Answer.ai) on the Biggest Mistakes AI Founders are Making and Building the Bell Labs of AI</itunes:title>
      <itunes:author>Eric Ries, Jeremy Howard, Jacob Effron, Jordan Segall</itunes:author>
      <itunes:duration>01:00:00</itunes:duration>
      <itunes:summary>In this week’s episode of Unsupervised Learning, we delve into the forefront of AI innovation with Eric Ries and Jeremy Howard. Eric Ries, renowned for pioneering the Lean Startup movement, has consistently influenced modern entrepreneurial strategies with his emphasis on agile, sustainable growth through innovation. Jeremy Howard is known for his contributions to deep learning and data science, co-founding the fast.ai educational initiative that democratizes access to cutting-edge AI learning.</itunes:summary>
      <itunes:subtitle>In this week’s episode of Unsupervised Learning, we delve into the forefront of AI innovation with Eric Ries and Jeremy Howard. Eric Ries, renowned for pioneering the Lean Startup movement, has consistently influenced modern entrepreneurial strategies with his emphasis on agile, sustainable growth through innovation. Jeremy Howard is known for his contributions to deep learning and data science, co-founding the fast.ai educational initiative that democratizes access to cutting-edge AI learning.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>34</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2ae6c7f5-a6f2-4c74-93af-b18bc08c4f76</guid>
      <title>Bonus Episode: Sam Altman (CEO, OpenAI) Talks GPT-4o and Predicts the Future of AI</title>
      <description><![CDATA[<p>In this cross-over episode, Sam Altman sat down with Logan Bartlett on the day of the GPT-4o announcement to share behind-the-scenes details of the launch and offer his predictions for the future of AI. Altman delves into OpenAI’s vision, discusses the timeline for achieving AGI, and explores the societal impact of humanoid robots. He also expresses his excitement and concerns about AI personal assistants, highlights the biggest opportunities and risks in the AI landscape today, and much more.</p><p>(0:00) Intro<br />(00:41) The Personal Impact of Leading OpenAI<br />(01:35) Unveiling Multimodal AI: A Leap in Technology<br />(02:38) The Surprising Use Cases and Benefits of Multimodal AI<br />(03:14) Behind the Scenes: Making Multimodal AI Possible<br />(08:27) Envisioning the Future of AI in Communication and Creativity<br />(10:12) The Business of AI: Monetization, Open Source, and Future Directions<br />(16:33) AI's Role in Shaping Future Jobs and Experiences<br />(20:20) Debunking AGI: A Continuous Journey Towards Advanced AI<br />(23:55) Exploring the Pace of Scientific and Technological Progress<br />(24:09) The Importance of Interpretability in AI<br />(25:02) Navigating AI Ethics and Regulation<br />(27:17) The Safety Paradigm in AI and Beyond<br />(28:46) Personal Reflections and the Impact of AI on Society<br />(29:02) The Future of AI: Fast Takeoff Scenarios and Societal Changes<br />(30:50) Navigating Personal and Professional Challenges<br />(40:12) The Role of AI in Creative and Personal Identity<br />(43:00) Educational System Adaptations for the AI Era<br />(44:21) Contemplating the Future with Advanced AI<br />(45:21) Jacob and Pat Debrief</p><p>With your co-hosts:</p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq'd by VMWare) </p><p>@jordan_segall 
</p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Mon, 20 May 2024 16:00:00 +0000</pubDate>
      <author>jeffron@redpoint.com (Sam Altman, Logan Bartlett, Jacob Effron, Patrick Chase)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/bonus-episode-sam-altman-ceo-openai-talks-gpt-4o-and-predicts-the-future-of-ai-_sl6NEMt</link>
      <content:encoded><![CDATA[<p>In this cross-over episode, Sam Altman sat down with Logan Bartlett on the day of the GPT-4o announcement to share behind-the-scenes details of the launch and offer his predictions for the future of AI. Altman delves into OpenAI’s vision, discusses the timeline for achieving AGI, and explores the societal impact of humanoid robots. He also expresses his excitement and concerns about AI personal assistants, highlights the biggest opportunities and risks in the AI landscape today, and much more.</p><p>(0:00) Intro<br />(00:41) The Personal Impact of Leading OpenAI<br />(01:35) Unveiling Multimodal AI: A Leap in Technology<br />(02:38) The Surprising Use Cases and Benefits of Multimodal AI<br />(03:14) Behind the Scenes: Making Multimodal AI Possible<br />(08:27) Envisioning the Future of AI in Communication and Creativity<br />(10:12) The Business of AI: Monetization, Open Source, and Future Directions<br />(16:33) AI's Role in Shaping Future Jobs and Experiences<br />(20:20) Debunking AGI: A Continuous Journey Towards Advanced AI<br />(23:55) Exploring the Pace of Scientific and Technological Progress<br />(24:09) The Importance of Interpretability in AI<br />(25:02) Navigating AI Ethics and Regulation<br />(27:17) The Safety Paradigm in AI and Beyond<br />(28:46) Personal Reflections and the Impact of AI on Society<br />(29:02) The Future of AI: Fast Takeoff Scenarios and Societal Changes<br />(30:50) Navigating Personal and Professional Challenges<br />(40:12) The Role of AI in Creative and Personal Identity<br />(43:00) Educational System Adaptations for the AI Era<br />(44:21) Contemplating the Future with Advanced AI<br />(45:21) Jacob and Pat Debrief</p><p>With your co-hosts:</p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq'd by VMWare) </p><p>@jordan_segall 
</p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="51478403" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/647c7a38-deb4-4afe-9ede-59202d41fc01/audio/fcaf0f55-1f92-44de-8b14-92ab030abd28/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Bonus Episode: Sam Altman (CEO, OpenAI) Talks GPT-4o and Predicts the Future of AI</itunes:title>
      <itunes:author>Sam Altman, Logan Bartlett, Jacob Effron, Patrick Chase</itunes:author>
      <itunes:duration>00:53:37</itunes:duration>
      <itunes:summary>In this cross-over episode, Sam Altman sat down with Logan Bartlett (Investor, Redpoint) on the day of the GPT-4o announcement to share behind-the-scenes details of the launch and offer his predictions for the future of AI. If you&apos;ve already listened to the episode, check out Pat and Jacob&apos;s debrief at the end of the episode! 


Altman delves into OpenAI’s vision, discusses the timeline for achieving AGI, and explores the societal impact of humanoid robots. He also expresses his excitement and concerns about AI personal assistants, highlights the biggest opportunities and risks in the AI landscape today, and much more.</itunes:summary>
      <itunes:subtitle>In this cross-over episode, Sam Altman sat down with Logan Bartlett (Investor, Redpoint) on the day of the GPT-4o announcement to share behind-the-scenes details of the launch and offer his predictions for the future of AI. If you&apos;ve already listened to the episode, check out Pat and Jacob&apos;s debrief at the end of the episode! 


Altman delves into OpenAI’s vision, discusses the timeline for achieving AGI, and explores the societal impact of humanoid robots. He also expresses his excitement and concerns about AI personal assistants, highlights the biggest opportunities and risks in the AI landscape today, and much more.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>bonus</itunes:episodeType>
    </item>
    <item>
      <guid isPermaLink="false">71d995c5-53f9-4c25-8fde-cd39e35cb37a</guid>
      <title>Ep 33: CTO and Co-Founder of Sourcegraph on Current Landscape and Future of Software Development, How to Make RAG Better, and Building Towards the Agentic Future</title>
      <description><![CDATA[<p>On this week’s Unsupervised Learning, Pat and I sat down with CTO and Co-Founder of Sourcegraph, Beyang Liu. Sourcegraph is a leader in the AI coding space, and recently launched its AI coding assistant, Cody. Beyang shared with us his view on the current landscape of AI coding and the future of coding and software development. He also shared how Sourcegraph has tried to make RAG better, and their model eval approaches.</p><p> </p><p>(0:00) intro<br />(0:47) advice for young coders<br />(3:34) AI products at Sourcegraph<br />(6:17) the current state of AI coding<br />(12:33) what happens when a new GPT model comes out?<br />(20:16) what types of developers benefit from these AI tools?<br />(30:45) how important is inference cost?<br />(35:31) how does Sourcegraph structure AI teams?<br />(41:27) what metrics does Sourcegraph use to evaluate their products?<br />(50:02) customizing RAG<br />(56:55) getting ahead of the agentic future<br />(1:05:05) will there be more or fewer engineers in the future?<br />(1:13:50) over-hyped/under-hyped<br />(1:16:56) surprises during the Sourcegraph journey<br />(1:18:26) Cognition buzz and Devin<br />(1:26:48) Jacob and Pat debrief</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 30 Apr 2024 13:21:37 +0000</pubDate>
      <author>jeffron@redpoint.com (Beyang Liu, Jacob Effron, Patrick Chase)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-33-cto-and-co-founder-of-sourcegraph-on-current-landscape-and-future-of-software-development-how-to-make-rag-better-and-building-towards-the-agentic-future-t_Q6r4qF</link>
      <content:encoded><![CDATA[<p>On this week’s Unsupervised Learning, Pat and I sat down with CTO and Co-Founder of Sourcegraph, Beyang Liu. Sourcegraph is a leader in the AI coding space, and recently launched its AI coding assistant, Cody. Beyang shared with us his view on the current landscape of AI coding and the future of coding and software development. He also shared how Sourcegraph has tried to make RAG better, and their model eval approaches.</p><p> </p><p>(0:00) intro<br />(0:47) advice for young coders<br />(3:34) AI products at Sourcegraph<br />(6:17) the current state of AI coding<br />(12:33) what happens when a new GPT model comes out?<br />(20:16) what types of developers benefit from these AI tools?<br />(30:45) how important is inference cost?<br />(35:31) how does Sourcegraph structure AI teams?<br />(41:27) what metrics does Sourcegraph use to evaluate their products?<br />(50:02) customizing RAG<br />(56:55) getting ahead of the agentic future<br />(1:05:05) will there be more or fewer engineers in the future?<br />(1:13:50) over-hyped/under-hyped<br />(1:16:56) surprises during the Sourcegraph journey<br />(1:18:26) Cognition buzz and Devin<br />(1:26:48) Jacob and Pat debrief</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="92965033" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/2799921e-c881-48eb-a68c-4f00ca0caace/audio/253408c2-b2d9-4329-bec9-24a14e6ec018/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 33: CTO and Co-Founder of Sourcegraph on Current Landscape and Future of Software Development, How to Make RAG Better, and Building Towards the Agentic Future</itunes:title>
      <itunes:author>Beyang Liu, Jacob Effron, Patrick Chase</itunes:author>
      <itunes:duration>01:36:50</itunes:duration>
      <itunes:summary>On this week’s Unsupervised Learning, Pat and I sat down with CTO and Co-Founder of Sourcegraph, Beyang Liu. Sourcegraph is a leader in the AI coding space, and recently launched its AI coding assistant, Cody. Beyang shared with us his view on the current landscape of AI coding and the future of coding and software development. He also shared how Sourcegraph has tried to make RAG better, and their model eval approaches.</itunes:summary>
      <itunes:subtitle>On this week’s Unsupervised Learning, Pat and I sat down with CTO and Co-Founder of Sourcegraph, Beyang Liu. Sourcegraph is a leader in the AI coding space, and recently launched its AI coding assistant, Cody. Beyang shared with us his view on the current landscape of AI coding and the future of coding and software development. He also shared how Sourcegraph has tried to make RAG better, and their model eval approaches.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>33</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d7120d95-5007-4b70-83de-ec001c91a96c</guid>
      <title>Ep 32: CEO and Founder of Pinecone Edo Liberty on Pioneering Vector Databases, Barriers to Productionalizing Models and Why What’s Happening with GPUs is Not Sustainable</title>
      <description><![CDATA[<p>Pinecone has raised over $130 million and was most recently valued at $750 million. On this week’s Unsupervised Learning, we sat down with CEO and Founder of Pinecone, Edo Liberty. Pinecone is arguably one of the most important elements in today's modern data stack. Edo shared with us the most common use cases of Pinecone, the evolving landscape of vector databases, challenges in building vector databases, the "painful" launch of their serverless model, and what people most get wrong about Pinecone.</p><p> </p><p>(0:00) intro</p><p>(0:33) what was it like when ChatGPT came out?</p><p>(6:29) Edo’s favorite applications built on Pinecone</p><p>(10:34) will we see more image and video applications in 2024?</p><p>(14:58) best ways to deal with hallucinations</p><p>(18:12) the evolving landscape of vector databases</p><p>(20:27) if Edo had to build a product, what would his stack look like?</p><p>(31:45) helping clients versus letting them figure things out</p><p>(36:38) moving to a serverless model</p><p>(40:33) what areas of AI should new startups target?</p><p>(45:18) Amazon SageMaker</p><p>(50:38) over-hyped/under-hyped</p><p>(51:30) biggest surprises while building Pinecone</p><p>(56:13) Jacob and Pat debrief</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 16 Apr 2024 17:23:28 +0000</pubDate>
      <author>jeffron@redpoint.com (Edo Liberty, Jacob Effron, Patrick Chase)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-32-ceo-and-founder-of-pinecone-edo-liberty-on-pioneering-vector-databases-barriers-to-productionalizing-models-and-why-whats-happening-with-gpus-is-not-sustainable-_kSGhWcu</link>
      <content:encoded><![CDATA[<p>Pinecone has raised over $130 million and was most recently valued at $750 million. On this week’s Unsupervised Learning, we sat down with CEO and Founder of Pinecone, Edo Liberty. Pinecone is arguably one of the most important elements in today's modern data stack. Edo shared with us the most common use cases of Pinecone, the evolving landscape of vector databases, challenges in building vector databases, the "painful" launch of their serverless model, and what people most get wrong about Pinecone.</p><p> </p><p>(0:00) intro</p><p>(0:33) what was it like when ChatGPT came out?</p><p>(6:29) Edo’s favorite applications built on Pinecone</p><p>(10:34) will we see more image and video applications in 2024?</p><p>(14:58) best ways to deal with hallucinations</p><p>(18:12) the evolving landscape of vector databases</p><p>(20:27) if Edo had to build a product, what would his stack look like?</p><p>(31:45) helping clients versus letting them figure things out</p><p>(36:38) moving to a serverless model</p><p>(40:33) what areas of AI should new startups target?</p><p>(45:18) Amazon SageMaker</p><p>(50:38) over-hyped/under-hyped</p><p>(51:30) biggest surprises while building Pinecone</p><p>(56:13) Jacob and Pat debrief</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="58760089" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/c5a92982-8b03-4160-a737-b063b88337bf/audio/ca382fc3-1036-4612-ad0e-bbf1b05afa43/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 32: CEO and Founder of Pinecone Edo Liberty on Pioneering Vector Databases, Barriers to Productionalizing Models and Why What’s Happening with GPUs is Not Sustainable</itunes:title>
      <itunes:author>Edo Liberty, Jacob Effron, Patrick Chase</itunes:author>
      <itunes:duration>01:01:12</itunes:duration>
      <itunes:summary>Pinecone has raised over $130 million and was most recently valued at $750 million. On this week’s Unsupervised Learning, we sat down with CEO and Founder of Pinecone, Edo Liberty. Pinecone is arguably one of the most important elements in today&apos;s modern data stack. Edo shared with us the most common use cases of Pinecone, the evolving landscape of vector databases, challenges in building vector databases, the &quot;painful&quot; launch of their serverless model, and what people most get wrong about Pinecone.</itunes:summary>
      <itunes:subtitle>Pinecone has raised over $130 million and was most recently valued at $750 million. On this week’s Unsupervised Learning, we sat down with CEO and Founder of Pinecone, Edo Liberty. Pinecone is arguably one of the most important elements in today&apos;s modern data stack. Edo shared with us the most common use cases of Pinecone, the evolving landscape of vector databases, challenges in building vector databases, the &quot;painful&quot; launch of their serverless model, and what people most get wrong about Pinecone.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>32</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">bafccd80-06ac-469a-922e-4b4b2d92a39e</guid>
      <title>Ep 31: CEO and Co-Founder of Mistral Arthur Mensch on the Next Frontiers for LLMs, Why Open Source Will Prevail and AI Safety</title>
      <description><![CDATA[<p>Mistral AI is often seen as the startup challenging OpenAI and incumbents developing LLMs. On this week’s episode of Unsupervised Learning, we sat down with CEO and Co-Founder at Mistral AI, Arthur Mensch. Arthur shared with us his view on why open-source will prevail, how Mistral gets LLMs into the hands of enterprises, the build vs. partnership decisions, the competitive landscape and future of LLMs, and how he’d regulate AI safety. </p><p> </p><p>(0:00) intro</p><p>(0:46) origins of the name “Mistral”</p><p>(2:20) logo origins</p><p>(3:06) closed-source vs open-source models</p><p>(6:31) “training models is what we do best”</p><p>(7:50) Mistral’s partnership strategy</p><p>(10:12) the next frontiers for LLMs</p><p>(11:47) Meta’s GPU announcement</p><p>(13:03) when will Mistral catch up to ChatGPT?</p><p>(16:00) NVIDIA chips</p><p>(16:55) AI regulation and EU AI Act</p><p>(20:07) who should handle AI safety?</p><p>(20:51) policy changes that Arthur would make</p><p>(22:52) foundation models around the world</p><p>(25:50) starting Mistral</p><p>(26:54) releasing Le Chat</p><p>(30:19) over-hyped/under-hyped</p><p>(30:32) surprises while building Mistral</p><p>(31:55) AI startups Arthur is excited about</p><p>(32:19) what application would Arthur build</p><p>(33:46) Jacob and Jordan debrief</p><p> </p><p>With your co-hosts:  </p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health  </p><p>@patrickachase  </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn  </p><p>@ericabrescia  </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)  </p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Thu, 28 Mar 2024 13:00:00 +0000</pubDate>
      <author>jeffron@redpoint.com (Arthur Mensch, Jacob Effron, Jordan Segall)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-31-ceo-and-co-founder-of-mistral-arthur-mensch-on-the-next-frontiers-for-llms-why-open-source-will-prevail-and-ai-safety-SPRFdq_i</link>
      <content:encoded><![CDATA[<p>Mistral AI is often seen as the startup challenging OpenAI and incumbents developing LLMs. On this week’s episode of Unsupervised Learning, we sat down with CEO and Co-Founder at Mistral AI, Arthur Mensch. Arthur shared with us his view on why open-source will prevail, how Mistral gets LLMs into the hands of enterprises, the build vs. partnership decisions, the competitive landscape and future of LLMs, and how he’d regulate AI safety. </p><p> </p><p>(0:00) intro</p><p>(0:46) origins of the name “Mistral”</p><p>(2:20) logo origins</p><p>(3:06) closed-source vs open-source models</p><p>(6:31) “training models is what we do best”</p><p>(7:50) Mistral’s partnership strategy</p><p>(10:12) the next frontiers for LLMs</p><p>(11:47) Meta’s GPU announcement</p><p>(13:03) when will Mistral catch up to ChatGPT?</p><p>(16:00) NVIDIA chips</p><p>(16:55) AI regulation and EU AI Act</p><p>(20:07) who should handle AI safety?</p><p>(20:51) policy changes that Arthur would make</p><p>(22:52) foundation models around the world</p><p>(25:50) starting Mistral</p><p>(26:54) releasing Le Chat</p><p>(30:19) over-hyped/under-hyped</p><p>(30:32) surprises while building Mistral</p><p>(31:55) AI startups Arthur is excited about</p><p>(32:19) what application would Arthur build</p><p>(33:46) Jacob and Jordan debrief</p><p> </p><p>With your co-hosts:  </p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health  </p><p>@patrickachase  </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn  </p><p>@ericabrescia  </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)  </p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="38356994" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/d05b3aaf-4722-436f-a4ef-252341a8309b/audio/c733850e-8df8-45dc-bf78-3a2749bd37ee/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 31: CEO and Co-Founder of Mistral Arthur Mensch on the Next Frontiers for LLMs, Why Open Source Will Prevail and AI Safety</itunes:title>
      <itunes:author>Arthur Mensch, Jacob Effron, Jordan Segall</itunes:author>
      <itunes:duration>00:39:57</itunes:duration>
      <itunes:summary>Mistral AI is often seen as the startup challenging OpenAI and incumbents developing LLMs. On this week’s episode of Unsupervised Learning, we sat down with CEO and Co-Founder at Mistral AI, Arthur Mensch. Arthur shared with us his view on why open-source will prevail, how Mistral gets LLMs into the hands of enterprises, the build vs. partnership decisions, the competitive landscape and future of LLMs, and how he’d regulate AI safety. </itunes:summary>
      <itunes:subtitle>Mistral AI is often seen as the startup challenging OpenAI and incumbents developing LLMs. On this week’s episode of Unsupervised Learning, we sat down with CEO and Co-Founder at Mistral AI, Arthur Mensch. Arthur shared with us his view on why open-source will prevail, how Mistral gets LLMs into the hands of enterprises, the build vs. partnership decisions, the competitive landscape and future of LLMs, and how he’d regulate AI safety. </itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>31</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">fc9df636-905a-453b-afc2-dd9d42955809</guid>
      <title>Ep 30: Superhuman CEO Rahul Vohra on the Future of Email with AI and the Role Agents Will Play</title>
      <description><![CDATA[<p>Superhuman recently launched AI-powered Summarize and Instant Reply features, and has since processed 4 billion emails. On this week’s episode of Unsupervised Learning, we sat down with CEO and Founder at Superhuman, Rahul Vohra. Rahul shared with us what email will look like in the future, the internal product design decisions in building Summarize and Instant Reply, why he’s bullish on the agentic future, and why and how startups should go after incumbents.</p><p> </p><p>(0:00) intro </p><p>(1:20) why email will never die</p><p>(10:46) how ChatGPT changed Superhuman </p><p>(17:01) making design decisions </p><p>(24:34) how Superhuman personalizes email voices </p><p>(28:35) choosing which models to use </p><p>(31:00) how does cost play into decision-making </p><p>(34:10) teaching users how to use AI </p><p>(46:27) competing with incumbents </p><p>(56:57) how work has evolved </p><p>(59:02) how Rahul would redesign Slack / Slack Agent </p><p>(1:04:54) Jacob and Pat debrief</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 19 Mar 2024 14:22:02 +0000</pubDate>
      <author>jeffron@redpoint.com (Rahul Vohra, Patrick Chase, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-30-superhuman-ceo-rahul-vohra-on-the-future-of-email-with-ai-and-the-role-agents-will-play-jG0kCJnK</link>
      <content:encoded><![CDATA[<p>Superhuman recently launched AI-powered Summarize and Instant Reply features, and has since processed 4 billion emails. On this week’s episode of Unsupervised Learning, we sat down with CEO and Founder at Superhuman, Rahul Vohra. Rahul shared with us what email will look like in the future, the internal product design decisions in building Summarize and Instant Reply, why he’s bullish on the agentic future, and why and how startups should go after incumbents.</p><p> </p><p>(0:00) intro </p><p>(1:20) why email will never die</p><p>(10:46) how ChatGPT changed Superhuman </p><p>(17:01) making design decisions </p><p>(24:34) how Superhuman personalizes email voices </p><p>(28:35) choosing which models to use </p><p>(31:00) how does cost play into decision-making </p><p>(34:10) teaching users how to use AI </p><p>(46:27) competing with incumbents </p><p>(56:57) how work has evolved </p><p>(59:02) how Rahul would redesign Slack / Slack Agent </p><p>(1:04:54) Jacob and Pat debrief</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="68836667" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/515bb358-12c4-4da1-988d-d2ed51c22692/audio/9fc083cb-bc80-404e-863f-06f6a1c66651/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 30: Superhuman CEO Rahul Vohra on the Future of Email with AI and the Role Agents Will Play</itunes:title>
      <itunes:author>Rahul Vohra, Patrick Chase, Jacob Effron</itunes:author>
      <itunes:duration>01:11:42</itunes:duration>
      <itunes:summary>Superhuman recently launched AI-powered Summarize and Instant Reply features, and has since processed 4 billion emails. On this week’s episode of Unsupervised Learning, we sat down with Superhuman Founder and CEO Rahul Vohra. Rahul shared with us what email will look like in the future, the internal product design decisions in building Summarize and Instant Reply, why he’s bullish on the agentic future, and why and how startups should go after incumbents.</itunes:summary>
      <itunes:subtitle>Superhuman recently launched AI-powered Summarize and Instant Reply features, and has since processed 4 billion emails. On this week’s episode of Unsupervised Learning, we sat down with Superhuman Founder and CEO Rahul Vohra. Rahul shared with us what email will look like in the future, the internal product design decisions in building Summarize and Instant Reply, why he’s bullish on the agentic future, and why and how startups should go after incumbents.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>30</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">607a7776-eb12-452a-9cb6-5d355a216824</guid>
      <title>Ep 29: Salesforce AI CEO Clara Shih on Future of Slack, How Gucci Uses AI and Working with Marc Benioff</title>
      <description><![CDATA[<p>There’s an ongoing debate about where the most value will accrue in AI between incumbents and startups. Of the incumbents, few have shipped product faster than Salesforce AI. Today on Unsupervised Learning we had on Clara Shih, CEO of Salesforce AI and one of Time Magazine’s 100 Most Influential People in AI. </p><p> </p><p>(0:00) intro</p><p>(0:50) work practices that will become irrelevant</p><p>(1:37) revolutionizing reply recommendations and case summaries</p><p>(4:57) newest Salesforce products</p><p>(5:53) structuring teams</p><p>(7:22) engineering trust into AI products</p><p>(11:58) combining in-house models with ChatGPT</p><p>(13:33) Gucci’s AI adoption</p><p>(16:01) how does Salesforce choose who to share their data with?</p><p>(20:29) AI costs</p><p>(26:29) creating unique voices for brands</p><p>(27:45) AI incumbents vs. startups</p><p>(29:54) what Clara would build if she had the time</p><p>(32:28) the future of Slack</p><p>(35:55) what percent of customer support questions can be answered by AI?</p><p>(38:37) over-hyped/under-hyped</p><p>(39:32) working with Marc Benioff</p><p>(40:46) Jacob and Pat debrief</p><p>(44:42) Slack is the perfect interface for generative AI</p><p>(46:10) Abridge investment</p><p>(48:15) Ideogram investment</p><p> </p><p>With your co-hosts:  </p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health  </p><p>@patrickachase  </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn  </p><p>@ericabrescia  </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)  </p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 6 Mar 2024 13:55:38 +0000</pubDate>
      <author>jeffron@redpoint.com (Clara Shih, Patrick Chase, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-29-salesforce-ai-ceo-clara-shih-on-future-of-slack-how-gucci-uses-ai-and-working-with-marc-benioff-DyDf_SJU</link>
      <content:encoded><![CDATA[<p>There’s an ongoing debate about where the most value will accrue in AI between incumbents and startups. Of the incumbents, few have shipped product faster than Salesforce AI. Today on Unsupervised Learning we had on Clara Shih, CEO of Salesforce AI and one of Time Magazine’s 100 Most Influential People in AI. </p><p> </p><p>(0:00) intro</p><p>(0:50) work practices that will become irrelevant</p><p>(1:37) revolutionizing reply recommendations and case summaries</p><p>(4:57) newest Salesforce products</p><p>(5:53) structuring teams</p><p>(7:22) engineering trust into AI products</p><p>(11:58) combining in-house models with ChatGPT</p><p>(13:33) Gucci’s AI adoption</p><p>(16:01) how does Salesforce choose who to share their data with?</p><p>(20:29) AI costs</p><p>(26:29) creating unique voices for brands</p><p>(27:45) AI incumbents vs. startups</p><p>(29:54) what Clara would build if she had the time</p><p>(32:28) the future of Slack</p><p>(35:55) what percent of customer support questions can be answered by AI?</p><p>(38:37) over-hyped/under-hyped</p><p>(39:32) working with Marc Benioff</p><p>(40:46) Jacob and Pat debrief</p><p>(44:42) Slack is the perfect interface for generative AI</p><p>(46:10) Abridge investment</p><p>(48:15) Ideogram investment</p><p> </p><p>With your co-hosts:  </p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health  </p><p>@patrickachase  </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn  </p><p>@ericabrescia  </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)  </p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="50525457" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/a68bf27b-a0d0-4203-be63-b8f95c919be4/audio/f859c71c-d465-46a9-8625-32e80b3c6eb0/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 29: Salesforce AI CEO Clara Shih on Future of Slack, How Gucci Uses AI and Working with Marc Benioff</itunes:title>
      <itunes:author>Clara Shih, Patrick Chase, Jacob Effron</itunes:author>
      <itunes:duration>00:52:37</itunes:duration>
      <itunes:summary>There’s an ongoing debate about where the most value will accrue in AI between incumbents and startups. Of the incumbents, few have shipped product faster than Salesforce AI. Today on Unsupervised Learning we had on Clara Shih, CEO of Salesforce AI and one of Time Magazine’s 100 Most Influential People in AI. </itunes:summary>
      <itunes:subtitle>There’s an ongoing debate about where the most value will accrue in AI between incumbents and startups. Of the incumbents, few have shipped product faster than Salesforce AI. Today on Unsupervised Learning we had on Clara Shih, CEO of Salesforce AI and one of Time Magazine’s 100 Most Influential People in AI. </itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>29</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">75d2ff18-de4c-4b02-b79f-ac47ca7e7d62</guid>
      <title>Ep 28: LangChain CEO Harrison Chase on the Current State of Eval and Agents and The LLM Apps that Will Define 2024</title>
      <description><![CDATA[<p>Last week LangChain announced a $20M Series A led by Sequoia and released the paid version of LangSmith, which has already been used by 1K+ teams and driven 80K signups. On this week’s episode of Unsupervised Learning, we sat down with LangChain Co-Founder and CEO Harrison Chase to talk about the current state of LLM evaluation, observability, and the agent landscape.</p><p> </p><p>(0:00) intro </p><p>(1:07) applications of AI in the sports world </p><p>(3:26) what does LangChain do? </p><p>(7:51) building with LangSmith </p><p>(10:00) best AI eval practices </p><p>(16:51) to what extent is eval generalizable? </p><p>(21:11) the current agent landscape </p><p>(29:35) balancing present and future at LangChain </p><p>(36:27) using LangServe to deploy LangChain applications </p><p>(41:37) more complex chatbots are coming </p><p>(45:51) current AI practices that will become obsolete </p><p>(48:55) over-hyped/under-hyped </p><p>(49:25) bigger surprise in building LangChain </p><p>(51:50) how ubiquitous will open-source models be in the future? </p><p>(52:43) most exciting AI startups </p><p>(56:07) being an AI “celebrity” </p><p>(58:09) Jacob and Jordan debrief </p><p> </p><p>With your co-hosts:  </p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health  </p><p>@patrickachase  </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn  </p><p>@ericabrescia  </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)  </p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 20 Feb 2024 14:00:00 +0000</pubDate>
      <author>jeffron@redpoint.com (Harrison Chase, Jacob Effron, Jordan Segall)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/langchain-ceo-harrison-chase-on-the-current-state-of-eval-agent-landscape-and-when-to-use-open-vs-closed-source-models-2Jk_TPtq</link>
      <content:encoded><![CDATA[<p>Last week LangChain announced a $20M Series A led by Sequoia and released the paid version of LangSmith, which has already been used by 1K+ teams and driven 80K signups. On this week’s episode of Unsupervised Learning, we sat down with LangChain Co-Founder and CEO Harrison Chase to talk about the current state of LLM evaluation, observability, and the agent landscape.</p><p> </p><p>(0:00) intro </p><p>(1:07) applications of AI in the sports world </p><p>(3:26) what does LangChain do? </p><p>(7:51) building with LangSmith </p><p>(10:00) best AI eval practices </p><p>(16:51) to what extent is eval generalizable? </p><p>(21:11) the current agent landscape </p><p>(29:35) balancing present and future at LangChain </p><p>(36:27) using LangServe to deploy LangChain applications </p><p>(41:37) more complex chatbots are coming </p><p>(45:51) current AI practices that will become obsolete </p><p>(48:55) over-hyped/under-hyped </p><p>(49:25) bigger surprise in building LangChain </p><p>(51:50) how ubiquitous will open-source models be in the future? </p><p>(52:43) most exciting AI startups </p><p>(56:07) being an AI “celebrity” </p><p>(58:09) Jacob and Jordan debrief </p><p> </p><p>With your co-hosts:  </p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health  </p><p>@patrickachase  </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn  </p><p>@ericabrescia  </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)  </p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="62225489" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/bd80eb3b-65c6-4497-8407-1ac0f6291a3c/audio/f42eb3bc-4426-4470-9647-58b54217f655/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 28: LangChain CEO Harrison Chase on the Current State of Eval and Agents and The LLM Apps that Will Define 2024</itunes:title>
      <itunes:author>Harrison Chase, Jacob Effron, Jordan Segall</itunes:author>
      <itunes:duration>01:04:44</itunes:duration>
      <itunes:summary>Last week LangChain announced a $20M Series A led by Sequoia and released the paid version of LangSmith, which has already been used by 1K+ teams and driven 80K signups. On this week’s episode of Unsupervised Learning, we sat down with LangChain Co-Founder and CEO Harrison Chase to talk about the current state of LLM evaluation, observability, and the agent landscape.</itunes:summary>
      <itunes:subtitle>Last week LangChain announced a $20M Series A led by Sequoia and released the paid version of LangSmith, which has already been used by 1K+ teams and driven 80K signups. On this week’s episode of Unsupervised Learning, we sat down with LangChain Co-Founder and CEO Harrison Chase to talk about the current state of LLM evaluation, observability, and the agent landscape.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>28</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">88d0e1e3-7ffd-4d08-a28c-ec6f90bd079b</guid>
      <title>Ep 27: Oscar Co-Founder Mario Schlosser on How LLMs Can Be Used in Healthcare Today and The Path to AI Doctors</title>
      <description><![CDATA[<p>Oscar Health is a $4B public healthcare company, providing health insurance to nearly 1 million members. Oscar is at the forefront of AI adoption, continuously developing new AI use cases in healthcare. On this week’s episode of Unsupervised Learning, we sat down with Oscar Health Co-Founder, former CEO, and now President of Technology Mario Schlosser to talk about where AI will have the biggest impact in healthcare, top AI use cases at Oscar today, AI adoption challenges Oscar is facing, and limitations of GPT-4 in healthcare. Mario also shared his takes on open-source vs. off-the-shelf vs. healthcare-specific LLMs, and why we can't have robot doctors today.</p><p> </p><p>(0:00) intro</p><p>(1:26) how will AI change healthcare in the next decade</p><p>(9:29) how Oscar uses AI</p><p>(19:00) how to build around healthcare requirements</p><p>(26:06) when would GPT-4 fail "miserably" and fundamental limitations of LLMs</p><p>(36:48) we shouldn’t piss off our smartest robots</p><p>(38:35) sharing AI knowledge between companies</p><p>(42:10) developing healthcare-specific models</p><p>(44:55) hackathons and karaoke nights at Oscar</p><p>(49:27) the need for a safety layer in LLMs</p><p>(51:53) best commercial opportunities in healthcare</p><p>(55:39) will there be AI doctors this decade?</p><p>(59:38) over-hyped/under-hyped</p><p> </p><p>With your co-hosts:  </p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health  </p><p>@patrickachase  </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn  </p><p>@ericabrescia  </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)  </p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 13 Feb 2024 14:01:30 +0000</pubDate>
      <author>jeffron@redpoint.com (Mario Schlosser, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-27-oscar-ceo-mario-schlosser-on-how-llms-can-be-used-in-healthcare-today-and-the-path-to-ai-doctors-gaorpWbr</link>
      <content:encoded><![CDATA[<p>Oscar Health is a $4B public healthcare company, providing health insurance to nearly 1 million members. Oscar is at the forefront of AI adoption, continuously developing new AI use cases in healthcare. On this week’s episode of Unsupervised Learning, we sat down with Oscar Health Co-Founder, former CEO, and now President of Technology Mario Schlosser to talk about where AI will have the biggest impact in healthcare, top AI use cases at Oscar today, AI adoption challenges Oscar is facing, and limitations of GPT-4 in healthcare. Mario also shared his takes on open-source vs. off-the-shelf vs. healthcare-specific LLMs, and why we can't have robot doctors today.</p><p> </p><p>(0:00) intro</p><p>(1:26) how will AI change healthcare in the next decade</p><p>(9:29) how Oscar uses AI</p><p>(19:00) how to build around healthcare requirements</p><p>(26:06) when would GPT-4 fail "miserably" and fundamental limitations of LLMs</p><p>(36:48) we shouldn’t piss off our smartest robots</p><p>(38:35) sharing AI knowledge between companies</p><p>(42:10) developing healthcare-specific models</p><p>(44:55) hackathons and karaoke nights at Oscar</p><p>(49:27) the need for a safety layer in LLMs</p><p>(51:53) best commercial opportunities in healthcare</p><p>(55:39) will there be AI doctors this decade?</p><p>(59:38) over-hyped/under-hyped</p><p> </p><p>With your co-hosts:  </p><p>@jacobeffron  </p><p>- Partner at Redpoint, Former PM Flatiron Health  </p><p>@patrickachase  </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn  </p><p>@ericabrescia  </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)  </p><p>@jordan_segall  </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="61275385" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/19e81854-2079-429c-bd28-e201306628f0/audio/bf6a4240-0977-4620-b748-fbc56e5d3a51/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 27: Oscar Co-Founder Mario Schlosser on How LLMs Can Be Used in Healthcare Today and The Path to AI Doctors</itunes:title>
      <itunes:author>Mario Schlosser, Jacob Effron</itunes:author>
      <itunes:duration>01:03:44</itunes:duration>
      <itunes:summary>Oscar Health is a $4B public healthcare company, providing health insurance to nearly 1 million members. Oscar is at the forefront of AI adoption, continuously developing new AI use cases in healthcare. On this week’s episode of Unsupervised Learning, we sat down with Oscar Health Co-Founder, former CEO, and now President of Technology Mario Schlosser to talk about where AI will have the biggest impact in healthcare, top AI use cases at Oscar today, AI adoption challenges Oscar is facing, and limitations of GPT-4 in healthcare. Mario also shared his takes on open-source vs. off-the-shelf vs. healthcare-specific LLMs, and why we can&apos;t have robot doctors today.</itunes:summary>
      <itunes:subtitle>Oscar Health is a $4B public healthcare company, providing health insurance to nearly 1 million members. Oscar is at the forefront of AI adoption, continuously developing new AI use cases in healthcare. On this week’s episode of Unsupervised Learning, we sat down with Oscar Health Co-Founder, former CEO, and now President of Technology Mario Schlosser to talk about where AI will have the biggest impact in healthcare, top AI use cases at Oscar today, AI adoption challenges Oscar is facing, and limitations of GPT-4 in healthcare. Mario also shared his takes on open-source vs. off-the-shelf vs. healthcare-specific LLMs, and why we can&apos;t have robot doctors today.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>27</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">305ffe37-557e-4218-b749-ac5a516a90f1</guid>
      <title>Ep 26: Replit Founder Amjad Masad on the 1000x Engineer, ChatBots are Overhyped and Why We Don’t Really Have True Open-Source Models</title>
      <description><![CDATA[<p>Replit raised nearly $100M at a $1.2B valuation last April and powers over 20M developers. On this week’s episode of Unsupervised Learning, we sat down with Replit Founder and CEO Amjad Masad to talk about the future of software development, how Replit is empowering young users, how Replit developed its own models, and the data advantage Replit has. Amjad also shared his takes on why he’s bullish on agents, where the value in AI will most likely accrue and why open-source models might not be truly open today.</p><p> </p><p>(0:00) intro</p><p>(0:45) advice for new coders</p><p>(6:20) how Replit uses AI</p><p>(10:36) AI’s coding capabilities</p><p>(15:49) what makes the best data</p><p>(20:52) educating new Replit AI users</p><p>(23:46) structuring AI teams</p><p>(27:02) building an in-house model</p><p>(36:54) “the world is gonna get way weirder”</p><p>(38:10) Kim and Taylor teaching calculus</p><p>(44:19) usage-based pricing is going to get more prevalent</p><p>(51:05) will Microsoft win it all?</p><p>(55:00) Llama and vibe-checking AI models</p><p>(57:35) chatbots are overhyped</p><p>(58:18) latency matters</p><p>(59:50) why Sam Altman is the GOAT</p><p>(1:01:16) over 10 years we’ll see companies really shrink in size</p><p>(1:03:51) Jacob and Pat debrief</p><p>(1:05:27) training coding models on random data </p><p>(1:06:36) Amjad’s take on agents</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 6 Feb 2024 14:12:12 +0000</pubDate>
      <author>jeffron@redpoint.com (Amjad Masad, Jacob Effron, Patrick Chase)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-26-replit-co-founder-amjad-masad-on-the-1000x-engineer-chatbots-are-overhyped-and-why-we-dont-really-have-true-open-source-models-dgSSd9co</link>
      <content:encoded><![CDATA[<p>Replit raised nearly $100M at a $1.2B valuation last April and powers over 20M developers. On this week’s episode of Unsupervised Learning, we sat down with Replit Founder and CEO Amjad Masad to talk about the future of software development, how Replit is empowering young users, how Replit developed its own models, and the data advantage Replit has. Amjad also shared his takes on why he’s bullish on agents, where the value in AI will most likely accrue and why open-source models might not be truly open today.</p><p> </p><p>(0:00) intro</p><p>(0:45) advice for new coders</p><p>(6:20) how Replit uses AI</p><p>(10:36) AI’s coding capabilities</p><p>(15:49) what makes the best data</p><p>(20:52) educating new Replit AI users</p><p>(23:46) structuring AI teams</p><p>(27:02) building an in-house model</p><p>(36:54) “the world is gonna get way weirder”</p><p>(38:10) Kim and Taylor teaching calculus</p><p>(44:19) usage-based pricing is going to get more prevalent</p><p>(51:05) will Microsoft win it all?</p><p>(55:00) Llama and vibe-checking AI models</p><p>(57:35) chatbots are overhyped</p><p>(58:18) latency matters</p><p>(59:50) why Sam Altman is the GOAT</p><p>(1:01:16) over 10 years we’ll see companies really shrink in size</p><p>(1:03:51) Jacob and Pat debrief</p><p>(1:05:27) training coding models on random data </p><p>(1:06:36) Amjad’s take on agents</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="67556134" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/0ac80cd1-c780-4778-8c60-89697db3f6a4/audio/a24bc39a-f746-4f68-aa3e-3d319da58bf9/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 26: Replit Founder Amjad Masad on the 1000x Engineer, ChatBots are Overhyped and Why We Don’t Really Have True Open-Source Models</itunes:title>
      <itunes:author>Amjad Masad, Jacob Effron, Patrick Chase</itunes:author>
      <itunes:duration>01:10:22</itunes:duration>
      <itunes:summary>Replit raised nearly $100M at a $1.2B valuation last April and powers over 20M developers. On this week’s episode of Unsupervised Learning, we sat down with Replit Founder and CEO Amjad Masad to talk about the future of software development, how Replit is empowering young users, how Replit developed its own models, and the data advantage Replit has. Amjad also shared his takes on why he’s bullish on agents, where the value in AI will most likely accrue and why open-source models might not be truly open today.</itunes:summary>
      <itunes:subtitle>Replit raised nearly $100M at a $1.2B valuation last April and powers over 20M developers. On this week’s episode of Unsupervised Learning, we sat down with Replit Founder and CEO Amjad Masad to talk about the future of software development, how Replit is empowering young users, how Replit developed its own models, and the data advantage Replit has. Amjad also shared his takes on why he’s bullish on agents, where the value in AI will most likely accrue and why open-source models might not be truly open today.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>26</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">af28eeba-a6c7-40d1-b843-e2eec309aaa7</guid>
      <title>Ep 25: Intercom Co-Founder Des Traynor on Building Winning AI Strategy and Where We Are in the AI Adoption Curve</title>
      <description><![CDATA[<p>Intercom is one of the earliest adopters of AI - its AI product Fin has generated over two million answers and been used by thousands of users since it was launched last March. On this week’s episode of Unsupervised Learning, we sat down with Intercom Co-Founder and Chief Strategy Officer Des Traynor to talk about how AI is incorporated into Intercom, structuring its AI team, using RAG vs. fine-tuning techniques, where we are in the AI adoption curve, and his advice for startups building on top of AI. </p><p> </p><p>(0:00) intro </p><p>(0:31) Intercom reaction to ChatGPT </p><p>(3:18) how AI is incorporated into Intercom products </p><p>(6:33) guardrails preventing hallucinations </p><p>(9:49) exploration versus optimizing cost </p><p>(19:00) structuring AI teams </p><p>(31:17) fine-tuning for customers vs RAG </p><p>(37:03) solving the 'actions' problem </p><p>(38:38) lessons learned from transitioning into AI </p><p>(44:28) over-hyped/under-hyped </p><p>(45:53) companies that have implemented AI well/poorly </p><p>(48:27) Jacob and Jordan debrief</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 24 Jan 2024 14:00:00 +0000</pubDate>
      <author>jeffron@redpoint.com (Des Traynor, Jacob Effron, Jordan Segall)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-25-intercom-co-founder-on-building-winning-ai-strategy-where-we-are-in-the-ai-adoption-curve-and-how-ai-startups-could-win-AkdhUGXw</link>
      <content:encoded><![CDATA[<p>Intercom is one of the earliest adopters of AI - its AI product Fin has generated over two million answers and been used by thousands of users since it was launched last March. On this week’s episode of Unsupervised Learning, we sat down with Intercom Co-Founder and Chief Strategy Officer Des Traynor to talk about how AI is incorporated into Intercom, structuring its AI team, using RAG vs. fine-tuning techniques, where we are in the AI adoption curve, and his advice for startups building on top of AI. </p><p> </p><p>(0:00) intro </p><p>(0:31) Intercom reaction to ChatGPT </p><p>(3:18) how AI is incorporated into Intercom products </p><p>(6:33) guardrails preventing hallucinations </p><p>(9:49) exploration versus optimizing cost </p><p>(19:00) structuring AI teams </p><p>(31:17) fine-tuning for customers vs RAG </p><p>(37:03) solving the 'actions' problem </p><p>(38:38) lessons learned from transitioning into AI </p><p>(44:28) over-hyped/under-hyped </p><p>(45:53) companies that have implemented AI well/poorly </p><p>(48:27) Jacob and Jordan debrief</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="55064474" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/c0726aff-799f-4200-92bc-8a3d61a587dc/audio/a25a92bd-01ee-4730-859a-efa900764c91/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 25: Intercom Co-Founder Des Traynor on Building Winning AI Strategy and Where We Are in the AI Adoption Curve</itunes:title>
      <itunes:author>Des Traynor, Jacob Effron, Jordan Segall</itunes:author>
      <itunes:duration>00:57:15</itunes:duration>
      <itunes:summary>Intercom is one of the earliest adopters of AI - its AI product Fin has generated over two million answers and been used by thousands of users since it was launched last March. On this week’s episode of Unsupervised Learning, we sat down with Intercom Co-Founder and Chief Strategy Officer Des Traynor to talk about how AI is incorporated into Intercom, structuring its AI team, using RAG vs. fine-tuning techniques, where we are in the AI adoption curve, and his advice for startups building on top of AI. </itunes:summary>
      <itunes:subtitle>Intercom is one of the earliest adopters of AI - its AI product Fin has generated over two million answers and been used by thousands of users since it was launched last March. On this week’s episode of Unsupervised Learning, we sat down with Intercom Co-Founder and Chief Strategy Officer Des Traynor to talk about how AI is incorporated into Intercom, structuring its AI team, using RAG vs. fine-tuning techniques, where we are in the AI adoption curve, and his advice for startups building on top of AI. </itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>25</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">74785089-5015-4703-8290-5d993030d8b2</guid>
      <title>Ep 24: OpenAI Head of DevRel Logan Kilpatrick on The Best ChatGPT Use Cases, Future of Agents, and Google Gemini</title>
      <description><![CDATA[<p>OpenAI's inaugural DevDay sparked excitement in the AI community, with several product releases and ChatGPT hitting the milestone of 100M weekly active users. On this week’s episode of Unsupervised Learning, we sat down with the Head of Developer Relations at OpenAI, Logan Kilpatrick. Logan shared with us how OpenAI prioritizes product builds internally, the interesting use cases he's seen for several OpenAI products, where OpenAI is headed, and what the Gemini release means for the ecosystem.</p><p> </p><p>(0:00) intro</p><p>(0:33) how Logan uses ChatGPT</p><p>(1:36) underrated OpenAI products</p><p>(6:08) when is using GPT-4 necessary?</p><p>(7:22) custom GPT models</p><p>(9:05) are we at peak need for custom models in 2024?</p><p>(11:45) how does OpenAI prioritize products</p><p>(13:31) OpenAI’s text-to-speech model</p><p>(14:31) benefits of using open-source models</p><p>(21:00) what kind of company would Logan start if he left OpenAI?</p><p>(23:40) Google Gemini</p><p>(24:41) assistants API</p><p>(30:00) the need for a text-first AI-assistant experience</p><p>(35:18) putting limitations on agents</p><p>(42:18) the future of DALL-E and art generation</p><p>(48:00) over-hyped/under-hyped</p><p>(48:30) rare disappointments for OpenAI</p><p>(49:25) surprise successes for OpenAI</p><p>(50:03) how has OpenAI’s team developed?</p><p>(58:22) debrief with Pat</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 9 Jan 2024 17:10:22 +0000</pubDate>
      <author>jeffron@redpoint.com (Logan Kilpatrick, Jacob Effron, Patrick Chase)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-24-openai-head-of-devrel-logan-kilpatrick-on-the-best-chatgpt-use-cases-future-of-agents-and-google-gemini-bDVvHil4</link>
      <content:encoded><![CDATA[<p>OpenAI's inaugural DevDay sparked excitement in the AI community, with several product releases and ChatGPT hitting the milestone of 100M weekly active users. On this week’s episode of Unsupervised Learning, we sat down with the Head of Developer Relations at OpenAI, Logan Kilpatrick. Logan shared with us how OpenAI prioritizes product builds internally, the interesting use cases he's seen for several OpenAI products, where OpenAI is headed, and what the Gemini release means for the ecosystem.</p><p> </p><p>(0:00) intro</p><p>(0:33) how Logan uses ChatGPT</p><p>(1:36) underrated OpenAI products</p><p>(6:08) when is using GPT-4 necessary?</p><p>(7:22) custom GPT models</p><p>(9:05) are we at peak need for custom models in 2024?</p><p>(11:45) how does OpenAI prioritize products</p><p>(13:31) OpenAI’s text-to-speech model</p><p>(14:31) benefits of using open-source models</p><p>(21:00) what kind of company would Logan start if he left OpenAI?</p><p>(23:40) Google Gemini</p><p>(24:41) assistants API</p><p>(30:00) the need for a text-first AI-assistant experience</p><p>(35:18) putting limitations on agents</p><p>(42:18) the future of DALL-E and art generation</p><p>(48:00) over-hyped/under-hyped</p><p>(48:30) rare disappointments for OpenAI</p><p>(49:25) surprise successes for OpenAI</p><p>(50:03) how has OpenAI’s team developed?</p><p>(58:22) debrief with Pat</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="64124908" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/82c02f82-5067-4d21-9e21-dbb6811c1042/audio/52eff759-2c37-4bab-a466-e3fb71f307e2/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 24: OpenAI Head of DevRel Logan Kilpatrick on The Best ChatGPT Use Cases, Future of Agents, and Google Gemini</itunes:title>
      <itunes:author>Logan Kilpatrick, Jacob Effron, Patrick Chase</itunes:author>
      <itunes:duration>01:06:37</itunes:duration>
      <itunes:summary>OpenAI&apos;s inaugural DevDay sparked excitement in the AI community, with several product releases and ChatGPT hitting the milestone of 100M weekly active users. On this week’s episode of Unsupervised Learning, we sat down with the Head of Developer Relations at OpenAI, Logan Kilpatrick. Logan shared with us how OpenAI prioritizes product builds internally, the interesting use cases he&apos;s seen for several OpenAI products, where OpenAI is headed, and what the Gemini release means for the ecosystem.</itunes:summary>
      <itunes:subtitle>OpenAI&apos;s inaugural DevDay sparked excitement in the AI community, with several product releases and ChatGPT hitting the milestone of 100M weekly active users. On this week’s episode of Unsupervised Learning, we sat down with the Head of Developer Relations at OpenAI, Logan Kilpatrick. Logan shared with us how OpenAI prioritizes product builds internally, the interesting use cases he&apos;s seen for several OpenAI products, where OpenAI is headed, and what the Gemini release means for the ecosystem.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>24</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">07d5e566-05c0-492a-813e-eda7cd4efdf5</guid>
      <title>Ep 23: Perplexity CEO Aravind Srinivas on the future of Search, OpenAI Wrappers and Using AI to Talk to Loved Ones</title>
      <description><![CDATA[<p>Perplexity is a next-gen search tool going after Google, with 1M Android app installs and 1M iOS installs within only 8 months of product launch. On this week’s episode of Unsupervised Learning, we sat down with the CEO and Co-Founder of Perplexity AI, Aravind Srinivas. Aravind shared with us the behind-the-scenes stories of how Perplexity AI was born (37:19), how he thinks about Perplexity being viewed as a "wrapper" (24:01), where search will be in 10 years (23:08), and where Perplexity is headed.</p><p> </p><p>(0:00) intro</p><p>(0:48) the simplicity of Perplexity</p><p>(5:16) how Perplexity allocates resources</p><p>(7:09) don’t waste your time building your own models</p><p>(11:39) being a “wrapper” for OpenAI</p><p>(14:37) the future of Quora and Wikipedia</p><p>(19:38) what does it take to compete with Google</p><p>(23:08) what does search look like in 10 years</p><p>(24:01) showing users that Perplexity is more than a “wrapper”</p><p>(27:28) RAG solutions and solving hallucinations</p><p>(30:13) guiding users’ questions</p><p>(32:44) discover tab</p><p>(35:48) attracting new users vs. pleasing “power users”</p><p>(37:19) how Perplexity landed on search</p><p>(53:01) should AI be regulated?</p><p>(54:56) what other company would Aravind work at</p><p>(56:56) OpenAI</p><p>(58:42) Jacob and Pat debrief</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Thu, 14 Dec 2023 14:10:31 +0000</pubDate>
      <author>jeffron@redpoint.com (Aravind Srinivas, Jacob Effron, Patrick Chase)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-23-perplexity-ceo-aravind-srinivas-on-the-future-of-search-openai-wrappers-and-using-ai-to-talk-to-loved-ones-jTkHZhQG</link>
      <content:encoded><![CDATA[<p>Perplexity is a next-gen search tool going after Google, with 1M Android app installs and 1M iOS installs within only 8 months of product launch. On this week’s episode of Unsupervised Learning, we sat down with the CEO and Co-Founder of Perplexity AI, Aravind Srinivas. Aravind shared with us the behind-the-scenes stories of how Perplexity AI was born (37:19), how he thinks about Perplexity being viewed as a "wrapper" (24:01), where search will be in 10 years (23:08), and where Perplexity is headed.</p><p> </p><p>(0:00) intro</p><p>(0:48) the simplicity of Perplexity</p><p>(5:16) how Perplexity allocates resources</p><p>(7:09) don’t waste your time building your own models</p><p>(11:39) being a “wrapper” for OpenAI</p><p>(14:37) the future of Quora and Wikipedia</p><p>(19:38) what does it take to compete with Google</p><p>(23:08) what does search look like in 10 years</p><p>(24:01) showing users that Perplexity is more than a “wrapper”</p><p>(27:28) RAG solutions and solving hallucinations</p><p>(30:13) guiding users’ questions</p><p>(32:44) discover tab</p><p>(35:48) attracting new users vs. pleasing “power users”</p><p>(37:19) how Perplexity landed on search</p><p>(53:01) should AI be regulated?</p><p>(54:56) what other company would Aravind work at</p><p>(56:56) OpenAI</p><p>(58:42) Jacob and Pat debrief</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="63496497" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/e7b83218-3c2e-4e98-9d36-4be81b0e146a/audio/5b91fde3-9239-4005-94ea-8787402be88d/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 23: Perplexity CEO Aravind Srinivas on the future of Search, OpenAI Wrappers and Using AI to Talk to Loved Ones</itunes:title>
      <itunes:author>Aravind Srinivas, Jacob Effron, Patrick Chase</itunes:author>
      <itunes:duration>01:06:08</itunes:duration>
      <itunes:summary>Perplexity is a next-gen search tool going after Google, with 1M Android app installs and 1M iOS installs within only 8 months of product launch. On this week’s episode of Unsupervised Learning, we sat down with the CEO and Co-Founder of Perplexity AI, Aravind Srinivas. Aravind shared with us the behind-the-scenes stories of how Perplexity AI was born, how he thinks about Perplexity being viewed as a &quot;wrapper&quot;, where search will be in 10 years, and where Perplexity is headed.</itunes:summary>
      <itunes:subtitle>Perplexity is a next-gen search tool going after Google, with 1M Android app installs and 1M iOS installs within only 8 months of product launch. On this week’s episode of Unsupervised Learning, we sat down with the CEO and Co-Founder of Perplexity AI, Aravind Srinivas. Aravind shared with us the behind-the-scenes stories of how Perplexity AI was born, how he thinks about Perplexity being viewed as a &quot;wrapper&quot;, where search will be in 10 years, and where Perplexity is headed.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>23</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">54f62489-d837-4c7c-9fe5-afd98809cb36</guid>
      <title>Special Episode: A Full Breakdown of the OpenAI Saga and What It Means for AI Founders</title>
      <description><![CDATA[<p>In light of one of the biggest news stories in AI, we’ve put together a special episode to discuss the ramifications of Sam Altman’s firing from OpenAI. Regardless of what happens between now and when you’re listening to this, the implications of the events that happened over the past few days are certainly worth unpacking. </p><p>About our guests: </p><ul><li>Alex Konrad, a journalist at Forbes covering Venture Capital and Tech </li><li>Jason Warner, former CTO at GitHub, partner at Redpoint, and now Founder of AGI start-up Poolside (https://www.poolside.ai/)</li></ul><p>We go over what the implications are for AI Startups, who came away as the biggest winners, what’s going on behind the scenes and more. Hopefully you enjoy this episode. </p><p> </p><p>With your co-hosts: </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 22 Nov 2023 02:17:07 +0000</pubDate>
      <author>jeffron@redpoint.com (Alex Konrad, Jacob Effron, Jason Warner)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/special-episode-a-full-breakdown-of-the-openai-saga-and-what-it-means-for-ai-founders-bR1DzB1O</link>
      <content:encoded><![CDATA[<p>In light of one of the biggest news stories in AI, we’ve put together a special episode to discuss the ramifications of Sam Altman’s firing from OpenAI. Regardless of what happens between now and when you’re listening to this, the implications of the events that happened over the past few days are certainly worth unpacking. </p><p>About our guests: </p><ul><li>Alex Konrad, a journalist at Forbes covering Venture Capital and Tech </li><li>Jason Warner, former CTO at GitHub, partner at Redpoint, and now Founder of AGI start-up Poolside (https://www.poolside.ai/)</li></ul><p>We go over what the implications are for AI Startups, who came away as the biggest winners, what’s going on behind the scenes and more. Hopefully you enjoy this episode. </p><p> </p><p>With your co-hosts: </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="45135874" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/14683557-519c-47b2-a8e7-e3cc2d803262/audio/6de93d17-43e9-4f35-838a-357ac8545ca8/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Special Episode: A Full Breakdown of the OpenAI Saga and What It Means for AI Founders</itunes:title>
      <itunes:author>Alex Konrad, Jacob Effron, Jason Warner</itunes:author>
      <itunes:duration>00:47:00</itunes:duration>
      <itunes:summary>In light of one of the biggest news stories in AI, we’ve put together a special episode to discuss the ramifications of Sam Altman’s firing from OpenAI. Regardless of what happens between now and when you’re listening to this, the implications of the events that happened over the past few days are certainly worth unpacking. </itunes:summary>
      <itunes:subtitle>In light of one of the biggest news stories in AI, we’ve put together a special episode to discuss the ramifications of Sam Altman’s firing from OpenAI. Regardless of what happens between now and when you’re listening to this, the implications of the events that happened over the past few days are certainly worth unpacking. </itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>bonus</itunes:episodeType>
    </item>
    <item>
      <guid isPermaLink="false">9d18f290-5801-4dcc-acc2-a04619b474c3</guid>
      <title>Ep 22: Notion AI Engineer Linus Lee: Behind the Scenes of Notion AI</title>
      <description><![CDATA[<p>Linus Lee is an AI engineer at Notion, one of the earliest and most effective adopters of AI. In the episode, Linus shares how Notion developed its AI products, including Writer, Autofill, and Q&A, which just launched on Tuesday. It was fascinating to learn how Notion structures its AI team and dogfoods its development process. Linus also explores what’s hardest to anticipate when going to market with new AI features, and how Notion thinks about its LLM partnerships. Overall, a wide-ranging conversation about the behind-the-scenes stories of one of the most widely used AI tools today.</p><p> </p><p>(0:00) intro</p><p>(0:37) T-Swift</p><p>(2:07) Notion AI</p><p>(9:08) approach to staffing</p><p>(16:51) educating users and user behavior</p><p>(22:32) challenges in developing Notion Q&A</p><p>(30:42) working with Anthropic and OpenAI</p><p>(35:50) avoiding hallucinations</p><p>(36:23) switching AI models</p><p>(39:32) iterating on interfaces</p><p>(42:03) over-hyped/under-hyped</p><p>(48:03) Midjourney</p><p>(51:07) Pat and Jacob debrief</p><p> </p><p>With your co-hosts: </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Thu, 16 Nov 2023 17:56:21 +0000</pubDate>
      <author>jeffron@redpoint.com (Linus Lee, Jacob Effron, Patrick Chase)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-22-notion-ai-engineer-linus-lee-behind-the-scenes-of-notion-ai-__ez7N6k</link>
      <content:encoded><![CDATA[<p>Linus Lee is an AI engineer at Notion, one of the earliest and most effective adopters of AI. In the episode, Linus shares how Notion developed its AI products, including Writer, Autofill, and Q&A, which just launched on Tuesday. It was fascinating to learn how Notion structures its AI team and dogfoods its development process. Linus also explores what’s hardest to anticipate when going to market with new AI features, and how Notion thinks about its LLM partnerships. Overall, a wide-ranging conversation about the behind-the-scenes stories of one of the most widely used AI tools today.</p><p> </p><p>(0:00) intro</p><p>(0:37) T-Swift</p><p>(2:07) Notion AI</p><p>(9:08) approach to staffing</p><p>(16:51) educating users and user behavior</p><p>(22:32) challenges in developing Notion Q&A</p><p>(30:42) working with Anthropic and OpenAI</p><p>(35:50) avoiding hallucinations</p><p>(36:23) switching AI models</p><p>(39:32) iterating on interfaces</p><p>(42:03) over-hyped/under-hyped</p><p>(48:03) Midjourney</p><p>(51:07) Pat and Jacob debrief</p><p> </p><p>With your co-hosts: </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="59317741" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/5e57c634-b099-4a84-bd6a-65e32586c2ba/audio/5deb348a-fcdb-46d7-a4aa-44ffa91caa79/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 22: Notion AI Engineer Linus Lee: Behind the Scenes of Notion AI</itunes:title>
      <itunes:author>Linus Lee, Jacob Effron, Patrick Chase</itunes:author>
      <itunes:duration>01:01:47</itunes:duration>
      <itunes:summary>Linus Lee is an AI engineer at Notion, one of the earliest and most effective adopters of AI. In the episode, Linus shares how Notion developed its AI products, including Writer, Autofill, and Q&amp;A, which just launched on Tuesday. It was fascinating to learn how Notion structures its AI team and dogfoods its development process. Linus also explores what’s hardest to anticipate when going to market with new AI features, and how Notion thinks about its LLM partnerships. Overall, a wide-ranging conversation about the behind-the-scenes stories of one of the most widely used AI tools today.</itunes:summary>
      <itunes:subtitle>Linus Lee is an AI engineer at Notion, one of the earliest and most effective adopters of AI. In the episode, Linus shares how Notion developed its AI products, including Writer, Autofill, and Q&amp;A, which just launched on Tuesday. It was fascinating to learn how Notion structures its AI team and dogfoods its development process. Linus also explores what’s hardest to anticipate when going to market with new AI features, and how Notion thinks about its LLM partnerships. Overall, a wide-ranging conversation about the behind-the-scenes stories of one of the most widely used AI tools today.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>22</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d18ff381-c85a-4b51-9de2-2f5265099661</guid>
      <title>Ep 21: Modal CEO Erik Bernhardsson on Bringing Development to the Cloud, the GPU Market, and GenAI Music</title>
      <description><![CDATA[<p>Jacob and Pat sit down with Erik Bernhardsson, the founder of Modal Labs, a data infrastructure company providing GPU compute to data teams. On this episode we discussed Erik’s thoughts on the AI chip market, the most popular GenAI use cases on Modal, and even Oracle Cloud’s resurgence in the AI start-up market.</p><p> </p><p>0:00 intro</p><p>0:45 motivation for founding Modal</p><p>6:35 advantages that Modal gives developers</p><p>9:21 early applications built with Modal</p><p>11:58 challenges for AI developers</p><p>16:31 GPU access today</p><p>20:09 Vector DB companies</p><p>24:55 why is cloud adoption so slow?</p><p>31:30 Oracle Cloud</p><p>39:22 AI music generation</p><p>42:05 over-hyped/under-hyped</p><p>43:26 what Erik wishes he knew when starting Modal</p><p>45:53 episode debrief</p><p> </p><p>With your co-hosts: </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 31 Oct 2023 17:43:32 +0000</pubDate>
      <author>jeffron@redpoint.com (Erik Bernhardsson, Jacob Effron, Patrick Chase)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-21-erik-bernhardsson-founder-modal-on-the-gpu-market-bringing-development-to-the-cloud-and-genai-music-Ar3NIz5I</link>
      <content:encoded><![CDATA[<p>Jacob and Pat sit down with Erik Bernhardsson, the founder of Modal Labs, a data infrastructure company providing GPU compute to data teams. On this episode we discussed Erik’s thoughts on the AI chip market, the most popular GenAI use cases on Modal, and even Oracle Cloud’s resurgence in the AI start-up market.</p><p> </p><p>0:00 intro</p><p>0:45 motivation for founding Modal</p><p>6:35 advantages that Modal gives developers</p><p>9:21 early applications built with Modal</p><p>11:58 challenges for AI developers</p><p>16:31 GPU access today</p><p>20:09 Vector DB companies</p><p>24:55 why is cloud adoption so slow?</p><p>31:30 Oracle Cloud</p><p>39:22 AI music generation</p><p>42:05 over-hyped/under-hyped</p><p>43:26 what Erik wishes he knew when starting Modal</p><p>45:53 episode debrief</p><p> </p><p>With your co-hosts: </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="51837431" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/1700d363-ff3e-4c75-a48f-8391e054ab52/audio/43ff2a6d-e1b0-4ad0-b4da-42708deb153c/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 21: Modal CEO Erik Bernhardsson on Bringing Development to the Cloud, the GPU Market, and GenAI Music</itunes:title>
      <itunes:author>Erik Bernhardsson, Jacob Effron, Patrick Chase</itunes:author>
      <itunes:duration>00:53:59</itunes:duration>
      <itunes:summary>Jacob and Pat sit down with Erik Bernhardsson, the founder of Modal Labs, a data infrastructure company providing GPU compute to data teams. On this episode we discussed Erik’s thoughts on the AI chip market, the most popular GenAI use cases on Modal, and even Oracle Cloud’s resurgence in the AI start-up market.</itunes:summary>
      <itunes:subtitle>Jacob and Pat sit down with Erik Bernhardsson, the founder of Modal Labs, a data infrastructure company providing GPU compute to data teams. On this episode we discussed Erik’s thoughts on the AI chip market, the most popular GenAI use cases on Modal, and even Oracle Cloud’s resurgence in the AI start-up market.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>21</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">379d0953-2f8b-4868-8e49-1b31f89b973a</guid>
      <title>Ep 20: Anthropic CEO Dario Amodei on the Future of AGI, Leading Anthropic, and AI Doom Chances</title>
      <description><![CDATA[<p>Special Crossover Episode: We're excited to share this conversation from "The Logan Bartlett Show," another Redpoint podcast that focuses on untold stories from tech's inner circle. In the episode, Logan Bartlett interviews Dario Amodei (CEO, Anthropic) on the future of AI.</p><p><br />In the episode, Dario gives detailed predictions on the AI industry for 2024, 2025, and beyond. He discusses his days at OpenAI, leaving to start Anthropic, why he doesn’t like the term AGI, what AI developments he’s most excited about right now, and much more. Sharing here because we believe our AI enthusiasts will find this episode particularly enlightening. Enjoy!</p><p> </p><p>(0:00) Intro </p><p>(0:40) Joining OpenAI </p><p>(13:51) Are scaling and AI safety intertwined? </p><p>(19:51) Anthropic Early Days </p><p>(23:24) Amazon's Investment in Anthropic </p><p>(23:39) FTX investment in Anthropic </p><p>(25:10) Anthropic's Business Today </p><p>(30:11) Dario's Advice For Builders </p><p>(33:14) Should we pause AI progress? </p><p>(35:47) Future of AI </p><p>(37:15) Dario's Biggest AI Safety Concerns </p><p>(44:17) How Anthropic Deals With AI Bias </p><p>(48:49) Anthropic's Responsible Scaling Policy </p><p>(55:56) Testifying in front of Congress </p><p>(58:45) Will AI destroy humanity? </p><p>(59:20) GPT3 vs GPT4 </p><p>(1:01:46) The memification of a CEO </p><p>(1:08:50) What are you most surprised by with AI? </p><p>(1:16:23) Why don't you like the term AGI? 
</p><p>(1:21:10) 2024 AI Predictions </p><p>(1:33:05) Dario's opinion on open-source models </p><p>(1:37:23) Probability of AI Catastrophe </p><p>(1:40:12) Misuse of AI </p><p>(1:44:04) Looking ahead: Dario's optimistic outlook on AI</p><p> </p><p>With your co-hosts: </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Mon, 16 Oct 2023 16:07:37 +0000</pubDate>
      <author>jeffron@redpoint.com (Logan Bartlett, Patrick Chase, Dario Amodei)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ul-with-dario-draft-BWv4MtLs</link>
      <content:encoded><![CDATA[<p>Special Crossover Episode: We're excited to share this conversation from "The Logan Bartlett Show," another Redpoint podcast that focuses on untold stories from tech's inner circle. In the episode, Logan Bartlett interviews Dario Amodei (CEO, Anthropic) on the future of AI.</p><p><br />In the episode, Dario gives detailed predictions on the AI industry for 2024, 2025, and beyond. He discusses his days at OpenAI, leaving to start Anthropic, why he doesn’t like the term AGI, what AI developments he’s most excited about right now, and much more. Sharing here because we believe our AI enthusiasts will find this episode particularly enlightening. Enjoy!</p><p> </p><p>(0:00) Intro </p><p>(0:40) Joining OpenAI </p><p>(13:51) Are scaling and AI safety intertwined? </p><p>(19:51) Anthropic Early Days </p><p>(23:24) Amazon's Investment in Anthropic </p><p>(23:39) FTX investment in Anthropic </p><p>(25:10) Anthropic's Business Today </p><p>(30:11) Dario's Advice For Builders </p><p>(33:14) Should we pause AI progress? </p><p>(35:47) Future of AI </p><p>(37:15) Dario's Biggest AI Safety Concerns </p><p>(44:17) How Anthropic Deals With AI Bias </p><p>(48:49) Anthropic's Responsible Scaling Policy </p><p>(55:56) Testifying in front of Congress </p><p>(58:45) Will AI destroy humanity? </p><p>(59:20) GPT3 vs GPT4 </p><p>(1:01:46) The memification of a CEO </p><p>(1:08:50) What are you most surprised by with AI? </p><p>(1:16:23) Why don't you like the term AGI? 
</p><p>(1:21:10) 2024 AI Predictions </p><p>(1:33:05) Dario's opinion on open-source models </p><p>(1:37:23) Probability of AI Catastrophe </p><p>(1:40:12) Misuse of AI </p><p>(1:44:04) Looking ahead: Dario's optimistic outlook on AI</p><p> </p><p>With your co-hosts: </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="104754408" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/f331cb06-ce0e-4ca4-84f9-a97cc428c9e8/audio/526de1d7-8084-474e-87ca-c15fb71561bc/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 20: Anthropic CEO Dario Amodei on the Future of AGI, Leading Anthropic, and AI Doom Chances</itunes:title>
      <itunes:author>Logan Bartlett, Patrick Chase, Dario Amodei</itunes:author>
      <itunes:duration>01:49:07</itunes:duration>
      <itunes:summary>Special Crossover Episode: We&apos;re excited to share this conversation from &quot;The Logan Bartlett Show,&quot; another Redpoint podcast that focuses on untold stories from tech&apos;s inner circle. In the episode, Logan Bartlett interviews Dario Amodei (CEO, Anthropic) on the future of AI.

In the episode, Dario gives detailed predictions on the AI industry for 2024, 2025, and beyond. He discusses his days at OpenAI, leaving to start Anthropic, why he doesn’t like the term AGI, what AI developments he’s most excited about right now, and much more. Sharing here because we believe our AI enthusiasts will find this episode particularly enlightening. Enjoy!</itunes:summary>
      <itunes:subtitle>Special Crossover Episode: We&apos;re excited to share this conversation from &quot;The Logan Bartlett Show,&quot; another Redpoint podcast that focuses on untold stories from tech&apos;s inner circle. In the episode, Logan Bartlett interviews Dario Amodei (CEO, Anthropic) on the future of AI.

In the episode, Dario gives detailed predictions on the AI industry for 2024, 2025, and beyond. He discusses his days at OpenAI, leaving to start Anthropic, why he doesn’t like the term AGI, what AI developments he’s most excited about right now, and much more. Sharing here because we believe our AI enthusiasts will find this episode particularly enlightening. Enjoy!</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>20</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8ce8ee65-ed2e-43b1-bd6b-eb66504a2a3a</guid>
      <title>Ep 19: Tome CEO Keith Peiris on Disrupting Powerpoint and Scaling To Millions of Users</title>
      <description><![CDATA[<p>Jacob and Pat sit down with Tome Co-Founder and CEO Keith Peiris to discuss Tome’s go-to-market strategy, cutting through the “AI tourists” to identify their ideal customer profile (ICP), and the hardware cost considerations that come with enterprise scale.</p><p> </p><p>0:00 intro</p><p>1:35 founding Tome</p><p>5:05 designing Tome</p><p>10:30 how users want to interact with AI</p><p>12:26 teaching users how to use Tome</p><p>20:13 partnering with model providers vs. building your own</p><p>28:43 model evaluation</p><p>31:23 building an enterprise product</p><p>37:48 thinking about pricing</p><p>43:43 Keith’s favorite Tome projects</p><p>44:26 over-hyped/under-hyped</p><p> </p><p>With your co-hosts: </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Thu, 28 Sep 2023 17:08:37 +0000</pubDate>
      <author>jeffron@redpoint.com (Keith Peiris, Patrick Chase, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-19-tome-ceo-keith-peiris-on-generative-ai-conversational-design-and-disrupting-powerpoint-EcTZ6Wki</link>
      <content:encoded><![CDATA[<p>Jacob and Pat sit down with Tome Co-Founder and CEO Keith Peiris to discuss Tome’s go-to-market strategy, cutting through the “AI tourists” to identify their ideal customer profile (ICP), and the hardware cost considerations that come with enterprise scale.</p><p> </p><p>0:00 intro</p><p>1:35 founding Tome</p><p>5:05 designing Tome</p><p>10:30 how users want to interact with AI</p><p>12:26 teaching users how to use Tome</p><p>20:13 partnering with model providers vs. building your own</p><p>28:43 model evaluation</p><p>31:23 building an enterprise product</p><p>37:48 thinking about pricing</p><p>43:43 Keith’s favorite Tome projects</p><p>44:26 over-hyped/under-hyped</p><p> </p><p>With your co-hosts: </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="45012211" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/58c911fa-7803-42a0-a3bf-45c1994400fa/audio/44197b45-beb9-44c7-a46b-8344686c51a1/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 19: Tome CEO Keith Peiris on Disrupting Powerpoint and Scaling To Millions of Users</itunes:title>
      <itunes:author>Keith Peiris, Patrick Chase, Jacob Effron</itunes:author>
      <itunes:duration>00:46:53</itunes:duration>
      <itunes:summary>Jacob and Pat sit down with Tome Co-Founder and CEO Keith Peiris to discuss Tome’s go-to-market strategy, cutting through the “AI tourists” to identify their ideal customer profile (ICP), and the hardware cost considerations that come with enterprise scale.</itunes:summary>
      <itunes:subtitle>Jacob and Pat sit down with Tome Co-Founder and CEO Keith Peiris to discuss Tome’s go-to-market strategy, cutting through the “AI tourists” to identify their ideal customer profile (ICP), and the hardware cost considerations that come with enterprise scale.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>19</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2b92948d-572e-4667-9046-8e8866f841e9</guid>
      <title>Ep 18: LlamaIndex CEO Jerry Liu on Trends in LLM Applications</title>
      <description><![CDATA[<p>Jacob and Pat sit down with LlamaIndex CEO Jerry Liu to discuss his motivations for building LlamaIndex, thoughts on building enterprise-ready LLM applications and agents, and when fine-tuning makes sense.</p><p> </p><p>0:00 intro</p><p>1:02 the evolution of LlamaIndex</p><p>3:48 apps being built with LlamaIndex</p><p>6:39 making agents more effective</p><p>12:58 retrieval augmented generation</p><p>16:49 what’s the right level of abstraction for LlamaIndex?</p><p>19:42 balancing reasoning and knowledge</p><p>30:46 storage for embeddings</p><p>36:03 underutilized features of LlamaIndex</p><p>40:38 over-hyped/under-hyped</p><p> </p><p>With your co-hosts:</p><p>@ericabrescia</p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)</p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p><p>@jordan_segall</p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 19 Sep 2023 16:00:00 +0000</pubDate>
      <author>jeffron@redpoint.com (Jerry Liu, Patrick Chase, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-18-llamaindex-ceo-jerry-liu-on-trends-in-llm-applications-1KqSmF0C</link>
      <content:encoded><![CDATA[<p>Jacob and Pat sit down with LlamaIndex CEO Jerry Liu to discuss his motivations for building LlamaIndex, thoughts on building enterprise-ready LLM applications and agents, and when fine-tuning makes sense.</p><p> </p><p>0:00 intro</p><p>1:02 the evolution of LlamaIndex</p><p>3:48 apps being built with LlamaIndex</p><p>6:39 making agents more effective</p><p>12:58 retrieval augmented generation</p><p>16:49 what’s the right level of abstraction for LlamaIndex?</p><p>19:42 balancing reasoning and knowledge</p><p>30:46 storage for embeddings</p><p>36:03 underutilized features of LlamaIndex</p><p>40:38 over-hyped/under-hyped</p><p> </p><p>With your co-hosts:</p><p>@ericabrescia</p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)</p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p><p>@jordan_segall</p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="45372078" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/c0b4cd44-e9cb-480b-b08b-d356d221285c/audio/ccdf80ec-8e53-4d1b-a598-47bc52eaa0a7/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 18: LlamaIndex CEO Jerry Liu on Trends in LLM Applications</itunes:title>
      <itunes:author>Jerry Liu, Patrick Chase, Jacob Effron</itunes:author>
      <itunes:duration>00:47:15</itunes:duration>
      <itunes:summary>Jacob and Pat sit down with LlamaIndex CEO Jerry Liu to discuss his motivations for building LlamaIndex, thoughts on building enterprise-ready LLM applications and agents, and when fine-tuning makes sense.</itunes:summary>
      <itunes:subtitle>Jacob and Pat sit down with LlamaIndex CEO Jerry Liu to discuss his motivations for building LlamaIndex, thoughts on building enterprise-ready LLM applications and agents, and when fine-tuning makes sense.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>18</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">80d903bc-d68f-4966-9884-cacb8944bc11</guid>
      <title>Ep 17: Nomic AI Co-Founder Andriy Mulyar on &quot;GPT-4-All&quot;, LLMs in Video Games, and Apple&apos;s AI Strategy</title>
      <description><![CDATA[<p>Jordan and Erica sit down with Andriy Mulyar, Founder & CTO of Nomic AI, and discuss his motivation for creating GPT4ALL, the importance of data-centric AI, the use of LLMs in video games, and which technology companies are well positioned to “win” in the GenAI market long term.</p><p> </p><p>0:00 intro</p><p>0:59 getting into AI and meeting Brandon</p><p>2:27 starting Nomic</p><p>7:43 how people are using Atlas</p><p>10:31 hallucinations in LLMs</p><p>13:05 gpt4all</p><p>17:25 building LLMs into video games</p><p>26:31 where does Nomic go from here?</p><p>37:39 Apple’s role in the LLM space</p><p>38:57 Andriy’s thoughts on AGI</p><p>40:41 over-hyped/under-hyped</p><p>42:13 AI regulation going forward</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 12 Sep 2023 14:58:08 +0000</pubDate>
      <author>jeffron@redpoint.com (Andriy Mulyar, Jordan Segall, Erica Brescia)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-17-nomic-ai-co-founder-andriy-mulyar-on-gpt-4-all-llms-in-video-games-and-apples-ai-strategy-9wJI5Zye</link>
      <content:encoded><![CDATA[<p>Jordan and Erica sit down with Andriy Mulyar, Founder & CTO of Nomic AI, and discuss his motivation for creating GPT4ALL, the importance of data-centric AI, the use of LLMs in video games, and which technology companies are well positioned to “win” in the GenAI market long term.</p><p> </p><p>0:00 intro</p><p>0:59 getting into AI and meeting Brandon</p><p>2:27 starting Nomic</p><p>7:43 how people are using Atlas</p><p>10:31 hallucinations in LLMs</p><p>13:05 gpt4all</p><p>17:25 building LLMs into video games</p><p>26:31 where does Nomic go from here?</p><p>37:39 Apple’s role in the LLM space</p><p>38:57 Andriy’s thoughts on AGI</p><p>40:41 over-hyped/under-hyped</p><p>42:13 AI regulation going forward</p><p> </p><p>With your co-hosts: </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="43208307" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/e49828b7-ed1a-4846-bb85-694f43f79c2e/audio/52da0a9a-71bd-4e91-a8d1-d4cdf741555a/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 17: Nomic AI Co-Founder Andriy Mulyar on &quot;GPT-4-All&quot;, LLMs in Video Games, and Apple&apos;s AI Strategy</itunes:title>
      <itunes:author>Andriy Mulyar, Jordan Segall, Erica Brescia</itunes:author>
      <itunes:duration>00:45:00</itunes:duration>
      <itunes:summary>Jordan and Erica sit down with Andriy Mulyar, Founder &amp; CTO of Nomic AI, and discuss his motivation for creating GPT4ALL, the importance of data-centric AI, the use of LLMs in video games, and which technology companies are well positioned to “win” in the GenAI market long term.</itunes:summary>
      <itunes:subtitle>Jordan and Erica sit down with Andriy Mulyar, Founder &amp; CTO of Nomic AI, and discuss his motivation for creating GPT4ALL, the importance of data-centric AI, the use of LLMs in video games, and which technology companies are well positioned to “win” in the GenAI market long term.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>17</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">62296961-edf1-4dc4-a8b8-1eb08748bbc6</guid>
      <title>Ep 16: VP of Generative AI at Adobe Alexandru Costin on The Future of Content Creation</title>
      <description><![CDATA[<p>Patrick and Jacob sit down with Alexandru Costin, the VP of Generative AI and Sensei at Adobe, and discuss how Adobe’s early projects with generative AI in 2019 helped them move quickly upon the release of LLMs and diffusion models. Before leading Adobe’s generative AI efforts he founded InterAKT, a web development company, and led Adobe Romania for 10 years.</p><p> </p><p>00:00 intro</p><p>01:44 Adobe Romania and background</p><p>02:26 AI projects at Adobe</p><p>10:03 incorporating AI into existing products</p><p>16:34 educating Adobe’s user base</p><p>25:13 avoiding copyright issues with AI-generated content</p><p>32:12 using customer feedback</p><p>40:30 what is the right way to structure an AI team?</p><p>45:11 pricing AI products</p><p>48:00 the future of Adobe</p><p> </p><p>With your co-hosts: </p><p>@jasoncwarner </p><p>- Former CTO GitHub, VP Eng Heroku & Canonical </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 29 Aug 2023 17:31:21 +0000</pubDate>
      <author>jeffron@redpoint.com (Alexandru Costin, Patrick Chase, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-16-vp-of-generative-ai-at-adobe-alexandru-costin-on-the-future-of-content-creation-KW3vNOZk</link>
      <content:encoded><![CDATA[<p>Patrick and Jacob sit down with Alexandru Costin, the VP of Generative AI and Sensei at Adobe, and discuss how Adobe’s early projects with generative AI in 2019 helped them move quickly upon the release of LLMs and diffusion models. Before leading Adobe’s generative AI efforts he founded InterAKT, a web development company, and led Adobe Romania for 10 years.</p><p> </p><p>00:00 intro</p><p>01:44 Adobe Romania and background</p><p>02:26 AI projects at Adobe</p><p>10:03 incorporating AI into existing products</p><p>16:34 educating Adobe’s user base</p><p>25:13 avoiding copyright issues with AI-generated content</p><p>32:12 using customer feedback</p><p>40:30 what is the right way to structure an AI team?</p><p>45:11 pricing AI products</p><p>48:00 the future of Adobe</p><p> </p><p>With your co-hosts: </p><p>@jasoncwarner </p><p>- Former CTO GitHub, VP Eng Heroku & Canonical </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="50244235" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/5c2fb732-944b-4b1b-b37d-4398632deb39/audio/2c17d9b6-224a-424d-bbe6-78a33c14ecb4/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 16: VP of Generative AI at Adobe Alexandru Costin on The Future of Content Creation</itunes:title>
      <itunes:author>Alexandru Costin, Patrick Chase, Jacob Effron</itunes:author>
      <itunes:duration>00:52:20</itunes:duration>
      <itunes:summary>Patrick and Jacob sit down with Alexandru Costin, the VP of Generative AI and Sensei at Adobe, and discuss how Adobe’s early projects with generative AI in 2019 helped them move quickly upon the release of LLMs and diffusion models. Before leading Adobe’s generative AI efforts he founded InterAKT, a web development company, and led Adobe Romania for 10 years.</itunes:summary>
      <itunes:subtitle>Patrick and Jacob sit down with Alexandru Costin, the VP of Generative AI and Sensei at Adobe, and discuss how Adobe’s early projects with generative AI in 2019 helped them move quickly upon the release of LLMs and diffusion models. Before leading Adobe’s generative AI efforts he founded InterAKT, a web development company, and led Adobe Romania for 10 years.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>16</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f703fea6-9155-42a7-8bc3-46fc9d50d823</guid>
      <title>Ep 15: Snorkel AI CEO Alex Ratner on What&apos;s Needed for Wider-Spread Enterprise AI Adoption</title>
      <description><![CDATA[<p>Jacob sits down with Alex to discuss how Snorkel grew from an open-source project in a Stanford AI lab to a $1B company. Alex shares his thoughts on why data development is at the heart of AI development, why enterprises are slow to deploy LLM applications, and the importance of academia in the future of AI development. </p><p> </p><p>00:00 intro </p><p>01:03 moving from academia to Snorkel </p><p>05:08 the evolution of Snorkel </p><p>18:33 improving pre-training </p><p>21:37 avoiding hallucinations and other errors </p><p>33:00 barriers to enterprises deploying AI </p><p>36:59 the Snorkel footprint of the future </p><p>39:37 the role of academia in AI development </p><p>42:57 over-hyped/under-hyped </p><p>44:50 how should AI regulation change going forward? </p><p> </p><p>With your co-hosts: </p><p>@jasoncwarner </p><p>- Former CTO GitHub, VP Eng Heroku & Canonical </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Fri, 18 Aug 2023 15:07:52 +0000</pubDate>
      <author>jeffron@redpoint.com (Alex Ratner, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-15-snorkel-ai-ceo-alex-ratner-on-whats-needed-for-wider-spread-enterprise-ai-adoption-NF9YT46d</link>
      <content:encoded><![CDATA[<p>Jacob sits down with Alex to discuss how Snorkel grew from an open-source project in a Stanford AI lab to a $1B company. Alex shares his thoughts on why data development is at the heart of AI development, why enterprises are slow to deploy LLM applications, and the importance of academia in the future of AI development. </p><p> </p><p>00:00 intro </p><p>01:03 moving from academia to Snorkel </p><p>05:08 the evolution of Snorkel </p><p>18:33 improving pre-training </p><p>21:37 avoiding hallucinations and other errors </p><p>33:00 barriers to enterprises deploying AI </p><p>36:59 the Snorkel footprint of the future </p><p>39:37 the role of academia in AI development </p><p>42:57 over-hyped/under-hyped </p><p>44:50 how should AI regulation change going forward? </p><p> </p><p>With your co-hosts: </p><p>@jasoncwarner </p><p>- Former CTO GitHub, VP Eng Heroku & Canonical </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="44888919" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/2f33b96c-d122-436d-bee8-9792ecef40b4/audio/68dd7389-9489-490c-bdf9-73c7c4a93c1b/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 15: Snorkel AI CEO Alex Ratner on What&apos;s Needed for Wider-Spread Enterprise AI Adoption</itunes:title>
      <itunes:author>Alex Ratner, Jacob Effron</itunes:author>
      <itunes:duration>00:46:45</itunes:duration>
      <itunes:summary>Jacob sits down with Alex to discuss how Snorkel grew from an open-source project in a Stanford AI lab to a $1B company. Alex shares his thoughts on why data development is at the heart of AI development, why enterprises are slow to deploy LLM applications, and the importance of academia in the future of AI development.</itunes:summary>
      <itunes:subtitle>Jacob sits down with Alex to discuss how Snorkel grew from an open-source project in a Stanford AI lab to a $1B company. Alex shares his thoughts on why data development is at the heart of AI development, why enterprises are slow to deploy LLM applications, and the importance of academia in the future of AI development.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>15</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">75b81e35-1e0d-4b9b-a732-4a8501560c54</guid>
      <title>Ep 14: Chroma CEO Jeff Huber on Vector Databases, Multimodal Embeddings &amp; Building an AI Startup</title>
      <description><![CDATA[<p>On today’s episode we talk with Jeff Huber, the CEO and Co-founder of Chroma. We talk about what sets Chroma apart from its competitors, new developments in AI technology, and advice for listeners who want to get started in AI.</p><p> </p><p>0:00 intro </p><p>1:02 starting chroma </p><p>6:08 vector databases </p><p>10:03 interesting use cases for vector databases </p><p>13:14 what sets chroma apart? </p><p>23:00 unresolved questions in LLMs </p><p>32:45 multiple agents vs. one agent to rule them all </p><p>34:50 chroma’s future </p><p>38:00 embedding models </p><p>43:00 over-hyped/under-hyped </p><p>44:30 AI regulation </p><p>48:42 is it too late to get into AI? </p><p> </p><p>With your co-hosts: </p><p>@jasoncwarner </p><p>- Former CTO GitHub, VP Eng Heroku & Canonical </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 1 Aug 2023 16:12:11 +0000</pubDate>
      <author>jeffron@redpoint.com (Jeff Huber, Erica Brescia, Jordan Segall)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-14-chroma-ceo-jeff-huber-on-vector-databases-multimodal-embeddings-building-an-ai-startup-WTZIeGgt</link>
      <content:encoded><![CDATA[<p>On today’s episode we talk with Jeff Huber, the CEO and Co-founder of Chroma. We talk about what sets Chroma apart from its competitors, new developments in AI technology, and advice for listeners who want to get started in AI.</p><p> </p><p>0:00 intro </p><p>1:02 starting chroma </p><p>6:08 vector databases </p><p>10:03 interesting use cases for vector databases </p><p>13:14 what sets chroma apart? </p><p>23:00 unresolved questions in LLMs </p><p>32:45 multiple agents vs. one agent to rule them all </p><p>34:50 chroma’s future </p><p>38:00 embedding models </p><p>43:00 over-hyped/under-hyped </p><p>44:30 AI regulation </p><p>48:42 is it too late to get into AI? </p><p> </p><p>With your co-hosts: </p><p>@jasoncwarner </p><p>- Former CTO GitHub, VP Eng Heroku & Canonical </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="46759701" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/83b1bc94-0802-42da-9d75-30a9228e3831/audio/342e07ff-bde1-4a0f-bfa2-72de36327adc/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 14: Chroma CEO Jeff Huber on Vector Databases, Multimodal Embeddings &amp; Building an AI Startup</itunes:title>
      <itunes:author>Jeff Huber, Erica Brescia, Jordan Segall</itunes:author>
      <itunes:duration>00:48:42</itunes:duration>
      <itunes:summary>On today’s episode we talk with Jeff Huber, the CEO and Co-founder of Chroma. We talk about what sets Chroma apart from its competitors, new developments in AI technology, and advice for listeners who want to get started in AI.</itunes:summary>
      <itunes:subtitle>On today’s episode we talk with Jeff Huber, the CEO and Co-founder of Chroma. We talk about what sets Chroma apart from its competitors, new developments in AI technology, and advice for listeners who want to get started in AI.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>14</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">95098545-56b4-4ad0-8ff5-770dc81c4d18</guid>
      <title>Ep 13: PathAI CEO Dr. Andy Beck on the Future of AI in Medical Diagnosis</title>
      <description><![CDATA[<p>Jacob sits down with PathAI CEO Dr. Andy Beck to discuss the state of AI adoption in diagnosis, why Path acquired their own lab, pathologists' jobs in the future and nailing GTM to reach a ~$1B valuation. </p><p> </p><p>00:00 intro </p><p>01:01 pathology and AI </p><p>13:30 nailing go-to-market strategy </p><p>19:47 how pathology labs can go digital </p><p>25:36 regulatory frameworks and roadblocks </p><p>33:05 do improvements in foundation models impact PathAI? </p><p>36:26 standardizing diagnosis </p><p>40:05 how will the job of a pathologist change going forward? </p><p>42:35 NASH </p><p>47:30 working in Daphne Koller’s lab</p><p> </p><p>With your co-hosts: </p><p>@jasoncwarner - Former CTO GitHub, VP Eng Heroku & Canonical </p><p>@ericabrescia - Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron - Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall - Partner at Redpoint</p>
]]></description>
      <pubDate>Tue, 25 Jul 2023 17:00:54 +0000</pubDate>
      <author>jeffron@redpoint.com (Jacob Effron, Andy Beck)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-13-pathai-ceo-dr-andy-beck-on-the-future-of-ai-in-medical-diagnosis-P1A7X9Uq</link>
      <content:encoded><![CDATA[<p>Jacob sits down with PathAI CEO Dr. Andy Beck to discuss the state of AI adoption in diagnosis, why Path acquired their own lab, pathologists' jobs in the future and nailing GTM to reach a ~$1B valuation. </p><p> </p><p>00:00 intro </p><p>01:01 pathology and AI </p><p>13:30 nailing go-to-market strategy </p><p>19:47 how pathology labs can go digital </p><p>25:36 regulatory frameworks and roadblocks </p><p>33:05 do improvements in foundation models impact PathAI? </p><p>36:26 standardizing diagnosis </p><p>40:05 how will the job of a pathologist change going forward? </p><p>42:35 NASH </p><p>47:30 working in Daphne Koller’s lab</p><p> </p><p>With your co-hosts: </p><p>@jasoncwarner - Former CTO GitHub, VP Eng Heroku & Canonical </p><p>@ericabrescia - Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron - Partner at Redpoint, Former PM Flatiron Health </p><p>@jordan_segall - Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="48644272" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/77e6c4f7-0952-450b-b1aa-3bc91501637a/audio/b32c7a95-fb6f-4344-be67-578309a16de9/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 13: PathAI CEO Dr. Andy Beck on the Future of AI in Medical Diagnosis</itunes:title>
      <itunes:author>Jacob Effron, Andy Beck</itunes:author>
      <itunes:duration>00:50:40</itunes:duration>
      <itunes:summary>Jacob sits down with PathAI CEO Dr. Andy Beck to discuss the state of AI adoption in diagnosis, why Path acquired their own lab, pathologists&apos; jobs in the future and nailing GTM to reach a ~$1B valuation. </itunes:summary>
      <itunes:subtitle>Jacob sits down with PathAI CEO Dr. Andy Beck to discuss the state of AI adoption in diagnosis, why Path acquired their own lab, pathologists&apos; jobs in the future and nailing GTM to reach a ~$1B valuation. </itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>13</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8d0947ef-b655-48b3-aebe-1f5c02f3b80f</guid>
      <title>Ep 12: EleutherAI&apos;s Aran Komatsuzaki on Open-Source Models&apos; Future and Thought Cloning</title>
      <description><![CDATA[<p>Jacob and Jordan sit down with EleutherAI's Aran Komatsuzaki to discuss the future of open-source models, thought cloning, his work on GPT-J and more.</p><p> </p><p>0:00 intro</p><p>01:06 Aran’s background</p><p>02:58 starting work on gpt-j</p><p>05:49 gathering data for LAION and gpt-j</p><p>08:51 history of EleutherAI</p><p>11:16 open vs. closed-source models</p><p>19:06 how will open-source models be used going forward</p><p>21:33 thought cloning</p><p>25:51 building AI models that understand video</p><p>29:35 one model to rule them all</p><p>31:58 influence of academia in the LLM space</p><p>34:33 over-hyped/under-hyped</p><p>38:01 Aran’s thoughts on AGI</p><p> </p><p>With your co-hosts: </p><p>@jasoncwarner </p><p>- Former CTO GitHub, VP Eng Heroku & Canonical </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health</p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></description>
      <pubDate>Wed, 19 Jul 2023 19:44:41 +0000</pubDate>
      <author>jeffron@redpoint.com (Aran Komatsuzaki, Jordan Segall, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-12-eleutherais-aran-komatsuzaki-on-open-source-models-future-and-thought-cloning-a8SJWOcH</link>
      <content:encoded><![CDATA[<p>Jacob and Jordan sit down with EleutherAI's Aran Komatsuzaki to discuss the future of open-source models, thought cloning, his work on GPT-J and more.</p><p> </p><p>0:00 intro</p><p>01:06 Aran’s background</p><p>02:58 starting work on gpt-j</p><p>05:49 gathering data for LAION and gpt-j</p><p>08:51 history of EleutherAI</p><p>11:16 open vs. closed-source models</p><p>19:06 how will open-source models be used going forward</p><p>21:33 thought cloning</p><p>25:51 building AI models that understand video</p><p>29:35 one model to rule them all</p><p>31:58 influence of academia in the LLM space</p><p>34:33 over-hyped/under-hyped</p><p>38:01 Aran’s thoughts on AGI</p><p> </p><p>With your co-hosts: </p><p>@jasoncwarner </p><p>- Former CTO GitHub, VP Eng Heroku & Canonical </p><p>@ericabrescia </p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase </p><p>- Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron </p><p>- Partner at Redpoint, Former PM Flatiron Health</p><p>@jordan_segall </p><p>- Partner at Redpoint</p>
]]></content:encoded>
      <enclosure length="41671878" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/a1320b05-842d-49f0-bd2e-41ca09eeffed/audio/5a05e23e-da9c-4fac-b112-7d5240e148bf/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 12: EleutherAI&apos;s Aran Komatsuzaki on Open-Source Models&apos; Future and Thought Cloning</itunes:title>
      <itunes:author>Aran Komatsuzaki, Jordan Segall, Jacob Effron</itunes:author>
      <itunes:duration>00:43:24</itunes:duration>
      <itunes:summary>Jacob and Jordan sit down with EleutherAI&apos;s Aran Komatsuzaki to discuss the future of open-source models, thought cloning, his work on GPT-J and more.</itunes:summary>
      <itunes:subtitle>Jacob and Jordan sit down with EleutherAI&apos;s Aran Komatsuzaki to discuss the future of open-source models, thought cloning, his work on GPT-J and more.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>12</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e1043657-bf6d-4ca6-95e2-f3e5b02296ce</guid>
      <title>Ep 11: Stanford Professor Tatsu Hashimoto on AI Biases and Improving LLM Performance</title>
      <description><![CDATA[<p>Patrick and Jacob sit down with Tatsu Hashimoto, Professor of AI at Stanford, to discuss the incredible open source projects from his research group like Alpaca and AlpacaFarm, whether data, algorithms, fine-tuning or RLHF is most important for performance, if AI is liberal or conservative, and much more!</p><p> </p><p>(0:00) - intro</p><p>(1:05) - journey to Stanford</p><p>(2:50) - origins of Alpaca</p><p>(6:08) - capabilities of the Alpaca model</p><p>(16:39) - the future of AI</p><p>(20:07) - AlpacaFarm</p><p>(21:37) - how to improve language models</p><p>(29:15) - do language models form opinions?</p><p>(32:15) - how to solve bias in ai</p><p>(34:18) - how does academia fit into the world of AI</p><p>(42:01) - over-hyped/under-hyped</p><p>(46:35) - questions Tatsu doesn’t have time for</p><p> </p><p>With your co-hosts: </p><p>@jasoncwarner - Former CTO GitHub, VP Eng Heroku & Canonical </p><p>@ericabrescia - Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@jacobeffron - Partner at Redpoint, Former PM Flatiron Health</p>
]]></description>
      <pubDate>Wed, 5 Jul 2023 19:25:25 +0000</pubDate>
      <author>jeffron@redpoint.com (Tatsu Hashimoto, Patrick Chase, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-11-stanford-professor-tatsu-hashimoto-on-ai-biases-and-improving-llm-performance-TbxVfqza</link>
      <content:encoded><![CDATA[<p>Patrick and Jacob sit down with Tatsu Hashimoto, Professor of AI at Stanford, to discuss the incredible open source projects from his research group like Alpaca and AlpacaFarm, whether data, algorithms, fine-tuning or RLHF is most important for performance, if AI is liberal or conservative, and much more!</p><p> </p><p>(0:00) - intro</p><p>(1:05) - journey to Stanford</p><p>(2:50) - origins of Alpaca</p><p>(6:08) - capabilities of the Alpaca model</p><p>(16:39) - the future of AI</p><p>(20:07) - AlpacaFarm</p><p>(21:37) - how to improve language models</p><p>(29:15) - do language models form opinions?</p><p>(32:15) - how to solve bias in ai</p><p>(34:18) - how does academia fit into the world of AI</p><p>(42:01) - over-hyped/under-hyped</p><p>(46:35) - questions Tatsu doesn’t have time for</p><p> </p><p>With your co-hosts: </p><p>@jasoncwarner - Former CTO GitHub, VP Eng Heroku & Canonical </p><p>@ericabrescia - Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@jacobeffron - Partner at Redpoint, Former PM Flatiron Health</p>
]]></content:encoded>
      <enclosure length="48270211" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/67931e61-a1b5-4d68-8e7f-40e81b15b449/audio/e27ea0e5-cfc5-4ea4-82c9-ea8683ab640b/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 11: Stanford Professor Tatsu Hashimoto on AI Biases and Improving LLM Performance</itunes:title>
      <itunes:author>Tatsu Hashimoto, Patrick Chase, Jacob Effron</itunes:author>
      <itunes:duration>00:50:16</itunes:duration>
      <itunes:summary>Patrick and Jacob sit down with Tatsu Hashimoto, Professor of AI at Stanford, to discuss the incredible open source projects from his research group like Alpaca and AlpacaFarm, whether data, algorithms, fine-tuning or RLHF is most important for performance, if AI is liberal or conservative, and much more!</itunes:summary>
      <itunes:subtitle>Patrick and Jacob sit down with Tatsu Hashimoto, Professor of AI at Stanford, to discuss the incredible open source projects from his research group like Alpaca and AlpacaFarm, whether data, algorithms, fine-tuning or RLHF is most important for performance, if AI is liberal or conservative, and much more!</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>11</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">10e6aecd-c319-499f-9c4d-de1640e5bcc8</guid>
      <title>Ep 10: Insitro CEO Daphne Koller on Using ML to Change Drug Discovery</title>
      <description><![CDATA[<p>Jacob sits down with Insitro CEO Daphne Koller to discuss founding Coursera, where and how ML can drive the most impact in drug development, and if foundation models can transform core drug discovery work and edtech. </p><p> </p><p>(00:00) - intro </p><p>(00:54) - Daphne’s journey </p><p>(09:18) - AI and biology discovery </p><p>(10:59) - insitro vs. traditional pharma </p><p>(20:04) - phenotyping patients </p><p>(26:01) - early mistakes </p><p>(29:51) - the future of data </p><p>(35:33) - partnering with larger pharma companies </p><p>(38:17) - impact of LLMs on biopharma </p><p>(44:04) - over-hyped/under-hyped </p><p> </p><p>With your co-hosts: </p><p>@jasoncwarner - Former CTO GitHub, VP Eng Heroku & Canonical </p><p>@ericabrescia - Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron - Partner at Redpoint, Former PM Flatiron Health</p>
]]></description>
      <pubDate>Tue, 13 Jun 2023 16:45:17 +0000</pubDate>
      <author>jeffron@redpoint.com (Jacob Effron, Daphne Koller)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-10-insitro-ceo-daphne-koller-on-using-ml-to-change-drug-discovery-xqHX3Pwv</link>
      <content:encoded><![CDATA[<p>Jacob sits down with Insitro CEO Daphne Koller to discuss founding Coursera, where and how ML can drive the most impact in drug development, and if foundation models can transform core drug discovery work and edtech. </p><p> </p><p>(00:00) - intro </p><p>(00:54) - Daphne’s journey </p><p>(09:18) - AI and biology discovery </p><p>(10:59) - insitro vs. traditional pharma </p><p>(20:04) - phenotyping patients </p><p>(26:01) - early mistakes </p><p>(29:51) - the future of data </p><p>(35:33) - partnering with larger pharma companies </p><p>(38:17) - impact of LLMs on biopharma </p><p>(44:04) - over-hyped/under-hyped </p><p> </p><p>With your co-hosts: </p><p>@jasoncwarner - Former CTO GitHub, VP Eng Heroku & Canonical </p><p>@ericabrescia - Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron - Partner at Redpoint, Former PM Flatiron Health</p>
]]></content:encoded>
      <enclosure length="47071483" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/e916398e-1941-4b87-8722-76033142512c/audio/13c9b612-95c9-490f-8a93-a4ad0941ce60/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 10: Insitro CEO Daphne Koller on Using ML to Change Drug Discovery</itunes:title>
      <itunes:author>Jacob Effron, Daphne Koller</itunes:author>
      <itunes:duration>00:49:01</itunes:duration>
      <itunes:summary>Jacob sits down with Insitro CEO Daphne Koller to discuss founding Coursera, where and how ML can drive the most impact in drug development, and if foundation models can transform core drug discovery work and edtech.</itunes:summary>
      <itunes:subtitle>Jacob sits down with Insitro CEO Daphne Koller to discuss founding Coursera, where and how ML can drive the most impact in drug development, and if foundation models can transform core drug discovery work and edtech.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>10</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">fe6c830a-9f60-4b0e-8ad2-c56fa08543e8</guid>
      <title>Ep 9: OpenAI VP of Product Peter Welinder on OpenAI’s Strategy and AI’s Future</title>
      <description><![CDATA[<p>Jacob sits down with OpenAI VP of Product & Partnerships Peter Welinder and guest host Rob Toews (Radical VC Partner) to discuss OpenAI’s strategy, how they think about what they will/won’t build, the future of open source models and when we’ll reach AGI. </p><p> </p><p>(00:00) - intro</p><p>(00:42) - where is the value in AI?</p><p>(07:51) - how OpenAI prioritizes projects</p><p>(14:15) - open-source AI</p><p>(25:49) - gaps in AI</p><p>(29:36) - risks and downsides</p><p>(34:02) - when will we reach super-intelligence?</p><p>(40:40) - super-intelligence safety</p><p>(43:37) - how OpenAI uses ChatGPT</p><p> </p><p>With your co-hosts: </p><p>@jasoncwarner - Former CTO GitHub, VP Eng Heroku & Canonical </p><p>@ericabrescia - Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron - Partner at Redpoint, Former PM Flatiron Health</p>
]]></description>
      <pubDate>Wed, 7 Jun 2023 15:21:05 +0000</pubDate>
      <author>jeffron@redpoint.com (Rob Toews, Peter Welinder, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-9-open-ai-vp-of-product-peter-welinder-on-openais-strategy-and-ais-future-PY9TVJmA</link>
      <content:encoded><![CDATA[<p>Jacob sits down with OpenAI VP of Product & Partnerships Peter Welinder and guest host Rob Toews (Radical VC Partner) to discuss OpenAI’s strategy, how they think about what they will/won’t build, the future of open source models and when we’ll reach AGI. </p><p> </p><p>(00:00) - intro</p><p>(00:42) - where is the value in AI?</p><p>(07:51) - how OpenAI prioritizes projects</p><p>(14:15) - open-source AI</p><p>(25:49) - gaps in AI</p><p>(29:36) - risks and downsides</p><p>(34:02) - when will we reach super-intelligence?</p><p>(40:40) - super-intelligence safety</p><p>(43:37) - how OpenAI uses ChatGPT</p><p> </p><p>With your co-hosts: </p><p>@jasoncwarner - Former CTO GitHub, VP Eng Heroku & Canonical </p><p>@ericabrescia - Former COO Github, Founder Bitnami (acq’d by VMWare) </p><p>@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn </p><p>@jacobeffron - Partner at Redpoint, Former PM Flatiron Health</p>
]]></content:encoded>
      <enclosure length="48191216" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/846e2960-b510-4691-a94f-7a6df396b19b/audio/b0fb330b-ba4e-45b0-aed5-69dd6048786b/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 9: OpenAI VP of Product Peter Welinder on OpenAI’s Strategy and AI’s Future</itunes:title>
      <itunes:author>Rob Toews, Peter Welinder, Jacob Effron</itunes:author>
      <itunes:duration>00:50:11</itunes:duration>
      <itunes:summary>Jacob sits down with OpenAI VP of Product &amp; Partnerships Peter Welinder and guest host Rob Toews (Radical VC Partner) to discuss OpenAI’s strategy, how they think about what they will/won’t build, the future of open source models and when we’ll reach AGI.</itunes:summary>
      <itunes:subtitle>Jacob sits down with OpenAI VP of Product &amp; Partnerships Peter Welinder and guest host Rob Toews (Radical VC Partner) to discuss OpenAI’s strategy, how they think about what they will/won’t build, the future of open source models and when we’ll reach AGI.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>9</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">cffe6b7c-9c76-4e84-9359-fe899255d2ab</guid>
      <title>Ep 8: Ex-DeepMind/White House Operator Tantum Collins on AI Regulation and Geopolitical Impacts</title>
      <description><![CDATA[<p>Jacob sits down with ex-DeepMind / White House operator Teddy Collins to discuss Sam Altman’s Congressional testimony, how AI can enable direct democracy, state-driven economic planning and better orgs, US-China competition and more.</p><p> </p><p>(00:00) - intro</p><p>(01:15) - how Teddy came to the AI world</p><p>(06:26) - working in government</p><p>(12:27) - AI regulation and Sam Altman testimony</p><p>(19:19) - the end of humanity</p><p>(34:34) - nearer term issues with AI</p><p>(41:10) - how the government can use AI</p><p>(47:13) - lessons from DeepMind</p><p>(54:54) - over-hyped/under-hyped</p><p> </p><p>With your co-hosts:</p><p>@jasoncwarner</p><p>- Former CTO GitHub, VP Eng Heroku & Canonical</p><p>@ericabrescia</p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)</p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p>
]]></description>
      <pubDate>Wed, 31 May 2023 15:30:07 +0000</pubDate>
      <author>jeffron@redpoint.com (Tantum Collins, Jacob Effron)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-8-ex-deepmind-whitehouse-operator-tantum-collins-on-ai-regulation-and-geopolitical-impacts-MVjkr4yw</link>
      <content:encoded><![CDATA[<p>Jacob sits down with ex-DeepMind / White House operator Teddy Collins to discuss Sam Altman’s Congressional testimony, how AI can enable direct democracy, state-driven economic planning and better orgs, US-China competition and more.</p><p> </p><p>(00:00) - intro</p><p>(01:15) - how Teddy came to the AI world</p><p>(06:26) - working in government</p><p>(12:27) - AI regulation and Sam Altman testimony</p><p>(19:19) - the end of humanity</p><p>(34:34) - nearer term issues with AI</p><p>(41:10) - how the government can use AI</p><p>(47:13) - lessons from DeepMind</p><p>(54:54) - over-hyped/under-hyped</p><p> </p><p>With your co-hosts:</p><p>@jasoncwarner</p><p>- Former CTO GitHub, VP Eng Heroku & Canonical</p><p>@ericabrescia</p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)</p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p>
]]></content:encoded>
      <enclosure length="58622624" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/834e4343-4f0a-421f-bc13-b05842800394/audio/4bfc5a0e-634e-4fa8-8b40-1ce5378e3ac3/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 8: Ex-DeepMind/White House Operator Tantum Collins on AI Regulation and Geopolitical Impacts</itunes:title>
      <itunes:author>Tantum Collins, Jacob Effron</itunes:author>
      <itunes:duration>01:01:03</itunes:duration>
      <itunes:summary>Jacob sits down with ex-DeepMind / White House operator Teddy Collins to discuss Sam Altman’s Congressional testimony, how AI can enable direct democracy, state-driven economic planning and better orgs, US-China competition and more.</itunes:summary>
      <itunes:subtitle>Jacob sits down with ex-DeepMind / White House operator Teddy Collins to discuss Sam Altman’s Congressional testimony, how AI can enable direct democracy, state-driven economic planning and better orgs, US-China competition and more.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>8</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">056896e3-780f-4d69-a290-7957429899cb</guid>
      <title>Ep 7: Co-Creator of Databricks Dolly Mike Conover on Open-Source LLMs</title>
      <description><![CDATA[<p>Patrick and Jacob sit down with Mike Conover, Staff Software Engineer at Databricks and Co-Creator of Databricks Dolly, the world’s first truly open instruction-tuned LLM, to discuss the magic behind Dolly, Alpaca and other instruction-tuned LLMs, the unreasonable effectiveness of fine-tuning, how they got all Databricks employees to help them curate the Dolly dataset (hint: Google Forms), and more.</p><p> </p><p>(0:00) - Intro</p><p>(5:54) - The birth of Dolly</p><p>(12:03) - Data curation at Databricks</p><p>(15:34) - Advice for building LLMs</p><p>(24:10) - The future of instruction-tuning datasets</p><p>(30:43) - UI innovation</p><p>(38:16) - The future of machine learning infrastructure</p><p>(42:05) - How SkipFlag would be different with the tools we have today</p><p>(47:01) - What Mike has learned since Dolly</p><p> </p><p>With your co-hosts:</p><p>@jasoncwarner</p><p>- Former CTO GitHub, VP Eng Heroku & Canonical</p><p>@ericabrescia</p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)</p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p>
]]></description>
      <pubDate>Thu, 11 May 2023 19:51:09 +0000</pubDate>
      <author>jeffron@redpoint.com (Mike Conover, Jacob Effron, Patrick Chase)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-7-co-creator-of-databricks-dolly-mike-conover-on-open-source-llms-j_x7xUp3</link>
      <content:encoded><![CDATA[<p>Patrick and Jacob sit down with Mike Conover, Staff Software Engineer at Databricks and Co-Creator of Databricks Dolly, the world’s first truly open instruction-tuned LLM, to discuss the magic behind Dolly, Alpaca and other instruction-tuned LLMs, the unreasonable effectiveness of fine-tuning, how they got all Databricks employees to help them curate the Dolly dataset (hint: Google Forms), and more.</p><p> </p><p>(0:00) - Intro</p><p>(5:54) - The birth of Dolly</p><p>(12:03) - Data curation at Databricks</p><p>(15:34) - Advice for building LLMs</p><p>(24:10) - The future of instruction-tuning datasets</p><p>(30:43) - UI innovation</p><p>(38:16) - The future of machine learning infrastructure</p><p>(42:05) - How SkipFlag would be different with the tools we have today</p><p>(47:01) - What Mike has learned since Dolly</p><p> </p><p>With your co-hosts:</p><p>@jasoncwarner</p><p>- Former CTO GitHub, VP Eng Heroku & Canonical</p><p>@ericabrescia</p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)</p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p>
]]></content:encoded>
      <enclosure length="35232791" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/04808866-4456-40ba-89aa-ff916adf3a67/audio/9be6747c-ca0e-49d4-ab70-e1efdffa5dba/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 7: Co-Creator of Databricks Dolly Mike Conover on Open-Source LLMs</itunes:title>
      <itunes:author>Mike Conover, Jacob Effron, Patrick Chase</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/ff0dbf2f-5711-4964-9172-807c39ca4824/cc85d511-1c36-42b6-bd2b-654637775f14/3000x3000/download-2.jpg?aid=rss_feed"/>
      <itunes:duration>00:48:56</itunes:duration>
      <itunes:summary>Patrick and Jacob sit down with Mike Conover, Staff Software Engineer at Databricks and Co-Creator of Databricks Dolly, the world’s first truly open instruction-tuned LLM, to discuss the magic behind Dolly, Alpaca and other instruction-tuned LLMs, the unreasonable effectiveness of fine-tuning, how they got all Databricks employees to help them curate the Dolly dataset (hint: Google Forms), and more.</itunes:summary>
      <itunes:subtitle>Patrick and Jacob sit down with Mike Conover, Staff Software Engineer at Databricks and Co-Creator of Databricks Dolly, the world’s first truly open instruction-tuned LLM, to discuss the magic behind Dolly, Alpaca and other instruction-tuned LLMs, the unreasonable effectiveness of fine-tuning, how they got all Databricks employees to help them curate the Dolly dataset (hint: Google Forms), and more.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>7</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d39a52b4-c7f8-44e2-a0f1-5806cb97dba4</guid>
      <title>Ep 6: Jasper CEO Dave Rogenmoser on the Future of Writing with AI</title>
      <description><![CDATA[<p>Jacob and Erica sit down with Jasper CEO Dave Rogenmoser to discuss the future of writing with AI and what it means for marketers and the internet, Jasper post ChatGPT, hosting a massive Gen AI conference, going upmarket and more.</p><p>(00:56) - How Jasper works and how people are using it</p><p>(02:00) - Dave's journey leading up to Jasper</p><p>(04:30) - The moment Dave knew he was onto something with Jasper</p><p>(07:30) - Where Jasper works well for business applications</p><p>(10:55) - How the content of the internet might change going forward with the use of AI for content creation</p><p>(13:00) - What a writer's workflow in the future looks like with AI</p><p>(14:50) - How the introduction of ChatGPT impacted Jasper</p><p>(17:46) - Looking at the long-term opportunity for differentiation for Jasper</p><p>(23:16) - Tuning a 'brand voice' through Jasper's AI</p><p>(26:15) - Jasper's role in bringing community together, e.g. with their recent Gen AI Conference</p><p>(30:00) - How Dave thinks about the future of work for his kids</p><p>(32:20) - The productivity boost clients see through using Jasper</p><p>(37:00) - Quick fire round</p><p> </p><p>With your co-hosts:</p><p>@jasoncwarner</p><p>- Former CTO GitHub, VP Eng Heroku & Canonical</p><p>@ericabrescia</p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)</p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p>
]]></description>
      <pubDate>Wed, 19 Apr 2023 13:00:00 +0000</pubDate>
      <author>jeffron@redpoint.com (Redpoint Ventures)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-6-jasper-ceo-dave-rogenmoser-on-the-future-of-writing-with-ai-wGS1TulE</link>
      <content:encoded><![CDATA[<p>Jacob and Erica sit down with Jasper CEO Dave Rogenmoser to discuss the future of writing with AI and what it means for marketers and the internet, Jasper post ChatGPT, hosting a massive Gen AI conference, going upmarket and more.</p><p>(00:56) - How Jasper works and how people are using it</p><p>(02:00) - Dave's journey leading up to Jasper</p><p>(04:30) - The moment Dave knew he was onto something with Jasper</p><p>(07:30) - Where Jasper works well for business applications</p><p>(10:55) - How the content of the internet might change going forward with the use of AI for content creation</p><p>(13:00) - What a writer's workflow in the future looks like with AI</p><p>(14:50) - How the introduction of ChatGPT impacted Jasper</p><p>(17:46) - Looking at the long-term opportunity for differentiation for Jasper</p><p>(23:16) - Tuning a 'brand voice' through Jasper's AI</p><p>(26:15) - Jasper's role in bringing community together, e.g. with their recent Gen AI Conference</p><p>(30:00) - How Dave thinks about the future of work for his kids</p><p>(32:20) - The productivity boost clients see through using Jasper</p><p>(37:00) - Quick fire round</p><p> </p><p>With your co-hosts:</p><p>@jasoncwarner</p><p>- Former CTO GitHub, VP Eng Heroku & Canonical</p><p>@ericabrescia</p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)</p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p>
]]></content:encoded>
      <enclosure length="39674819" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/89a51d8b-36b0-4fe1-aa07-7d50f2443536/audio/af207236-16ab-4fb1-b9de-5b44470227be/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 6: Jasper CEO Dave Rogenmoser on the Future of Writing with AI</itunes:title>
      <itunes:author>Redpoint Ventures</itunes:author>
      <itunes:duration>00:41:19</itunes:duration>
      <itunes:summary>Jacob and Erica sit down with Jasper CEO Dave Rogenmoser to discuss the future of writing with AI and what it means for marketers and the internet, Jasper post ChatGPT, hosting a massive Gen AI conference, going upmarket and more.
You can find Dave on Twitter (@DaveRogenmoser) and learn more at Jasper.ai</itunes:summary>
      <itunes:subtitle>Jacob and Erica sit down with Jasper CEO Dave Rogenmoser to discuss the future of writing with AI and what it means for marketers and the internet, Jasper post ChatGPT, hosting a massive Gen AI conference, going upmarket and more.
You can find Dave on Twitter (@DaveRogenmoser) and learn more at Jasper.ai</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>6</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">23d1f9dd-51be-43b2-9fad-a09f441018a2</guid>
      <title>Ep 5: You.com CEO Richard Socher on The Future of Search, Open Source Models and AGI</title>
      <description><![CDATA[<p>Jacob and Erica sit down with <a href="http://you.com/" target="_blank">You.com</a> CEO and former Salesforce Chief Scientist Richard Socher to discuss building a new search engine and the future of search, his predictions on when open source models will catch up to GPT-4, AGI and more.</p><p>(1:29) - Richard's career moves and what motivated them</p><p>(6:44) - More about You.com and the ethos behind the company</p><p>(9:47) - How You.com integrates 3rd-party apps</p><p>(13:30) - Richard's thoughts on the future of AI</p><p>(23:19) - The role of academia in the world of AI going forward</p><p>(27:30) - Things in AI that Richard is excited about</p><p>(31:53) - Quickfire Round</p><p> </p><p>With your co-hosts:</p><p>@jasoncwarner</p><p>- Former CTO GitHub, VP Eng Heroku & Canonical</p><p>@ericabrescia</p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)</p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p>
]]></description>
      <pubDate>Thu, 6 Apr 2023 13:00:00 +0000</pubDate>
      <author>jeffron@redpoint.com (Redpoint Ventures)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-5-youcom-ceo-richard-socher-on-the-future-of-search-open-source-models-and-agi-O1zXb4Bq</link>
      <content:encoded><![CDATA[<p>Jacob and Erica sit down with <a href="http://you.com/" target="_blank">You.com</a> CEO and former Salesforce Chief Scientist Richard Socher to discuss building a new search engine and the future of search, his predictions on when open source models will catch up to GPT-4, AGI and more.</p><p>(1:29) - Richard's career moves and what motivated them</p><p>(6:44) - More about You.com and the ethos behind the company</p><p>(9:47) - How You.com integrates 3rd-party apps</p><p>(13:30) - Richard's thoughts on the future of AI</p><p>(23:19) - The role of academia in the world of AI going forward</p><p>(27:30) - Things in AI that Richard is excited about</p><p>(31:53) - Quickfire Round</p><p> </p><p>With your co-hosts:</p><p>@jasoncwarner</p><p>- Former CTO GitHub, VP Eng Heroku & Canonical</p><p>@ericabrescia</p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)</p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p>
]]></content:encoded>
      <enclosure length="41673918" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/e85c972f-450e-46b2-9938-897e41854362/audio/a55e6c41-b93e-4f1f-a5f2-3aec175e86c0/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 5: You.com CEO Richard Socher on The Future of Search, Open Source Models and AGI</itunes:title>
      <itunes:author>Redpoint Ventures</itunes:author>
      <itunes:duration>00:43:24</itunes:duration>
      <itunes:summary>Jacob and Erica sit down with You.com CEO and former Salesforce Chief Scientist Richard Socher to discuss building a new search engine and the future of search, his predictions on when open source models will catch up to GPT-4, AGI and more.
You can find Richard on Twitter (@RichardSocher) and learn more at You.com </itunes:summary>
      <itunes:subtitle>Jacob and Erica sit down with You.com CEO and former Salesforce Chief Scientist Richard Socher to discuss building a new search engine and the future of search, his predictions on when open source models will catch up to GPT-4, AGI and more.
You can find Richard on Twitter (@RichardSocher) and learn more at You.com </itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>5</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d75f2dc5-4a10-4f32-bab6-cfd00b061fd9</guid>
      <title>Ep 4: Fixie.ai CEO Matt Welsh on How LLMs Will Change the Way We Work</title>
      <description><![CDATA[<p>Jordan and Erica chat with Matt Welsh, Fixie.ai co-founder and former Harvard CS professor, about his journey from researcher to Google eng leader and then to startup life prior to founding Fixie, a new platform for building LLM-based apps (a Redpoint portfolio company). We also talk about having Mark Zuckerberg in his CS class, ChatGPT Plugins and the AI ecosystem, and what the future might look like for our kids with AGI. You can find Matt on Twitter (@mdwelsh) and learn more about Fixie at <a href="https://www.fixie.ai/" target="_blank">https://www.fixie.ai/</a></p><p>(1:06) - Matt talks about his early days at Berkeley and Harvard</p><p>(6:45) - Making the jump to the start-up world</p><p>(8:15) - Matt explains what Fixie does</p><p>(11:30) - How Matt thinks about use cases for Fixie</p><p>(14:20) - How work might change with AI's integration</p><p>(17:04) - The future of LLMs</p><p>(20:30) - Matt's take on the race to AGI</p><p>(25:20) - How Matt thinks about the future of the world for his kids and how they might use AI</p><p>(29:24) - Quick fire round</p><p> </p><p>With your co-hosts:</p><p>@jasoncwarner</p><p>- Former CTO GitHub, VP Eng Heroku & Canonical</p><p>@ericabrescia</p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)</p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p>
]]></description>
      <pubDate>Tue, 4 Apr 2023 13:00:00 +0000</pubDate>
      <author>jeffron@redpoint.com (Redpoint Ventures)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-4-matt-welsh-co-founder-ceo-fixieai-on-how-llms-will-change-the-way-we-work-H1irGpdR</link>
      <content:encoded><![CDATA[<p>Jordan and Erica chat with Matt Welsh, Fixie.ai co-founder and former Harvard CS professor, about his journey from researcher to Google eng leader and then to startup life prior to founding Fixie, a new platform for building LLM-based apps (a Redpoint portfolio company). We also talk about having Mark Zuckerberg in his CS class, ChatGPT Plugins and the AI ecosystem, and what the future might look like for our kids with AGI. You can find Matt on Twitter (@mdwelsh) and learn more about Fixie at <a href="https://www.fixie.ai/" target="_blank">https://www.fixie.ai/</a></p><p>(1:06) - Matt talks about his early days at Berkeley and Harvard</p><p>(6:45) - Making the jump to the start-up world</p><p>(8:15) - Matt explains what Fixie does</p><p>(11:30) - How Matt thinks about use cases for Fixie</p><p>(14:20) - How work might change with AI's integration</p><p>(17:04) - The future of LLMs</p><p>(20:30) - Matt's take on the race to AGI</p><p>(25:20) - How Matt thinks about the future of the world for his kids and how they might use AI</p><p>(29:24) - Quick fire round</p><p> </p><p>With your co-hosts:</p><p>@jasoncwarner</p><p>- Former CTO GitHub, VP Eng Heroku & Canonical</p><p>@ericabrescia</p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)</p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p>
]]></content:encoded>
      <enclosure length="31383763" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/8085b8f9-e73a-44d0-9917-7fa3a4d43259/audio/87506c6f-ccfb-4e96-a03e-e6e9af1dd448/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 4: Fixie.ai CEO Matt Welsh on How LLMs Will Change the Way We Work</itunes:title>
      <itunes:author>Redpoint Ventures</itunes:author>
      <itunes:duration>00:32:41</itunes:duration>
      <itunes:summary>Jordan and Erica chat with Matt Welsh, Fixie.ai co-founder and former Harvard CS professor, about his journey from researcher to Google eng leader and then to startup life prior to founding Fixie, a new platform for building LLM-based apps (a Redpoint portfolio company). We also talk about having Mark Zuckerberg in his CS class, ChatGPT Plugins and the AI ecosystem, and what the future might look like for our kids with AGI. 
You can find Matt on Twitter (@mdwelsh) and learn more about Fixie at https://www.fixie.ai/</itunes:summary>
      <itunes:subtitle>Jordan and Erica chat with Matt Welsh, Fixie.ai co-founder and former Harvard CS professor, about his journey from researcher to Google eng leader and then to startup life prior to founding Fixie, a new platform for building LLM-based apps (a Redpoint portfolio company). We also talk about having Mark Zuckerberg in his CS class, ChatGPT Plugins and the AI ecosystem, and what the future might look like for our kids with AGI. 
You can find Matt on Twitter (@mdwelsh) and learn more about Fixie at https://www.fixie.ai/</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>4</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">55979a09-e63c-432b-bdea-e296060e7091</guid>
      <title>Ep 3: NEAR CEO Illia Polosukhin on the Origins of the Transformer Paper and The Overlap Between AI and Crypto</title>
      <description><![CDATA[<p>Jacob and Jason sit down with NEAR CEO and Transformer paper author Illia Polosukhin. They discuss Illia’s fascinating journey from Ukraine to Google and AI to crypto, the origin story behind the “Attention Is All You Need” paper and the overlap between AI and crypto. Illia also shares his thoughts on AGI and the problems that excite him most in AI right now. You can find Illia on Twitter (@ilblackdrago) and learn more about NEAR (@nearprotocol)</p><p>(00:39) - Welcoming Illia; how he became interested in AI, transitioning into Crypto and explaining NEAR</p><p>(02:40) - Walking through Illia's story more in depth</p><p>(07:46) - How the Transformer Paper came to be</p><p>(11:24) - Understanding the Transformer Paper's impact</p><p>(18:28) - The overlap of Crypto and AI and how Illia sees the future of how they develop together</p><p>(26:47) - Illia's views on AGI</p><p>(30:46) - Optimism vs pessimism of the future of machine learning as a tool</p><p>(41:32) - What problems Illia sees in AI right now</p><p>(45:03) - Rapid fire questions</p><p>(47:36) - Where to learn more about NEAR and Illia</p><p> </p><p>With your co-hosts:</p><p>@jasoncwarner</p><p>- Former CTO GitHub, VP Eng Heroku & Canonical</p><p>@ericabrescia</p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)</p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p><p> </p>
]]></description>
      <pubDate>Tue, 21 Mar 2023 13:00:00 +0000</pubDate>
      <author>jeffron@redpoint.com (Redpoint Ventures)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-3-illia-polosukhin-co-founder-of-near-protocol-eL6nIpR_</link>
      <content:encoded><![CDATA[<p>Jacob and Jason sit down with NEAR CEO and Transformer paper author Illia Polosukhin. They discuss Illia’s fascinating journey from Ukraine to Google and AI to crypto, the origin story behind the “Attention Is All You Need” paper and the overlap between AI and crypto. Illia also shares his thoughts on AGI and the problems that excite him most in AI right now. You can find Illia on Twitter (@ilblackdrago) and learn more about NEAR (@nearprotocol)</p><p>(00:39) - Welcoming Illia; how he became interested in AI, transitioning into Crypto and explaining NEAR</p><p>(02:40) - Walking through Illia's story more in depth</p><p>(07:46) - How the Transformer Paper came to be</p><p>(11:24) - Understanding the Transformer Paper's impact</p><p>(18:28) - The overlap of Crypto and AI and how Illia sees the future of how they develop together</p><p>(26:47) - Illia's views on AGI</p><p>(30:46) - Optimism vs pessimism of the future of machine learning as a tool</p><p>(41:32) - What problems Illia sees in AI right now</p><p>(45:03) - Rapid fire questions</p><p>(47:36) - Where to learn more about NEAR and Illia</p><p> </p><p>With your co-hosts:</p><p>@jasoncwarner</p><p>- Former CTO GitHub, VP Eng Heroku & Canonical</p><p>@ericabrescia</p><p>- Former COO Github, Founder Bitnami (acq’d by VMWare)</p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p><p> </p>
]]></content:encoded>
      <enclosure length="46526424" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/d62e11ba-1685-4fee-a88d-566172438784/audio/68862f27-110e-4cf6-8616-7e7c2ca68faa/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 3: NEAR CEO Illia Polosukhin on the Origins of the Transformer Paper and The Overlap Between AI and Crypto</itunes:title>
      <itunes:author>Redpoint Ventures</itunes:author>
      <itunes:duration>00:48:27</itunes:duration>
      <itunes:summary>Jacob and Jason sit down with NEAR CEO and Transformer paper author Illia Polosukhin. They discuss Illia’s fascinating journey from Ukraine to Google and AI to crypto, the origin story behind the “Attention Is All You Need” paper and the overlap between AI and crypto. Illia also shares his thoughts on AGI and the problems that excite him most in AI right now.
You can find Illia on Twitter (@ilblackdrago) and learn more about NEAR (@nearprotocol).</itunes:summary>
      <itunes:subtitle>Jacob and Jason sit down with NEAR CEO and Transformer paper author Illia Polosukhin. They discuss Illia’s fascinating journey from Ukraine to Google and AI to crypto, the origin story behind the “Attention Is All You Need” paper and the overlap between AI and crypto. Illia also shares his thoughts on AGI and the problems that excite him most in AI right now.
You can find Illia on Twitter (@ilblackdrago) and learn more about NEAR (@nearprotocol).</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>3</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">cab0319a-8b38-4f27-9229-fb7c97e1ad96</guid>
      <title>Ep 2: Databricks CTO Matei Zaharia on scaling and orchestrating large language models</title>
      <description><![CDATA[<p>Patrick and Jacob sit down with Matei Zaharia, Co-Founder and CTO at Databricks and Professor at Stanford. They discuss how companies are training and serving models in production with Databricks, where LLMs fall short for search and how to improve them, state-of-the-art AI research at Stanford, and how the size and cost of models are likely to change with technological advances in the coming years.</p><p> </p><p>(0:00) - Introduction</p><p>(2:04) - Founding story of Databricks</p><p>(6:03) - PhD classmates using an early version of Spark for the Netflix competition</p><p>(6:55) - Building applications with MLflow</p><p>(9:55) - LLMs and ChatGPT</p><p>(12:05) - Working with and fine-tuning foundation models</p><p>(13:00) - Prompt engineering: here to stay or temporary?</p><p>(15:12) - Matei’s research at Stanford: the Demonstrate-Search-Predict framework (DSP)</p><p>(17:42) - How LLMs will be combined with classic information retrieval systems for world-class search</p><p>(19:38) - LLMs writing programs to orchestrate LLMs</p><p>(20:36) - Using LLMs in the Databricks cloud product</p><p>(24:21) - Scaling LLM training and serving</p><p>(27:29) - How much will the cost to train LLMs go down in the coming years?</p><p>(29:22) - How many parameters is too many?</p><p>(31:14) - Open source vs closed source?</p><p>(35:19) - Stanford AI research: Snorkel, ColBERT, and more</p><p>(38:58) - Matei getting a $50 Amazon gift card for weeks of work</p><p>(43:23) - Quick-fire round</p><p> </p><p>With your co-hosts:</p><p>@jasoncwarner</p><p>- Former CTO GitHub, VP Eng Heroku & Canonical</p><p> </p><p>@ericabrescia</p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)</p><p> </p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p> </p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p>
]]></description>
      <pubDate>Tue, 7 Mar 2023 14:00:00 +0000</pubDate>
      <author>jeffron@redpoint.com (Redpoint Ventures)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-2-databricks-cto-matei-zaharia-on-scaling-and-orchestrating-large-language-models-Cwbm0r4B</link>
      <content:encoded><![CDATA[<p>Patrick and Jacob sit down with Matei Zaharia, Co-Founder and CTO at Databricks and Professor at Stanford. They discuss how companies are training and serving models in production with Databricks, where LLMs fall short for search and how to improve them, state-of-the-art AI research at Stanford, and how the size and cost of models are likely to change with technological advances in the coming years.</p><p> </p><p>(0:00) - Introduction</p><p>(2:04) - Founding story of Databricks</p><p>(6:03) - PhD classmates using an early version of Spark for the Netflix competition</p><p>(6:55) - Building applications with MLflow</p><p>(9:55) - LLMs and ChatGPT</p><p>(12:05) - Working with and fine-tuning foundation models</p><p>(13:00) - Prompt engineering: here to stay or temporary?</p><p>(15:12) - Matei’s research at Stanford: the Demonstrate-Search-Predict framework (DSP)</p><p>(17:42) - How LLMs will be combined with classic information retrieval systems for world-class search</p><p>(19:38) - LLMs writing programs to orchestrate LLMs</p><p>(20:36) - Using LLMs in the Databricks cloud product</p><p>(24:21) - Scaling LLM training and serving</p><p>(27:29) - How much will the cost to train LLMs go down in the coming years?</p><p>(29:22) - How many parameters is too many?</p><p>(31:14) - Open source vs closed source?</p><p>(35:19) - Stanford AI research: Snorkel, ColBERT, and more</p><p>(38:58) - Matei getting a $50 Amazon gift card for weeks of work</p><p>(43:23) - Quick-fire round</p><p> </p><p>With your co-hosts:</p><p>@jasoncwarner</p><p>- Former CTO GitHub, VP Eng Heroku & Canonical</p><p> </p><p>@ericabrescia</p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)</p><p> </p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p> </p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p>
]]></content:encoded>
      <enclosure length="44554911" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/266e1cff-b2ca-43cb-a480-8f8b648fca4e/audio/bd234510-ecc4-486d-9867-4c96f0149edb/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 2: Databricks CTO Matei Zaharia on scaling and orchestrating large language models</itunes:title>
      <itunes:author>Redpoint Ventures</itunes:author>
      <itunes:duration>00:46:24</itunes:duration>
      <itunes:summary>Patrick and Jacob sit down with Matei Zaharia, Co-Founder and CTO at Databricks and Professor at Stanford. They discuss how companies are training and serving models in production with Databricks, where LLMs fall short for search and how to improve them, state-of-the-art AI research at Stanford, and how the size and cost of models are likely to change with technological advances in the coming years.</itunes:summary>
      <itunes:subtitle>Patrick and Jacob sit down with Matei Zaharia, Co-Founder and CTO at Databricks and Professor at Stanford. They discuss how companies are training and serving models in production with Databricks, where LLMs fall short for search and how to improve them, state-of-the-art AI research at Stanford, and how the size and cost of models are likely to change with technological advances in the coming years.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>2</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a8928203-07af-4832-9cac-e3e445fff077</guid>
      <title>Ep 1: Hugging Face CEO Clem Delangue on The Future of Open vs Closed Source in AI</title>
      <description><![CDATA[<p>Jacob and Jason sit down with Hugging Face CEO Clem Delangue and discuss trends in who’s using Hugging Face, the future of closed source vs open source in machine learning, why Clem compares large closed foundation models to Formula One cars, how enterprise AI teams will evolve, and AI safety.</p><p> </p><p>(0:00) - Introduction</p><p>(1:37) - Welcome Clem</p><p>(1:57) - Starting Hugging Face</p><p>(5:42) - Influence of ChatGPT</p><p>(15:47) - Use cases of large vs. small platforms</p><p>(18:44) - Should large language models be open?</p><p>(30:20) - What’s next for Hugging Face?</p><p>(43:06) - Rapid fire</p><p>(47:02) - Learn more about Hugging Face</p><p> </p><p>With your co-hosts:</p><p>@jasoncwarner</p><p>- Former CTO GitHub, VP Eng Heroku & Canonical</p><p> </p><p>@ericabrescia</p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)</p><p> </p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p> </p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p><p> </p>
]]></description>
      <pubDate>Wed, 22 Feb 2023 14:00:00 +0000</pubDate>
      <author>jeffron@redpoint.com (Redpoint Ventures)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/ep-1-clem-delangue-on-the-future-of-open-vs-closed-source-in-ai-MH6K69Mv</link>
      <content:encoded><![CDATA[<p>Jacob and Jason sit down with Hugging Face CEO Clem Delangue and discuss trends in who’s using Hugging Face, the future of closed source vs open source in machine learning, why Clem compares large closed foundation models to Formula One cars, how enterprise AI teams will evolve, and AI safety.</p><p> </p><p>(0:00) - Introduction</p><p>(1:37) - Welcome Clem</p><p>(1:57) - Starting Hugging Face</p><p>(5:42) - Influence of ChatGPT</p><p>(15:47) - Use cases of large vs. small platforms</p><p>(18:44) - Should large language models be open?</p><p>(30:20) - What’s next for Hugging Face?</p><p>(43:06) - Rapid fire</p><p>(47:02) - Learn more about Hugging Face</p><p> </p><p>With your co-hosts:</p><p>@jasoncwarner</p><p>- Former CTO GitHub, VP Eng Heroku & Canonical</p><p> </p><p>@ericabrescia</p><p>- Former COO GitHub, Founder Bitnami (acq’d by VMware)</p><p> </p><p>@patrickachase</p><p>- Partner at Redpoint, Former ML Engineer LinkedIn</p><p> </p><p>@jacobeffron</p><p>- Partner at Redpoint, Former PM Flatiron Health</p><p> </p>
]]></content:encoded>
      <enclosure length="46617121" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/c3a8c62a-2473-4f61-87cd-667282c96e4e/audio/f4e85d23-b578-44f0-aa8b-840c83816411/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Ep 1: Hugging Face CEO Clem Delangue on The Future of Open vs Closed Source in AI</itunes:title>
      <itunes:author>Redpoint Ventures</itunes:author>
      <itunes:duration>00:48:33</itunes:duration>
      <itunes:summary>Jacob and Jason sit down with Hugging Face CEO Clem Delangue and discuss trends in who’s using Hugging Face, the future of closed source vs open source in machine learning, why Clem compares large closed foundation models to Formula One cars, how enterprise AI teams will evolve, and AI safety.</itunes:summary>
      <itunes:subtitle>Jacob and Jason sit down with Hugging Face CEO Clem Delangue and discuss trends in who’s using Hugging Face, the future of closed source vs open source in machine learning, why Clem compares large closed foundation models to Formula One cars, how enterprise AI teams will evolve, and AI safety.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">69ad0c5d-ab01-4460-8c7a-674dc0ccc342</guid>
      <title>Unsupervised Learning: Trailer</title>
      <description><![CDATA[ 
]]></description>
      <pubDate>Thu, 16 Feb 2023 18:00:00 +0000</pubDate>
      <author>jeffron@redpoint.com (Redpoint Ventures)</author>
      <link>https://unsupervised-learning.simplecast.com/episodes/unsupervised-learning-trailer-iqKavapM</link>
      <enclosure length="2066852" type="audio/mpeg" url="https://cdn.simplecast.com/audio/2c08ad29-5b79-42c0-a40a-6c1af4327f2f/episodes/53c7bbbb-61c3-4e55-b9a3-86cf27e67fcc/audio/88071956-1a90-43d4-92c2-5cfba8426ae1/default_tc.mp3?aid=rss_feed&amp;feed=dOSE_bdP"/>
      <itunes:title>Unsupervised Learning: Trailer</itunes:title>
      <itunes:author>Redpoint Ventures</itunes:author>
      <itunes:duration>00:02:09</itunes:duration>
      <itunes:summary></itunes:summary>
      <itunes:subtitle></itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>trailer</itunes:episodeType>
    </item>
  </channel>
</rss>