<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:media="http://search.yahoo.com/mrss/" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link href="https://feeds.simplecast.com/IgzWks06" rel="self" title="MP3 Audio" type="application/rss+xml"/>
    <atom:link href="https://simplecast.superfeedr.com" rel="hub" xmlns="http://www.w3.org/2005/Atom"/>
    <generator>https://simplecast.com</generator>
    <title>The New Stack Podcast</title>
    <description>The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software.

For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack</description>
    <copyright>All rights reserved</copyright>
    <language>en</language>
    <pubDate>Thu, 16 Apr 2026 19:45:00 +0000</pubDate>
    <lastBuildDate>Thu, 16 Apr 2026 19:45:12 +0000</lastBuildDate>
    <image>
      <link>https://thenewstack.simplecast.com</link>
      <title>The New Stack Podcast</title>
      <url>https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/bb688835-10e4-4197-b01f-34221ccb5d38/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed</url>
    </image>
    <link>https://thenewstack.simplecast.com</link>
    <itunes:type>episodic</itunes:type>
    <itunes:summary>The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software.

For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack</itunes:summary>
    <itunes:author>The New Stack</itunes:author>
    <itunes:explicit>false</itunes:explicit>
    <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/bb688835-10e4-4197-b01f-34221ccb5d38/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
    <itunes:new-feed-url>https://feeds.simplecast.com/IgzWks06</itunes:new-feed-url>
    <itunes:keywords>developer, developers, technology, devops, thenewstack, tech, joab jackson, alex williams, software engineer, heather joslyn, cloud native, software engineer podcast, devops podcasts, software developer, kubernetes podcasts, devops podcast, software development, open source, software engineering, darryl taft, kubernetes, kubernetes podcast, open source podcast</itunes:keywords>
    <itunes:owner>
      <itunes:name>The New Stack Podcast</itunes:name>
      <itunes:email>podcasts@thenewstack.io</itunes:email>
    </itunes:owner>
    <itunes:category text="Technology"/>
    <itunes:category text="News">
      <itunes:category text="Tech News"/>
    </itunes:category>
    <item>
      <guid isPermaLink="false">42719812-2698-4f63-92fc-ac488598bc1e</guid>
      <title>As agentic AI explodes, Amazon doubles down on MCP</title>
      <description><![CDATA[<p>At the MCP Summit in New York City, Clare Liguori of Amazon Web Services discussed the rapid rise of the Model Context Protocol (MCP), now a leading way to connect AI agents with tools and data. Originally developed by Anthropic and later transferred to the Linux Foundation, MCP has seen surging enterprise adoption as agentic AI expands.</p>
<p>Liguori highlighted her dual role shaping MCP’s evolving specification, including work on integrating webhooks, events, and notifications to support always-on AI agents. AWS has actively contributed features like Tasks and Elicitations and offers managed MCP servers, positioning itself as both contributor and experimental platform for emerging capabilities.</p>
<p>This collaboration illustrates how corporate involvement can accelerate open-source innovation and adoption. Looking ahead, MCP’s role as connective infrastructure for AI agents is expected to grow, especially as tools become more accessible. With broader adoption of AI development platforms across non-engineering roles, MCP could help extend automation beyond tech teams to businesses of all sizes.</p>
<p>Learn more from The New Stack about the latest around the Model Context Protocol (MCP): </p>
<p><a href="https://thenewstack.io/mcp-the-missing-link-between-ai-agents-and-apis/" rel="noopener noreferrer">MCP: The Missing Link Between AI Agents and APIs</a></p>
<p><a href="https://thenewstack.io/model-context-protocol-evolution/" rel="noopener noreferrer">Beyond the vibe code: The steep mountain MCP must climb to reach production</a></p>
<p><a href="https://thenewstack.io/api-mcp-agent-integration/" rel="noopener noreferrer">MCP is everywhere, but don’t panic. Here’s why your existing APIs still matter.</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 16 Apr 2026 19:45:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Clare Liguori, AWS, Amazon, The New Stack, Alex Wilhelm)</author>
      <link>https://thenewstack.simplecast.com/episodes/as-agentic-ai-explodes-amazon-doubles-down-on-mcp-8Kx42nfQ</link>
      <content:encoded><![CDATA[<p>At the MCP Summit in New York City, Clare Liguori of Amazon Web Services discussed the rapid rise of the Model Context Protocol (MCP), now a leading way to connect AI agents with tools and data. Originally developed by Anthropic and later transferred to the Linux Foundation, MCP has seen surging enterprise adoption as agentic AI expands.</p>
<p>Liguori highlighted her dual role shaping MCP’s evolving specification, including work on integrating webhooks, events, and notifications to support always-on AI agents. AWS has actively contributed features like Tasks and Elicitations and offers managed MCP servers, positioning itself as both contributor and experimental platform for emerging capabilities.</p>
<p>This collaboration illustrates how corporate involvement can accelerate open-source innovation and adoption. Looking ahead, MCP’s role as connective infrastructure for AI agents is expected to grow, especially as tools become more accessible. With broader adoption of AI development platforms across non-engineering roles, MCP could help extend automation beyond tech teams to businesses of all sizes.</p>
<p>Learn more from The New Stack about the latest around the Model Context Protocol (MCP): </p>
<p><a href="https://thenewstack.io/mcp-the-missing-link-between-ai-agents-and-apis/" rel="noopener noreferrer">MCP: The Missing Link Between AI Agents and APIs</a></p>
<p><a href="https://thenewstack.io/model-context-protocol-evolution/" rel="noopener noreferrer">Beyond the vibe code: The steep mountain MCP must climb to reach production</a></p>
<p><a href="https://thenewstack.io/api-mcp-agent-integration/" rel="noopener noreferrer">MCP is everywhere, but don’t panic. Here’s why your existing APIs still matter.</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="23374828" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/317e9dbc-9a52-4da7-9725-c4578874b757/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/audio/group/74f4f57e-bcf9-4dca-bc88-0a910ea6cc3a/group-item/2120ad23-e83c-4900-968b-8622e8a6ba56/128_default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>As agentic AI explodes, Amazon doubles down on MCP</itunes:title>
      <itunes:author>Clare Liguori, AWS, Amazon, The New Stack, Alex Wilhelm</itunes:author>
      <itunes:duration>00:24:20</itunes:duration>
      <itunes:summary>At the MCP Summit in New York City, Clare Liguori of Amazon Web Services discussed the rapid rise of the Model Context Protocol (MCP), now a leading way to connect AI agents with tools and data. Originally developed by Anthropic and later transferred to the Linux Foundation, MCP has seen surging enterprise adoption as agentic AI expands.</itunes:summary>
      <itunes:subtitle>At the MCP Summit in New York City, Clare Liguori of Amazon Web Services discussed the rapid rise of the Model Context Protocol (MCP), now a leading way to connect AI agents with tools and data. Originally developed by Anthropic and later transferred to the Linux Foundation, MCP has seen surging enterprise adoption as agentic AI expands.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, ai developer, clare liguori, tech, amazon, mcp, the new stack makers, software engineer, alex wilhelm, mcp dev summit, aws, model context protocol, open source, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1607</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">62982f86-f89d-4899-b68e-4cf1c63b3f8f</guid>
      <title>A year in, Google wants its Axion processors to feel like a scheduling decision</title>
      <description><![CDATA[<p>At KubeCon Europe, Google Cloud’s Jago Macleod and Abdel Sghiouar argued that adopting Arm for Kubernetes workloads has shifted from a complex migration to a practical, low-friction choice. After a year of production use, Google’s custom Arm-based Axion processors—powering C4A and N4A instances—are positioned as broadly viable for most containerized applications, offering strong gains in performance, cost efficiency, and energy usage compared to x86.</p>
<p>Rather than requiring a full overhaul, moving to Arm typically involves recompiling containers for a multi-architecture target and gradually rolling out via Kubernetes practices like canary deployments. While edge cases exist, they are relatively uncommon.</p>
<p>A key enabler is GKE’s compute classes, which allow workloads to express preferences across VM types, turning infrastructure decisions into automated scheduling choices rather than manual provisioning.</p>
<p>Ultimately, the conversation points to a larger constraint: energy. As AI workloads grow, efficiency—measured in “tokens per watt”—is emerging as the defining metric, with cost savings translating directly into greater compute capacity.</p>
<p>Learn more from The New Stack about the latest developments around Google’s work with Axion: </p>
<p><a href="https://thenewstack.io/arm-see-a-demo-about-migrating-a-x86-based-app-to-arm64/" rel="noopener noreferrer">Arm: See a Demo About Migrating a x86-Based App to ARM64 </a></p>
<p><a href="https://thenewstack.io/do-all-your-ai-workloads-actually-require-expensive-gpus/" rel="noopener noreferrer">Do All Your AI Workloads Actually Require Expensive GPUs? </a><br><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Wed, 15 Apr 2026 22:30:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Google, Arm, Jago Macleod, Abdelfettah Sghiouar)</author>
      <link>https://thenewstack.simplecast.com/episodes/a-year-in-google-wants-its-axion-processors-to-feel-like-a-scheduling-decision-DUWQTp_6</link>
      <content:encoded><![CDATA[<p>At KubeCon Europe, Google Cloud’s Jago Macleod and Abdel Sghiouar argued that adopting Arm for Kubernetes workloads has shifted from a complex migration to a practical, low-friction choice. After a year of production use, Google’s custom Arm-based Axion processors—powering C4A and N4A instances—are positioned as broadly viable for most containerized applications, offering strong gains in performance, cost efficiency, and energy usage compared to x86.</p>
<p>Rather than requiring a full overhaul, moving to Arm typically involves recompiling containers for a multi-architecture target and gradually rolling out via Kubernetes practices like canary deployments. While edge cases exist, they are relatively uncommon.</p>
<p>A key enabler is GKE’s compute classes, which allow workloads to express preferences across VM types, turning infrastructure decisions into automated scheduling choices rather than manual provisioning.</p>
<p>Ultimately, the conversation points to a larger constraint: energy. As AI workloads grow, efficiency—measured in “tokens per watt”—is emerging as the defining metric, with cost savings translating directly into greater compute capacity.</p>
<p>Learn more from The New Stack about the latest developments around Google’s work with Axion: </p>
<p><a href="https://thenewstack.io/arm-see-a-demo-about-migrating-a-x86-based-app-to-arm64/" rel="noopener noreferrer">Arm: See a Demo About Migrating a x86-Based App to ARM64 </a></p>
<p><a href="https://thenewstack.io/do-all-your-ai-workloads-actually-require-expensive-gpus/" rel="noopener noreferrer">Do All Your AI Workloads Actually Require Expensive GPUs? </a><br><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="21408748" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/317e9dbc-9a52-4da7-9725-c4578874b757/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/audio/group/3ed9876b-fe2d-409c-bf61-c76dcbdf24d8/group-item/cfd165bf-25b7-408d-af67-4dab4457c5bc/128_default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>A year in, Google wants its Axion processors to feel like a scheduling decision</itunes:title>
      <itunes:author>Google, Arm, Jago Macleod, Abdelfettah Sghiouar</itunes:author>
      <itunes:duration>00:22:18</itunes:duration>
      <itunes:summary>At KubeCon Europe, Google Cloud’s Jago Macleod and Abdel Sghiouar argued that adopting Arm for Kubernetes workloads has shifted from a complex migration to a practical, low-friction choice. After a year of production use, Google’s custom Arm-based Axion processors—powering C4A and N4A instances—are positioned as broadly viable for most containerized applications, offering strong gains in performance, cost efficiency, and energy usage compared to x86.</itunes:summary>
      <itunes:subtitle>At KubeCon Europe, Google Cloud’s Jago Macleod and Abdel Sghiouar argued that adopting Arm for Kubernetes workloads has shifted from a complex migration to a practical, low-friction choice. After a year of production use, Google’s custom Arm-based Axion processors—powering C4A and N4A instances—are positioned as broadly viable for most containerized applications, offering strong gains in performance, cost efficiency, and energy usage compared to x86.</itunes:subtitle>
      <itunes:keywords>gpus, jago macleod, google, tech podcast, ai developer, axion, ai workloads, tech, abdelfettah sghiouar, kubernetes, software engineer, open source, ai infrastructure, arm, kubecon amsterdam 2026, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1606</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8787760a-aa44-4036-b024-9fa5888ef152</guid>
      <title>Can you make Kubernetes invisible? Here&apos;s why AWS is on a mission to do it.</title>
      <description><![CDATA[<p>In this episode of <i>The New Stack Makers</i>, Jesse Butler, principal product manager for AWS Elastic Kubernetes Service, shares his vision for simplifying cloud-native computing. Since joining AWS in 2020, Butler has focused on making Kubernetes easier to use, emphasizing open-source as a democratizing force. He highlights the role of the Cloud Native Computing Foundation (CNCF) in standardizing and governing open ecosystems while balancing community-driven innovation with commercial contributions.</p>
<p>Butler describes Kubernetes as widely adopted—used in production by around 80% of enterprises—yet still overly complex. His goal is to make it “invisible,” much like Linux, by abstracting and consolidating services. He points to projects like Karpenter, which enables real-time node provisioning for efficient scaling; Kro, which simplifies resource orchestration; and Cedar, a flexible policy engine for fine-grained authorization.</p>
<p>He underscores the importance of open-source contributors, noting their critical yet often underappreciated role. Looking ahead, Butler envisions a future where automation and human collaboration further enhance usability and innovation in open-source software.</p>
<p>Learn more from The New Stack about the latest around AWS Elastic Kubernetes Service: </p>
<p><a href="https://thenewstack.io/2026-will-be-the-year-of-agentic-workloads-in-production-on-amazon-eks/" rel="noopener noreferrer">2026 Will Be the Year of Agentic Workloads in Production on Amazon EKS</a></p>
<p><a href="https://thenewstack.io/eks-auto-mode-kubernetes/" rel="noopener noreferrer">Amazon EKS Auto Mode wants to end Kubernetes toil — one node at a time</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Tue, 14 Apr 2026 17:35:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Jesse Butler, Amazon, Amazon Web Services, Adrian Bridgwater, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/can-you-make-kubernetes-invisible-heres-why-aws-is-on-a-mission-to-do-it-OcYEcDAK</link>
      <content:encoded><![CDATA[<p>In this episode of <i>The New Stack Makers</i>, Jesse Butler, principal product manager for AWS Elastic Kubernetes Service, shares his vision for simplifying cloud-native computing. Since joining AWS in 2020, Butler has focused on making Kubernetes easier to use, emphasizing open-source as a democratizing force. He highlights the role of the Cloud Native Computing Foundation (CNCF) in standardizing and governing open ecosystems while balancing community-driven innovation with commercial contributions.</p>
<p>Butler describes Kubernetes as widely adopted—used in production by around 80% of enterprises—yet still overly complex. His goal is to make it “invisible,” much like Linux, by abstracting and consolidating services. He points to projects like Karpenter, which enables real-time node provisioning for efficient scaling; Kro, which simplifies resource orchestration; and Cedar, a flexible policy engine for fine-grained authorization.</p>
<p>He underscores the importance of open-source contributors, noting their critical yet often underappreciated role. Looking ahead, Butler envisions a future where automation and human collaboration further enhance usability and innovation in open-source software.</p>
<p>Learn more from The New Stack about the latest around AWS Elastic Kubernetes Service: </p>
<p><a href="https://thenewstack.io/2026-will-be-the-year-of-agentic-workloads-in-production-on-amazon-eks/" rel="noopener noreferrer">2026 Will Be the Year of Agentic Workloads in Production on Amazon EKS</a></p>
<p><a href="https://thenewstack.io/eks-auto-mode-kubernetes/" rel="noopener noreferrer">Amazon EKS Auto Mode wants to end Kubernetes toil — one node at a time</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="22313212" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/317e9dbc-9a52-4da7-9725-c4578874b757/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/audio/group/68c798e2-6100-4bd8-a940-4a18f7ccbb99/group-item/e0ca90b9-5398-4546-9658-1bad3314f26f/128_default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Can you make Kubernetes invisible? Here&apos;s why AWS is on a mission to do it.</itunes:title>
      <itunes:author>Jesse Butler, Amazon, Amazon Web Services, Adrian Bridgwater, The New Stack</itunes:author>
      <itunes:duration>00:23:14</itunes:duration>
      <itunes:summary>In this episode of The New Stack Makers, Jesse Butler, principal product manager for AWS Elastic Kubernetes Service, shares his vision for simplifying cloud-native computing. Since joining AWS in 2020, Butler has focused on making Kubernetes easier to use, emphasizing open-source as a democratizing force. He highlights the role of the Cloud Native Computing Foundation (CNCF) in standardizing and governing open ecosystems while balancing community-driven innovation with commercial contributions.</itunes:summary>
      <itunes:subtitle>In this episode of The New Stack Makers, Jesse Butler, principal product manager for AWS Elastic Kubernetes Service, shares his vision for simplifying cloud-native computing. Since joining AWS in 2020, Butler has focused on making Kubernetes easier to use, emphasizing open-source as a democratizing force. He highlights the role of the Cloud Native Computing Foundation (CNCF) in standardizing and governing open ecosystems while balancing community-driven innovation with commercial contributions.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, ai developer, jesse butler, tech, amazon, kubernetes, software engineer, open source, kubecon, kubecon amsterdam 2026, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1605</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a6c5800b-288f-42b1-b11e-e35f6d63bcd6</guid>
      <title>The next stages of AI conformance in the cloud-native, open-source world</title>
      <description><![CDATA[<p>Running AI models on Kubernetes has historically been inconsistent, with workloads behaving differently across cloud providers due to variations in GPUs, networking, and autoscaling. As organizations move AI from experimentation to production, standardization has become critical. In this episode of The New Stack Makers, Jonathan Bryce, Executive Director of the Cloud Native Computing Foundation, shared that the Foundation’s Kubernetes AI conformance program aims to solve this by ensuring portability, predictability, and production readiness for AI workloads across environments.</p>
<p>The initiative reflects a broader industry shift: AI is moving from training-heavy workloads to inference at scale, with inference expected to dominate compute usage by the end of the decade. Unlike batch-based training, inference requires real-time, always-on performance, making Kubernetes an attractive platform due to its elasticity, GPU-aware autoscaling, and observability.</p>
<p>The conformance program establishes baseline standards for handling accelerators like GPUs and TPUs, reducing vendor lock-in and simplifying deployment. Early adopters include major cloud providers and ecosystem players, while new projects like llm-d aim to bridge orchestration and inference. As requirements evolve, ongoing collaboration and recertification will ensure the standards stay aligned with real-world needs.</p>
<p>Learn more from The New Stack about the latest developments around The Cloud Native Computing Foundation’s Kubernetes AI conformance program:</p>
<p><a href="https://thenewstack.io/cncf-kubernetes-is-foundational-infrastructure-for-ai/" rel="noopener noreferrer">CNCF: Kubernetes is ‘foundational’ infrastructure for AI</a></p>
<p><a href="https://thenewstack.io/kubernetes-gets-an-ai-conformance-program-and-vmware-is-already-on-board/" rel="noopener noreferrer">Kubernetes Gets an AI Conformance Program — and VMware Is Already On Board</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 09 Apr 2026 16:30:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Jonathan Bryce, CNCF, Jennifer Riggins, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-next-stages-of-ai-conformance-in-the-cloud-native-open-source-world-8YUCToxP</link>
      <content:encoded><![CDATA[<p>Running AI models on Kubernetes has historically been inconsistent, with workloads behaving differently across cloud providers due to variations in GPUs, networking, and autoscaling. As organizations move AI from experimentation to production, standardization has become critical. In this episode of The New Stack Makers, Jonathan Bryce, Executive Director of the Cloud Native Computing Foundation, shared that the Foundation’s Kubernetes AI conformance program aims to solve this by ensuring portability, predictability, and production readiness for AI workloads across environments.</p>
<p>The initiative reflects a broader industry shift: AI is moving from training-heavy workloads to inference at scale, with inference expected to dominate compute usage by the end of the decade. Unlike batch-based training, inference requires real-time, always-on performance, making Kubernetes an attractive platform due to its elasticity, GPU-aware autoscaling, and observability.</p>
<p>The conformance program establishes baseline standards for handling accelerators like GPUs and TPUs, reducing vendor lock-in and simplifying deployment. Early adopters include major cloud providers and ecosystem players, while new projects like llm-d aim to bridge orchestration and inference. As requirements evolve, ongoing collaboration and recertification will ensure the standards stay aligned with real-world needs.</p>
<p>Learn more from The New Stack about the latest developments around The Cloud Native Computing Foundation’s Kubernetes AI conformance program:</p>
<p><a href="https://thenewstack.io/cncf-kubernetes-is-foundational-infrastructure-for-ai/" rel="noopener noreferrer">CNCF: Kubernetes is ‘foundational’ infrastructure for AI</a></p>
<p><a href="https://thenewstack.io/kubernetes-gets-an-ai-conformance-program-and-vmware-is-already-on-board/" rel="noopener noreferrer">Kubernetes Gets an AI Conformance Program — and VMware Is Already On Board</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="24001767" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/317e9dbc-9a52-4da7-9725-c4578874b757/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/audio/group/7954b850-2f7f-4b1a-9e79-a04c26b1f9f0/group-item/3cad732c-d126-42cd-b18c-bcffaeb246e8/128_default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The next stages of AI conformance in the cloud-native, open-source world</itunes:title>
      <itunes:author>Jonathan Bryce, CNCF, Jennifer Riggins, The New Stack</itunes:author>
      <itunes:duration>00:25:00</itunes:duration>
      <itunes:summary>Running AI models on Kubernetes has historically been inconsistent, with workloads behaving differently across cloud providers due to variations in GPUs, networking, and autoscaling. As organizations move AI from experimentation to production, standardization has become critical. In this episode of The New Stack Makers, Jonathan Bryce, Executive Director of the Cloud Native Computing Foundation, shared that the Foundation’s Kubernetes AI conformance program aims to solve this by ensuring portability, predictability, and production readiness for AI workloads across environments.</itunes:summary>
      <itunes:subtitle>Running AI models on Kubernetes has historically been inconsistent, with workloads behaving differently across cloud providers due to variations in GPUs, networking, and autoscaling. As organizations move AI from experimentation to production, standardization has become critical. In this episode of The New Stack Makers, Jonathan Bryce, Executive Director of the Cloud Native Computing Foundation, shared that the Foundation’s Kubernetes AI conformance program aims to solve this by ensuring portability, predictability, and production readiness for AI workloads across environments.</itunes:subtitle>
      <itunes:keywords>ai conformance program, software developer, tech podcast, the new stack, ai developer, ai workloads, tech, kubernetes, the new stack makers, software engineer, open source, jonathan bryce, cncf, kubecon amsterdam, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1604</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">22cf6552-2bac-4360-b16b-a8b495151ace</guid>
      <title>Microsoft wants to make service mesh invisible</title>
      <description><![CDATA[<p>At KubeCon EU 2026, Mitch Connors of Microsoft outlined a vision to make service meshes effectively invisible to users. Now working on Azure Kubernetes Application Network, a fully managed service built on Istio’s ambient mode, Connors aims to deliver core capabilities like mTLS without requiring users to engage with the complexity traditionally associated with service meshes. Ambient mode eliminates sidecar upgrade challenges by shifting functionality to node-level and waypoint proxies, though adoption still faces hurdles, including lagging CVE patching.</p>
<p>Connors emphasized that AI workloads are reshaping network demands, as request variability in large language models requires smarter routing and resource management. Istio is addressing this through a two-speed model: stable APIs for reliability and experimental integrations like Agent Gateway for emerging AI protocols. Features such as inference-aware routing and policy enforcement for approved LLM endpoints highlight the mesh’s growing role in AI governance.</p>
<p>With multi-cluster support and GPU scarcity driving workload mobility, Microsoft’s approach bets that simplifying and abstracting the mesh will broaden adoption while meeting the evolving needs of AI-driven systems.</p>
<p>Learn more from The New Stack about service meshes: </p>
<p><a href="https://thenewstack.io/the-hidden-costs-of-service-meshes/" rel="noopener noreferrer">The Hidden Costs of Service Meshes</a></p>
<p><a href="https://thenewstack.io/all-the-things-a-service-mesh-can-do/" rel="noopener noreferrer">All the Things a Service Mesh Can Do</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Wed, 8 Apr 2026 17:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Mitch Connors, Microsoft, The New Stack, Frederic Lardinois)</author>
      <link>https://thenewstack.simplecast.com/episodes/microsoft-wants-to-make-service-mesh-invisible-uRk0gFG8</link>
      <content:encoded><![CDATA[<p>At KubeCon EU 2026, Mitch Connors of Microsoft outlined a vision to make service meshes effectively invisible to users. Now working on Azure Kubernetes Application Network, a fully managed service built on Istio’s ambient mode, Connors aims to deliver core capabilities like mTLS without requiring users to engage with the complexity traditionally associated with service meshes. Ambient mode eliminates sidecar upgrade challenges by shifting functionality to node-level and waypoint proxies, though adoption still faces hurdles, including lagging CVE patching.</p>
<p>Connors emphasized that AI workloads are reshaping network demands, as request variability in large language models requires smarter routing and resource management. Istio is addressing this through a two-speed model: stable APIs for reliability and experimental integrations like Agent Gateway for emerging AI protocols. Features such as inference-aware routing and policy enforcement for approved LLM endpoints highlight the mesh’s growing role in AI governance.</p>
<p>With multi-cluster support and GPU scarcity driving workload mobility, Microsoft’s approach bets that simplifying and abstracting the mesh will broaden adoption while meeting the evolving needs of AI-driven systems.</p>
<p>Learn more from The New Stack about service meshes: </p>
<p><a href="https://thenewstack.io/the-hidden-costs-of-service-meshes/" rel="noopener noreferrer">The Hidden Costs of Service Meshes</a></p>
<p><a href="https://thenewstack.io/all-the-things-a-service-mesh-can-do/" rel="noopener noreferrer">All the Things a Service Mesh Can Do</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="20496761" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/317e9dbc-9a52-4da7-9725-c4578874b757/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/audio/group/b7222906-27a4-4e2c-bf98-81ac762f89c3/group-item/9fdbef82-4703-40e3-9204-c546e7d54ab7/128_default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Microsoft wants to make service mesh invisible</itunes:title>
      <itunes:author>Mitch Connors, Microsoft, The New Stack, Frederic Lardinois</itunes:author>
      <itunes:duration>00:21:21</itunes:duration>
      <itunes:summary>At KubeCon EU 2026, Mitch Connors of Microsoft outlined a vision to make service meshes effectively invisible to users. Now working on Azure Kubernetes Application Network, a fully managed service built on Istio’s ambient mode, Connors aims to deliver core capabilities like mTLS without requiring users to engage with the complexity traditionally associated with service meshes. Ambient mode eliminates sidecar upgrade challenges by shifting functionality to node-level and waypoint proxies, though adoption still faces hurdles, including lagging CVE patching.</itunes:summary>
      <itunes:subtitle>At KubeCon EU 2026, Mitch Connors of Microsoft outlined a vision to make service meshes effectively invisible to users. Now working on Azure Kubernetes Application Network, a fully managed service built on Istio’s ambient mode, Connors aims to deliver core capabilities like mTLS without requiring users to engage with the complexity traditionally associated with service meshes. Ambient mode eliminates sidecar upgrade challenges by shifting functionality to node-level and waypoint proxies, though adoption still faces hurdles, including lagging CVE patching.</itunes:subtitle>
      <itunes:keywords>mitch connors, software developer, tech podcast, the new stack, ai developer, microsoft, tech, kubernetes, the new stack makers, software engineer, service mesh, open source, networking, kubecon amsterdam, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1603</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ffb040c8-7931-45b2-9949-ab801e7a84f2</guid>
      <title>Amazon EKS Auto Mode wants to end Kubernetes toil — one node at a time</title>
      <description><![CDATA[<p>At KubeCon + CloudNativeCon Europe 2026 in Amsterdam, Alex Kestner, principal product manager for Amazon Elastic Kubernetes Service (EKS), discussed how Amazon EKS Auto Mode aims to reduce the operational burden of running Kubernetes at scale. While Kubernetes delivers significant power, it also introduces complexity—particularly through repetitive, day-to-day tasks like managing node lifecycles, ensuring security updates, and selecting optimal infrastructure.</p>
<p>Kestner emphasized that much of this “undifferentiated heavy lifting” distracts platform teams from delivering business value. Amazon EKS Auto Mode addresses this by automating infrastructure operations across the full node lifecycle, shifting responsibility for key operational components outside the cluster and into AWS-managed services.</p>
<p>Built in collaboration with the EC2 team and leveraging technologies like Karpenter, Auto Mode dynamically provisions right-sized compute resources based on workload requirements. While it doesn’t eliminate all challenges—such as unpredictable workloads or diverse deployment needs—it provides a more application-focused approach to scaling and cost optimization. Ultimately, Auto Mode represents a meaningful step toward simplifying Kubernetes operations in increasingly complex cloud-native environments.</p>
<p>Learn more from The New Stack about the latest developments with Amazon Elastic Kubernetes Service (EKS):</p>
<p><a href="https://thenewstack.io/2026-will-be-the-year-of-agentic-workloads-in-production-on-amazon-eks/" rel="noopener noreferrer">2026 Will Be the Year of Agentic Workloads in Production on Amazon EKS</a></p>
<p><a href="https://thenewstack.io/how-amazon-eks-auto-mode-simplifies-kubernetes-cluster-management-part-1/" rel="noopener noreferrer">How Amazon EKS Auto Mode Simplifies Kubernetes Cluster Management (Part 1)</a></p>
<p><a href="https://thenewstack.io/a-deep-dive-into-amazon-eks-auto-part-2/" rel="noopener noreferrer">A Deep Dive Into Amazon EKS Auto (Part 2)</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Tue, 07 Apr 2026 17:45:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Amazon EKS, AWS, Amazon Web Services, Adrian Bridgwater, Alex Kestner)</author>
      <link>https://thenewstack.simplecast.com/episodes/aws-eks-auto-mode-wants-to-end-kubernetes-toil-one-node-at-a-time-5FgJVaYP</link>
      <content:encoded><![CDATA[<p>At KubeCon + CloudNativeCon Europe 2026 in Amsterdam, Alex Kestner, principal product manager for Amazon Elastic Kubernetes Service (EKS), discussed how Amazon EKS Auto Mode aims to reduce the operational burden of running Kubernetes at scale. While Kubernetes delivers significant power, it also introduces complexity—particularly through repetitive, day-to-day tasks like managing node lifecycles, ensuring security updates, and selecting optimal infrastructure.</p>
<p>Kestner emphasized that much of this “undifferentiated heavy lifting” distracts platform teams from delivering business value. Amazon EKS Auto Mode addresses this by automating infrastructure operations across the full node lifecycle, shifting responsibility for key operational components outside the cluster and into AWS-managed services.</p>
<p>Built in collaboration with the EC2 team and leveraging technologies like Karpenter, Auto Mode dynamically provisions right-sized compute resources based on workload requirements. While it doesn’t eliminate all challenges—such as unpredictable workloads or diverse deployment needs—it provides a more application-focused approach to scaling and cost optimization. Ultimately, Auto Mode represents a meaningful step toward simplifying Kubernetes operations in increasingly complex cloud-native environments.</p>
<p>Learn more from The New Stack about the latest developments with Amazon Elastic Kubernetes Service (EKS):</p>
<p><a href="https://thenewstack.io/2026-will-be-the-year-of-agentic-workloads-in-production-on-amazon-eks/" rel="noopener noreferrer">2026 Will Be the Year of Agentic Workloads in Production on Amazon EKS</a></p>
<p><a href="https://thenewstack.io/how-amazon-eks-auto-mode-simplifies-kubernetes-cluster-management-part-1/" rel="noopener noreferrer">How Amazon EKS Auto Mode Simplifies Kubernetes Cluster Management (Part 1)</a></p>
<p><a href="https://thenewstack.io/a-deep-dive-into-amazon-eks-auto-part-2/" rel="noopener noreferrer">A Deep Dive Into Amazon EKS Auto (Part 2)</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="21623161" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/317e9dbc-9a52-4da7-9725-c4578874b757/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/audio/group/5973d659-bdfe-468c-80db-bf537faa002a/group-item/92ba1af0-bcff-4f29-8475-a03bc9ebd286/128_default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Amazon EKS Auto Mode wants to end Kubernetes toil — one node at a time</itunes:title>
      <itunes:author>Amazon EKS, AWS, Amazon Web Services, Adrian Bridgwater, Alex Kestner</itunes:author>
      <itunes:duration>00:22:31</itunes:duration>
      <itunes:summary>At KubeCon + CloudNativeCon Europe 2026 in Amsterdam, Alex Kestner, principal product manager for Amazon Elastic Kubernetes Service (EKS), discussed how Amazon EKS Auto Mode aims to reduce the operational burden of running Kubernetes at scale. While Kubernetes delivers significant power, it also introduces complexity—particularly through repetitive, day-to-day tasks like managing node lifecycles, ensuring security updates, and selecting optimal infrastructure.</itunes:summary>
      <itunes:subtitle>At KubeCon + CloudNativeCon Europe 2026 in Amsterdam, Alex Kestner, principal product manager for Amazon Elastic Kubernetes Service (EKS), discussed how Amazon EKS Auto Mode aims to reduce the operational burden of running Kubernetes at scale. While Kubernetes delivers significant power, it also introduces complexity—particularly through repetitive, day-to-day tasks like managing node lifecycles, ensuring security updates, and selecting optimal infrastructure.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, amazon eks auto mode, ai developer, cloud native, amazon web services, tech, kubernetes, the new stack makers, software engineer, open source, alex kestner, amazon eks, adrian bridgwater, aws, kubecon amsterdam, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1602</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">057db5ea-d6b9-443c-a0e1-7822f2152195</guid>
      <title>Edge-forward: Akamai eyes sweet spot between centralized &amp; decentralized AI inference</title>
      <description><![CDATA[<p>At KubeCon + CloudNativeCon Europe 2026, Lena Hall and Thorsten Hans of Akamai outlined how the company is evolving from a CDN provider into a developer-focused cloud platform for AI. Akamai’s strategy centers on low-latency, distributed computing, combining managed Kubernetes, serverless functions, and a distributed AI inference platform to support modern workloads.</p>
<p>With a global footprint of core and “distributed reach” datacenters, Akamai aims to bring compute closer to users while still leveraging centralized infrastructure for heavier processing. This hybrid model enables faster feedback loops critical for applications like fraud detection, robotics, and conversational AI.</p>
<p>To address concerns about complexity, Akamai emphasizes managed infrastructure and self-service tools that abstract away integration challenges. Its platform supports open source through managed Kubernetes and pre-packaged tools, simplifying deployment.</p>
<p>Akamai also invests in serverless technologies like WebAssembly-based functions, enabling developers to build and deploy globally distributed applications quickly. Overall, the company prioritizes developer experience, allowing teams to focus on application logic rather than infrastructure management.</p>
<p>Learn more from The New Stack about the latest developments around how Akamai is transforming into a developer-focused cloud platform for AI:</p>
<p><a href="https://thenewstack.io/akamai-picks-up-hosting-for-kernel-org/" rel="noopener noreferrer">Akamai Picks Up Hosting for Kernel.org</a></p>
<p><a href="https://thenewstack.io/should-you-care-about-fermyon-wasm-functions-on-akamai/" rel="noopener noreferrer">Should You Care About Fermyon Wasm Functions on Akamai?</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></description>
      <pubDate>Wed, 01 Apr 2026 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Thorsten Hans, Lena Hall, Akamai, Adrian Bridgwater)</author>
      <link>https://thenewstack.simplecast.com/episodes/edge-forward-akamai-eyes-sweet-spot-between-centralized-decentralized-ai-inference-u17rX_Ty-C4jZwCZe</link>
      <content:encoded><![CDATA[<p>At KubeCon + CloudNativeCon Europe 2026, Lena Hall and Thorsten Hans of Akamai outlined how the company is evolving from a CDN provider into a developer-focused cloud platform for AI. Akamai’s strategy centers on low-latency, distributed computing, combining managed Kubernetes, serverless functions, and a distributed AI inference platform to support modern workloads.</p>
<p>With a global footprint of core and “distributed reach” datacenters, Akamai aims to bring compute closer to users while still leveraging centralized infrastructure for heavier processing. This hybrid model enables faster feedback loops critical for applications like fraud detection, robotics, and conversational AI.</p>
<p>To address concerns about complexity, Akamai emphasizes managed infrastructure and self-service tools that abstract away integration challenges. Its platform supports open source through managed Kubernetes and pre-packaged tools, simplifying deployment.</p>
<p>Akamai also invests in serverless technologies like WebAssembly-based functions, enabling developers to build and deploy globally distributed applications quickly. Overall, the company prioritizes developer experience, allowing teams to focus on application logic rather than infrastructure management.</p>
<p>Learn more from The New Stack about the latest developments around how Akamai is transforming into a developer-focused cloud platform for AI:</p>
<p><a href="https://thenewstack.io/akamai-picks-up-hosting-for-kernel-org/" rel="noopener noreferrer">Akamai Picks Up Hosting for Kernel.org</a></p>
<p><a href="https://thenewstack.io/should-you-care-about-fermyon-wasm-functions-on-akamai/" rel="noopener noreferrer">Should You Care About Fermyon Wasm Functions on Akamai?</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></content:encoded>
      <enclosure length="21157972" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/317e9dbc-9a52-4da7-9725-c4578874b757/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/audio/group/4b61f2cf-788f-4b55-9531-6235ee50493a/group-item/4799d60a-b777-44a3-b51f-b22280a47545/128_default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Edge-forward: Akamai eyes sweet spot between centralized &amp; decentralized AI inference</itunes:title>
      <itunes:author>Thorsten Hans, Lena Hall, Akamai, Adrian Bridgwater</itunes:author>
      <itunes:duration>00:22:02</itunes:duration>
      <itunes:summary>At KubeCon + CloudNativeCon Europe 2026, Lena Hall and Thorsten Hans of Akamai outlined how the company is evolving from a CDN provider into a developer-focused cloud platform for AI. Akamai’s strategy centers on low-latency, distributed computing, combining managed Kubernetes, serverless functions, and a distributed AI inference platform to support modern workloads.</itunes:summary>
      <itunes:subtitle>At KubeCon + CloudNativeCon Europe 2026, Lena Hall and Thorsten Hans of Akamai outlined how the company is evolving from a CDN provider into a developer-focused cloud platform for AI. Akamai’s strategy centers on low-latency, distributed computing, combining managed Kubernetes, serverless functions, and a distributed AI inference platform to support modern workloads.</itunes:subtitle>
      <itunes:keywords>lena hall, the new stack, akamai, the new stack makers, open source, adrian bridgwater, kubecon, thorsten hans, kubecon amsterdam 2026</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1601</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5f9567bc-7f9e-4cd1-a6ac-0d23621321b9</guid>
      <title>Kubernetes co-founder Brendan Burns: AI-generated code will become as invisible as assembly</title>
      <description><![CDATA[<p>In this episode of The New Stack Makers, Microsoft Corporate Vice President and Technical Fellow Brendan Burns discusses how AI is reshaping Kubernetes and modern infrastructure. Originally designed for stateless applications, Kubernetes is evolving to support AI workloads that require complex GPU scheduling, co-location, and failure sensitivity. Features like Dynamic Resource Allocation and projects such as KAITO introduce AI-specific capabilities, while maintaining Kubernetes’ core strength: vendor-neutral extensibility.</p>
<p>Burns highlights that AI also changes how systems are monitored. Success is no longer binary; it depends on answer quality, user feedback, and large-scale testing using thousands of prompts and even AI evaluators. </p>
<p>On software development, Burns argues that the industry’s focus on reviewing AI-generated code is temporary. Just as developers stopped inspecting compiler output, AI-generated code will become a disposable artifact validated by tests and specifications. This shift will redefine engineering roles and may lead to programming languages designed for machines rather than humans, signaling a fundamental transformation in how software is built and maintained.</p>
<p>Learn more from The New Stack about the latest developments around how AI is reshaping Kubernetes and modern infrastructure:</p>
<p><a href="https://thenewstack.io/how-to-use-ai-to-design-intelligent-adaptable-infrastructure/" rel="noopener noreferrer">How To Use AI To Design Intelligent, Adaptable Infrastructure</a></p>
<p><a href="https://thenewstack.io/ai-infrastructure-crisis-roadmap/" rel="noopener noreferrer">The AI Infrastructure Crisis: When Ambition Meets Ancient Systems</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Tue, 24 Mar 2026 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Brendan Burns, The New Stack, Microsoft, Frederic Lardinois)</author>
      <link>https://thenewstack.simplecast.com/episodes/kubernetes-co-founder-brendan-burns-ai-generated-code-will-become-as-invisible-as-assembly-QxRJCB28</link>
      <content:encoded><![CDATA[<p>In this episode of The New Stack Makers, Microsoft Corporate Vice President and Technical Fellow Brendan Burns discusses how AI is reshaping Kubernetes and modern infrastructure. Originally designed for stateless applications, Kubernetes is evolving to support AI workloads that require complex GPU scheduling, co-location, and failure sensitivity. Features like Dynamic Resource Allocation and projects such as KAITO introduce AI-specific capabilities, while maintaining Kubernetes’ core strength: vendor-neutral extensibility.</p>
<p>Burns highlights that AI also changes how systems are monitored. Success is no longer binary; it depends on answer quality, user feedback, and large-scale testing using thousands of prompts and even AI evaluators. </p>
<p>On software development, Burns argues that the industry’s focus on reviewing AI-generated code is temporary. Just as developers stopped inspecting compiler output, AI-generated code will become a disposable artifact validated by tests and specifications. This shift will redefine engineering roles and may lead to programming languages designed for machines rather than humans, signaling a fundamental transformation in how software is built and maintained.</p>
<p>Learn more from The New Stack about the latest developments around how AI is reshaping Kubernetes and modern infrastructure:</p>
<p><a href="https://thenewstack.io/how-to-use-ai-to-design-intelligent-adaptable-infrastructure/" rel="noopener noreferrer">How To Use AI To Design Intelligent, Adaptable Infrastructure</a></p>
<p><a href="https://thenewstack.io/ai-infrastructure-crisis-roadmap/" rel="noopener noreferrer">The AI Infrastructure Crisis: When Ambition Meets Ancient Systems</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="41968160" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/317e9dbc-9a52-4da7-9725-c4578874b757/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/audio/group/9dd48691-3147-4cdf-86cd-ca1ca381866b/group-item/8aca5000-5782-4b1d-b91e-e62eed95b81b/128_default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Kubernetes co-founder Brendan Burns: AI-generated code will become as invisible as assembly</itunes:title>
      <itunes:author>Brendan Burns, The New Stack, Microsoft, Frederic Lardinois</itunes:author>
      <itunes:duration>00:43:42</itunes:duration>
      <itunes:summary>In this episode of The New Stack Makers, Microsoft Corporate Vice President and Technical Fellow Brendan Burns discusses how AI is reshaping Kubernetes and modern infrastructure. Originally designed for stateless applications, Kubernetes is evolving to support AI workloads that require complex GPU scheduling, co-location, and failure sensitivity. Features like Dynamic Resource Allocation and projects such as KAITO introduce AI-specific capabilities, while maintaining Kubernetes’ core strength: vendor-neutral extensibility.</itunes:summary>
      <itunes:subtitle>In this episode of The New Stack Makers, Microsoft Corporate Vice President and Technical Fellow Brendan Burns discusses how AI is reshaping Kubernetes and modern infrastructure. Originally designed for stateless applications, Kubernetes is evolving to support AI workloads that require complex GPU scheduling, co-location, and failure sensitivity. Features like Dynamic Resource Allocation and projects such as KAITO introduce AI-specific capabilities, while maintaining Kubernetes’ core strength: vendor-neutral extensibility.</itunes:subtitle>
      <itunes:keywords>frederic lardinois, software developer, tech podcast, the new stack, ai developer, modern infrastructure, microsoft, tech, kubernetes, the new stack makers, software engineer, brendan burns, founder of kubernetes, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1598</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6a3896ca-7a1a-4b38-ab76-a510cd10fc17</guid>
      <title>AI can write your infrastructure code. There&apos;s a reason most teams won&apos;t let it.</title>
      <description><![CDATA[<p>In this episode of <i>The New Stack Agents</i>, Marcin Wyszynski, co-founder of Spacelift and OpenTofu, explains how AI is transforming infrastructure as code (IaC). Originally built for individual operators, tools like Terraform struggled to scale across teams, prompting Wyszynski to help launch OpenTofu after HashiCorp’s 2023 license change. Now, the bigger shift is AI: engineers no longer write configuration in languages like HCL manually, as AI tools generate it, dramatically lowering the barrier to entry.</p>
<p>However, this creates a dangerous gap between generating infrastructure and truly understanding it—like using a phrasebook to ask questions in a foreign language but not understanding the response. In infrastructure, that lack of comprehension can lead to serious risks.</p>
<p>To address this, Spacelift introduced Intent, which allows AI to directly interact with cloud systems in real time while enforcing deterministic guardrails through policy controls. The broader challenge remains balancing speed with control—enabling faster experimentation without sacrificing safety. Wyszynski argues that, like humans, AI can be trusted when constrained by strong guardrails.</p>
<p>Learn more from The New Stack about the latest developments around how AI is transforming infrastructure as code (IaC).</p>
<p><a href="https://thenewstack.io/the-maturing-state-of-infrastructure-as-code-in-2025/" rel="noopener noreferrer">The Maturing State of Infrastructure as Code in 2025</a></p>
<p><a href="https://thenewstack.io/generative-ai-tools-for-infrastructure-as-code/" rel="noopener noreferrer">Generative AI Tools for Infrastructure as Code</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Fri, 20 Mar 2026 10:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Marcin Wyszyński, The New Stack, Spacelift, Frederic Lardinois)</author>
      <link>https://thenewstack.simplecast.com/episodes/ai-can-write-your-infrastructure-code-theres-a-reason-most-teams-wont-let-it-mcNd_UGN</link>
      <content:encoded><![CDATA[<p>In this episode of <i>The New Stack Agents</i>, Marcin Wyszynski, co-founder of Spacelift and OpenTofu, explains how AI is transforming infrastructure as code (IaC). Originally built for individual operators, tools like Terraform struggled to scale across teams, prompting Wyszynski to help launch OpenTofu after HashiCorp’s 2023 license change. Now, the bigger shift is AI: engineers no longer write configuration in languages like HCL manually, as AI tools generate it, dramatically lowering the barrier to entry.</p>
<p>However, this creates a dangerous gap between generating infrastructure and truly understanding it—like using a phrasebook to ask questions in a foreign language but not understanding the response. In infrastructure, that lack of comprehension can lead to serious risks.</p>
<p>To address this, Spacelift introduced Intent, which allows AI to directly interact with cloud systems in real time while enforcing deterministic guardrails through policy controls. The broader challenge remains balancing speed with control—enabling faster experimentation without sacrificing safety. Wyszynski argues that, like humans, AI can be trusted when constrained by strong guardrails.</p>
<p>Learn more from The New Stack about the latest developments around how AI is transforming infrastructure as code (IaC).</p>
<p><a href="https://thenewstack.io/the-maturing-state-of-infrastructure-as-code-in-2025/" rel="noopener noreferrer">The Maturing State of Infrastructure as Code in 2025</a></p>
<p><a href="https://thenewstack.io/generative-ai-tools-for-infrastructure-as-code/" rel="noopener noreferrer">Generative AI Tools for Infrastructure as Code</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="28183030" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/317e9dbc-9a52-4da7-9725-c4578874b757/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/audio/group/9a66bed6-b8d1-4ef9-81e0-31e196647f81/group-item/a5afdaab-70bd-4d09-a555-903bc8da3931/128_default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>AI can write your infrastructure code. There&apos;s a reason most teams won&apos;t let it.</itunes:title>
      <itunes:author>Marcin Wyszyński, The New Stack, Spacelift, Frederic Lardinois</itunes:author>
      <itunes:duration>00:29:21</itunes:duration>
      <itunes:summary>In this episode of The New Stack Agents, Marcin Wyszynski, co-founder of Spacelift and OpenTofu, explains how AI is transforming infrastructure as code (IaC). Originally built for individual operators, tools like Terraform struggled to scale across teams, prompting Wyszynski to help launch OpenTofu after HashiCorp’s 2023 license change. Now, the bigger shift is AI: engineers no longer write configuration languages like HCL manually, as AI tools generate it, dramatically lowering the barrier to entry.</itunes:summary>
      <itunes:subtitle>In this episode of The New Stack Agents, Marcin Wyszynski, co-founder of Spacelift and OpenTofu, explains how AI is transforming infrastructure as code (IaC). Originally built for individual operators, tools like Terraform struggled to scale across teams, prompting Wyszynski to help launch OpenTofu after HashiCorp’s 2023 license change. Now, the bigger shift is AI: engineers no longer write configuration languages like HCL manually, as AI tools generate it, dramatically lowering the barrier to entry.</itunes:subtitle>
      <itunes:keywords>frederic lardinois, software developer, ai agents, tech podcast, the new stack, ai developer, marcin wyszyński, tech, infrastructure as code, the new stack agents, spacelift, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1596</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2955fe71-7185-4237-a336-653aa0b3930b</guid>
      <title>OutSystems CEO on how enterprises can successfully adopt vibe coding</title>
      <description><![CDATA[<p>Woodson Martin, CEO of OutSystems, argues that successful enterprise AI deployments rarely rely on standalone agents. Instead, production systems combine AI agents with data, workflows, APIs, applications, and human oversight. While claims that “95% of agent pilots fail” are common, Martin suggests many of those pilots were simply low-commitment experiments made possible by the low cost of testing AI. Enterprises that succeed typically keep humans in the loop, at least initially, to review recommendations and maintain control over decisions.</p>
<p>Current enterprise use cases for agents include document processing, decision support, and personalized outputs. When integrated into broader systems, these applications can deliver measurable productivity gains. For example, Travel Essence built an agentic system that reduced a two-hour customer planning process to three minutes, allowing staff to focus more on sales and helping drive 20% top-line growth.</p>
<p>Martin also believes AI will pressure traditional SaaS seat-based pricing and accelerate custom software development. In this environment, governed platforms like OutSystems can help enterprises adopt “vibe coding” while maintaining compliance, security, and lifecycle management.</p>
<p>Learn more from The New Stack about the latest developments around enterprise adoption of vibe coding:</p>
<p><a href="https://thenewstack.io/how-to-use-vibe-coding-safely-in-the-enterprise/" rel="noopener noreferrer">How To Use Vibe Coding Safely in the Enterprise</a></p>
<p><a href="https://thenewstack.io/5-challenges-with-vibe-coding-for-enterprises/" rel="noopener noreferrer">5 Challenges With Vibe Coding for Enterprises </a></p>
<p><a href="https://thenewstack.io/vibe-coding-the-shadow-it-problem-no-one-saw-coming/" rel="noopener noreferrer">Vibe Coding: The Shadow IT Problem No One Saw Coming</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Fri, 06 Mar 2026 20:40:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Frederic Lardinois, OutSystems, Woodson Martin)</author>
      <link>https://thenewstack.simplecast.com/episodes/outsystems-ceo-on-how-enterprises-can-successfully-adopt-vibe-coding-Hat3auTV</link>
      <content:encoded><![CDATA[<p>Woodson Martin, CEO of OutSystems, argues that successful enterprise AI deployments rarely rely on standalone agents. Instead, production systems combine AI agents with data, workflows, APIs, applications, and human oversight. While claims that “95% of agent pilots fail” are common, Martin suggests many of those pilots were simply low-commitment experiments made possible by the low cost of testing AI. Enterprises that succeed typically keep humans in the loop, at least initially, to review recommendations and maintain control over decisions.</p>
<p>Current enterprise use cases for agents include document processing, decision support, and personalized outputs. When integrated into broader systems, these applications can deliver measurable productivity gains. For example, Travel Essence built an agentic system that reduced a two-hour customer planning process to three minutes, allowing staff to focus more on sales and helping drive 20% top-line growth.</p>
<p>Martin also believes AI will pressure traditional SaaS seat-based pricing and accelerate custom software development. In this environment, governed platforms like OutSystems can help enterprises adopt “vibe coding” while maintaining compliance, security, and lifecycle management.</p>
<p>Learn more from The New Stack about the latest developments around enterprise adoption of vibe coding:</p>
<p><a href="https://thenewstack.io/how-to-use-vibe-coding-safely-in-the-enterprise/" rel="noopener noreferrer">How To Use Vibe Coding Safely in the Enterprise</a></p>
<p><a href="https://thenewstack.io/5-challenges-with-vibe-coding-for-enterprises/" rel="noopener noreferrer">5 Challenges With Vibe Coding for Enterprises </a></p>
<p><a href="https://thenewstack.io/vibe-coding-the-shadow-it-problem-no-one-saw-coming/" rel="noopener noreferrer">Vibe Coding: The Shadow IT Problem No One Saw Coming</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="42131164" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/317e9dbc-9a52-4da7-9725-c4578874b757/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/audio/group/8ea8f77e-c2bc-44ce-81ee-66421eff619f/group-item/50326a50-df79-4f78-80e9-6bc3b94868ec/128_default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>OutSystems CEO on how enterprises can successfully adopt vibe coding</itunes:title>
      <itunes:author>The New Stack, Frederic Lardinois, OutSystems, Woodson Martin</itunes:author>
      <itunes:duration>00:43:53</itunes:duration>
      <itunes:summary>Woodson Martin, CEO of OutSystems, argues that successful enterprise AI deployments rarely rely on standalone agents. Instead, production systems combine AI agents with data, workflows, APIs, applications, and human oversight. While claims that “95% of agent pilots fail” are common, Martin suggests many of those pilots were simply low-commitment experiments made possible by the low cost of testing AI. Enterprises that succeed typically keep humans in the loop, at least initially, to review recommendations and maintain control over decisions.</itunes:summary>
      <itunes:subtitle>Woodson Martin, CEO of OutSystems, argues that successful enterprise AI deployments rarely rely on standalone agents. Instead, production systems combine AI agents with data, workflows, APIs, applications, and human oversight. While claims that “95% of agent pilots fail” are common, Martin suggests many of those pilots were simply low-commitment experiments made possible by the low cost of testing AI. Enterprises that succeed typically keep humans in the loop, at least initially, to review recommendations and maintain control over decisions.</itunes:subtitle>
      <itunes:keywords>ai agents, outsystems, tech podcast, the new stack, ai developer, tech, developer podcast, software engineer, the new stack agents, woodson martin, vibe coding, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1595</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">25837f8f-29d2-4027-85d6-2c1cbc2cce52</guid>
      <title>Inception Labs says its diffusion LLM is 10x faster than Claude, ChatGPT, Gemini</title>
      <description><![CDATA[<p>On a recent episode of The New Stack Agents, Inception Labs CEO Stefano Ermon introduced Mercury 2, a large language model built on diffusion rather than the standard autoregressive approach. Traditional LLMs generate text token by token from left to right, which Ermon describes as “fancy autocomplete.” In contrast, diffusion models begin with a rough draft and refine it in parallel, similar to image systems like Stable Diffusion.</p>
<p>This parallel process allows Mercury 2 to produce over 1,000 tokens per second—five to ten times faster than optimized models from labs such as OpenAI, Anthropic, and Google, according to company tests. Ermon argues diffusion models better leverage GPUs, with support from investor Nvidia to optimize performance.</p>
<p>While Mercury 2 matches mid-tier models like Claude Haiku and Google Flash rather than top systems such as Claude Opus or GPT-4, Ermon believes diffusion’s speed and economic advantages will become increasingly compelling as AI applications scale.</p>
<p>Learn more from The New Stack about the latest developments around large language models built on diffusion:</p>
<p><a href="https://thenewstack.io/how-diffusion-based-llm-ai-speeds-up-reasoning/" rel="noopener noreferrer">How Diffusion-Based LLM AI Speeds Up Reasoning</a></p>
<p><a href="https://thenewstack.io/get-ready-for-faster-text-generation-with-diffusion-llms/" rel="noopener noreferrer">Get Ready for Faster Text Generation With Diffusion LLMs </a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Mon, 2 Mar 2026 21:30:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Inception, Stefano Ermon, Inception Labs, Frederic Lardinois, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/inception-labs-says-its-diffusion-llm-is-10x-faster-than-claude-chatgpt-gemini-7BQmFqyr</link>
      <content:encoded><![CDATA[<p>On a recent episode of The New Stack Agents, Inception Labs CEO Stefano Ermon introduced Mercury 2, a large language model built on diffusion rather than the standard autoregressive approach. Traditional LLMs generate text token by token from left to right, which Ermon describes as “fancy autocomplete.” In contrast, diffusion models begin with a rough draft and refine it in parallel, similar to image systems like Stable Diffusion.</p>
<p>This parallel process allows Mercury 2 to produce over 1,000 tokens per second—five to ten times faster than optimized models from labs such as OpenAI, Anthropic, and Google, according to company tests. Ermon argues diffusion models better leverage GPUs, with support from investor Nvidia to optimize performance.</p>
<p>While Mercury 2 matches mid-tier models like Claude Haiku and Google Flash rather than top systems such as Claude Opus or GPT-4, Ermon believes diffusion’s speed and economic advantages will become increasingly compelling as AI applications scale.</p>
<p>Learn more from The New Stack about the latest developments around large language models built on diffusion:</p>
<p><a href="https://thenewstack.io/how-diffusion-based-llm-ai-speeds-up-reasoning/" rel="noopener noreferrer">How Diffusion-Based LLM AI Speeds Up Reasoning</a></p>
<p><a href="https://thenewstack.io/get-ready-for-faster-text-generation-with-diffusion-llms/" rel="noopener noreferrer">Get Ready for Faster Text Generation With Diffusion LLMs </a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="41941829" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/317e9dbc-9a52-4da7-9725-c4578874b757/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/audio/group/f1e1956a-01e1-445a-9d4e-453fdeb58112/group-item/5c78edb1-7f02-4b8f-b8f5-2bf3e413ad92/128_default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Inception Labs says its diffusion LLM is 10x faster than Claude, ChatGPT, Gemini</itunes:title>
      <itunes:author>Inception, Stefano Ermon, Inception Labs, Frederic Lardinois, The New Stack</itunes:author>
      <itunes:duration>00:43:41</itunes:duration>
      <itunes:summary>On a recent episode of The New Stack Agents, Inception Labs CEO Stefano Ermon introduced Mercury 2, a large language model built on diffusion rather than the standard autoregressive approach. Traditional LLMs generate text token by token from left to right, which Ermon describes as “fancy autocomplete.” In contrast, diffusion models begin with a rough draft and refine it in parallel, similar to image systems like Stable Diffusion.</itunes:summary>
      <itunes:subtitle>On a recent episode of The New Stack Agents, Inception Labs CEO Stefano Ermon introduced Mercury 2, a large language model built on diffusion rather than the standard autoregressive approach. Traditional LLMs generate text token by token from left to right, which Ermon describes as “fancy autocomplete.” In contrast, diffusion models begin with a rough draft and refine it in parallel, similar to image systems like Stable Diffusion.</itunes:subtitle>
      <itunes:keywords>generative ai, diffusion model, ai model, the new stack, inception, inception labs, stefano ermon, the new stack agents</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1594</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">78c7ac16-cb81-45c4-a3d1-bd1c20e3855b</guid>
      <title>NanoClaw&apos;s answer to OpenClaw is minimal code, maximum isolation</title>
      <description><![CDATA[<p>On <i>The New Stack Agents</i>, Gavriel Cohen discusses why he built NanoClaw, a minimalist alternative to OpenClaw, after discovering security and architectural flaws in the rapidly growing agentic framework. Cohen, co-founder of AI marketing agency Qwibit, had been running agents across operations, sales, and research using Claude Code. When Clawdbot (later OpenClaw) launched, it initially seemed ideal. But Cohen grew concerned after noticing questionable dependencies—including his own outdated GitHub package—excessive WhatsApp data storage, a massive AI-generated codebase nearing 400,000 lines, and a lack of OS-level isolation between agents.</p>
<p>In response, he created NanoClaw with radical minimalism: only a few hundred core lines, minimal dependencies, and containerized agents. Built around Claude Code “skills,” NanoClaw enables modular, build-time integrations while keeping the runtime small enough to audit easily. Cohen argues AI changes coding norms—favoring duplication over DRY, relaxing strict file limits, and treating code as disposable. His goal is simple, secure infrastructure that enterprises can fully understand and trust.</p>
<p>Learn more from The New Stack about the latest around personal AI agents:</p>
<p><a href="https://thenewstack.io/anthropic-agent-sdk-confusion/" rel="noopener noreferrer">Anthropic: You can still use your Claude accounts to run OpenClaw, NanoClaw and Co.</a></p>
<p><a href="https://thenewstack.io/openclaw-moltbot-security-concerns/" rel="noopener noreferrer">It took a researcher fewer than 2 hours to hijack OpenClaw</a></p>
<p><a href="https://thenewstack.io/deno-sandbox-security-secrets/" rel="noopener noreferrer">OpenClaw is being called a security “Dumpster fire,” but there is a way to stay safe</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Fri, 20 Feb 2026 18:10:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (NanoClaw, Gavriel Cohen, Frederic Lardinois, The New Stack, Qwibit, Concrete Media)</author>
      <link>https://thenewstack.simplecast.com/episodes/nanoclaws-answer-to-openclaw-is-minimal-code-maximum-isolation-qL8rtkBG</link>
      <content:encoded><![CDATA[<p>On <i>The New Stack Agents</i>, Gavriel Cohen discusses why he built NanoClaw, a minimalist alternative to OpenClaw, after discovering security and architectural flaws in the rapidly growing agentic framework. Cohen, co-founder of AI marketing agency Qwibit, had been running agents across operations, sales, and research using Claude Code. When Clawdbot (later OpenClaw) launched, it initially seemed ideal. But Cohen grew concerned after noticing questionable dependencies—including his own outdated GitHub package—excessive WhatsApp data storage, a massive AI-generated codebase nearing 400,000 lines, and a lack of OS-level isolation between agents.</p>
<p>In response, he created NanoClaw with radical minimalism: only a few hundred core lines, minimal dependencies, and containerized agents. Built around Claude Code “skills,” NanoClaw enables modular, build-time integrations while keeping the runtime small enough to audit easily. Cohen argues AI changes coding norms—favoring duplication over DRY, relaxing strict file limits, and treating code as disposable. His goal is simple, secure infrastructure that enterprises can fully understand and trust.</p>
<p>Learn more from The New Stack about the latest around personal AI agents:</p>
<p><a href="https://thenewstack.io/anthropic-agent-sdk-confusion/" rel="noopener noreferrer">Anthropic: You can still use your Claude accounts to run OpenClaw, NanoClaw and Co.</a></p>
<p><a href="https://thenewstack.io/openclaw-moltbot-security-concerns/" rel="noopener noreferrer">It took a researcher fewer than 2 hours to hijack OpenClaw</a></p>
<p><a href="https://thenewstack.io/deno-sandbox-security-secrets/" rel="noopener noreferrer">OpenClaw is being called a security “Dumpster fire,” but there is a way to stay safe</a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="49835406" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/317e9dbc-9a52-4da7-9725-c4578874b757/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/audio/group/2b061b2a-7d0a-4fe6-9c72-9a2adf9248b6/group-item/581d24e2-698b-4976-a0f3-31d72a117bb2/128_default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>NanoClaw&apos;s answer to OpenClaw is minimal code, maximum isolation</itunes:title>
      <itunes:author>NanoClaw, Gavriel Cohen, Frederic Lardinois, The New Stack, Qwibit, Concrete Media</itunes:author>
      <itunes:duration>00:51:54</itunes:duration>
      <itunes:summary>On The New Stack Agents, Gavriel Cohen discusses why he built NanoClaw, a minimalist alternative to OpenClaw, after discovering security and architectural flaws in the rapidly growing agentic framework. Cohen, co-founder of AI marketing agency Qwibit, had been running agents across operations, sales, and research using Claude Code. When Clawdbot (later OpenClaw) launched, it initially seemed ideal. But Cohen grew concerned after noticing questionable dependencies—including his own outdated GitHub package—excessive WhatsApp data storage, a massive AI-generated codebase nearing 400,000 lines, and a lack of OS-level isolation between agents.</itunes:summary>
      <itunes:subtitle>On The New Stack Agents, Gavriel Cohen discusses why he built NanoClaw, a minimalist alternative to OpenClaw, after discovering security and architectural flaws in the rapidly growing agentic framework. Cohen, co-founder of AI marketing agency Qwibit, had been running agents across operations, sales, and research using Claude Code. When Clawdbot (later OpenClaw) launched, it initially seemed ideal. But Cohen grew concerned after noticing questionable dependencies—including his own outdated GitHub package—excessive WhatsApp data storage, a massive AI-generated codebase nearing 400,000 lines, and a lack of OS-level isolation between agents.</itunes:subtitle>
      <itunes:keywords>frederic lardinois, software developer, ai agents, tech podcast, the new stack, ai developer, concrete media, gavriel cohen, qwibit, tech, software engineer, nanoclaw, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1593</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">adaea7de-67cb-4e46-8724-bf29552b410a</guid>
      <title>The developer as conductor: Leading an orchestra of AI agents with the feature flag baton</title>
      <description><![CDATA[<p>A few weeks after Dynatrace acquired DevCycle, Michael Beemer and Andrew Norris discussed on The New Stack Makers podcast how feature flagging is becoming a critical safeguard in the AI era. By integrating DevCycle’s feature flagging into the Dynatrace observability platform, the combined solution delivers a “360-degree view” of software performance at the feature level. This closes a key visibility gap, enabling teams to see exactly how individual features affect systems in production.</p>
<p>As “agentic development” accelerates—where AI agents rapidly generate code—feature flags act as a safety net. They allow teams to test, control, and roll back AI-generated changes in live environments, keeping a human in the loop before full releases. This reduces risk while speeding enterprise adoption of AI tools. The discussion also highlighted support for the Cloud Native Computing Foundation’s OpenFeature standard to avoid vendor lock-in. Ultimately, developers are evolving into “conductors,” orchestrating AI agents with feature flags as their baton.</p>
<p>Learn more from The New Stack about the latest around AI enterprise development:</p>
<p><a href="https://thenewstack.io/why-you-cant-build-ai-without-progressive-delivery/" rel="noopener noreferrer">Why You Can't Build AI Without Progressive Delivery </a></p>
<p><a href="https://thenewstack.io/beyond-automation-dynatrace-unveils-agentic-ai-that-fixes-problems-on-its-own/" rel="noopener noreferrer">Beyond automation: Dynatrace unveils agentic AI that fixes problems on its own </a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 19 Feb 2026 23:30:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Michael Beemer, Dynatrace, Andrew Norris, DevCycle, Matt Burns, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-developer-as-conductor-leading-an-orchestra-of-ai-agents-with-the-feature-flag-baton-KXz05uZb</link>
      <content:encoded><![CDATA[<p>A few weeks after Dynatrace acquired DevCycle, Michael Beemer and Andrew Norris discussed on The New Stack Makers podcast how feature flagging is becoming a critical safeguard in the AI era. By integrating DevCycle’s feature flagging into the Dynatrace observability platform, the combined solution delivers a “360-degree view” of software performance at the feature level. This closes a key visibility gap, enabling teams to see exactly how individual features affect systems in production.</p>
<p>As “agentic development” accelerates—where AI agents rapidly generate code—feature flags act as a safety net. They allow teams to test, control, and roll back AI-generated changes in live environments, keeping a human in the loop before full releases. This reduces risk while speeding enterprise adoption of AI tools. The discussion also highlighted support for the Cloud Native Computing Foundation’s OpenFeature standard to avoid vendor lock-in. Ultimately, developers are evolving into “conductors,” orchestrating AI agents with feature flags as their baton.</p>
<p>Learn more from The New Stack about the latest around AI enterprise development:</p>
<p><a href="https://thenewstack.io/why-you-cant-build-ai-without-progressive-delivery/" rel="noopener noreferrer">Why You Can't Build AI Without Progressive Delivery </a></p>
<p><a href="https://thenewstack.io/beyond-automation-dynatrace-unveils-agentic-ai-that-fixes-problems-on-its-own/" rel="noopener noreferrer">Beyond automation: Dynatrace unveils agentic AI that fixes problems on its own </a></p>
<p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="18763484" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/317e9dbc-9a52-4da7-9725-c4578874b757/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/audio/group/047e7855-a862-425d-a4bc-b07be46e0af8/group-item/9b6590af-aa6a-493e-910b-80ce1a871d78/128_default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The developer as conductor: Leading an orchestra of AI agents with the feature flag baton</itunes:title>
      <itunes:author>Michael Beemer, Dynatrace, Andrew Norris, DevCycle, Matt Burns, The New Stack</itunes:author>
      <itunes:duration>00:19:32</itunes:duration>
      <itunes:summary>A few weeks after Dynatrace acquired DevCycle, Michael Beemer and Andrew Norris discussed on The New Stack Makers podcast how feature flagging is becoming a critical safeguard in the AI era. By integrating DevCycle’s feature flagging into the Dynatrace observability platform, the combined solution delivers a “360-degree view” of software performance at the feature level. This closes a key visibility gap, enabling teams to see exactly how individual features affect systems in production.</itunes:summary>
      <itunes:subtitle>A few weeks after Dynatrace acquired DevCycle, Michael Beemer and Andrew Norris discussed on The New Stack Makers podcast how feature flagging is becoming a critical safeguard in the AI era. By integrating DevCycle’s feature flagging into the Dynatrace observability platform, the combined solution delivers a “360-degree view” of software performance at the feature level. This closes a key visibility gap, enabling teams to see exactly how individual features affect systems in production.</itunes:subtitle>
      <itunes:keywords>software delivery, matt burns, progressive delivery, tech podcast, feature flags, the new stack, ai developer, dynatrace perform, tech, acquisition, dynatrace, michael beemer, the new stack makers, software engineer, devcycle, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1592</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5bcd83ae-7ccf-4469-b31a-06097b66aa2a</guid>
      <title>The reason AI agents shouldn’t touch your source code — and what they should do instead</title>
      <description><![CDATA[<p>Dynatrace is at a pivotal point, expanding beyond traditional observability into a platform designed for autonomous operations and security powered by agentic AI. In an interview on <i>The New Stack Makers</i>, recorded at the Dynatrace Perform conference, Chief Technology Strategist Alois Reitbauer discussed his vision for AI-managed production environments. The conversation followed Dynatrace’s acquisition of DevCycle, a feature-management platform. Reitbauer highlighted feature flags—long used in software development—as a critical safety mechanism in the age of agentic AI.</p><p>Rather than allowing AI agents to rewrite and deploy code, Dynatrace envisions them operating within guardrails by adjusting configuration settings through feature flags. This approach limits risk while enabling faster, automated decision-making. Customers, Reitbauer noted, are increasingly comfortable with AI handling defined tasks under constraints, but not with agents making sweeping, unsupervised changes. By combining AI with controlled configuration tools, Dynatrace aims to create a safer path toward truly autonomous operations.</p><p>Learn more from The New Stack about the latest in progressive delivery:</p><p><a href="https://thenewstack.io/why-you-cant-build-ai-without-progressive-delivery/" rel="noopener noreferrer">Why You Can’t Build AI Without Progressive Delivery</a></p><p><a href="https://thenewstack.io/continuous-delivery-gold-standard-for-software-development/" rel="noopener noreferrer">Continuous Delivery: Gold Standard for Software Development</a></p><p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Fri, 13 Feb 2026 00:05:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Dynatrace, Matt Burns, Alois Reitbauer, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-reason-ai-agents-shouldnt-touch-your-source-code-and-what-they-should-do-instead-avF32dzR</link>
      <content:encoded><![CDATA[<p>Dynatrace is at a pivotal point, expanding beyond traditional observability into a platform designed for autonomous operations and security powered by agentic AI. In an interview on <i>The New Stack Makers</i>, recorded at the Dynatrace Perform conference, Chief Technology Strategist Alois Reitbauer discussed his vision for AI-managed production environments. The conversation followed Dynatrace’s acquisition of DevCycle, a feature-management platform. Reitbauer highlighted feature flags—long used in software development—as a critical safety mechanism in the age of agentic AI.</p><p>Rather than allowing AI agents to rewrite and deploy code, Dynatrace envisions them operating within guardrails by adjusting configuration settings through feature flags. This approach limits risk while enabling faster, automated decision-making. Customers, Reitbauer noted, are increasingly comfortable with AI handling defined tasks under constraints, but not with agents making sweeping, unsupervised changes. By combining AI with controlled configuration tools, Dynatrace aims to create a safer path toward truly autonomous operations.</p><p>Learn more from The New Stack about the latest in progressive delivery:</p><p><a href="https://thenewstack.io/why-you-cant-build-ai-without-progressive-delivery/" rel="noopener noreferrer">Why You Can’t Build AI Without Progressive Delivery</a></p><p><a href="https://thenewstack.io/continuous-delivery-gold-standard-for-software-development/" rel="noopener noreferrer">Continuous Delivery: Gold Standard for Software Development</a></p><p><a href="https://thenewstack.io/newsletter" rel="noopener noreferrer">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="21783239" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/44364930-b9fb-4e67-aa74-5e23fb58c7da/audio/c060e8c4-167b-4ec0-8d3e-4ca669939786/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The reason AI agents shouldn’t touch your source code — and what they should do instead</itunes:title>
      <itunes:author>Dynatrace, Matt Burns, Alois Reitbauer, The New Stack</itunes:author>
      <itunes:duration>00:22:41</itunes:duration>
      <itunes:summary>Dynatrace is at a pivotal point, expanding beyond traditional observability into a platform designed for autonomous operations and security powered by agentic AI. In an interview on The New Stack Makers, recorded at the Dynatrace Perform conference, Chief Technology Strategist Alois Reitbauer discussed his vision for AI-managed production environments. The conversation followed Dynatrace’s acquisition of DevCycle, a feature-management platform. Reitbauer highlighted feature flags—long used in software development—as a critical safety mechanism in the age of agentic AI.</itunes:summary>
      <itunes:subtitle>Dynatrace is at a pivotal point, expanding beyond traditional observability into a platform designed for autonomous operations and security powered by agentic AI. In an interview on The New Stack Makers, recorded at the Dynatrace Perform conference, Chief Technology Strategist Alois Reitbauer discussed his vision for AI-managed production environments. The conversation followed Dynatrace’s acquisition of DevCycle, a feature-management platform. Reitbauer highlighted feature flags—long used in software development—as a critical safety mechanism in the age of agentic AI.</itunes:subtitle>
      <itunes:keywords>software delivery, ai delivery, matt burns, progressive delivery, software developer, dev cycle, tech podcast, feature flags, the new stack, tech, dynatrace, the new stack makers, software engineer, application development</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1591</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d389ec5f-aff4-4cd7-acfd-0dd9194ca8c7</guid>
      <title>You can’t fire a bot: The blunt truth about AI slop and your job</title>
      <description><![CDATA[<p>Matan-Paul Shetrit, Director of Product Management at Writer, argues that people must take responsibility for how they use AI. If someone produces poor-quality output, he says, the blame lies with the user—not the tool. He believes many misunderstand AI’s role, confusing its ability to accelerate work with an abdication of accountability. Speaking on The New Stack Agents podcast, Shetrit emphasized that “we’re all becoming editors,” meaning professionals increasingly review and refine AI-generated content rather than create everything from scratch. However, ultimate responsibility remains human. If an AI-generated presentation contains errors, the presenter—not the AI—is accountable. </p><p>Shetrit also discussed the evolving AI landscape, contrasting massive general-purpose models from companies like OpenAI and Google with smaller, specialized models. At Writer, the focus is on enabling enterprise-scale AI adoption by reducing costs, improving accuracy, and increasing speed. He argues that bespoke, narrowly focused models tailored to specific use cases are essential for delivering reliable, cost-effective AI solutions at scale. </p><p>Learn more from The New Stack about the latest around enterprise development: </p><p><a href="https://thenewstack.io/why-pure-ai-coding-wont-work-for-enterprise-software/">Why Pure AI Coding Won’t Work for Enterprise Software </a></p><p><a href="https://thenewstack.io/how-to-use-vibe-coding-safely-in-the-enterprise/">How To Use Vibe Coding Safely in the Enterprise </a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Wed, 11 Feb 2026 23:10:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Matan-Paul Shetrit, Frederic Lardinois, Writer)</author>
      <link>https://thenewstack.simplecast.com/episodes/you-cant-fire-a-bot-the-blunt-truth-about-ai-slop-and-your-job-YquGk67j</link>
      <content:encoded><![CDATA[<p>Matan-Paul Shetrit, Director of Product Management at Writer, argues that people must take responsibility for how they use AI. If someone produces poor-quality output, he says, the blame lies with the user—not the tool. He believes many misunderstand AI’s role, confusing its ability to accelerate work with an abdication of accountability. Speaking on The New Stack Agents podcast, Shetrit emphasized that “we’re all becoming editors,” meaning professionals increasingly review and refine AI-generated content rather than create everything from scratch. However, ultimate responsibility remains human. If an AI-generated presentation contains errors, the presenter—not the AI—is accountable. </p><p>Shetrit also discussed the evolving AI landscape, contrasting massive general-purpose models from companies like OpenAI and Google with smaller, specialized models. At Writer, the focus is on enabling enterprise-scale AI adoption by reducing costs, improving accuracy, and increasing speed. He argues that bespoke, narrowly focused models tailored to specific use cases are essential for delivering reliable, cost-effective AI solutions at scale. </p><p>Learn more from The New Stack about the latest around enterprise development: </p><p><a href="https://thenewstack.io/why-pure-ai-coding-wont-work-for-enterprise-software/">Why Pure AI Coding Won’t Work for Enterprise Software </a></p><p><a href="https://thenewstack.io/how-to-use-vibe-coding-safely-in-the-enterprise/">How To Use Vibe Coding Safely in the Enterprise </a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="55022279" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/b2001b95-02a7-434e-91ce-89036654a77f/audio/b643bd65-836a-4fa6-9f13-e0eaa8358141/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>You can’t fire a bot: The blunt truth about AI slop and your job</itunes:title>
      <itunes:author>The New Stack, Matan-Paul Shetrit, Frederic Lardinois, Writer</itunes:author>
      <itunes:duration>00:57:18</itunes:duration>
      <itunes:summary>Matan-Paul Shetrit, Director of Product Management at Writer, argues that people must take responsibility for how they use AI. If someone produces poor-quality output, he says, the blame lies with the user—not the tool. He believes many misunderstand AI’s role, confusing its ability to accelerate work with an abdication of accountability. Speaking on The New Stack Agents podcast, Shetrit emphasized that “we’re all becoming editors,” meaning professionals increasingly review and refine AI-generated content rather than create everything from scratch. However, ultimate responsibility remains human. If an AI-generated presentation contains errors, the presenter—not the AI—is accountable. </itunes:summary>
      <itunes:subtitle>Matan-Paul Shetrit, Director of Product Management at Writer, argues that people must take responsibility for how they use AI. If someone produces poor-quality output, he says, the blame lies with the user—not the tool. He believes many misunderstand AI’s role, confusing its ability to accelerate work with an abdication of accountability. Speaking on The New Stack Agents podcast, Shetrit emphasized that “we’re all becoming editors,” meaning professionals increasingly review and refine AI-generated content rather than create everything from scratch. However, ultimate responsibility remains human. If an AI-generated presentation contains errors, the presenter—not the AI—is accountable. </itunes:subtitle>
      <itunes:keywords>software developer, ai agents, agentic system, tech podcast, the new stack, ai developer, matan-paul shetrit, tech, enterprise scale, software engineer, the new stack agents, llms, writer, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1589</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">faedab12-d6e5-4400-a4e0-771bfe0585c0</guid>
      <title>GitLab CEO on why AI isn&apos;t helping enterprise ship code faster</title>
      <description><![CDATA[<p>AI coding assistants are boosting developer productivity, but most enterprises aren’t shipping software any faster. GitLab CEO Bill Staples says the reason is simple: coding was never the main bottleneck. After speaking with more than 60 customers, Staples found that developers spend only 10–20% of their time writing code. The remaining 80–90% is consumed by reviews, CI/CD pipelines, security scans, compliance checks, and deployment—areas that remain largely unautomated. Faster code generation only worsens downstream queues.</p><p>GitLab’s response is its newly GA’ed Duo Agent Platform, designed to automate the full software development lifecycle. The platform introduces “agent flows,” multi-step orchestrations that can take work from issue creation through merge requests, testing, and validation. Staples argues that context is the key differentiator. Unlike standalone coding tools that only see local code, GitLab’s all-in-one platform gives agents access to issues, epics, pipeline history, security data, and more through a unified knowledge graph.</p><p>Staples believes this platform approach, rather than fragmented point solutions, is what will finally unlock enterprise software delivery at scale.</p><p> </p><p>Learn more from The New Stack about the latest around GitLab and AI: </p><p><a href="https://thenewstack.io/gitlab-launches-its-ai-agent-platform-in-public-beta/">GitLab Launches Its AI Agent Platform in Public Beta</a></p><p><a href="https://thenewstack.io/gitlabs-field-cto-predicts-when-devsecops-meets-ai/">GitLab’s Field CTO Predicts: When DevSecOps Meets AI</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Tue, 10 Feb 2026 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Bill Staples, Frederic Lardinois, The New Stack, GitLab)</author>
      <link>https://thenewstack.simplecast.com/episodes/gitlab-ceo-on-why-ai-isnt-helping-enterprise-ship-code-faster-zLAXQaWm</link>
      <content:encoded><![CDATA[<p>AI coding assistants are boosting developer productivity, but most enterprises aren’t shipping software any faster. GitLab CEO Bill Staples says the reason is simple: coding was never the main bottleneck. After speaking with more than 60 customers, Staples found that developers spend only 10–20% of their time writing code. The remaining 80–90% is consumed by reviews, CI/CD pipelines, security scans, compliance checks, and deployment—areas that remain largely unautomated. Faster code generation only worsens downstream queues.</p><p>GitLab’s response is its newly GA’ed Duo Agent Platform, designed to automate the full software development lifecycle. The platform introduces “agent flows,” multi-step orchestrations that can take work from issue creation through merge requests, testing, and validation. Staples argues that context is the key differentiator. Unlike standalone coding tools that only see local code, GitLab’s all-in-one platform gives agents access to issues, epics, pipeline history, security data, and more through a unified knowledge graph.</p><p>Staples believes this platform approach, rather than fragmented point solutions, is what will finally unlock enterprise software delivery at scale.</p><p> </p><p>Learn more from The New Stack about the latest around GitLab and AI: </p><p><a href="https://thenewstack.io/gitlab-launches-its-ai-agent-platform-in-public-beta/">GitLab Launches Its AI Agent Platform in Public Beta</a></p><p><a href="https://thenewstack.io/gitlabs-field-cto-predicts-when-devsecops-meets-ai/">GitLab’s Field CTO Predicts: When DevSecOps Meets AI</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="55022279" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/1ed40131-88dd-40c4-a045-dfdb452fbf58/audio/e4bcc66e-d89a-480a-977b-ea5ecf52dc4e/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>GitLab CEO on why AI isn&apos;t helping enterprise ship code faster</itunes:title>
      <itunes:author>Bill Staples, Frederic Lardinois, The New Stack, GitLab</itunes:author>
      <itunes:duration>00:57:18</itunes:duration>
      <itunes:summary>AI coding assistants are boosting developer productivity, but most enterprises aren’t shipping software any faster. GitLab CEO Bill Staples says the reason is simple: coding was never the main bottleneck. After speaking with more than 60 customers, Staples found that developers spend only 10–20% of their time writing code. The remaining 80–90% is consumed by reviews, CI/CD pipelines, security scans, compliance checks, and deployment—areas that remain largely unautomated. Faster code generation only worsens downstream queues.</itunes:summary>
      <itunes:subtitle>AI coding assistants are boosting developer productivity, but most enterprises aren’t shipping software any faster. GitLab CEO Bill Staples says the reason is simple: coding was never the main bottleneck. After speaking with more than 60 customers, Staples found that developers spend only 10–20% of their time writing code. The remaining 80–90% is consumed by reviews, CI/CD pipelines, security scans, compliance checks, and deployment—areas that remain largely unautomated. Faster code generation only worsens downstream queues.</itunes:subtitle>
      <itunes:keywords>frederic lardinois, ai coding assistants, software developer, enterprise developer, tech podcast, the new stack, ai developer, tech, the new stack makers, software engineer, gitlab, duo agent platform, bill staples, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1590</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d203998e-ad8f-4c90-bccd-154132d70864</guid>
      <title>The enterprise is not ready for &quot;the rise of the developer&quot;</title>
      <description><![CDATA[<p>Sean O’Dell of Dynatrace argues that enterprises are unprepared for a major shift brought on by AI: the rise of the developer. Speaking at Dynatrace Perform in Las Vegas, O’Dell explains that AI-assisted and “vibe” coding are collapsing traditional boundaries in software development. Developers, once insulated from production by layers of operations and governance, are now regaining end-to-end ownership of the entire software lifecycle — from development and testing to deployment and security. This shift challenges long-standing enterprise structures built around separation of duties and risk mitigation. </p><p>At the same time, the definition of “developer” is expanding. With AI lowering technical barriers, software creation is becoming more about creative intent than mastery of specialized tools, opening the door to nontraditional developers. Experimentation is also moving into production environments, a change that would have seemed reckless just 18 months ago. According to O’Dell, enterprises now understand AI well enough to experiment confidently, but many are not ready for the cultural, operational, and security implications of developers — broadly defined — taking full control again.</p><p><br />Learn more from The New Stack about the latest around enterprise developers and AI: </p><p><a href="https://thenewstack.io/retools-new-ai-powered-app-builder-lets-non-developers-build-enterprise-apps/">Retool’s New AI-Powered App Builder Lets Non-Developers Build Enterprise Apps</a></p><p><a href="https://thenewstack.io/solving-3-enterprise-ai-problems-developers-face/">Solving 3 Enterprise AI Problems Developers Face</a></p><p><a href="https://thenewstack.io/enterprise-platform-teams-are-stuck-in-day-two-hell/">Enterprise Platform Teams Are Stuck in Day 2 Hell</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 05 Feb 2026 18:30:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Dynatrace, Matt Burns, The New Stack, Sean O&apos;Dell)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-enterprise-is-not-ready-for-the-rise-of-the-developer-8JpzKYmX</link>
      <content:encoded><![CDATA[<p>Sean O’Dell of Dynatrace argues that enterprises are unprepared for a major shift brought on by AI: the rise of the developer. Speaking at Dynatrace Perform in Las Vegas, O’Dell explains that AI-assisted and “vibe” coding are collapsing traditional boundaries in software development. Developers, once insulated from production by layers of operations and governance, are now regaining end-to-end ownership of the entire software lifecycle — from development and testing to deployment and security. This shift challenges long-standing enterprise structures built around separation of duties and risk mitigation. </p><p>At the same time, the definition of “developer” is expanding. With AI lowering technical barriers, software creation is becoming more about creative intent than mastery of specialized tools, opening the door to nontraditional developers. Experimentation is also moving into production environments, a change that would have seemed reckless just 18 months ago. According to O’Dell, enterprises now understand AI well enough to experiment confidently, but many are not ready for the cultural, operational, and security implications of developers — broadly defined — taking full control again.</p><p><br />Learn more from The New Stack about the latest around enterprise developers and AI: </p><p><a href="https://thenewstack.io/retools-new-ai-powered-app-builder-lets-non-developers-build-enterprise-apps/">Retool’s New AI-Powered App Builder Lets Non-Developers Build Enterprise Apps</a></p><p><a href="https://thenewstack.io/solving-3-enterprise-ai-problems-developers-face/">Solving 3 Enterprise AI Problems Developers Face</a></p><p><a href="https://thenewstack.io/enterprise-platform-teams-are-stuck-in-day-two-hell/">Enterprise Platform Teams Are Stuck in Day 2 Hell</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="24802576" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/b1a28eca-40a0-43c2-a2b9-547a720b6fe0/audio/af414990-f13c-48d1-93f9-d7db5de2a1b3/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The enterprise is not ready for &quot;the rise of the developer&quot;</itunes:title>
      <itunes:author>Dynatrace, Matt Burns, The New Stack, Sean O&apos;Dell</itunes:author>
      <itunes:duration>00:25:50</itunes:duration>
      <itunes:summary>Sean O’Dell of Dynatrace argues that enterprises are unprepared for a major shift brought on by AI: the rise of the developer. Speaking at Dynatrace Perform in Las Vegas, O’Dell explains that AI-assisted and “vibe” coding are collapsing traditional boundaries in software development. Developers, once insulated from production by layers of operations and governance, are now regaining end-to-end ownership of the entire software lifecycle — from development and testing to deployment and security. This shift challenges long-standing enterprise structures built around separation of duties and risk mitigation. </itunes:summary>
      <itunes:subtitle>Sean O’Dell of Dynatrace argues that enterprises are unprepared for a major shift brought on by AI: the rise of the developer. Speaking at Dynatrace Perform in Las Vegas, O’Dell explains that AI-assisted and “vibe” coding are collapsing traditional boundaries in software development. Developers, once insulated from production by layers of operations and governance, are now regaining end-to-end ownership of the entire software lifecycle — from development and testing to deployment and security. This shift challenges long-standing enterprise structures built around separation of duties and risk mitigation. </itunes:subtitle>
      <itunes:keywords>matt burns, software developer, enterprise developer, tech podcast, the new stack, ai developer, tech, dynatrace, software development, software engineer, sean o&apos;dell, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1588</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">51b4e484-83e8-46be-b877-31ab6e0a1247</guid>
      <title>Meet Gravitino, a geo-distributed, federated metadata lake</title>
      <description><![CDATA[<p>In the era of agentic AI, attention has largely focused on data itself, while metadata has remained a neglected concern. Junping (JP) Du, founder and CEO of Datastrato, argues that this must change as AI fundamentally alters how data and metadata are consumed, governed, and understood. To address this gap, Datastrato created Apache Gravitino, an open source, high-performance, geo-distributed, federated metadata lake designed to act as a neutral control plane for metadata and governance across multi-modal, multi-engine AI workloads. </p><p>Gravitino achieved major milestones in 2025, including graduation as an Apache Top Level Project, a stable 1.1.0 release, and membership in the new Agentic AI Foundation. Du describes Gravitino as a “catalog of catalogs” that unifies metadata across engines like Spark, Trino, Ray, and PyTorch, eliminating silos and inconsistencies. Built to support both structured and unstructured data, Gravitino enables secure, consistent, and AI-friendly data access across clouds and regions, helping enterprises manage governance, access control, and scalability in increasingly complex AI environments.</p><p>Learn more from The New Stack about how the latest data and metadata are consumed, governed, and understood: </p><p><a href="https://thenewstack.io/is-agentic-metadata-the-next-infrastructure-layer/">Is Agentic Metadata the Next Infrastructure Layer?</a></p><p><a href="https://thenewstack.io/why-ai-loves-object-storage/">Why AI Loves Object Storage</a></p><p><a href="https://thenewstack.io/the-real-bottleneck-in-enterprise-ai-isnt-the-model-its-context/">The Real Bottleneck in Enterprise AI Isn’t the Model, It’s Context</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 29 Jan 2026 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Junping (JP) Du, Datastrato, Heather Joslyn, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/meet-gravitino-a-geo-distributed-federated-metadata-lake-URCaO2eo</link>
      <content:encoded><![CDATA[<p>In the era of agentic AI, attention has largely focused on data itself, while metadata has remained a neglected concern. Junping (JP) Du, founder and CEO of Datastrato, argues that this must change as AI fundamentally alters how data and metadata are consumed, governed, and understood. To address this gap, Datastrato created Apache Gravitino, an open source, high-performance, geo-distributed, federated metadata lake designed to act as a neutral control plane for metadata and governance across multi-modal, multi-engine AI workloads. </p><p>Gravitino achieved major milestones in 2025, including graduation as an Apache Top Level Project, a stable 1.1.0 release, and membership in the new Agentic AI Foundation. Du describes Gravitino as a “catalog of catalogs” that unifies metadata across engines like Spark, Trino, Ray, and PyTorch, eliminating silos and inconsistencies. Built to support both structured and unstructured data, Gravitino enables secure, consistent, and AI-friendly data access across clouds and regions, helping enterprises manage governance, access control, and scalability in increasingly complex AI environments.</p><p>Learn more from The New Stack about how the latest data and metadata are consumed, governed, and understood: </p><p><a href="https://thenewstack.io/is-agentic-metadata-the-next-infrastructure-layer/">Is Agentic Metadata the Next Infrastructure Layer?</a></p><p><a href="https://thenewstack.io/why-ai-loves-object-storage/">Why AI Loves Object Storage</a></p><p><a href="https://thenewstack.io/the-real-bottleneck-in-enterprise-ai-isnt-the-model-its-context/">The Real Bottleneck in Enterprise AI Isn’t the Model, It’s Context</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="28282505" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/0402efc3-f344-4223-87d0-37dc9ee63984/audio/e51af837-2e34-4689-b2bf-09283586355f/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Meet Gravitino, a geo-distributed, federated metadata lake</itunes:title>
      <itunes:author>Junping (JP) Du, Datastrato, Heather Joslyn, The New Stack</itunes:author>
      <itunes:duration>00:29:27</itunes:duration>
      <itunes:summary>In the era of agentic AI, attention has largely focused on data itself, while metadata has remained a neglected concern. Junping (JP) Du, founder and CEO of Datastrato, argues that this must change as AI fundamentally alters how data and metadata are consumed, governed, and understood. To address this gap, Datastrato created Apache Gravitino, an open source, high-performance, geo-distributed, federated metadata lake designed to act as a neutral control plane for metadata and governance across multi-modal, multi-engine AI workloads. </itunes:summary>
      <itunes:subtitle>In the era of agentic AI, attention has largely focused on data itself, while metadata has remained a neglected concern. Junping (JP) Du, founder and CEO of Datastrato, argues that this must change as AI fundamentally alters how data and metadata are consumed, governed, and understood. To address this gap, Datastrato created Apache Gravitino, an open source, high-performance, geo-distributed, federated metadata lake designed to act as a neutral control plane for metadata and governance across multi-modal, multi-engine AI workloads. </itunes:subtitle>
      <itunes:keywords>generative ai, software developer, tech podcast, the new stack, ai developer, apache gravitino, ai workloads, junping (jp) du, tech, the new stack makers, open source, gravitino, datastrato, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1587</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2b72d762-8395-4dec-be0b-dd4fe602ad61</guid>
      <title>CTO Chris Aniszczyk on the CNCF push for AI interoperability</title>
      <description><![CDATA[<p>Chris Aniszczyk, co-founder and CTO of the Cloud Native Computing Foundation (CNCF), argues that AI agents resemble microservices at a surface level, though they differ in how they are scaled and managed. In an interview ahead of KubeCon/CloudNativeCon Europe, he emphasized that being “AI native” requires being cloud native by default. Cloud-native technologies such as containers, microservices, Kubernetes, gRPC, Prometheus, and OpenTelemetry provide the scalability, resilience, and observability needed to support AI systems at scale. Aniszczyk noted that major AI platforms like ChatGPT and Claude already rely on Kubernetes and other CNCF projects.</p><p>To address growing complexity in running generative and agentic AI workloads, the CNCF has launched efforts to extend its conformance programs to AI. New requirements—such as dynamic resource allocation for GPUs and TPUs and specialized networking for inference workloads—are being handled inconsistently across the industry. CNCF aims to establish a baseline of compatibility to ensure vendor neutrality. Aniszczyk also highlighted CNCF incubation projects like Metal³ for bare-metal Kubernetes and OpenYurt for managing edge-based Kubernetes deployments.</p><p> </p><p>Learn more from The New Stack about CNCF and what to expect in 2026:</p><p><a href="https://thenewstack.io/why-the-cncfs-new-executive-director-is-obsessed-with-inference/">Why the CNCF’s New Executive Director Is Obsessed With Inference</a></p><p><a href="https://thenewstack.io/cncf-dragonfly-speeds-container-model-sharing-with-p2p/">CNCF Dragonfly Speeds Container, Model Sharing with P2P</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 22 Jan 2026 20:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Chris Aniszczyk, Loraine Lawson, The New Stack, CNCF)</author>
      <link>https://thenewstack.simplecast.com/episodes/cto-chris-aniszczyk-on-the-cncf-push-for-ai-interoperability-QmmzHI5w</link>
      <content:encoded><![CDATA[<p>Chris Aniszczyk, co-founder and CTO of the Cloud Native Computing Foundation (CNCF), argues that AI agents resemble microservices at a surface level, though they differ in how they are scaled and managed. In an interview ahead of KubeCon/CloudNativeCon Europe, he emphasized that being “AI native” requires being cloud native by default. Cloud-native technologies such as containers, microservices, Kubernetes, gRPC, Prometheus, and OpenTelemetry provide the scalability, resilience, and observability needed to support AI systems at scale. Aniszczyk noted that major AI platforms like ChatGPT and Claude already rely on Kubernetes and other CNCF projects.</p><p>To address growing complexity in running generative and agentic AI workloads, the CNCF has launched efforts to extend its conformance programs to AI. New requirements—such as dynamic resource allocation for GPUs and TPUs and specialized networking for inference workloads—are being handled inconsistently across the industry. CNCF aims to establish a baseline of compatibility to ensure vendor neutrality. Aniszczyk also highlighted CNCF incubation projects like Metal³ for bare-metal Kubernetes and OpenYurt for managing edge-based Kubernetes deployments.</p><p> </p><p>Learn more from The New Stack about CNCF and what to expect in 2026:</p><p><a href="https://thenewstack.io/why-the-cncfs-new-executive-director-is-obsessed-with-inference/">Why the CNCF’s New Executive Director Is Obsessed With Inference</a></p><p><a href="https://thenewstack.io/cncf-dragonfly-speeds-container-model-sharing-with-p2p/">CNCF Dragonfly Speeds Container, Model Sharing with P2P</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="22623337" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/2a90aef9-a8eb-4c6b-887d-666e53d18c29/audio/574c97b6-7cbd-4b47-8b9c-6c962925849e/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>CTO Chris Aniszczyk on the CNCF push for AI interoperability</itunes:title>
      <itunes:author>Chris Aniszczyk, Loraine Lawson, The New Stack, CNCF</itunes:author>
      <itunes:duration>00:23:33</itunes:duration>
      <itunes:summary>Chris Aniszczyk, co-founder and CTO of the Cloud Native Computing Foundation (CNCF), argues that AI agents resemble microservices at a surface level, though they differ in how they are scaled and managed. In an interview ahead of KubeCon/CloudNativeCon Europe, he emphasized that being “AI native” requires being cloud native by default. Cloud-native technologies such as containers, microservices, Kubernetes, gRPC, Prometheus, and OpenTelemetry provide the scalability, resilience, and observability needed to support AI systems at scale. Aniszczyk noted that major AI platforms like ChatGPT and Claude already rely on Kubernetes and other CNCF projects.</itunes:summary>
      <itunes:subtitle>Chris Aniszczyk, co-founder and CTO of the Cloud Native Computing Foundation (CNCF), argues that AI agents resemble microservices at a surface level, though they differ in how they are scaled and managed. In an interview ahead of KubeCon/CloudNativeCon Europe, he emphasized that being “AI native” requires being cloud native by default. Cloud-native technologies such as containers, microservices, Kubernetes, gRPC, Prometheus, and OpenTelemetry provide the scalability, resilience, and observability needed to support AI systems at scale. Aniszczyk noted that major AI platforms like ChatGPT and Claude already rely on Kubernetes and other CNCF projects.</itunes:subtitle>
      <itunes:keywords>cloud workloads, software developer, ai agents, tech podcast, the new stack, ai developer, loraine lawson, cloud native, ai workloads, tech, kubernetes, software engineer, cncf, 2026 ai prediction, chris aniszczyk, microservices, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1586</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">0f1a6d08-ccda-467d-b014-52e2fdc1f4f3</guid>
      <title>Solving the Problems that Accompany API Sprawl with AI</title>
      <description><![CDATA[<p>API sprawl creates hidden security risks and missed revenue opportunities when organizations lose visibility into the APIs they build. According to IBM’s Neeraj Nargund, APIs power the core business processes enterprises want to scale, making automated discovery, observability, and governance essential—especially when thousands of APIs exist across teams and environments. Strong governance helps identify endpoints, remediate shadow APIs, and manage risk at scale. At the same time, enterprises increasingly want to monetize the data APIs generate, packaging insights into products and pricing and segmenting usage, a need amplified by the rise of AI.</p><p>To address these challenges, Nargund highlights “smart APIs,” which are infused with AI to provide context awareness, event-driven behavior, and AI-assisted governance throughout the API lifecycle. These APIs help interpret and act on data, integrate with AI agents, and support real-time, streaming use cases.</p><p>IBM’s latest API Connect release embeds AI across API management and is designed for hybrid and multi-cloud environments, offering centralized governance, observability, and control through a single hybrid control plane.</p><p>Learn more from The New Stack about smart APIs: </p><p><a href="https://thenewstack.io/redefining-api-management-for-the-ai-driven-enterprise/">Redefining API Management for the AI-Driven Enterprise</a></p><p><a href="https://thenewstack.io/how-to-accelerate-growth-with-ai-powered-smart-apis/">How To Accelerate Growth With AI-Powered Smart APIs</a></p><p><a href="https://thenewstack.io/ai-account-sprawl-is-hurting-your-company/">Wrangle Account Sprawl With an AI Gateway</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 15 Jan 2026 20:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Neeraj Nargund, The New Stack, IBM, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/solving-the-problems-that-accompany-api-sprawl-with-ai-C7eE96jg</link>
      <content:encoded><![CDATA[<p>API sprawl creates hidden security risks and missed revenue opportunities when organizations lose visibility into the APIs they build. According to IBM’s Neeraj Nargund, APIs power the core business processes enterprises want to scale, making automated discovery, observability, and governance essential—especially when thousands of APIs exist across teams and environments. Strong governance helps identify endpoints, remediate shadow APIs, and manage risk at scale. At the same time, enterprises increasingly want to monetize the data APIs generate, packaging insights into products and pricing and segmenting usage, a need amplified by the rise of AI.</p><p>To address these challenges, Nargund highlights “smart APIs,” which are infused with AI to provide context awareness, event-driven behavior, and AI-assisted governance throughout the API lifecycle. These APIs help interpret and act on data, integrate with AI agents, and support real-time, streaming use cases.</p><p>IBM’s latest API Connect release embeds AI across API management and is designed for hybrid and multi-cloud environments, offering centralized governance, observability, and control through a single hybrid control plane.</p><p>Learn more from The New Stack about smart APIs: </p><p><a href="https://thenewstack.io/redefining-api-management-for-the-ai-driven-enterprise/">Redefining API Management for the AI-Driven Enterprise</a></p><p><a href="https://thenewstack.io/how-to-accelerate-growth-with-ai-powered-smart-apis/">How To Accelerate Growth With AI-Powered Smart APIs</a></p><p><a href="https://thenewstack.io/ai-account-sprawl-is-hurting-your-company/">Wrangle Account Sprawl With an AI Gateway</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="18557012" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/cf0a1033-cf04-4496-9cd0-a232ae7614aa/audio/92d20966-532f-4f8d-a652-cc92a531f402/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Solving the Problems that Accompany API Sprawl with AI</itunes:title>
      <itunes:author>Neeraj Nargund, The New Stack, IBM, Heather Joslyn</itunes:author>
      <itunes:duration>00:19:19</itunes:duration>
      <itunes:summary>API sprawl creates hidden security risks and missed revenue opportunities when organizations lose visibility into the APIs they build. According to IBM’s Neeraj Nargund, APIs power the core business processes enterprises want to scale, making automated discovery, observability, and governance essential—especially when thousands of APIs exist across teams and environments. Strong governance helps identify endpoints, remediate shadow APIs, and manage risk at scale. At the same time, enterprises increasingly want to monetize the data APIs generate, packaging insights into products and pricing and segmenting usage, a need amplified by the rise of AI.</itunes:summary>
      <itunes:subtitle>API sprawl creates hidden security risks and missed revenue opportunities when organizations lose visibility into the APIs they build. According to IBM’s Neeraj Nargund, APIs power the core business processes enterprises want to scale, making automated discovery, observability, and governance essential—especially when thousands of APIs exist across teams and environments. Strong governance helps identify endpoints, remediate shadow APIs, and manage risk at scale. At the same time, enterprises increasingly want to monetize the data APIs generate, packaging insights into products and pricing and segmenting usage, a need amplified by the rise of AI.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, ai developer, heather joslyn, tech, ibm, the new stack makers, software engineer, ibm api connect, smart apis, api sprawl, neeraj nargund, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1585</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">149f57e4-0bb7-4243-b419-55b108f842a9</guid>
      <title>CloudBees CEO: Why Migration Is a Mirage Costing You Millions</title>
      <description><![CDATA[<p>A CloudBees survey reveals that enterprise migration projects often fail to deliver promised modernization benefits. In 2024, 57% of enterprises spent over $1 million on migrations, with average overruns costing $315,000 per project. In <i>The New Stack Makers</i> podcast, CloudBees CEO Anuj Kapur describes this pattern as “the migration mirage,” where organizations chase modernization through costly migrations that push value further into the future. Findings from the CloudBees 2025 DevOps Migration Index show leaders routinely underestimate the longevity and resilience of existing systems. Kapur notes that applications often outlast CIOs, yet new leadership repeatedly mandates wholesale replacement. </p><p>The report argues modernization has been mistakenly equated with migration, which diverts resources from customer value to replatforming efforts. Beyond financial strain, migration erodes developer morale by forcing engineers to rework functioning systems instead of building new solutions. CloudBees advocates meeting developers where they are, setting flexible guardrails rather than enforcing rigid platforms. Kapur believes this approach, combined with emerging code assistance tools, could spark a new renaissance in software development by 2026.</p><p>Learn more from The New Stack about enterprise modernization: </p><p><a href="https://thenewstack.io/why-ai-alone-fails-at-large-scale-code-modernization/">Why AI Alone Fails at Large-Scale Code Modernization</a></p><p><a href="https://thenewstack.io/how-ai-can-speed-up-modernization-of-your-legacy-it-systems/">How AI Can Speed up Modernization of Your Legacy IT Systems</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Tue, 13 Jan 2026 17:30:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (CloudBees, Anuj Kapur, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/cloudbees-ceo-why-migration-is-a-mirage-costing-you-millions-FV_PUDLJ</link>
      <content:encoded><![CDATA[<p>A CloudBees survey reveals that enterprise migration projects often fail to deliver promised modernization benefits. In 2024, 57% of enterprises spent over $1 million on migrations, with average overruns costing $315,000 per project. In <i>The New Stack Makers</i> podcast, CloudBees CEO Anuj Kapur describes this pattern as “the migration mirage,” where organizations chase modernization through costly migrations that push value further into the future. Findings from the CloudBees 2025 DevOps Migration Index show leaders routinely underestimate the longevity and resilience of existing systems. Kapur notes that applications often outlast CIOs, yet new leadership repeatedly mandates wholesale replacement. </p><p>The report argues modernization has been mistakenly equated with migration, which diverts resources from customer value to replatforming efforts. Beyond financial strain, migration erodes developer morale by forcing engineers to rework functioning systems instead of building new solutions. CloudBees advocates meeting developers where they are, setting flexible guardrails rather than enforcing rigid platforms. Kapur believes this approach, combined with emerging code assistance tools, could spark a new renaissance in software development by 2026.</p><p>Learn more from The New Stack about enterprise modernization: </p><p><a href="https://thenewstack.io/why-ai-alone-fails-at-large-scale-code-modernization/">Why AI Alone Fails at Large-Scale Code Modernization</a></p><p><a href="https://thenewstack.io/how-ai-can-speed-up-modernization-of-your-legacy-it-systems/">How AI Can Speed up Modernization of Your Legacy IT Systems</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="32784343" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/61b50cfa-7117-44fc-9680-ff5934392a7a/audio/ec39a413-5249-46a7-92dd-c91ef12cb38b/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>CloudBees CEO: Why Migration Is a Mirage Costing You Millions</itunes:title>
      <itunes:author>CloudBees, Anuj Kapur, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:34:08</itunes:duration>
      <itunes:summary>A CloudBees survey reveals that enterprise migration projects often fail to deliver promised modernization benefits. In 2024, 57% of enterprises spent over $1 million on migrations, with average overruns costing $315,000 per project. In The New Stack Makers podcast, CloudBees CEO Anuj Kapur describes this pattern as “the migration mirage,” where organizations chase modernization through costly migrations that push value further into the future. Findings from the CloudBees 2025 DevOps Migration Index show leaders routinely underestimate the longevity and resilience of existing systems. Kapur notes that applications often outlast CIOs, yet new leadership repeatedly mandates wholesale replacement. </itunes:summary>
      <itunes:subtitle>A CloudBees survey reveals that enterprise migration projects often fail to deliver promised modernization benefits. In 2024, 57% of enterprises spent over $1 million on migrations, with average overruns costing $315,000 per project. In The New Stack Makers podcast, CloudBees CEO Anuj Kapur describes this pattern as “the migration mirage,” where organizations chase modernization through costly migrations that push value further into the future. Findings from the CloudBees 2025 DevOps Migration Index show leaders routinely underestimate the longevity and resilience of existing systems. Kapur notes that applications often outlast CIOs, yet new leadership repeatedly mandates wholesale replacement. </itunes:subtitle>
      <itunes:keywords>cloudbees, software developer, tech podcast, the new stack, anuj kapur, ai developer, modernization, devops migration index report, tech, software development, the new stack makers, software engineer, application modernization, it modernization, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1584</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">80e6336a-b5bb-458f-8a10-65736b901e62</guid>
      <title>Human Cognition Can’t Keep Up with Modern Networks. What’s Next?</title>
      <description><![CDATA[<p>IBM’s acquisitions of Red Hat and HashiCorp, along with its planned purchase of Confluent, reflect a deliberate strategy to build the infrastructure required for enterprise AI. According to IBM’s Sanil Nambiar, AI depends on consistent hybrid cloud runtimes (Red Hat), programmable and automated infrastructure (HashiCorp), and real-time, trustworthy data (Confluent). Without these foundations, AI cannot function effectively. </p><p>Nambiar argues that modern, software-defined networks have become too complex for humans to manage alone, overwhelmed by fragmented data, escalating tool sophistication, and a widening skills gap that makes veteran “tribal knowledge” hard to transfer. Trust, he says, is the biggest barrier to AI adoption in networking, since errors can cause costly outages. To address this, IBM launched IBM Network Intelligence, a “network-native” AI solution that combines time-series foundation models with reasoning large language models. This architecture enables AI agents to detect subtle warning patterns, collapse incident response times, and deliver accurate, trustworthy insights for real-world network operations.</p><p>Learn more from The New Stack about AI infrastructure and IBM’s approach: </p><p><a href="https://thenewstack.io/ai-in-network-observability-the-dawn-of-network-intelligence/">AI in Network Observability: The Dawn of Network Intelligence</a></p><p><a href="https://thenewstack.io/how-agentic-ai-is-redefining-campus-and-branch-network-needs/">How Agentic AI Is Redefining Campus and Branch Network Needs</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Wed, 07 Jan 2026 18:20:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Sanil Nambiar, Heather Joslyn, IBM, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/human-cognition-cant-keep-up-with-modern-networks-whats-next-E7E9erOk</link>
      <content:encoded><![CDATA[<p>IBM’s acquisitions of Red Hat and HashiCorp, along with its planned purchase of Confluent, reflect a deliberate strategy to build the infrastructure required for enterprise AI. According to IBM’s Sanil Nambiar, AI depends on consistent hybrid cloud runtimes (Red Hat), programmable and automated infrastructure (HashiCorp), and real-time, trustworthy data (Confluent). Without these foundations, AI cannot function effectively. </p><p>Nambiar argues that modern, software-defined networks have become too complex for humans to manage alone, overwhelmed by fragmented data, escalating tool sophistication, and a widening skills gap that makes veteran “tribal knowledge” hard to transfer. Trust, he says, is the biggest barrier to AI adoption in networking, since errors can cause costly outages. To address this, IBM launched IBM Network Intelligence, a “network-native” AI solution that combines time-series foundation models with reasoning large language models. This architecture enables AI agents to detect subtle warning patterns, collapse incident response times, and deliver accurate, trustworthy insights for real-world network operations.</p><p>Learn more from The New Stack about AI infrastructure and IBM’s approach: </p><p><a href="https://thenewstack.io/ai-in-network-observability-the-dawn-of-network-intelligence/">AI in Network Observability: The Dawn of Network Intelligence</a></p><p><a href="https://thenewstack.io/how-agentic-ai-is-redefining-campus-and-branch-network-needs/">How Agentic AI Is Redefining Campus and Branch Network Needs</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="22350828" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/2c60aa3f-79b9-4250-88a4-51cc8af4ca17/audio/9a3dc2c5-ae34-450d-ad62-a12cc9f1de1f/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Human Cognition Can’t Keep Up with Modern Networks. What’s Next?</itunes:title>
      <itunes:author>Sanil Nambiar, Heather Joslyn, IBM, The New Stack</itunes:author>
      <itunes:duration>00:23:16</itunes:duration>
      <itunes:summary>IBM’s acquisitions of Red Hat and HashiCorp, along with its planned purchase of Confluent, reflect a deliberate strategy to build the infrastructure required for enterprise AI. According to IBM’s Sanil Nambiar, AI depends on consistent hybrid cloud runtimes (Red Hat), programmable and automated infrastructure (HashiCorp), and real-time, trustworthy data (Confluent). Without these foundations, AI cannot function effectively. </itunes:summary>
      <itunes:subtitle>IBM’s acquisitions of Red Hat and HashiCorp, along with its planned purchase of Confluent, reflect a deliberate strategy to build the infrastructure required for enterprise AI. According to IBM’s Sanil Nambiar, AI depends on consistent hybrid cloud runtimes (Red Hat), programmable and automated infrastructure (HashiCorp), and real-time, trustworthy data (Confluent). Without these foundations, AI cannot function effectively. </itunes:subtitle>
      <itunes:keywords>sanil nambiar, software developer, tech podcast, the new stack, ai developer, tech, ibm network intelligence, ibm, the new stack makers, software engineer, ai infrastructure, hybrid cloud, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1583</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">74d4f15b-7381-47a8-afdd-1ca9b7f3c9e1</guid>
      <title>From Group Science Project to Enterprise Service: Rethinking OpenTelemetry</title>
      <description><![CDATA[<p>Ari Zilka, founder of MyDecisive.ai and former Hortonworks CPO, argues that most observability vendors now offer essentially identical, reactive dashboards that highlight problems only after systems are already broken. After speaking with all 23 observability vendors at KubeCon + CloudNativeCon North America 2025, Zilka said these tools fail to meaningfully reduce mean time to resolution (MTTR), a long-standing demand he heard repeatedly from thousands of CIOs during his time at New Relic.</p><p>Zilka believes observability must shift from reactive monitoring to proactive operations, where systems automatically respond to telemetry in real time. MyDecisive.ai is his attempt to solve this, acting as a “bump in the wire” that intercepts telemetry and uses AI-driven logic to trigger actions like rolling back faulty releases.</p><p>He also criticized the rising cost and complexity of OpenTelemetry adoption, noting that many companies now require large, specialized teams just to maintain OTel stacks. MyDecisive aims to turn OpenTelemetry into an enterprise-ready service that reduces human intervention and operational overhead.</p><p>Learn more from The New Stack about OpenTelemetry:</p><p><a href="https://thenewstack.io/observability-is-stuck-in-the-past-your-users-arent/">Observability Is Stuck in the Past. Your Users Aren't. </a></p><p><a href="https://thenewstack.io/setting-up-opentelemetry-on-the-frontend-because-i-hate-myself/">Setting Up OpenTelemetry on the Frontend Because I Hate Myself</a></p><p><a href="https://thenewstack.io/how-to-make-opentelemetry-better-in-the-browser/">How to Make OpenTelemetry Better in the Browser</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Tue, 30 Dec 2025 19:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Ari Zilka, MyDecisive.ai, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/from-group-science-project-to-enterprise-service-rethinking-opentelemetry-YaFtK3pZ</link>
      <content:encoded><![CDATA[<p>Ari Zilka, founder of MyDecisive.ai and former Hortonworks CPO, argues that most observability vendors now offer essentially identical, reactive dashboards that highlight problems only after systems are already broken. After speaking with all 23 observability vendors at KubeCon + CloudNativeCon North America 2025, Zilka said these tools fail to meaningfully reduce mean time to resolution (MTTR), a long-standing demand he heard repeatedly from thousands of CIOs during his time at New Relic.</p><p>Zilka believes observability must shift from reactive monitoring to proactive operations, where systems automatically respond to telemetry in real time. MyDecisive.ai is his attempt to solve this, acting as a “bump in the wire” that intercepts telemetry and uses AI-driven logic to trigger actions like rolling back faulty releases.</p><p>He also criticized the rising cost and complexity of OpenTelemetry adoption, noting that many companies now require large, specialized teams just to maintain OTel stacks. MyDecisive aims to turn OpenTelemetry into an enterprise-ready service that reduces human intervention and operational overhead.</p><p>Learn more from The New Stack about OpenTelemetry:</p><p><a href="https://thenewstack.io/observability-is-stuck-in-the-past-your-users-arent/">Observability Is Stuck in the Past. Your Users Aren't. </a></p><p><a href="https://thenewstack.io/setting-up-opentelemetry-on-the-frontend-because-i-hate-myself/">Setting Up OpenTelemetry on the Frontend Because I Hate Myself</a></p><p><a href="https://thenewstack.io/how-to-make-opentelemetry-better-in-the-browser/">How to Make OpenTelemetry Better in the Browser</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="16648193" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/c44a1628-b4a0-4395-b3f0-d5b448502b94/audio/da1030eb-d485-405b-9849-9fffb7b07804/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>From Group Science Project to Enterprise Service: Rethinking OpenTelemetry</itunes:title>
      <itunes:author>Ari Zilka, MyDecisive.ai, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:17:20</itunes:duration>
      <itunes:summary>Ari Zilka, founder of MyDecisive.ai and former Hortonworks CPO, argues that most observability vendors now offer essentially identical, reactive dashboards that highlight problems only after systems are already broken. After speaking with all 23 observability vendors at KubeCon + CloudNativeCon North America 2025, Zilka said these tools fail to meaningfully reduce mean time to resolution (MTTR), a long-standing demand he heard repeatedly from thousands of CIOs during his time at New Relic.</itunes:summary>
      <itunes:subtitle>Ari Zilka, founder of MyDecisive.ai and former Hortonworks CPO, argues that most observability vendors now offer essentially identical, reactive dashboards that highlight problems only after systems are already broken. After speaking with all 23 observability vendors at KubeCon + CloudNativeCon North America 2025, Zilka said these tools fail to meaningfully reduce mean time to resolution (MTTR), a long-standing demand he heard repeatedly from thousands of CIOs during his time at New Relic.</itunes:subtitle>
      <itunes:keywords>change management, software developer, tech podcast, the new stack, ai developer, ari zilka, tech, ai agent, kubecon atlanta 2025, mttr, open telemetry, the new stack makers, software engineer, kubecon, observability, mydecisive.ai, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1582</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9d0a4b0d-3bd2-4feb-91c5-805157479036</guid>
      <title>Why You Can&apos;t Build AI Without Progressive Delivery</title>
      <description><![CDATA[<p>Former GitHub CEO Thomas Dohmke’s claim that AI-based development requires progressive delivery frames a conversation between analyst James Governor and The New Stack’s Alex Williams about why modern release practices matter more than ever. Governor argues that AI systems behave unpredictably in production: models can hallucinate, outputs vary between versions, and changes are often non-deterministic. Because of this uncertainty, teams must rely on progressive delivery techniques such as feature flags, canary releases, observability, measurement and rollback. These practices, originally developed to improve traditional software releases, now form the foundation for deploying AI safely. Concepts like evaluations, model versioning and controlled rollouts are direct extensions of established delivery disciplines. </p><p>Beyond AI, Governor’s book “Progressive Delivery” challenges DevOps thinking itself. He notes that DevOps focuses on development and operations but often neglects the user feedback loop. Using a framework of four A’s — abundance, autonomy, alignment and automation — he argues that progressive delivery reconnects teams with real user outcomes. Ultimately, success isn’t just reliability metrics, but whether users are actually satisfied. </p><p>Learn more from The New Stack about progressive delivery: </p><p><a href="https://thenewstack.io/mastering-progressive-hydration-for-enhanced-web-performance/">Mastering Progressive Hydration for Enhanced Web Performance</a></p><p><a href="https://thenewstack.io/continuous-delivery-gold-standard-for-software-development/">Continuous Delivery: Gold Standard for Software Development</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Tue, 23 Dec 2025 18:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (James Governor, Redmonk, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/why-you-cant-build-ai-without-progressive-delivery-_NyB63DL</link>
      <content:encoded><![CDATA[<p>Former GitHub CEO Thomas Dohmke’s claim that AI-based development requires progressive delivery frames a conversation between analyst James Governor and The New Stack’s Alex Williams about why modern release practices matter more than ever. Governor argues that AI systems behave unpredictably in production: models can hallucinate, outputs vary between versions, and changes are often non-deterministic. Because of this uncertainty, teams must rely on progressive delivery techniques such as feature flags, canary releases, observability, measurement and rollback. These practices, originally developed to improve traditional software releases, now form the foundation for deploying AI safely. Concepts like evaluations, model versioning and controlled rollouts are direct extensions of established delivery disciplines. </p><p>Beyond AI, Governor’s book “Progressive Delivery” challenges DevOps thinking itself. He notes that DevOps focuses on development and operations but often neglects the user feedback loop. Using a framework of four A’s — abundance, autonomy, alignment and automation — he argues that progressive delivery reconnects teams with real user outcomes. Ultimately, success isn’t just reliability metrics, but whether users are actually satisfied. </p><p>Learn more from The New Stack about progressive delivery: </p><p><a href="https://thenewstack.io/mastering-progressive-hydration-for-enhanced-web-performance/">Mastering Progressive Hydration for Enhanced Web Performance</a></p><p><a href="https://thenewstack.io/continuous-delivery-gold-standard-for-software-development/">Continuous Delivery: Gold Standard for Software Development</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="26601891" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/ad0f2ec1-12e3-42b3-a64a-2d6e55d1e5a7/audio/ed63fd46-bec1-45f7-bf3a-bf9a8fcd0699/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Why You Can&apos;t Build AI Without Progressive Delivery</itunes:title>
      <itunes:author>James Governor, Redmonk, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:27:42</itunes:duration>
      <itunes:summary>Former GitHub CEO Thomas Dohmke’s claim that AI-based development requires progressive delivery frames a conversation between analyst James Governor and The New Stack’s Alex Williams about why modern release practices matter more than ever. Governor argues that AI systems behave unpredictably in production: models can hallucinate, outputs vary between versions, and changes are often non-deterministic. Because of this uncertainty, teams must rely on progressive delivery techniques such as feature flags, canary releases, observability, measurement and rollback. These practices, originally developed to improve traditional software releases, now form the foundation for deploying AI safely. Concepts like evaluations, model versioning and controlled rollouts are direct extensions of established delivery disciplines. </itunes:summary>
      <itunes:subtitle>Former GitHub CEO Thomas Dohmke’s claim that AI-based development requires progressive delivery frames a conversation between analyst James Governor and The New Stack’s Alex Williams about why modern release practices matter more than ever. Governor argues that AI systems behave unpredictably in production: models can hallucinate, outputs vary between versions, and changes are often non-deterministic. Because of this uncertainty, teams must rely on progressive delivery techniques such as feature flags, canary releases, observability, measurement and rollback. These practices, originally developed to improve traditional software releases, now form the foundation for deploying AI safely. Concepts like evaluations, model versioning and controlled rollouts are direct extensions of established delivery disciplines. </itunes:subtitle>
      <itunes:keywords>software delivery, progressive delivery, tech podcast, the new stack, devops, tech, software development, james governor, the new stack makers, redmonk</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1581</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">50426018-2c95-40b5-b286-cb2177f72c78</guid>
      <title>How Nutanix Is Taming Operational Complexity</title>
      <description><![CDATA[<p>Most enterprises today run workloads across multiple IT infrastructures rather than a single platform, creating significant operational challenges. According to Nutanix CTO Deepak Goel, organizations face three major hurdles: managing operational complexity amid a shortage of cloud-native skills, migrating legacy virtual machine (VM) workloads to microservices-based cloud-native platforms, and running VM-based workloads alongside containerized applications. Many engineers have deep infrastructure experience but lack Kubernetes expertise, making the transition especially difficult and increasing the learning curve for IT administrators. </p><p>To address these issues, organizations are turning to platform engineering and internal developer platforms that abstract infrastructure complexity and provide standardized “golden paths” for deployment. Integrated development environments (IDEs) further reduce friction by embedding capabilities like observability and security. </p><p>Nutanix contributes through its hyperconverged platform, which unifies compute and storage while supporting both VMs and containers. At KubeCon North America, Nutanix announced version 2.0 of Nutanix Data Services for Kubernetes (NDK), adding advanced data protection, fault-tolerant replication, and enhanced security through a partnership with Canonical to deliver a hardened operating system for Kubernetes environments.</p><p>Learn more from The New Stack about operational complexity in cloud native environments:</p><p><a href="https://thenewstack.io/qa-nutanix-ceo-rajiv-ramaswami-on-the-cloud-native-enterprise/">Q&amp;A: Nutanix CEO Rajiv Ramaswami on the Cloud Native Enterprise</a></p><p><a href="https://thenewstack.io/kubernetes-complexity-realigns-platform-engineering-strategy/">Kubernetes Complexity Realigns Platform Engineering Strategy</a></p><p><a href="https://thenewstack.io/platform-engineering-on-the-brink-breakthrough-or-bust/">Platform Engineering on the Brink: Breakthrough or Bust?</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 18 Dec 2025 20:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack Podcast)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-nutanix-is-taming-operational-complexity-96hGg6_S</link>
      <content:encoded><![CDATA[<p>Most enterprises today run workloads across multiple IT infrastructures rather than a single platform, creating significant operational challenges. According to Nutanix CTO Deepak Goel, organizations face three major hurdles: managing operational complexity amid a shortage of cloud-native skills, migrating legacy virtual machine (VM) workloads to microservices-based cloud-native platforms, and running VM-based workloads alongside containerized applications. Many engineers have deep infrastructure experience but lack Kubernetes expertise, making the transition especially difficult and increasing the learning curve for IT administrators. </p><p>To address these issues, organizations are turning to platform engineering and internal developer platforms that abstract infrastructure complexity and provide standardized “golden paths” for deployment. Integrated development environments (IDEs) further reduce friction by embedding capabilities like observability and security. </p><p>Nutanix contributes through its hyperconverged platform, which unifies compute and storage while supporting both VMs and containers. At KubeCon North America, Nutanix announced version 2.0 of Nutanix Data Services for Kubernetes (NDK), adding advanced data protection, fault-tolerant replication, and enhanced security through a partnership with Canonical to deliver a hardened operating system for Kubernetes environments.</p><p>Learn more from The New Stack about operational complexity in cloud native environments:</p><p><a href="https://thenewstack.io/qa-nutanix-ceo-rajiv-ramaswami-on-the-cloud-native-enterprise/">Q&amp;A: Nutanix CEO Rajiv Ramaswami on the Cloud Native Enterprise</a></p><p><a href="https://thenewstack.io/kubernetes-complexity-realigns-platform-engineering-strategy/">Kubernetes Complexity Realigns Platform Engineering Strategy</a></p><p><a href="https://thenewstack.io/platform-engineering-on-the-brink-breakthrough-or-bust/">Platform Engineering on the Brink: Breakthrough or Bust?</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="14726416" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/8cdc3d51-7557-409b-a3ff-3624d3fb11b6/audio/64a41542-1046-4efb-bc83-dcb70d5bfc1b/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Nutanix Is Taming Operational Complexity</itunes:title>
      <itunes:author>The New Stack Podcast</itunes:author>
      <itunes:duration>00:15:20</itunes:duration>
      <itunes:summary>Most enterprises today run workloads across multiple IT infrastructures rather than a single platform, creating significant operational challenges. According to Nutanix CTO Deepak Goel, organizations face three major hurdles: managing operational complexity amid a shortage of cloud-native skills, migrating legacy virtual machine (VM) workloads to microservices-based cloud-native platforms, and running VM-based workloads alongside containerized applications. Many engineers have deep infrastructure experience but lack Kubernetes expertise, making the transition especially difficult and increasing the learning curve for IT administrators. </itunes:summary>
      <itunes:subtitle>Most enterprises today run workloads across multiple IT infrastructures rather than a single platform, creating significant operational challenges. According to Nutanix CTO Deepak Goel, organizations face three major hurdles: managing operational complexity amid a shortage of cloud-native skills, migrating legacy virtual machine (VM) workloads to microservices-based cloud-native platforms, and running VM-based workloads alongside containerized applications. Many engineers have deep infrastructure experience but lack Kubernetes expertise, making the transition especially difficult and increasing the learning curve for IT administrators. </itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1580</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d88d2dc5-650d-4a09-92fb-69393d8b3182</guid>
      <title>Do All Your AI Workloads Actually Require Expensive GPUs?</title>
      <description><![CDATA[<p>GPUs dominate today’s AI landscape, but Google argues they are not necessary for every workload. As AI adoption has grown, customers have increasingly demanded compute options that deliver high performance with lower cost and power consumption. Drawing on its long history of custom silicon, Google introduced Axion CPUs in 2024 to meet needs for massive scale, flexibility, and general-purpose computing alongside AI workloads. The Axion-based C4A instance is generally available, while the newer N4A virtual machines promise up to 2x price performance.</p><p>In this episode, recorded at KubeCon + CloudNativeCon North America in Atlanta, Andrei Gueletii, a technical solutions consultant for Google Cloud, joined Gari Singh, a product manager for Google Kubernetes Engine (GKE), and Pranay Bakre, a principal solutions engineer at Arm. Built on Arm Neoverse V2 cores, Axion processors emphasize energy efficiency and customization, including flexible machine shapes that let users tailor memory and CPU resources. These features are particularly valuable for platform engineering teams, which must optimize centralized infrastructure for cost, FinOps goals, and price performance as they scale.</p><p>Importantly, many AI tasks—such as inference for smaller models or batch-oriented jobs—do not require GPUs. CPUs can be more efficient when GPU memory is underutilized or latency demands are low. 
By decoupling workloads and choosing the right compute for each task, organizations can significantly reduce AI compute costs.</p><p>Learn more from The New Stack about the Axion-based C4A: </p><p><a href="https://thenewstack.io/beyond-speed-why-your-next-app-must-be-multi-architecture/">Beyond Speed: Why Your Next App Must Be Multi-Architecture</a></p><p><a href="https://thenewstack.io/arm-see-a-demo-about-migrating-a-x86-based-app-to-arm64/">Arm: See a Demo About Migrating a x86-Based App to ARM64</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 18 Dec 2025 19:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Google, Alex Williams, Arm, Andrei Gueletii, Gari Singh, Pranay Bakre)</author>
      <link>https://thenewstack.simplecast.com/episodes/do-all-your-ai-workloads-actually-require-expensive-gpus-JCN2XeIQ</link>
      <content:encoded><![CDATA[<p>GPUs dominate today’s AI landscape, but Google argues they are not necessary for every workload. As AI adoption has grown, customers have increasingly demanded compute options that deliver high performance with lower cost and power consumption. Drawing on its long history of custom silicon, Google introduced Axion CPUs in 2024 to meet needs for massive scale, flexibility, and general-purpose computing alongside AI workloads. The Axion-based C4A instance is generally available, while the newer N4A virtual machines promise up to 2x price performance.</p><p>In this episode, recorded at KubeCon + CloudNativeCon North America in Atlanta, Andrei Gueletii, a technical solutions consultant for Google Cloud, joined Gari Singh, a product manager for Google Kubernetes Engine (GKE), and Pranay Bakre, a principal solutions engineer at Arm. Built on Arm Neoverse V2 cores, Axion processors emphasize energy efficiency and customization, including flexible machine shapes that let users tailor memory and CPU resources. These features are particularly valuable for platform engineering teams, which must optimize centralized infrastructure for cost, FinOps goals, and price performance as they scale.</p><p>Importantly, many AI tasks—such as inference for smaller models or batch-oriented jobs—do not require GPUs. CPUs can be more efficient when GPU memory is underutilized or latency demands are low. 
By decoupling workloads and choosing the right compute for each task, organizations can significantly reduce AI compute costs.</p><p>Learn more from The New Stack about the Axion-based C4A: </p><p><a href="https://thenewstack.io/beyond-speed-why-your-next-app-must-be-multi-architecture/">Beyond Speed: Why Your Next App Must Be Multi-Architecture</a></p><p><a href="https://thenewstack.io/arm-see-a-demo-about-migrating-a-x86-based-app-to-arm64/">Arm: See a Demo About Migrating a x86-Based App to ARM64</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="28624813" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/43b70391-5762-41ec-bde5-eebaabe39c7f/audio/4204f854-bce9-4b9a-b165-d55827fbe551/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Do All Your AI Workloads Actually Require Expensive GPUs?</itunes:title>
      <itunes:author>Google, Alex Williams, Arm, Andrei Gueletii, Gari Singh, Pranay Bakre</itunes:author>
      <itunes:duration>00:29:49</itunes:duration>
      <itunes:summary>GPUs dominate today’s AI landscape, but Google argues they are not necessary for every workload. As AI adoption has grown, customers have increasingly demanded compute options that deliver high performance with lower cost and power consumption. Drawing on its long history of custom silicon, Google introduced Axion CPUs in 2024 to meet needs for massive scale, flexibility, and general-purpose computing alongside AI workloads. The Axion-based C4A instance is generally available, while the newer N4A virtual machines promise up to 2x price performance.</itunes:summary>
      <itunes:subtitle>GPUs dominate today’s AI landscape, but Google argues they are not necessary for every workload. As AI adoption has grown, customers have increasingly demanded compute options that deliver high performance with lower cost and power consumption. Drawing on its long history of custom silicon, Google introduced Axion CPUs in 2024 to meet needs for massive scale, flexibility, and general-purpose computing alongside AI workloads. The Axion-based C4A instance is generally available, while the newer N4A virtual machines promise up to 2x price performance.</itunes:subtitle>
      <itunes:keywords>software developer, pranay bakre, axion-based n4a vms, google, tech podcast, the new stack, ai developer, andrei gueletii, tech, kubernetes, the new stack makers, software engineer, gari singh, open source, arm, kubecon atlanta, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1579</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">bd97cb0d-11b2-4853-8c15-703b1e2800a5</guid>
      <title>Breaking Data Team Silos Is the Key to Getting AI to Production</title>
      <description><![CDATA[<p>Enterprises are racing to deploy AI services, but the teams responsible for running them in production are seeing familiar problems reemerge—most notably, silos between data scientists and operations teams, reminiscent of the old DevOps divide. In a discussion recorded at AWS re:Invent 2025, IBM’s Thanos Matzanas and Martin Fuentes argue that the challenge isn’t new technology but repeating organizational patterns. As data teams move from internal projects to revenue-critical, customer-facing applications, they face new pressures around reliability, observability, and accountability.</p><p>The speakers stress that many existing observability and governance practices still apply. Standard metrics, KPIs, SLOs, access controls, and audit logs remain essential foundations, even as AI introduces non-determinism and a heavier reliance on human feedback to assess quality. Tools like OpenTelemetry provide common ground, but culture matters more than tooling.</p><p>Both emphasize starting with business value and breaking down silos early by involving data teams in production discussions. Rather than replacing observability professionals, AI should augment human expertise, especially in critical systems where trust, safety, and compliance are paramount.</p><p>Learn more from The New Stack about enabling AI with silos: </p><p><a href="https://thenewstack.io/are-your-ai-co-pilots-trapping-data-in-isolated-silos/">Are Your AI Co-Pilots Trapping Data in Isolated Silos?</a></p><p><a href="https://thenewstack.io/break-the-ai-gridlock-at-the-intersection-of-velocity-and-trust/">Break the AI Gridlock at the Intersection of Velocity and Trust</a></p><p><a href="https://thenewstack.io/taming-ai-observability-control-is-the-key-to-success/">Taming AI Observability: Control Is the Key to Success</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Wed, 17 Dec 2025 17:30:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Martin Fuentes, Thanos Matzanas, Frederic Lardinois, IBM, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/breaking-data-team-silos-is-the-key-to-getting-ai-to-production-QEXn5jY4</link>
      <content:encoded><![CDATA[<p>Enterprises are racing to deploy AI services, but the teams responsible for running them in production are seeing familiar problems reemerge—most notably, silos between data scientists and operations teams, reminiscent of the old DevOps divide. In a discussion recorded at AWS re:Invent 2025, IBM’s Thanos Matzanas and Martin Fuentes argue that the challenge isn’t new technology but repeating organizational patterns. As data teams move from internal projects to revenue-critical, customer-facing applications, they face new pressures around reliability, observability, and accountability.</p><p>The speakers stress that many existing observability and governance practices still apply. Standard metrics, KPIs, SLOs, access controls, and audit logs remain essential foundations, even as AI introduces non-determinism and a heavier reliance on human feedback to assess quality. Tools like OpenTelemetry provide common ground, but culture matters more than tooling.</p><p>Both emphasize starting with business value and breaking down silos early by involving data teams in production discussions. Rather than replacing observability professionals, AI should augment human expertise, especially in critical systems where trust, safety, and compliance are paramount.</p><p>Learn more from The New Stack about enabling AI with silos: </p><p><a href="https://thenewstack.io/are-your-ai-co-pilots-trapping-data-in-isolated-silos/">Are Your AI Co-Pilots Trapping Data in Isolated Silos?</a></p><p><a href="https://thenewstack.io/break-the-ai-gridlock-at-the-intersection-of-velocity-and-trust/">Break the AI Gridlock at the Intersection of Velocity and Trust</a></p><p><a href="https://thenewstack.io/taming-ai-observability-control-is-the-key-to-success/">Taming AI Observability: Control Is the Key to Success</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. 
</a></p>
]]></content:encoded>
      <enclosure length="29552683" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/8ae1d869-d854-4da2-ab5c-3bac19122634/audio/3442150e-a091-4467-a175-0a2ac394e0d7/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Breaking Data Team Silos Is the Key to Getting AI to Production</itunes:title>
      <itunes:author>Martin Fuentes, Thanos Matzanas, Frederic Lardinois, IBM, The New Stack</itunes:author>
      <itunes:duration>00:30:47</itunes:duration>
      <itunes:summary>Enterprises are racing to deploy AI services, but the teams responsible for running them in production are seeing familiar problems reemerge—most notably, silos between data scientists and operations teams, reminiscent of the old DevOps divide. In a discussion recorded at AWS re:Invent 2025, IBM’s Thanos Matzanas and Martin Fuentes argue that the challenge isn’t new technology but repeating organizational patterns. As data teams move from internal projects to revenue-critical, customer-facing applications, they face new pressures around reliability, observability, and accountability.</itunes:summary>
      <itunes:subtitle>Enterprises are racing to deploy AI services, but the teams responsible for running them in production are seeing familiar problems reemerge—most notably, silos between data scientists and operations teams, reminiscent of the old DevOps divide. In a discussion recorded at AWS re:Invent 2025, IBM’s Thanos Matzanas and Martin Fuentes argue that the challenge isn’t new technology but repeating organizational patterns. As data teams move from internal projects to revenue-critical, customer-facing applications, they face new pressures around reliability, observability, and accountability.</itunes:subtitle>
      <itunes:keywords>data, frederic lardinois, software developer, ai, thanos matzanas, tech podcast, the new stack, ai developer, martin feuntes, tech, ibm, the new stack makers, software engineer, silos, workflows, observability, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1578</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6f347ca1-7301-4469-a276-cc430c2b0985</guid>
      <title>Why AI Parallelization Will Be One of the Biggest Challenges of 2026</title>
      <description><![CDATA[<p>Rob Whiteley, CEO of Coder, argues that the biggest winners in today’s AI boom resemble the “picks and shovels” sellers of the California Gold Rush: companies that provide tools enabling others to build with AI. Speaking on <i>The New Stack Makers</i> at AWS re:Invent, Whiteley described the current AI moment as the fastest-moving shift he’s seen in 25 years of tech. Developers are rapidly adopting AI tools, while platform teams face pressure to approve them, as saying “no” is no longer viable. </p><p>Whiteley warns of a widening gap between organizations that extract real value from AI and those that don’t, driven by skills shortages and insufficient investment in training. He sees parallels with the cloud-native transition and predicts the rise of “AI-native” companies. As agentic AI grows, developers increasingly act as managers overseeing many parallel AI agents, creating new challenges around governance, security, and state management. To address this, Coder introduced Mux, an open source coding agent multiplexer designed to help developers manage and evaluate large volumes of AI-generated code efficiently.</p><p>Learn more from The New Stack about AI parallelization:</p><p><a href="https://thenewstack.io/the-production-generative-ai-stack-architecture-and-components/">The Production Generative AI Stack: Architecture and Components</a></p><p><a href="https://thenewstack.io/unlock-velocity-enable-parallel-frontend-backend-development/">Enable Parallel Frontend/Backend Development to Unlock Velocity</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Tue, 16 Dec 2025 14:30:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Rob Whiteley, Coder, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/why-ai-parallelization-will-be-one-of-the-biggest-challenges-of-2026-0w_vQzKx</link>
      <content:encoded><![CDATA[<p>Rob Whiteley, CEO of Coder, argues that the biggest winners in today’s AI boom resemble the “picks and shovels” sellers of the California Gold Rush: companies that provide tools enabling others to build with AI. Speaking on <i>The New Stack Makers</i> at AWS re:Invent, Whiteley described the current AI moment as the fastest-moving shift he’s seen in 25 years of tech. Developers are rapidly adopting AI tools, while platform teams face pressure to approve them, as saying “no” is no longer viable. </p><p>Whiteley warns of a widening gap between organizations that extract real value from AI and those that don’t, driven by skills shortages and insufficient investment in training. He sees parallels with the cloud-native transition and predicts the rise of “AI-native” companies. As agentic AI grows, developers increasingly act as managers overseeing many parallel AI agents, creating new challenges around governance, security, and state management. To address this, Coder introduced Mux, an open source coding agent multiplexer designed to help developers manage and evaluate large volumes of AI-generated code efficiently.</p><p>Learn more from The New Stack about AI parallelization:</p><p><a href="https://thenewstack.io/the-production-generative-ai-stack-architecture-and-components/">The Production Generative AI Stack: Architecture and Components</a></p><p><a href="https://thenewstack.io/unlock-velocity-enable-parallel-frontend-backend-development/">Enable Parallel Frontend/Backend Development to Unlock Velocity</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="23135755" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/c7b72c9b-c9fa-45d9-99ff-de4c59710f67/audio/35819e49-9e8d-42da-9685-9869a9b0f1b5/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Why AI Parallelization Will Be One of the Biggest Challenges of 2026</itunes:title>
      <itunes:author>Rob Whiteley, Coder, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:24:05</itunes:duration>
      <itunes:summary>Rob Whiteley, CEO of Coder, argues that the biggest winners in today’s AI boom resemble the “picks and shovels” sellers of the California Gold Rush: companies that provide tools enabling others to build with AI. Speaking on The New Stack Makers at AWS re:Invent, Whiteley described the current AI moment as the fastest-moving shift he’s seen in 25 years of tech. Developers are rapidly adopting AI tools, while platform teams face pressure to approve them, as saying “no” is no longer viable. </itunes:summary>
      <itunes:subtitle>Rob Whiteley, CEO of Coder, argues that the biggest winners in today’s AI boom resemble the “picks and shovels” sellers of the California Gold Rush: companies that provide tools enabling others to build with AI. Speaking on The New Stack Makers at AWS re:Invent, Whiteley described the current AI moment as the fastest-moving shift he’s seen in 25 years of tech. Developers are rapidly adopting AI tools, while platform teams face pressure to approve them, as saying “no” is no longer viable. </itunes:subtitle>
      <itunes:keywords>coder, ai parallelization, software developer, tech podcast, rob whiteley, ai developer, mux, tech, ai skills, ai-generated code, open source, aws reinvent, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1577</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">dffbb7c3-4ad9-4558-89c2-a3daae708aee</guid>
      <title>Kubernetes GPU Management Just Got a Major Upgrade</title>
      <description><![CDATA[<p>Nvidia Distinguished Engineer Kevin Klues noted that low-level systems work is invisible when done well and highly visible when it fails — a dynamic that frames current Kubernetes innovations for AI. At KubeCon + CloudNativeCon North America 2025, Klues and AWS product manager Jesse Butler discussed two emerging capabilities: dynamic resource allocation (DRA) and a new workload abstraction designed for sophisticated AI scheduling.</p><p>DRA, now generally available in Kubernetes 1.34, fixes long-standing limitations in GPU requests. Instead of simply asking for a number of GPUs, users can specify types and configurations. Modeled after persistent volumes, DRA allows any specialized hardware to be exposed through standardized interfaces, enabling vendors to deliver custom device drivers cleanly. Butler called it one of the most elegant designs in Kubernetes.</p><p>Yet complex AI workloads require more coordination. A forthcoming workload abstraction, debuting in Kubernetes 1.35, will let users define pod groups with strict scheduling and topology rules — ensuring multi-node jobs start fully or not at all. Klues emphasized that this abstraction will shape Kubernetes’ AI trajectory for the next decade and encouraged community involvement.</p><p>Learn more from The New Stack about dynamic resource allocation: </p><p><a href="https://thenewstack.io/kubernetes-primer-dynamic-resource-allocation-dra-for-gpu-workloads/">Kubernetes Primer: Dynamic Resource Allocation (DRA) for GPU Workloads</a></p><p><a href="https://thenewstack.io/kubernetes-v1-34-introduces-benefits-but-also-new-blind-spots/">Kubernetes v1.34 Introduces Benefits but Also New Blind Spots</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 11 Dec 2025 18:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Kevin Klues, Nvidia, Alex Williams, AWS, The New Stack, Jesse Butler)</author>
      <link>https://thenewstack.simplecast.com/episodes/kubernetes-gpu-management-just-got-a-major-upgrade-pIda8IiW</link>
      <content:encoded><![CDATA[<p>Nvidia Distinguished Engineer Kevin Klues noted that low-level systems work is invisible when done well and highly visible when it fails — a dynamic that frames current Kubernetes innovations for AI. At KubeCon + CloudNativeCon North America 2025, Klues and AWS product manager Jesse Butler discussed two emerging capabilities: dynamic resource allocation (DRA) and a new workload abstraction designed for sophisticated AI scheduling.</p><p>DRA, now generally available in Kubernetes 1.34, fixes long-standing limitations in GPU requests. Instead of simply asking for a number of GPUs, users can specify types and configurations. Modeled after persistent volumes, DRA allows any specialized hardware to be exposed through standardized interfaces, enabling vendors to deliver custom device drivers cleanly. Butler called it one of the most elegant designs in Kubernetes.</p><p>Yet complex AI workloads require more coordination. A forthcoming workload abstraction, debuting in Kubernetes 1.35, will let users define pod groups with strict scheduling and topology rules — ensuring multi-node jobs start fully or not at all. Klues emphasized that this abstraction will shape Kubernetes’ AI trajectory for the next decade and encouraged community involvement.</p><p>Learn more from The New Stack about dynamic resource allocation: </p><p><a href="https://thenewstack.io/kubernetes-primer-dynamic-resource-allocation-dra-for-gpu-workloads/">Kubernetes Primer: Dynamic Resource Allocation (DRA) for GPU Workloads</a></p><p><a href="https://thenewstack.io/kubernetes-v1-34-introduces-benefits-but-also-new-blind-spots/">Kubernetes v1.34 Introduces Benefits but Also New Blind Spots</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="34031533" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/4751a602-297d-4111-923d-5c7056d7466d/audio/f6f752a2-575c-457f-99dd-306cafaaac0e/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Kubernetes GPU Management Just Got a Major Upgrade</itunes:title>
      <itunes:author>Kevin Klues, Nvidia, Alex Williams, AWS, The New Stack, Jesse Butler</itunes:author>
      <itunes:duration>00:35:26</itunes:duration>
      <itunes:summary>Nvidia Distinguished Engineer Kevin Klues noted that low-level systems work is invisible when done well and highly visible when it fails — a dynamic that frames current Kubernetes innovations for AI. At KubeCon + CloudNativeCon North America 2025, Klues and AWS product manager Jesse Butler discussed two emerging capabilities: dynamic resource allocation (DRA) and a new workload abstraction designed for sophisticated AI scheduling.</itunes:summary>
      <itunes:subtitle>Nvidia Distinguished Engineer Kevin Klues noted that low-level systems work is invisible when done well and highly visible when it fails — a dynamic that frames current Kubernetes innovations for AI. At KubeCon + CloudNativeCon North America 2025, Klues and AWS product manager Jesse Butler discussed two emerging capabilities: dynamic resource allocation (DRA) and a new workload abstraction designed for sophisticated AI scheduling.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, ai developer, jesse butler, tech, kubernetes, the new stack makers, dynamic resource allocation, software engineer, kevin klues, open source, nvidia, aws, kubecon atlanta, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1575</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6c9da1d9-e51c-4a48-a148-addaa04001e6</guid>
      <title>The Rise of the Cognitive Architect</title>
      <description><![CDATA[<p>At KubeCon North America 2025, GitLab’s Emilio Salvador outlined how developers are shifting from individual coders to leaders of hybrid human–AI teams. He envisions developers evolving into “cognitive architects,” responsible for breaking down large, complex problems and distributing work across both AI agents and humans. Complementing this is the emerging role of the “AI guardian,” reflecting growing skepticism around AI-generated code. Even as AI produces more code, humans remain accountable for reviewing quality, security, and compliance.</p><p>Salvador also described GitLab’s “AI paradox”: developers may code faster with AI, but overall productivity stalls because testing, security, and compliance processes haven’t kept pace. To fix this, he argues organizations must apply AI across the entire development lifecycle, not just in coding. GitLab’s Duo Agent Platform aims to support that end-to-end transformation.</p><p>Looking ahead, Salvador predicts the rise of a proactive “meta agent” that functions like a full team member. Still, he warns that enterprise adoption remains slow and advises organizations to start small, build skills, and scale gradually.</p><p>Learn more from The New Stack about the evolving role of "cognitive architects":</p><p><a href="https://thenewstack.io/the-engineer-in-the-ai-age-the-orchestrator-and-architect/">The Engineer in the AI Age: The Orchestrator and Architect</a></p><p><a href="https://thenewstack.io/the-new-role-of-enterprise-architecture-in-the-ai-era/">The New Role of Enterprise Architecture in the AI Era</a></p><p><a href="https://thenewstack.io/the-architects-guide-to-understanding-agentic-ai/">The Architect’s Guide to Understanding Agentic AI</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Wed, 10 Dec 2025 18:20:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Emilio Salvador, The New Stack, Frederic Lardinois, GitLab)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-rise-of-the-cognitive-architect-R_M0CwHN</link>
      <content:encoded><![CDATA[<p>At KubeCon North America 2025, GitLab’s Emilio Salvador outlined how developers are shifting from individual coders to leaders of hybrid human–AI teams. He envisions developers evolving into “cognitive architects,” responsible for breaking down large, complex problems and distributing work across both AI agents and humans. Complementing this is the emerging role of the “AI guardian,” reflecting growing skepticism around AI-generated code. Even as AI produces more code, humans remain accountable for reviewing quality, security, and compliance.</p><p>Salvador also described GitLab’s “AI paradox”: developers may code faster with AI, but overall productivity stalls because testing, security, and compliance processes haven’t kept pace. To fix this, he argues organizations must apply AI across the entire development lifecycle, not just in coding. GitLab’s Duo Agent Platform aims to support that end-to-end transformation.</p><p>Looking ahead, Salvador predicts the rise of a proactive “meta agent” that functions like a full team member. Still, he warns that enterprise adoption remains slow and advises organizations to start small, build skills, and scale gradually.</p><p>Learn more from The New Stack about the evolving role of "cognitive architects":</p><p><a href="https://thenewstack.io/the-engineer-in-the-ai-age-the-orchestrator-and-architect/">The Engineer in the AI Age: The Orchestrator and Architect</a></p><p><a href="https://thenewstack.io/the-new-role-of-enterprise-architecture-in-the-ai-era/">The New Role of Enterprise Architecture in the AI Era</a></p><p><a href="https://thenewstack.io/the-architects-guide-to-understanding-agentic-ai/">The Architect’s Guide to Understanding Agentic AI</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="21977590" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/35becb0f-39e0-49c2-9c92-9651f304c2fa/audio/5810fd6b-9e56-4366-9997-b1514ef865ea/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The Rise of the Cognitive Architect</itunes:title>
      <itunes:author>Emilio Salvador, The New Stack, Frederic Lardinois, GitLab</itunes:author>
      <itunes:duration>00:22:53</itunes:duration>
      <itunes:summary>At KubeCon North America 2025, GitLab’s Emilio Salvador outlined how developers are shifting from individual coders to leaders of hybrid human–AI teams. He envisions developers evolving into “cognitive architects,” responsible for breaking down large, complex problems and distributing work across both AI agents and humans. Complementing this is the emerging role of the “AI guardian,” reflecting growing skepticism around AI-generated code. Even as AI produces more code, humans remain accountable for reviewing quality, security, and compliance.</itunes:summary>
      <itunes:subtitle>At KubeCon North America 2025, GitLab’s Emilio Salvador outlined how developers are shifting from individual coders to leaders of hybrid human–AI teams. He envisions developers evolving into “cognitive architects,” responsible for breaking down large, complex problems and distributing work across both AI agents and humans. Complementing this is the emerging role of the “AI guardian,” reflecting growing skepticism around AI-generated code. Even as AI produces more code, humans remain accountable for reviewing quality, security, and compliance.</itunes:subtitle>
      <itunes:keywords>software developer, ai agents, ai development lifecycle, the new stack, cognitive architects, ai developer, kubernetes, the new stack makers, software engineer, open source, software development lifecycle, gitlab, kubecon atlanta, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1574</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5e3c3b21-7fab-4398-9d10-72d4982c1610</guid>
      <title>Why the CNCF&apos;s New Executive Director Is Obsessed With Inference</title>
      <description><![CDATA[<p>Jonathan Bryce, the new CNCF executive director, argues that inference—not model training—will define the next decade of computing. Speaking at KubeCon North America 2025, he emphasized that while the industry obsesses over massive LLM training runs, the real opportunity lies in efficiently serving these models at scale. Cloud-native infrastructure, he says, is uniquely suited to this shift because inference requires real-time deployment, security, scaling, and observability—strengths of the CNCF ecosystem.</p><p>Bryce believes Kubernetes is already central to modern inference stacks, with projects like Ray, KServe, and emerging GPU-oriented tooling enabling teams to deploy and operationalize models. To bring consistency to this fast-moving space, the CNCF launched a Kubernetes AI Conformance Program, ensuring environments support GPU workloads and Dynamic Resource Allocation. With AI agents poised to multiply inference demand by executing parallel, multi-step tasks, efficiency becomes essential. Bryce predicts that smaller, task-specific models and cloud-native routing optimizations will drive major performance gains. Ultimately, he sees CNCF technologies forming the foundation for what he calls “the biggest workload mankind will ever have.”</p><p>Learn more from The New Stack about inference:</p><p><a href="https://thenewstack.io/confronting-ais-next-big-challenge-inference-compute/">Confronting AI’s Next Big Challenge: Inference Compute</a></p><p><a href="https://thenewstack.io/deep-infra-is-building-an-ai-inference-cloud-for-developers/">Deep Infra Is Building an AI Inference Cloud for Developers</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Tue, 9 Dec 2025 21:25:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Frederic Lardinois, Jonathan Bryce, CNCF, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/why-the-cncfs-new-executive-director-is-obsessed-with-inference-GlS1zESt</link>
      <content:encoded><![CDATA[<p>Jonathan Bryce, the new CNCF executive director, argues that inference—not model training—will define the next decade of computing. Speaking at KubeCon North America 2025, he emphasized that while the industry obsesses over massive LLM training runs, the real opportunity lies in efficiently serving these models at scale. Cloud-native infrastructure, he says, is uniquely suited to this shift because inference requires real-time deployment, security, scaling, and observability—strengths of the CNCF ecosystem.</p><p>Bryce believes Kubernetes is already central to modern inference stacks, with projects like Ray, KServe, and emerging GPU-oriented tooling enabling teams to deploy and operationalize models. To bring consistency to this fast-moving space, the CNCF launched a Kubernetes AI Conformance Program, ensuring environments support GPU workloads and Dynamic Resource Allocation. With AI agents poised to multiply inference demand by executing parallel, multi-step tasks, efficiency becomes essential. Bryce predicts that smaller, task-specific models and cloud-native routing optimizations will drive major performance gains. Ultimately, he sees CNCF technologies forming the foundation for what he calls “the biggest workload mankind will ever have.”</p><p>Learn more from The New Stack about inference:</p><p><a href="https://thenewstack.io/confronting-ais-next-big-challenge-inference-compute/">Confronting AI’s Next Big Challenge: Inference Compute</a></p><p><a href="https://thenewstack.io/deep-infra-is-building-an-ai-inference-cloud-for-developers/">Deep Infra Is Building an AI Inference Cloud for Developers</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="24153486" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/f850d4f4-ddd7-40a9-abc8-41176e52fe85/audio/59aabe04-589b-4f5f-8e29-98b088000ea1/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Why the CNCF&apos;s New Executive Director Is Obsessed With Inference</itunes:title>
      <itunes:author>Frederic Lardinois, Jonathan Bryce, CNCF, The New Stack</itunes:author>
      <itunes:duration>00:25:09</itunes:duration>
      <itunes:summary>Jonathan Bryce, the new CNCF executive director, argues that inference—not model training—will define the next decade of computing. Speaking at KubeCon North America 2025, he emphasized that while the industry obsesses over massive LLM training runs, the real opportunity lies in efficiently serving these models at scale. Cloud-native infrastructure, he says, is uniquely suited to this shift because inference requires real-time deployment, security, scaling, and observability—strengths of the CNCF ecosystem.</itunes:summary>
      <itunes:subtitle>Jonathan Bryce, the new CNCF executive director, argues that inference—not model training—will define the next decade of computing. Speaking at KubeCon North America 2025, he emphasized that while the industry obsesses over massive LLM training runs, the real opportunity lies in efficiently serving these models at scale. Cloud-native infrastructure, he says, is uniquely suited to this shift because inference requires real-time deployment, security, scaling, and observability—strengths of the CNCF ecosystem.</itunes:subtitle>
      <itunes:keywords>open community, frederic lardinois, software developer, ai agents, tech podcast, the new stack, ai developer, predictions, inference, tech, kubernetes, software engineer, open source, jonathan bryce, cncf, kubecon atlanta, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1573</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8ef60b70-8a00-4a7b-b783-80e9e54013ab</guid>
      <title>Kubernetes Gets an AI Conformance Program — and VMware Is Already On Board</title>
      <description><![CDATA[<p>The Cloud Native Computing Foundation has introduced the Certified Kubernetes AI Conformance Program to bring consistency to an increasingly fragmented AI ecosystem. Announced at KubeCon + CloudNativeCon North America 2025, the program establishes open, community-driven standards to ensure AI applications run reliably and portably across different Kubernetes platforms. VMware by Broadcom’s vSphere Kubernetes Service (VKS) is among the first platforms to achieve certification.</p><p>In an interview with The New Stack, Broadcom leaders Dilpreet Bindra and Himanshu Singh explained that the program applies lessons from Kubernetes’ early evolution, aiming to reduce the “muddiness” in AI tooling and improve cross-platform interoperability. They emphasized portability as a core value: organizations should be able to move AI workloads between public and private clouds with minimal friction.</p><p>VKS integrates tightly with vSphere, using Kubernetes APIs directly to manage infrastructure components declaratively. This approach, along with new add-on management capabilities, reflects Kubernetes’ growing maturity. According to Bindra and Singh, this stability now enables enterprises to trust Kubernetes as a foundation for production-grade AI.</p><p>Learn more from The New Stack about Broadcom’s latest updates with Kubernetes:</p><p><a href="https://thenewstack.io/has-vmware-finally-caught-up-with-kubernetes/">Has VMware Finally Caught Up with Kubernetes?</a></p><p><a href="https://thenewstack.io/vmware-vcf-9-0-finally-unifies-container-and-vm-management/">VMware VCF 9.0 Finally Unifies Container and VM Management</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Mon, 8 Dec 2025 17:30:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Dilpreet Bindra, Himanshu Singh, Broadcom, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/kubernetes-gets-an-ai-conformance-program-and-vmware-is-already-on-board-NrHpJrHo</link>
      <content:encoded><![CDATA[<p>The Cloud Native Computing Foundation has introduced the Certified Kubernetes AI Conformance Program to bring consistency to an increasingly fragmented AI ecosystem. Announced at KubeCon + CloudNativeCon North America 2025, the program establishes open, community-driven standards to ensure AI applications run reliably and portably across different Kubernetes platforms. VMware by Broadcom’s vSphere Kubernetes Service (VKS) is among the first platforms to achieve certification.</p><p>In an interview with The New Stack, Broadcom leaders Dilpreet Bindra and Himanshu Singh explained that the program applies lessons from Kubernetes’ early evolution, aiming to reduce the “muddiness” in AI tooling and improve cross-platform interoperability. They emphasized portability as a core value: organizations should be able to move AI workloads between public and private clouds with minimal friction.</p><p>VKS integrates tightly with vSphere, using Kubernetes APIs directly to manage infrastructure components declaratively. This approach, along with new add-on management capabilities, reflects Kubernetes’ growing maturity. According to Bindra and Singh, this stability now enables enterprises to trust Kubernetes as a foundation for production-grade AI.</p><p>Learn more from The New Stack about Broadcom’s latest updates with Kubernetes:</p><p><a href="https://thenewstack.io/has-vmware-finally-caught-up-with-kubernetes/">Has VMware Finally Caught Up with Kubernetes?</a></p><p><a href="https://thenewstack.io/vmware-vcf-9-0-finally-unifies-container-and-vm-management/">VMware VCF 9.0 Finally Unifies Container and VM Management</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="29444849" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/a27e05a1-5ac6-43da-9511-b3902dfe0d78/audio/163e5e47-383f-4c87-a47d-9c5a27094934/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Kubernetes Gets an AI Conformance Program — and VMware Is Already On Board</itunes:title>
      <itunes:author>Dilpreet Bindra, Himanshu Singh, Broadcom, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:30:40</itunes:duration>
      <itunes:summary>The Cloud Native Computing Foundation has introduced the Certified Kubernetes AI Conformance Program to bring consistency to an increasingly fragmented AI ecosystem. Announced at KubeCon + CloudNativeCon North America 2025, the program establishes open, community-driven standards to ensure AI applications run reliably and portably across different Kubernetes platforms. VMware by Broadcom’s vSphere Kubernetes Service (VKS) is among the first platforms to achieve certification.</itunes:summary>
      <itunes:subtitle>The Cloud Native Computing Foundation has introduced the Certified Kubernetes AI Conformance Program to bring consistency to an increasingly fragmented AI ecosystem. Announced at KubeCon + CloudNativeCon North America 2025, the program establishes open, community-driven standards to ensure AI applications run reliably and portably across different Kubernetes platforms. VMware by Broadcom’s vSphere Kubernetes Service (VKS) is among the first platforms to achieve certification.</itunes:subtitle>
      <itunes:keywords>the new stack, ai developer, broadcom, kubernetes, the new stack makers, broadcom’s vsphere kubernetes service (vks), open source, dilpreet bindra, himanshu singh, kubecon atlanta, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1572</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6acdcb71-e335-4c3e-b45f-e2950c833b23</guid>
      <title>How etcd Solved Its Knowledge Drain with Deterministic Testing</title>
      <description><![CDATA[<p>The etcd project — a distributed key-value store older than Kubernetes — recently faced significant challenges due to maintainer turnover and the resulting loss of unwritten institutional knowledge. Lead maintainer Marek Siarkowicz explained that as longtime contributors left, crucial expertise about testing procedures and correctness guarantees disappeared. This gap led to a problematic release that introduced critical reliability issues, including potential data inconsistencies after crashes.</p><p>To rebuild confidence in etcd’s correctness, the new maintainer team introduced “robustness testing,” creating a framework inspired by Jepsen to validate both basic and distributed-system behavior. Their goal was to ensure linearizability, the “Holy Grail” of distributed systems, which required developing custom failure-injection tools and teaching the community how to debug complex scenarios.</p><p>The team later partnered with Antithesis to apply deterministic simulation testing, enabling fully reproducible execution paths and easier detection of subtle race conditions. This approach helped codify implicit knowledge into explicit properties and assertions. Siarkowicz emphasized that such rigorous testing is essential for safeguarding the sensitive “core” of large open source projects, ensuring correctness even as maintainers change.</p><p>Learn more from The New Stack about the etcd project:</p><p><a href="https://thenewstack.io/tutorial-install-a-highly-available-k3s-cluster-at-the-edge/">Tutorial: Install a Highly Available K3s Cluster at the Edge</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Fri, 5 Dec 2025 15:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Marek Siarkowicz, Antithesis, Heather Joslyn, The New Stack, Google)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-etcd-solved-its-knowledge-drain-with-deterministic-testing-3E9fyII3</link>
      <content:encoded><![CDATA[<p>The etcd project — a distributed key-value store older than Kubernetes — recently faced significant challenges due to maintainer turnover and the resulting loss of unwritten institutional knowledge. Lead maintainer Marek Siarkowicz explained that as longtime contributors left, crucial expertise about testing procedures and correctness guarantees disappeared. This gap led to a problematic release that introduced critical reliability issues, including potential data inconsistencies after crashes.</p><p>To rebuild confidence in etcd’s correctness, the new maintainer team introduced “robustness testing,” creating a framework inspired by Jepsen to validate both basic and distributed-system behavior. Their goal was to ensure linearizability, the “Holy Grail” of distributed systems, which required developing custom failure-injection tools and teaching the community how to debug complex scenarios.</p><p>The team later partnered with Antithesis to apply deterministic simulation testing, enabling fully reproducible execution paths and easier detection of subtle race conditions. This approach helped codify implicit knowledge into explicit properties and assertions. Siarkowicz emphasized that such rigorous testing is essential for safeguarding the sensitive “core” of large open source projects, ensuring correctness even as maintainers change.</p><p>Learn more from The New Stack about the etcd project:</p><p><a href="https://thenewstack.io/tutorial-install-a-highly-available-k3s-cluster-at-the-edge/">Tutorial: Install a Highly Available K3s Cluster at the Edge</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="20463324" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/05753291-155c-4a35-87f5-1f52bf6b387c/audio/5b06cc99-0981-41fb-94c7-64eef6f6e153/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How etcd Solved Its Knowledge Drain with Deterministic Testing</itunes:title>
      <itunes:author>Marek Siarkowicz, Antithesis, Heather Joslyn, The New Stack, Google</itunes:author>
      <itunes:duration>00:21:18</itunes:duration>
      <itunes:summary>The etcd project — a distributed key-value store older than Kubernetes — recently faced significant challenges due to maintainer turnover and the resulting loss of unwritten institutional knowledge. Lead maintainer Marek Siarkowicz explained that as longtime contributors left, crucial expertise about testing procedures and correctness guarantees disappeared. This gap led to a problematic release that introduced critical reliability issues, including potential data inconsistencies after crashes.</itunes:summary>
      <itunes:subtitle>The etcd project — a distributed key-value store older than Kubernetes — recently faced significant challenges due to maintainer turnover and the resulting loss of unwritten institutional knowledge. Lead maintainer Marek Siarkowicz explained that as longtime contributors left, crucial expertise about testing procedures and correctness guarantees disappeared. This gap led to a problematic release that introduced critical reliability issues, including potential data inconsistencies after crashes.</itunes:subtitle>
      <itunes:keywords>marek siarkowicz, kubecon atlanta, google, tech podcast, the new stack, tech, etcd, the new stack makers, antithesis, open source</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1571</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">937318a8-7172-40db-9f5d-405ff85ffbd9</guid>
      <title>Helm 4: What’s New in the Open Source Kubernetes Package Manager?</title>
      <description><![CDATA[<p>Helm — originally a hackathon project called Kate’s Place — turned 10 in 2025, marking the milestone with the release of Helm 4, its first major update in six years. Created by Matt Butcher and colleagues as a playful take on “K8s,” the early project won a small prize but quickly grew into a serious effort when Deis leadership recognized the need for a Kubernetes package manager. Renamed Helm, it rapidly expanded with community contributors and became one of the first CNCF graduating projects.</p><p>Helm 4 reflects years of accumulated design debt and evolving use cases. After the rapid iterations of Helm 1, 2, and 3, the latest version modernizes logging, improves dependency management, and introduces WebAssembly-based plugins for cross-platform portability—addressing the growing diversity of operating systems and architectures. Beyond headline features, maintainers emphasize that mature projects increasingly deliver “boring” but essential improvements, such as better logging, which simplify workflows and integrate more cleanly with other tools. Helm’s re-architected internals also lay the foundation for new chart and package capabilities in upcoming 4.x releases.</p><p>Learn more from The New Stack about Helm:</p><p><a href="https://thenewstack.io/the-super-helm-chart-to-deploy-or-not-to-deploy/">The Super Helm Chart: To Deploy or Not To Deploy?</a></p><p><a href="https://thenewstack.io/kubernetes-gets-a-new-resource-orchestrator-in-the-form-of-kro/">Kubernetes Gets a New Resource Orchestrator in the Form of Kro</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Wed, 3 Dec 2025 15:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Matt Farina, SUSE, Heather Joslyn, The New Stack, Fermyon Technologies, Matt Butcher)</author>
      <link>https://thenewstack.simplecast.com/episodes/helm-4-whats-new-in-the-open-source-kubernetes-package-manager-F8dxuSto</link>
      <content:encoded><![CDATA[<p>Helm — originally a hackathon project called Kate’s Place — turned 10 in 2025, marking the milestone with the release of Helm 4, its first major update in six years. Created by Matt Butcher and colleagues as a playful take on “K8s,” the early project won a small prize but quickly grew into a serious effort when Deis leadership recognized the need for a Kubernetes package manager. Renamed Helm, it rapidly expanded with community contributors and became one of the first CNCF graduating projects.</p><p>Helm 4 reflects years of accumulated design debt and evolving use cases. After the rapid iterations of Helm 1, 2, and 3, the latest version modernizes logging, improves dependency management, and introduces WebAssembly-based plugins for cross-platform portability—addressing the growing diversity of operating systems and architectures. Beyond headline features, maintainers emphasize that mature projects increasingly deliver “boring” but essential improvements, such as better logging, which simplify workflows and integrate more cleanly with other tools. Helm’s re-architected internals also lay the foundation for new chart and package capabilities in upcoming 4.x releases.</p><p>Learn more from The New Stack about Helm:</p><p><a href="https://thenewstack.io/the-super-helm-chart-to-deploy-or-not-to-deploy/">The Super Helm Chart: To Deploy or Not To Deploy?</a></p><p><a href="https://thenewstack.io/kubernetes-gets-a-new-resource-orchestrator-in-the-form-of-kro/">Kubernetes Gets a New Resource Orchestrator in the Form of Kro</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="23773561" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/4ecf8c9e-551e-41a3-bc43-217403139aa8/audio/6cffbc4e-5c16-4611-aeb5-3e4ea70cfeba/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Helm 4: What’s New in the Open Source Kubernetes Package Manager?</itunes:title>
      <itunes:author>Matt Farina, SUSE, Heather Joslyn, The New Stack, Fermyon Technologies, Matt Butcher</itunes:author>
      <itunes:duration>00:24:45</itunes:duration>
      <itunes:summary>Helm — originally a hackathon project called Kate’s Place — turned 10 in 2025, marking the milestone with the release of Helm 4, its first major update in six years. Created by Matt Butcher and colleagues as a playful take on “K8s,” the early project won a small prize but quickly grew into a serious effort when Deis leadership recognized the need for a Kubernetes package manager. Renamed Helm, it rapidly expanded with community contributors and became one of the first CNCF graduating projects.</itunes:summary>
      <itunes:subtitle>Helm — originally a hackathon project called Kate’s Place — turned 10 in 2025, marking the milestone with the release of Helm 4, its first major update in six years. Created by Matt Butcher and colleagues as a playful take on “K8s,” the early project won a small prize but quickly grew into a serious effort when Deis leadership recognized the need for a Kubernetes package manager. Renamed Helm, it rapidly expanded with community contributors and became one of the first CNCF graduating projects.</itunes:subtitle>
      <itunes:keywords>fermyon technologies, suse, software developer, matt butcher, tech podcast, the new stack, ai developer, tech, helm, kubernetes, the new stack makers, matt farina, software engineer, open source, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1570</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c75d65e8-443f-482f-8642-1e138de86498</guid>
      <title>All About Cedar, an Open Source Solution for Fine-Tuning Kubernetes Authorization</title>
      <description><![CDATA[<p>Kubernetes has relied on role-based access control (RBAC) since 2017, but its simplicity limits what developers can express, said Micah Hausler, principal engineer at AWS, on The New Stack Makers. RBAC only allows actions; it can’t enforce conditions, denials, or attribute-based rules. Seeking a more expressive authorization model for Kubernetes, Hausler explored Cedar, an authorization engine and policy language created at AWS in 2022 and later open-sourced. Although not designed specifically for Kubernetes, Cedar proved capable of modeling its authorization needs in a concise, readable way. Hausler highlighted Cedar’s clarity—nontechnical users can often understand policies at a glance—as well as its schema validation, autocomplete support, and formal verification, which ensures policies are correct and produce only allow or deny outcomes.</p><p>Now onboarding to the CNCF sandbox, Cedar is used by companies like Cloudflare and MongoDB and offers language-agnostic tooling, including a Go implementation donated by StrongDM. The project is actively seeking contributors, especially to expand bindings for languages like TypeScript, JavaScript, and Python.</p><p>Learn more from The New Stack about Cedar:</p><p><a href="https://thenewstack.io/ceph-20-years-of-cutting-edge-storage-at-the-edge/">Ceph: 20 Years of Cutting-Edge Storage at the Edge </a></p><p><a href="https://thenewstack.io/the-cedar-programming-language-authorization-simplified/">The Cedar Programming Language: Authorization Simplified</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Tue, 2 Dec 2025 15:30:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Micah Hausler, The New Stack, Amazon Web Services, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/all-about-cedar-an-open-source-solution-for-fine-tuning-kubernetes-authorization-GpfTQadj</link>
      <content:encoded><![CDATA[<p>Kubernetes has relied on role-based access control (RBAC) since 2017, but its simplicity limits what developers can express, said Micah Hausler, principal engineer at AWS, on The New Stack Makers. RBAC only allows actions; it can’t enforce conditions, denials, or attribute-based rules. Seeking a more expressive authorization model for Kubernetes, Hausler explored Cedar, an authorization engine and policy language created at AWS in 2022 and later open-sourced. Although not designed specifically for Kubernetes, Cedar proved capable of modeling its authorization needs in a concise, readable way. Hausler highlighted Cedar’s clarity—nontechnical users can often understand policies at a glance—as well as its schema validation, autocomplete support, and formal verification, which ensures policies are correct and produce only allow or deny outcomes.</p><p>Now onboarding to the CNCF sandbox, Cedar is used by companies like Cloudflare and MongoDB and offers language-agnostic tooling, including a Go implementation donated by StrongDM. The project is actively seeking contributors, especially to expand bindings for languages like TypeScript, JavaScript, and Python.</p><p>Learn more from The New Stack about Cedar:</p><p><a href="https://thenewstack.io/ceph-20-years-of-cutting-edge-storage-at-the-edge/">Ceph: 20 Years of Cutting-Edge Storage at the Edge </a></p><p><a href="https://thenewstack.io/the-cedar-programming-language-authorization-simplified/">The Cedar Programming Language: Authorization Simplified</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="15573620" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/80f260e6-86a3-4578-9534-ab9cf4cd37c7/audio/c95205c5-7226-4dea-a6e9-09c2c18faedb/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>All About Cedar, an Open Source Solution for Fine-Tuning Kubernetes Authorization</itunes:title>
      <itunes:author>Micah Hausler, The New Stack, Amazon Web Services, Heather Joslyn</itunes:author>
      <itunes:duration>00:16:13</itunes:duration>
      <itunes:summary>Kubernetes has relied on role-based access control (RBAC) since 2017, but its simplicity limits what developers can express, said Micah Hausler, principal engineer at AWS, on The New Stack Makers. RBAC only allows actions; it can’t enforce conditions, denials, or attribute-based rules. Seeking a more expressive authorization model for Kubernetes, Hausler explored Cedar, an authorization engine and policy language created at AWS in 2022 and later open-sourced. Although not designed specifically for Kubernetes, Cedar proved capable of modeling its authorization needs in a concise, readable way. Hausler highlighted Cedar’s clarity—nontechnical users can often understand policies at a glance—as well as its schema validation, autocomplete support, and formal verification, which ensures policies are correct and produce only allow or deny outcomes.</itunes:summary>
      <itunes:subtitle>Kubernetes has relied on role-based access control (RBAC) since 2017, but its simplicity limits what developers can express, said Micah Hausler, principal engineer at AWS, on The New Stack Makers. RBAC only allows actions; it can’t enforce conditions, denials, or attribute-based rules. Seeking a more expressive authorization model for Kubernetes, Hausler explored Cedar, an authorization engine and policy language created at AWS in 2022 and later open-sourced. Although not designed specifically for Kubernetes, Cedar proved capable of modeling its authorization needs in a concise, readable way. Hausler highlighted Cedar’s clarity—nontechnical users can often understand policies at a glance—as well as its schema validation, autocomplete support, and formal verification, which ensures policies are correct and produce only allow or deny outcomes.</itunes:subtitle>
      <itunes:keywords>policy management, software developer, tech podcast, the new stack, ai developer, amazon web services, tech, kubernetes, the new stack makers, software engineer, open source, kubecon atlanta, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1569</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">97ef8e54-49f4-4806-80b3-d892678b165e</guid>
      <title>Teaching a Billion People to Code: How JupyterLite Is Scaling the Impossible</title>
      <description><![CDATA[<p>JupyterLite, a fully browser-based distribution of JupyterLab, is enabling new levels of global scalability in technical education. Developed by Sylvain Corlay’s QuantStack team, it allows math and programming lessons to run entirely in students’ browsers — kernel included — without relying on Docker or cloud-scale infrastructure. Its most prominent success is Capytale, a French national deployment that supports half a million high school students and over 200,000 weekly sessions from essentially a single server, which hosts only teaching content while computation happens locally in each browser.</p><p>QuantStack, founded in 2016 as what Corlay calls an “accidental startup,” has since grown into a 30-person team contributing across Jupyter, Conda-Forge, and Apache Arrow. But JupyterLite embodies its most ambitious goal: making programming education accessible to countries with rapidly growing youth populations, such as Nigeria, where traditional cloud-hosted notebooks are impractical. Achieving a billion-user future will require advances in accessibility, collaboration, and expanding browser-based package support — efforts that depend on grants and foundation backing.</p><p>Learn more from The New Stack about Project Jupyter</p><p><a href="https://thenewstack.io/from-physics-to-the-future-brian-granger-on-project-jupyter-in-the-age-of-ai/">From Physics to the Future: Brian Granger on Project Jupyter in the Age of AI</a></p><p><a href="https://thenewstack.io/jupyter-ai-v3-could-it-generate-an-ecosystem-of-ai-personas/">Jupyter AI v3: Could It Generate an ‘Ecosystem of AI Personas?’</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p><p> </p>
]]></description>
      <pubDate>Mon, 01 Dec 2025 19:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Sylvain Corlay, QuantStack, The New Stack, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/teaching-a-billion-people-to-code-how-jupyterlite-is-scaling-the-impossible-FkjZEFD0</link>
      <content:encoded><![CDATA[<p>JupyterLite, a fully browser-based distribution of JupyterLab, is enabling new levels of global scalability in technical education. Developed by Sylvain Corlay’s QuantStack team, it allows math and programming lessons to run entirely in students’ browsers — kernel included — without relying on Docker or cloud-scale infrastructure. Its most prominent success is Capytale, a French national deployment that supports half a million high school students and over 200,000 weekly sessions from essentially a single server, which hosts only teaching content while computation happens locally in each browser.</p><p>QuantStack, founded in 2016 as what Corlay calls an “accidental startup,” has since grown into a 30-person team contributing across Jupyter, Conda-Forge, and Apache Arrow. But JupyterLite embodies its most ambitious goal: making programming education accessible to countries with rapidly growing youth populations, such as Nigeria, where traditional cloud-hosted notebooks are impractical. Achieving a billion-user future will require advances in accessibility, collaboration, and expanding browser-based package support — efforts that depend on grants and foundation backing.</p><p>Learn more from The New Stack about Project Jupyter</p><p><a href="https://thenewstack.io/from-physics-to-the-future-brian-granger-on-project-jupyter-in-the-age-of-ai/">From Physics to the Future: Brian Granger on Project Jupyter in the Age of AI</a></p><p><a href="https://thenewstack.io/jupyter-ai-v3-could-it-generate-an-ecosystem-of-ai-personas/">Jupyter AI v3: Could It Generate an ‘Ecosystem of AI Personas?’</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p><p> </p>
]]></content:encoded>
      <enclosure length="18533189" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/7b6aa336-f3e1-4a6c-b673-b061a0baffb5/audio/5ed1ed60-9776-47e4-94d7-41e6c17ca667/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Teaching a Billion People to Code: How JupyterLite Is Scaling the Impossible</itunes:title>
      <itunes:author>Sylvain Corlay, QuantStack, The New Stack, Heather Joslyn</itunes:author>
      <itunes:duration>00:19:18</itunes:duration>
      <itunes:summary>JupyterLite, a fully browser-based distribution of JupyterLab, is enabling new levels of global scalability in technical education. Developed by Sylvain Corlay’s QuantStack team, it allows math and programming lessons to run entirely in students’ browsers — kernel included — without relying on Docker or cloud-scale infrastructure. Its most prominent success is Capytale, a French national deployment that supports half a million high school students and over 200,000 weekly sessions from essentially a single server, which hosts only teaching content while computation happens locally in each browser.
</itunes:summary>
      <itunes:subtitle>JupyterLite, a fully browser-based distribution of JupyterLab, is enabling new levels of global scalability in technical education. Developed by Sylvain Corlay’s QuantStack team, it allows math and programming lessons to run entirely in students’ browsers — kernel included — without relying on Docker or cloud-scale infrastructure. Its most prominent success is Capytale, a French national deployment that supports half a million high school students and over 200,000 weekly sessions from essentially a single server, which hosts only teaching content while computation happens locally in each browser.
</itunes:subtitle>
      <itunes:keywords>project jupyter, the new stack, heather joslyn, jupytercon san diego, jupyterlite, the new stack makers, open source, sylvain corlay, quantstack, jupytercon</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1568</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">13cc47df-618d-48f9-8b93-30ea375dd1cb</guid>
      <title>2026 Will Be the Year of Agentic Workloads in Production on Amazon EKS</title>
      <description><![CDATA[<p>AWS’s approach to Elastic Kubernetes Service has evolved significantly since its 2018 launch. According to Mike Stefaniak, Senior Manager of Product Management for EKS and ECR, today’s users increasingly represent the late majority—teams that want Kubernetes without managing every component themselves. In a conversation on <i>The New Stack Makers</i>, Stefaniak described how AI workloads are reshaping Kubernetes operations and why AWS open-sourced an MCP server for EKS. Early feedback showed that meaningful, task-oriented tool names—not simple API mirrors—made MCP servers more effective for LLMs, prompting AWS to design tools focused on troubleshooting, runbooks, and full application workflows. AWS also introduced a hosted knowledge base built from years of support cases to power more capable agents.</p><p>While “agentic AI” gets plenty of buzz, most customers still rely on human-in-the-loop workflows. Stefaniak expects that to shift, predicting 2026 as the year agentic workloads move into production. For experimentation, he recommends the open-source Strands SDK. Internally, he has already seen major productivity gains from BI agents that automate complex data analysis tasks.</p><p>Learn more from The New Stack about Amazon Web Services’ approach to Elastic Kubernetes Service</p><p><a href="https://thenewstack.io/how-amazon-eks-auto-mode-simplifies-kubernetes-cluster-management-part-1/">How Amazon EKS Auto Mode Simplifies Kubernetes Cluster Management (Part 1)</a></p><p><a href="https://thenewstack.io/a-deep-dive-into-amazon-eks-auto-part-2/">A Deep Dive Into Amazon EKS Auto (Part 2)</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p><p> </p>
]]></description>
      <pubDate>Fri, 28 Nov 2025 18:30:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Amazon Web Services, Frederic Lardinois, The New Stack, Mike Stefaniak)</author>
      <link>https://thenewstack.simplecast.com/episodes/2026-will-be-the-year-of-agentic-workloads-in-production-on-amazon-eks-3mD1hWCH</link>
      <content:encoded><![CDATA[<p>AWS’s approach to Elastic Kubernetes Service has evolved significantly since its 2018 launch. According to Mike Stefaniak, Senior Manager of Product Management for EKS and ECR, today’s users increasingly represent the late majority—teams that want Kubernetes without managing every component themselves. In a conversation on <i>The New Stack Makers</i>, Stefaniak described how AI workloads are reshaping Kubernetes operations and why AWS open-sourced an MCP server for EKS. Early feedback showed that meaningful, task-oriented tool names—not simple API mirrors—made MCP servers more effective for LLMs, prompting AWS to design tools focused on troubleshooting, runbooks, and full application workflows. AWS also introduced a hosted knowledge base built from years of support cases to power more capable agents.</p><p>While “agentic AI” gets plenty of buzz, most customers still rely on human-in-the-loop workflows. Stefaniak expects that to shift, predicting 2026 as the year agentic workloads move into production. For experimentation, he recommends the open-source Strands SDK. Internally, he has already seen major productivity gains from BI agents that automate complex data analysis tasks.</p><p>Learn more from The New Stack about Amazon Web Services’ approach to Elastic Kubernetes Service</p><p><a href="https://thenewstack.io/how-amazon-eks-auto-mode-simplifies-kubernetes-cluster-management-part-1/">How Amazon EKS Auto Mode Simplifies Kubernetes Cluster Management (Part 1)</a></p><p><a href="https://thenewstack.io/a-deep-dive-into-amazon-eks-auto-part-2/">A Deep Dive Into Amazon EKS Auto (Part 2)</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p><p> </p>
]]></content:encoded>
      <enclosure length="22349156" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/2c7ec6cd-7daf-418f-964e-99373f441c9a/audio/d2febc98-61b9-4302-a713-6ab8343b4c52/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>2026 Will Be the Year of Agentic Workloads in Production on Amazon EKS</itunes:title>
      <itunes:author>Amazon Web Services, Frederic Lardinois, The New Stack, Mike Stefaniak</itunes:author>
      <itunes:duration>00:23:16</itunes:duration>
      <itunes:summary>AWS’s approach to Elastic Kubernetes Service has evolved significantly since its 2018 launch. According to Mike Stefaniak, Senior Manager of Product Management for EKS and ECR, today’s users increasingly represent the late majority—teams that want Kubernetes without managing every component themselves. In a conversation on The New Stack Makers, Stefaniak described how AI workloads are reshaping Kubernetes operations and why AWS open-sourced an MCP server for EKS. Early feedback showed that meaningful, task-oriented tool names—not simple API mirrors—made MCP servers more effective for LLMs, prompting AWS to design tools focused on troubleshooting, runbooks, and full application workflows. AWS also introduced a hosted knowledge base built from years of support cases to power more capable agents.</itunes:summary>
      <itunes:subtitle>AWS’s approach to Elastic Kubernetes Service has evolved significantly since its 2018 launch. According to Mike Stefaniak, Senior Manager of Product Management for EKS and ECR, today’s users increasingly represent the late majority—teams that want Kubernetes without managing every component themselves. In a conversation on The New Stack Makers, Stefaniak described how AI workloads are reshaping Kubernetes operations and why AWS open-sourced an MCP server for EKS. Early feedback showed that meaningful, task-oriented tool names—not simple API mirrors—made MCP servers more effective for LLMs, prompting AWS to design tools focused on troubleshooting, runbooks, and full application workflows. AWS also introduced a hosted knowledge base built from years of support cases to power more capable agents.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, ai developer, amazon web services, tech, the new stack makers, software engineer, mike stefaniak, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1566</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ccbe0430-1c57-45b3-84de-5f880df3baaf</guid>
      <title>From Cloud Native to AI Native: Where Are We Going?</title>
      <description><![CDATA[<p>At KubeCon + CloudNativeCon 2025 in Atlanta, the panel of experts - Kate Goldenring of Fermyon Technologies, Idit Levine of Solo.io, Shaun O'Meara of Mirantis, Sean O'Dell of Dynatrace and James Harmison of Red Hat - explored whether the cloud native era has evolved into an AI native era — and what that shift means for infrastructure, security and development practices. Jonathan Bryce of the CNCF argued that true AI-native systems depend on robust inference layers, which have been overshadowed by the hype around chatbots and agents. As organizations push AI to the edge and demand faster, more personalized experiences, Fermyon’s Kate Goldenring highlighted WebAssembly as a way to bundle and securely deploy models directly to GPU-equipped hardware, reducing latency while adding sandboxed security.</p><p>Dynatrace’s Sean O’Dell noted that AI dramatically increases observability needs: integrating LLM-based intelligence adds value but also expands the challenge of filtering massive data streams to understand user behavior. Meanwhile, Mirantis CTO Shaun O’Meara emphasized a return to deeper infrastructure awareness. Unlike abstracted cloud native workloads, AI workloads running on GPUs require careful attention to hardware performance, orchestration, and energy constraints. Managing power-hungry data centers efficiently, he argued, will be a defining challenge of the AI native era.</p><p>Learn more from The New Stack about the cloud native ecosystem's evolution into an AI native era</p><p><a href="https://thenewstack.io/cloud-native-and-ai-why-open-source-needs-standards-like-mcp/">Cloud Native and AI: Why Open Source Needs Standards Like MCP</a></p><p><a href="https://thenewstack.io/a-decade-of-cloud-native-from-cncf-to-the-pandemic-to-ai/">A Decade of Cloud Native: From CNCF, to the Pandemic, to AI</a></p><p><a href="https://thenewstack.io/crossing-the-ai-chasm-lessons-from-the-early-days-of-cloud/">Crossing the AI Chasm: Lessons From the Early Days of Cloud</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p><p> </p>
]]></description>
      <pubDate>Fri, 28 Nov 2025 16:35:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Dynatrace, Mirantis, Kate Goldenring, Shaun O&apos;Meara, Sean O&apos;Dell, James Harmison, Heather Joslyn, The New Stack, Alex Williams, Red Hat, Fermyon Technologies, Jonathan Bryce, CNCF)</author>
      <link>https://thenewstack.simplecast.com/episodes/from-cloud-native-to-ai-native-where-are-we-going-RQEDbrdF</link>
      <content:encoded><![CDATA[<p>At KubeCon + CloudNativeCon 2025 in Atlanta, the panel of experts - Kate Goldenring of Fermyon Technologies, Idit Levine of Solo.io, Shaun O'Meara of Mirantis, Sean O'Dell of Dynatrace and James Harmison of Red Hat - explored whether the cloud native era has evolved into an AI native era — and what that shift means for infrastructure, security and development practices. Jonathan Bryce of the CNCF argued that true AI-native systems depend on robust inference layers, which have been overshadowed by the hype around chatbots and agents. As organizations push AI to the edge and demand faster, more personalized experiences, Fermyon’s Kate Goldenring highlighted WebAssembly as a way to bundle and securely deploy models directly to GPU-equipped hardware, reducing latency while adding sandboxed security.</p><p>Dynatrace’s Sean O’Dell noted that AI dramatically increases observability needs: integrating LLM-based intelligence adds value but also expands the challenge of filtering massive data streams to understand user behavior. Meanwhile, Mirantis CTO Shaun O’Meara emphasized a return to deeper infrastructure awareness. Unlike abstracted cloud native workloads, AI workloads running on GPUs require careful attention to hardware performance, orchestration, and energy constraints. Managing power-hungry data centers efficiently, he argued, will be a defining challenge of the AI native era.</p><p>Learn more from The New Stack about the cloud native ecosystem's evolution into an AI native era</p><p><a href="https://thenewstack.io/cloud-native-and-ai-why-open-source-needs-standards-like-mcp/">Cloud Native and AI: Why Open Source Needs Standards Like MCP</a></p><p><a href="https://thenewstack.io/a-decade-of-cloud-native-from-cncf-to-the-pandemic-to-ai/">A Decade of Cloud Native: From CNCF, to the Pandemic, to AI</a></p><p><a href="https://thenewstack.io/crossing-the-ai-chasm-lessons-from-the-early-days-of-cloud/">Crossing the AI Chasm: Lessons From the Early Days of Cloud</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p><p> </p>
]]></content:encoded>
      <enclosure length="42574201" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/227f7efb-0d3b-41fb-a1b1-57d22834215b/audio/da715a8d-2c0d-4cfe-8c41-197bed39bdea/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>From Cloud Native to AI Native: Where Are We Going?</itunes:title>
      <itunes:author>Dynatrace, Mirantis, Kate Goldenring, Shaun O&apos;Meara, Sean O&apos;Dell, James Harmison, Heather Joslyn, The New Stack, Alex Williams, Red Hat, Fermyon Technologies, Jonathan Bryce, CNCF</itunes:author>
      <itunes:duration>00:44:20</itunes:duration>
      <itunes:summary>At KubeCon + CloudNativeCon in Atlanta, the panel of experts - Kate Goldenring of Fermyon Technologies, Shaun O&apos;Meara of Mirantis, Sean O&apos;Dell of Dynatrace and James Harmison of Red Hat - explored whether the cloud native era has evolved into an AI native era — and what that shift means for infrastructure, security and development practices. Jonathan Bryce of the CNCF argued that true AI-native systems depend on robust inference layers, which have been overshadowed by the hype around chatbots and agents. As organizations push AI to the edge and demand faster, more personalized experiences, Fermyon’s Kate Goldenring highlighted WebAssembly as a way to bundle and securely deploy models directly to GPU-equipped hardware, reducing latency while adding sandboxed security.</itunes:summary>
      <itunes:subtitle>At KubeCon + CloudNativeCon in Atlanta, the panel of experts - Kate Goldenring of Fermyon Technologies, Shaun O&apos;Meara of Mirantis, Sean O&apos;Dell of Dynatrace and James Harmison of Red Hat - explored whether the cloud native era has evolved into an AI native era — and what that shift means for infrastructure, security and development practices. Jonathan Bryce of the CNCF argued that true AI-native systems depend on robust inference layers, which have been overshadowed by the hype around chatbots and agents. As organizations push AI to the edge and demand faster, more personalized experiences, Fermyon’s Kate Goldenring highlighted WebAssembly as a way to bundle and securely deploy models directly to GPU-equipped hardware, reducing latency while adding sandboxed security.</itunes:subtitle>
      <itunes:keywords>fermyon technologies, red hat, james harmison, alex williams, the new stack, heather joslyn, cloud native, infrastructure, dynatrace, kubecon atlanta 2025, shaun o&apos;meara, ai native, open source, jonathan bryce, mirantis, sean o&apos;dell, cncf, kubecon, kate goldenring</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1567</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8abf25bd-bb0d-44e7-9e9e-0de51b6af449</guid>
      <title>Amazon CTO Werner Vogels&apos; Predictions for 2026</title>
      <description><![CDATA[<p>AWS re:Invent has long featured CTO Werner Vogels’ closing keynote, but this year he signaled it may be his last, emphasizing it’s time for “younger voices” at Amazon. After 21 years with the company, Vogels reflected on arriving as an academic and being stunned by Amazon’s technical scale—an energy that still drives him today. He released his annual predictions ahead of re:Invent, with this year’s five themes focused heavily on AI and broader societal impacts.</p><p>Vogels highlights technology’s growing role in addressing loneliness, noting how devices like Alexa can offer comfort to those who feel isolated. He foresees a “Renaissance developer,” where engineers must pair deep expertise with broad business and creative awareness. He warns quantum-safe encryption is becoming urgent as data harvested today may be decrypted within five years. Military innovations, he notes, continue to influence civilian tech, for better and worse. Finally, he argues personalized learning can preserve children’s curiosity and better support teachers, which he views as essential for future education.</p><p>Learn more from The New Stack about the evolving role of technology systems, from past to future: </p><p><a href="https://thenewstack.io/werner-vogels-6-lessons-for-keeping-systems-simple/">Werner Vogels’ 6 Lessons for Keeping Systems Simple</a></p><p><a href="https://thenewstack.io/50-years-later-remembering-how-the-future-looked-in-1974/">50 Years Later: Remembering How the Future Looked in 1974</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p><p> </p>
]]></description>
      <pubDate>Tue, 25 Nov 2025 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Werner Vogels, The New Stack, Amazon Web Services, Frederic Lardinois)</author>
      <link>https://thenewstack.simplecast.com/episodes/amazon-cto-werner-vogels-predictions-for-2026-XuYXWkHT</link>
      <content:encoded><![CDATA[<p>AWS re:Invent has long featured CTO Werner Vogels’ closing keynote, but this year he signaled it may be his last, emphasizing it’s time for “younger voices” at Amazon. After 21 years with the company, Vogels reflected on arriving as an academic and being stunned by Amazon’s technical scale—an energy that still drives him today. He released his annual predictions ahead of re:Invent, with this year’s five themes focused heavily on AI and broader societal impacts.</p><p>Vogels highlights technology’s growing role in addressing loneliness, noting how devices like Alexa can offer comfort to those who feel isolated. He foresees a “Renaissance developer,” where engineers must pair deep expertise with broad business and creative awareness. He warns quantum-safe encryption is becoming urgent as data harvested today may be decrypted within five years. Military innovations, he notes, continue to influence civilian tech, for better and worse. Finally, he argues personalized learning can preserve children’s curiosity and better support teachers, which he views as essential for future education.</p><p>Learn more from The New Stack about the evolving role of technology systems, from past to future: </p><p><a href="https://thenewstack.io/werner-vogels-6-lessons-for-keeping-systems-simple/">Werner Vogels’ 6 Lessons for Keeping Systems Simple</a></p><p><a href="https://thenewstack.io/50-years-later-remembering-how-the-future-looked-in-1974/">50 Years Later: Remembering How the Future Looked in 1974</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p><p> </p>
]]></content:encoded>
      <enclosure length="52541692" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/b7c684e6-a642-42cc-a5fa-f51c669ae9a4/audio/e3071384-984c-487c-8a82-2fe3e92cb4b2/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Amazon CTO Werner Vogels&apos; Predictions for 2026</itunes:title>
      <itunes:author>Werner Vogels, The New Stack, Amazon Web Services, Frederic Lardinois</itunes:author>
      <itunes:duration>00:54:43</itunes:duration>
      <itunes:summary>AWS re:Invent has long featured CTO Werner Vogels’ closing keynote, but this year he signaled it may be his last, emphasizing it’s time for “younger voices” at Amazon. After 21 years with the company, Vogels reflected on arriving as an academic and being stunned by Amazon’s technical scale—an energy that still drives him today. He released his annual predictions ahead of re:Invent, with this year’s five themes focused heavily on AI and broader societal impacts.</itunes:summary>
      <itunes:subtitle>AWS re:Invent has long featured CTO Werner Vogels’ closing keynote, but this year he signaled it may be his last, emphasizing it’s time for “younger voices” at Amazon. After 21 years with the company, Vogels reflected on arriving as an academic and being stunned by Amazon’s technical scale—an energy that still drives him today. He released his annual predictions ahead of re:Invent, with this year’s five themes focused heavily on AI and broader societal impacts.</itunes:subtitle>
      <itunes:keywords>2026 predictions, software developer, tech podcast, werner vogels, the new stack, ai developer, amazon web services, system design thinking, tech, the new stack makers, software engineer, aws reinvent, ai podcast, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1565</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5e1eab70-d317-4913-ad3e-c24ffd7881e8</guid>
      <title>How Can We Solve Observability&apos;s Data Capture and Spending Problem?</title>
      <description><![CDATA[<p>DevOps practitioners — whether developers, operators, SREs or business stakeholders — increasingly rely on telemetry to guide decisions, yet face growing complexity, siloed teams and rising observability costs. In a conversation at KubeCon + CloudNativeCon North America, IBM’s Jacob Yackenovich emphasized the importance of collecting high-granularity, full-capture data to avoid missing critical performance signals across hybrid application stacks that blend legacy and cloud-native components. He argued that observability must evolve to serve both technical and nontechnical users, enabling teams to focus on issues based on real business impact rather than subjective judgment.</p><p>AI’s rapid integration into applications introduces new observability challenges. Yackenovich described two patterns: add-on AI services, such as chatbots, whose failures don’t disrupt core workflows, and blocking-style AI components embedded in essential processes like fraud detection, where errors directly affect application function.</p><p>Rising cloud and ingestion costs further complicate telemetry strategies. Yackenovich cautioned against limiting visibility for budget reasons, advocating instead for predictable, fixed-price observability models that let organizations innovate without financial uncertainty.</p><p>Learn more from The New Stack about the latest in observability: </p><p><a href="https://thenewstack.io/introduction-to-observability/">Introduction to Observability</a></p><p><a href="https://thenewstack.io/observability-2-0-or-just-logs-all-over-again/">Observability 2.0? Or Just Logs All Over Again?</a></p><p><a href="https://thenewstack.io/why-a-culture-of-observability-is-key-to-technology-success/">Building an Observability Culture: Getting Everyone Onboard</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 20 Nov 2025 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (IBM, Jacob Yackenovich, The New Stack, Bruce Gain)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-can-we-solve-observabilitys-data-capture-and-spending-problem-IZylE5qi</link>
      <content:encoded><![CDATA[<p>DevOps practitioners — whether developers, operators, SREs or business stakeholders — increasingly rely on telemetry to guide decisions, yet face growing complexity, siloed teams and rising observability costs. In a conversation at KubeCon + CloudNativeCon North America, IBM’s Jacob Yackenovich emphasized the importance of collecting high-granularity, full-capture data to avoid missing critical performance signals across hybrid application stacks that blend legacy and cloud-native components. He argued that observability must evolve to serve both technical and nontechnical users, enabling teams to focus on issues based on real business impact rather than subjective judgment.</p><p>AI’s rapid integration into applications introduces new observability challenges. Yackenovich described two patterns: add-on AI services, such as chatbots, whose failures don’t disrupt core workflows, and blocking-style AI components embedded in essential processes like fraud detection, where errors directly affect application function.</p><p>Rising cloud and ingestion costs further complicate telemetry strategies. Yackenovich cautioned against limiting visibility for budget reasons, advocating instead for predictable, fixed-price observability models that let organizations innovate without financial uncertainty.</p><p>Learn more from The New Stack about the latest in observability: </p><p><a href="https://thenewstack.io/introduction-to-observability/">Introduction to Observability</a></p><p><a href="https://thenewstack.io/observability-2-0-or-just-logs-all-over-again/">Observability 2.0? Or Just Logs All Over Again?</a></p><p><a href="https://thenewstack.io/why-a-culture-of-observability-is-key-to-technology-success/">Building an Observability Culture: Getting Everyone Onboard</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="21471442" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/f9ac4c48-d8f2-4f9a-970b-03fb772db456/audio/ad0d5a4e-5eac-488b-af41-121a714c5556/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Can We Solve Observability&apos;s Data Capture and Spending Problem?</itunes:title>
      <itunes:author>IBM, Jacob Yackenovich, The New Stack, Bruce Gain</itunes:author>
      <itunes:duration>00:22:21</itunes:duration>
      <itunes:summary>DevOps practitioners — whether developers, operators, SREs or business stakeholders — increasingly rely on telemetry to guide decisions, yet face growing complexity, siloed teams and rising observability costs. In a conversation at KubeCon + CloudNativeCon North America, IBM’s Jacob Yackenovich emphasized the importance of collecting high-granularity, full-capture data to avoid missing critical performance signals across hybrid application stacks that blend legacy and cloud-native components. He argued that observability must evolve to serve both technical and nontechnical users, enabling teams to focus on issues based on real business impact rather than subjective judgment.</itunes:summary>
      <itunes:subtitle>DevOps practitioners — whether developers, operators, SREs or business stakeholders — increasingly rely on telemetry to guide decisions, yet face growing complexity, siloed teams and rising observability costs. In a conversation at KubeCon + CloudNativeCon North America, IBM’s Jacob Yackenovich emphasized the importance of collecting high-granularity, full-capture data to avoid missing critical performance signals across hybrid application stacks that blend legacy and cloud-native components. He argued that observability must evolve to serve both technical and nontechnical users, enabling teams to focus on issues based on real business impact rather than subjective judgment.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, ai developer, devops, bruce gain, tech, jacob yackenovich, devops practitioners, ibm, software engineer, observability, kubecon atlanta</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1564</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">4a5d1ff6-dab9-49c6-8d09-8fe015c79035</guid>
      <title>How Kubernetes Became the New Linux</title>
      <description><![CDATA[<p>Major banks once built their own Linux kernels because no distributions existed, but today commercial distros — and Kubernetes — are universal. At KubeCon + CloudNativeCon North America, AWS’s Jesse Butler noted that Kubernetes has reached the same maturity Linux once did: organizations no longer build bespoke control planes but rely on shared standards. That shift influences how AWS contributes to open source, emphasizing community-wide solutions rather than AWS-specific products.</p><p>Butler highlighted two AWS EKS projects donated to Kubernetes SIGs: KRO and Karpenter. KRO addresses the proliferation of custom controllers that emerged once CRDs made everything representable as Kubernetes resources. By generating CRDs and microcontrollers from simple YAML schemas, KRO transforms “glue code” into an automated service within Kubernetes itself. Karpenter tackles the limits of traditional autoscaling by delivering just-in-time, cost-optimized node provisioning with a flexible, intuitive API. Both projects embody AWS’s evolving philosophy: building features that serve the entire Kubernetes ecosystem as it matures into a true enterprise standard.</p><p>Learn more from The New Stack about the latest in Kube Resource Orchestrator and Karpenter: </p><p><a href="https://thenewstack.io/migrating-from-cluster-autoscaler-to-karpenter-v0-32/">Migrating From Cluster Autoscaler to Karpenter v0.32</a></p><p><a href="https://thenewstack.io/how-amazon-eks-auto-mode-simplifies-kubernetes-cluster-management-part-1/">How Amazon EKS Auto Mode Simplifies Kubernetes Cluster Management (Part 1)</a></p><p><a href="https://thenewstack.io/kubernetes-gets-a-new-resource-orchestrator-in-the-form-of-kro/">Kubernetes Gets a New Resource Orchestrator in the Form of Kro</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Tue, 18 Nov 2025 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Amazon Web Services, Alex Williams, Jesse Butler, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-kubernetes-became-the-new-linux-jwRLgPWr</link>
      <content:encoded><![CDATA[<p>Major banks once built their own Linux kernels because no distributions existed, but today commercial distros — and Kubernetes — are universal. At KubeCon + CloudNativeCon North America, AWS’s Jesse Butler noted that Kubernetes has reached the same maturity Linux once did: organizations no longer build bespoke control planes but rely on shared standards. That shift influences how AWS contributes to open source, emphasizing community-wide solutions rather than AWS-specific products.</p><p>Butler highlighted two AWS EKS projects donated to Kubernetes SIGs: KRO and Karpenter. KRO addresses the proliferation of custom controllers that emerged once CRDs made everything representable as Kubernetes resources. By generating CRDs and microcontrollers from simple YAML schemas, KRO transforms “glue code” into an automated service within Kubernetes itself. Karpenter tackles the limits of traditional autoscaling by delivering just-in-time, cost-optimized node provisioning with a flexible, intuitive API. Both projects embody AWS’s evolving philosophy: building features that serve the entire Kubernetes ecosystem as it matures into a true enterprise standard.</p><p>Learn more from The New Stack about the latest in Kube Resource Orchestrator and Karpenter: </p><p><a href="https://thenewstack.io/migrating-from-cluster-autoscaler-to-karpenter-v0-32/">Migrating From Cluster Autoscaler to Karpenter v0.32</a></p><p><a href="https://thenewstack.io/how-amazon-eks-auto-mode-simplifies-kubernetes-cluster-management-part-1/">How Amazon EKS Auto Mode Simplifies Kubernetes Cluster Management (Part 1)</a></p><p><a href="https://thenewstack.io/kubernetes-gets-a-new-resource-orchestrator-in-the-form-of-kro/">Kubernetes Gets a New Resource Orchestrator in the Form of Kro</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="19653737" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/ec7343cb-589a-45e2-8345-e8ee0f8c027a/audio/ae6adbb1-f494-41f0-a096-ab8de36541ce/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Kubernetes Became the New Linux</itunes:title>
      <itunes:author>Amazon Web Services, Alex Williams, Jesse Butler, The New Stack</itunes:author>
      <itunes:duration>00:20:28</itunes:duration>
      <itunes:summary>Major banks once built their own Linux kernels because no distributions existed, but today commercial distros — and Kubernetes — are universal. At KubeCon + CloudNativeCon North America, AWS’s Jesse Butler noted that Kubernetes has reached the same maturity Linux once did: organizations no longer build bespoke control planes but rely on shared standards. That shift influences how AWS contributes to open source, emphasizing community-wide solutions rather than AWS-specific products.</itunes:summary>
      <itunes:subtitle>Major banks once built their own Linux kernels because no distributions existed, but today commercial distros — and Kubernetes — are universal. At KubeCon + CloudNativeCon North America, AWS’s Jesse Butler noted that Kubernetes has reached the same maturity Linux once did: organizations no longer build bespoke control planes but rely on shared standards. That shift influences how AWS contributes to open source, emphasizing community-wide solutions rather than AWS-specific products.</itunes:subtitle>
      <itunes:keywords>software developer, the new stack, ai developer, jesse butler, cloud native, karpenter, tech, ai development, kubecon atlanta 2025, kubernetes, the new stack makers, kube resource orchestrator, software engineer, kro, open source</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1563</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">30000683-b67b-4d6f-8d06-4ea7c0589d6d</guid>
      <title>Keeping GPUs Ticking Like Clockwork</title>
      <description><![CDATA[<p>Clockwork began with a narrow goal—keeping clocks synchronized across servers—but soon realized that its precise latency measurements could reveal deeper data center networking issues. This insight led the company to build a hardware-agnostic monitoring and remediation platform capable of automatically routing around faults. Today, Clockwork’s technology is especially valuable for large GPU clusters used in training LLMs, where communication efficiency and reliability are critical. CEO Suresh Vasudevan explains that AI workloads are among the most demanding distributed applications ever, and Clockwork provides building blocks that improve visibility, performance and fault tolerance. Its flagship feature, FleetIQ, can reroute traffic around failing switches, preventing costly interruptions that might otherwise force teams to restart training from hours-old checkpoints. Although the company originated from Stanford research focused on clock synchronization for financial institutions, the team eventually recognized that packet-timing data could underpin powerful network telemetry and dynamic traffic control. By integrating with NVIDIA NCCL, TCP and RDMA libraries, Clockwork can not only measure congestion but also actively manage GPU communication to enhance both uptime and training efficiency. </p><p>Learn more from The New Stack about the latest in Clockwork: </p><p><a href="https://thenewstack.io/clockworks-fleetiq-aims-to-fix-ais-costly-network-bottleneck/">Clockwork’s FleetIQ Aims To Fix AI’s Costly Network Bottleneck </a></p><p><a href="https://thenewstack.io/what-happens-when-116-makers-reimagine-the-clock/">What Happens When 116 Makers Reimagine the Clock? </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p><p> </p>
]]></description>
      <pubDate>Mon, 17 Nov 2025 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Clockwork, Suresh Vasudevan, Frederic Lardinois, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/keeping-gpus-ticking-like-clockwork-ojA_aYL1</link>
      <content:encoded><![CDATA[<p>Clockwork began with a narrow goal—keeping clocks synchronized across servers—but soon realized that its precise latency measurements could reveal deeper data center networking issues. This insight led the company to build a hardware-agnostic monitoring and remediation platform capable of automatically routing around faults. Today, Clockwork’s technology is especially valuable for large GPU clusters used in training LLMs, where communication efficiency and reliability are critical. CEO Suresh Vasudevan explains that AI workloads are among the most demanding distributed applications ever, and Clockwork provides building blocks that improve visibility, performance and fault tolerance. Its flagship feature, FleetIQ, can reroute traffic around failing switches, preventing costly interruptions that might otherwise force teams to restart training from hours-old checkpoints. Although the company originated from Stanford research focused on clock synchronization for financial institutions, the team eventually recognized that packet-timing data could underpin powerful network telemetry and dynamic traffic control. By integrating with NVIDIA NCCL, TCP and RDMA libraries, Clockwork can not only measure congestion but also actively manage GPU communication to enhance both uptime and training efficiency. </p><p>Learn more from The New Stack about the latest in Clockwork: </p><p><a href="https://thenewstack.io/clockworks-fleetiq-aims-to-fix-ais-costly-network-bottleneck/">Clockwork’s FleetIQ Aims To Fix AI’s Costly Network Bottleneck </a></p><p><a href="https://thenewstack.io/what-happens-when-116-makers-reimagine-the-clock/">What Happens When 116 Makers Reimagine the Clock? </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p><p> </p>
]]></content:encoded>
      <enclosure length="26060634" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/a7fb7bd0-30ea-4da7-b621-3777e935349e/audio/f9f22e1e-bfe2-45a2-a11f-b644486db97d/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Keeping GPUs Ticking Like Clockwork</itunes:title>
      <itunes:author>Clockwork, Suresh Vasudevan, Frederic Lardinois, The New Stack</itunes:author>
      <itunes:duration>00:27:08</itunes:duration>
      <itunes:summary>Clockwork began with a narrow goal—keeping clocks synchronized across servers—but soon realized that its precise latency measurements could reveal deeper data center networking issues. This insight led the company to build a hardware-agnostic monitoring and remediation platform capable of automatically routing around faults. Today, Clockwork’s technology is especially valuable for large GPU clusters used in training LLMs, where communication efficiency and reliability are critical. CEO Suresh Vasudevan explains that AI workloads are among the most demanding distributed applications ever, and Clockwork provides building blocks that improve visibility, performance and fault tolerance. </itunes:summary>
      <itunes:subtitle>Clockwork began with a narrow goal—keeping clocks synchronized across servers—but soon realized that its precise latency measurements could reveal deeper data center networking issues. This insight led the company to build a hardware-agnostic monitoring and remediation platform capable of automatically routing around faults. Today, Clockwork’s technology is especially valuable for large GPU clusters used in training LLMs, where communication efficiency and reliability are critical. CEO Suresh Vasudevan explains that AI workloads are among the most demanding distributed applications ever, and Clockwork provides building blocks that improve visibility, performance and fault tolerance. </itunes:subtitle>
      <itunes:keywords>ai network, frederic lardinois, software developer, clockwork, tech podcast, the new stack, ai developer, hardware agnostic monitoring, tech, ai development, network, software engineer, the new stack agents, suresh vasudevan, gpu clusters</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1559</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">87bc67c0-9fa0-4559-9297-7ecb7f8c9e71</guid>
      <title>Jupyter Deploy: The New Middle Ground Between Laptops and Enterprise</title>
      <description><![CDATA[<p>At JupyterCon 2025, Jupyter Deploy was introduced as an open source command-line tool designed to make cloud-based Jupyter deployments quick and accessible for small teams, educators, and researchers who lack cloud engineering expertise. As described by AWS engineer Jonathan Guinegagne, these users often struggle in an “in-between” space—needing more computing power and collaboration features than a laptop offers, but without the resources for complex cloud setups. </p><p>Jupyter Deploy simplifies this by orchestrating an entire encrypted stack—using Docker, Terraform, OAuth2, and Let’s Encrypt—with minimal setup, removing the need to manually manage 15–20 cloud components. While it offers an easy on-ramp, Guinegagne notes that long-term use still requires some cloud understanding. Built by AWS’s AI Open Source team but deliberately vendor-neutral, it uses a template-based approach, enabling community-contributed deployment recipes for any cloud. Led by Brian Granger, the project aims to join the official Jupyter ecosystem, with future plans including Kubernetes integration for enterprise scalability. </p><p>Learn more from The New Stack about the latest in Jupyter AI development: </p><p><a href="https://thenewstack.io/introduction-to-jupyter-notebooks-for-developers/">Introduction to Jupyter Notebooks for Developers</a></p><p><a href="https://thenewstack.io/display-ai-generated-images-in-a-jupyter-notebook/" target="_blank">Display AI-Generated Images in a Jupyter Notebook</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Fri, 14 Nov 2025 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Jonathan Guinegagne, Heather Joslyn, Amazon Web Services, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/jupyter-deploy-the-new-middle-ground-between-laptops-and-enterprise-SX7JWUcb</link>
      <content:encoded><![CDATA[<p>At JupyterCon 2025, Jupyter Deploy was introduced as an open source command-line tool designed to make cloud-based Jupyter deployments quick and accessible for small teams, educators, and researchers who lack cloud engineering expertise. As described by AWS engineer Jonathan Guinegagne, these users often struggle in an “in-between” space—needing more computing power and collaboration features than a laptop offers, but without the resources for complex cloud setups. </p><p>Jupyter Deploy simplifies this by orchestrating an entire encrypted stack—using Docker, Terraform, OAuth2, and Let’s Encrypt—with minimal setup, removing the need to manually manage 15–20 cloud components. While it offers an easy on-ramp, Guinegagne notes that long-term use still requires some cloud understanding. Built by AWS’s AI Open Source team but deliberately vendor-neutral, it uses a template-based approach, enabling community-contributed deployment recipes for any cloud. Led by Brian Granger, the project aims to join the official Jupyter ecosystem, with future plans including Kubernetes integration for enterprise scalability. </p><p>Learn more from The New Stack about the latest in Jupyter AI development: </p><p><a href="https://thenewstack.io/introduction-to-jupyter-notebooks-for-developers/">Introduction to Jupyter Notebooks for Developers</a></p><p><a href="https://thenewstack.io/display-ai-generated-images-in-a-jupyter-notebook/" target="_blank">Display AI-Generated Images in a Jupyter Notebook</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="21292555" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/3b1424a4-690c-4316-be6a-d5ff0e4f815c/audio/7c9b08d1-e727-450c-9222-b229d013b0c7/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Jupyter Deploy: The New Middle Ground Between Laptops and Enterprise</itunes:title>
      <itunes:author>Jonathan Guinegagne, Heather Joslyn, Amazon Web Services, The New Stack</itunes:author>
      <itunes:duration>00:22:10</itunes:duration>
      <itunes:summary>At JupyterCon 2025, Jupyter Deploy was introduced as an open source command-line tool designed to make cloud-based Jupyter deployments quick and accessible for small teams, educators, and researchers who lack cloud engineering expertise. As described by AWS engineer Jonathan Guinegagne, these users often struggle in an “in-between” space—needing more computing power and collaboration features than a laptop offers, but without the resources for complex cloud setups. </itunes:summary>
      <itunes:subtitle>At JupyterCon 2025, Jupyter Deploy was introduced as an open source command-line tool designed to make cloud-based Jupyter deployments quick and accessible for small teams, educators, and researchers who lack cloud engineering expertise. As described by AWS engineer Jonathan Guinegagne, these users often struggle in an “in-between” space—needing more computing power and collaboration features than a laptop offers, but without the resources for complex cloud setups. </itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, ai developer, amazon web services, tech, jupyter deploy, the new stack makers, software engineer, open source, jupytercon, ai podcast, jonathan guinegagne</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1562</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9efae309-721a-4ebc-a580-7c26f6eadab6</guid>
      <title>From Physics to the Future: Brian Granger on Project Jupyter in the Age of AI</title>
      <description><![CDATA[<p>In an interview at JupyterCon, Brian Granger — co-creator of Project Jupyter and senior principal technologist at AWS — reflected on Jupyter’s evolution and how AI is redefining open source sustainability. Originally inspired by physics’ modular principles, Granger and co-founder Fernando Pérez designed Jupyter with flexible, extensible components like the notebook format and kernel message protocol. This architecture has endured as the ecosystem expanded from data science into AI and machine learning. </p><p>Now, AI is accelerating development itself: Granger described rewriting Jupyter Server in Go, complete with tests, in just 30 minutes using an AI coding agent — a task once considered impossible. This shift challenges traditional notions of technical debt and could reshape how large open source projects evolve. Jupyter’s 2017 ACM Software System Award placed it among computing’s greats, but also underscored its global responsibility. Granger emphasized that sustaining Jupyter’s mission — empowering human reasoning, collaboration, and innovation — remains the team’s top priority in the AI era.</p><p> </p><p>Learn more from The New Stack about the latest in Jupyter AI development: </p><p><a href="https://thenewstack.io/introduction-to-jupyter-notebooks-for-developers/">Introduction to Jupyter Notebooks for Developers </a></p><p><a href="https://thenewstack.io/display-ai-generated-images-in-a-jupyter-notebook/">Display AI-Generated Images in a Jupyter Notebook </a></p><p> </p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p><p> </p><p> </p>
]]></description>
      <pubDate>Thu, 13 Nov 2025 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Brian Granger, Project Jupyter, The New Stack, AWS, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/from-physics-to-the-future-brian-granger-on-project-jupyter-in-the-age-of-ai-CU6HbmKB</link>
      <content:encoded><![CDATA[<p>In an interview at JupyterCon, Brian Granger — co-creator of Project Jupyter and senior principal technologist at AWS — reflected on Jupyter’s evolution and how AI is redefining open source sustainability. Originally inspired by physics’ modular principles, Granger and co-founder Fernando Pérez designed Jupyter with flexible, extensible components like the notebook format and kernel message protocol. This architecture has endured as the ecosystem expanded from data science into AI and machine learning. </p><p>Now, AI is accelerating development itself: Granger described rewriting Jupyter Server in Go, complete with tests, in just 30 minutes using an AI coding agent — a task once considered impossible. This shift challenges traditional notions of technical debt and could reshape how large open source projects evolve. Jupyter’s 2017 ACM Software System Award placed it among computing’s greats, but also underscored its global responsibility. Granger emphasized that sustaining Jupyter’s mission — empowering human reasoning, collaboration, and innovation — remains the team’s top priority in the AI era.</p><p> </p><p>Learn more from The New Stack about the latest in Jupyter AI development: </p><p><a href="https://thenewstack.io/introduction-to-jupyter-notebooks-for-developers/">Introduction to Jupyter Notebooks for Developers </a></p><p><a href="https://thenewstack.io/display-ai-generated-images-in-a-jupyter-notebook/">Display AI-Generated Images in a Jupyter Notebook </a></p><p> </p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p><p> </p><p> </p>
]]></content:encoded>
      <enclosure length="22511742" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/57f6775f-1dc2-4e69-8a45-e977b0f72b3c/audio/711cf99f-880c-4e28-a139-460e79fcc751/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>From Physics to the Future: Brian Granger on Project Jupyter in the Age of AI</itunes:title>
      <itunes:author>Brian Granger, Project Jupyter, The New Stack, AWS, Heather Joslyn</itunes:author>
      <itunes:duration>00:23:26</itunes:duration>
      <itunes:summary>In an interview at JupyterCon, Brian Granger — co-creator of Project Jupyter and senior principal technologist at AWS — reflected on Jupyter’s evolution and how AI is redefining open source sustainability. Originally inspired by physics’ modular principles, Granger and co-founder Fernando Pérez designed Jupyter with flexible, extensible components like the notebook format and kernel message protocol. This architecture has endured as the ecosystem expanded from data science into AI and machine learning. </itunes:summary>
      <itunes:subtitle>In an interview at JupyterCon, Brian Granger — co-creator of Project Jupyter and senior principal technologist at AWS — reflected on Jupyter’s evolution and how AI is redefining open source sustainability. Originally inspired by physics’ modular principles, Granger and co-founder Fernando Pérez designed Jupyter with flexible, extensible components like the notebook format and kernel message protocol. This architecture has endured as the ecosystem expanded from data science into AI and machine learning. </itunes:subtitle>
      <itunes:keywords>software developer, project jupyter, tech podcast, the new stack, ai developer, heather joslyn, tech, developer podcast, jupyter notebooks, the new stack makers, software engineer, open source, aws, brian granger</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1561</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">167cce17-1e33-4c30-9220-3725ccbbb3c2</guid>
      <title>Jupyter AI v3: Could It Generate an ‘Ecosystem of AI Personas’?</title>
      <description><![CDATA[<p>Jupyter AI v3 marks a major step forward in integrating intelligent coding assistance directly into JupyterLab. Discussed by AWS engineers David Qiu and Piyush Jain at JupyterCon, the new release introduces AI personas — customizable, specialized assistants that users can configure to perform tasks such as coding help, debugging, or analysis. Unlike other AI tools, Jupyter AI allows multiple named agents, such as “Claude Code” or “OpenAI Codex,” to coexist in one chat. </p><p>Developers can even build and share their own personas as local or pip-installable packages. This flexibility was enabled by splitting Jupyter AI’s previously large, complex codebase into smaller, modular packages, allowing users to install or replace components as needed. Looking ahead, Qiu envisions Jupyter AI as an “ecosystem of AI personas,” enabling multi-agent collaboration where different personas handle roles like data science, engineering, and testing. With contributors from AWS, Apple, Quansight, and others, the project is poised to expand into a diverse, community-driven AI ecosystem.</p><p>Learn more from The New Stack about the latest in Jupyter AI development: </p><p><a href="https://thenewstack.io/introduction-to-jupyter-notebooks-for-developers/">Introduction to Jupyter Notebooks for Developers</a></p><p><a href="https://thenewstack.io/display-ai-generated-images-in-a-jupyter-notebook/">Display AI-Generated Images in a Jupyter Notebook</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Wed, 12 Nov 2025 18:05:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (David Qiu, Piyush Jain, The New Stack, AWS, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/jupyter-ai-v3-could-it-generate-an-ecosystem-of-ai-personas-oFFhZUSL</link>
      <content:encoded><![CDATA[<p>Jupyter AI v3 marks a major step forward in integrating intelligent coding assistance directly into JupyterLab. Discussed by AWS engineers David Qiu and Piyush Jain at JupyterCon, the new release introduces AI personas — customizable, specialized assistants that users can configure to perform tasks such as coding help, debugging, or analysis. Unlike other AI tools, Jupyter AI allows multiple named agents, such as “Claude Code” or “OpenAI Codex,” to coexist in one chat. </p><p>Developers can even build and share their own personas as local or pip-installable packages. This flexibility was enabled by splitting Jupyter AI’s previously large, complex codebase into smaller, modular packages, allowing users to install or replace components as needed. Looking ahead, Qiu envisions Jupyter AI as an “ecosystem of AI personas,” enabling multi-agent collaboration where different personas handle roles like data science, engineering, and testing. With contributors from AWS, Apple, Quansight, and others, the project is poised to expand into a diverse, community-driven AI ecosystem.</p><p>Learn more from The New Stack about the latest in Jupyter AI development: </p><p><a href="https://thenewstack.io/introduction-to-jupyter-notebooks-for-developers/">Introduction to Jupyter Notebooks for Developers</a></p><p><a href="https://thenewstack.io/display-ai-generated-images-in-a-jupyter-notebook/">Display AI-Generated Images in a Jupyter Notebook</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="22318227" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/cd534e9f-06a8-41c7-be19-65be59a69db7/audio/681564f2-df55-4487-9f92-9b7f5b70e1cc/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Jupyter AI v3: Could It Generate an ‘Ecosystem of AI Personas’?</itunes:title>
      <itunes:author>David Qiu, Piyush Jain, The New Stack, AWS, Heather Joslyn</itunes:author>
      <itunes:duration>00:23:14</itunes:duration>
      <itunes:summary>Jupyter AI v3 marks a major step forward in integrating intelligent coding assistance directly into JupyterLab. Discussed by AWS engineers David Qiu and Piyush Jain at JupyterCon, the new release introduces AI personas — customizable, specialized assistants that users can configure to perform tasks such as coding help, debugging, or analysis. Unlike other AI tools, Jupyter AI allows multiple named agents, such as “Claude Code” or “OpenAI Codex,” to coexist in one chat. </itunes:summary>
      <itunes:subtitle>Jupyter AI v3 marks a major step forward in integrating intelligent coding assistance directly into JupyterLab. Discussed by AWS engineers David Qiu and Piyush Jain at JupyterCon, the new release introduces AI personas — customizable, specialized assistants that users can configure to perform tasks such as coding help, debugging, or analysis. Unlike other AI tools, Jupyter AI allows multiple named agents, such as “Claude Code” or “OpenAI Codex,” to coexist in one chat. </itunes:subtitle>
      <itunes:keywords>jupyter ai v3, multi-agents, tech podcast, ai developer, tech, ai development, developer podcast, david qiu, jupyterlab, software engineer, open source, aws, jupytercon, piyush jain</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1560</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ed4e2d53-b6e4-4f69-a611-24c940aec689</guid>
      <title>Stop Writing Code, Start Writing Docs</title>
      <description><![CDATA[<p>In this episode of The New Stack Podcast, hosts Alex Williams and Frederic Lardinois spoke with Keith Ballinger, Vice President and General Manager of Google Cloud Platform (GCP) Developer Experience, about the evolution of agentic coding tools and the future of programming. Ballinger, a hands-on executive who still codes, discussed Gemini CLI, Google’s response to tools like Claude Code, and his broader philosophy on how developers should work with AI. He emphasized that these tools are in their “first inning” and that developers must “slow down to speed up” by writing clear guides, focusing on architecture, and documenting intent—treating AI as a collaborative coworker rather than a one-shot solution. </p><p>Ballinger reflected on his early AI experiences, from Copilot at GitHub to modern agentic systems that automate tool use. He also explored the resurgence of the command line as an AI interface and predicted that programming will increasingly shift from writing code to expressing intent. Ultimately, he envisions a future where great programmers are great writers, focusing on clarity, problem decomposition, and design rather than syntax. </p><p>Learn more from The New Stack about the latest in Google AI development: </p><p><a href="https://thenewstack.io/why-pytorch-gets-all-the-love/">Why PyTorch Gets All the Love</a></p><p><a href="https://thenewstack.io/lightning-ai-brings-a-pytorch-copilot-to-its-development-environment/">Lightning AI Brings a PyTorch Copilot to Its Development Environment</a></p><p><a href="https://thenewstack.io/ray-comes-to-the-pytorch-foundation/">Ray Comes to the PyTorch Foundation</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Fri, 31 Oct 2025 17:25:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Keith Ballinger, The New Stack, Frederic Lardinois, Alex Williams, Google)</author>
      <link>https://thenewstack.simplecast.com/episodes/stop-writing-code-start-writing-docs-NurIHTIg</link>
      <content:encoded><![CDATA[<p>In this episode of The New Stack Podcast, hosts Alex Williams and Frederic Lardinois spoke with Keith Ballinger, Vice President and General Manager of Google Cloud Platform (GCP) Developer Experience, about the evolution of agentic coding tools and the future of programming. Ballinger, a hands-on executive who still codes, discussed Gemini CLI, Google’s response to tools like Claude Code, and his broader philosophy on how developers should work with AI. He emphasized that these tools are in their “first inning” and that developers must “slow down to speed up” by writing clear guides, focusing on architecture, and documenting intent—treating AI as a collaborative coworker rather than a one-shot solution.</p><p>Ballinger reflected on his early AI experiences, from Copilot at GitHub to modern agentic systems that automate tool use. He also explored the resurgence of the command line as an AI interface and predicted that programming will increasingly shift from writing code to expressing intent. Ultimately, he envisions a future where great programmers are great writers, focusing on clarity, problem decomposition, and design rather than syntax.</p><p>Learn more from The New Stack about the latest in AI development:</p><p><a href="https://thenewstack.io/why-pytorch-gets-all-the-love/">Why PyTorch Gets All the Love</a></p><p><a href="https://thenewstack.io/lightning-ai-brings-a-pytorch-copilot-to-its-development-environment/">Lightning AI Brings a PyTorch Copilot to Its Development Environment</a></p><p><a href="https://thenewstack.io/ray-comes-to-the-pytorch-foundation/">Ray Comes to the PyTorch Foundation</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="60892934" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/92198c02-7491-4033-a276-8fe12f3369e5/audio/8b0a5d67-2b64-4226-9dc6-3930325c11fb/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Stop Writing Code, Start Writing Docs</itunes:title>
      <itunes:author>Keith Ballinger, The New Stack, Frederic Lardinois, Alex Williams, Google</itunes:author>
      <itunes:duration>01:03:25</itunes:duration>
      <itunes:summary>In this episode of The New Stack Podcast, hosts Alex Williams and Frederic Lardinois spoke with Keith Ballinger, Vice President and General Manager of Google Cloud Platform (GCP) Developer Experience, about the evolution of agentic coding tools and the future of programming. Ballinger, a hands-on executive who still codes, discussed Gemini CLI, Google’s response to tools like Claude Code, and his broader philosophy on how developers should work with AI. He emphasized that these tools are in their “first inning” and that developers must “slow down to speed up” by writing clear guides, focusing on architecture, and documenting intent—treating AI as a collaborative coworker rather than a one-shot solution.
</itunes:summary>
      <itunes:subtitle>In this episode of The New Stack Podcast, hosts Alex Williams and Frederic Lardinois spoke with Keith Ballinger, Vice President and General Manager of Google Cloud Platform (GCP) Developer Experience, about the evolution of agentic coding tools and the future of programming. Ballinger, a hands-on executive who still codes, discussed Gemini CLI, Google’s response to tools like Claude Code, and his broader philosophy on how developers should work with AI. He emphasized that these tools are in their “first inning” and that developers must “slow down to speed up” by writing clear guides, focusing on architecture, and documenting intent—treating AI as a collaborative coworker rather than a one-shot solution.
</itunes:subtitle>
      <itunes:keywords>frederic lardinois, software developer, google, tech podcast, the new stack, ai developer, tech, ai development, developer podcast, keith ballinger, the new stack agents</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1558</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d823438c-9c14-408c-8a07-305a1d095434</guid>
      <title>Why PyTorch Won</title>
      <description><![CDATA[<p>At the PyTorch Conference 2025 in San Francisco, Luca Antiga — CTO of Lightning AI and head of the PyTorch Foundation’s Technical Advisory Council — discussed the evolution and influence of PyTorch, a framework originally designed to be “Pythonic” and researcher-friendly.</p><p>Antiga emphasized that PyTorch has remained central across major AI shifts — from early neural networks to today’s generative AI boom — powering not just model training but also inference systems such as vLLM and SGLang used in production chatbots. Its flexibility also makes it ideal for reinforcement learning, now commonly used to fine-tune large language models (LLMs).</p><p>On the PyTorch Foundation, Antiga noted that while it recently expanded to include projects like vLLM, DeepSpeed, and Ray, the goal isn’t to become a vast umbrella organization. Instead, the focus is on user experience and success within the PyTorch ecosystem.</p><p>Learn more from The New Stack about the latest in PyTorch:</p><p><a href="https://thenewstack.io/why-pytorch-gets-all-the-love/">Why PyTorch Gets All the Love</a></p><p><a href="https://thenewstack.io/lightning-ai-brings-a-pytorch-copilot-to-its-development-environment/">Lightning AI Brings a PyTorch Copilot to Its Development Environment</a></p><p><a href="https://thenewstack.io/ray-comes-to-the-pytorch-foundation/">Ray Comes to the PyTorch Foundation</a></p><p><br /><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Fri, 24 Oct 2025 19:30:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (PyTorch, Luca Antiga, Frederic Lardinois, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/why-pytorch-won-zvpTKztt</link>
      <content:encoded><![CDATA[<p>At the PyTorch Conference 2025 in San Francisco, Luca Antiga — CTO of Lightning AI and head of the PyTorch Foundation’s Technical Advisory Council — discussed the evolution and influence of PyTorch, a framework originally designed to be “Pythonic” and researcher-friendly.</p><p>Antiga emphasized that PyTorch has remained central across major AI shifts — from early neural networks to today’s generative AI boom — powering not just model training but also inference systems such as vLLM and SGLang used in production chatbots. Its flexibility also makes it ideal for reinforcement learning, now commonly used to fine-tune large language models (LLMs).</p><p>On the PyTorch Foundation, Antiga noted that while it recently expanded to include projects like vLLM, DeepSpeed, and Ray, the goal isn’t to become a vast umbrella organization. Instead, the focus is on user experience and success within the PyTorch ecosystem.</p><p>Learn more from The New Stack about the latest in PyTorch:</p><p><a href="https://thenewstack.io/why-pytorch-gets-all-the-love/">Why PyTorch Gets All the Love</a></p><p><a href="https://thenewstack.io/lightning-ai-brings-a-pytorch-copilot-to-its-development-environment/">Lightning AI Brings a PyTorch Copilot to Its Development Environment</a></p><p><a href="https://thenewstack.io/ray-comes-to-the-pytorch-foundation/">Ray Comes to the PyTorch Foundation</a></p><p><br /><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="28529518" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/0acc4d85-e38a-44b5-8069-7259312cb1a9/audio/e2127baf-c6cc-45e9-a44c-2040e9587522/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Why PyTorch Won</itunes:title>
      <itunes:author>PyTorch, Luca Antiga, Frederic Lardinois, The New Stack</itunes:author>
      <itunes:duration>00:29:43</itunes:duration>
      <itunes:summary>At the PyTorch Conference 2025 in San Francisco, Luca Antiga — CTO of Lightning AI and head of the PyTorch Foundation’s Technical Advisory Council — discussed the evolution and influence of PyTorch, a framework originally designed to be “Pythonic” and researcher-friendly.</itunes:summary>
      <itunes:subtitle>At the PyTorch Conference 2025 in San Francisco, Luca Antiga — CTO of Lightning AI and head of the PyTorch Foundation’s Technical Advisory Council — discussed the evolution and influence of PyTorch, a framework originally designed to be “Pythonic” and researcher-friendly.</itunes:subtitle>
      <itunes:keywords>frederic lardinois, tech podcast, the new stack, luca antiga, tech, ai development, open source, pytorch, software development</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1557</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">86d5966b-d745-41e3-991a-f133ead8fbb7</guid>
      <title>Harness CEO Jyoti Bansal on Why AI Coding Doesn&apos;t Help You Ship Faster</title>
      <description><![CDATA[<p>Harness co-founder Jyoti Bansal highlights a growing issue in software development: while AI tools help generate more code, they often create bottlenecks further along the pipeline, especially in testing, deployment, and compliance. Since its 2017 launch, Harness has aimed to streamline these stages using AI and machine learning. With the rise of large language models (LLMs), the company shifted toward agentic AI, introducing a library of specialized agents—like DevOps, SRE, AppSec, and FinOps agents—that operate behind a unified interface called Harness AI. These agents assist in building production pipelines, not deploying code directly, ensuring human oversight remains central for compliance and security.</p><p>Bansal emphasizes that AI in development isn't replacing people but accelerating workflows to meet tighter timelines. He also notes strong enterprise adoption, with even large, traditionally slower-moving organizations embracing AI integration. On the topic of an AI bubble, Bansal sees it as a natural part of innovation, akin to the dot-com era, where market excitement can still lead to meaningful long-term transformation despite short-term volatility.</p><p>Learn more from The New Stack about Harness' AI approach to software development:</p><p><a href="https://thenewstack.io/harness-ai-tackles-software-developments-real-bottleneck/">Harness AI Tackles Software Development’s Real Bottleneck</a></p><p><a href="https://thenewstack.io/harnessing-ai-to-elevate-automated-software-testing/">Harnessing AI To Elevate Automated Software Testing</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Fri, 10 Oct 2025 19:30:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Jyoti Bansal, Harness, The New Stack, Frederic Lardinois)</author>
      <link>https://thenewstack.simplecast.com/episodes/harness-ceo-jyoti-bansal-on-why-ai-coding-doesnt-help-you-ship-faster-l2iOHCsk</link>
      <content:encoded><![CDATA[<p>Harness co-founder Jyoti Bansal highlights a growing issue in software development: while AI tools help generate more code, they often create bottlenecks further along the pipeline, especially in testing, deployment, and compliance. Since its 2017 launch, Harness has aimed to streamline these stages using AI and machine learning. With the rise of large language models (LLMs), the company shifted toward agentic AI, introducing a library of specialized agents—like DevOps, SRE, AppSec, and FinOps agents—that operate behind a unified interface called Harness AI. These agents assist in building production pipelines, not deploying code directly, ensuring human oversight remains central for compliance and security.</p><p>Bansal emphasizes that AI in development isn't replacing people but accelerating workflows to meet tighter timelines. He also notes strong enterprise adoption, with even large, traditionally slower-moving organizations embracing AI integration. On the topic of an AI bubble, Bansal sees it as a natural part of innovation, akin to the dot-com era, where market excitement can still lead to meaningful long-term transformation despite short-term volatility.</p><p>Learn more from The New Stack about Harness' AI approach to software development:</p><p><a href="https://thenewstack.io/harness-ai-tackles-software-developments-real-bottleneck/">Harness AI Tackles Software Development’s Real Bottleneck</a></p><p><a href="https://thenewstack.io/harnessing-ai-to-elevate-automated-software-testing/">Harnessing AI To Elevate Automated Software Testing</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="37818661" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/b3381a3d-7804-45b7-ad1c-d2b73eaae32b/audio/499c410c-17fd-4e00-bdd9-6ac75a8bedf8/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Harness CEO Jyoti Bansal on Why AI Coding Doesn&apos;t Help You Ship Faster</itunes:title>
      <itunes:author>Jyoti Bansal, Harness, The New Stack, Frederic Lardinois</itunes:author>
      <itunes:duration>00:39:23</itunes:duration>
      <itunes:summary>Harness co-founder Jyoti Bansal highlights a growing issue in software development: while AI tools help generate more code, they often create bottlenecks further along the pipeline, especially in testing, deployment, and compliance. Since its 2017 launch, Harness has aimed to streamline these stages using AI and machine learning. With the rise of large language models (LLMs), the company shifted toward agentic AI, introducing a library of specialized agents—like DevOps, SRE, AppSec, and FinOps agents—that operate behind a unified interface called Harness AI. These agents assist in building production pipelines, not deploying code directly, ensuring human oversight remains critical for compliance and security.</itunes:summary>
      <itunes:subtitle>Harness co-founder Jyoti Bansal highlights a growing issue in software development: while AI tools help generate more code, they often create bottlenecks further along the pipeline, especially in testing, deployment, and compliance. Since its 2017 launch, Harness has aimed to streamline these stages using AI and machine learning. With the rise of large language models (LLMs), the company shifted toward agentic AI, introducing a library of specialized agents—like DevOps, SRE, AppSec, and FinOps agents—that operate behind a unified interface called Harness AI. These agents assist in building production pipelines, not deploying code directly, ensuring human oversight remains critical for compliance and security.</itunes:subtitle>
      <itunes:keywords>jyoti bansal, software developer, ai, tech podcast, the new stack, ai developer podcast, tech, harness, ai development, software development, software engineer, the new stack agents, software testing, ai podcast</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1556</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">65440ae0-ea52-465c-9ff2-d8cb4c9b0d6b</guid>
      <title>How Agentgateway Solves Agentic AI’s Connectivity Challenges</title>
      <description><![CDATA[<p>The agentic AI space faces challenges around secure, governed connectivity between agents, tools, large language models, and microservices. To address this, Solo.io developed two open-source projects: Kagent and Agentgateway. While Kagent, donated to the Cloud Native Computing Foundation, helps scale AI agents, it lacks a secure way to mediate communication between agents and tools. Enter Agentgateway, donated to the Linux Foundation, which provides governance, observability, and security for agent-to-agent and agent-to-tool traffic. Written in Rust, it supports protocols like MCP and A2A and integrates with the Kubernetes Gateway API and inference gateways.</p><p>Lin Sun, Solo.io’s head of open source, explained that Agentgateway allows developers to control which tools agents can access—offering flexibility to expose only tested or approved tools. This enables fine-grained policy enforcement and resilience in agent communication, similar to how service meshes manage microservice traffic. Agentgateway ensures secure and selective tool exposure, supporting scalable and secure agent ecosystems. Major players like AWS and Microsoft are also engaging in its development.</p><p>Learn more from The New Stack about the latest in open source projects like Agentgateway:</p><p><a href="https://thenewstack.io/why-tech-giants-are-backing-the-new-agentgateway-project/">Why Tech Giants Are Backing the New Agentgateway Project</a></p><p><a href="https://thenewstack.io/ai-agents-are-creating-a-new-security-nightmare-for-enterprises-and-startups/">AI Agents Are Creating a New Security Nightmare for Enterprises and Startups</a></p><p><a href="https://thenewstack.io/five-steps-to-build-ai-agents-that-actually-deliver-business-results/">Five Steps to Build AI Agents that Actually Deliver Business Results</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Fri, 03 Oct 2025 16:15:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Solo.io, Lin Sun, The New Stack, Frederic Lardinois)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-agentgateway-solves-agentic-ais-connectivity-challenges-_Xf7uKoi</link>
      <content:encoded><![CDATA[<p>The agentic AI space faces challenges around secure, governed connectivity between agents, tools, large language models, and microservices. To address this, Solo.io developed two open-source projects: Kagent and Agentgateway. While Kagent, donated to the Cloud Native Computing Foundation, helps scale AI agents, it lacks a secure way to mediate communication between agents and tools. Enter Agentgateway, donated to the Linux Foundation, which provides governance, observability, and security for agent-to-agent and agent-to-tool traffic. Written in Rust, it supports protocols like MCP and A2A and integrates with the Kubernetes Gateway API and inference gateways.</p><p>Lin Sun, Solo.io’s head of open source, explained that Agentgateway allows developers to control which tools agents can access—offering flexibility to expose only tested or approved tools. This enables fine-grained policy enforcement and resilience in agent communication, similar to how service meshes manage microservice traffic. Agentgateway ensures secure and selective tool exposure, supporting scalable and secure agent ecosystems. Major players like AWS and Microsoft are also engaging in its development.</p><p>Learn more from The New Stack about the latest in open source projects like Agentgateway:</p><p><a href="https://thenewstack.io/why-tech-giants-are-backing-the-new-agentgateway-project/">Why Tech Giants Are Backing the New Agentgateway Project</a></p><p><a href="https://thenewstack.io/ai-agents-are-creating-a-new-security-nightmare-for-enterprises-and-startups/">AI Agents Are Creating a New Security Nightmare for Enterprises and Startups</a></p><p><a href="https://thenewstack.io/five-steps-to-build-ai-agents-that-actually-deliver-business-results/">Five Steps to Build AI Agents that Actually Deliver Business Results</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="19774527" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/f679c70a-5e60-4e65-8b2c-a1e7815ad394/audio/738e58a7-2d35-4650-a4e5-dc58440dcfd6/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Agentgateway Solves Agentic AI’s Connectivity Challenges</itunes:title>
      <itunes:author>Solo.io, Lin Sun, The New Stack, Frederic Lardinois</itunes:author>
      <itunes:duration>00:20:35</itunes:duration>
      <itunes:summary>The agentic AI space faces challenges around secure, governed connectivity between agents, tools, large language models, and microservices. To address this, Solo.io developed two open-source projects: Kagent and Agentgateway. While Kagent, donated to the Cloud Native Computing Foundation, helps scale AI agents, it lacks a secure way to mediate communication between agents and tools. Enter Agentgateway, donated to the Linux Foundation, which provides governance, observability, and security for agent-to-agent and agent-to-tool traffic. Written in Rust, it supports protocols like MCP and A2A and integrates with Kubernetes Gateway API and inference gateways.</itunes:summary>
      <itunes:subtitle>The agentic AI space faces challenges around secure, governed connectivity between agents, tools, large language models, and microservices. To address this, Solo.io developed two open-source projects: Kagent and Agentgateway. While Kagent, donated to the Cloud Native Computing Foundation, helps scale AI agents, it lacks a secure way to mediate communication between agents and tools. Enter Agentgateway, donated to the Linux Foundation, which provides governance, observability, and security for agent-to-agent and agent-to-tool traffic. Written in Rust, it supports protocols like MCP and A2A and integrates with Kubernetes Gateway API and inference gateways.</itunes:subtitle>
      <itunes:keywords>lin sun, agentgateway, frederic lardinois, the new stack, solo.io, the new stack agents, open source, kagent, open source summit amsterdam, open source security</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1555</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">64c685d8-8316-43fa-a557-c8db97181adf</guid>
      <title>Sentry Founder: AI Patch Generation Is &apos;Awful&apos; Right Now</title>
      <description><![CDATA[<p>David Cramer, founder and chief product officer of Sentry, remains skeptical about generative AI's current ability to replace human engineers, particularly in software production. While he acknowledges AI tools aren't yet reliable enough for full autonomy—especially in tasks like patch generation—he sees value in using large language models (LLMs) to enhance productivity. Sentry's AI-powered tool, Seer, uses GenAI to help developers debug more efficiently by identifying root causes and summarizing complex system data, mimicking some functions of senior engineers. However, Cramer emphasizes that human oversight remains essential, describing the current stage as "human in the loop" AI, useful for speeding up code reviews and identifying overlooked bugs.</p><p>Cramer also addressed Sentry's shift from open source to fair source licensing due to frustration over third parties commercializing their software without contributing back. Sentry now uses Functional Source Licensing, which becomes Apache 2.0 after two years. This move aims to strike a balance between openness and preventing exploitation, while maintaining accessibility for users and avoiding fragmented product versions.</p><p>Learn more from The New Stack about Sentry and David Cramer's thoughts on AI development:</p><p><a href="https://thenewstack.io/install-sentry-to-monitor-live-applications/">Install Sentry to Monitor Live Applications</a></p><p><a href="https://thenewstack.io/frontend-development-challenges-for-2021/">Frontend Development Challenges for 2021</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Fri, 26 Sep 2025 16:40:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Sentry, David Cramer, Alex Williams, Frederic Lardinois, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/sentry-founder-ai-patch-generation-is-awful-right-now-MMnZttlV</link>
      <content:encoded><![CDATA[<p>David Cramer, founder and chief product officer of Sentry, remains skeptical about generative AI's current ability to replace human engineers, particularly in software production. While he acknowledges AI tools aren't yet reliable enough for full autonomy—especially in tasks like patch generation—he sees value in using large language models (LLMs) to enhance productivity. Sentry's AI-powered tool, Seer, uses GenAI to help developers debug more efficiently by identifying root causes and summarizing complex system data, mimicking some functions of senior engineers. However, Cramer emphasizes that human oversight remains essential, describing the current stage as "human in the loop" AI, useful for speeding up code reviews and identifying overlooked bugs.</p><p>Cramer also addressed Sentry's shift from open source to fair source licensing due to frustration over third parties commercializing their software without contributing back. Sentry now uses Functional Source Licensing, which becomes Apache 2.0 after two years. This move aims to strike a balance between openness and preventing exploitation, while maintaining accessibility for users and avoiding fragmented product versions.</p><p>Learn more from The New Stack about Sentry and David Cramer's thoughts on AI development:</p><p><a href="https://thenewstack.io/install-sentry-to-monitor-live-applications/">Install Sentry to Monitor Live Applications</a></p><p><a href="https://thenewstack.io/frontend-development-challenges-for-2021/">Frontend Development Challenges for 2021</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="43277208" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/c9c66cf3-555f-49cb-9f4f-681d2023b4da/audio/df7ff7ce-904d-48a3-8506-d9f48ff36cea/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Sentry Founder: AI Patch Generation Is &apos;Awful&apos; Right Now</itunes:title>
      <itunes:author>Sentry, David Cramer, Alex Williams, Frederic Lardinois, The New Stack</itunes:author>
      <itunes:duration>00:45:04</itunes:duration>
      <itunes:summary>David Cramer, founder and chief product officer of Sentry, remains skeptical about generative AI&apos;s current ability to replace human engineers, particularly in software production. While he acknowledges AI tools aren&apos;t yet reliable enough for full autonomy—especially in tasks like patch generation—he sees value in using large language models (LLMs) to enhance productivity. Sentry&apos;s AI-powered tool, Seer, uses GenAI to help developers debug more efficiently by identifying root causes and summarizing complex system data, mimicking some functions of senior engineers. However, Cramer emphasizes that human oversight remains essential, describing the current stage as &quot;human in the loop&quot; AI, useful for speeding up code reviews and identifying overlooked bugs.</itunes:summary>
      <itunes:subtitle>David Cramer, founder and chief product officer of Sentry, remains skeptical about generative AI&apos;s current ability to replace human engineers, particularly in software production. While he acknowledges AI tools aren&apos;t yet reliable enough for full autonomy—especially in tasks like patch generation—he sees value in using large language models (LLMs) to enhance productivity. Sentry&apos;s AI-powered tool, Seer, uses GenAI to help developers debug more efficiently by identifying root causes and summarizing complex system data, mimicking some functions of senior engineers. However, Cramer emphasizes that human oversight remains essential, describing the current stage as &quot;human in the loop&quot; AI, useful for speeding up code reviews and identifying overlooked bugs.</itunes:subtitle>
      <itunes:keywords>software developer, functional source, tech podcast, the new stack, ai developer, tech, ai development, developer podcast, the new stack agents, open source, david cramer, sentry</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1554</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">4470f21f-cd49-40d0-9514-fe8ff95659df</guid>
      <title>Why Linear Built an API For Agents</title>
      <description><![CDATA[<p>Cursor, the AI code editor, recently integrated with Linear, a project management tool, enabling developers to assign tasks directly to Cursor's background coding agent within Linear. The collaboration felt natural, as Cursor already used Linear internally. Linear's new agent-specific API played a key role in enabling this integration, providing agents like Cursor with context-aware sessions to interact efficiently with the platform.</p><p>Developers can now offload tasks such as fixing issues, updating documentation, or managing dependencies to the Cursor agent. However, both Linear’s Tom Moor and Cursor’s Andrew Milich emphasized the importance of giving agents clear, thoughtful input. Simply assigning vague tasks like “@cursor, fix this” isn’t effective—developers still need to guide the agent with relevant context, such as links to similar pull requests.</p><p>Milich and Moor also discussed the growing value and adoption of autonomous agents, and hinted at a future where more companies build agent-specific APIs to support these tools. The full interview is available via podcast or YouTube.</p><p>Learn more from The New Stack about the latest in AI and development in Cursor AI and Linear:</p><p><a href="https://thenewstack.io/install-cursor-and-learn-programming-with-ai-help/">Install Cursor and Learn Programming With AI Help</a></p><p><a href="https://thenewstack.io/using-cursor-ai-as-part-of-your-development-workflow/">Using Cursor AI as Part of Your Development Workflow</a></p><p><a href="https://thenewstack.io/anti-agile-project-tracker-linear-the-latest-to-take-on-jira/">Anti-Agile Project Tracker Linear the Latest to Take on Jira</a></p><p><br /><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Fri, 19 Sep 2025 20:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Tom Moor, Andrew Milich, Cursor, Linear, Frederic Lardinois)</author>
      <link>https://thenewstack.simplecast.com/episodes/why-linear-built-an-api-for-agents-6aGIzuu7</link>
      <content:encoded><![CDATA[<p>Cursor, the AI code editor, recently integrated with Linear, a project management tool, enabling developers to assign tasks directly to Cursor's background coding agent within Linear. The collaboration felt natural, as Cursor already used Linear internally. Linear's new agent-specific API played a key role in enabling this integration, providing agents like Cursor with context-aware sessions to interact efficiently with the platform.</p><p>Developers can now offload tasks such as fixing issues, updating documentation, or managing dependencies to the Cursor agent. However, both Linear’s Tom Moor and Cursor’s Andrew Milich emphasized the importance of giving agents clear, thoughtful input. Simply assigning vague tasks like “@cursor, fix this” isn’t effective—developers still need to guide the agent with relevant context, such as links to similar pull requests.</p><p>Milich and Moor also discussed the growing value and adoption of autonomous agents, and hinted at a future where more companies build agent-specific APIs to support these tools. The full interview is available via podcast or YouTube.</p><p>Learn more from The New Stack about the latest in AI and development in Cursor AI and Linear:</p><p><a href="https://thenewstack.io/install-cursor-and-learn-programming-with-ai-help/">Install Cursor and Learn Programming With AI Help</a></p><p><a href="https://thenewstack.io/using-cursor-ai-as-part-of-your-development-workflow/">Using Cursor AI as Part of Your Development Workflow</a></p><p><a href="https://thenewstack.io/anti-agile-project-tracker-linear-the-latest-to-take-on-jira/">Anti-Agile Project Tracker Linear the Latest to Take on Jira</a></p><p><br /><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="46265198" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/2cd743fc-f8f0-4f4b-a2b1-d0d659ee5486/audio/acc87ce5-f45d-491f-b1ca-8888535c0651/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Why Linear Built an API For Agents</itunes:title>
      <itunes:author>Tom Moor, Andrew Milich, Cursor, Linear, Frederic Lardinois</itunes:author>
      <itunes:duration>00:48:11</itunes:duration>
      <itunes:summary>Cursor, the AI code editor, recently integrated with Linear, a project management tool, enabling developers to assign tasks directly to Cursor&apos;s background coding agent within Linear. The collaboration felt natural, as Cursor already used Linear internally. Linear&apos;s new agent-specific API played a key role in enabling this integration, providing agents like Cursor with context-aware sessions to interact efficiently with the platform.</itunes:summary>
      <itunes:subtitle>Cursor, the AI code editor, recently integrated with Linear, a project management tool, enabling developers to assign tasks directly to Cursor&apos;s background coding agent within Linear. The collaboration felt natural, as Cursor already used Linear internally. Linear&apos;s new agent-specific API played a key role in enabling this integration, providing agents like Cursor with context-aware sessions to interact efficiently with the platform.</itunes:subtitle>
      <itunes:keywords>tom moor, frederic lardinois, linear, tech podcast, the new stack, cursor ai, ai developer, tech, software development, andrew milich, the new stack agents, cursor</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1553</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">bf1f5d07-27d6-4b9d-be2f-3570102d5331</guid>
      <title>ServiceNow Says Windsurf Gave Its Engineers a 10% Productivity Boost</title>
      <description><![CDATA[<p>In this episode of The New Stack Agents, ServiceNow CTO and co-founder Pat Casey discusses why the company runs 90% of its workloads—including AI infrastructure—on its own physical servers rather than the public cloud. ServiceNow maintains GPU hubs across global data centers, enabling efficient, low-latency AI operations. Casey downplays the complexity of running AI models on-prem, noting their team’s strong Kubernetes and Triton expertise. </p><p>The company recently switched from GitHub Copilot to its own AI coding assistant, Windsurf, yielding a 10% productivity boost among 7,000 engineers. However, use of such tools isn’t mandatory—performance remains the main metric. Casey also addresses the impact of AI on junior developers, acknowledging that AI tools often handle tasks traditionally assigned to them. While ServiceNow still hires many interns, he sees the entry-level tech job market as increasingly vulnerable. Despite these concerns, Casey remains optimistic, viewing the AI revolution as transformative and ultimately beneficial, though not without disruption or risk. <br /> </p><p>Learn more from The New Stack about the latest in AI and development at ServiceNow:</p><p><a href="https://thenewstack.io/servicenow-launches-a-control-tower-for-agents/">ServiceNow Launches a Control Tower for AI Agents</a></p><p><a href="https://thenewstack.io/servicenow-acquires-data-world-to-expand-its-ai-data-strategy/">ServiceNow Acquires Data.World To Expand Its AI Data Strategy</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Fri, 12 Sep 2025 18:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Pat Casey, ServiceNow, The New Stack, Frederic Lardinois, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/servicenow-says-windsurf-gave-its-engineers-a-10-productivity-boost-aeRl1smX</link>
      <content:encoded><![CDATA[<p>In this episode of The New Stack Agents, ServiceNow CTO and co-founder Pat Casey discusses why the company runs 90% of its workloads—including AI infrastructure—on its own physical servers rather than the public cloud. ServiceNow maintains GPU hubs across global data centers, enabling efficient, low-latency AI operations. Casey downplays the complexity of running AI models on-prem, noting their team’s strong Kubernetes and Triton expertise. </p><p>The company recently switched from GitHub Copilot to its own AI coding assistant, Windsurf, yielding a 10% productivity boost among 7,000 engineers. However, use of such tools isn’t mandatory—performance remains the main metric. Casey also addresses the impact of AI on junior developers, acknowledging that AI tools often handle tasks traditionally assigned to them. While ServiceNow still hires many interns, he sees the entry-level tech job market as increasingly vulnerable. Despite these concerns, Casey remains optimistic, viewing the AI revolution as transformative and ultimately beneficial, though not without disruption or risk. <br /> </p><p>Learn more from The New Stack about the latest in AI and development at ServiceNow:</p><p><a href="https://thenewstack.io/servicenow-launches-a-control-tower-for-agents/">ServiceNow Launches a Control Tower for AI Agents</a></p><p><a href="https://thenewstack.io/servicenow-acquires-data-world-to-expand-its-ai-data-strategy/">ServiceNow Acquires Data.World To Expand Its AI Data Strategy</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="55357901" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/d22c4f78-10c1-4449-a1d4-18cf7c63ef58/audio/0da42812-e862-4697-8417-ac589f8ecbc8/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>ServiceNow Says Windsurf Gave Its Engineers a 10% Productivity Boost</itunes:title>
      <itunes:author>Pat Casey, ServiceNow, The New Stack, Frederic Lardinois, Alex Williams</itunes:author>
      <itunes:duration>00:57:39</itunes:duration>
      <itunes:summary>In this episode of The New Stack Agents, ServiceNow CTO and co-founder Pat Casey discusses why the company runs 90% of its workloads—including AI infrastructure—on its own physical servers rather than the public cloud. ServiceNow maintains GPU hubs across global data centers, enabling efficient, low-latency AI operations. Casey downplays the complexity of running AI models on-prem, noting their team’s strong Kubernetes and Triton expertise. </itunes:summary>
      <itunes:subtitle>In this episode of The New Stack Agents, ServiceNow CTO and co-founder Pat Casey discusses why the company runs 90% of its workloads—including AI infrastructure—on its own physical servers rather than the public cloud. ServiceNow maintains GPU hubs across global data centers, enabling efficient, low-latency AI operations. Casey downplays the complexity of running AI models on-prem, noting their team’s strong Kubernetes and Triton expertise. </itunes:subtitle>
      <itunes:keywords>frederic lardinois, software developer, tech podcast, alex williams, the new stack, ai developer, pat casey, tech, ai development, ai applications, software engineer, the new stack agents, servicenow</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1552</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5f6f5d66-9e0d-41c7-bfea-f0b3826ec031</guid>
      <title>How the EU’s Cyber Act Burdens Lone Open Source Developers</title>
      <description><![CDATA[<p>The European Union’s upcoming Cyber Resilience Act (CRA) takes effect in October 2026, with the remaining requirements following in December 2027, and introduces significant cybersecurity compliance requirements for software vendors, including those who rely heavily on open source components. At the Open Source Summit Europe, Christopher "CRob" Robinson of the Open Source Security Foundation highlighted concerns about how these regulations could impact open source maintainers. Many open source projects begin as personal solutions to shared problems and grow in popularity, often ending up embedded in critical systems across industries like automotive and energy. Despite this widespread use—Robinson noted up to 97% of commercial software contains open source—these projects are frequently maintained by individuals or small teams with limited resources.</p><p>Developers often have no visibility into how their code is used, yet they’re increasingly burdened by legal and compliance demands from downstream users, such as requests for Software Bills of Materials (SBOMs) and conformity assessments. The CRA raises the stakes, with potential penalties in the billions for noncompliance, putting immense pressure on the open source ecosystem.</p><p>Learn more from The New Stack about Open Source Security:</p><p><a href="https://thenewstack.io/open-source-propels-the-fall-of-security-by-obscurity/">Open Source Propels the Fall of Security by Obscurity</a></p><p><a href="https://thenewstack.io/there-is-just-one-way-to-do-open-source-security-together/">There Is Just One Way To Do Open Source Security: Together</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 11 Sep 2025 16:45:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Christopher Robinson, Open SSF, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-the-eus-cyber-act-burdens-lone-open-source-developers-cN_xtMYo</link>
      <content:encoded><![CDATA[<p>The European Union’s upcoming Cyber Resilience Act (CRA) takes effect in October 2026, with the remaining requirements following in December 2027, and introduces significant cybersecurity compliance requirements for software vendors, including those who rely heavily on open source components. At the Open Source Summit Europe, Christopher "CRob" Robinson of the Open Source Security Foundation highlighted concerns about how these regulations could impact open source maintainers. Many open source projects begin as personal solutions to shared problems and grow in popularity, often ending up embedded in critical systems across industries like automotive and energy. Despite this widespread use—Robinson noted up to 97% of commercial software contains open source—these projects are frequently maintained by individuals or small teams with limited resources.</p><p>Developers often have no visibility into how their code is used, yet they’re increasingly burdened by legal and compliance demands from downstream users, such as requests for Software Bills of Materials (SBOMs) and conformity assessments. The CRA raises the stakes, with potential penalties in the billions for noncompliance, putting immense pressure on the open source ecosystem.</p><p>Learn more from The New Stack about Open Source Security:</p><p><a href="https://thenewstack.io/open-source-propels-the-fall-of-security-by-obscurity/">Open Source Propels the Fall of Security by Obscurity</a></p><p><a href="https://thenewstack.io/there-is-just-one-way-to-do-open-source-security-together/">There Is Just One Way To Do Open Source Security: Together</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="18735063" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/506d9d4e-b335-48e1-9be6-bfce21a59b3b/audio/c69e44af-b2b4-4a70-80d2-9afb6fb1394d/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How the EU’s Cyber Act Burdens Lone Open Source Developers</itunes:title>
      <itunes:author>Christopher Robinson, Open SSF, The New Stack, Alex Williams</itunes:author>
      <itunes:duration>00:19:30</itunes:duration>
      <itunes:summary>The European Union’s upcoming Cyber Resilience Act (CRA) goes into effect in October 2026, with the remainder of the requirements going into effect in December 2027, and introduces significant cybersecurity compliance requirements for software vendors, including those who rely heavily on open source components. At the Open Source Summit Europe, Christopher &quot;CRob&quot; Robinson of the Open Source Security Foundation highlighted concerns about how these regulations could impact open source maintainers. Many open source projects begin as personal solutions to shared problems and grow in popularity, often ending up embedded in critical systems across industries like automotive and energy. Despite this widespread use—Robinson noted up to 97% of commercial software contains open source—these projects are frequently maintained by individuals or small teams with limited resources.</itunes:summary>
      <itunes:subtitle>The European Union’s upcoming Cyber Resilience Act (CRA) goes into effect in October 2026, with the remainder of the requirements going into effect in December 2027, and introduces significant cybersecurity compliance requirements for software vendors, including those who rely heavily on open source components. At the Open Source Summit Europe, Christopher &quot;CRob&quot; Robinson of the Open Source Security Foundation highlighted concerns about how these regulations could impact open source maintainers. Many open source projects begin as personal solutions to shared problems and grow in popularity, often ending up embedded in critical systems across industries like automotive and energy. Despite this widespread use—Robinson noted up to 97% of commercial software contains open source—these projects are frequently maintained by individuals or small teams with limited resources.</itunes:subtitle>
      <itunes:keywords>christopher robinson, software developer, open source ai, ai, resilience act, alex williams, the new stack, ai development, open ssf, software ai security, software engineer, ai security, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1551</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">764dc96c-8b11-4703-98d0-00693fd71dbf</guid>
      <title>How Warp Went From Terminal To Agentic Development Environment</title>
      <description><![CDATA[<p>In this week’s <i>The New Stack Agents</i>, Zach Lloyd, founder and CEO of Warp, discussed the launch of Warp Code, the latest evolution of the Warp terminal into a full agentic development environment. Originally launched in 2022 to modernize the terminal, Warp now integrates powerful AI agents to help developers write, debug, and ship code. Key new features include a built-in file editor, project-structuring tools, agent-driven code review, and WARP.md files that guide agent behavior. </p><p>Recognizing developers’ hesitation to trust AI-generated code, Warp emphasizes transparency and control, enabling users to inspect and steer the agent’s work in real time through "persistent input" and task list updates. While Warp supports terminal workflows, Lloyd says it’s now better viewed as an AI coding platform. Interestingly, the launch announcement was delivered from horseback in a Western-themed ad, reflecting Warp’s desire to stand out in a crowded field of conventional tech product rollouts. The quirky “Code on Warp” (C.O.W.) branding captured attention and embodied their unique approach.</p><p>Learn more from The New Stack about the latest in AI and Warp:</p><p><a href="https://thenewstack.io/warp-goes-agentic-a-developer-walk-through-of-warp-2-0/">Warp Goes Agentic: A Developer Walk-Through of Warp 2.0</a></p><p><a href="https://thenewstack.io/developer-review-of-warp-for-windows-an-ai-terminal-app/">Developer Review of Warp for Windows, an AI Terminal App</a></p><p><a href="https://thenewstack.io/how-ai-can-help-you-learn-the-art-of-programming/">How AI Can Help You Learn the Art of Programming</a></p><p>Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</p>
]]></description>
      <pubDate>Fri, 5 Sep 2025 13:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Zach Lloyd, warp, The New Stack, Frederic Lardinois, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-warp-went-from-terminal-to-agentic-development-environment-vz426rSh</link>
      <content:encoded><![CDATA[<p>In this week’s <i>The New Stack Agents</i>, Zach Lloyd, founder and CEO of Warp, discussed the launch of Warp Code, the latest evolution of the Warp terminal into a full agentic development environment. Originally launched in 2022 to modernize the terminal, Warp now integrates powerful AI agents to help developers write, debug, and ship code. Key new features include a built-in file editor, project-structuring tools, agent-driven code review, and WARP.md files that guide agent behavior. </p><p>Recognizing developers’ hesitation to trust AI-generated code, Warp emphasizes transparency and control, enabling users to inspect and steer the agent’s work in real time through "persistent input" and task list updates. While Warp supports terminal workflows, Lloyd says it’s now better viewed as an AI coding platform. Interestingly, the launch announcement was delivered from horseback in a Western-themed ad, reflecting Warp’s desire to stand out in a crowded field of conventional tech product rollouts. The quirky “Code on Warp” (C.O.W.) branding captured attention and embodied their unique approach.</p><p>Learn more from The New Stack about the latest in AI and Warp:</p><p><a href="https://thenewstack.io/warp-goes-agentic-a-developer-walk-through-of-warp-2-0/">Warp Goes Agentic: A Developer Walk-Through of Warp 2.0</a></p><p><a href="https://thenewstack.io/developer-review-of-warp-for-windows-an-ai-terminal-app/">Developer Review of Warp for Windows, an AI Terminal App</a></p><p><a href="https://thenewstack.io/how-ai-can-help-you-learn-the-art-of-programming/">How AI Can Help You Learn the Art of Programming</a></p><p>Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</p>
]]></content:encoded>
      <enclosure length="51271932" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/53f951d1-50f9-4d38-90e4-12a99c2d3816/audio/0b1b5bf1-cb23-42f4-9a09-7c5b197be39a/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Warp Went From Terminal To Agentic Development Environment</itunes:title>
      <itunes:author>Zach Lloyd, warp, The New Stack, Frederic Lardinois, Alex Williams</itunes:author>
      <itunes:duration>00:53:24</itunes:duration>
      <itunes:summary>In this week’s The New Stack Agents, Zach Lloyd, founder and CEO of Warp, discussed the launch of Warp Code, the latest evolution of the Warp terminal into a full agentic development environment. Originally launched in 2022 to modernize the terminal, Warp now integrates powerful AI agents to help developers write, debug, and ship code. Key new features include a built-in file editor, project-structuring tools, agent-driven code review, and WARP.md files that guide agent behavior. </itunes:summary>
      <itunes:subtitle>In this week’s The New Stack Agents, Zach Lloyd, founder and CEO of Warp, discussed the launch of Warp Code, the latest evolution of the Warp terminal into a full agentic development environment. Originally launched in 2022 to modernize the terminal, Warp now integrates powerful AI agents to help developers write, debug, and ship code. Key new features include a built-in file editor, project-structuring tools, agent-driven code review, and WARP.md files that guide agent behavior. </itunes:subtitle>
      <itunes:keywords>software developer, ai agents, tech podcast, the new stack, ai developer, tech, warp code, software development, zach lloyd, agentic development environment, the new stack agents, warp, ai coding platform</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1549</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">18151298-1b25-4885-affa-7524fedfce53</guid>
      <title>The Linux Foundation In The Age Of AI</title>
      <description><![CDATA[<p>In a recent episode of The New Stack Agents from the Open Source Summit in Amsterdam, Jim Zemlin, executive director of the Linux Foundation, discussed the evolving landscape of open source AI. While the Linux Foundation has helped build ecosystems like the CNCF for cloud-native computing, there's no unified umbrella foundation yet for open source AI. Existing efforts include the PyTorch Foundation and LF AI & Data, but AI development is still fragmented across models, tooling, and standards. </p><p>Zemlin highlighted the industry's shift from foundational models to open-weight models and now toward inference stacks and agentic AI. He suggested a collective effort may eventually form but cautioned against forcing structure too early, stressing the importance of not hindering innovation. Foundations, he said, must balance scale with agility. On the debate over what qualifies as "open source" in AI, Zemlin adopted a pragmatic view, acknowledging the costs of creating frontier models. He supports open-weight models and believes fully open models, from data to deployment, may emerge over time. </p><p>Learn more from The New Stack about the latest in AI and open source, AI in China, Europe's AI and security regulations, and more: </p><p><a href="https://thenewstack.io/open-source-is-not-local-source-and-the-case-for-global-cooperation/">Open Source Is Not Local Source, and the Case for Global Cooperation</a></p><p><a href="https://thenewstack.io/u-s-blocks-open-source-help-from-these-countries/">US Blocks Open Source ‘Help’ From These Countries</a></p><p><a href="https://thenewstack.io/open-source-is-too-important-to-dilute/">Open Source Is Worth Defending</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Tue, 2 Sep 2025 16:30:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Jim Zemlin, The New Stack, The Linux Foundation, Frederic Lardinois)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-linux-foundation-in-the-age-of-ai-KUCmmcZO</link>
      <content:encoded><![CDATA[<p>In a recent episode of The New Stack Agents from the Open Source Summit in Amsterdam, Jim Zemlin, executive director of the Linux Foundation, discussed the evolving landscape of open source AI. While the Linux Foundation has helped build ecosystems like the CNCF for cloud-native computing, there's no unified umbrella foundation yet for open source AI. Existing efforts include the PyTorch Foundation and LF AI & Data, but AI development is still fragmented across models, tooling, and standards. </p><p>Zemlin highlighted the industry's shift from foundational models to open-weight models and now toward inference stacks and agentic AI. He suggested a collective effort may eventually form but cautioned against forcing structure too early, stressing the importance of not hindering innovation. Foundations, he said, must balance scale with agility. On the debate over what qualifies as "open source" in AI, Zemlin adopted a pragmatic view, acknowledging the costs of creating frontier models. He supports open-weight models and believes fully open models, from data to deployment, may emerge over time. </p><p>Learn more from The New Stack about the latest in AI and open source, AI in China, Europe's AI and security regulations, and more: </p><p><a href="https://thenewstack.io/open-source-is-not-local-source-and-the-case-for-global-cooperation/">Open Source Is Not Local Source, and the Case for Global Cooperation</a></p><p><a href="https://thenewstack.io/u-s-blocks-open-source-help-from-these-countries/">US Blocks Open Source ‘Help’ From These Countries</a></p><p><a href="https://thenewstack.io/open-source-is-too-important-to-dilute/">Open Source Is Worth Defending</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="27905505" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/24ed88fa-d8d1-4d09-b4e8-9d4bf8cf1bdd/audio/7f33c4dc-5e65-43cf-af86-6185229023b5/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The Linux Foundation In The Age Of AI</itunes:title>
      <itunes:author>Jim Zemlin, The New Stack, The Linux Foundation, Frederic Lardinois</itunes:author>
      <itunes:duration>00:29:04</itunes:duration>
      <itunes:summary>In a recent episode of The New Stack Agents from the Open Source Summit in Amsterdam, Jim Zemlin, executive director of the Linux Foundation, discussed the evolving landscape of open source AI. While the Linux Foundation has helped build ecosystems like the CNCF for cloud-native computing, there&apos;s no unified umbrella foundation yet for open source AI. Existing efforts include the PyTorch Foundation and LF AI &amp; Data, but AI development is still fragmented across models, tooling, and standards. </itunes:summary>
      <itunes:subtitle>In a recent episode of The New Stack Agents from the Open Source Summit in Amsterdam, Jim Zemlin, executive director of the Linux Foundation, discussed the evolving landscape of open source AI. While the Linux Foundation has helped build ecosystems like the CNCF for cloud-native computing, there&apos;s no unified umbrella foundation yet for open source AI. Existing efforts include the PyTorch Foundation and LF AI &amp; Data, but AI development is still fragmented across models, tooling, and standards. </itunes:subtitle>
      <itunes:keywords>the linux foundation, frederic lardinois, software developer, the new stack, ai developer, ai development, the new stack makers, the new stack agents, open source, open source summit amsterdam, jim zemlin</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1548</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">cb2e7f49-aebc-4724-8bb3-147f4cd911b4</guid>
      <title>Is Your Data Strategy Ready for the Agentic AI Era?</title>
      <description><![CDATA[<p>Enterprise AI is still in its infancy, with less than 1% of enterprise data currently used to fuel AI, according to Raj Verma, CEO of SingleStore. While consumer AI is slightly more advanced, most organizations are only beginning to understand the scale of infrastructure needed for true AI adoption. Verma predicts AI will evolve in three phases: first, the easy tasks will be automated; next, complex tasks will become easier; and finally, the seemingly impossible will become achievable—likely within three years. </p><p>However, to reach that point, enterprises must align their data strategies with their AI ambitions. Many have rushed into AI fearing obsolescence, but without preparing their data infrastructure, they're at risk of failure. Current legacy systems are not designed for the massive concurrency demands of agentic AI, potentially leading to underperformance. Verma emphasizes the need to move beyond siloed or "swim lane" databases toward unified, high-performance data platforms tailored for the scale and complexity of the AI era.</p><p>Learn more from The New Stack about the latest evolution in AI infrastructure: </p><p><a href="https://thenewstack.io/how-to-use-ai-to-design-intelligent-adaptable-infrastructure/">How To Use AI To Design Intelligent, Adaptable Infrastructure</a></p><p><a href="https://thenewstack.io/how-to-support-developers-in-building-ai-workloads/">How to Support Developers in Building AI Workloads </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 28 Aug 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Raj Verma, SingleStore, Heather Joslyn, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/is-your-data-strategy-ready-for-the-agentic-ai-era-PHlzmxEi</link>
      <content:encoded><![CDATA[<p>Enterprise AI is still in its infancy, with less than 1% of enterprise data currently used to fuel AI, according to Raj Verma, CEO of SingleStore. While consumer AI is slightly more advanced, most organizations are only beginning to understand the scale of infrastructure needed for true AI adoption. Verma predicts AI will evolve in three phases: first, the easy tasks will be automated; next, complex tasks will become easier; and finally, the seemingly impossible will become achievable—likely within three years. </p><p>However, to reach that point, enterprises must align their data strategies with their AI ambitions. Many have rushed into AI fearing obsolescence, but without preparing their data infrastructure, they're at risk of failure. Current legacy systems are not designed for the massive concurrency demands of agentic AI, potentially leading to underperformance. Verma emphasizes the need to move beyond siloed or "swim lane" databases toward unified, high-performance data platforms tailored for the scale and complexity of the AI era.</p><p>Learn more from The New Stack about the latest evolution in AI infrastructure: </p><p><a href="https://thenewstack.io/how-to-use-ai-to-design-intelligent-adaptable-infrastructure/">How To Use AI To Design Intelligent, Adaptable Infrastructure</a></p><p><a href="https://thenewstack.io/how-to-support-developers-in-building-ai-workloads/">How to Support Developers in Building AI Workloads </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="26864369" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/1c5de14d-dbe1-49fb-b01e-b444fcc98a0a/audio/f4235762-cc0b-4595-8ddb-879a6b5a1b25/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Is Your Data Strategy Ready for the Agentic AI Era?</itunes:title>
      <itunes:author>Raj Verma, SingleStore, Heather Joslyn, The New Stack</itunes:author>
      <itunes:duration>00:27:58</itunes:duration>
      <itunes:summary>Enterprise AI is still in its infancy, with less than 1% of enterprise data currently used to fuel AI, according to Raj Verma, CEO of SingleStore. While consumer AI is slightly more advanced, most organizations are only beginning to understand the scale of infrastructure needed for true AI adoption. Verma predicts AI will evolve in three phases: first, the easy tasks will be automated; next, complex tasks will become easier; and finally, the seemingly impossible will become achievable—likely within three years. </itunes:summary>
      <itunes:subtitle>Enterprise AI is still in its infancy, with less than 1% of enterprise data currently used to fuel AI, according to Raj Verma, CEO of SingleStore. While consumer AI is slightly more advanced, most organizations are only beginning to understand the scale of infrastructure needed for true AI adoption. Verma predicts AI will evolve in three phases: first, the easy tasks will be automated; next, complex tasks will become easier; and finally, the seemingly impossible will become achievable—likely within three years. </itunes:subtitle>
      <itunes:keywords>data infrastructure, singlestore, raj verma, software developer, tech podcast, heather joslyn, ai workloads, tech, developer podcast, the new stack makers, software engineer, ai infrastructure, ai engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1547</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">24f73e81-ea6f-4bf1-bbac-fd330de7d112</guid>
      <title>MCP Security Risks Multiply With Each New Agent Connection</title>
      <description><![CDATA[<p>Anthropic's Model Context Protocol (MCP) has become the standard for connecting AI agents to tools and data, but its security has lagged behind. In <i>The New Stack Agents</i> podcast, Tzvika Shneider, CEO of API security startup Pynt, discussed the growing risks MCP introduces. Shneider sees MCP as a natural evolution from traditional APIs to LLMs and now to AI agents. However, MCP adds complexity and vulnerability, especially as agents interact across multiple servers. </p><p>Pynt’s research found that 72% of MCP plugins expose high-risk operations, like code execution or accessing privileged APIs, often without proper approval or validation. The danger compounds when untrusted inputs from one agent influence another with elevated permissions. Unlike traditional APIs, MCP calls are made by non-deterministic agents, making it harder to enforce security guardrails. While MCP exploits remain rare for now, most companies lack mature security strategies for it. Shneider believes MCP merely highlights existing API vulnerabilities, and organizations are only beginning to address these risks.</p><p>Learn more from The New Stack about the latest in Model Context Protocol: </p><p><a href="https://thenewstack.io/model-context-protocol-a-primer-for-the-developers/">Model Context Protocol: A Primer for the Developers </a></p><p><a href="https://thenewstack.io/building-with-mcp-mind-the-security-gaps/">Building With MCP? Mind the Security Gaps</a> </p><p><a>MCP-UI Creators on Why AI Agents Need Rich User Interfaces</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Fri, 22 Aug 2025 16:45:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Tzvika Schneider, Pynt, Alex Williams, Frederic Lardinois, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/mcp-security-risks-multiply-with-each-new-agent-connection-AQJR47vH</link>
      <content:encoded><![CDATA[<p>Anthropic's Model Context Protocol (MCP) has become the standard for connecting AI agents to tools and data, but its security has lagged behind. In <i>The New Stack Agents</i> podcast, Tzvika Shneider, CEO of API security startup Pynt, discussed the growing risks MCP introduces. Shneider sees MCP as a natural evolution from traditional APIs to LLMs and now to AI agents. However, MCP adds complexity and vulnerability, especially as agents interact across multiple servers. </p><p>Pynt’s research found that 72% of MCP plugins expose high-risk operations, like code execution or accessing privileged APIs, often without proper approval or validation. The danger compounds when untrusted inputs from one agent influence another with elevated permissions. Unlike traditional APIs, MCP calls are made by non-deterministic agents, making it harder to enforce security guardrails. While MCP exploits remain rare for now, most companies lack mature security strategies for it. Shneider believes MCP merely highlights existing API vulnerabilities, and organizations are only beginning to address these risks.</p><p>Learn more from The New Stack about the latest in Model Context Protocol: </p><p><a href="https://thenewstack.io/model-context-protocol-a-primer-for-the-developers/">Model Context Protocol: A Primer for the Developers </a></p><p><a href="https://thenewstack.io/building-with-mcp-mind-the-security-gaps/">Building With MCP? Mind the Security Gaps</a> </p><p><a>MCP-UI Creators on Why AI Agents Need Rich User Interfaces</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="45520395" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/62f56230-c234-4c4d-a37e-0498f8590c2a/audio/f02231eb-424d-4204-b174-69322ead62cd/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>MCP Security Risks Multiply With Each New Agent Connection</itunes:title>
      <itunes:author>Tzvika Schneider, Pynt, Alex Williams, Frederic Lardinois, The New Stack</itunes:author>
      <itunes:duration>00:47:25</itunes:duration>
      <itunes:summary>Anthropic&apos;s Model Context Protocol (MCP) has become the standard for connecting AI agents to tools and data, but its security has lagged behind. In The New Stack Agents podcast, Tzvika Shneider, CEO of API security startup Pynt, discussed the growing risks MCP introduces. Shneider sees MCP as a natural evolution from traditional APIs to LLMs and now to AI agents. However, MCP adds complexity and vulnerability, especially as agents interact across multiple servers. </itunes:summary>
      <itunes:subtitle>Anthropic&apos;s Model Context Protocol (MCP) has become the standard for connecting AI agents to tools and data, but its security has lagged behind. In The New Stack Agents podcast, Tzvika Shneider, CEO of API security startup Pynt, discussed the growing risks MCP introduces. Shneider sees MCP as a natural evolution from traditional APIs to LLMs and now to AI agents. However, MCP adds complexity and vulnerability, especially as agents interact across multiple servers. </itunes:subtitle>
      <itunes:keywords>software developer, pynt, application security, tech podcast, the new stack, tech, developer podcast, software engineer, the new stack agents, model context protocol, tzvika shneider</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1546</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">66c138cb-8dcb-4d5f-baf1-aa78c09e0d81</guid>
      <title>Why Your ‘Data Exhaust’ Is Your Most Valuable Asset</title>
      <description><![CDATA[<p>Rahul Auradkar, executive VP and GM at Salesforce, grew up in India with a deep passion for cricket, and his love for the game sparked an early interest in data. This fascination with statistics laid the foundation for his current work leading Salesforce’s Data Cloud and Einstein (Unified Data Services) team. Auradkar reflects on how structured data has evolved—from relational databases in enterprise applications to data warehouses, data lakes, and lakehouses. He explains how initial efforts focused on analyzing structured data, which later fed back into business processes. </p><p>Eventually, businesses realized that the byproducts of data—what he calls "data exhaust"—were themselves valuable. The rise of "old AI," or predictive AI, shifted perceptions, showing that data exhaust could define the application itself. As varied systems emerged with distinct protocols and SQL variants, data silos formed, trapping valuable insights. Auradkar emphasizes that the ongoing challenge is unifying these silos to enable seamless, meaningful business interactions—something Salesforce aims to solve with its Data Cloud and agentic AI platform.</p><p>Learn more from The New Stack about the evolution of structured data and agentic AI: </p><p><a href="https://thenewstack.io/how-enterprises-and-startups-can-master-ai-with-smarter-data-practices/">How Enterprises and Startups Can Master AI With Smarter Data Practices </a></p><p><a href="https://thenewstack.io/enterprise-ai-success-demands-real-time-data-platforms/">Enterprise AI Success Demands Real-Time Data Platforms</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 21 Aug 2025 16:30:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Rahul Auradkar, Alex Williams, Salesforce, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/why-your-data-exhaust-is-your-most-valuable-asset-q8ZccSf2</link>
      <content:encoded><![CDATA[<p>Rahul Auradkar, executive VP and GM at Salesforce, grew up in India with a deep passion for cricket, and his love for the game sparked an early interest in data. This fascination with statistics laid the foundation for his current work leading Salesforce’s Data Cloud and Einstein (Unified Data Services) team. Auradkar reflects on how structured data has evolved—from relational databases in enterprise applications to data warehouses, data lakes, and lakehouses. He explains how initial efforts focused on analyzing structured data, which later fed back into business processes. </p><p>Eventually, businesses realized that the byproducts of data—what he calls "data exhaust"—were themselves valuable. The rise of "old AI," or predictive AI, shifted perceptions, showing that data exhaust could define the application itself. As varied systems emerged with distinct protocols and SQL variants, data silos formed, trapping valuable insights. Auradkar emphasizes that the ongoing challenge is unifying these silos to enable seamless, meaningful business interactions—something Salesforce aims to solve with its Data Cloud and agentic AI platform.</p><p>Learn more from The New Stack about the evolution of structured data and agentic AI: </p><p><a href="https://thenewstack.io/how-enterprises-and-startups-can-master-ai-with-smarter-data-practices/">How Enterprises and Startups Can Master AI With Smarter Data Practices </a></p><p><a href="https://thenewstack.io/enterprise-ai-success-demands-real-time-data-platforms/">Enterprise AI Success Demands Real-Time Data Platforms</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="29481212" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/af0ff38c-3d84-4894-b91a-a8a150cef8bf/audio/81cb90bd-4d6b-4c17-9fd9-af8e8154118e/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Why Your ‘Data Exhaust’ Is Your Most Valuable Asset</itunes:title>
      <itunes:author>Rahul Auradkar, Alex Williams, Salesforce, The New Stack</itunes:author>
      <itunes:duration>00:30:42</itunes:duration>
      <itunes:summary>Rahul Auradkar, executive VP and GM at Salesforce, grew up in India with a deep passion for cricket, and his love for the game sparked an early interest in data. This fascination with statistics laid the foundation for his current work leading Salesforce’s Data Cloud and Einstein (Unified Data Services) team. Auradkar reflects on how structured data has evolved—from relational databases in enterprise applications to data warehouses, data lakes, and lakehouses.</itunes:summary>
      <itunes:subtitle>Rahul Auradkar, executive VP and GM at Salesforce, grew up in India with a deep passion for cricket, and his love for the game sparked an early interest in data. This fascination with statistics laid the foundation for his current work leading Salesforce’s Data Cloud and Einstein (Unified Data Services) team. Auradkar reflects on how structured data has evolved—from relational databases in enterprise applications to data warehouses, data lakes, and lakehouses.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, alex williams, the new stack, ai developer, einstein (unified data services), developer podcast, the new stack makers, software engineer, salesforce, rahul auradkar, salesforce’s data cloud, structured data</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1545</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">458e94ac-eb15-40dc-b6ce-46aba36022cb</guid>
      <title>The Top AI Tool for Devs Isn’t GitHub Copilot, New Report Finds</title>
      <description><![CDATA[<p>In this week’s episode of The New Stack Agents, Scott Carey, editor-in-chief of LeadDev, discussed their first AI Impact Report, which explores how engineering teams are adopting AI tools. The report shows that two-thirds of developers are actively using AI, with another 20% in pilot stages and only 2% having no plans to use AI — a group Carey finds particularly intriguing. Popular tools include Cursor (43%) and GitHub Copilot (37%), with others like OpenAI, Gemini, and Claude following, while Amazon Q and Replit lag behind.</p><p>Most developers use AI for code generation, documentation, and research, but usage for DevOps tasks like testing, deployment, and IT automation remains low. Carey finds this underutilization frustrating, given AI's potential impact in these areas. The report also highlights concern for junior developers, with 54% of respondents expecting fewer future hires at that level. While many believe AI boosts productivity, some remain unsure — a sign that organizations still struggle to measure developer performance effectively.</p><p>Learn more from The New Stack about the latest insights on AI tool adoption: </p><p><a href="https://thenewstack.io/ai-adoption-why-businesses-struggle-to-move-from-development-to-production/">AI Adoption: Why Businesses Struggle to Move from Development to Production</a></p><p><a href="https://thenewstack.io/what-pair-programming-can-show-us-about-implementing-ai/">3 Strategies for Speeding Up AI Adoption Among Developers</a></p><p><a href="https://thenewstack.io/ai-everywhere-overcoming-barriers-to-adoption/">AI Everywhere: Overcoming Barriers to Adoption</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Fri, 15 Aug 2025 17:55:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Scott Carey, LeadDev, Frederic Lardinois, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-top-ai-tool-for-devs-isnt-github-copilot-new-report-finds-bei0R5Fg</link>
      <content:encoded><![CDATA[<p>In this week’s episode of The New Stack Agents, Scott Carey, editor-in-chief of LeadDev, discussed their first AI Impact Report, which explores how engineering teams are adopting AI tools. The report shows that two-thirds of developers are actively using AI, with another 20% in pilot stages and only 2% having no plans to use AI — a group Carey finds particularly intriguing. Popular tools include Cursor (43%) and GitHub Copilot (37%), with others like OpenAI, Gemini, and Claude following, while Amazon Q and Replit lag behind.</p><p>Most developers use AI for code generation, documentation, and research, but usage for DevOps tasks like testing, deployment, and IT automation remains low. Carey finds this underutilization frustrating, given AI's potential impact in these areas. The report also highlights concern for junior developers, with 54% of respondents expecting fewer future hires at that level. While many believe AI boosts productivity, some remain unsure — a sign that organizations still struggle to measure developer performance effectively.</p><p>Learn more from The New Stack about the latest insights on AI tool adoption: </p><p><a href="https://thenewstack.io/ai-adoption-why-businesses-struggle-to-move-from-development-to-production/">AI Adoption: Why Businesses Struggle to Move from Development to Production</a></p><p><a href="https://thenewstack.io/what-pair-programming-can-show-us-about-implementing-ai/">3 Strategies for Speeding Up AI Adoption Among Developers</a></p><p><a href="https://thenewstack.io/ai-everywhere-overcoming-barriers-to-adoption/">AI Everywhere: Overcoming Barriers to Adoption</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="35313414" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/118820f6-fcde-40a8-a222-ca5e7f17b022/audio/ca65032f-fcba-45f0-821a-8f85a290ad84/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The Top AI Tool for Devs Isn’t GitHub Copilot, New Report Finds</itunes:title>
      <itunes:author>Scott Carey, LeadDev, Frederic Lardinois, The New Stack</itunes:author>
      <itunes:duration>00:36:47</itunes:duration>
      <itunes:summary>In this week’s episode of The New Stack Agents, Scott Carey, editor-in-chief of LeadDev, discussed their first AI Impact Report, which explores how engineering teams are adopting AI tools. The report shows that two-thirds of developers are actively using AI, with another 20% in pilot stages and only 2% having no plans to use AI — a group Carey finds particularly intriguing. Popular tools include Cursor (43%) and GitHub Copilot (37%), with others like OpenAI, Gemini, and Claude following, while Amazon Q and Replit lag behind.</itunes:summary>
      <itunes:subtitle>In this week’s episode of The New Stack Agents, Scott Carey, editor-in-chief of LeadDev, discussed their first AI Impact Report, which explores how engineering teams are adopting AI tools. The report shows that two-thirds of developers are actively using AI, with another 20% in pilot stages and only 2% having no plans to use AI — a group Carey finds particularly intriguing. Popular tools include Cursor (43%) and GitHub Copilot (37%), with others like OpenAI, Gemini, and Claude following, while Amazon Q and Replit lag behind.</itunes:subtitle>
      <itunes:keywords>frederic lardinois, software developer, the new stack, ai developer, tech, scott carey, developer podcast, software engineer, the new stack agents, ai study, leaddev</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1544</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">734f5ff2-7048-4c77-af41-d651dd557b7f</guid>
      <title>Confronting AI’s Next Big Challenge: Inference Compute</title>
      <description><![CDATA[<p>While AI training garners most of the spotlight — and investment — the demands of <i>AI inference</i> are shaping up to be an even bigger challenge. In this episode of <i>The New Stack Makers</i>, Sid Sheth, founder and CEO of d-Matrix, argues that inference is anything but one-size-fits-all. Different use cases — from low-cost to high-interactivity or throughput-optimized — require tailored hardware, and existing GPU architectures aren’t built to address all these needs simultaneously.</p><p>“The world of inference is going to be truly heterogeneous,” Sheth said, meaning specialized hardware will be required to meet diverse performance profiles. A major bottleneck? The distance between memory and compute. Inference, especially in generative AI and agentic workflows, requires constant memory access, so minimizing the distance data must travel is key to improving performance and reducing cost.</p><p>To address this, d-Matrix developed <i>Corsair</i>, a modular platform where memory and compute are vertically stacked — “like pancakes” — enabling faster, more efficient inference. The result is scalable, flexible AI infrastructure purpose-built for inference at scale.</p><p>Learn more from The New Stack about inference compute and AI: </p><p><a href="https://thenewstack.io/scaling-ai-inference-at-the-edge-with-distributed-postgresql/">Scaling AI Inference at the Edge with Distributed PostgreSQL</a></p><p><a href="https://thenewstack.io/deep-infra-is-building-an-ai-inference-cloud-for-developers/">Deep Infra Is Building an AI Inference Cloud for Developers</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Wed, 6 Aug 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (d-Matrix, Sid Sheth, The New Stack, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/confronting-ais-next-big-challenge-inference-compute-hYe1mmPL</link>
      <content:encoded><![CDATA[<p>While AI training garners most of the spotlight — and investment — the demands of <i>AI inference</i> are shaping up to be an even bigger challenge. In this episode of <i>The New Stack Makers</i>, Sid Sheth, founder and CEO of d-Matrix, argues that inference is anything but one-size-fits-all. Different use cases — from low-cost to high-interactivity or throughput-optimized — require tailored hardware, and existing GPU architectures aren’t built to address all these needs simultaneously.</p><p>“The world of inference is going to be truly heterogeneous,” Sheth said, meaning specialized hardware will be required to meet diverse performance profiles. A major bottleneck? The distance between memory and compute. Inference, especially in generative AI and agentic workflows, requires constant memory access, so minimizing the distance data must travel is key to improving performance and reducing cost.</p><p>To address this, d-Matrix developed <i>Corsair</i>, a modular platform where memory and compute are vertically stacked — “like pancakes” — enabling faster, more efficient inference. The result is scalable, flexible AI infrastructure purpose-built for inference at scale.</p><p>Learn more from The New Stack about inference compute and AI: </p><p><a href="https://thenewstack.io/scaling-ai-inference-at-the-edge-with-distributed-postgresql/">Scaling AI Inference at the Edge with Distributed PostgreSQL</a></p><p><a href="https://thenewstack.io/deep-infra-is-building-an-ai-inference-cloud-for-developers/">Deep Infra Is Building an AI Inference Cloud for Developers</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="23277025" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/b3483aea-06a8-4660-bf9c-f0b8efdeca55/audio/dd3e7c4e-c764-43f4-a52a-25289d2e2775/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Confronting AI’s Next Big Challenge: Inference Compute</itunes:title>
      <itunes:author>d-Matrix, Sid Sheth, The New Stack, Heather Joslyn</itunes:author>
      <itunes:duration>00:24:14</itunes:duration>
      <itunes:summary>While AI training garners most of the spotlight — and investment — the demands of AI inference are shaping up to be an even bigger challenge. In this episode of The New Stack Makers, Sid Sheth, founder and CEO of d-Matrix, argues that inference is anything but one-size-fits-all. Different use cases — from low-cost to high-interactivity or throughput-optimized — require tailored hardware, and existing GPU architectures aren’t built to address all these needs simultaneously.</itunes:summary>
      <itunes:subtitle>While AI training garners most of the spotlight — and investment — the demands of AI inference are shaping up to be an even bigger challenge. In this episode of The New Stack Makers, Sid Sheth, founder and CEO of d-Matrix, argues that inference is anything but one-size-fits-all. Different use cases — from low-cost to high-interactivity or throughput-optimized — require tailored hardware, and existing GPU architectures aren’t built to address all these needs simultaneously.</itunes:subtitle>
      <itunes:keywords>genai, software developer, the new stack, sid sheth, ai developer, heather joslyn, corsair, tech, developer podcast, the new stack makers, software engineer, inference compute</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1543</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">726fed02-daa6-493b-9689-c704375da140</guid>
      <title>Databricks VP: Don’t Try to Speed AI Evolution through Brute Force</title>
      <description><![CDATA[<p>In the latest episode of <i>The New Stack Agents</i>, Naveen Rao, VP of AI at Databricks and a former neuroscientist, reflects on the evolution of AI, neural networks, and the energy constraints that define both biological and artificial intelligence. Rao, who once built circuit systems as a child and later studied the brain’s 20-watt efficiency at Duke and Brown, argues that current AI development—relying on massive energy-intensive data centers—is unsustainable. He believes true intelligence should emerge from low-power, efficient systems, more aligned with biological computing.</p><p>Rao warns that the industry is headed toward “model collapse,” where large language models (LLMs) begin training on AI-generated content instead of real-world data, leading to compounding inaccuracies and hallucinations. He stresses the importance of grounding AI in reality and moving beyond brute-force scaling. Rao sees intelligence not just as a function of computing power, but as a distributed, observational system—“life is a learning machine,” he says—hinting at a need to fundamentally rethink how we build AI.</p><p>Learn more from The New Stack about the latest insights about the evolution of AI and neural networks: </p><p><a href="https://thenewstack.io/the-50-year-story-of-the-rise-fall-and-rebirth-of-neural-networks/">The 50-Year Story of the Rise, Fall, and Rebirth of Neural Networks</a></p><p><a href="https://thenewstack.io/the-evolution-of-the-ai-stack-from-foundations-to-agents/">The Evolution of the AI Stack: From Foundation to Agents</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Mon, 4 Aug 2025 15:15:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Databricks, Frederic Lardinois, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/databricks-vp-dont-try-to-speed-ai-evolution-through-brute-force-ZfzyW7kz</link>
      <content:encoded><![CDATA[<p>In the latest episode of <i>The New Stack Agents</i>, Naveen Rao, VP of AI at Databricks and a former neuroscientist, reflects on the evolution of AI, neural networks, and the energy constraints that define both biological and artificial intelligence. Rao, who once built circuit systems as a child and later studied the brain’s 20-watt efficiency at Duke and Brown, argues that current AI development—relying on massive energy-intensive data centers—is unsustainable. He believes true intelligence should emerge from low-power, efficient systems, more aligned with biological computing.</p><p>Rao warns that the industry is headed toward “model collapse,” where large language models (LLMs) begin training on AI-generated content instead of real-world data, leading to compounding inaccuracies and hallucinations. He stresses the importance of grounding AI in reality and moving beyond brute-force scaling. Rao sees intelligence not just as a function of computing power, but as a distributed, observational system—“life is a learning machine,” he says—hinting at a need to fundamentally rethink how we build AI.</p><p>Learn more from The New Stack about the latest insights about the evolution of AI and neural networks: </p><p><a href="https://thenewstack.io/the-50-year-story-of-the-rise-fall-and-rebirth-of-neural-networks/">The 50-Year Story of the Rise, Fall, and Rebirth of Neural Networks</a></p><p><a href="https://thenewstack.io/the-evolution-of-the-ai-stack-from-foundations-to-agents/">The Evolution of the AI Stack: From Foundation to Agents</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="37120252" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/b5c433ee-fb20-490d-82dc-4935c00c4d4a/audio/f2f40222-c821-4ea9-8412-b63cdbcfaf1a/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Databricks VP: Don’t Try to Speed AI Evolution through Brute Force</itunes:title>
      <itunes:author>Databricks, Frederic Lardinois, The New Stack, Alex Williams</itunes:author>
      <itunes:duration>00:38:39</itunes:duration>
      <itunes:summary>In the latest episode of The New Stack Agents, Naveen Rao, VP of AI at Databricks and a former neuroscientist, reflects on the evolution of AI, neural networks, and the energy constraints that define both biological and artificial intelligence. Rao, who once built circuit systems as a child and later studied the brain’s 20-watt efficiency at Duke and Brown, argues that current AI development—relying on massive energy-intensive data centers—is unsustainable. He believes true intelligence should emerge from low-power, efficient systems, more aligned with biological computing.</itunes:summary>
      <itunes:subtitle>In the latest episode of The New Stack Agents, Naveen Rao, VP of AI at Databricks and a former neuroscientist, reflects on the evolution of AI, neural networks, and the energy constraints that define both biological and artificial intelligence. Rao, who once built circuit systems as a child and later studied the brain’s 20-watt efficiency at Duke and Brown, argues that current AI development—relying on massive energy-intensive data centers—is unsustainable. He believes true intelligence should emerge from low-power, efficient systems, more aligned with biological computing.</itunes:subtitle>
      <itunes:keywords>genai, frederic lardinois, tech podcast, alex williams, the new stack, ai developer, tech, developer podcast, databricks, software engineer, the new stack agents, ai podcast</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1542</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">0c3c92e6-c929-463b-8b38-a78b9bd0cb84</guid>
      <title>How Fal.ai Went From Inference Optimization to Hosting Image and Video Models</title>
      <description><![CDATA[<p>Fal.ai, once focused on machine learning infrastructure, has evolved into a major player in generative media. In this episode of <i>The New Stack Agents</i>, hosts speak with Fal.ai CEO Burkay Gur and investor Glenn Solomon of Notable Capital. Originally aiming to optimize Python runtimes, Fal.ai shifted direction as generative AI exploded, driven by tools like DALL·E and ChatGPT. Today, Fal.ai hosts hundreds of models—from image to audio and video—and emphasizes fast, optimized inference to meet growing demand.</p><p>Speed became Fal.ai’s competitive edge, especially as newer generative models require GPU power not just for training but also for inference. Solomon noted that while optimization alone isn't a sustainable business model, Fal’s value lies in speed and developer experience. Fal.ai offers both an easy-to-use web interface and developer-focused APIs, appealing to both technical and non-technical users.</p><p>Gur also addressed generative AI’s impact on creatives, arguing that while the <i>cost of creation</i> has plummeted, the <i>cost of creativity</i> remains—and may even increase as content becomes easier to produce.</p><p>Learn more from The New Stack about AI’s impact on creatives:</p><p><a href="https://thenewstack.io/ai-will-steal-developer-jobs-but-not-how-you-think/">AI Will Steal Developer Jobs (But Not How You Think) </a></p><p><a href="https://thenewstack.io/how-ai-agents-will-change-the-web-for-users-and-developers/">How AI Agents Will Change the Web for Users and Developers </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Fri, 25 Jul 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Burkay Gur, Glenn Solomon, Notable Capital, Frederic Lardinois, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-falai-went-from-inference-optimization-to-hosting-image-and-video-models-Asb3vNns</link>
      <content:encoded><![CDATA[<p>Fal.ai, once focused on machine learning infrastructure, has evolved into a major player in generative media. In this episode of <i>The New Stack Agents</i>, hosts speak with Fal.ai CEO Burkay Gur and investor Glenn Solomon of Notable Capital. Originally aiming to optimize Python runtimes, Fal.ai shifted direction as generative AI exploded, driven by tools like DALL·E and ChatGPT. Today, Fal.ai hosts hundreds of models—from image to audio and video—and emphasizes fast, optimized inference to meet growing demand.</p><p>Speed became Fal.ai’s competitive edge, especially as newer generative models require GPU power not just for training but also for inference. Solomon noted that while optimization alone isn't a sustainable business model, Fal’s value lies in speed and developer experience. Fal.ai offers both an easy-to-use web interface and developer-focused APIs, appealing to both technical and non-technical users.</p><p>Gur also addressed generative AI’s impact on creatives, arguing that while the <i>cost of creation</i> has plummeted, the <i>cost of creativity</i> remains—and may even increase as content becomes easier to produce.</p><p>Learn more from The New Stack about AI’s impact on creatives:</p><p><a href="https://thenewstack.io/ai-will-steal-developer-jobs-but-not-how-you-think/">AI Will Steal Developer Jobs (But Not How You Think) </a></p><p><a href="https://thenewstack.io/how-ai-agents-will-change-the-web-for-users-and-developers/">How AI Agents Will Change the Web for Users and Developers </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="50589404" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/5f6a4fe6-88ce-4d4b-a9f3-2209eb4bc7da/audio/c90f9549-d561-476f-95ca-d1ddd93dcb1f/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Fal.ai Went From Inference Optimization to Hosting Image and Video Models</itunes:title>
      <itunes:author>Burkay Gur, Glenn Solomon, Notable Capital, Frederic Lardinois, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:52:41</itunes:duration>
      <itunes:summary>Fal.ai, once focused on machine learning infrastructure, has evolved into a major player in generative media. In this episode of The New Stack Agents, hosts speak with Fal.ai CEO Burkay Gur and investor Glenn Solomon of Notable Capital. Originally aiming to optimize Python runtimes, Fal.ai shifted direction as generative AI exploded, driven by tools like DALL·E and ChatGPT. Today, Fal.ai hosts hundreds of models—from image to audio and video—and emphasizes fast, optimized inference to meet growing demand.</itunes:summary>
      <itunes:subtitle>Fal.ai, once focused on machine learning infrastructure, has evolved into a major player in generative media. In this episode of The New Stack Agents, hosts speak with Fal.ai CEO Burkay Gur and investor Glenn Solomon of Notable Capital. Originally aiming to optimize Python runtimes, Fal.ai shifted direction as generative AI exploded, driven by tools like DALL·E and ChatGPT. Today, Fal.ai hosts hundreds of models—from image to audio and video—and emphasizes fast, optimized inference to meet growing demand.</itunes:subtitle>
      <itunes:keywords>generative ai, notable capital, frederic lardinois, software developer, tech podcast, alex williams, the new stack, ai developer, ai developer podcast, tech, developer podcast, burkay gur, software engineer, the new stack agents, glenn solomon, infrastructure</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1541</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e9f79613-2bc4-47b5-98a3-3f7648c1e966</guid>
      <title>Why AI Agents Need a New Kind of Browser</title>
      <description><![CDATA[<p>Traditional headless browsers weren’t built for AI agents, often breaking when web elements shift even slightly. Paul Klein IV, founder of Browserbase and its open-source tool Stagehand, is tackling this by creating a browser infrastructure designed specifically for AI control. On <i>The New Stack Agents</i> podcast, Klein explained that Stagehand enables AI agents to interpret vague, natural-language instructions and still function reliably—even when web pages change. This flexibility contrasts with brittle legacy tools built for deterministic testing. Instead of writing 100 scripts for 100 websites, one AI-powered script can now handle thousands.</p><p>Klein’s broader vision is a world where AI can fully operate the web on behalf of users—automating tasks like filing taxes without human input. He acknowledges the technical challenges, from running browsers on servers to handling edge cases like time zones and emojis. The episode also touches on Klein’s concerns with AWS, which he says held a “partnership” meeting that felt more like corporate espionage. Still, Klein remains confident in Browserbase’s community-driven edge.</p><p>Learn more from The New Stack about the latest insights in AI browser-based tools:</p><p><a href="https://thenewstack.io/why-headless-browsers-are-a-key-technology-for-ai-agents/">Why Headless Browsers Are a Key Technology for AI Agents</a></p><p><a href="https://thenewstack.io/ladybird-that-rare-breed-of-browser-based-on-web-standards/">Ladybird: That Rare Breed of Browser Based on Web Standards</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Fri, 18 Jul 2025 17:15:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Paul Klein, Browserbase, The New Stack Agents, The New Stack, Alex Williams, Frederic Lardinois)</author>
      <link>https://thenewstack.simplecast.com/episodes/why-ai-agents-need-a-new-kind-of-browser-l4YL_SB2</link>
      <content:encoded><![CDATA[<p>Traditional headless browsers weren’t built for AI agents, often breaking when web elements shift even slightly. Paul Klein IV, founder of Browserbase and its open-source tool Stagehand, is tackling this by creating a browser infrastructure designed specifically for AI control. On <i>The New Stack Agents</i> podcast, Klein explained that Stagehand enables AI agents to interpret vague, natural-language instructions and still function reliably—even when web pages change. This flexibility contrasts with brittle legacy tools built for deterministic testing. Instead of writing 100 scripts for 100 websites, one AI-powered script can now handle thousands.</p><p>Klein’s broader vision is a world where AI can fully operate the web on behalf of users—automating tasks like filing taxes without human input. He acknowledges the technical challenges, from running browsers on servers to handling edge cases like time zones and emojis. The episode also touches on Klein’s concerns with AWS, which he says held a “partnership” meeting that felt more like corporate espionage. Still, Klein remains confident in Browserbase’s community-driven edge.</p><p>Learn more from The New Stack about the latest insights in AI browser-based tools:</p><p><a href="https://thenewstack.io/why-headless-browsers-are-a-key-technology-for-ai-agents/">Why Headless Browsers Are a Key Technology for AI Agents</a></p><p><a href="https://thenewstack.io/ladybird-that-rare-breed-of-browser-based-on-web-standards/">Ladybird: That Rare Breed of Browser Based on Web Standards</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="46986178" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/c92c5549-8c3f-4303-b961-68cfc006d8b6/audio/da9ffddd-aa08-4620-9b09-d532ce4abf37/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Why AI Agents Need a New Kind of Browser</itunes:title>
      <itunes:author>Paul Klein, Browserbase, The New Stack Agents, The New Stack, Alex Williams, Frederic Lardinois</itunes:author>
      <itunes:duration>00:48:56</itunes:duration>
      <itunes:summary>Traditional headless browsers weren’t built for AI agents, often breaking when web elements shift even slightly. Paul Klein IV, founder of Browserbase and its open-source tool Stagehand, is tackling this by creating a browser infrastructure designed specifically for AI control. On The New Stack Agents podcast, Klein explained that Stagehand enables AI agents to interpret vague, natural-language instructions and still function reliably—even when web pages change. This flexibility contrasts with brittle legacy tools built for deterministic testing. Instead of writing 100 scripts for 100 websites, one AI-powered script can now handle thousands.</itunes:summary>
      <itunes:subtitle>Traditional headless browsers weren’t built for AI agents, often breaking when web elements shift even slightly. Paul Klein IV, founder of Browserbase and its open-source tool Stagehand, is tackling this by creating a browser infrastructure designed specifically for AI control. On The New Stack Agents podcast, Klein explained that Stagehand enables AI agents to interpret vague, natural-language instructions and still function reliably—even when web pages change. This flexibility contrasts with brittle legacy tools built for deterministic testing. Instead of writing 100 scripts for 100 websites, one AI-powered script can now handle thousands.</itunes:subtitle>
      <itunes:keywords>browserbase, frederic lardinois, software developer, ai agents, browser, tech podcast, alex williams, the new stack, paul klein, open source natural language, browser based, tech, developer podcast, software engineer, ai startups, the new stack agents, ai podcast</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1540</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">81753549-8ef4-4e6f-b71a-f5da4ecd2020</guid>
      <title>How AWS is Working to Help Developers with AI Reality</title>
      <description><![CDATA[<p>In a recent episode of <i>The New Stack Agents</i> livestream, Antje Barth, AWS Developer Advocate for Generative AI, discussed the growing developer interest in building agentic and multi-agent systems. While foundational model knowledge is now common, Barth noted that developers are increasingly focused on tools, frameworks, and protocols for scaling agent-based applications. She emphasized the complexity of deploying such systems, particularly around navigating human-centric interfaces and minimizing latency in multi-agent communication.</p><p>Barth highlighted AWS’s support for developers through tools like Amazon Q CLI and the newly launched open-source Strands SDK, which AWS used internally to accelerate development cycles. Strands enables faster, flexible agentic system development, while services like Bedrock Agents offer a managed, enterprise-ready solution.</p><p>Security was another key theme. Barth stressed that safety must be a “day one” priority, with built-in support for authentication, secure communication, and observability. She encouraged developers to leverage AWS’s GenAI Innovation Center and active open-source communities to build robust, scalable, and secure agentic systems.</p><p>Learn more from The New Stack about AWS’s support for developers through tools that support multiple agents:</p><p><a href="https://thenewstack.io/code-in-your-native-tongue-amazon-q-developer-goes-global/">Code in Your Native Tongue: Amazon Q Developer Goes Global</a></p><p><a href="https://thenewstack.io/aws-launches-its-take-on-an-open-source-ai-agents-sdk/">AWS Launches Its Take on an Open Source AI Agents SDK</a></p><p><a href="https://thenewstack.io/amazons-bedrock-can-now-check-ai-for-hallucinations/">Amazon's Bedrock Can Now 'Check' AI for Hallucinations</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Fri, 11 Jul 2025 17:05:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Antje Barth, The New Stack, AWS, Alex Williams, Frederic Lardinois)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-aws-is-working-to-help-developers-with-ai-reality-rYm__V24</link>
      <content:encoded><![CDATA[<p>In a recent episode of <i>The New Stack Agents</i> livestream, Antje Barth, AWS Developer Advocate for Generative AI, discussed the growing developer interest in building agentic and multi-agent systems. While foundational model knowledge is now common, Barth noted that developers are increasingly focused on tools, frameworks, and protocols for scaling agent-based applications. She emphasized the complexity of deploying such systems, particularly around navigating human-centric interfaces and minimizing latency in multi-agent communication.</p><p>Barth highlighted AWS’s support for developers through tools like Amazon Q CLI and the newly launched open-source Strands SDK, which AWS used internally to accelerate development cycles. Strands enables faster, flexible agentic system development, while services like Bedrock Agents offer a managed, enterprise-ready solution.</p><p>Security was another key theme. Barth stressed that safety must be a “day one” priority, with built-in support for authentication, secure communication, and observability. She encouraged developers to leverage AWS’s GenAI Innovation Center and active open-source communities to build robust, scalable, and secure agentic systems.</p><p>Learn more from The New Stack about AWS’s support for developers through tools that support multiple agents:</p><p><a href="https://thenewstack.io/code-in-your-native-tongue-amazon-q-developer-goes-global/">Code in Your Native Tongue: Amazon Q Developer Goes Global</a></p><p><a href="https://thenewstack.io/aws-launches-its-take-on-an-open-source-ai-agents-sdk/">AWS Launches Its Take on an Open Source AI Agents SDK</a></p><p><a href="https://thenewstack.io/amazons-bedrock-can-now-check-ai-for-hallucinations/">Amazon's Bedrock Can Now 'Check' AI for Hallucinations</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="39187896" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/c4a6bd66-23e7-4463-80f1-180c72eddc43/audio/4ab0e216-325f-4f3d-9ea2-d33bb5a3c695/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How AWS is Working to Help Developers with AI Reality</itunes:title>
      <itunes:author>Antje Barth, The New Stack, AWS, Alex Williams, Frederic Lardinois</itunes:author>
      <itunes:duration>00:40:49</itunes:duration>
      <itunes:summary>In a recent episode of The New Stack Agents livestream, Antje Barth, AWS Developer Advocate for Generative AI, discussed the growing developer interest in building agentic and multi-agent systems. While foundational model knowledge is now common, Barth noted that developers are increasingly focused on tools, frameworks, and protocols for scaling agent-based applications. She emphasized the complexity of deploying such systems, particularly around navigating human-centric interfaces and minimizing latency in multi-agent communication.</itunes:summary>
      <itunes:subtitle>In a recent episode of The New Stack Agents livestream, Antje Barth, AWS Developer Advocate for Generative AI, discussed the growing developer interest in building agentic and multi-agent systems. While foundational model knowledge is now common, Barth noted that developers are increasingly focused on tools, frameworks, and protocols for scaling agent-based applications. She emphasized the complexity of deploying such systems, particularly around navigating human-centric interfaces and minimizing latency in multi-agent communication.</itunes:subtitle>
      <itunes:keywords>antje barth, frederic lardinois, software developer, ai agents, tech podcast, alex williams, the new stack, agentic ai, tech, developer podcast, multi-agent systems, bedrock agents, software engineer, amazon q cli, aws, ai podcast</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1539</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b6021b41-c611-420b-bbdf-bde7bd457fac</guid>
      <title>How Shortwave Wants To Reinvent Email With AI</title>
      <description><![CDATA[<p>In this episode of <i>The New Stack Agents</i>, Andrew Lee, co-founder of Shortwave and Firebase, discusses the evolution of his Gmail-centric email client into an AI-first platform. Initially launched in 2020 with traditional improvements like better threading and search, Shortwave pivoted to agentic AI after the rise of large language models (LLMs). Early features like summarization and translation garnered hype but lacked deep utility.</p><p>However, as models improved in 2023—especially Anthropic’s Claude 3.5 Sonnet—Shortwave leaned heavily into tool-calling agents that could execute complex, multi-step tasks autonomously. Lee notes Anthropic’s lead in this area, especially in chaining tools intelligently, unlike earlier models from OpenAI. Still, challenges remain with managing large numbers of tools without breaking model reasoning.</p><p>Looking ahead, Lee envisions AI that can take proactive actions—like responding to emails—and dynamically generate interfaces tailored to tasks in real time. This shift could fundamentally reshape how productivity apps work, with Shortwave aiming to be at the forefront of that transformation.</p><p>Learn more from The New Stack about the power of AI at scale:</p><p><a href="https://thenewstack.io/why-streaming-is-the-power-grid-for-ai-native-data-platforms/">Why Streaming Is the Power Grid for AI-Native Data Platforms</a></p><p><a href="https://thenewstack.io/companies-must-embrace-bespoke-ai-designed-for-it-workflows/">Companies Must Embrace Bespoke AI Designed for IT Workflows</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 03 Jul 2025 15:55:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Andrew Lee, Shortwave, The New Stack, Frederic Lardinois)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-shortwave-wants-to-reinvent-email-with-ai-hyQ9pfY7</link>
      <content:encoded><![CDATA[<p>In this episode of <i>The New Stack Agents</i>, Andrew Lee, co-founder of Shortwave and Firebase, discusses the evolution of his Gmail-centric email client into an AI-first platform. Initially launched in 2020 with traditional improvements like better threading and search, Shortwave pivoted to agentic AI after the rise of large language models (LLMs). Early features like summarization and translation garnered hype but lacked deep utility.</p><p>However, as models improved in 2023—especially Anthropic’s Claude 3.5 Sonnet—Shortwave leaned heavily into tool-calling agents that could execute complex, multi-step tasks autonomously. Lee notes Anthropic’s lead in this area, especially in chaining tools intelligently, unlike earlier models from OpenAI. Still, challenges remain with managing large numbers of tools without breaking model reasoning.</p><p>Looking ahead, Lee envisions AI that can take proactive actions—like responding to emails—and dynamically generate interfaces tailored to tasks in real time. This shift could fundamentally reshape how productivity apps work, with Shortwave aiming to be at the forefront of that transformation.</p><p>Learn more from The New Stack about the power of AI at scale:</p><p><a href="https://thenewstack.io/why-streaming-is-the-power-grid-for-ai-native-data-platforms/">Why Streaming Is the Power Grid for AI-Native Data Platforms</a></p><p><a href="https://thenewstack.io/companies-must-embrace-bespoke-ai-designed-for-it-workflows/">Companies Must Embrace Bespoke AI Designed for IT Workflows</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="34834015" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/fe08c0e6-00ff-4c00-9edc-d613793e385a/audio/7d0a1f5d-e480-4230-a7d9-a1d688c0c345/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Shortwave Wants To Reinvent Email With AI</itunes:title>
      <itunes:author>Andrew Lee, Shortwave, The New Stack, Frederic Lardinois</itunes:author>
      <itunes:duration>00:36:17</itunes:duration>
      <itunes:summary>In this episode of The New Stack Agents, Andrew Lee, co-founder of Shortwave and Firebase, discusses the evolution of his Gmail-centric email client into an AI-first platform. Initially launched in 2020 with traditional improvements like better threading and search, Shortwave pivoted to agentic AI after the rise of large language models (LLMs). Early features like summarization and translation garnered hype but lacked deep utility. </itunes:summary>
      <itunes:subtitle>In this episode of The New Stack Agents, Andrew Lee, co-founder of Shortwave and Firebase, discusses the evolution of his Gmail-centric email client into an AI-first platform. Initially launched in 2020 with traditional improvements like better threading and search, Shortwave pivoted to agentic AI after the rise of large language models (LLMs). Early features like summarization and translation garnered hype but lacked deep utility. </itunes:subtitle>
      <itunes:keywords>enterprise ai, andrew lee, ai agents, tech podcast, the new stack, agentic ai, tech, developer podcast, software engineer, the new stack agents, shortwave, email</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1538</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a2dfea37-6c19-421b-9502-5d284261260a</guid>
      <title>Cracking the Complexity: Teleport CEO Pushes Identity-First Security</title>
      <description><![CDATA[<p>In this on-the-road episode of The New Stack Makers, Editor in Chief Heather Joslyn speaks with Ev Kontsevoy, CEO and co-founder of Teleport, from the floor of KubeCon + CloudNativeCon Europe in London. The discussion centers on infrastructure security and the growing need for robust identity management. Citing alarming cybersecurity statistics—such as the $5 million average cost of a breach and rising attack frequency—Kontsevoy stresses that complexity is the root challenge in securing infrastructure.</p><p>Today’s environments involve countless layers and technologies, each with its own identity and access controls, increasing the risk of human error and breaches. Kontsevoy argues for treating all entities—humans, laptops, servers, AI agents—as identities managed under a unified framework. Teleport provides a zero trust access platform that enforces strong, cryptographically backed identity across systems.</p><p>He also highlights Teleport’s version 17 release, which boosts support for non-human identities and integrates deeply with AWS. Looking ahead, Teleport is exploring support for emerging AI agent protocols like MCP to extend its identity-first approach.</p><p>Learn more from The New Stack about Teleport:</p><p><a href="https://thenewstack.io/removing-the-complexity-to-securely-access-the-infrastructure/">Removing the Complexity to Securely Access the Infrastructure</a></p><p><a href="https://thenewstack.io/why-ai-cant-protect-you-from-ai-generated-attacks/">Why AI Can’t Protect You from AI-Generated Attacks</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>.</p>
]]></description>
      <pubDate>Wed, 18 Jun 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Teleport, Ev Kontsevoy, The New Stack, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/cracking-the-complexity-teleport-ceo-pushes-identity-first-security-mC0SOHAe</link>
      <content:encoded><![CDATA[<p>In this on-the-road episode of The New Stack Makers, Editor in Chief Heather Joslyn speaks with Ev Kontsevoy, CEO and co-founder of Teleport, from the floor of KubeCon + CloudNativeCon Europe in London. The discussion centers on infrastructure security and the growing need for robust identity management. Citing alarming cybersecurity statistics—such as the $5 million average cost of a breach and rising attack frequency—Kontsevoy stresses that complexity is the root challenge in securing infrastructure.</p><p>Today’s environments involve countless layers and technologies, each with its own identity and access controls, increasing the risk of human error and breaches. Kontsevoy argues for treating all entities—humans, laptops, servers, AI agents—as identities managed under a unified framework. Teleport provides a zero trust access platform that enforces strong, cryptographically backed identity across systems.</p><p>He also highlights Teleport’s version 17 release, which boosts support for non-human identities and integrates deeply with AWS. Looking ahead, Teleport is exploring support for emerging AI agent protocols like MCP to extend its identity-first approach.</p><p>Learn more from The New Stack about Teleport:</p><p><a href="https://thenewstack.io/removing-the-complexity-to-securely-access-the-infrastructure/">Removing the Complexity to Securely Access the Infrastructure</a></p><p><a href="https://thenewstack.io/why-ai-cant-protect-you-from-ai-generated-attacks/">Why AI Can’t Protect You from AI-Generated Attacks</a></p><p><a href="https://thenewstack.io/newsletter">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>.</p>
]]></content:encoded>
      <enclosure length="20285692" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/d3ff2636-bede-4f03-91cf-a0423a797192/audio/35d28901-4f99-46f4-83c9-634f2f586747/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Cracking the Complexity: Teleport CEO Pushes Identity-First Security</itunes:title>
      <itunes:author>Teleport, Ev Kontsevoy, The New Stack, Heather Joslyn</itunes:author>
      <itunes:duration>00:21:07</itunes:duration>
      <itunes:summary>In this on-the-road episode of The New Stack Makers, Editor in Chief Heather Joslyn speaks with Ev Kontsevoy, CEO and co-founder of Teleport, from the floor of KubeCon + CloudNativeCon Europe in London. The discussion centers on infrastructure security and the growing need for robust identity management. Citing alarming cybersecurity statistics—such as the $5 million average cost of a breach and rising attack frequency—Kontsevoy stresses that complexity is the root challenge in securing infrastructure. </itunes:summary>
      <itunes:subtitle>In this on-the-road episode of The New Stack Makers, Editor in Chief Heather Joslyn speaks with Ev Kontsevoy, CEO and co-founder of Teleport, from the floor of KubeCon + CloudNativeCon Europe in London. The discussion centers on infrastructure security and the growing need for robust identity management. Citing alarming cybersecurity statistics—such as the $5 million average cost of a breach and rising attack frequency—Kontsevoy stresses that complexity is the root challenge in securing infrastructure. </itunes:subtitle>
      <itunes:keywords>identity, secure infrastructure, software developer, cybersecurity, access, teleport, tech podcast, the new stack, ai agents protocols, tech, developer podcast, complexity, the new stack makers, ev kontsevoy, software engineer, kubecon london, security, infrastructure</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1536</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">72d12082-09e6-4905-9bc1-6048b8450f1c</guid>
      <title>No SSH? What is Talos, this Linux Distro for Kubernetes?</title>
      <description><![CDATA[<p>Container-based Linux distributions are gaining traction, especially for edge deployments that demand lightweight and secure operating systems. Talos Linux, developed by Sidero Labs, is purpose-built for Kubernetes with security-first features like a fully immutable file system and disabled SSH access. In a demo, Sidero CTO Andrew Rynhard and Head of Product Justin Garrison explained Talos’s design philosophy, highlighting its minimalism and focus on automation. Inspired by CoreOS, Talos removes traditional tools like systemd and Bash, replacing them with machineD, a custom process manager written in Go.</p><p>Talos emphasizes API-driven management rather than SSH, making Kubernetes cluster operations more scalable and consistent. Its design supports cloud, bare metal, Docker, and edge devices like Raspberry Pi. Kernel immutability is reinforced by ephemeral signing keys. Through Sidero's Omni SaaS, Talos nodes connect securely via WireGuard. The operating system handles all certificates and network connectivity internally, streamlining security and deployment. As Garrison notes, Talos delivers a portable API for “big iron, small iron—no matter what.”</p><p>Learn more from The New Stack about Sidero Labs:</p><p><a href="https://thenewstack.io/is-cluster-api-really-the-future-of-kubernetes-deployment/">Is Cluster API Really the Future of Kubernetes Deployment?</a></p><p><a href="https://thenewstack.io/choosing-a-linux-distribution/">Choosing a Linux Distribution</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 12 Jun 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Sidero Labs, Andrew Rynhard, Justin Garrison, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/no-ssh-what-is-talos-this-linux-distro-for-kubernetes-sIQLI7GK</link>
      <content:encoded><![CDATA[<p>Container-based Linux distributions are gaining traction, especially for edge deployments that demand lightweight and secure operating systems. Talos Linux, developed by Sidero Labs, is purpose-built for Kubernetes with security-first features like a fully immutable file system and disabled SSH access. In a demo, Sidero CTO Andrew Rynhard and Head of Product Justin Garrison explained Talos’s design philosophy, highlighting its minimalism and focus on automation. Inspired by CoreOS, Talos removes traditional tools like systemd and Bash, replacing them with machineD, a custom process manager written in Go.</p><p>Talos emphasizes API-driven management rather than SSH, making Kubernetes cluster operations more scalable and consistent. Its design supports cloud, bare metal, Docker, and edge devices like Raspberry Pi. Kernel immutability is reinforced by ephemeral signing keys. Through Sidero's Omni SaaS, Talos nodes connect securely via WireGuard. The operating system handles all certificates and network connectivity internally, streamlining security and deployment. As Garrison notes, Talos delivers a portable API for “big iron, small iron—no matter what.”</p><p>Learn more from The New Stack about Sidero Labs:</p><p><a href="https://thenewstack.io/is-cluster-api-really-the-future-of-kubernetes-deployment/">Is Cluster API Really the Future of Kubernetes Deployment?</a></p><p><a href="https://thenewstack.io/choosing-a-linux-distribution/">Choosing a Linux Distribution</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="18618034" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/435b2538-0b1c-46c7-a58f-d30861edf314/audio/7a3a5b14-f345-4810-b0fc-0c5c5156dd0e/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>No SSH? What is Talos, this Linux Distro for Kubernetes?</itunes:title>
      <itunes:author>Sidero Labs, Andrew Rynhard, Justin Garrison, Alex Williams</itunes:author>
      <itunes:duration>00:19:23</itunes:duration>
      <itunes:summary>Container-based Linux distributions are gaining traction, especially for edge deployments that demand lightweight and secure operating systems. Talos Linux, developed by Sidero Labs, is purpose-built for Kubernetes with security-first features like a fully immutable file system and disabled SSH access. In a demo, Sidero CTO Andrew Rynhard and Head of Product Justin Garrison explained Talos’s design philosophy, highlighting its minimalism and focus on automation. Inspired by CoreOS, Talos removes traditional tools like systemd and Bash, replacing them with machineD, a custom process manager written in Go.</itunes:summary>
      <itunes:subtitle>Container-based Linux distributions are gaining traction, especially for edge deployments that demand lightweight and secure operating systems. Talos Linux, developed by Sidero Labs, is purpose-built for Kubernetes with security-first features like a fully immutable file system and disabled SSH access. In a demo, Sidero CTO Andrew Rynhard and Head of Product Justin Garrison explained Talos’s design philosophy, highlighting its minimalism and focus on automation. Inspired by CoreOS, Talos removes traditional tools like systemd and Bash, replacing them with machineD, a custom process manager written in Go.</itunes:subtitle>
      <itunes:keywords>talos linux, justin garrison, sidero labs, andrew rynhard, tech podcast, linux, linux distro, tech, developer podcast, ssh, kubernetes, the new stack makers, software engineer, open source, demo, new software engineer, container based linux distribution</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1535</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">88f86139-cc78-46b7-9bb5-09cee3d28900</guid>
      <title>Aptori Is Building an Agentic AI Security Engineer</title>
      <description><![CDATA[<p>AI agents hold the promise of continuously testing, scanning, and fixing code for security vulnerabilities, but we're still progressing toward that vision. Startups like Aptori are helping bridge the gap by building AI-powered security engineers for enterprises. Aptori maps an organization’s codebase, APIs, and cloud infrastructure in real time to understand data flows and authorization logic, allowing it to detect and eventually remediate security issues. </p><p>At Google Cloud Next, Aptori CEO Sumeet Singh discussed how earlier tools merely alerted developers to issues—often overwhelming them—but newer models like Gemini 2.5 Flash and Claude Sonnet 4 are improving automated code fixes, making them more practical. Singh and co-founder Travis Newhouse previously built AppFormix, which automated OpenStack cloud operations before being acquired by Juniper Networks. Their experiences with slow release cycles due to security bottlenecks inspired Aptori’s focus. While the goal is autonomous agents, Singh emphasizes the need for transparency and deterministic elements in AI tools to ensure trust and reliability in enterprise security workflows.</p><p>Learn more from The New Stack about the latest insights in AI application security: </p><p><a href="https://thenewstack.io/ai-is-changing-cybersecurity-fast-and-most-analysts-arent-ready/">AI Is Changing Cybersecurity Fast and Most Analysts Aren’t Ready</a></p><p><a href="https://thenewstack.io/ai-security-agents-combat-ai-generated-code-risks/">AI Security Agents Combat AI-Generated Code Risks</a></p><p><a href="https://thenewstack.io/developers-are-embracing-ai-to-streamline-threat-detection-and-stay-ahead/">Developers Are Embracing AI To Streamline Threat Detection and Stay Ahead</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Tue, 03 Jun 2025 16:45:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack Podcast)</author>
      <link>https://thenewstack.simplecast.com/episodes/aptori-is-building-an-agentic-ai-security-engineer-YC8UPH7x</link>
      <content:encoded><![CDATA[<p>AI agents hold the promise of continuously testing, scanning, and fixing code for security vulnerabilities, but we're still progressing toward that vision. Startups like Aptori are helping bridge the gap by building AI-powered security engineers for enterprises. Aptori maps an organization’s codebase, APIs, and cloud infrastructure in real time to understand data flows and authorization logic, allowing it to detect and eventually remediate security issues. </p><p>At Google Cloud Next, Aptori CEO Sumeet Singh discussed how earlier tools merely alerted developers to issues—often overwhelming them—but newer models like Gemini 2.5 Flash and Claude Sonnet 4 are improving automated code fixes, making them more practical. Singh and co-founder Travis Newhouse previously built AppFormix, which automated OpenStack cloud operations before being acquired by Juniper Networks. Their experiences with slow release cycles due to security bottlenecks inspired Aptori’s focus. While the goal is autonomous agents, Singh emphasizes the need for transparency and deterministic elements in AI tools to ensure trust and reliability in enterprise security workflows.</p><p>Learn more from The New Stack about the latest insights in AI application security: </p><p><a href="https://thenewstack.io/ai-is-changing-cybersecurity-fast-and-most-analysts-arent-ready/">AI Is Changing Cybersecurity Fast and Most Analysts Aren’t Ready</a></p><p><a href="https://thenewstack.io/ai-security-agents-combat-ai-generated-code-risks/">AI Security Agents Combat AI-Generated Code Risks</a></p><p><a href="https://thenewstack.io/developers-are-embracing-ai-to-streamline-threat-detection-and-stay-ahead/">Developers Are Embracing AI To Streamline Threat Detection and Stay Ahead</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="17310729" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/71763a7c-1e9d-42fc-8e34-6a7fc48213d2/audio/c3561732-8620-42f6-b788-211ee5b0c9d2/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Aptori Is Building an Agentic AI Security Engineer</itunes:title>
      <itunes:author>The New Stack Podcast</itunes:author>
      <itunes:duration>00:18:01</itunes:duration>
      <itunes:summary>AI agents hold the promise of continuously testing, scanning, and fixing code for security vulnerabilities, but we&apos;re still progressing toward that vision. Startups like Aptori are helping bridge the gap by building AI-powered security engineers for enterprises. Aptori maps an organization’s codebase, APIs, and cloud infrastructure in real time to understand data flows and authorization logic, allowing it to detect and eventually remediate security issues. </itunes:summary>
      <itunes:subtitle>AI agents hold the promise of continuously testing, scanning, and fixing code for security vulnerabilities, but we&apos;re still progressing toward that vision. Startups like Aptori are helping bridge the gap by building AI-powered security engineers for enterprises. Aptori maps an organization’s codebase, APIs, and cloud infrastructure in real time to understand data flows and authorization logic, allowing it to detect and eventually remediate security issues. </itunes:subtitle>
      <itunes:keywords>sumeet singh, software developer, tech podcast, google cloud, the new stack, tech, developer podcast, the new stack makers, software engineer, aptori, open stack, google cloud next</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1534</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">070f5a70-286d-4ff7-bba7-b6cd8f0c0171</guid>
      <title>The AI Code Generation Problem Nobody&apos;s Talking About</title>
      <description><![CDATA[<p>In this episode of <i>The New Stack Makers</i>, Nitric CEO Steve Demchuk discusses how the frustration of building frontend apps within rigid FinTech environments led to the creation of the Nitric framework — a tool designed to eliminate the friction between developers and cloud infrastructure. Unlike traditional Infrastructure as Code (IaC), where developers must manage both app logic and infrastructure definitions separately, Nitric introduces “Infrastructure from Code.” This approach allows developers to focus solely on application logic while the platform infers and automates infrastructure needs using SDKs and CLI tools across multiple languages and cloud providers.</p><p>Demchuk emphasizes that Nitric doesn't remove platform team control but enforces it consistently. Guardrails defined by platform teams guide infrastructure provisioning, ensuring security and compliance — even as developers use AI tools to rapidly generate code. The result is a streamlined workflow where developers move faster, AI enhances productivity, and platform teams retain oversight. This episode offers engineering leaders insight into a paradigm shift in how cloud infrastructure is managed in the AI era.</p><p>Learn more from The New Stack about the latest insights about Nitric: </p><p><a href="https://thenewstack.io/building-a-serverless-meme-generator-with-nitric-and-openai/">Building a Serverless Meme Generator With Nitric and OpenAI</a></p><p><a href="https://thenewstack.io/why-most-companies-are-struggling-with-infrastructure-as-code/">Why Most Companies Are Struggling With Infrastructure as Code</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 29 May 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Steve Demchuk, Nitric, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-ai-code-generation-problem-nobodys-talking-about-UdBPKnUD</link>
      <content:encoded><![CDATA[<p>In this episode of <i>The New Stack Makers</i>, Nitric CEO Steve Demchuk discusses how the frustration of building frontend apps within rigid FinTech environments led to the creation of the Nitric framework — a tool designed to eliminate the friction between developers and cloud infrastructure. Unlike traditional Infrastructure as Code (IaC), where developers must manage both app logic and infrastructure definitions separately, Nitric introduces “Infrastructure from Code.” This approach allows developers to focus solely on application logic while the platform infers and automates infrastructure needs using SDKs and CLI tools across multiple languages and cloud providers.</p><p>Demchuk emphasizes that Nitric doesn't remove platform team control but enforces it consistently. Guardrails defined by platform teams guide infrastructure provisioning, ensuring security and compliance — even as developers use AI tools to rapidly generate code. The result is a streamlined workflow where developers move faster, AI enhances productivity, and platform teams retain oversight. This episode offers engineering leaders insight into a paradigm shift in how cloud infrastructure is managed in the AI era.</p><p>Learn more from The New Stack about the latest insights about Nitric: </p><p><a href="https://thenewstack.io/building-a-serverless-meme-generator-with-nitric-and-openai/">Building a Serverless Meme Generator With Nitric and OpenAI</a></p><p><a href="https://thenewstack.io/why-most-companies-are-struggling-with-infrastructure-as-code/">Why Most Companies Are Struggling With Infrastructure as Code</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="18690830" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/fb5d2ec5-dcd1-4ee1-ab42-8c75c2875885/audio/d039f316-dada-43f5-80c6-6629d257d582/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The AI Code Generation Problem Nobody&apos;s Talking About</itunes:title>
      <itunes:author>Steve Demchuk, Nitric, The New Stack, Alex Williams</itunes:author>
      <itunes:duration>00:19:28</itunes:duration>
      <itunes:summary>In this episode of The New Stack Makers, Nitric CEO Steve Demchuk discusses how the frustration of building frontend apps within rigid FinTech environments led to the creation of the Nitric framework — a tool designed to eliminate the friction between developers and cloud infrastructure. Unlike traditional Infrastructure as Code (IaC), where developers must manage both app logic and infrastructure definitions separately, Nitric introduces “Infrastructure from Code.” This approach allows developers to focus solely on application logic while the platform infers and automates infrastructure needs using SDKs and CLI tools across multiple languages and cloud providers.</itunes:summary>
      <itunes:subtitle>In this episode of The New Stack Makers, Nitric CEO Steve Demchuk discusses how the frustration of building frontend apps within rigid FinTech environments led to the creation of the Nitric framework — a tool designed to eliminate the friction between developers and cloud infrastructure. Unlike traditional Infrastructure as Code (IaC), where developers must manage both app logic and infrastructure definitions separately, Nitric introduces “Infrastructure from Code.” This approach allows developers to focus solely on application logic while the platform infers and automates infrastructure needs using SDKs and CLI tools across multiple languages and cloud providers.</itunes:subtitle>
      <itunes:keywords>nitric, software developer, infrastructure from code, tech podcast, the new stack, fintech, tech, developer podcast, infrastructure as code, the new stack makers, software engineer, steve demchuk</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1533</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7d6d79b0-e2e5-4895-95a0-1b04ca7b9dd6</guid>
      <title>The New Bottleneck: AI That Codes Faster Than Humans Can Review</title>
      <description><![CDATA[<p>CodeRabbit, led by founder Harjot Gill, is tackling one of software development's biggest bottlenecks: the human code review process. While AI coding tools like GitHub Copilot have sped up code generation, they’ve inadvertently slowed down shipping due to increased complexity in code reviews. Developers now often review AI-generated code they didn’t write, leading to misunderstandings, bugs, and security risks. In an episode of <i>The New Stack Makers</i>, Gill discusses how CodeRabbit leverages advanced reasoning models—OpenAI’s o1, o3 mini, and Anthropic’s Claude series—to automate and enhance code reviews. </p><p>Unlike rigid, rule-based static analysis tools, CodeRabbit builds rich context at scale by spinning up sandbox environments for pull requests and allowing AI agents to navigate codebases like human reviewers. These agents can run CLI commands, analyze syntax trees, and pull in external context from Jira or vulnerability databases. Gill envisions a hybrid future where AI handles the grunt work of code review, empowering humans to focus on architecture and intent—ultimately reducing bugs, delays, and development costs.</p><p>Learn more from The New Stack about the latest insights about AI code reviews: </p><p><a href="https://thenewstack.io/coderabbits-ai-code-reviews-now-live-free-in-vs-code-cursor/">CodeRabbit's AI Code Reviews Now Live Free in VS Code, Cursor</a></p><p><a href="https://thenewstack.io/ai-coding-agents-level-up-from-helpers-to-team-players/">AI Coding Agents Level Up from Helpers to Team Players</a></p><p><a href="https://thenewstack.io/augment-code-an-ai-coding-tool-for-real-development-work/">Augment Code: An AI Coding Tool for 'Real' Development Work</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Tue, 27 May 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Harjot Gill, Alex Williams, The New Stack, CodeRabbit)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-new-bottleneck-ai-that-codes-faster-than-humans-can-review-nahx_4Ra</link>
      <content:encoded><![CDATA[<p>CodeRabbit, led by founder Harjot Gill, is tackling one of software development's biggest bottlenecks: the human code review process. While AI coding tools like GitHub Copilot have sped up code generation, they’ve inadvertently slowed down shipping due to increased complexity in code reviews. Developers now often review AI-generated code they didn’t write, leading to misunderstandings, bugs, and security risks. In an episode of <i>The New Stack Makers</i>, Gill discusses how CodeRabbit leverages advanced reasoning models—OpenAI’s o1, o3 mini, and Anthropic’s Claude series—to automate and enhance code reviews. </p><p>Unlike rigid, rule-based static analysis tools, CodeRabbit builds rich context at scale by spinning up sandbox environments for pull requests and allowing AI agents to navigate codebases like human reviewers. These agents can run CLI commands, analyze syntax trees, and pull in external context from Jira or vulnerability databases. Gill envisions a hybrid future where AI handles the grunt work of code review, empowering humans to focus on architecture and intent—ultimately reducing bugs, delays, and development costs.</p><p>Learn more from The New Stack about the latest insights about AI code reviews: </p><p><a href="https://thenewstack.io/coderabbits-ai-code-reviews-now-live-free-in-vs-code-cursor/">CodeRabbit's AI Code Reviews Now Live Free in VS Code, Cursor</a></p><p><a href="https://thenewstack.io/ai-coding-agents-level-up-from-helpers-to-team-players/">AI Coding Agents Level Up from Helpers to Team Players</a></p><p><a href="https://thenewstack.io/augment-code-an-ai-coding-tool-for-real-development-work/">Augment Code: An AI Coding Tool for 'Real' Development Work</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="19484117" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/becef644-7a2f-44ac-a219-ade09a9a1aee/audio/1fbf6f84-2b5b-424c-a575-786920404dda/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The New Bottleneck: AI That Codes Faster Than Humans Can Review</itunes:title>
      <itunes:author>Harjot Gill, Alex Williams, The New Stack, CodeRabbit</itunes:author>
      <itunes:duration>00:20:17</itunes:duration>
      <itunes:summary>CodeRabbit, led by founder Harjot Gill, is tackling one of software development&apos;s biggest bottlenecks: the human code review process. While AI coding tools like GitHub Copilot have sped up code generation, they’ve inadvertently slowed down shipping due to increased complexity in code reviews. Developers now often review AI-generated code they didn’t write, leading to misunderstandings, bugs, and security risks. In an episode of The New Stack Makers, Gill discusses how CodeRabbit leverages advanced reasoning models—OpenAI’s o1, o3 mini, and Anthropic’s Claude series—to automate and enhance code reviews. </itunes:summary>
      <itunes:subtitle>CodeRabbit, led by founder Harjot Gill, is tackling one of software development&apos;s biggest bottlenecks: the human code review process. While AI coding tools like GitHub Copilot have sped up code generation, they’ve inadvertently slowed down shipping due to increased complexity in code reviews. Developers now often review AI-generated code they didn’t write, leading to misunderstandings, bugs, and security risks. In an episode of The New Stack Makers, Gill discusses how CodeRabbit leverages advanced reasoning models—OpenAI’s o1, o3 mini, and Anthropic’s Claude series—to automate and enhance code reviews. </itunes:subtitle>
      <itunes:keywords>coderabbit, software developer, harjot gill, tech podcast, the new stack, ai coding tools, human code review, ai code review, tech, developer podcast, the new stack makers, google cloud next</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1532</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">12f64a73-0874-43ee-91ad-baa5f70272ce</guid>
      <title>Google Cloud Next Wrap-Up</title>
      <description><![CDATA[<p>At the close of this year’s Google Cloud Next, The New Stack’s Alex Williams, AI editor Frederic Lardinois, and analyst Janakiram MSV discussed the event’s dominant theme: AI agents. The conversation focused heavily on agent frameworks, noting a shift from last year's third-party tools like LangChain, CrewAI, and Microsoft’s AutoGen, to first-party offerings from model providers themselves. Google’s newly announced Agent Development Kit (ADK) highlights this trend, following closely on the heels of OpenAI’s agent SDK. MSV emphasized the significance of this shift, calling it a major milestone as Google joins the race alongside Microsoft and OpenAI. </p><p>Despite the buzz, Lardinois pointed out that many companies are still exploring how AI agents can fit into real-world workflows. The panel also highlighted how Google now delivers a full-stack AI development experience — from models to deployment platforms like Vertex AI. New enterprise tools like Agent Space and Agent Garden further signal Google’s commitment to making agents a core part of modern software development. </p><p>Learn more from The New Stack about the latest in AI agents: </p><p><a href="https://thenewstack.io/how-ai-agents-will-change-the-web-for-users-and-developers/">How AI Agents Will Change the Web for Users and Developers</a></p><p><a href="https://thenewstack.io/ai-agents-a-comprehensive-introduction-for-developers/">AI Agents: A Comprehensive Introduction for Developers</a></p><p><a href="https://thenewstack.io/ai-agents-are-coming-for-your-saas-stack/">AI Agents Are Coming for Your SaaS Stack</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 22 May 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Frederic Lardinois, The New Stack, Janakiram &amp; Associates, Janakiram MSV, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/google-cloud-next-wrap-up-PJPrsGVD</link>
      <content:encoded><![CDATA[<p>At the close of this year’s Google Cloud Next, The New Stack’s Alex Williams, AI editor Frederic Lardinois, and analyst Janakiram MSV discussed the event’s dominant theme: AI agents. The conversation focused heavily on agent frameworks, noting a shift from last year's third-party tools like LangChain, CrewAI, and Microsoft’s AutoGen, to first-party offerings from model providers themselves. Google’s newly announced Agent Development Kit (ADK) highlights this trend, following closely on the heels of OpenAI’s agent SDK. MSV emphasized the significance of this shift, calling it a major milestone as Google joins the race alongside Microsoft and OpenAI. </p><p>Despite the buzz, Lardinois pointed out that many companies are still exploring how AI agents can fit into real-world workflows. The panel also highlighted how Google now delivers a full-stack AI development experience — from models to deployment platforms like Vertex AI. New enterprise tools like Agent Space and Agent Garden further signal Google’s commitment to making agents a core part of modern software development. </p><p>Learn more from The New Stack about the latest in AI agents: </p><p><a href="https://thenewstack.io/how-ai-agents-will-change-the-web-for-users-and-developers/">How AI Agents Will Change the Web for Users and Developers</a></p><p><a href="https://thenewstack.io/ai-agents-a-comprehensive-introduction-for-developers/">AI Agents: A Comprehensive Introduction for Developers</a></p><p><a href="https://thenewstack.io/ai-agents-are-coming-for-your-saas-stack/">AI Agents Are Coming for Your SaaS Stack</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="17642171" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/0ef351b3-f19f-4295-a1cf-1e1f8219731c/audio/b47397ed-2ed5-4aaf-a6d3-ef26e36bd592/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Google Cloud Next Wrap-Up</itunes:title>
      <itunes:author>Frederic Lardinois, The New Stack, Janakiram &amp; Associates, Janakiram MSV, Alex Williams</itunes:author>
      <itunes:duration>00:18:22</itunes:duration>
      <itunes:summary>At the close of this year’s Google Cloud Next, The New Stack’s Alex Williams, AI editor Frederic Lardinois, and analyst Janakiram MSV discussed the event’s dominant theme: AI agents. The conversation focused heavily on agent frameworks, noting a shift from last year&apos;s third-party tools like LangChain, CrewAI, and Microsoft’s AutoGen, to first-party offerings from model providers themselves. Google’s newly announced Agent Development Kit (ADK) highlights this trend, following closely on the heels of OpenAI’s agent SDK. MSV emphasized the significance of this shift, calling it a major milestone as Google joins the race alongside Microsoft and OpenAI. </itunes:summary>
      <itunes:subtitle>At the close of this year’s Google Cloud Next, The New Stack’s Alex Williams, AI editor Frederic Lardinois, and analyst Janakiram MSV discussed the event’s dominant theme: AI agents. The conversation focused heavily on agent frameworks, noting a shift from last year&apos;s third-party tools like LangChain, CrewAI, and Microsoft’s AutoGen, to first-party offerings from model providers themselves. Google’s newly announced Agent Development Kit (ADK) highlights this trend, following closely on the heels of OpenAI’s agent SDK. MSV emphasized the significance of this shift, calling it a major milestone as Google joins the race alongside Microsoft and OpenAI. </itunes:subtitle>
      <itunes:keywords>software developer, ai agents, tech podcast, google cloud, software engineer podcast, the new stack, tech, janakiram msv, the new stack makers, software engineer, google cloud next, janakiram &amp; associates</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1531</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7020c7a4-382a-4862-bd9e-6ebfb370e7af</guid>
      <title>Agentic AI and A2A in 2025: From Prompts to Processes</title>
      <description><![CDATA[<p>Agentic AI represents the next phase beyond generative AI, promising systems that not only generate content but also take autonomous actions within business processes. In a conversation recorded at Google Cloud Next, Kevin Laughridge of Deloitte explains that businesses are moving from AI pilots to production-scale deployments. Agentic AI enables decision-making, reasoning, and action across complex enterprise environments, reducing the need for constant human input. </p><p>A key enabler is Google’s newly announced open Agent2Agent (A2A) protocol, which allows AI agents from different vendors to communicate and collaborate securely across platforms. Over 50 companies, including PayPal, Salesforce, and Atlassian, are already adopting it. However, deploying agentic AI at scale requires more than individual tools—it demands an AI platform with runtime frameworks, UIs, and connectors. These platforms allow enterprises to integrate agents across clouds and systems, paving the way for AI that is collaborative, adaptive, and embedded in core operations. As AI becomes foundational, developers are transitioning from coding to architecting dynamic, learning systems.</p><p>Learn more from The New Stack about the latest insights about Agent2Agent Protocol: </p><p><a href="https://thenewstack.io/googles-agent2agent-protocol-helps-ai-agents-talk-to-each-other/">Google’s Agent2Agent Protocol Helps AI Agents Talk to Each Other</a></p><p><a href="https://thenewstack.io/a2a-mcp-kafka-and-flink-the-new-stack-for-ai-agents/">A2A, MCP, Kafka and Flink: The New Stack for AI Agents</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Tue, 20 May 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Kevin Laughridge, Deloitte, Google, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/agentic-ai-and-a2a-in-2025-from-prompts-to-processes-wofRmVOi</link>
      <content:encoded><![CDATA[<p>Agentic AI represents the next phase beyond generative AI, promising systems that not only generate content but also take autonomous actions within business processes. In a conversation recorded at Google Cloud Next, Kevin Laughridge of Deloitte explains that businesses are moving from AI pilots to production-scale deployments. Agentic AI enables decision-making, reasoning, and action across complex enterprise environments, reducing the need for constant human input. </p><p>A key enabler is Google’s newly announced open Agent2Agent (A2A) protocol, which allows AI agents from different vendors to communicate and collaborate securely across platforms. Over 50 companies, including PayPal, Salesforce, and Atlassian, are already adopting it. However, deploying agentic AI at scale requires more than individual tools—it demands an AI platform with runtime frameworks, UIs, and connectors. These platforms allow enterprises to integrate agents across clouds and systems, paving the way for AI that is collaborative, adaptive, and embedded in core operations. As AI becomes foundational, developers are transitioning from coding to architecting dynamic, learning systems.</p><p>Learn more from The New Stack about the latest insights about Agent2Agent Protocol: </p><p><a href="https://thenewstack.io/googles-agent2agent-protocol-helps-ai-agents-talk-to-each-other/">Google’s Agent2Agent Protocol Helps AI Agents Talk to Each Other</a></p><p><a href="https://thenewstack.io/a2a-mcp-kafka-and-flink-the-new-stack-for-ai-agents/">A2A, MCP, Kafka and Flink: The New Stack for AI Agents</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="18541548" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/94c92e0a-034e-4bb7-8e64-da98b82367b5/audio/51afbd60-6e1a-47da-9598-fc0ab892e99f/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Agentic AI and A2A in 2025: From Prompts to Processes</itunes:title>
      <itunes:author>Kevin Laughridge, Deloitte, Google, The New Stack, Alex Williams</itunes:author>
      <itunes:duration>00:19:18</itunes:duration>
      <itunes:summary>Agentic AI represents the next phase beyond generative AI, promising systems that not only generate content but also take autonomous actions within business processes. In a conversation recorded at Google Cloud Next, Kevin Laughridge of Deloitte explains that businesses are moving from AI pilots to production-scale deployments. Agentic AI enables decision-making, reasoning, and action across complex enterprise environments, reducing the need for constant human input. </itunes:summary>
      <itunes:subtitle>Agentic AI represents the next phase beyond generative AI, promising systems that not only generate content but also take autonomous actions within business processes. In a conversation recorded at Google Cloud Next, Kevin Laughridge of Deloitte explains that businesses are moving from AI pilots to production-scale deployments. Agentic AI enables decision-making, reasoning, and action across complex enterprise environments, reducing the need for constant human input. </itunes:subtitle>
      <itunes:keywords>agent2agent, software developer, ai agents, google, tech podcast, the new stack, agentic ai, deloitte, tech, developer podcast, the new stack makers, kevin laughridge, google cloud next</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1530</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2a6411f2-f2f2-4a30-89c7-f2076bd70c7c</guid>
      <title>Your AI Coding Buddy Is Always Available at 2 a.m.</title>
      <description><![CDATA[<p>Aja Hammerly, director of developer relations at Google, sees AI as the always-available coding partner developers have long wished for—especially in those late-night bursts of inspiration. In a conversation with Alex Williams at Google Cloud Next, she described AI-assisted coding as akin to having a virtual pair programmer who can fill in gaps and offer real-time support. </p><p>Hammerly urges developers to start their AI journey with tools that assist in code writing and explanation before moving into more complex AI agents. She distinguishes two types of DevEx AI: using AI to build apps and using it to eliminate developer toil. For Hammerly, this includes letting AI handle frontend work while she focuses on backend logic. The newly launched Firebase Studio exemplifies this dual approach, offering an AI-enhanced IDE with flexible tools like prototyping, code completion, and automation. Her advice? Developers should explore how AI fits into their unique workflow—because development, at its core, is deeply personal and individual.</p><p>Learn more from The New Stack about the latest AI insights with Google Cloud:</p><p><a href="https://thenewstack.io/google-ai-coding-tool-now-free-with-90x-copilots-output/">Google AI Coding Tool Now Free, With 90x Copilot’s Output</a></p><p><a href="https://thenewstack.io/gemini-2-5-pro-googles-coding-genius-gets-an-upgrade/">Gemini 2.5 Pro: Google’s Coding Genius Gets an Upgrade</a></p><p><a href="https://thenewstack.io/qa-how-google-itself-uses-its-gemini-large-language-model/">Q&A: How Google Itself Uses Its Gemini Large Language Model</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 15 May 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Aja Hammerly, Google, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/your-ai-coding-buddy-is-always-available-at-2-am-dh-xslv-9bJgvW3q</link>
      <content:encoded><![CDATA[<p>Aja Hammerly, director of developer relations at Google, sees AI as the always-available coding partner developers have long wished for—especially in those late-night bursts of inspiration. In a conversation with Alex Williams at Google Cloud Next, she described AI-assisted coding as akin to having a virtual pair programmer who can fill in gaps and offer real-time support. </p><p>Hammerly urges developers to start their AI journey with tools that assist in code writing and explanation before moving into more complex AI agents. She distinguishes two types of DevEx AI: using AI to build apps and using it to eliminate developer toil. For Hammerly, this includes letting AI handle frontend work while she focuses on backend logic. The newly launched Firebase Studio exemplifies this dual approach, offering an AI-enhanced IDE with flexible tools like prototyping, code completion, and automation. Her advice? Developers should explore how AI fits into their unique workflow—because development, at its core, is deeply personal and individual.</p><p>Learn more from The New Stack about the latest AI insights with Google Cloud:</p><p><a href="https://thenewstack.io/google-ai-coding-tool-now-free-with-90x-copilots-output/">Google AI Coding Tool Now Free, With 90x Copilot’s Output</a></p><p><a href="https://thenewstack.io/gemini-2-5-pro-googles-coding-genius-gets-an-upgrade/">Gemini 2.5 Pro: Google’s Coding Genius Gets an Upgrade</a></p><p><a href="https://thenewstack.io/qa-how-google-itself-uses-its-gemini-large-language-model/">Q&A: How Google Itself Uses Its Gemini Large Language Model</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="19896224" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/f2d94c86-e2b9-4dc4-a9bd-fc7ce9cb0417/audio/549bd4c8-a90f-4c0c-be00-92449e06c8aa/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Your AI Coding Buddy Is Always Available at 2 a.m.</itunes:title>
      <itunes:author>Aja Hammerly, Google, The New Stack, Alex Williams</itunes:author>
      <itunes:duration>00:20:43</itunes:duration>
      <itunes:summary>Aja Hammerly, director of developer relations at Google, sees AI as the always-available coding partner developers have long wished for—especially in those late-night bursts of inspiration. In a conversation with Alex Williams at Google Cloud Next, she described AI-assisted coding as akin to having a virtual pair programmer who can fill in gaps and offer real-time support. </itunes:summary>
      <itunes:subtitle>Aja Hammerly, director of developer relations at Google, sees AI as the always-available coding partner developers have long wished for—especially in those late-night bursts of inspiration. In a conversation with Alex Williams at Google Cloud Next, she described AI-assisted coding as akin to having a virtual pair programmer who can fill in gaps and offer real-time support. </itunes:subtitle>
      <itunes:keywords>software developer, google, tech podcast, the new stack, ai-assisted coding, ide, tech, developer podcast, software development, the new stack makers, software engineer, aja hammerly, google cloud next</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1529</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">856c4e3c-2890-469f-8c1a-933432df4809</guid>
      <title>Google AI Infrastructure PM On New TPUs, Liquid Cooling and More</title>
      <description><![CDATA[<p>At Google Cloud Next '25, the company introduced Ironwood, its most advanced custom Tensor Processing Unit (TPU) to date. With 9,216 chips per pod delivering 42.5 exaflops of compute power, Ironwood doubles the performance per watt compared to its predecessor. Senior product manager Chelsie Czop explained that designing TPUs involves balancing power, thermal constraints, and interconnectivity. </p><p>Google's long-term investment in liquid cooling, now in its fourth generation, plays a key role in managing the heat generated by these powerful chips. Czop highlighted the incremental design improvements made visible through changes in the data center setup, such as liquid cooling pipe placements. Customers often ask whether to use TPUs or GPUs, but the answer depends on their specific workloads and infrastructure. Some, like Moloco, have seen a 10x performance boost by moving directly from CPUs to TPUs. However, many still use both TPUs and GPUs. As models evolve faster than hardware, Google relies on collaborations with teams like DeepMind to anticipate future needs.</p><p>Learn more from The New Stack about the latest AI infrastructure insights from Google Cloud:</p><p><a href="https://thenewstack.io/google-cloud-therapist-on-bringing-ai-to-cloud-native-infrastructure/">Google Cloud Therapist on Bringing AI to Cloud Native Infrastructure</a></p><p><a href="https://thenewstack.io/a2a-mcp-kafka-and-flink-the-new-stack-for-ai-agents/">A2A, MCP, Kafka and Flink: The New Stack for AI Agents</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></description>
      <pubDate>Tue, 13 May 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Chelsie Czop, Google, Frederic Lardinois, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/google-ai-infrastructure-pm-on-new-tpus-liquid-cooling-and-more-MspV9_DP</link>
      <content:encoded><![CDATA[<p>At Google Cloud Next '25, the company introduced Ironwood, its most advanced custom Tensor Processing Unit (TPU) to date. With 9,216 chips per pod delivering 42.5 exaflops of compute power, Ironwood doubles the performance per watt compared to its predecessor. Senior product manager Chelsie Czop explained that designing TPUs involves balancing power, thermal constraints, and interconnectivity. </p><p>Google's long-term investment in liquid cooling, now in its fourth generation, plays a key role in managing the heat generated by these powerful chips. Czop highlighted the incremental design improvements made visible through changes in the data center setup, such as liquid cooling pipe placements. Customers often ask whether to use TPUs or GPUs, but the answer depends on their specific workloads and infrastructure. Some, like Moloco, have seen a 10x performance boost by moving directly from CPUs to TPUs. However, many still use both TPUs and GPUs. As models evolve faster than hardware, Google relies on collaborations with teams like DeepMind to anticipate future needs.</p><p>Learn more from The New Stack about the latest AI infrastructure insights from Google Cloud:</p><p><a href="https://thenewstack.io/google-cloud-therapist-on-bringing-ai-to-cloud-native-infrastructure/">Google Cloud Therapist on Bringing AI to Cloud Native Infrastructure</a></p><p><a href="https://thenewstack.io/a2a-mcp-kafka-and-flink-the-new-stack-for-ai-agents/">A2A, MCP, Kafka and Flink: The New Stack for AI Agents</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></content:encoded>
      <enclosure length="18861357" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/73e661fc-342e-4278-af82-96ef0d3d34c8/audio/685decd8-3e59-4e18-b76f-b40c1b90cd1a/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Google AI Infrastructure PM On New TPUs, Liquid Cooling and More</itunes:title>
      <itunes:author>Chelsie Czop, Google, Frederic Lardinois, The New Stack</itunes:author>
      <itunes:duration>00:19:38</itunes:duration>
      <itunes:summary>At Google Cloud Next &apos;25, the company introduced Ironwood, its most advanced custom Tensor Processing Unit (TPU) to date. With 9,216 chips per pod delivering 42.5 exaflops of compute power, Ironwood doubles the performance per watt compared to its predecessor. Senior product manager Chelsie Czop explained that designing TPUs involves balancing power, thermal constraints, and interconnectivity. </itunes:summary>
      <itunes:subtitle>At Google Cloud Next &apos;25, the company introduced Ironwood, its most advanced custom Tensor Processing Unit (TPU) to date. With 9,216 chips per pod delivering 42.5 exaflops of compute power, Ironwood doubles the performance per watt compared to its predecessor. Senior product manager Chelsie Czop explained that designing TPUs involves balancing power, thermal constraints, and interconnectivity. </itunes:subtitle>
      <itunes:keywords>tensor processing unit, frederic lardinois, software developer, google, tech podcast, the new stack, tech, developer podcast, chelsie czop, the new stack makers, software engineer, ironwood, liquid cooling, ai infrastructure, tpu</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1528</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6eccd4bd-f2d6-45f1-8e87-be743ae8cdca</guid>
      <title>Google Cloud Therapist on Bringing AI to Cloud Native Infrastructure</title>
      <description><![CDATA[<p>At Google Cloud Next, Bobby Allen, Group Product Manager for Google Kubernetes Engine (GKE), emphasized GKE’s foundational role in supporting AI platforms. While AI dominates current tech conversations, Allen highlighted that cloud-native infrastructure like Kubernetes is what enables AI workloads to function efficiently. GKE powers key Google services like Vertex AI and is trusted by organizations including DeepMind, gaming companies, and healthcare providers for AI model training and inference. </p><p>Allen explained that GKE offers scalability, elasticity, and support for AI-specific hardware like GPUs and TPUs, making it ideal for modern workloads. He noted that Kubernetes was built with capabilities—like high availability and secure orchestration—that are now essential for AI deployment. Looking forward, GKE aims to evolve into a model router, allowing developers to access the right AI model based on function, not vendor, streamlining the development experience. Allen described GKE as offering maximum control with minimal technical debt, future-proofed by Google’s continued investment in open source and scalable architecture.</p><p>Learn more from The New Stack about the latest insights with Google Cloud: </p><p><a href="https://thenewstack.io/google-kubernetes-engine-customized-for-faster-ai-work/">Google Kubernetes Engine Customized for Faster AI Work</a></p><p><a href="https://thenewstack.io/kubecon-europe-how-google-will-evolve-kubernetes-in-ai-era/">KubeCon Europe: How Google Will Evolve Kubernetes in the AI Era</a></p><p><a href="https://thenewstack.io/apache-ray-finds-a-home-on-the-google-kubernetes-engine/">Apache Ray Finds a Home on the Google Kubernetes Engine</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 8 May 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Bobby Allen, Frederic Lardinois, The New Stack, Google)</author>
      <link>https://thenewstack.simplecast.com/episodes/google-cloud-therapist-on-bringing-ai-to-cloud-native-infrastructure-ubK2yFCC</link>
      <content:encoded><![CDATA[<p>At Google Cloud Next, Bobby Allen, Group Product Manager for Google Kubernetes Engine (GKE), emphasized GKE’s foundational role in supporting AI platforms. While AI dominates current tech conversations, Allen highlighted that cloud-native infrastructure like Kubernetes is what enables AI workloads to function efficiently. GKE powers key Google services like Vertex AI and is trusted by organizations including DeepMind, gaming companies, and healthcare providers for AI model training and inference. </p><p>Allen explained that GKE offers scalability, elasticity, and support for AI-specific hardware like GPUs and TPUs, making it ideal for modern workloads. He noted that Kubernetes was built with capabilities—like high availability and secure orchestration—that are now essential for AI deployment. Looking forward, GKE aims to evolve into a model router, allowing developers to access the right AI model based on function, not vendor, streamlining the development experience. Allen described GKE as offering maximum control with minimal technical debt, future-proofed by Google’s continued investment in open source and scalable architecture.</p><p>Learn more from The New Stack about the latest insights with Google Cloud: </p><p><a href="https://thenewstack.io/google-kubernetes-engine-customized-for-faster-ai-work/">Google Kubernetes Engine Customized for Faster AI Work</a></p><p><a href="https://thenewstack.io/kubecon-europe-how-google-will-evolve-kubernetes-in-ai-era/">KubeCon Europe: How Google Will Evolve Kubernetes in the AI Era</a></p><p><a href="https://thenewstack.io/apache-ray-finds-a-home-on-the-google-kubernetes-engine/">Apache Ray Finds a Home on the Google Kubernetes Engine</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="23118688" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/1e5da4ea-dd5e-40c8-8b9c-d940bacf9567/audio/c7f6270e-226e-444e-a7e0-c21ea33f1874/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Google Cloud Therapist on Bringing AI to Cloud Native Infrastructure</itunes:title>
      <itunes:author>Bobby Allen, Frederic Lardinois, The New Stack, Google</itunes:author>
      <itunes:duration>00:24:04</itunes:duration>
      <itunes:summary>At Google Cloud Next, Bobby Allen, Group Product Manager for Google Kubernetes Engine (GKE), emphasized GKE’s foundational role in supporting AI platforms. While AI dominates current tech conversations, Allen highlighted that cloud-native infrastructure like Kubernetes is what enables AI workloads to function efficiently. GKE powers key Google services like Vertex AI and is trusted by organizations including DeepMind, gaming companies, and healthcare providers for AI model training and inference. </itunes:summary>
      <itunes:subtitle>At Google Cloud Next, Bobby Allen, Group Product Manager for Google Kubernetes Engine (GKE), emphasized GKE’s foundational role in supporting AI platforms. While AI dominates current tech conversations, Allen highlighted that cloud-native infrastructure like Kubernetes is what enables AI workloads to function efficiently. GKE powers key Google services like Vertex AI and is trusted by organizations including DeepMind, gaming companies, and healthcare providers for AI model training and inference. </itunes:subtitle>
      <itunes:keywords>frederic lardinois, software developer, google kubernetes engine, google, tech podcast, google cloud, the new stack, ai workloads, tech, bobby allen, ai development, developer podcast, kubernetes, the new stack makers, software engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1527</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">daedfae6-c263-4e01-a5c1-1f452fb586b9</guid>
      <title>VMware&apos;s Kubernetes Evolution: Quashing Complexity</title>
      <description><![CDATA[<p>Kubernetes adoption remains a challenge for many organizations, with 46% citing complexity and lack of training as key barriers, despite 84% evaluating or using it in production. Paul Turner, VP of Products at VMware Cloud Foundation (VCF), explained that making Kubernetes truly usable requires a full platform, not just a runtime. Without this, developers waste time managing infrastructure instead of focusing on code. VMware addresses this with VCF, a pre-integrated Kubernetes solution that includes components like Harbor, Velero, and Istio, all managed by VMware. While some worry about added complexity from abstraction, Turner dismissed concerns about virtualization overhead, pointing to benchmarks showing 98.3% of bare metal performance for virtualized AI workloads. He emphasized that AI is driving nearly half of Kubernetes deployments, prompting VMware’s partnership with Nvidia to support GPU virtualization. </p><p>Turner also highlighted VMware's open source leadership, contributing to major projects and ensuring Kubernetes remains cloud-independent and standards-based. VMware aims to simplify Kubernetes and AI workload management while staying committed to the open ecosystem.</p><p>Learn more from The New Stack about the latest insights with VMware: </p><p><a href="https://thenewstack.io/has-vmware-finally-caught-up-with-kubernetes/">Has VMware Finally Caught Up With Kubernetes?</a></p><p><a href="https://thenewstack.io/vmwares-golden-path/">VMware’s Golden Path</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Tue, 6 May 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Paul Turner, The New Stack, VMware, Broadcom, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/vmwares-kubernetes-evolution-quashing-complexity-s_KR8evX</link>
      <content:encoded><![CDATA[<p>Kubernetes adoption remains a challenge for many organizations, with 46% citing complexity and lack of training as key barriers, despite 84% evaluating or using it in production. Paul Turner, VP of Products at VMware Cloud Foundation (VCF), explained that making Kubernetes truly usable requires a full platform, not just a runtime. Without this, developers waste time managing infrastructure instead of focusing on code. VMware addresses this with VCF, a pre-integrated Kubernetes solution that includes components like Harbor, Velero, and Istio, all managed by VMware. While some worry about added complexity from abstraction, Turner dismissed concerns about virtualization overhead, pointing to benchmarks showing 98.3% of bare metal performance for virtualized AI workloads. He emphasized that AI is driving nearly half of Kubernetes deployments, prompting VMware’s partnership with Nvidia to support GPU virtualization. </p><p>Turner also highlighted VMware's open source leadership, contributing to major projects and ensuring Kubernetes remains cloud-independent and standards-based. VMware aims to simplify Kubernetes and AI workload management while staying committed to the open ecosystem.</p><p>Learn more from The New Stack about the latest insights with VMware: </p><p><a href="https://thenewstack.io/has-vmware-finally-caught-up-with-kubernetes/">Has VMware Finally Caught Up With Kubernetes?</a></p><p><a href="https://thenewstack.io/vmwares-golden-path/">VMware’s Golden Path</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="29452790" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/3a42e716-16e4-4509-8bb3-d45a45509111/audio/29fa2cce-a7f9-496a-b9a8-8bdc81f66f09/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>VMware&apos;s Kubernetes Evolution: Quashing Complexity</itunes:title>
      <itunes:author>Paul Turner, The New Stack, VMware, Broadcom, Alex Williams</itunes:author>
      <itunes:duration>00:30:40</itunes:duration>
      <itunes:summary>Kubernetes adoption remains a challenge for many organizations, with 46% citing complexity and lack of training as key barriers, despite 84% evaluating or using it in production. Paul Turner, VP of Products at VMware Cloud Foundation (VCF), explained that making Kubernetes truly usable requires a full platform, not just a runtime. </itunes:summary>
      <itunes:subtitle>Kubernetes adoption remains a challenge for many organizations, with 46% citing complexity and lack of training as key barriers, despite 84% evaluating or using it in production. Paul Turner, VP of Products at VMware Cloud Foundation (VCF), explained that making Kubernetes truly usable requires a full platform, not just a runtime. </itunes:subtitle>
      <itunes:keywords>vmware, software developer, tech podcast, the new stack, tech, ai development, broadcom, developer podcast, paul turner, kubernetes, the new stack makers, software engineer, open source</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1526</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">964aa93a-d908-4cae-b0b7-a75f37c0f36b</guid>
      <title>Prequel: Software Errors Be Gone</title>
      <description><![CDATA[<p>Prequel is launching a new developer-focused service aimed at democratizing software error detection—an area typically dominated by large cloud providers. Co-founded by Lyndon Brown and Tony Meehan, both former NSA engineers, Prequel introduces a community-driven observability approach centered on <i>Common Reliability Enumerations</i> (CREs). CREs categorize recurring production issues, helping engineers detect, understand, and communicate problems without reinventing solutions or working in isolation. Their open-source tools, <strong>cre</strong> and <strong>prereq</strong>, allow teams to build and share detectors that catch bugs and anti-patterns in real time—without exposing sensitive data, thanks to edge processing using WebAssembly.</p><p>The urgency behind Prequel’s mission stems from the rapid pace of AI-driven development, increased third-party code usage, and rising infrastructure costs. Traditional observability tools may surface symptoms, but Prequel aims to provide precise problem definitions and actionable insights. While observability giants like Datadog and Splunk dominate the market, Brown and Meehan argue that engineers still feel overwhelmed by data and underpowered in diagnostics—something they believe CREs can finally change.</p><p>Learn more from The New Stack about the latest observability insights: </p><p><a href="https://thenewstack.io/why-consolidating-observability-tools-is-a-smart-move/">Why Consolidating Observability Tools Is a Smart Move</a></p><p><a href="https://thenewstack.io/why-a-culture-of-observability-is-key-to-technology-success/">Building an Observability Culture: Getting Everyone Onboard</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></description>
      <pubDate>Mon, 5 May 2025 16:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Prequel, Lyndon Brown, Tony Meehan, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/prequel-software-errors-be-gone-g7NeY9z1</link>
      <content:encoded><![CDATA[<p>Prequel is launching a new developer-focused service aimed at democratizing software error detection—an area typically dominated by large cloud providers. Co-founded by Lyndon Brown and Tony Meehan, both former NSA engineers, Prequel introduces a community-driven observability approach centered on <i>Common Reliability Enumerations</i> (CREs). CREs categorize recurring production issues, helping engineers detect, understand, and communicate problems without reinventing solutions or working in isolation. Their open-source tools, <strong>cre</strong> and <strong>prereq</strong>, allow teams to build and share detectors that catch bugs and anti-patterns in real time—without exposing sensitive data, thanks to edge processing using WebAssembly.</p><p>The urgency behind Prequel’s mission stems from the rapid pace of AI-driven development, increased third-party code usage, and rising infrastructure costs. Traditional observability tools may surface symptoms, but Prequel aims to provide precise problem definitions and actionable insights. While observability giants like Datadog and Splunk dominate the market, Brown and Meehan argue that engineers still feel overwhelmed by data and underpowered in diagnostics—something they believe CREs can finally change.</p><p>Learn more from The New Stack about the latest observability insights: </p><p><a href="https://thenewstack.io/why-consolidating-observability-tools-is-a-smart-move/">Why Consolidating Observability Tools Is a Smart Move</a></p><p><a href="https://thenewstack.io/why-a-culture-of-observability-is-key-to-technology-success/">Building an Observability Culture: Getting Everyone Onboard</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="5011373" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/7636d400-2aaf-4995-a456-f859370fa965/audio/2fbf132a-b823-4507-acfa-74caf7f687a3/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Prequel: Software Errors Be Gone</itunes:title>
      <itunes:author>Prequel, Lyndon Brown, Tony Meehan, The New Stack, Alex Williams</itunes:author>
      <itunes:duration>00:05:13</itunes:duration>
      <itunes:summary>Prequel is launching a new developer-focused service aimed at democratizing software error detection—an area typically dominated by large cloud providers. Co-founded by Lyndon Brown and Tony Meehan, both former NSA engineers, Prequel introduces a community-driven observability approach centered on Common Reliability Enumerations (CREs). CREs categorize recurring production issues, helping engineers detect, understand, and communicate problems without reinventing solutions or working in isolation. Their open-source tools, cre and prereq, allow teams to build and share detectors that catch bugs and anti-patterns in real time—without exposing sensitive data, thanks to edge processing using WebAssembly.</itunes:summary>
      <itunes:subtitle>Prequel is launching a new developer-focused service aimed at democratizing software error detection—an area typically dominated by large cloud providers. Co-founded by Lyndon Brown and Tony Meehan, both former NSA engineers, Prequel introduces a community-driven observability approach centered on Common Reliability Enumerations (CREs). CREs categorize recurring production issues, helping engineers detect, understand, and communicate problems without reinventing solutions or working in isolation. Their open-source tools, cre and prereq, allow teams to build and share detectors that catch bugs and anti-patterns in real time—without exposing sensitive data, thanks to edge processing using WebAssembly.</itunes:subtitle>
      <itunes:keywords>prequel, software developer, prequel.dev, tech podcast, alex williams, the new stack, common reliability enumeration, postgres, developer podcast, the new stack makers, software engineer, open source, tony meehan, lyndon brown, observability, demo</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1525</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">09e2398e-5955-4a0e-8506-8438b751af48</guid>
      <title>Arm’s Open Source Leader on Meeting the AI Challenge</title>
      <description><![CDATA[<p>At Arm, open source is the default approach, with proprietary software requiring justification, says Andrew Wafaa, fellow and senior director of software communities. Speaking at KubeCon + CloudNativeCon Europe, Wafaa emphasized Arm’s decade-long commitment to open source, highlighting its investment in key projects like the Linux kernel, GCC, and LLVM. This investment is strategic, ensuring strong support for Arm’s architecture through vital tools and system software.</p><p>Wafaa also challenged the hype around GPUs in AI, asserting that CPUs—especially those enhanced with Arm’s Scalable Matrix Extension (SME2) and Scalable Vector Extension (SVE2)—are often more suitable for inference workloads. CPUs offer greater flexibility, and Arm’s innovations aim to reduce dependency on expensive GPU fleets.</p><p>On the AI framework front, Wafaa pointed to PyTorch as the emerging hub, likening its ecosystem-building potential to Kubernetes. As a PyTorch Foundation board member, he sees PyTorch becoming the central open source platform in AI development, with broad community and industry backing.</p><p>Learn more from The New Stack about the latest insights on Arm: </p><p><a href="https://thenewstack.io/edge-wars-heat-up-as-arm-aims-to-outflank-intel-qualcomm/">Edge Wars Heat Up as Arm Aims to Outflank Intel, Qualcomm</a></p><p><a href="https://thenewstack.io/arm-see-a-demo-about-migrating-a-x86-based-app-to-arm64/">Arm: See a Demo About Migrating a x86-Based App to ARM64</a></p><p><br /><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></description>
      <pubDate>Thu, 1 May 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Andrew Wafaa, Heather Joslyn, Arm)</author>
      <link>https://thenewstack.simplecast.com/episodes/arms-open-source-leader-on-meeting-the-ai-challenge-RAg32qJJ</link>
      <content:encoded><![CDATA[<p>At Arm, open source is the default approach, with proprietary software requiring justification, says Andrew Wafaa, fellow and senior director of software communities. Speaking at KubeCon + CloudNativeCon Europe, Wafaa emphasized Arm’s decade-long commitment to open source, highlighting its investment in key projects like the Linux kernel, GCC, and LLVM. This investment is strategic, ensuring strong support for Arm’s architecture through vital tools and system software.</p><p>Wafaa also challenged the hype around GPUs in AI, asserting that CPUs—especially those enhanced with Arm’s Scalable Matrix Extension (SME2) and Scalable Vector Extension (SVE2)—are often more suitable for inference workloads. CPUs offer greater flexibility, and Arm’s innovations aim to reduce dependency on expensive GPU fleets.</p><p>On the AI framework front, Wafaa pointed to PyTorch as the emerging hub, likening its ecosystem-building potential to Kubernetes. As a PyTorch Foundation board member, he sees PyTorch becoming the central open source platform in AI development, with broad community and industry backing.</p><p>Learn more from The New Stack about the latest insights on Arm: </p><p><a href="https://thenewstack.io/edge-wars-heat-up-as-arm-aims-to-outflank-intel-qualcomm/">Edge Wars Heat Up as Arm Aims to Outflank Intel, Qualcomm</a></p><p><a href="https://thenewstack.io/arm-see-a-demo-about-migrating-a-x86-based-app-to-arm64/">Arm: See a Demo About Migrating a x86-Based App to ARM64</a></p><p><br /><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="17625870" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/92aaaeed-03ae-4e1b-bf5a-c0a19624c85f/audio/a4035b18-189b-4f6b-aa77-f6f9fb489a50/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Arm’s Open Source Leader on Meeting the AI Challenge</itunes:title>
      <itunes:author>The New Stack, Andrew Wafaa, Heather Joslyn, Arm</itunes:author>
      <itunes:duration>00:18:21</itunes:duration>
      <itunes:summary>At Arm, open source is the default approach, with proprietary software requiring justification, says Andrew Wafaa, fellow and senior director of software communities. Speaking at KubeCon + CloudNativeCon Europe, Wafaa emphasized Arm’s decade-long commitment to open source, highlighting its investment in key projects like the Linux kernel, GCC, and LLVM. This investment is strategic, ensuring strong support for Arm’s architecture through vital tools and system software.</itunes:summary>
      <itunes:subtitle>At Arm, open source is the default approach, with proprietary software requiring justification, says Andrew Wafaa, fellow and senior director of software communities. Speaking at KubeCon + CloudNativeCon Europe, Wafaa emphasized Arm’s decade-long commitment to open source, highlighting its investment in key projects like the Linux kernel, GCC, and LLVM. This investment is strategic, ensuring strong support for Arm’s architecture through vital tools and system software.</itunes:subtitle>
      <itunes:keywords>continuous deployment, software developer, tech podcast, the new stack, edge, ai workloads, tech, developer podcast, continuous integration, kubernetes, the new stack makers, open source, ospo, arm, andrew wafaa</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1524</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">32e387b0-2aa5-47fc-a73f-6952d1405ad0</guid>
      <title>Why Kubernetes Cost Optimization Keeps Failing</title>
      <description><![CDATA[<p>In today’s uncertain economy, businesses are tightening costs, including for Kubernetes (K8s) operations, which are notoriously difficult to optimize. Yodar Shafrir, co-founder and CEO of ScaleOps, explained at KubeCon + CloudNativeCon Europe that dynamic, cloud-native applications have constantly shifting loads, making resource allocation complex.</p><p>Engineers must provision enough resources to handle spikes without overspending, but in large production clusters with thousands of applications, manual optimization often fails. This leads to 70–80% resource waste and performance issues. Developers typically prioritize application performance over operational cost, and AI workloads further strain resources. Existing optimization tools offer static recommendations that quickly become outdated due to the dynamic nature of workloads, risking downtime.</p><p>Shafrir emphasized that real-time, fully automated solutions like ScaleOps' platform are crucial. By dynamically adjusting container-level resources based on real-time consumption and business metrics, ScaleOps improves application reliability and eliminates waste. Their approach shifts Kubernetes management from static to dynamic resource allocation. Listen to the full episode for more insights and ScaleOps' roadmap.</p><p>Learn more from The New Stack about the latest in scaling Kubernetes and managing operational costs:</p><p><a href="https://thenewstack.io/scaleops-adds-predictive-horizontal-scaling-smart-placement/">ScaleOps Adds Predictive Horizontal Scaling, Smart Placement</a></p><p><a href="https://thenewstack.io/scaleops-dynamically-right-sizes-containers-at-runtime/">ScaleOps Dynamically Right-Sizes Containers at Runtime</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Tue, 29 Apr 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Yodar Shafrir, ScaleOps, The New Stack, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/why-kubernetes-cost-optimization-keeps-failing-_Z3RbkC1</link>
      <content:encoded><![CDATA[<p>In today’s uncertain economy, businesses are tightening costs, including for Kubernetes (K8s) operations, which are notoriously difficult to optimize. Yodar Shafrir, co-founder and CEO of ScaleOps, explained at KubeCon + CloudNativeCon Europe that dynamic, cloud-native applications have constantly shifting loads, making resource allocation complex.</p><p>Engineers must provision enough resources to handle spikes without overspending, but in large production clusters with thousands of applications, manual optimization often fails. This leads to 70–80% resource waste and performance issues. Developers typically prioritize application performance over operational cost, and AI workloads further strain resources. Existing optimization tools offer static recommendations that quickly become outdated due to the dynamic nature of workloads, risking downtime.</p><p>Shafrir emphasized that real-time, fully automated solutions like ScaleOps' platform are crucial. By dynamically adjusting container-level resources based on real-time consumption and business metrics, ScaleOps improves application reliability and eliminates waste. Their approach shifts Kubernetes management from static to dynamic resource allocation. Listen to the full episode for more insights and ScaleOps' roadmap.</p><p>Learn more from The New Stack about the latest in scaling Kubernetes and managing operational costs:</p><p><a href="https://thenewstack.io/scaleops-adds-predictive-horizontal-scaling-smart-placement/">ScaleOps Adds Predictive Horizontal Scaling, Smart Placement</a></p><p><a href="https://thenewstack.io/scaleops-dynamically-right-sizes-containers-at-runtime/">ScaleOps Dynamically Right-Sizes Containers at Runtime</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="16677868" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/30d9a9da-3216-4223-901e-c2544163121d/audio/ed5f002c-fad3-422d-8d52-b913a35052b0/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Why Kubernetes Cost Optimization Keeps Failing</itunes:title>
      <itunes:author>Yodar Shafrir, ScaleOps, The New Stack, Heather Joslyn</itunes:author>
      <itunes:duration>00:17:22</itunes:duration>
      <itunes:summary>In today’s uncertain economy, businesses are tightening costs, including for Kubernetes (K8s) operations, which are notoriously difficult to optimize. Yodar Shafrir, co-founder and CEO of ScaleOps, explained at KubeCon + CloudNativeCon Europe that dynamic, cloud-native applications have constantly shifting loads, making resource allocation complex. </itunes:summary>
      <itunes:subtitle>In today’s uncertain economy, businesses are tightening costs, including for Kubernetes (K8s) operations, which are notoriously difficult to optimize. Yodar Shafrir, co-founder and CEO of ScaleOps, explained at KubeCon + CloudNativeCon Europe that dynamic, cloud-native applications have constantly shifting loads, making resource allocation complex. </itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, scaleops, tech, application performance, developer podcast, kubernetes, the new stack makers, software engineer, open source, cloud optimization</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1522</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">05192d6e-1cf5-44d1-a8df-3b68bf85b358</guid>
      <title>How Heroku Is ‘Re-Platforming’ Its Platform</title>
      <description><![CDATA[<p>Heroku has been undergoing a major transformation, re-platforming its entire Platform as a Service (PaaS) offering over the past year and a half. This ambitious effort, dubbed “Fir,” will soon reach general availability. According to Betty Junod, CMO and SVP at Heroku (owned by Salesforce), the overhaul includes a shift to Kubernetes and OCI standards, reinforcing Heroku’s commitment to open source.</p><p>The platform now features Heroku Cloud Native Buildpacks, which let developers create container images without Dockerfiles. Originally built on Ruby on Rails and predating Docker and AWS, Heroku now supports eight programming languages. The company has also deepened its open source engagement by becoming a platinum member of the Cloud Native Computing Foundation (CNCF), contributing to projects like OpenTelemetry. Additionally, Heroku has open sourced its Twelve-Factor Apps methodology, inviting the community to help modernize it to address evolving needs such as secrets management and workload identity. This signals a broader effort to align Heroku’s future with the cloud native ecosystem.</p><p>Learn more from The New Stack about Heroku's approach to Platform-as-a-Service:</p><p><a href="https://thenewstack.io/return-to-paas-building-the-platform-of-our-dreams/">Return to PaaS: Building the Platform of Our Dreams</a></p><p><a href="https://thenewstack.io/heroku-moved-twelve-factor-apps-to-open-source-whats-next/">Heroku Moved Twelve-Factor Apps to Open Source. What’s Next?</a></p><p><a href="https://thenewstack.io/how-heroku-is-positioned-to-help-ops-engineers-in-the-genai-era/">How Heroku Is Positioned To Help Ops Engineers in the GenAI Era</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 24 Apr 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Betty Junod, Heroku, The New Stack, Heather Joslyn, Salesforce)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-heroku-is-re-platforming-its-platform-NT04vMiZ</link>
      <content:encoded><![CDATA[<p>Heroku has been undergoing a major transformation, re-platforming its entire Platform as a Service (PaaS) offering over the past year and a half. This ambitious effort, dubbed “Fir,” will soon reach general availability. According to Betty Junod, CMO and SVP at Heroku (owned by Salesforce), the overhaul includes a shift to Kubernetes and OCI standards, reinforcing Heroku’s commitment to open source.</p><p>The platform now features Heroku Cloud Native Buildpacks, which let developers create container images without Dockerfiles. Originally built on Ruby on Rails and predating Docker and AWS, Heroku now supports eight programming languages. The company has also deepened its open source engagement by becoming a platinum member of the Cloud Native Computing Foundation (CNCF), contributing to projects like OpenTelemetry. Additionally, Heroku has open sourced its Twelve-Factor Apps methodology, inviting the community to help modernize it to address evolving needs such as secrets management and workload identity. This signals a broader effort to align Heroku’s future with the cloud native ecosystem.</p><p>Learn more from The New Stack about Heroku's approach to Platform-as-a-Service:</p><p><a href="https://thenewstack.io/return-to-paas-building-the-platform-of-our-dreams/">Return to PaaS: Building the Platform of Our Dreams</a></p><p><a href="https://thenewstack.io/heroku-moved-twelve-factor-apps-to-open-source-whats-next/">Heroku Moved Twelve-Factor Apps to Open Source. What’s Next?</a></p><p><a href="https://thenewstack.io/how-heroku-is-positioned-to-help-ops-engineers-in-the-genai-era/">How Heroku Is Positioned To Help Ops Engineers in the GenAI Era</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="17304878" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/bd080eda-78c4-46c9-910c-e18fd8879475/audio/104a533f-1aa5-4cc2-b9b4-5439dc55ffeb/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Heroku Is ‘Re-Platforming’ Its Platform</itunes:title>
      <itunes:author>Betty Junod, Heroku, The New Stack, Heather Joslyn, Salesforce</itunes:author>
      <itunes:duration>00:18:01</itunes:duration>
      <itunes:summary>Heroku has been undergoing a major transformation, re-platforming its entire Platform as a Service (PaaS) offering over the past year and a half. This ambitious effort, dubbed “Fir,” will soon reach general availability. According to Betty Junod, CMO and SVP at Heroku (owned by Salesforce), the overhaul includes a shift to Kubernetes and OCI standards, reinforcing Heroku’s commitment to open source.</itunes:summary>
      <itunes:subtitle>Heroku has been undergoing a major transformation, re-platforming its entire Platform as a Service (PaaS) offering over the past year and a half. This ambitious effort, dubbed “Fir,” will soon reach general availability. According to Betty Junod, CMO and SVP at Heroku (owned by Salesforce), the overhaul includes a shift to Kubernetes and OCI standards, reinforcing Heroku’s commitment to open source.</itunes:subtitle>
      <itunes:keywords>kubecon london 2025, software developer, tech podcast, the new stack, tech, developer podcast, kubernetes, betty junod, the new stack makers, software engineer, open source, platform as a service, salesforce, heroku</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1521</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2906f9ec-63f5-4fb4-b36a-eda7e3a75f22</guid>
      <title>Container Security and AI: A Talk with Chainguard&apos;s Founder</title>
      <description><![CDATA[<p>In this episode of <i>The New Stack Makers</i>, recorded at KubeCon + CloudNativeCon Europe, Alex Williams speaks with Ville Aikas, Chainguard founder and early Kubernetes contributor. They reflect on the evolution of container security, particularly how early assumptions—like trusting that users would validate container images—proved problematic. Aikas recalls the lack of secure defaults, such as allowing containers to run as root, stemming from the team’s internal Google perspective, which led to unrealistic expectations about external security practices.</p><p>The Kubernetes community has since made strides with governance policies, secure defaults, and standard practices like avoiding long-lived credentials and supporting federated authentication. Aikas founded Chainguard to address the need for trusted, minimal, and verifiable container images—offering zero-CVE images, transparent toolchains, and full SBOMs. This security-first philosophy now extends to virtual machines and Java dependencies via Chainguard Libraries.</p><p>The discussion also highlights the rising concerns around AI/ML security in Kubernetes, including complex model dependencies, GPU integrations, and potential attack vectors—prompting Chainguard’s move toward locked-down AI images.</p><p>Learn more from The New Stack about container security and AI:</p><p><a href="https://thenewstack.io/chainguard-takes-aim-at-vulnerable-java-libraries/">Chainguard Takes Aim At Vulnerable Java Libraries</a></p><p><a href="https://thenewstack.io/clean-container-images-a-supply-chain-security-revolution/">Clean Container Images: A Supply Chain Security Revolution</a></p><p><a href="https://thenewstack.io/revolutionizing-offensive-security-a-new-era-with-agentic-ai/">Revolutionizing Offensive Security: A New Era With Agentic AI</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Tue, 22 Apr 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Ville Aikas, Chainguard, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/container-security-and-ai-a-talk-with-chainguards-founder-hJMKjYRQ</link>
      <content:encoded><![CDATA[<p>In this episode of <i>The New Stack Makers</i>, recorded at KubeCon + CloudNativeCon Europe, Alex Williams speaks with Ville Aikas, Chainguard founder and early Kubernetes contributor. They reflect on the evolution of container security, particularly how early assumptions—like trusting that users would validate container images—proved problematic. Aikas recalls the lack of secure defaults, such as allowing containers to run as root, stemming from the team’s internal Google perspective, which led to unrealistic expectations about external security practices.</p><p>The Kubernetes community has since made strides with governance policies, secure defaults, and standard practices like avoiding long-lived credentials and supporting federated authentication. Aikas founded Chainguard to address the need for trusted, minimal, and verifiable container images—offering zero-CVE images, transparent toolchains, and full SBOMs. This security-first philosophy now extends to virtual machines and Java dependencies via Chainguard Libraries.</p><p>The discussion also highlights the rising concerns around AI/ML security in Kubernetes, including complex model dependencies, GPU integrations, and potential attack vectors—prompting Chainguard’s move toward locked-down AI images.</p><p>Learn more from The New Stack about container security and AI:</p><p><a href="https://thenewstack.io/chainguard-takes-aim-at-vulnerable-java-libraries/">Chainguard Takes Aim At Vulnerable Java Libraries</a></p><p><a href="https://thenewstack.io/clean-container-images-a-supply-chain-security-revolution/">Clean Container Images: A Supply Chain Security Revolution</a></p><p><a href="https://thenewstack.io/revolutionizing-offensive-security-a-new-era-with-agentic-ai/">Revolutionizing Offensive Security: A New Era With Agentic AI</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="20031154" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/f84b83df-76f3-467f-b760-01fb43fa7634/audio/0f98ce21-8d29-413a-b572-800d93fc12ca/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Container Security and AI: A Talk with Chainguard&apos;s Founder</itunes:title>
      <itunes:author>Ville Aikas, Chainguard, The New Stack, Alex Williams</itunes:author>
      <itunes:duration>00:20:51</itunes:duration>
      <itunes:summary>In this episode of The New Stack Makers, recorded at KubeCon + CloudNativeCon Europe, Alex Williams speaks with Ville Aikas, Chainguard founder and early Kubernetes contributor. They reflect on the evolution of container security, particularly how early assumptions—like trusting that users would validate container images—proved problematic. Aikas recalls the lack of secure defaults, such as allowing containers to run as root, stemming from the team’s internal Google perspective, which led to unrealistic expectations about external security practices.</itunes:summary>
      <itunes:subtitle>In this episode of The New Stack Makers, recorded at KubeCon + CloudNativeCon Europe, Alex Williams speaks with Ville Aikas, Chainguard founder and early Kubernetes contributor. They reflect on the evolution of container security, particularly how early assumptions—like trusting that users would validate container images—proved problematic. Aikas recalls the lack of secure defaults, such as allowing containers to run as root, stemming from the team’s internal Google perspective, which led to unrealistic expectations about external security practices.</itunes:subtitle>
      <itunes:keywords>container security, software developer, chainguard, ai, tech podcast, the new stack, tech, developer podcast, kubernetes, the new stack makers, software engineer, open source, kubecon london</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1520</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">fd46556f-c3fa-4742-a843-c015102a11ad</guid>
      <title>Kelsey Hightower, AWS&apos;s Eswar Bala on Open Source&apos;s Evolution</title>
      <description><![CDATA[<p>In a candid episode of <i>The New Stack Makers</i>, Kubernetes pioneer Kelsey Hightower and AWS’s Eswar Bala explored the evolving relationship between enterprise cloud providers and open source software at KubeCon + CloudNativeCon London. Hightower highlighted open source's origins as a grassroots movement challenging big vendors, and shared how it gave people—especially those without traditional tech credentials—a way into the industry. Recalling his own journey, Hightower emphasized that open source empowered individuals through contribution over credentials.</p><p>Bala traced the early development of Kubernetes and his own transition from building container orchestration systems to launching AWS’s Elastic Kubernetes Service (EKS), driven by growing customer demand. The discussion, recorded at KubeCon + CloudNativeCon Europe, touched on how open source is now central to enterprise cloud strategies, with AWS not only contributing but creating projects like Karpenter, Cedar, and Kro.</p><p>Both speakers agreed that open source's collaborative model—where companies build in public and customers drive innovation—has reshaped the cloud ecosystem, turning former tensions into partnerships built on community-driven progress.</p><p>Learn more from The New Stack about the relationship between enterprise cloud providers and open source software:</p><p><a href="https://thenewstack.io/the-metamorphosis-of-open-source-an-industry-in-transition/">The Metamorphosis of Open Source: An Industry in Transition</a></p><p><a href="https://thenewstack.io/the-complex-relationship-between-cloud-providers-and-open-source/">The Complex Relationship Between Cloud Providers and Open Source</a></p><p><a href="https://thenewstack.io/how-open-source-has-turned-the-tables-on-enterprise-software/">How Open Source Has Turned the Tables on Enterprise Software</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 17 Apr 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Kelsey Hightower, Eswar Bala, AWS, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/kelsey-hightower-awss-eswar-bala-on-open-sources-evolution-mlPs3j0L</link>
      <content:encoded><![CDATA[<p>In a candid episode of <i>The New Stack Makers</i>, Kubernetes pioneer Kelsey Hightower and AWS’s Eswar Bala explored the evolving relationship between enterprise cloud providers and open source software at KubeCon + CloudNativeCon London. Hightower highlighted open source's origins as a grassroots movement challenging big vendors, and shared how it gave people—especially those without traditional tech credentials—a way into the industry. Recalling his own journey, Hightower emphasized that open source empowered individuals through contribution over credentials.</p><p>Bala traced the early development of Kubernetes and his own transition from building container orchestration systems to launching AWS’s Elastic Kubernetes Service (EKS), driven by growing customer demand. The discussion, recorded at KubeCon + CloudNativeCon Europe, touched on how open source is now central to enterprise cloud strategies, with AWS not only contributing but creating projects like Karpenter, Cedar, and Kro.</p><p>Both speakers agreed that open source's collaborative model—where companies build in public and customers drive innovation—has reshaped the cloud ecosystem, turning former tensions into partnerships built on community-driven progress.</p><p>Learn more from The New Stack about the relationship between enterprise cloud providers and open source software:</p><p><a href="https://thenewstack.io/the-metamorphosis-of-open-source-an-industry-in-transition/">The Metamorphosis of Open Source: An Industry in Transition</a></p><p><a href="https://thenewstack.io/the-complex-relationship-between-cloud-providers-and-open-source/">The Complex Relationship Between Cloud Providers and Open Source</a></p><p><a href="https://thenewstack.io/how-open-source-has-turned-the-tables-on-enterprise-software/">How Open Source Has Turned the Tables on Enterprise Software</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="36360820" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/9aeb5259-c58f-4035-a38f-dc1cd1a3ca71/audio/12b5b053-80c8-44d6-bae7-acd05b4c83e6/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Kelsey Hightower, AWS&apos;s Eswar Bala on Open Source&apos;s Evolution</itunes:title>
      <itunes:author>Kelsey Hightower, Eswar Bala, AWS, The New Stack, Alex Williams</itunes:author>
      <itunes:duration>00:37:52</itunes:duration>
      <itunes:summary>In a candid episode of The New Stack Makers, Kubernetes pioneer Kelsey Hightower and AWS’s Eswar Bala explored the evolving relationship between enterprise cloud providers and open source software at KubeCon + CloudNativeCon London. Hightower highlighted open source&apos;s origins as a grassroots movement challenging big vendors, and shared how it gave people—especially those without traditional tech credentials—a way into the industry. Recalling his own journey, Hightower emphasized that open source empowered individuals through contribution over credentials.</itunes:summary>
      <itunes:subtitle>In a candid episode of The New Stack Makers, Kubernetes pioneer Kelsey Hightower and AWS’s Eswar Bala explored the evolving relationship between enterprise cloud providers and open source software at KubeCon + CloudNativeCon London. Hightower highlighted open source&apos;s origins as a grassroots movement challenging big vendors, and shared how it gave people—especially those without traditional tech credentials—a way into the industry. Recalling his own journey, Hightower emphasized that open source empowered individuals through contribution over credentials.</itunes:subtitle>
      <itunes:keywords>software developer, kelsey hightower, tech podcast, cedar, the new stack, karpenter, tech, developer podcast, kubecon 2025, kubernetes, the new stack makers, kube resource orchestrator, software engineer, open source, eswar bala, aws</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1518</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">bf194af6-d858-4c4b-b6cf-199f12a353ae</guid>
      <title>The Kro Project: Giving Kubernetes Users What They Want</title>
      <description><![CDATA[<p>In a rare show of collaboration, Google, Amazon, and Microsoft have joined forces on Kro — the Kubernetes Resource Orchestrator — an open source, cloud-agnostic tool designed to simplify custom resource orchestration in Kubernetes. Announced during KubeCon + CloudNativeCon Europe, Kro was born from strong customer demand for a Kubernetes-native solution that works across cloud providers without vendor lock-in.</p><p>Nic Slattery, Product Manager at Google, and Jesse Butler, Principal Product Manager at AWS, shared with The New Stack that unlike many enterprise products, Kro didn’t stem from top-down strategy but from consistent customer "pull" experienced by all three companies. It aims to reduce complexity by allowing platform teams to offer simplified interfaces to developers, enabling resource requests without needing deep service-specific knowledge. Kro also represents a unique cross-company collaboration, driven by a shared mission and open source values.</p><p>Though still in its alpha stage, the project has already attracted 57 contributors in just seven months. The team is now focused on refining core features and preparing for a production-ready release — all while maintaining a narrowly scoped, community-first approach.</p><p>Learn more from The New Stack about Kro:</p><p><a href="https://thenewstack.io/one-mighty-kro-one-giant-leap-for-kubernetes-resource-orchestration/">One Mighty kro; One Giant Leap for Kubernetes Resource Orchestration</a></p><p><a href="https://thenewstack.io/kubernetes-gets-a-new-resource-orchestrator-in-the-form-of-kro/">Kubernetes Gets a New Resource Orchestrator in the Form of Kro</a></p><p><a href="https://thenewstack.io/orchestrate-cloud-native-workloads-with-kro-and-kubernetes/">Orchestrate Cloud Native Workloads With Kro and Kubernetes</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>.</p>
]]></description>
      <pubDate>Tue, 15 Apr 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Google, Nic Slattery, Jesse Butler, The New Stack, AWS, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-kro-project-giving-kubernetes-users-what-they-want-vuQjtIXI</link>
      <content:encoded><![CDATA[<p>In a rare show of collaboration, Google, Amazon, and Microsoft have joined forces on Kro — the Kubernetes Resource Orchestrator — an open source, cloud-agnostic tool designed to simplify custom resource orchestration in Kubernetes. Announced during KubeCon + CloudNativeCon Europe, Kro was born from strong customer demand for a Kubernetes-native solution that works across cloud providers without vendor lock-in.</p><p>Nic Slattery, Product Manager at Google, and Jesse Butler, Principal Product Manager at AWS, shared with The New Stack that unlike many enterprise products, Kro didn’t stem from top-down strategy but from consistent customer "pull" experienced by all three companies. It aims to reduce complexity by allowing platform teams to offer simplified interfaces to developers, enabling resource requests without needing deep service-specific knowledge. Kro also represents a unique cross-company collaboration, driven by a shared mission and open source values.</p><p>Though still in its alpha stage, the project has already attracted 57 contributors in just seven months. The team is now focused on refining core features and preparing for a production-ready release — all while maintaining a narrowly scoped, community-first approach.</p><p>Learn more from The New Stack about Kro:</p><p><a href="https://thenewstack.io/one-mighty-kro-one-giant-leap-for-kubernetes-resource-orchestration/">One Mighty kro; One Giant Leap for Kubernetes Resource Orchestration</a></p><p><a href="https://thenewstack.io/kubernetes-gets-a-new-resource-orchestrator-in-the-form-of-kro/">Kubernetes Gets a New Resource Orchestrator in the Form of Kro</a></p><p><a href="https://thenewstack.io/orchestrate-cloud-native-workloads-with-kro-and-kubernetes/">Orchestrate Cloud Native Workloads With Kro and Kubernetes</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>.</p>
]]></content:encoded>
      <enclosure length="20982012" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/ad2971c1-f0d3-4c48-a97a-f6d1f217a0b3/audio/b17508ac-1c40-4563-927d-ad930eafca02/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The Kro Project: Giving Kubernetes Users What They Want</itunes:title>
      <itunes:author>Google, Nic Slattery, Jesse Butler, The New Stack, AWS, Alex Williams</itunes:author>
      <itunes:duration>00:21:51</itunes:duration>
      <itunes:summary>In a rare show of collaboration, Google, Amazon, and Microsoft have joined forces on Kro — the Kubernetes Resource Orchestrator — an open source, cloud-agnostic tool designed to simplify custom resource orchestration in Kubernetes. Announced during KubeCon + CloudNativeCon Europe, Kro was born from strong customer demand for a Kubernetes-native solution that works across cloud providers without vendor lock-in. Nic Slattery, Product Manager at Google, and Jesse Butler, Principal Product Manager at AWS, shared with The New Stack that unlike many enterprise products, Kro didn’t stem from top-down strategy but from consistent customer &quot;pull&quot; experienced by all three companies. </itunes:summary>
      <itunes:subtitle>In a rare show of collaboration, Google, Amazon, and Microsoft have joined forces on Kro — the Kubernetes Resource Orchestrator — an open source, cloud-agnostic tool designed to simplify custom resource orchestration in Kubernetes. Announced during KubeCon + CloudNativeCon Europe, Kro was born from strong customer demand for a Kubernetes-native solution that works across cloud providers without vendor lock-in. Nic Slattery, Product Manager at Google, and Jesse Butler, Principal Product Manager at AWS, shared with The New Stack that unlike many enterprise products, Kro didn’t stem from top-down strategy but from consistent customer &quot;pull&quot; experienced by all three companies. </itunes:subtitle>
      <itunes:keywords>software developer, google, tech podcast, the new stack, jesse butler, nic slattery, tech, developer podcast, kubernetes, the new stack makers, kubernetes resource orchestrator, software engineer, kro, open source, aws</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1519</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">4ad021d6-a9aa-472f-ac08-430eb00f25c1</guid>
      <title>OpenSearch: What’s Next for the Search and Analytics Suite?</title>
      <description><![CDATA[<p>OpenSearch has evolved significantly since its 2021 launch, recently reaching a major milestone with its move to the Linux Foundation. This shift from company-led to foundation-based governance has accelerated community contributions and enterprise adoption, as discussed by NetApp’s Amanda Katona in a <i>New Stack Makers</i> episode recorded at KubeCon + CloudNativeCon Europe. NetApp, an early adopter of OpenSearch following Elasticsearch’s licensing change, now offers managed services on the platform and contributes actively to its development.</p><p>Katona emphasized how neutral governance under the Linux Foundation has lowered barriers to enterprise contribution, noting a 56% increase in downloads since the transition and growing interest from developers. OpenSearch 3.0, featuring a Lucene 10 upgrade, promises faster search capabilities—especially relevant as data volumes surge. NetApp’s ongoing investments include work on machine learning plugins and developer training resources.</p><p>Katona sees the Linux Foundation’s involvement as key to OpenSearch’s long-term success, offering vendor-neutral governance and reassuring users seeking openness, performance, and scalability in data search and analytics.</p><p>Learn more from The New Stack about OpenSearch: </p><p><a href="https://thenewstack.io/report-opensearch-bests-elasticsearch-at-vector-modeling/">Report: OpenSearch Bests ElasticSearch at Vector Modeling</a></p><p><a href="https://thenewstack.io/aws-transfers-opensearch-to-the-linux-foundation/">AWS Transfers OpenSearch to the Linux Foundation</a> </p><p><a href="https://thenewstack.io/opensearch-how-the-project-went-from-fork-to-foundation/">OpenSearch: How the Project Went From Fork to Foundation</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 10 Apr 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Amanda Katona, NetApp, AWS, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/opensearch-whats-next-for-the-search-and-analytics-suite-2AiYtKSb</link>
      <content:encoded><![CDATA[<p>OpenSearch has evolved significantly since its 2021 launch, recently reaching a major milestone with its move to the Linux Foundation. This shift from company-led to foundation-based governance has accelerated community contributions and enterprise adoption, as discussed by NetApp’s Amanda Katona in a <i>New Stack Makers</i> episode recorded at KubeCon + CloudNativeCon Europe. NetApp, an early adopter of OpenSearch following Elasticsearch’s licensing change, now offers managed services on the platform and contributes actively to its development.</p><p>Katona emphasized how neutral governance under the Linux Foundation has lowered barriers to enterprise contribution, noting a 56% increase in downloads since the transition and growing interest from developers. OpenSearch 3.0, featuring a Lucene 10 upgrade, promises faster search capabilities—especially relevant as data volumes surge. NetApp’s ongoing investments include work on machine learning plugins and developer training resources.</p><p>Katona sees the Linux Foundation’s involvement as key to OpenSearch’s long-term success, offering vendor-neutral governance and reassuring users seeking openness, performance, and scalability in data search and analytics.</p><p>Learn more from The New Stack about OpenSearch: </p><p><a href="https://thenewstack.io/report-opensearch-bests-elasticsearch-at-vector-modeling/">Report: OpenSearch Bests ElasticSearch at Vector Modeling</a></p><p><a href="https://thenewstack.io/aws-transfers-opensearch-to-the-linux-foundation/">AWS Transfers OpenSearch to the Linux Foundation</a> </p><p><a href="https://thenewstack.io/opensearch-how-the-project-went-from-fork-to-foundation/">OpenSearch: How the Project Went From Fork to Foundation</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="19362420" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/39d59ced-0f42-40c1-a619-b260df600053/audio/7f3f539c-3dfc-4858-83f6-ad4ad1bceeb6/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>OpenSearch: What’s Next for the Search and Analytics Suite?</itunes:title>
      <itunes:author>Amanda Katona, NetApp, AWS, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:20:10</itunes:duration>
      <itunes:summary>OpenSearch has evolved significantly since its 2021 launch, recently reaching a major milestone with its move to the Linux Foundation. This shift from company-led to foundation-based governance has accelerated community contributions and enterprise adoption, as discussed by NetApp’s Amanda Katona in a New Stack Makers episode recorded at KubeCon + CloudNativeCon Europe. NetApp, an early adopter of OpenSearch following Elasticsearch’s licensing change, now offers managed services on the platform and contributes actively to its development.</itunes:summary>
      <itunes:subtitle>OpenSearch has evolved significantly since its 2021 launch, recently reaching a major milestone with its move to the Linux Foundation. This shift from company-led to foundation-based governance has accelerated community contributions and enterprise adoption, as discussed by NetApp’s Amanda Katona in a New Stack Makers episode recorded at KubeCon + CloudNativeCon Europe. NetApp, an early adopter of OpenSearch following Elasticsearch’s licensing change, now offers managed services on the platform and contributes actively to its development.</itunes:subtitle>
      <itunes:keywords>software developer, netapp, opensearch, tech podcast, the new stack, tech, developer podcast, the new stack makers, data visualization, open source, kubecon london, aws</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1517</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b5062849-1995-4dd4-a8d2-129ff987de9d</guid>
      <title>Kong’s AI Gateway Aims to Make Building with AI Easier</title>
      <description><![CDATA[<p>AI applications are evolving beyond chatbots into more complex and transformative solutions, according to Marco Palladino, CTO and co-founder of Kong. In a recent episode of The New Stack Makers, he discussed the rise of AI agents, which act as "virtual employees" to enhance organizational efficiency. For instance, AI can now function as a product manager for APIs—analyzing documentation, detecting inaccuracies, and making corrections.</p><p>However, reliance on AI agents brings security risks, such as data leakage and governance challenges. Organizations need observability and safeguards, but developers often resist implementing these requirements manually. As GenAI adoption matures, teams seek ways to accelerate development without rebuilding security measures repeatedly.</p><p>To address these challenges, Kong introduced AI Gateway, an open-source plugin for its API Gateway. AI Gateway supports multiple AI models across providers like AWS, Microsoft, and Google, offering developers a universal API to integrate AI securely and efficiently. It also features automated retrieval-augmented generation (RAG) pipelines to minimize hallucinations.</p><p>Palladino emphasized the need for consistent security in AI infrastructure, ensuring developers can focus on innovation while leveraging built-in protections.</p><p>Learn more from The New Stack about Kong’s AI Gateway</p><p><a href="https://thenewstack.io/kong-new-ai-infused-features-for-api-management-dev-tools/">Kong: New ‘AI-Infused’ Features for API Management, Dev Tools</a></p><p><a href="https://thenewstack.io/from-zero-to-a-terraform-provider-for-kong-in-120-hours/">From Zero to a Terraform Provider for Kong in 120 Hours</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>.</p><p> </p>
]]></description>
      <pubDate>Thu, 3 Apr 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Marco Palladino, Kong, The New Stack, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/kongs-ai-gateway-aims-to-make-building-with-ai-easier-lPkzKc99</link>
      <content:encoded><![CDATA[<p>AI applications are evolving beyond chatbots into more complex and transformative solutions, according to Marco Palladino, CTO and co-founder of Kong. In a recent episode of The New Stack Makers, he discussed the rise of AI agents, which act as "virtual employees" to enhance organizational efficiency. For instance, AI can now function as a product manager for APIs—analyzing documentation, detecting inaccuracies, and making corrections.</p><p>However, reliance on AI agents brings security risks, such as data leakage and governance challenges. Organizations need observability and safeguards, but developers often resist implementing these requirements manually. As GenAI adoption matures, teams seek ways to accelerate development without rebuilding security measures repeatedly.</p><p>To address these challenges, Kong introduced AI Gateway, an open-source plugin for its API Gateway. AI Gateway supports multiple AI models across providers like AWS, Microsoft, and Google, offering developers a universal API to integrate AI securely and efficiently. It also features automated retrieval-augmented generation (RAG) pipelines to minimize hallucinations.</p><p>Palladino emphasized the need for consistent security in AI infrastructure, ensuring developers can focus on innovation while leveraging built-in protections.</p><p>Learn more from The New Stack about Kong’s AI Gateway</p><p><a href="https://thenewstack.io/kong-new-ai-infused-features-for-api-management-dev-tools/">Kong: New ‘AI-Infused’ Features for API Management, Dev Tools</a></p><p><a href="https://thenewstack.io/from-zero-to-a-terraform-provider-for-kong-in-120-hours/">From Zero to a Terraform Provider for Kong in 120 Hours</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>.</p><p> </p>
]]></content:encoded>
      <enclosure length="20244732" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/14b46c5c-a49c-4fe2-87f2-cf928acc40a0/audio/5b309f88-163b-4f35-af47-d5cac775971c/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Kong’s AI Gateway Aims to Make Building with AI Easier</itunes:title>
      <itunes:author>Marco Palladino, Kong, The New Stack, Heather Joslyn</itunes:author>
      <itunes:duration>00:21:05</itunes:duration>
      <itunes:summary>AI applications are evolving beyond chatbots into more complex and transformative solutions, according to Marco Palladino, CTO and co-founder of Kong. In a recent episode of The New Stack Makers, he discussed the rise of AI agents, which act as &quot;virtual employees&quot; to enhance organizational efficiency. For instance, AI can now function as a product manager for APIs—analyzing documentation, detecting inaccuracies, and making corrections.

However, reliance on AI agents brings security risks, such as data leakage and governance challenges. Organizations need observability and safeguards, but developers often resist implementing these requirements manually. As GenAI adoption matures, teams seek ways to accelerate development without rebuilding security measures repeatedly.
</itunes:summary>
      <itunes:subtitle>AI applications are evolving beyond chatbots into more complex and transformative solutions, according to Marco Palladino, CTO and co-founder of Kong. In a recent episode of The New Stack Makers, he discussed the rise of AI agents, which act as &quot;virtual employees&quot; to enhance organizational efficiency. For instance, AI can now function as a product manager for APIs—analyzing documentation, detecting inaccuracies, and making corrections.

However, reliance on AI agents brings security risks, such as data leakage and governance challenges. Organizations need observability and safeguards, but developers often resist implementing these requirements manually. As GenAI adoption matures, teams seek ways to accelerate development without rebuilding security measures repeatedly.
</itunes:subtitle>
      <itunes:keywords>data leak, gateway ai, software developer, ai agents, tech podcast, the new stack, security risks, governance, ai development, developer podcast, marco palladino, the new stack makers, ai applications, software engineer, kong</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1516</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7c391ac5-80cd-4f2c-8cf6-11ee8650f12a</guid>
      <title>What’s the Future of Platform Engineering?</title>
      <description><![CDATA[<p>Platform engineering was meant to ease the burdens of Devs and Ops by reducing cognitive load and repetitive tasks. However, building internal developer platforms (IDPs) has proven challenging. Despite this, Gartner predicts that by 2026, 80% of software engineering organizations will have a platform team.</p><p>In a recent <i>New Stack Makers</i> episode, Mallory Haigh of Humanitec and Nathen Harvey of Google discussed the current state and future of platform engineering. Haigh emphasized that many organizations rush to build IDPs without understanding why they need them, leading to ineffective implementations. She noted that platform engineering is 10% technical and 90% cultural change, requiring deep introspection and strategic planning.</p><p>AI-driven automation, particularly agentic AI, is expected to shape platform engineering’s future. Haigh highlighted how AI can enhance platform orchestration and optimize GPU resource management. Harvey compared platform engineering to generative AI—both aim to reduce toil and improve efficiency. As AI adoption grows, platform teams must ensure their infrastructure supports these advancements.</p><p>Learn more from The New Stack about platform engineering:  </p><p><a href="https://thenewstack.io/platform-engineering-on-the-brink-breakthrough-or-bust/">Platform Engineering on the Brink: Breakthrough or Bust?</a></p><p><a href="https://thenewstack.io/platform-engineers-must-have-strong-opinions/">Platform Engineers Must Have Strong Opinions</a></p><p><a href="https://thenewstack.io/the-missing-piece-in-platform-engineering-recognizing-producers/">The Missing Piece in Platform Engineering: Recognizing Producers</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p><p> </p>
]]></description>
      <pubDate>Thu, 27 Mar 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Mallory Haigh, Google, Nathen Harvey, Heather Joslyn, Humanitec, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/whats-the-future-of-platform-engineering-MSW2xzVa</link>
      <content:encoded><![CDATA[<p>Platform engineering was meant to ease the burdens of Devs and Ops by reducing cognitive load and repetitive tasks. However, building internal developer platforms (IDPs) has proven challenging. Despite this, Gartner predicts that by 2026, 80% of software engineering organizations will have a platform team.</p><p>In a recent <i>New Stack Makers</i> episode, Mallory Haigh of Humanitec and Nathen Harvey of Google discussed the current state and future of platform engineering. Haigh emphasized that many organizations rush to build IDPs without understanding why they need them, leading to ineffective implementations. She noted that platform engineering is 10% technical and 90% cultural change, requiring deep introspection and strategic planning.</p><p>AI-driven automation, particularly agentic AI, is expected to shape platform engineering’s future. Haigh highlighted how AI can enhance platform orchestration and optimize GPU resource management. Harvey compared platform engineering to generative AI—both aim to reduce toil and improve efficiency. As AI adoption grows, platform teams must ensure their infrastructure supports these advancements.</p><p>Learn more from The New Stack about platform engineering:  </p><p><a href="https://thenewstack.io/platform-engineering-on-the-brink-breakthrough-or-bust/">Platform Engineering on the Brink: Breakthrough or Bust?</a></p><p><a href="https://thenewstack.io/platform-engineers-must-have-strong-opinions/">Platform Engineers Must Have Strong Opinions</a></p><p><a href="https://thenewstack.io/the-missing-piece-in-platform-engineering-recognizing-producers/">The Missing Piece in Platform Engineering: Recognizing Producers</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p><p> </p>
]]></content:encoded>
      <enclosure length="25673603" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/72ba619b-52ea-4756-a521-3a553c68c592/audio/aaa62198-9c78-4e9b-99cd-fc65429f1735/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What’s the Future of Platform Engineering?</itunes:title>
      <itunes:author>Mallory Haigh, Google, Nathen Harvey, Heather Joslyn, Humanitec, The New Stack</itunes:author>
      <itunes:duration>00:26:44</itunes:duration>
      <itunes:summary>Platform engineering was meant to ease the burdens of Devs and Ops by reducing cognitive load and repetitive tasks. However, building internal developer platforms (IDPs) has proven challenging. Despite this, Gartner predicts that by 2026, 80% of software engineering organizations will have a platform team.
In a recent New Stack Makers episode, Mallory Haigh of Humanitec and Nathen Harvey of Google discussed the current state and future of platform engineering. Haigh emphasized that many organizations rush to build IDPs without understanding why they need them, leading to ineffective implementations. She noted that platform engineering is 10% technical and 90% cultural change, requiring deep introspection and strategic planning.</itunes:summary>
      <itunes:subtitle>Platform engineering was meant to ease the burdens of Devs and Ops by reducing cognitive load and repetitive tasks. However, building internal developer platforms (IDPs) has proven challenging. Despite this, Gartner predicts that by 2026, 80% of software engineering organizations will have a platform team.
In a recent New Stack Makers episode, Mallory Haigh of Humanitec and Nathen Harvey of Google discussed the current state and future of platform engineering. Haigh emphasized that many organizations rush to build IDPs without understanding why they need them, leading to ineffective implementations. She noted that platform engineering is 10% technical and 90% cultural change, requiring deep introspection and strategic planning.</itunes:subtitle>
      <itunes:keywords>google, tech podcast, the new stack, devops podcast, tech, developer podcast, humanitec, the new stack makers, software engineer, nathen harvey, platform engineering, mallory haigh</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1515</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">fe47b48b-1222-43ca-93fd-effa1e3b5d33</guid>
      <title>AI Agents are Dumb Robots, Calling LLMs</title>
      <description><![CDATA[<p>AI agents are set to transform software development, but software itself isn’t going anywhere—despite the dramatic predictions. On this episode of The New Stack Makers, Mark Hinkle, CEO and Founder of Peripety Labs, discusses how AI agents relate to serverless technologies, infrastructure-as-code (IaC), and configuration management. </p><p>Hinkle envisions AI agents as “dumb robots” handling tasks like querying APIs and exchanging data, while the real intelligence remains in large language models (LLMs). These agents, likely implemented as serverless functions in Python or JavaScript, will automate software development processes dynamically. LLMs, leveraging vast amounts of open-source code, will enable AI agents to generate bespoke, task-specific tools on the fly—unlike traditional cloud tools from HashiCorp or configuration management tools like Chef and Puppet. </p><p>As AI-generated tooling becomes more prevalent, managing and optimizing these agents will require strong observability and evaluation practices. According to Hinkle, this shift marks the future of software, where AI agents dynamically create, call, and manage tools for CI/CD, monitoring, and beyond. Check out the full episode for more insights. </p><p>Learn more from The New Stack about emerging trends in AI agents: </p><p><a href="https://thenewstack.io/lessons-from-kubernetes-and-the-cloud-should-steer-the-ai-revolution/">Lessons From Kubernetes and the Cloud Should Steer the AI Revolution</a></p><p><a href="https://thenewstack.io/ai-agents-why-workflows-are-the-llm-use-case-to-watch/">AI Agents: Why Workflows Are the LLM Use Case to Watch </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></description>
      <pubDate>Thu, 20 Mar 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Mark Hinkle, Peripety Labs, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/ai-agents-are-dumb-robots-calling-llms-_FZlvad5</link>
      <content:encoded><![CDATA[<p>AI agents are set to transform software development, but software itself isn’t going anywhere—despite the dramatic predictions. On this episode of The New Stack Makers, Mark Hinkle, CEO and Founder of Peripety Labs, discusses how AI agents relate to serverless technologies, infrastructure-as-code (IaC), and configuration management. </p><p>Hinkle envisions AI agents as “dumb robots” handling tasks like querying APIs and exchanging data, while the real intelligence remains in large language models (LLMs). These agents, likely implemented as serverless functions in Python or JavaScript, will automate software development processes dynamically. LLMs, leveraging vast amounts of open-source code, will enable AI agents to generate bespoke, task-specific tools on the fly—unlike traditional cloud tools from HashiCorp or configuration management tools like Chef and Puppet. </p><p>As AI-generated tooling becomes more prevalent, managing and optimizing these agents will require strong observability and evaluation practices. According to Hinkle, this shift marks the future of software, where AI agents dynamically create, call, and manage tools for CI/CD, monitoring, and beyond. Check out the full episode for more insights. </p><p>Learn more from The New Stack about emerging trends in AI agents: </p><p><a href="https://thenewstack.io/lessons-from-kubernetes-and-the-cloud-should-steer-the-ai-revolution/">Lessons From Kubernetes and the Cloud Should Steer the AI Revolution</a></p><p><a href="https://thenewstack.io/ai-agents-why-workflows-are-the-llm-use-case-to-watch/">AI Agents: Why Workflows Are the LLM Use Case to Watch </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="27391834" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/3e5a83b1-07ed-4ccc-a5c5-5fd619484233/audio/67725c5d-4840-4aa0-ad08-826ffb7646fa/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>AI Agents are Dumb Robots, Calling LLMs</itunes:title>
      <itunes:author>Mark Hinkle, Peripety Labs, The New Stack, Alex Williams</itunes:author>
      <itunes:duration>00:28:31</itunes:duration>
      <itunes:summary>AI agents are set to transform software development, but software itself isn’t going anywhere—despite the dramatic predictions. On this episode of The New Stack Makers, Mark Hinkle, CEO and Founder of Peripety Labs, discusses how AI agents relate to serverless technologies, infrastructure-as-code (IaC), and configuration management. </itunes:summary>
      <itunes:subtitle>AI agents are set to transform software development, but software itself isn’t going anywhere—despite the dramatic predictions. On this episode of The New Stack Makers, Mark Hinkle, CEO and Founder of Peripety Labs, discusses how AI agents relate to serverless technologies, infrastructure-as-code (IaC), and configuration management. </itunes:subtitle>
      <itunes:keywords>peripety labs, mark hinkle, software developer, software engineering, tech podcast, the new stack, tech, developer podcast, the new stack makers, serverless, observability</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1514</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3e7864f2-3907-4eff-b7da-355ee680ff2b</guid>
      <title>Goodbye SaaS, Hello AI Agents</title>
      <description><![CDATA[<p>The transition from SaaS to Services as Software with AI agents is underway, necessitating new orchestration methods similar to Kubernetes for containers. AI agents will require resource allocation, workflow management, and scalable infrastructure as they evolve. Two key trends are driving this shift: </p><p>Data Evolution – From spreadsheets to AI agents, data has progressed through relational databases, big data, predictive analytics, and generative AI. </p><p>Computing Evolution – Starting from mainframes, the journey has moved through desktops, client-server, web/mobile, SaaS, and now agentic workflows. </p><p>Janakiram MSV, an analyst, notes on this episode of The New Stack Makers that SaaS depends on data—without it, platforms like Salesforce and SAP lack value. As data becomes more actionable and compute more agentic, a new paradigm emerges: Services as Software. AI agents will automate tasks previously requiring human intervention, like emails and sales follow-ups. However, orchestrating them will be complex, akin to Kubernetes managing containers. Unlike deterministic containers, AI agents depend on dynamic, trained data, posing new enterprise challenges in memory management and infrastructure. </p><p>Learn more from The New Stack about evolution to AI agents:</p><p><a href="https://thenewstack.io/how-ai-agents-are-starting-to-automate-the-enterprise/"> How AI Agents Are Starting To Automate the Enterprise </a></p><p><a href="https://thenewstack.io/can-you-trust-ai-to-be-your-data-analyst/">Can You Trust AI To Be Your Data Analyst? </a></p><p><a href="https://thenewstack.io/agentic-ai-is-the-new-web-app-and-your-ai-strategy-must-evolve/">Agentic AI is the New Web App, and Your AI Strategy Must Evolve</a> </p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p><p> </p>
]]></description>
      <pubDate>Thu, 13 Mar 2025 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Janakiram &amp; Associates, Alex Williams, Janakiram MSV, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/goodbye-saas-hello-ai-agents-ok7Uw0WO</link>
      <content:encoded><![CDATA[<p>The transition from SaaS to Services as Software with AI agents is underway, necessitating new orchestration methods similar to Kubernetes for containers. AI agents will require resource allocation, workflow management, and scalable infrastructure as they evolve. Two key trends are driving this shift: </p><p>Data Evolution – From spreadsheets to AI agents, data has progressed through relational databases, big data, predictive analytics, and generative AI. </p><p>Computing Evolution – Starting from mainframes, the journey has moved through desktops, client-server, web/mobile, SaaS, and now agentic workflows. </p><p>Janakiram MSV, an analyst, notes on this episode of The New Stack Makers that SaaS depends on data—without it, platforms like Salesforce and SAP lack value. As data becomes more actionable and compute more agentic, a new paradigm emerges: Services as Software. AI agents will automate tasks previously requiring human intervention, like emails and sales follow-ups. However, orchestrating them will be complex, akin to Kubernetes managing containers. Unlike deterministic containers, AI agents depend on dynamic, trained data, posing new enterprise challenges in memory management and infrastructure. </p><p>Learn more from The New Stack about evolution to AI agents:</p><p><a href="https://thenewstack.io/how-ai-agents-are-starting-to-automate-the-enterprise/"> How AI Agents Are Starting To Automate the Enterprise </a></p><p><a href="https://thenewstack.io/can-you-trust-ai-to-be-your-data-analyst/">Can You Trust AI To Be Your Data Analyst? </a></p><p><a href="https://thenewstack.io/agentic-ai-is-the-new-web-app-and-your-ai-strategy-must-evolve/">Agentic AI is the New Web App, and Your AI Strategy Must Evolve</a> </p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p><p> </p>
]]></content:encoded>
      <enclosure length="28834211" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/9dc220bf-5318-460b-8c92-66e03d156b31/audio/6664ffc8-66c2-41d1-ab94-5909fb10d7b6/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Goodbye SaaS, Hello AI Agents</itunes:title>
      <itunes:author>Janakiram &amp; Associates, Alex Williams, Janakiram MSV, The New Stack</itunes:author>
      <itunes:duration>00:30:02</itunes:duration>
      <itunes:summary>The transition from SaaS to Services as Software with AI agents is underway, necessitating new orchestration methods similar to Kubernetes for containers. AI agents will require resource allocation, workflow management, and scalable infrastructure as they evolve. Two key trends are driving this shift: Data Evolution – From spreadsheets to AI agents, data has progressed through relational databases, big data, predictive analytics, and generative AI. Computing Evolution – Starting from mainframes, the journey has moved through desktops, client-server, web/mobile, SaaS, and now agentic workflows. Janakiram MSV, an analyst, notes on this episode of The New Stack Makers that SaaS depends on data—without it, platforms like Salesforce and SAP lack value. As data becomes more actionable and compute more agentic, a new paradigm emerges: Services as Software. AI agents will automate tasks previously requiring human intervention, like emails and sales follow-ups. However, orchestrating them will be complex, akin to Kubernetes managing containers. Unlike deterministic containers, AI agents depend on dynamic, trained data, posing new enterprise challenges in memory management and infrastructure. </itunes:summary>
      <itunes:subtitle>The transition from SaaS to Services as Software with AI agents is underway, necessitating new orchestration methods similar to Kubernetes for containers. AI agents will require resource allocation, workflow management, and scalable infrastructure as they evolve. Two key trends are driving this shift: Data Evolution – From spreadsheets to AI agents, data has progressed through relational databases, big data, predictive analytics, and generative AI. Computing Evolution – Starting from mainframes, the journey has moved through desktops, client-server, web/mobile, SaaS, and now agentic workflows. Janakiram MSV, an analyst, notes on this episode of The New Stack Makers that SaaS depends on data—without it, platforms like Salesforce and SAP lack value. As data becomes more actionable and compute more agentic, a new paradigm emerges: Services as Software. AI agents will automate tasks previously requiring human intervention, like emails and sales follow-ups. However, orchestrating them will be complex, akin to Kubernetes managing containers. Unlike deterministic containers, AI agents depend on dynamic, trained data, posing new enterprise challenges in memory management and infrastructure. </itunes:subtitle>
      <itunes:keywords>software developer, ai agents, tech podcast, the new stack, tech, developer podcast, janakiram msv, the new stack makers, software engineer, janakiram &amp; associates</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1513</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">67f1b92c-86df-41c7-9b04-99af95145834</guid>
      <title>How Generative AI Is Reshaping the SDLC</title>
      <description><![CDATA[<p>Amazon Q Developer is streamlining the software development lifecycle by integrating AI-powered tools into AWS. In an interview at AWS in Seattle, Srini Iragavarapu, director of generative AI Applications and Developer Experiences at AWS, discussed how Amazon Q Developer enhances the developer experience. Initially focused on inline code completions, Amazon Q Developer evolved by incorporating generative AI models like Amazon Nova and Anthropic models, improving recommendations and accelerating development. British Telecom reported a 37% acceptance rate for AI-generated code.</p><p>Beyond code completion, Amazon Q Developer enables developers to interact with Q for code reviews, test generation, and migrations. AWS also developed agentic frameworks to automate undifferentiated tasks, such as upgrading Java versions. Iragavarapu noted that internally, AWS used Q Developer to migrate 30,000 production applications, saving $260 million annually. The platform offers code generation, testing suites, RAG capabilities, and access to AWS custom chips, further flattening the SDLC by automating routine work. Listen to The New Stack Makers for the full discussion.</p><p>Learn more from The New Stack about Amazon Q Developer: </p><p><a href="https://thenewstack.io/amazon-q-developer-now-handles-your-entire-code-pipeline/">Amazon Q Developer Now Handles Your Entire Code Pipeline </a></p><p><a href="https://thenewstack.io/amazon-q-apps-ai-powered-development-for-all/">Amazon Q Apps: AI-Powered Development for All </a></p><p><a href="https://thenewstack.io/amazon-revamps-developer-ai-with-code-conversion-security/">Amazon Revamps Developer AI With Code Conversion, Security </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></description>
      <pubDate>Thu, 06 Mar 2025 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Srini Iragavarapu, AWS, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-generative-ai-is-reshaping-the-sdlc-8CKcJTwe</link>
      <content:encoded><![CDATA[<p>Amazon Q Developer is streamlining the software development lifecycle by integrating AI-powered tools into AWS. In an interview at AWS in Seattle, Srini Iragavarapu, director of generative AI Applications and Developer Experiences at AWS, discussed how Amazon Q Developer enhances the developer experience. Initially focused on inline code completions, Amazon Q Developer evolved by incorporating generative AI models like Amazon Nova and Anthropic models, improving recommendations and accelerating development. British Telecom reported a 37% acceptance rate for AI-generated code.</p><p>Beyond code completion, Amazon Q Developer enables developers to interact with Q for code reviews, test generation, and migrations. AWS also developed agentic frameworks to automate undifferentiated tasks, such as upgrading Java versions. Iragavarapu noted that internally, AWS used Q Developer to migrate 30,000 production applications, saving $260 million annually. The platform offers code generation, testing suites, RAG capabilities, and access to AWS custom chips, further flattening the SDLC by automating routine work. Listen to The New Stack Makers for the full discussion.</p><p>Learn more from The New Stack about Amazon Q Developer: </p><p><a href="https://thenewstack.io/amazon-q-developer-now-handles-your-entire-code-pipeline/">Amazon Q Developer Now Handles Your Entire Code Pipeline </a></p><p><a href="https://thenewstack.io/amazon-q-apps-ai-powered-development-for-all/">Amazon Q Apps: AI-Powered Development for All </a></p><p><a href="https://thenewstack.io/amazon-revamps-developer-ai-with-code-conversion-security/">Amazon Revamps Developer AI With Code Conversion, Security </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="20845339" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/f89c6a46-1370-43a2-b253-b10e7a5b1f30/audio/b71a7efc-efec-4553-b7fc-36658a5e90bf/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Generative AI Is Reshaping the SDLC</itunes:title>
      <itunes:author>Srini Iragavarapu, AWS, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:21:42</itunes:duration>
      <itunes:summary>Amazon Q Developer is streamlining the software development lifecycle by integrating AI-powered tools into AWS. In an interview at AWS in Seattle, Srini Iragavarapu, director of generative AI Applications and Developer Experiences at AWS, discussed how Amazon Q Developer enhances the developer experience. Initially focused on inline code completions, Amazon Q Developer evolved by incorporating generative AI models like Amazon Nova and Anthropic models, improving recommendations and accelerating development. British Telecom reported a 37% acceptance rate for AI-generated code.</itunes:summary>
      <itunes:subtitle>Amazon Q Developer is streamlining the software development lifecycle by integrating AI-powered tools into AWS. In an interview at AWS in Seattle, Srini Iragavarapu, director of generative AI Applications and Developer Experiences at AWS, discussed how Amazon Q Developer enhances the developer experience. Initially focused on inline code completions, Amazon Q Developer evolved by incorporating generative AI models like Amazon Nova and Anthropic models, improving recommendations and accelerating development. British Telecom reported a 37% acceptance rate for AI-generated code.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, tech, developer podcast, the new stack makers, software engineer, srini iragavarapu, amazon q developer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1512</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a04614c2-0725-499e-a92f-8342b80b34da</guid>
      <title>OAuth Works for AI Agents but Scaling is Another Question</title>
      <description><![CDATA[<p>Maya Kaczorowski noticed that AI identity and AI agent identity concerns were emerging from outside the security industry, rather than from CISOs and security leaders. She concluded that OAuth, the open standard for authorization, already serves the purpose of granting access without exposing passwords. </p><p>Kaczorowski, a respected technologist and founder of Oblique, a startup focused on self-serve access controls, recently wrote about OAuth and AI agents and shared her insights on this episode of The New Stack Makers. She noted that developers see AI agents as extensions of themselves, granting them limited access to data and capabilities—precisely what OAuth is designed to handle. </p><p>The challenges with AI agent identity are vast, involving different approaches to authentication, such as those explored by companies like AuthZed. While existing authorization models like RBAC or ABAC may still apply, the real challenge lies in scale. The exponential growth of AI-related entities—from users to LLMs—could mean even small organizations manage hundreds of thousands of agents. Future solutions must accommodate this massive scale efficiently. </p><p>For the full discussion, check out The New Stack Makers interview with Kaczorowski. </p><p>Learn more from The New Stack about OAuth requirements for AI Agents: </p><p><a href="https://thenewstack.io/oauth-2-0-a-standard-in-name-only/">OAuth 2.0: A Standard in Name Only? </a></p><p><a href="https://thenewstack.io/ai-agents-are-redefining-the-future-of-identity-and-access-management/">AI Agents Are Redefining the Future of Identity and Access Management</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></description>
      <pubDate>Thu, 27 Feb 2025 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Maya Kaczorowski, Oblique Security, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/oauth-works-for-ai-agents-but-scaling-is-another-question-W7a2Iye1</link>
      <content:encoded><![CDATA[<p>Maya Kaczorowski noticed that AI identity and AI agent identity concerns were emerging from outside the security industry, rather than from CISOs and security leaders. She concluded that OAuth, the open standard for authorization, already serves the purpose of granting access without exposing passwords. </p><p>Kaczorowski, a respected technologist and founder of Oblique, a startup focused on self-serve access controls, recently wrote about OAuth and AI agents and shared her insights on this episode of The New Stack Makers. She noted that developers see AI agents as extensions of themselves, granting them limited access to data and capabilities—precisely what OAuth is designed to handle. </p><p>The challenges with AI agent identity are vast, involving different approaches to authentication, such as those explored by companies like AuthZed. While existing authorization models like RBAC or ABAC may still apply, the real challenge lies in scale. The exponential growth of AI-related entities—from users to LLMs—could mean even small organizations manage hundreds of thousands of agents. Future solutions must accommodate this massive scale efficiently. </p><p>For the full discussion, check out The New Stack Makers interview with Kaczorowski. </p><p>Learn more from The New Stack about OAuth requirements for AI Agents: </p><p><a href="https://thenewstack.io/oauth-2-0-a-standard-in-name-only/">OAuth 2.0: A Standard in Name Only? </a></p><p><a href="https://thenewstack.io/ai-agents-are-redefining-the-future-of-identity-and-access-management/">AI Agents Are Redefining the Future of Identity and Access Management</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="24578550" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/8ae0077b-1687-4f33-8240-f72d89bd834c/audio/4d24e824-102b-4bcc-805a-24957e1e2d33/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>OAuth Works for AI Agents but Scaling is Another Question</itunes:title>
      <itunes:author>Maya Kaczorowski, Oblique Security, The New Stack, Alex Williams</itunes:author>
      <itunes:duration>00:25:36</itunes:duration>
      <itunes:summary>Maya Kaczorowski noticed that AI identity and AI agent identity concerns were emerging from outside the security industry, rather than from CISOs and security leaders. She concluded that OAuth, the open standard for authorization, already serves the purpose of granting access without exposing passwords. 

Kaczorowski, a respected technologist and founder of Oblique, a startup focused on self-serve access controls, recently wrote about OAuth and AI agents and shared her insights on this episode of The New Stack Makers. She noted that developers see AI agents as extensions of themselves, granting them limited access to data and capabilities—precisely what OAuth is designed to handle. </itunes:summary>
      <itunes:subtitle>Maya Kaczorowski noticed that AI identity and AI agent identity concerns were emerging from outside the security industry, rather than from CISOs and security leaders. She concluded that OAuth, the open standard for authorization, already serves the purpose of granting access without exposing passwords. 

Kaczorowski, a respected technologist and founder of Oblique, a startup focused on self-serve access controls, recently wrote about OAuth and AI agents and shared her insights on this episode of The New Stack Makers. She noted that developers see AI agents as extensions of themselves, granting them limited access to data and capabilities—precisely what OAuth is designed to handle. </itunes:subtitle>
      <itunes:keywords>oauth, software developer, ai agents, maya kaczorowski, tech podcast, the new stack, tech, developer podcast, oblique security, the new stack makers, software engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1511</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c987c4f2-c089-4819-b880-332fc8c87425</guid>
      <title>LLMs and AI Agents Evolving Like Programming Languages</title>
      <description><![CDATA[<p>The rise of the World Wide Web enabled developers to build tools and platforms on top of it. Similarly, the advent of large language models (LLMs) allows for creating new AI-driven tools, such as autonomous agents that interact with LLMs, execute tasks, and make decisions. However, verifying these decisions is crucial, and critical reasoning may be a solution, according to Yam Marcovitz, tech lead at Parlant.io and CEO of emcie.co.</p><p>Marcovitz likens LLM development to the evolution of programming languages, from punch cards to modern languages like Python. Early LLMs started with small transformer models, leading to systems like BERT and GPT-3. Now, instead of mere text auto-completion, models are evolving to enable better reasoning and complex instructions.</p><p>Parlant uses "attentive reasoning queries (ARQs)" to maintain consistency in AI responses, ensuring near-perfect accuracy. Their approach balances structure and flexibility, preventing models from operating entirely autonomously. Ultimately, Marcovitz argues that subjectivity in human interpretation extends to LLMs, making perfect objectivity unrealistic.</p><p>Learn more from The New Stack about the evolution of LLMs: </p><p><a href="https://thenewstack.io/ai-alignment-in-practice-what-it-means-and-how-to-get-it/">AI Alignment in Practice: What It Means and How to Get It </a></p><p><a href="https://thenewstack.io/agentic-ai-the-next-frontier-of-ai-power/">Agentic AI: The Next Frontier of AI Power </a></p><p><a href="https://thenewstack.io/make-the-most-of-ai-agents-tips-and-tricks-for-developers/">Make the Most of AI Agents: Tips and Tricks for Developers </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></description>
      <pubDate>Thu, 20 Feb 2025 18:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Yam Marcovitz, Emcie, Parlant, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/llms-and-ai-agents-evolving-like-programming-languages-L4yppusF</link>
      <content:encoded><![CDATA[<p>The rise of the World Wide Web enabled developers to build tools and platforms on top of it. Similarly, the advent of large language models (LLMs) allows for creating new AI-driven tools, such as autonomous agents that interact with LLMs, execute tasks, and make decisions. However, verifying these decisions is crucial, and critical reasoning may be a solution, according to Yam Marcovitz, tech lead at Parlant.io and CEO of emcie.co.</p><p>Marcovitz likens LLM development to the evolution of programming languages, from punch cards to modern languages like Python. Early LLMs started with small transformer models, leading to systems like BERT and GPT-3. Now, instead of mere text auto-completion, models are evolving to enable better reasoning and complex instructions.</p><p>Parlant uses "attentive reasoning queries (ARQs)" to maintain consistency in AI responses, ensuring near-perfect accuracy. Their approach balances structure and flexibility, preventing models from operating entirely autonomously. Ultimately, Marcovitz argues that subjectivity in human interpretation extends to LLMs, making perfect objectivity unrealistic.</p><p>Learn more from The New Stack about the evolution of LLMs: </p><p><a href="https://thenewstack.io/ai-alignment-in-practice-what-it-means-and-how-to-get-it/">AI Alignment in Practice: What It Means and How to Get It </a></p><p><a href="https://thenewstack.io/agentic-ai-the-next-frontier-of-ai-power/">Agentic AI: The Next Frontier of AI Power </a></p><p><a href="https://thenewstack.io/make-the-most-of-ai-agents-tips-and-tricks-for-developers/">Make the Most of AI Agents: Tips and Tricks for Developers </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="27014416" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/a12d81ba-3877-41fd-9422-128afb6ff201/audio/227cd246-dfe3-44f3-8a19-038ef8e43ae3/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>LLMs and AI Agents Evolving Like Programming Languages</itunes:title>
      <itunes:author>Yam Marcovitz, Emcie, Parlant, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:28:08</itunes:duration>
      <itunes:summary>The rise of the World Wide Web enabled developers to build tools and platforms on top of it. Similarly, the advent of large language models (LLMs) allows for creating new AI-driven tools, such as autonomous agents that interact with LLMs, execute tasks, and make decisions. However, verifying these decisions is crucial, and critical reasoning may be a solution, according to Yam Marcovitz, tech lead at Parlant.io and CEO of emcie.co.</itunes:summary>
      <itunes:subtitle>The rise of the World Wide Web enabled developers to build tools and platforms on top of it. Similarly, the advent of large language models (LLMs) allows for creating new AI-driven tools, such as autonomous agents that interact with LLMs, execute tasks, and make decisions. However, verifying these decisions is crucial, and critical reasoning may be a solution, according to Yam Marcovitz, tech lead at Parlant.io and CEO of emcie.co.</itunes:subtitle>
      <itunes:keywords>yam marcovitz, ai agents, ai, alex williams, the new stack, parlant, emcie, tech, developer podcast, the new stack makers, software engineer, llm chat, customer service</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1510</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a7170521-1d38-478a-8a43-3f943b41b0b4</guid>
      <title>Writing Code About Your Infrastructure? That&apos;s a Losing Race</title>
      <description><![CDATA[<p>Adam Jacob, CEO of System Initiative, discusses a shift in infrastructure automation—moving from writing code to building models that enable rapid simulations and collaboration. In The New Stack Makers, he compares this approach to Formula One racing, where teams use high-fidelity models to simulate race conditions, optimizing performance before hitting the track.</p><p>System Initiative applies this concept to enterprise automation, creating a model that understands how infrastructure components interact. This enables fast, multiplayer feedback loops, simplifying complex tasks while enhancing collaboration. Engineers can extend the system by writing small, reactive JavaScript functions that automate processes, such as transforming existing architectures into new ones. The platform visually represents these transformations, making automation more intuitive and efficient.</p><p>By leveraging models instead of traditional code-based infrastructure management, System Initiative enhances agility, reduces complexity, and improves DevOps collaboration. To explore how this ties into the concept of the digital twin, listen to the full <i>New Stack Makers</i> episode.</p><p>Learn more from The New Stack about System Initiative:</p><p><a href="https://thenewstack.io/system-initiative-goes-live-beyond-infrastructure-as-code/">Beyond Infrastructure as Code: System Initiative Goes Live</a></p><p><a href="https://thenewstack.io/how-system-initiative-treats-aws-components-as-digital-twins/">How System Initiative Treats AWS Components as Digital Twins</a></p><p><a href="https://thenewstack.io/system-initiative-code-now-open-source/">System Initiative Code Now Open Source</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 13 Feb 2025 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Alex Williams, System Initiative, Adam Jacob)</author>
      <link>https://thenewstack.simplecast.com/episodes/writing-code-about-your-infrastructure-thats-a-losing-race-L7y3lD2U</link>
      <content:encoded><![CDATA[<p>Adam Jacob, CEO of System Initiative, discusses a shift in infrastructure automation—moving from writing code to building models that enable rapid simulations and collaboration. In The New Stack Makers, he compares this approach to Formula One racing, where teams use high-fidelity models to simulate race conditions, optimizing performance before hitting the track.</p><p>System Initiative applies this concept to enterprise automation, creating a model that understands how infrastructure components interact. This enables fast, multiplayer feedback loops, simplifying complex tasks while enhancing collaboration. Engineers can extend the system by writing small, reactive JavaScript functions that automate processes, such as transforming existing architectures into new ones. The platform visually represents these transformations, making automation more intuitive and efficient.</p><p>By leveraging models instead of traditional code-based infrastructure management, System Initiative enhances agility, reduces complexity, and improves DevOps collaboration. To explore how this ties into the concept of the digital twin, listen to the full <i>New Stack Makers</i> episode.</p><p>Learn more from The New Stack about System Initiative:</p><p><a href="https://thenewstack.io/system-initiative-goes-live-beyond-infrastructure-as-code/">Beyond Infrastructure as Code: System Initiative Goes Live</a></p><p><a href="https://thenewstack.io/how-system-initiative-treats-aws-components-as-digital-twins/">How System Initiative Treats AWS Components as Digital Twins</a></p><p><a href="https://thenewstack.io/system-initiative-code-now-open-source/">System Initiative Code Now Open Source</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="30109822" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/66c72248-a127-42e8-ab71-b214731e0794/audio/cd32dccb-e96f-4a6e-8cbe-1b3bea8aafbd/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Writing Code About Your Infrastructure? That&apos;s a Losing Race</itunes:title>
      <itunes:author>The New Stack, Alex Williams, System Initiative, Adam Jacob</itunes:author>
      <itunes:duration>00:31:21</itunes:duration>
      <itunes:summary>Adam Jacob, CEO of System Initiative, discusses a shift in infrastructure automation—moving from writing code to building models that enable rapid simulations and collaboration. In The New Stack Makers, he compares this approach to Formula One racing, where teams use high-fidelity models to simulate race conditions, optimizing performance before hitting the track.</itunes:summary>
      <itunes:subtitle>Adam Jacob, CEO of System Initiative, discusses a shift in infrastructure automation—moving from writing code to building models that enable rapid simulations and collaboration. In The New Stack Makers, he compares this approach to Formula One racing, where teams use high-fidelity models to simulate race conditions, optimizing performance before hitting the track.</itunes:subtitle>
      <itunes:keywords>adam jacob, software developer, tech podcast, alex williams, the new stack, enteprise automation, devops, system initiative, tech, developer podcast, the new stack makers, software engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1509</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f1749d05-b990-4f19-b48a-147f02df1297</guid>
      <title>OpenTelemetry: What’s New with the 2nd Biggest CNCF Project?</title>
      <description><![CDATA[<p>Morgan McLean, co-founder of OpenTelemetry and senior director of product management at Splunk, has long tackled the challenges of observability in large-scale systems. In a conversation with Alex Williams on <i>The New Stack Makers</i>, McLean reflected on his early frustrations debugging high-scale services and the need for better observability tools.</p><p>OpenTelemetry, formed in 2019 from OpenTracing and OpenCensus, has since become a key part of modern observability strategies. As a Cloud Native Computing Foundation (CNCF) incubating project, it’s the second most active open source project after Kubernetes, with over 1,200 developers contributing monthly. McLean highlighted OpenTelemetry’s role in solving scaling challenges, particularly in Kubernetes environments, by standardizing distributed tracing, application metrics, and data extraction.</p><p>Looking ahead, profiling is set to become the fourth major observability signal alongside logs, tracing, and metrics, with general availability expected in 2025. McLean emphasized ongoing improvements, including automation and ease of adoption, predicting even faster OpenTelemetry adoption as friction points are resolved.</p><p>Learn more from The New Stack about the latest trends in OpenTelemetry:</p><p><a href="https://thenewstack.io/what-is-opentelemetry-the-ultimate-guide/">What Is OpenTelemetry? The Ultimate Guide</a></p><p><a href="https://thenewstack.io/observability-in-2025-opentelemetry-and-ai-to-fill-in-gaps/">Observability in 2025: OpenTelemetry and AI to Fill In Gaps</a></p><p><a href="https://thenewstack.io/honeycomb-ios-austin-parker-opentelemetry-in-depth/">Honeycomb.io’s Austin Parker: OpenTelemetry In-Depth</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></description>
      <pubDate>Thu, 06 Feb 2025 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Morgan McLean, Splunk, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/opentelemetry-whats-new-with-the-2nd-biggest-cncf-project-fR_i9boO</link>
      <content:encoded><![CDATA[<p>Morgan McLean, co-founder of OpenTelemetry and senior director of product management at Splunk, has long tackled the challenges of observability in large-scale systems. In a conversation with Alex Williams on <i>The New Stack Makers</i>, McLean reflected on his early frustrations debugging high-scale services and the need for better observability tools.</p><p>OpenTelemetry, formed in 2019 from OpenTracing and OpenCensus, has since become a key part of modern observability strategies. As a Cloud Native Computing Foundation (CNCF) incubating project, it’s the second most active open source project after Kubernetes, with over 1,200 developers contributing monthly. McLean highlighted OpenTelemetry’s role in solving scaling challenges, particularly in Kubernetes environments, by standardizing distributed tracing, application metrics, and data extraction.</p><p>Looking ahead, profiling is set to become the fourth major observability signal alongside logs, tracing, and metrics, with general availability expected in 2025. McLean emphasized ongoing improvements, including automation and ease of adoption, predicting even faster OpenTelemetry adoption as friction points are resolved.</p><p>Learn more from The New Stack about the latest trends in OpenTelemetry:</p><p><a href="https://thenewstack.io/what-is-opentelemetry-the-ultimate-guide/">What Is OpenTelemetry? The Ultimate Guide</a></p><p><a href="https://thenewstack.io/observability-in-2025-opentelemetry-and-ai-to-fill-in-gaps/">Observability in 2025: OpenTelemetry and AI to Fill In Gaps</a></p><p><a href="https://thenewstack.io/honeycomb-ios-austin-parker-opentelemetry-in-depth/">Honeycomb.io’s Austin Parker: OpenTelemetry In-Depth</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="29036921" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/2b0f307f-6ef6-40b2-af31-b91c32709987/audio/4eebe84c-fa14-41a3-a8f8-3b3661bf2877/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>OpenTelemetry: What’s New with the 2nd Biggest CNCF Project?</itunes:title>
      <itunes:author>Morgan McLean, Splunk, The New Stack, Alex Williams</itunes:author>
      <itunes:duration>00:30:14</itunes:duration>
      <itunes:summary>Morgan McLean, co-founder of OpenTelemetry and senior director of product management at Splunk, has long tackled the challenges of observability in large-scale systems. In a conversation with Alex Williams on The New Stack Makers, McLean reflected on his early frustrations debugging high-scale services and the need for better observability tools.</itunes:summary>
      <itunes:subtitle>Morgan McLean, co-founder of OpenTelemetry and senior director of product management at Splunk, has long tackled the challenges of observability in large-scale systems. In a conversation with Alex Williams on The New Stack Makers, McLean reflected on his early frustrations debugging high-scale services and the need for better observability tools.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, morgan mclean, tech, developer podcast, open telemetry, software engineer, observability</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1508</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">71e7cee0-7aec-4bae-8edf-90eec3093f6c</guid>
      <title>What’s Driving the Rising Cost of Observability?</title>
      <description><![CDATA[<p>Observability is expensive because traditional tools weren’t designed for the complexity and scale of modern cloud-native systems, explains Christine Yen, CEO of Honeycomb.io. Logging tools, while flexible, were optimized for manual, human-scale data reading. This approach struggles with the massive scale of today’s software, making logging slow and resource-intensive. Monitoring tools, with their dashboards and metrics, prioritized speed over flexibility, which doesn’t align with the dynamic nature of containerized microservices. Similarly, traditional APM tools relied on “magical” setups tailored for consistent application environments like Rails, but they falter in modern polyglot infrastructures with diverse frameworks.</p><p>Additionally, observability costs are rising due to evolving demands from DevOps, platform engineering, and site reliability engineering (SRE). Practices like service-level objectives (SLOs) emphasize end-user experience, pushing teams to track meaningful metrics. However, outdated observability tools often hinder this, forcing teams to cut back on crucial data. Yen highlights the potential of AI and innovations like OpenTelemetry to address these challenges.</p><p>Learn more from The New Stack about the latest trends in observability:</p><p><a href="https://thenewstack.io/honeycomb-ios-austin-parker-opentelemetry-in-depth/">Honeycomb.io’s Austin Parker: OpenTelemetry In-Depth</a></p><p><a href="https://thenewstack.io/observability-in-2025-opentelemetry-and-ai-to-fill-in-gaps/">Observability in 2025: OpenTelemetry and AI to Fill In Gaps</a></p><p><a href="https://thenewstack.io/observability-and-ai-new-connections-at-kubecon/">Observability and AI: New Connections at KubeCon</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></description>
      <pubDate>Thu, 30 Jan 2025 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Christine Yen, Honeycomb, The New Stack, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/whats-driving-the-rising-cost-of-observability-SeSa2rtt</link>
      <content:encoded><![CDATA[<p>Observability is expensive because traditional tools weren’t designed for the complexity and scale of modern cloud-native systems, explains Christine Yen, CEO of Honeycomb.io. Logging tools, while flexible, were optimized for manual, human-scale data reading. This approach struggles with the massive scale of today’s software, making logging slow and resource-intensive. Monitoring tools, with their dashboards and metrics, prioritized speed over flexibility, which doesn’t align with the dynamic nature of containerized microservices. Similarly, traditional APM tools relied on “magical” setups tailored for consistent application environments like Rails, but they falter in modern polyglot infrastructures with diverse frameworks.</p><p>Additionally, observability costs are rising due to evolving demands from DevOps, platform engineering, and site reliability engineering (SRE). Practices like service-level objectives (SLOs) emphasize end-user experience, pushing teams to track meaningful metrics. However, outdated observability tools often hinder this, forcing teams to cut back on crucial data. Yen highlights the potential of AI and innovations like OpenTelemetry to address these challenges.</p><p>Learn more from The New Stack about the latest trends in observability:</p><p><a href="https://thenewstack.io/honeycomb-ios-austin-parker-opentelemetry-in-depth/">Honeycomb.io’s Austin Parker: OpenTelemetry In-Depth</a></p><p><a href="https://thenewstack.io/observability-in-2025-opentelemetry-and-ai-to-fill-in-gaps/">Observability in 2025: OpenTelemetry and AI to Fill In Gaps</a></p><p><a href="https://thenewstack.io/observability-and-ai-new-connections-at-kubecon/">Observability and AI: New Connections at KubeCon</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="23932803" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/f6fe85bf-da8b-4e38-ac1e-5a19578e1491/audio/7d94e100-9a69-483e-adba-5d73cd003510/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What’s Driving the Rising Cost of Observability?</itunes:title>
      <itunes:author>Christine Yen, Honeycomb, The New Stack, Heather Joslyn</itunes:author>
      <itunes:duration>00:24:55</itunes:duration>
      <itunes:summary>Observability is expensive because traditional tools weren’t designed for the complexity and scale of modern cloud-native systems, explains Christine Yen, CEO of Honeycomb.io. Logging tools, while flexible, were optimized for manual, human-scale data reading. This approach struggles with the massive scale of today’s software, making logging slow and resource-intensive. Monitoring tools, with their dashboards and metrics, prioritized speed over flexibility, which doesn’t align with the dynamic nature of containerized microservices. Similarly, traditional APM tools relied on “magical” setups tailored for consistent application environments like Rails, but they falter in modern polyglot infrastructures with diverse frameworks.</itunes:summary>
      <itunes:subtitle>Observability is expensive because traditional tools weren’t designed for the complexity and scale of modern cloud-native systems, explains Christine Yen, CEO of Honeycomb.io. Logging tools, while flexible, were optimized for manual, human-scale data reading. This approach struggles with the massive scale of today’s software, making logging slow and resource-intensive. Monitoring tools, with their dashboards and metrics, prioritized speed over flexibility, which doesn’t align with the dynamic nature of containerized microservices. Similarly, traditional APM tools relied on “magical” setups tailored for consistent application environments like Rails, but they falter in modern polyglot infrastructures with diverse frameworks.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, honeycomb, tech, developer podcast, monitoring, open telemetry, the new stack makers, software engineer, observability</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1507</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b94d642e-e929-4338-818f-14712f94589c</guid>
      <title>How Oracle Is Meeting the Infrastructure Needs of AI</title>
      <description><![CDATA[<p>Generative AI is a data-driven story with significant infrastructure and operational implications, particularly around the rising demand for GPUs, which are better suited for AI workloads than CPUs. In an episode of <i>The New Stack Makers</i> recorded at KubeCon + CloudNativeCon North America, Sudha Raghavan, SVP for Developer Platform at Oracle Cloud Infrastructure, discussed how AI’s rapid adoption has reshaped infrastructure needs.</p><p>The release of ChatGPT triggered a surge in GPU demand, with organizations requiring GPUs for tasks ranging from testing workloads to training large language models across massive GPU clusters. These workloads run continuously at peak power, posing challenges such as high hardware failure rates and energy consumption.</p><p>Oracle is addressing these issues by building GPU superclusters and enhancing Kubernetes functionality. Tools like Oracle’s Node Manager simplify interactions between Kubernetes and GPUs, providing tailored observability while maintaining Kubernetes’ user-friendly experience. Raghavan emphasized the importance of stateful job management and infrastructure innovations to meet the demands of modern AI workloads.</p><p>Learn more from The New Stack about how Oracle is addressing the GPU demand for AI workloads with its GPU superclusters and enhanced Kubernetes functionality:</p><p><a href="https://thenewstack.io/oracle-code-assist-java-optimized-now-in-beta/">Oracle Code Assist, Java-Optimized, Now in Beta</a></p><p><a href="https://thenewstack.io/oracles-code-assist-fashionably-late-to-the-genai-party/">Oracle’s Code Assist: Fashionably Late to the GenAI Party</a></p><p><a href="https://thenewstack.io/oracle-unveils-java-23-simplicity-meets-enterprise-power/">Oracle Unveils Java 23: Simplicity Meets Enterprise Power</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 23 Jan 2025 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Oracle, Sudha Raghavan, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-oracle-is-meeting-the-infrastructure-needs-of-ai-SPc1NBc8</link>
      <content:encoded><![CDATA[<p>Generative AI is a data-driven story with significant infrastructure and operational implications, particularly around the rising demand for GPUs, which are better suited for AI workloads than CPUs. In an episode of <i>The New Stack Makers</i> recorded at KubeCon + CloudNativeCon North America, Sudha Raghavan, SVP for Developer Platform at Oracle Cloud Infrastructure, discussed how AI’s rapid adoption has reshaped infrastructure needs.</p><p>The release of ChatGPT triggered a surge in GPU demand, with organizations requiring GPUs for tasks ranging from testing workloads to training large language models across massive GPU clusters. These workloads run continuously at peak power, posing challenges such as high hardware failure rates and energy consumption.</p><p>Oracle is addressing these issues by building GPU superclusters and enhancing Kubernetes functionality. Tools like Oracle’s Node Manager simplify interactions between Kubernetes and GPUs, providing tailored observability while maintaining Kubernetes’ user-friendly experience. Raghavan emphasized the importance of stateful job management and infrastructure innovations to meet the demands of modern AI workloads.</p><p>Learn more from The New Stack about how Oracle is addressing the GPU demand for AI workloads with its GPU superclusters and enhanced Kubernetes functionality:</p><p><a href="https://thenewstack.io/oracle-code-assist-java-optimized-now-in-beta/">Oracle Code Assist, Java-Optimized, Now in Beta</a></p><p><a href="https://thenewstack.io/oracles-code-assist-fashionably-late-to-the-genai-party/">Oracle’s Code Assist: Fashionably Late to the GenAI Party</a></p><p><a href="https://thenewstack.io/oracle-unveils-java-23-simplicity-meets-enterprise-power/">Oracle Unveils Java 23: Simplicity Meets Enterprise Power</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="26375845" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/60cb6396-706f-442c-9e98-c10a9f480b53/audio/d946296f-6646-44c2-adab-57c91246bbbe/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Oracle Is Meeting the Infrastructure Needs of AI</itunes:title>
      <itunes:author>Oracle, Sudha Raghavan, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:27:28</itunes:duration>
      <itunes:summary>Generative AI is a data-driven story with significant infrastructure and operational implications, particularly around the rising demand for GPUs, which are better suited for AI workloads than CPUs. In an episode of The New Stack Makers recorded at KubeCon + CloudNativeCon North America, Sudha Raghavan, SVP for Developer Platform at Oracle Cloud Infrastructure, discussed how AI’s rapid adoption has reshaped infrastructure needs, and how Oracle is addressing it by building GPU superclusters and enhancing Kubernetes functionality.</itunes:summary>
      <itunes:subtitle>Generative AI is a data-driven story with significant infrastructure and operational implications, particularly around the rising demand for GPUs, which are better suited for AI workloads than CPUs. In an episode of The New Stack Makers recorded at KubeCon + CloudNativeCon North America, Sudha Raghavan, SVP for Developer Platform at Oracle Cloud Infrastructure, discussed how AI’s rapid adoption has reshaped infrastructure needs, and how Oracle is addressing it by building GPU superclusters and enhancing Kubernetes functionality.</itunes:subtitle>
      <itunes:keywords>generative ai, oracle, software developer, sudha raghavan, gpu superclusters, gpu, tech podcast, the new stack, ai workloads, tech, the new stack makers, software engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1506</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">4ab9c6dc-5ef6-4125-815a-152a785cfb14</guid>
      <title>Arm: See a Demo About Migrating an x86-Based App to ARM64</title>
      <description><![CDATA[<p>The hardware industry is surging, driven by AI's demanding workloads, with Arm—a 35-year-old pioneer in processor IP—playing a pivotal role. In an episode of <i>The New Stack Makers</i> recorded at KubeCon + CloudNativeCon North America, Pranay Bakre, principal solutions engineer at Arm, discussed how Arm is helping organizations migrate and run applications on its technology.</p><p>Bakre highlighted Arm’s partnerships with hyperscalers like AWS, Google, Microsoft, and Oracle, showcasing processors such as AWS Graviton and Google Axion, built on Arm’s power-efficient, cost-effective Neoverse IP. This design ethos has spurred wide adoption, with 90-95% of CNCF projects supporting native Arm binaries.</p><p>Attendees at Arm’s booth frequently inquired about its plans to support AI workloads. Bakre noted the performance advantages of Arm-based infrastructure, delivering up to 60% workload improvements over legacy architectures. The episode also features a demo on migrating x86 applications to ARM64 in both cloud and containerized environments, emphasizing Arm’s readiness for the AI era.</p><p>Learn more from The New Stack about Arm:</p><p><a href="https://thenewstack.io/arm-eyes-ai-with-its-latest-neoverse-cores-and-subsystems/">Arm Eyes AI with Its Latest Neoverse Cores and Subsystems</a></p><p><a href="https://thenewstack.io/big-three-in-cloud-prompts-arm-to-rethink-software/">Big Three in Cloud Prompts Arm to Rethink Software</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 16 Jan 2025 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Pranay Bakre, Arm, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/arm-see-a-demo-about-migrating-a-x86-based-app-to-arm64-eiKi_8nz</link>
      <content:encoded><![CDATA[<p>The hardware industry is surging, driven by AI's demanding workloads, with Arm—a 35-year-old pioneer in processor IP—playing a pivotal role. In an episode of <i>The New Stack Makers</i> recorded at KubeCon + CloudNativeCon North America, Pranay Bakre, principal solutions engineer at Arm, discussed how Arm is helping organizations migrate and run applications on its technology.</p><p>Bakre highlighted Arm’s partnerships with hyperscalers like AWS, Google, Microsoft, and Oracle, showcasing processors such as AWS Graviton and Google Axion, built on Arm’s power-efficient, cost-effective Neoverse IP. This design ethos has spurred wide adoption, with 90-95% of CNCF projects supporting native Arm binaries.</p><p>Attendees at Arm’s booth frequently inquired about its plans to support AI workloads. Bakre noted the performance advantages of Arm-based infrastructure, delivering up to 60% workload improvements over legacy architectures. The episode also features a demo on migrating x86 applications to ARM64 in both cloud and containerized environments, emphasizing Arm’s readiness for the AI era.</p><p>Learn more from The New Stack about Arm:</p><p><a href="https://thenewstack.io/arm-eyes-ai-with-its-latest-neoverse-cores-and-subsystems/">Arm Eyes AI with Its Latest Neoverse Cores and Subsystems</a></p><p><a href="https://thenewstack.io/big-three-in-cloud-prompts-arm-to-rethink-software/">Big Three in Cloud Prompts Arm to Rethink Software</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="20618451" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/089056ac-de7e-468a-b7c0-5c3718de4c00/audio/3c7dfcdd-18cf-4b57-b2d6-2d99930ef5ee/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Arm: See a Demo About Migrating an x86-Based App to ARM64</itunes:title>
      <itunes:author>Pranay Bakre, Arm, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:21:28</itunes:duration>
      <itunes:summary>The hardware industry is surging, driven by AI&apos;s demanding workloads, with Arm—a 35-year-old pioneer in processor IP—playing a pivotal role. In an episode of The New Stack Makers recorded at KubeCon + CloudNativeCon North America, Pranay Bakre, principal solutions engineer at Arm, discussed how Arm is helping organizations migrate and run applications on its technology.</itunes:summary>
      <itunes:subtitle>The hardware industry is surging, driven by AI&apos;s demanding workloads, with Arm—a 35-year-old pioneer in processor IP—playing a pivotal role. In an episode of The New Stack Makers recorded at KubeCon + CloudNativeCon North America, Pranay Bakre, principal solutions engineer at Arm, discussed how Arm is helping organizations migrate and run applications on its technology.</itunes:subtitle>
      <itunes:keywords>application deployment, pranay bakre, tech podcast, the new stack, devops, ai workloads, devops podcast, tech, the new stack makers, software engineer, performance tuning</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1505</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1c1a09a7-ba53-4086-9f2e-56f4587f3936</guid>
      <title>Heroku Moved Twelve-Factor Apps to Open Source. What’s Next?</title>
      <description><![CDATA[<p>Heroku has open-sourced its Twelve-Factor App methodology, initially created in 2011 to help developers build portable, resilient cloud applications. Heroku CTO Gail Frederick announced this shift at KubeCon + CloudNativeCon North America, explaining the move aims to involve the community in modernizing the framework. While the methodology inspired a generation of cloud developers, certain factors are now outdated, such as the focus on logs as event streams. Frederick highlighted the need for updates to address current practices like telemetry and metrics visualization, reflecting the rise of OpenTelemetry.</p><p>The updated Twelve-Factor methodology will expand to accommodate modern cloud-native realities, such as deploying interconnected systems of apps with diverse backing services. Planned enhancements include supporting documents, reference architectures, and code examples illustrating the principles in action. Success will be measured by its applicability to use cases involving edge computing, IoT, serverless, and distributed systems. Heroku views this open-source effort as an opportunity to redefine best practices for the next era of cloud development.</p><p>Learn more from The New Stack about Heroku: </p><p><a href="https://thenewstack.io/how-heroku-is-positioned-to-help-ops-engineers-in-the-genai-era/">How Heroku Is Positioned To Help Ops Engineers in the GenAI Era</a></p><p><a href="https://thenewstack.io/the-data-stack-journey-lessons-from-architecting-stacks-at-heroku-and-mattermost/">The Data Stack Journey: Lessons from Architecting Stacks at Heroku and Mattermost</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></description>
      <pubDate>Thu, 02 Jan 2025 19:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Gail Frederick, Heroku, Salesforce, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/heroku-moved-twelve-factor-apps-to-open-source-whats-next-3XavhakW</link>
      <content:encoded><![CDATA[<p>Heroku has open-sourced its Twelve-Factor App methodology, initially created in 2011 to help developers build portable, resilient cloud applications. Heroku CTO Gail Frederick announced this shift at KubeCon + CloudNativeCon North America, explaining the move aims to involve the community in modernizing the framework. While the methodology inspired a generation of cloud developers, certain factors are now outdated, such as the focus on logs as event streams. Frederick highlighted the need for updates to address current practices like telemetry and metrics visualization, reflecting the rise of OpenTelemetry.</p><p>The updated Twelve-Factor methodology will expand to accommodate modern cloud-native realities, such as deploying interconnected systems of apps with diverse backing services. Planned enhancements include supporting documents, reference architectures, and code examples illustrating the principles in action. Success will be measured by its applicability to use cases involving edge computing, IoT, serverless, and distributed systems. Heroku views this open-source effort as an opportunity to redefine best practices for the next era of cloud development.</p><p>Learn more from The New Stack about Heroku: </p><p><a href="https://thenewstack.io/how-heroku-is-positioned-to-help-ops-engineers-in-the-genai-era/">How Heroku Is Positioned To Help Ops Engineers in the GenAI Era</a></p><p><a href="https://thenewstack.io/the-data-stack-journey-lessons-from-architecting-stacks-at-heroku-and-mattermost/">The Data Stack Journey: Lessons from Architecting Stacks at Heroku and Mattermost</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="21988110" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/3fdb023d-359a-4995-b7ce-d3554561cd92/audio/af85db54-212a-4521-8e9f-6cf062cf9954/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Heroku Moved Twelve-Factor Apps to Open Source. What’s Next?</itunes:title>
      <itunes:author>Gail Frederick, Heroku, Salesforce, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:22:54</itunes:duration>
      <itunes:summary>Heroku has open-sourced its Twelve-Factor App methodology, initially created in 2011 to help developers build portable, resilient cloud applications. Heroku CTO Gail Frederick announced this shift at KubeCon + CloudNativeCon North America, explaining the move aims to involve the community in modernizing the framework. While the methodology inspired a generation of cloud developers, certain factors are now outdated, such as the focus on logs as event streams. Frederick highlighted the need for updates to address current practices like telemetry and metrics visualization, reflecting the rise of OpenTelemetry.</itunes:summary>
      <itunes:subtitle>Heroku has open-sourced its Twelve-Factor App methodology, initially created in 2011 to help developers build portable, resilient cloud applications. Heroku CTO Gail Frederick announced this shift at KubeCon + CloudNativeCon North America, explaining the move aims to involve the community in modernizing the framework. While the methodology inspired a generation of cloud developers, certain factors are now outdated, such as the focus on logs as event streams. Frederick highlighted the need for updates to address current practices like telemetry and metrics visualization, reflecting the rise of OpenTelemetry.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, alex williams, the new stack, tech, developer podcast, the new stack makers, software engineer, salesforce, gail frederick, heroku</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1504</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a50a8866-1c86-43b8-a6f1-219f0f04db3c</guid>
      <title>How Falco Brought Real-Time Observability to Infrastructure</title>
      <description><![CDATA[<p>Falco, an open-source runtime observability and security tool, was created by Sysdig founder Loris Degioanni to collect real-time system events directly from the kernel. Leveraging eBPF technology for improved safety and performance, Falco gathers data like pod names and namespaces, correlating them with customizable rules. Unlike static analysis tools, it operates in real time, monitoring events as they occur. In this episode of The New Stack Makers, TNS Editor-in-Chief Heather Joslyn spoke with Thomas Labarussias, Senior Developer Advocate at Sysdig; Leonardo Grasso, Open Source Tech Lead Manager at Sysdig; and Luca Guerra, Sr. Open Source Engineer at Sysdig, to get the latest update on Falco.</p><p>Graduating from the Cloud Native Computing Foundation (CNCF) in February 2023 after entering its sandbox six years prior, Falco’s maintainers have focused on technical maturity and broad usability. This includes simplifying installations across diverse environments, thanks in part to advancements from the Linux Foundation.</p><p>Looking ahead, the team is enhancing core functionalities, including more customizable rules and alert formats. A key innovation is Falco Talon, introduced in September 2023, which provides a no-code response engine to link alerts with real-time remediation actions. Talon addresses a longstanding gap in automating responses within the Falco ecosystem, advancing its capabilities for runtime security.</p><p>Learn more from The New Stack about Falco:</p><p><a href="https://thenewstack.io/falco-is-a-cncf-graduate-now-what/">Falco Is a CNCF Graduate. Now What?</a></p><p><a href="https://thenewstack.io/falco-plugins-bring-new-data-sources-to-real-time-security/">Falco Plugins Bring New Data Sources to Real-Time Security</a></p><p><a href="https://thenewstack.io/ebpf-tools-an-overview-of-falco-inspektor-gadget-hubble-and-cilium/">eBPF Tools: An Overview of Falco, Inspektor Gadget, Hubble and Cilium</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 26 Dec 2024 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Leo Grasso, Sysdig, Thomas Labarussias, Heather Joslyn, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-falco-brought-real-time-observability-to-infrastructure-vOqlGKa_</link>
      <content:encoded><![CDATA[<p>Falco, an open-source runtime observability and security tool, was created by Sysdig founder Loris Degioanni to collect real-time system events directly from the kernel. Leveraging eBPF technology for improved safety and performance, Falco gathers data like pod names and namespaces, correlating them with customizable rules. Unlike static analysis tools, it operates in real time, monitoring events as they occur. In this episode of The New Stack Makers, TNS Editor-in-Chief Heather Joslyn spoke with Thomas Labarussias, Senior Developer Advocate at Sysdig; Leonardo Grasso, Open Source Tech Lead Manager at Sysdig; and Luca Guerra, Sr. Open Source Engineer at Sysdig, to get the latest update on Falco.</p><p>Graduating from the Cloud Native Computing Foundation (CNCF) in February 2023 after entering its sandbox six years prior, Falco’s maintainers have focused on technical maturity and broad usability. This includes simplifying installations across diverse environments, thanks in part to advancements from the Linux Foundation.</p><p>Looking ahead, the team is enhancing core functionalities, including more customizable rules and alert formats. A key innovation is Falco Talon, introduced in September 2023, which provides a no-code response engine to link alerts with real-time remediation actions. Talon addresses a longstanding gap in automating responses within the Falco ecosystem, advancing its capabilities for runtime security.</p><p>Learn more from The New Stack about Falco:</p><p><a href="https://thenewstack.io/falco-is-a-cncf-graduate-now-what/">Falco Is a CNCF Graduate. Now What?</a></p><p><a href="https://thenewstack.io/falco-plugins-bring-new-data-sources-to-real-time-security/">Falco Plugins Bring New Data Sources to Real-Time Security</a></p><p><a href="https://thenewstack.io/ebpf-tools-an-overview-of-falco-inspektor-gadget-hubble-and-cilium/">eBPF Tools: An Overview of Falco, Inspektor Gadget, Hubble and Cilium</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="18679545" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/1e0d1075-8eeb-469a-9ba5-95ed53900677/audio/1d36a362-ce9e-436d-819c-07c722a041b6/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Falco Brought Real-Time Observability to Infrastructure</itunes:title>
      <itunes:author>Leo Grasso, Sysdig, Thomas Labarussias, Heather Joslyn, The New Stack</itunes:author>
      <itunes:duration>00:19:27</itunes:duration>
      <itunes:summary>Falco, an open-source runtime observability and security tool, was created by Sysdig founder Loris Degioanni to collect real-time system events directly from the kernel. Leveraging eBPF technology for improved safety and performance, Falco gathers data like pod names and namespaces, correlating them with customizable rules. Unlike static analysis tools, it operates in real time, monitoring events as they occur. In this episode of The New Stack Makers, TNS Editor-in-Chief Heather Joslyn spoke with Thomas Labarussias, Senior Developer Advocate at Sysdig; Leonardo Grasso, Open Source Tech Lead Manager at Sysdig; and Luca Guerra, Sr. Open Source Engineer at Sysdig, to get the latest update on Falco.</itunes:summary>
      <itunes:subtitle>Falco, an open-source runtime observability and security tool, was created by Sysdig founder Loris Degioanni to collect real-time system events directly from the kernel. Leveraging eBPF technology for improved safety and performance, Falco gathers data like pod names and namespaces, correlating them with customizable rules. Unlike static analysis tools, it operates in real time, monitoring events as they occur. In this episode of The New Stack Makers, TNS Editor-in-Chief Heather Joslyn spoke with Thomas Labarussias, Senior Developer Advocate at Sysdig; Leonardo Grasso, Open Source Tech Lead Manager at Sysdig; and Luca Guerra, Sr. Open Source Engineer at Sysdig, to get the latest update on Falco.</itunes:subtitle>
      <itunes:keywords>sysdig, tech podcast, real-time, thomas labarussias, tech, developer podcast, the new stack makers, software engineer, leo grasso, observability, falco</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1503</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b33e55b6-efbf-4b5d-99ba-9429982a54a6</guid>
      <title>How cert-manager Got to 500 Million Downloads a Month</title>
      <description><![CDATA[<p>Jetstack’s cert-manager, a leading open-source project in Kubernetes certificate management, began as a job interview challenge. Co-founder Matt Barker recalls asking a prospective engineer to automate Let’s Encrypt within Kubernetes. By Monday, the candidate had created kube-lego, which evolved into cert-manager, now downloaded over 500 million times monthly.</p><p>Cert-manager’s journey to CNCF graduation, achieved in September, began with its donation to the foundation four years ago. Relaunched as cert-manager, the project grew under engineer James Munnelly, becoming the de facto standard for certificate lifecycle management. The thriving community and ecosystem around cert-manager highlighted its suitability for CNCF stewardship. However, maintainers, including Ashley Davis, noted challenges in navigating differing opinions within its vast user base.</p><p>With graduation achieved, cert-manager’s roadmap includes sub-projects like trust-manager, addressing TLS trust bundle management and Istio integration. Barker aims to streamline enterprise-scale deployments and educate security teams on cert-manager’s impact. Cert-manager has become integral to cloud-native workflows, promising to simplify hybrid, multicloud, and edge deployments.</p><p>Learn more from The New Stack about cert-manager:</p><p><a href="https://thenewstack.io/jetstacks-certificate-management-project-joins-the-cncf-sandbox-of-cloud-native-technologies/">Jetstack’s cert-manager Joins the CNCF Sandbox of Cloud Native Technologies</a></p><p><a href="https://thenewstack.io/jetstack-secure-promises-to-ease-kubernetes-tls-security/">Jetstack Secure Promises to Ease Kubernetes TLS Security</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 19 Dec 2024 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Jetstack, Venafi, Matt Barker, Ashley Davis, The New Stack, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-cert-manager-got-to-500-million-downloads-a-month-RJgBj_Ns</link>
      <content:encoded><![CDATA[<p>Jetstack’s cert-manager, a leading open-source project in Kubernetes certificate management, began as a job interview challenge. Co-founder Matt Barker recalls asking a prospective engineer to automate Let’s Encrypt within Kubernetes. By Monday, the candidate had created kube-lego, which evolved into cert-manager, now downloaded over 500 million times monthly.</p><p>Cert-manager’s journey to CNCF graduation, achieved in September, began with its donation to the foundation four years ago. Relaunched as cert-manager, the project grew under engineer James Munnelly, becoming the de facto standard for certificate lifecycle management. The thriving community and ecosystem around cert-manager highlighted its suitability for CNCF stewardship. However, maintainers, including Ashley Davis, noted challenges in navigating differing opinions within its vast user base.</p><p>With graduation achieved, cert-manager’s roadmap includes sub-projects like trust-manager, addressing TLS trust bundle management and Istio integration. Barker aims to streamline enterprise-scale deployments and educate security teams on cert-manager’s impact. Cert-manager has become integral to cloud-native workflows, promising to simplify hybrid, multicloud, and edge deployments.</p><p>Learn more from The New Stack about cert-manager:</p><p><a href="https://thenewstack.io/jetstacks-certificate-management-project-joins-the-cncf-sandbox-of-cloud-native-technologies/">Jetstack’s cert-manager Joins the CNCF Sandbox of Cloud Native Technologies</a></p><p><a href="https://thenewstack.io/jetstack-secure-promises-to-ease-kubernetes-tls-security/">Jetstack Secure Promises to Ease Kubernetes TLS Security</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="22381410" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/287db86e-fd35-4a94-9151-e9bba277abee/audio/e866906b-da16-4788-99e0-ee3bcbd58c33/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How cert-manager Got to 500 Million Downloads a Month</itunes:title>
      <itunes:author>Jetstack, Venafi, Matt Barker, Ashley Davis, The New Stack, Heather Joslyn</itunes:author>
      <itunes:duration>00:23:18</itunes:duration>
      <itunes:summary>Jetstack’s cert-manager, a leading open-source project in Kubernetes certificate management, began as a job interview challenge. Co-founder Matt Barker recalls asking a prospective engineer to automate Let’s Encrypt within Kubernetes. By Monday, the candidate had created kube-lego, which evolved into cert-manager, now downloaded over 500 million times monthly.
Cert-manager’s journey to CNCF graduation, achieved in September, began with its donation to the foundation four years ago. Relaunched as cert-manager, the project grew under engineer James Munnelly, becoming the de facto standard for certificate lifecycle management. The thriving community and ecosystem around cert-manager highlighted its suitability for CNCF stewardship. However, maintainers, including Ashley Davis, noted challenges in navigating differing opinions within its vast user base.</itunes:summary>
      <itunes:subtitle>Jetstack’s cert-manager, a leading open-source project in Kubernetes certificate management, began as a job interview challenge. Co-founder Matt Barker recalls asking a prospective engineer to automate Let’s Encrypt within Kubernetes. By Monday, the candidate had created kube-lego, which evolved into cert-manager, now downloaded over 500 million times monthly.
Cert-manager’s journey to CNCF graduation, achieved in September, began with its donation to the foundation four years ago. Relaunched as cert-manager, the project grew under engineer James Munnelly, becoming the de facto standard for certificate lifecycle management. The thriving community and ecosystem around cert-manager highlighted its suitability for CNCF stewardship. However, maintainers, including Ashley Davis, noted challenges in navigating differing opinions within its vast user base.</itunes:subtitle>
      <itunes:keywords>cloud native computing foundation, jetstack, venafi, tech, developer podcast, kubernetes, the new stack makers, software engineer, cert-manager, open source, certificate management</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1502</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">92b2ea68-e819-453a-b702-cf5d3a304d22</guid>
      <title>Why Are So Many Developers Out of Work in 2024?</title>
      <description><![CDATA[<p>The tech industry faces a paradox: despite high demand for skills, many developers and engineers are unemployed. At KubeCon + CloudNativeCon North America in Salt Lake City, Utah, Andela and the Cloud Native Computing Foundation (CNCF) announced an initiative to train 20,000 technologists in cloud native computing over the next decade. Ross O'Neill, Senior Program Manager at Andela, and Chris Aniszczyk, CNCF’s CTO, highlighted the lack of Kubernetes-certified professionals in regions like Africa and emphasized the need for global inclusivity to make cloud native technology ubiquitous.</p><p>Andela, operating in over 135 countries and founded in Nigeria, views this program as a continuation of its mission to upskill African talent, aligning with its partnerships with tech giants like Google, AWS, and Nvidia. This initiative also addresses the increasing employer demand for Kubernetes and modern cloud skills, reflecting a broader skills mismatch in the tech workforce.</p><p>Aniszczyk noted that companies urgently seek expertise in cloud native infrastructure, observability, and platform engineering. The partnership aims to bridge these gaps, offering opportunities to meet evolving global tech needs.</p><p>Learn more from The New Stack about developer talent, skills and needs: </p><p><a href="https://thenewstack.io/recruiters-speak-top-skills-devs-need-for-ai-cloud-jobs/">Top Developer Skills for AI and Cloud Jobs</a></p><p><a href="https://thenewstack.io/5-software-development-skills-ai-will-render-obsolete/">5 Software Development Skills AI Will Render Obsolete</a></p><p><a href="https://thenewstack.io/cloud-native-skill-gaps-are-killing-your-gains/">Cloud Native Skill Gaps are Killing Your Gains</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 12 Dec 2024 13:45:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Andela, Ross O&apos;Neill, Chris Aniszczyk, Cloud Native Computing Foundation, Heather Joslyn, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/why-are-so-many-developers-out-of-work-in-2024-lElvsLCt</link>
      <content:encoded><![CDATA[<p>The tech industry faces a paradox: despite high demand for skills, many developers and engineers are unemployed. At KubeCon + CloudNativeCon North America in Salt Lake City, Utah, Andela and the Cloud Native Computing Foundation (CNCF) announced an initiative to train 20,000 technologists in cloud native computing over the next decade. Ross O'Neill, Senior Program Manager at Andela, and Chris Aniszczyk, CNCF’s CTO, highlighted the lack of Kubernetes-certified professionals in regions like Africa and emphasized the need for global inclusivity to make cloud native technology ubiquitous.</p><p>Andela, operating in over 135 countries and founded in Nigeria, views this program as a continuation of its mission to upskill African talent, aligning with its partnerships with tech giants like Google, AWS, and Nvidia. This initiative also addresses the increasing employer demand for Kubernetes and modern cloud skills, reflecting a broader skills mismatch in the tech workforce.</p><p>Aniszczyk noted that companies urgently seek expertise in cloud native infrastructure, observability, and platform engineering. The partnership aims to bridge these gaps, offering opportunities to meet evolving global tech needs.</p><p>Learn more from The New Stack about developer talent, skills and needs: </p><p><a href="https://thenewstack.io/recruiters-speak-top-skills-devs-need-for-ai-cloud-jobs/">Top Developer Skills for AI and Cloud Jobs</a></p><p><a href="https://thenewstack.io/5-software-development-skills-ai-will-render-obsolete/">5 Software Development Skills AI Will Render Obsolete</a></p><p><a href="https://thenewstack.io/cloud-native-skill-gaps-are-killing-your-gains/">Cloud Native Skill Gaps are Killing Your Gains</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="20320800" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/cf900967-c01d-43ff-a5e8-945de6116d9a/audio/d6ba0616-c209-475e-81f6-7e7cb9bdde6b/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Why Are So Many Developers Out of Work in 2024?</itunes:title>
      <itunes:author>Andela, Ross O&apos;Neill, Chris Aniszczyk, Cloud Native Computing Foundation, Heather Joslyn, The New Stack</itunes:author>
      <itunes:duration>00:21:10</itunes:duration>
      <itunes:summary>The tech industry faces a paradox: despite high demand for skills, many developers and engineers are unemployed. At KubeCon + CloudNativeCon North America in Salt Lake City, Utah, Andela and the Cloud Native Computing Foundation (CNCF) announced an initiative to train 20,000 technologists in cloud native computing over the next decade. Ross O&apos;Neill, Senior Program Manager at Andela, and Chris Aniszczyk, CNCF’s CTO, highlighted the lack of Kubernetes-certified professionals in regions like Africa and emphasized the need for global inclusivity to make cloud native technology ubiquitous.</itunes:summary>
      <itunes:subtitle>The tech industry faces a paradox: despite high demand for skills, many developers and engineers are unemployed. At KubeCon + CloudNativeCon North America in Salt Lake City, Utah, Andela and the Cloud Native Computing Foundation (CNCF) announced an initiative to train 20,000 technologists in cloud native computing over the next decade. Ross O&apos;Neill, Senior Program Manager at Andela, and Chris Aniszczyk, CNCF’s CTO, highlighted the lack of Kubernetes-certified professionals in regions like Africa and emphasized the need for global inclusivity to make cloud native technology ubiquitous.</itunes:subtitle>
      <itunes:keywords>heather joslyn, software developer, ross oneill, developer skills, tech podcast, hiring, tech, developer podcast, software engineer, andela, developer talent, chris aniszczyk</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1501</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">14542f99-66fb-4299-b386-8dc89018819e</guid>
      <title>MapLibre: How a Fork Became a Thriving Open Source Project</title>
      <description><![CDATA[<p>When open source projects shift to proprietary licensing, forks and new communities often emerge. Such was the case with MapLibre, born from Mapbox’s 2020 decision to make its map rendering engine proprietary. In conjunction with All Things Open 2024, Seth Fitzsimmons, a principal engineer at AWS, and Tarus Balog, principal technical strategist for open source at AWS, shared that this engine, popular for its WebGL-powered vector maps and dynamic customization features, was essential for organizations like BMW, The New York Times, and Instacart. However, Mapbox’s move disappointed its open-source user base by tying the upgraded Mapbox GL JS library to proprietary products.</p><p>In response, three users forked the engine to create MapLibre, committing to modernizing and preserving its open-source ethos. Despite challenges—forking often struggles to sustain momentum—MapLibre has thrived, supported by contributors and corporate sponsors like AWS, Meta, and Microsoft. Notably, a community member transitioned the project from JavaScript to TypeScript over nine months, showcasing the dedication of unpaid contributors.</p><p>Thanks to financial backing, MapLibre now employs maintainers, enabling it to reciprocate community efforts while fostering equality among participants. The project illustrates the resilience of open-source communities when proprietary shifts occur.</p><p>Learn more from The New Stack about forking open source projects:</p><p><a href="https://thenewstack.io/open-source-projects-fork/">Why Do Open Source Projects Fork?</a></p><p><a href="https://thenewstack.io/opensearch-how-the-project-went-from-fork-to-foundation/">OpenSearch: How the Project Went From Fork to Foundation</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></description>
      <pubDate>Thu, 5 Dec 2024 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Seth Fitzsimmons, Tarus Balog, Amazon Web Services, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/maplibre-how-a-fork-became-a-thriving-open-source-project-Py7ijZjJ</link>
      <content:encoded><![CDATA[<p>When open source projects shift to proprietary licensing, forks and new communities often emerge. Such was the case with MapLibre, born from Mapbox’s 2020 decision to make its map rendering engine proprietary. In conjunction with All Things Open 2024, Seth Fitzsimmons, a principal engineer at AWS, and Tarus Balog, principal technical strategist for open source at AWS, shared that this engine, popular for its WebGL-powered vector maps and dynamic customization features, was essential for organizations like BMW, The New York Times, and Instacart. However, Mapbox’s move disappointed its open-source user base by tying the upgraded Mapbox GL JS library to proprietary products.</p><p>In response, three users forked the engine to create MapLibre, committing to modernizing and preserving its open-source ethos. Despite challenges—forking often struggles to sustain momentum—MapLibre has thrived, supported by contributors and corporate sponsors like AWS, Meta, and Microsoft. Notably, a community member transitioned the project from JavaScript to TypeScript over nine months, showcasing the dedication of unpaid contributors.</p><p>Thanks to financial backing, MapLibre now employs maintainers, enabling it to reciprocate community efforts while fostering equality among participants. The project illustrates the resilience of open-source communities when proprietary shifts occur.</p><p>Learn more from The New Stack about forking open source projects:</p><p><a href="https://thenewstack.io/open-source-projects-fork/">Why Do Open Source Projects Fork?</a></p><p><a href="https://thenewstack.io/opensearch-how-the-project-went-from-fork-to-foundation/">OpenSearch: How the Project Went From Fork to Foundation</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="24810518" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/bb34c936-107d-4fa7-b6f7-d159c217a0ea/audio/b8d262f3-669a-4c6c-9b7d-9a1105461bf5/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>MapLibre: How a Fork Became a Thriving Open Source Project</itunes:title>
      <itunes:author>Seth Fitzsimmons, Tarus Balog, Amazon Web Services, The New Stack</itunes:author>
      <itunes:duration>00:25:50</itunes:duration>
      <itunes:summary>When open source projects shift to proprietary licensing, forks and new communities often emerge. Such was the case with MapLibre, born from Mapbox’s 2020 decision to make its map rendering engine proprietary. In conjunction with All Things Open 2024, Seth Fitzsimmons, a principal engineer at AWS, and Tarus Balog, principal technical strategist for open source at AWS, shared that this engine, popular for its WebGL-powered vector maps and dynamic customization features, was essential for organizations like BMW, The New York Times, and Instacart. However, Mapbox’s move disappointed its open-source user base by tying the upgraded Mapbox GL JS library to proprietary products.</itunes:summary>
      <itunes:subtitle>When open source projects shift to proprietary licensing, forks and new communities often emerge. Such was the case with MapLibre, born from Mapbox’s 2020 decision to make its map rendering engine proprietary. In conjunction with All Things Open 2024, Seth Fitzsimmons, a principal engineer at AWS, and Tarus Balog, principal technical strategist for open source at AWS, shared that this engine, popular for its WebGL-powered vector maps and dynamic customization features, was essential for organizations like BMW, The New York Times, and Instacart. However, Mapbox’s move disappointed its open-source user base by tying the upgraded Mapbox GL JS library to proprietary products.</itunes:subtitle>
      <itunes:keywords>software developer, tarus balog, mapbox, tech podcast, the new stack, amazon web services, maplibre, tech, developer podcast, the new stack makers, open source, seth fitzsimmons, fork</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1500</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">dcba6187-ed2a-433a-b2ec-e58c27b1a6cd</guid>
      <title>OpenSearch: How the Project Went from Fork to Foundation</title>
      <description><![CDATA[<p>At All Things Open in October, Anandhi Bumstead, AWS’s director of software engineering, highlighted OpenSearch's journey and the advantages of the Linux Foundation's stewardship. OpenSearch, an open source data ingestion and analytics engine, was transferred by Amazon Web Services (AWS) to the Linux Foundation in September 2024, seeking neutral governance and broader community collaboration. Originally forked from Elasticsearch after a licensing change in 2021, OpenSearch has evolved into a versatile platform likened to a “Swiss Army knife” for its broad use cases, including observability, log and security analytics, alert detection, and semantic and hybrid search, particularly in generative AI applications.</p><p>Despite criticism over slower indexing speeds compared to Elasticsearch, significant performance improvements have been made. The latest release, OpenSearch 2.17, delivers 6.5x faster query performance and a 25% indexing improvement due to segment replication. Future efforts aim to enhance indexing, search, storage, and vector capabilities while optimizing costs and efficiency. Contributions are welcomed via opensearch.org.</p><p>Learn more from The New Stack about deploying applications on OpenSearch:</p><p><a href="https://thenewstack.io/aws-transfers-opensearch-to-the-linux-foundation/">AWS Transfers OpenSearch to the Linux Foundation</a></p><p><a href="https://thenewstack.io/from-flashpoint-to-foundation-opensearchs-path-clears/">From Flashpoint to Foundation: OpenSearch’s Path Clears</a></p><p><a href="https://thenewstack.io/semantic-search-with-amazon-opensearch-serverless-and-titan/">Semantic Search with Amazon OpenSearch Serverless and Titan</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Tue, 26 Nov 2024 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Anandhi Bumstead, The New Stack, AWS, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/opensearch-how-the-project-went-from-fork-to-foundation-MxeQwxBY</link>
      <content:encoded><![CDATA[<p>At All Things Open in October, Anandhi Bumstead, AWS’s director of software engineering, highlighted OpenSearch's journey and the advantages of the Linux Foundation's stewardship. OpenSearch, an open source data ingestion and analytics engine, was transferred by Amazon Web Services (AWS) to the Linux Foundation in September 2024, seeking neutral governance and broader community collaboration. Originally forked from Elasticsearch after a licensing change in 2021, OpenSearch has evolved into a versatile platform likened to a “Swiss Army knife” for its broad use cases, including observability, log and security analytics, alert detection, and semantic and hybrid search, particularly in generative AI applications.</p><p>Despite criticism over slower indexing speeds compared to Elasticsearch, significant performance improvements have been made. The latest release, OpenSearch 2.17, delivers 6.5x faster query performance and a 25% indexing improvement due to segment replication. Future efforts aim to enhance indexing, search, storage, and vector capabilities while optimizing costs and efficiency. Contributions are welcomed via opensearch.org.</p><p>Learn more from The New Stack about deploying applications on OpenSearch:</p><p><a href="https://thenewstack.io/aws-transfers-opensearch-to-the-linux-foundation/">AWS Transfers OpenSearch to the Linux Foundation</a></p><p><a href="https://thenewstack.io/from-flashpoint-to-foundation-opensearchs-path-clears/">From Flashpoint to Foundation: OpenSearch’s Path Clears</a></p><p><a href="https://thenewstack.io/semantic-search-with-amazon-opensearch-serverless-and-titan/">Semantic Search with Amazon OpenSearch Serverless and Titan</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="16590583" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/44388ad5-fccb-41c3-b006-b4eeaa6c87d3/audio/14032bcc-cd99-44d5-814d-71d2a2b2dff0/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>OpenSearch: How the Project Went from Fork to Foundation</itunes:title>
      <itunes:author>Anandhi Bumstead, The New Stack, AWS, Heather Joslyn</itunes:author>
      <itunes:duration>00:17:16</itunes:duration>
      <itunes:summary>At All Things Open in October, Anandhi Bumstead, AWS’s director of software engineering, highlighted OpenSearch&apos;s journey and the advantages of the Linux Foundation&apos;s stewardship.  OpenSearch, an open source data ingestion and analytics engine, was transferred by Amazon Web Services (AWS) to the Linux Foundation in September 2024, seeking neutral governance and broader community collaboration. Originally forked from Elasticsearch after a licensing change in 2021, OpenSearch has evolved into a versatile platform likened to a “Swiss Army knife” for its broad use cases, including observability, log and security analytics, alert detection, and semantic and hybrid search, particularly in generative AI applications.</itunes:summary>
      <itunes:subtitle>At All Things Open in October, Anandhi Bumstead, AWS’s director of software engineering, highlighted OpenSearch&apos;s journey and the advantages of the Linux Foundation&apos;s stewardship.  OpenSearch, an open source data ingestion and analytics engine, was transferred by Amazon Web Services (AWS) to the Linux Foundation in September 2024, seeking neutral governance and broader community collaboration. Originally forked from Elasticsearch after a licensing change in 2021, OpenSearch has evolved into a versatile platform likened to a “Swiss Army knife” for its broad use cases, including observability, log and security analytics, alert detection, and semantic and hybrid search, particularly in generative AI applications.</itunes:subtitle>
      <itunes:keywords>open search, anandhi bumstead, software developer, tech podcast, the new stack, all things open, tech, developer podcast, the new stack makers, software engineer, data ingestion, open source, aws</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1499</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">947e9e93-4b88-4471-99a7-16327a7f8ba0</guid>
      <title>Is Apache Spark Too Costly? An Amazon Engineer Tells His Story</title>
      <description><![CDATA[<p>Is Apache Spark too costly? Amazon Principal Engineer Patrick Ames tackled this question during an interview with <i>The New Stack Makers</i>, sharing insights into transitioning from Spark to Ray for managing large-scale data. Ames, described as a "go-to" engineer for exabyte-scale projects, emphasized a goal-driven approach to solving complex engineering problems, from simplifying daily chores to optimizing software solutions.</p><p>Initially, Spark was chosen at Amazon for its simplicity and open-source flexibility, allowing efficient merging of data with minimal SQL code. The team leveraged Spark in a decoupled architecture over S3 storage, scaling it to handle thousands of jobs daily. However, as data volumes grew to hundreds of terabytes and beyond, Spark’s limitations became apparent. Long processing times and high costs prompted a search for alternatives.</p><p>Enter Ray—a unified framework designed for scaling AI and Python applications. After experimentation, Ames and his team noted significant efficiency improvements, driving the shift from Spark to Ray to meet scalability and cost-efficiency needs.</p><p>Learn more from The New Stack about Apache Spark and Ray: </p><p><a href="https://thenewstack.io/amazon-to-save-millions-moving-from-apache-spark-to-ray/">Amazon to Save Millions Moving From Apache Spark to Ray</a></p><p><a href="https://thenewstack.io/how-ray-a-distributed-ai-framework-helps-power-chatgpt/">How Ray, a Distributed AI Framework, Helps Power ChatGPT</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></description>
      <pubDate>Thu, 21 Nov 2024 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Patrick Ames, Alex Williams, Amazon Web Services)</author>
      <link>https://thenewstack.simplecast.com/episodes/is-apache-spark-too-costly-an-amazon-engineer-tells-his-story-pcqnOm3J</link>
      <content:encoded><![CDATA[<p>Is Apache Spark too costly? Amazon Principal Engineer Patrick Ames tackled this question during an interview with <i>The New Stack Makers</i>, sharing insights into transitioning from Spark to Ray for managing large-scale data. Ames, described as a "go-to" engineer for exabyte-scale projects, emphasized a goal-driven approach to solving complex engineering problems, from simplifying daily chores to optimizing software solutions.</p><p>Initially, Spark was chosen at Amazon for its simplicity and open-source flexibility, allowing efficient merging of data with minimal SQL code. The team leveraged Spark in a decoupled architecture over S3 storage, scaling it to handle thousands of jobs daily. However, as data volumes grew to hundreds of terabytes and beyond, Spark’s limitations became apparent. Long processing times and high costs prompted a search for alternatives.</p><p>Enter Ray—a unified framework designed for scaling AI and Python applications. After experimentation, Ames and his team noted significant efficiency improvements, driving the shift from Spark to Ray to meet scalability and cost-efficiency needs.</p><p>Learn more from The New Stack about Apache Spark and Ray: </p><p><a href="https://thenewstack.io/amazon-to-save-millions-moving-from-apache-spark-to-ray/">Amazon to Save Millions Moving From Apache Spark to Ray</a></p><p><a href="https://thenewstack.io/how-ray-a-distributed-ai-framework-helps-power-chatgpt/">How Ray, a Distributed AI Framework, Helps Power ChatGPT</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="24423134" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/e75bd8d1-0c0c-43f1-ad35-b20bdadfa27b/audio/80f7c4e9-52cb-491c-ba3d-b36e341fd7ef/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Is Apache Spark Too Costly? An Amazon Engineer Tells His Story</itunes:title>
      <itunes:author>Patrick Ames, Alex Williams, Amazon Web Services</itunes:author>
      <itunes:duration>00:25:26</itunes:duration>
      <itunes:summary>Is Apache Spark too costly? Amazon Principal Engineer Patrick Ames tackled this question during an interview with The New Stack Makers, sharing insights into transitioning from Spark to Ray for managing large-scale data. Ames, described as a &quot;go-to&quot; engineer for exabyte-scale projects, emphasized a goal-driven approach to solving complex engineering problems, from simplifying daily chores to optimizing software solutions.</itunes:summary>
      <itunes:subtitle>Is Apache Spark too costly? Amazon Principal Engineer Patrick Ames tackled this question during an interview with The New Stack Makers, sharing insights into transitioning from Spark to Ray for managing large-scale data. Ames, described as a &quot;go-to&quot; engineer for exabyte-scale projects, emphasized a goal-driven approach to solving complex engineering problems, from simplifying daily chores to optimizing software solutions.</itunes:subtitle>
      <itunes:keywords>ray, software developer, tech podcast, the new stack, amazon web services, tech, developer podcast, apache spark, the new stack makers, software engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1498</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e8535d84-26da-40cd-bd64-aeabb9d025ce</guid>
      <title>Codiac: Kubernetes Doesn&apos;t Need To Be That Complex</title>
      <description><![CDATA[<p>This episode of The New Stack Makers looks at Codiac, which aims to simplify app deployment on Kubernetes by offering a unified interface that minimizes complexity. Traditionally, Kubernetes is powerful but challenging for teams due to its intricate configurations and extensive manual coding. Co-founded by Ben Ghazi and Mark Freydl, Codiac provides engineers with infrastructure on demand, container management, and advanced software development life cycle (SDLC) tools, making Kubernetes more accessible.</p><p>Codiac’s interface streamlines continuous integration and deployment (CI/CD), reducing deployment steps to a single line of code within CI/CD pipelines. Developers can easily deploy, manage containers, and configure applications without mastering Kubernetes' esoteric syntax. Codiac also offers features like "cabinets" to organize assets across multi-cloud environments and enables repeatable processes through snapshots, making cluster management smoother.</p><p>For experienced engineers, Codiac alleviates the burden of manually managing YAML files and configuring multiple services. With ephemeral clusters and repeatable snapshots, Codiac supports scalable, reproducible development workflows, giving engineers a practical way to manage applications and infrastructure seamlessly across complex Kubernetes environments.</p><p>Learn more from The New Stack about deploying applications on Kubernetes:</p><p><a href="https://thenewstack.io/kubernetes-needs-to-take-a-lesson-from-portainer-on-ease-of-use/">Kubernetes Needs to Take a Lesson from Portainer on Ease-of-Use</a></p><p><a href="https://thenewstack.io/three-common-kubernetes-challenges-and-how-to-solve-them/">Three Common Kubernetes Challenges and How to Solve Them</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></description>
      <pubDate>Thu, 14 Nov 2024 17:30:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Ben Ghazi, Codiac, Mark Freydl, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/codiac-kubernetes-doesnt-need-to-be-that-complex-wdy_EtaG</link>
      <content:encoded><![CDATA[<p>This episode of The New Stack Makers looks at Codiac, which aims to simplify app deployment on Kubernetes by offering a unified interface that minimizes complexity. Traditionally, Kubernetes is powerful but challenging for teams due to its intricate configurations and extensive manual coding. Co-founded by Ben Ghazi and Mark Freydl, Codiac provides engineers with infrastructure on demand, container management, and advanced software development life cycle (SDLC) tools, making Kubernetes more accessible.</p><p>Codiac’s interface streamlines continuous integration and deployment (CI/CD), reducing deployment steps to a single line of code within CI/CD pipelines. Developers can easily deploy, manage containers, and configure applications without mastering Kubernetes' esoteric syntax. Codiac also offers features like "cabinets" to organize assets across multi-cloud environments and enables repeatable processes through snapshots, making cluster management smoother.</p><p>For experienced engineers, Codiac alleviates the burden of manually managing YAML files and configuring multiple services. With ephemeral clusters and repeatable snapshots, Codiac supports scalable, reproducible development workflows, giving engineers a practical way to manage applications and infrastructure seamlessly across complex Kubernetes environments.</p><p>Learn more from The New Stack about deploying applications on Kubernetes:</p><p><a href="https://thenewstack.io/kubernetes-needs-to-take-a-lesson-from-portainer-on-ease-of-use/">Kubernetes Needs to Take a Lesson from Portainer on Ease-of-Use</a></p><p><a href="https://thenewstack.io/three-common-kubernetes-challenges-and-how-to-solve-them/">Three Common Kubernetes Challenges and How to Solve Them</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></content:encoded>
      <enclosure length="27738740" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/699aeb1b-2ae6-4508-8ae2-c3309502b096/audio/e1b573da-0e79-4598-9b93-6406c97545f9/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Codiac: Kubernetes Doesn&apos;t Need To Be That Complex</itunes:title>
      <itunes:author>Ben Ghazi, Codiac, Mark Freydl, The New Stack, Alex Williams</itunes:author>
      <itunes:duration>00:28:53</itunes:duration>
      <itunes:summary>This episode of The New Stack Makers looks at Codiac, which aims to simplify app deployment on Kubernetes by offering a unified interface that minimizes complexity. Traditionally, Kubernetes is powerful but challenging for teams due to its intricate configurations and extensive manual coding. Co-founded by Ben Ghazi and Mark Freydl, Codiac provides engineers with infrastructure on demand, container management, and advanced software development life cycle (SDLC) tools, making Kubernetes more accessible.</itunes:summary>
      <itunes:subtitle>This episode of The New Stack Makers looks at Codiac, which aims to simplify app deployment on Kubernetes by offering a unified interface that minimizes complexity. Traditionally, Kubernetes is powerful but challenging for teams due to its intricate configurations and extensive manual coding. Co-founded by Ben Ghazi and Mark Freydl, Codiac provides engineers with infrastructure on demand, container management, and advanced software development life cycle (SDLC) tools, making Kubernetes more accessible.</itunes:subtitle>
      <itunes:keywords>application deployment, software developer, codiac, tech podcast, the new stack, ben ghazi, tech, developer podcast, sdlc, kubernetes, the new stack makers, ingress, software engineer, platform engineering, automation</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1497</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">90751293-9b57-4fc7-827c-25838439a2b7</guid>
      <title>Valkey: What’s New and What’s Next?</title>
      <description><![CDATA[<p>Valkey, an open-source fork of Redis launched in March, introduced its multithreaded Version 8.0 in September, now available through AWS ElastiCache. At All Things Open 2024 in Raleigh, AWS's Kyle Davis explains that Valkey was developed after Redis changed to a restrictive license, drawing contributors from companies like AWS, Google, Alibaba, and Oracle. Notably, some contributors emerged independently, including a significant contributor from Vietnam. Version 8.0 differentiates itself from Redis by leveraging multithreaded CPUs, addressing the efficiency of I/O operations in modern hardware. Additionally, data structure refinements were made to improve memory efficiency by up to 20%, particularly benefiting large-key databases.</p><p>Looking ahead, Valkey plans two annual updates, with the next release expected in 2025. New modules are anticipated, including a JSON module for efficient data manipulation and a Bloom filter for probabilistic data presence checks. Version 9.0 may bring substantial changes to clustering, updating it to better leverage modern technologies. The Valkey project aims to continue evolving its capabilities to meet the demands of advanced data storage needs.</p><p>Learn more from The New Stack about Valkey: </p><p><a href="https://thenewstack.io/valkey-is-a-different-kind-of-fork/">Valkey Is a Different Kind of Fork</a></p><p><a href="https://thenewstack.io/aws-adds-support-drops-prices-for-redis-forked-valkey/">AWS Adds Support, Drops Prices, for Redis-Forked Valkey</a></p><p><a href="https://thenewstack.io/valkey-a-redis-fork-with-a-future/">Valkey: A Redis Fork With a Future</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></description>
      <pubDate>Thu, 07 Nov 2024 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Kyle Davis, The New Stack, Alex Williams, AWS)</author>
      <link>https://thenewstack.simplecast.com/episodes/valkey-whats-new-and-whats-next-udl6DVuG</link>
      <content:encoded><![CDATA[<p>Valkey, an open-source fork of Redis launched in March, introduced its multithreaded Version 8.0 in September, now available through AWS ElastiCache. At All Things Open 2024 in Raleigh, AWS's Kyle Davis explains that Valkey was developed after Redis changed to a restrictive license, drawing contributors from companies like AWS, Google, Alibaba, and Oracle. Notably, some contributors emerged independently, including a significant contributor from Vietnam. Version 8.0 differentiates itself from Redis by leveraging multithreaded CPUs, addressing the efficiency of I/O operations in modern hardware. Additionally, data structure refinements were made to improve memory efficiency by up to 20%, particularly benefiting large-key databases.</p><p>Looking ahead, Valkey plans two annual updates, with the next release expected in 2025. New modules are anticipated, including a JSON module for efficient data manipulation and a Bloom filter for probabilistic data presence checks. Version 9.0 may bring substantial changes to clustering, updating it to better leverage modern technologies. The Valkey project aims to continue evolving its capabilities to meet the demands of advanced data storage needs.</p><p>Learn more from The New Stack about Valkey: </p><p><a href="https://thenewstack.io/valkey-is-a-different-kind-of-fork/">Valkey Is a Different Kind of Fork</a></p><p><a href="https://thenewstack.io/aws-adds-support-drops-prices-for-redis-forked-valkey/">AWS Adds Support, Drops Prices, for Redis-Forked Valkey</a></p><p><a href="https://thenewstack.io/valkey-a-redis-fork-with-a-future/">Valkey: A Redis Fork With a Future</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="21353228" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/b748b88e-e601-4ae1-afdd-0955d9fd1198/audio/74846f55-7543-4e39-8b12-218ff8d94d51/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Valkey: What’s New and What’s Next?</itunes:title>
      <itunes:author>Kyle Davis, The New Stack, Alex Williams, AWS</itunes:author>
      <itunes:duration>00:22:14</itunes:duration>
      <itunes:summary>Valkey, an open-source fork of Redis launched in March, introduced its multithreaded Version 8.0 in September, now available through AWS ElastiCache. AWS&apos;s Kyle Davis explains that Valkey was developed after Redis changed to a restrictive license, drawing contributors from companies like AWS, Google, Alibaba, and Oracle. Notably, some contributors emerged independently, including a significant contributor from Vietnam. Version 8.0 differentiates itself from Redis by leveraging multithreaded CPUs, addressing the efficiency of I/O operations in modern hardware. Additionally, data structure refinements were made to improve memory efficiency by up to 20%, particularly benefiting large-key databases.</itunes:summary>
      <itunes:subtitle>Valkey, an open-source fork of Redis launched in March, introduced its multithreaded Version 8.0 in September, now available through AWS ElastiCache. AWS&apos;s Kyle Davis explains that Valkey was developed after Redis changed to a restrictive license, drawing contributors from companies like AWS, Google, Alibaba, and Oracle. Notably, some contributors emerged independently, including a significant contributor from Vietnam. Version 8.0 differentiates itself from Redis by leveraging multithreaded CPUs, addressing the efficiency of I/O operations in modern hardware. Additionally, data structure refinements were made to improve memory efficiency by up to 20%, particularly benefiting large-key databases.</itunes:subtitle>
      <itunes:keywords>in memory data storage, software developer, tech podcast, the new stack, valkey, tech, the new stack makers, software engineer, open source, kyle davis</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1496</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5b4d8d18-69f8-4f41-b050-f6d2cb8f797a</guid>
      <title>Why Beginning Developers Love Python</title>
      <description><![CDATA[<p>Deb Nicholson, executive director of the Python Software Foundation, attributes Python’s popularity to its minimal syntactical complexity, which appeals to beginners and seasoned developers alike. Python allows flexibility for those exploring coding without a specific focus, unlike purpose-built languages. Since her leadership began in 2022, Nicholson has overseen the foundation’s role in managing Python’s fiscal and operational needs, including the package index that hosts over half a million add-ons. This open ecosystem enables contributions from large corporations and individual developers while demanding vigilant security measures.</p><p>Nicholson envisions Python's future advancements, particularly in improving multi-threading and expanding usage in mobile development. She acknowledges Python’s critical role in AI and data science but remains cautious about AI’s pervasive application, likening it to a temporary trend. On open source in the enterprise, Nicholson critiques companies profiting from open-source tools while adopting restrictive licenses. Instead, she admires models like Red Hat’s, which leverage open source sustainably without compromising accessibility or innovation.</p><p>Learn more from The New Stack about Python: </p><p><a href="https://thenewstack.io/python-3-13-blazing-new-trails-in-performance-and-scale/">Python 3.13: Blazing New Trails in Performance and Scale</a></p><p><a href="https://thenewstack.io/the-top-5-python-packages-and-what-they-do/">The Top 5 Python Packages and What They Do</a></p><p><a href="https://thenewstack.io/python-mulls-a-change-in-version-numbering/">Python Mulls a Change in Version Numbering</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 31 Oct 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Deb Nicholson, Python Software Foundation, Jack Wallen)</author>
      <link>https://thenewstack.simplecast.com/episodes/why-beginning-developers-love-python-cC9I6jVy</link>
      <content:encoded><![CDATA[<p>Deb Nicholson, executive director of the Python Software Foundation, attributes Python’s popularity to its minimal syntactical complexity, which appeals to beginners and seasoned developers alike. Python allows flexibility for those exploring coding without a specific focus, unlike purpose-built languages. Since her leadership began in 2022, Nicholson has overseen the foundation’s role in managing Python’s fiscal and operational needs, including the package index that hosts over half a million add-ons. This open ecosystem enables contributions from large corporations and individual developers while demanding vigilant security measures.</p><p>Nicholson envisions Python's future advancements, particularly in improving multi-threading and expanding usage in mobile development. She acknowledges Python’s critical role in AI and data science but remains cautious about AI’s pervasive application, likening it to a temporary trend. On open source in the enterprise, Nicholson critiques companies profiting from open-source tools while adopting restrictive licenses. Instead, she admires models like Red Hat’s, which leverage open source sustainably without compromising accessibility or innovation.</p><p>Learn more from The New Stack about Python: </p><p><a href="https://thenewstack.io/python-3-13-blazing-new-trails-in-performance-and-scale/">Python 3.13: Blazing New Trails in Performance and Scale</a></p><p><a href="https://thenewstack.io/the-top-5-python-packages-and-what-they-do/">The Top 5 Python Packages and What They Do</a></p><p><a href="https://thenewstack.io/python-mulls-a-change-in-version-numbering/">Python Mulls a Change in Version Numbering</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="28081048" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/86dba61e-b08b-483a-82f5-786e698a6299/audio/726e9384-8de5-49a3-b4b5-0eb2fe194173/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Why Beginning Developers Love Python</itunes:title>
      <itunes:author>Deb Nicholson, Python Software Foundation, Jack Wallen</itunes:author>
      <itunes:duration>00:29:15</itunes:duration>
      <itunes:summary>In this episode of the New Stack Makers, Deb Nicholson, executive director of the Python Software Foundation, attributes Python’s popularity to its minimal syntactical complexity, which appeals to beginners and seasoned developers alike. Python allows flexibility for those exploring coding without a specific focus, unlike purpose-built languages. Since her leadership began in 2022, Nicholson has overseen the foundation’s role in managing Python’s fiscal and operational needs, including the package index that hosts over half a million add-ons. This open ecosystem enables contributions from large corporations and individual developers while demanding vigilant security measures.</itunes:summary>
      <itunes:subtitle>In this episode of the New Stack Makers, Deb Nicholson, executive director of the Python Software Foundation, attributes Python’s popularity to its minimal syntactical complexity, which appeals to beginners and seasoned developers alike. Python allows flexibility for those exploring coding without a specific focus, unlike purpose-built languages. Since her leadership began in 2022, Nicholson has overseen the foundation’s role in managing Python’s fiscal and operational needs, including the package index that hosts over half a million add-ons. This open ecosystem enables contributions from large corporations and individual developers while demanding vigilant security measures.</itunes:subtitle>
      <itunes:keywords>software developer, python, the new stack, python programming, tech, developer podcast, deb nicholson, the new stack makers, software engineer, programming language</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1495</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8314a6b8-29e5-4e09-ab8f-a531f1c17a24</guid>
      <title>Platform Engineering Rules, now with AI</title>
      <description><![CDATA[<p>Platform engineering will be a key focus at KubeCon this year, with a special emphasis on AI platforms. Priyanka Sharma, executive director of the Linux Foundation, highlighted the convergence of platform engineering and AI during an interview on The New Stack Makers with Adobe’s Joseph Sandoval. KubeCon will feature talks from experts like Chen Goldberg of CoreWeave and Aparna Sinha of CapitalOne, showcasing how AI workloads will transform platform operations.<br /><br />Sandoval emphasized the growing maturity of platform engineering over the past two to three years, now centered on addressing user needs. He also discussed Adobe's collaboration on CNOE, an open-source initiative for internal developer platforms. The intersection of platform engineering, Kubernetes, cloud-native technologies, and AI raises questions about scaling infrastructure management with AI, potentially improving efficiency and reducing toil for roles like SRE and DevOps. Sharma noted that reference architectures, long requested by the CNCF community, will be highlighted at the event, guiding users without dictating solutions. </p><p>Learn more from The New Stack about Kubernetes: </p><p><a>Cloud Native Networking as Kubernetes Starts Its Second Decade</a></p><p><a href="https://thenewstack.io/primer-how-kubernetes-came-to-be-what-it-is-and-why-you-should-care/">Primer: How Kubernetes Came to Be, What It Is, and Why You Should Care</a></p><p><a href="https://thenewstack.io/the-evolution-of-cloud-foundry-with-kubernetes/">How Cloud Foundry Has Evolved With Kubernetes</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 24 Oct 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (CNCF, Priyanka Sharma, Joseph Sandoval, Adobe, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/platform-engineering-rules-now-with-ai-Y5fEUUhs</link>
      <content:encoded><![CDATA[<p>Platform engineering will be a key focus at KubeCon this year, with a special emphasis on AI platforms. Priyanka Sharma, executive director of the Linux Foundation, highlighted the convergence of platform engineering and AI during an interview on The New Stack Makers with Adobe’s Joseph Sandoval. KubeCon will feature talks from experts like Chen Goldberg of CoreWeave and Aparna Sinha of CapitalOne, showcasing how AI workloads will transform platform operations.<br /><br />Sandoval emphasized the growing maturity of platform engineering over the past two to three years, now centered on addressing user needs. He also discussed Adobe's collaboration on CNOE, an open-source initiative for internal developer platforms. The intersection of platform engineering, Kubernetes, cloud-native technologies, and AI raises questions about scaling infrastructure management with AI, potentially improving efficiency and reducing toil for roles like SRE and DevOps. Sharma noted that reference architectures, long requested by the CNCF community, will be highlighted at the event, guiding users without dictating solutions. </p><p>Learn more from The New Stack about Kubernetes: </p><p><a>Cloud Native Networking as Kubernetes Starts Its Second Decade</a></p><p><a href="https://thenewstack.io/primer-how-kubernetes-came-to-be-what-it-is-and-why-you-should-care/">Primer: How Kubernetes Came to Be, What It Is, and Why You Should Care</a></p><p><a href="https://thenewstack.io/the-evolution-of-cloud-foundry-with-kubernetes/">How Cloud Foundry Has Evolved With Kubernetes</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="24484092" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/d75da426-af6b-45bc-a4ce-3d9f25eedf94/audio/2efca53c-b914-406f-aaf3-6c2659281874/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Platform Engineering Rules, now with AI</itunes:title>
      <itunes:author>CNCF, Priyanka Sharma, Joseph Sandoval, Adobe, Alex Williams</itunes:author>
      <itunes:duration>00:25:30</itunes:duration>
      <itunes:summary>Platform engineering will be a key focus at KubeCon this year, with a special emphasis on AI platforms. Priyanka Sharma, executive director of the Linux Foundation, highlighted the convergence of platform engineering and AI during an interview on The New Stack Makers with Adobe’s Joseph Sandoval. KubeCon will feature talks from experts like Chen Goldberg of CoreWeave and Aparna Sinha of CapitalOne, showcasing how AI workloads will transform platform operations.</itunes:summary>
      <itunes:subtitle>Platform engineering will be a key focus at KubeCon this year, with a special emphasis on AI platforms. Priyanka Sharma, executive director of the Linux Foundation, highlighted the convergence of platform engineering and AI during an interview on The New Stack Makers with Adobe’s Joseph Sandoval. KubeCon will feature talks from experts like Chen Goldberg of CoreWeave and Aparna Sinha of CapitalOne, showcasing how AI workloads will transform platform operations.</itunes:subtitle>
      <itunes:keywords>kubecon north america, software developer, tech podcast, the new stack, kubecon salt lake city, tech, developer podcast, the new stack makers, software engineer, cncf</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1494</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3a7e60f9-4ab0-4dde-8dbb-029f0ed15266</guid>
      <title>Data Observability: MultiCloud, GenAI Make Challenges Harder</title>
      <description><![CDATA[<p>Rohit Choudhary, co-founder and CEO of Acceldata, placed an early bet on data observability, which has proven prescient. In a New Stack Makers podcast episode, Choudhary discussed three key insights that shaped his vision: First, the exponential growth of data in enterprises, further amplified by generative AI and large language models. Second, the rise of a multicloud and multitechnology environment, with a majority of companies adopting hybrid or multiple cloud strategies. Third, a shortage of engineering talent to manage increasingly complex data systems.</p><p>As data becomes more essential across industries, challenges in data observability have intensified. Choudhary highlights the complexity of tracking where data is produced, used, and its compliance requirements, especially with the surge in unstructured data. He emphasized that data's operational role in business decisions, marketing, and operations heightens the need for better traceability. Moving forward, traceability and the ability to manage the growing volume of alerts will become areas of hyper-focus for enterprises.</p><p>Learn more from The New Stack about data observability: </p><p><a href="https://thenewstack.io/what-is-data-observability-and-why-does-it-matter/"><strong>What Is Data Observability and Why Does It Matter?</strong></a></p><p><a href="https://thenewstack.io/the-looming-crisis-in-the-data-observability-market/"><strong>The Looming Crisis in the Data Observability Market</strong></a></p><p><a href="https://thenewstack.io/the-growth-of-observability-data-is-out-of-control/"><strong>The Growth of Observability Data Is Out of Control!</strong></a></p><p><a href="https://thenewstack.io/newsletter/"><strong>Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</strong></a></p>
]]></description>
      <pubDate>Thu, 17 Oct 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Rohit Choudhary, Acceldata, The New Stack, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/data-observability-multicloud-genai-make-challenges-harder-8EXOAey7</link>
      <content:encoded><![CDATA[<p>Rohit Choudhary, co-founder and CEO of Acceldata, placed an early bet on data observability, which has proven prescient. In a New Stack Makers podcast episode, Choudhary discussed three key insights that shaped his vision: First, the exponential growth of data in enterprises, further amplified by generative AI and large language models. Second, the rise of a multicloud and multitechnology environment, with a majority of companies adopting hybrid or multiple cloud strategies. Third, a shortage of engineering talent to manage increasingly complex data systems.</p><p>As data becomes more essential across industries, challenges in data observability have intensified. Choudhary highlights the complexity of tracking where data is produced, used, and its compliance requirements, especially with the surge in unstructured data. He emphasized that data's operational role in business decisions, marketing, and operations heightens the need for better traceability. Moving forward, traceability and the ability to manage the growing volume of alerts will become areas of hyper-focus for enterprises.</p><p>Learn more from The New Stack about data observability: </p><p><a href="https://thenewstack.io/what-is-data-observability-and-why-does-it-matter/"><strong>What Is Data Observability and Why Does It Matter?</strong></a></p><p><a href="https://thenewstack.io/the-looming-crisis-in-the-data-observability-market/"><strong>The Looming Crisis in the Data Observability Market</strong></a></p><p><a href="https://thenewstack.io/the-growth-of-observability-data-is-out-of-control/"><strong>The Growth of Observability Data Is Out of Control!</strong></a></p><p><a href="https://thenewstack.io/newsletter/"><strong>Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</strong></a></p>
]]></content:encoded>
      <enclosure length="22308196" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/23f51451-ddd7-42c5-8973-eb818b8effa5/audio/3b87aa55-772c-4047-8aa6-5109611963e8/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Data Observability: MultiCloud, GenAI Make Challenges Harder</itunes:title>
      <itunes:author>Rohit Choudhary, Acceldata, The New Stack, Heather Joslyn</itunes:author>
      <itunes:duration>00:23:14</itunes:duration>
      <itunes:summary>Rohit Choudhary, co-founder and CEO of Acceldata, placed an early bet on data observability, which has proven prescient. In a New Stack Makers podcast episode, Choudhary discussed three key insights that shaped his vision: First, the exponential growth of data in enterprises, further amplified by generative AI and large language models. Second, the rise of a multicloud and multitechnology environment, with a majority of companies adopting hybrid or multiple cloud strategies. Third, a shortage of engineering talent to manage increasingly complex data systems.</itunes:summary>
      <itunes:subtitle>Rohit Choudhary, co-founder and CEO of Acceldata, placed an early bet on data observability, which has proven prescient. In a New Stack Makers podcast episode, Choudhary discussed three key insights that shaped his vision: First, the exponential growth of data in enterprises, further amplified by generative AI and large language models. Second, the rise of a multicloud and multitechnology environment, with a majority of companies adopting hybrid or multiple cloud strategies. Third, a shortage of engineering talent to manage increasingly complex data systems.</itunes:subtitle>
      <itunes:keywords>genai, software developer, rohit choudhary, tech podcast, the new stack, heather joslyn, tech, developer podcast, the new stack makers, software engineer, data observability</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1493</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">bfe00529-17e3-40f6-9243-65f18b512175</guid>
      <title>Rust’s Expanding Horizons: Memory Safe and Lightning Fast</title>
      <description><![CDATA[<p>Rust has maintained its place among the top 15 programming languages and has been the most admired language for nine consecutive years. In a New Stack Makers podcast, Joel Marcey, director of technology at the Rust Foundation, discussed the language's growing importance, including initiatives to improve its security, performance, and adoption in various domains. While Rust is widely used in systems and backend programming, it’s also gaining traction in embedded systems, safety-critical applications, game development, and even the Linux kernel.</p><p>Marcey highlighted Rust’s strengths as a safe and fast systems language, noting its use on the web through WebAssembly (Wasm), though adoption there is still early. He also addressed Rust vs. Go, explaining that Rust excels in performance-critical applications. Marcey discussed recent updates, such as Rust 1.81, and project goals for 2024, which include a new edition and async improvements.</p><p>He also touched on government interest in Rust, including DARPA’s initiative to convert C code to Rust, and the Rust Security Initiative, aimed at maintaining the language’s strong security reputation.</p><p>Learn more from The New Stack about Rust: </p><p><a href="https://thenewstack.io/the-case-for-rust-as-the-future-of-javascript-infrastructure/">Could Rust be the Future of JavaScript Infrastructure?</a></p><p><a href="https://thenewstack.io/rust-growing-fastest-but-javascript-reigns-supreme/">Rust Growing Fastest, But JavaScript Reigns Supreme</a></p><p><a href="https://thenewstack.io/rust-vs-zig-in-reality-a-somewhat-friendly-debate/">Rust vs. Zig in Reality: A (Somewhat) Friendly Debate</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 10 Oct 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Joel Marcey, darryl taft)</author>
      <link>https://thenewstack.simplecast.com/episodes/rusts-expanding-horizons-memory-safe-and-lightning-fast-dBPVEZdh</link>
      <content:encoded><![CDATA[<p>Rust has maintained its place among the top 15 programming languages and has been the most admired language for nine consecutive years. In a New Stack Makers podcast, Joel Marcey, director of technology at the Rust Foundation, discussed the language's growing importance, including initiatives to improve its security, performance, and adoption in various domains. While Rust is widely used in systems and backend programming, it’s also gaining traction in embedded systems, safety-critical applications, game development, and even the Linux kernel.</p><p>Marcey highlighted Rust’s strengths as a safe and fast systems language, noting its use on the web through WebAssembly (Wasm), though adoption there is still early. He also addressed Rust vs. Go, explaining that Rust excels in performance-critical applications. Marcey discussed recent updates, such as Rust 1.81, and project goals for 2024, which include a new edition and async improvements.</p><p>He also touched on government interest in Rust, including DARPA’s initiative to convert C code to Rust, and the Rust Security Initiative, aimed at maintaining the language’s strong security reputation.</p><p>Learn more from The New Stack about Rust: </p><p><a href="https://thenewstack.io/the-case-for-rust-as-the-future-of-javascript-infrastructure/">Could Rust be the Future of JavaScript Infrastructure?</a></p><p><a href="https://thenewstack.io/rust-growing-fastest-but-javascript-reigns-supreme/">Rust Growing Fastest, But JavaScript Reigns Supreme</a></p><p><a href="https://thenewstack.io/rust-vs-zig-in-reality-a-somewhat-friendly-debate/">Rust vs. Zig in Reality: A (Somewhat) Friendly Debate</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="22803478" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/8d2c619e-28cc-4f8b-a683-bc2fc26b4d83/audio/ed60898d-5179-4bc2-a6da-ce83eec364e2/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Rust’s Expanding Horizons: Memory Safe and Lightning Fast</itunes:title>
      <itunes:author>Joel Marcey, darryl taft</itunes:author>
      <itunes:duration>00:23:45</itunes:duration>
      <itunes:summary>Rust has maintained its place among the top 15 programming languages and has been the most admired language for nine consecutive years. In a New Stack Makers podcast, Joel Marcey, director of technology at the Rust Foundation, discussed the language&apos;s growing importance, including initiatives to improve its security, performance, and adoption in various domains. While Rust is widely used in systems and backend programming, it’s also gaining traction in embedded systems, safety-critical applications, game development, and even the Linux kernel.</itunes:summary>
      <itunes:subtitle>Rust has maintained its place among the top 15 programming languages and has been the most admired language for nine consecutive years. In a New Stack Makers podcast, Joel Marcey, director of technology at the Rust Foundation, discussed the language&apos;s growing importance, including initiatives to improve its security, performance, and adoption in various domains. While Rust is widely used in systems and backend programming, it’s also gaining traction in embedded systems, safety-critical applications, game development, and even the Linux kernel.</itunes:subtitle>
      <itunes:keywords>joel marcey, software developer, tech podcast, the new stack, darryl taft, tech, developer podcast, the new stack makers, software engineer, rust foundation, programming language</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1492</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d7e90758-2376-4281-936b-84270942ea69</guid>
      <title>Are We Thinking About Supply Chain Security All Wrong?</title>
      <description><![CDATA[<p>In a New Stack Makers episode, Ashley Williams, founder and CEO of axo, highlights how the software world depends on open-source code, which is largely maintained by unpaid volunteers. She likens this to a CVS relying on volunteer-run shipping companies, pointing out how unsettling that might be for customers. The conversation focuses on open-source maintainers’ reluctance to be seen as "suppliers" of software, an idea explored in a 2022 blog post by Thomas Depierre. Many maintainers reject the label, as there is no contractual obligation to support the software they provide. </p><p>Williams critiques the industry's response to this, noting that instead of involving maintainers in software supply chain security, companies have relied on third-party vendors. However, these vendors have no relationship with the maintainers, leading to increased vulnerabilities. Williams advocates for better engagement with maintainers, especially at build time, to improve security. She also reflects on the growing pressures on maintainers and the underappreciation of release teams.</p><p>Learn more from The New Stack about open source software supply chain</p><p><a href="https://thenewstack.io/2023-the-year-open-source-security-supply-chain-grew-up/">2023: The Year Open Source Security Supply Chain Grew Up</a></p><p><a href="https://thenewstack.io/fortifying-the-software-supply-chain/">Fortifying the Software Supply Chain</a></p><p><a href="https://thenewstack.io/the-challenges-of-securing-the-open-source-supply-chain/">The Challenges of Securing the Open Source Supply Chain</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 03 Oct 2024 07:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Ashley Williams, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/are-we-thinking-about-supply-chain-security-all-wrong-jWFXSEtO</link>
      <content:encoded><![CDATA[<p>In a New Stack Makers episode, Ashley Williams, founder and CEO of axo, highlights how the software world depends on open-source code, which is largely maintained by unpaid volunteers. She likens this to a CVS relying on volunteer-run shipping companies, pointing out how unsettling that might be for customers. The conversation focuses on open-source maintainers’ reluctance to be seen as "suppliers" of software, an idea explored in a 2022 blog post by Thomas Depierre. Many maintainers reject the label, as there is no contractual obligation to support the software they provide. </p><p>Williams critiques the industry's response to this, noting that instead of involving maintainers in software supply chain security, companies have relied on third-party vendors. However, these vendors have no relationship with the maintainers, leading to increased vulnerabilities. Williams advocates for better engagement with maintainers, especially at build time, to improve security. She also reflects on the growing pressures on maintainers and the underappreciation of release teams.</p><p>Learn more from The New Stack about open source software supply chain</p><p><a href="https://thenewstack.io/2023-the-year-open-source-security-supply-chain-grew-up/">2023: The Year Open Source Security Supply Chain Grew Up</a></p><p><a href="https://thenewstack.io/fortifying-the-software-supply-chain/">Fortifying the Software Supply Chain</a></p><p><a href="https://thenewstack.io/the-challenges-of-securing-the-open-source-supply-chain/">The Challenges of Securing the Open Source Supply Chain</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="42058021" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/c03e2a98-cacc-45cc-aa80-f33083e308ce/audio/0616c637-a51c-4204-bce7-48e5ce0b8b46/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Are We Thinking About Supply Chain Security All Wrong?</itunes:title>
      <itunes:author>Ashley Williams, Alex Williams</itunes:author>
      <itunes:duration>00:43:48</itunes:duration>
      <itunes:summary>In this New Stack Makers episode, Ashley Williams, founder and CEO of axo, highlights how the software world depends on open-source code, which is largely maintained by unpaid volunteers. She likens this to a CVS relying on volunteer-run shipping companies, pointing out how unsettling that might be for customers. The conversation focuses on open-source maintainers’ reluctance to be seen as &quot;suppliers&quot; of software, an idea explored in a 2022 blog post by Thomas Depierre. Many maintainers reject the label, as there is no contractual obligation to support the software they provide.</itunes:summary>
      <itunes:subtitle>In this New Stack Makers episode, Ashley Williams, founder and CEO of axo, highlights how the software world depends on open-source code, which is largely maintained by unpaid volunteers. She likens this to a CVS relying on volunteer-run shipping companies, pointing out how unsettling that might be for customers. The conversation focuses on open-source maintainers’ reluctance to be seen as &quot;suppliers&quot; of software, an idea explored in a 2022 blog post by Thomas Depierre. Many maintainers reject the label, as there is no contractual obligation to support the software they provide.</itunes:subtitle>
      <itunes:keywords>software supply chain, software developer, tech podcast, alex williams, the new stack, ashley williams, tech, developer podcast, open source, security</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1491</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">026808a0-bdc4-431c-9045-705604a4809d</guid>
      <title>What a CTO Learned at Nvidia About Managing Engineers</title>
      <description><![CDATA[<p>In this New Stack Makers podcast, Xun Wang, CTO of Bloomreach, brings insights from his time at Nvidia, particularly lessons from its founder, Jensen Huang, to his current role in e-commerce personalization. Wang emphasizes structuring organizations to reflect the architecture of the products they build, applying a hands-on, detail-oriented approach that encourages deep understanding of engineering challenges. </p><p>He credits Huang for teaching him the importance of focusing on fundamental architecture rather than relying on iterative testing alone. Wang highlights the impact of generative AI (GenAI) on Bloomreach, explaining how AI-driven search is essential to understanding human language and user intent. As GenAI reshapes application development, Wang stresses the need for engineers to adopt new skills in AI manipulation, while still maintaining traditional coding expertise. He advocates for continuous learning, acknowledging the challenge of staying updated in a rapidly evolving field. Wang himself reads extensively to keep pace with innovations, underscoring the importance of staying curious and adaptable in today’s tech landscape. </p><p>Learn more from The New Stack about Entrepreneurship for Engineers: </p><p><a href="https://thenewstack.io/entrepreneurship-for-engineers-how-to-grow-into-leadership/">How to Grow into Leadership </a></p><p><a href="https://thenewstack.io/engineering-leaders-switch-to-wartime-management-now/">Engineering Leaders: Switch to Wartime Management Now </a></p><p><a href="https://thenewstack.io/how-teleports-leader-transitioned-from-engineer-to-ceo/">How Teleport’s Leader Transitioned from Engineer to CEO </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></description>
      <pubDate>Thu, 26 Sep 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Xun Wang, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/what-a-cto-learned-at-nvidia-about-managing-engineers-GECXRHBi</link>
      <content:encoded><![CDATA[<p>In this New Stack Makers podcast, Xun Wang, CTO of Bloomreach, brings insights from his time at Nvidia, particularly lessons from its founder, Jensen Huang, to his current role in e-commerce personalization. Wang emphasizes structuring organizations to reflect the architecture of the products they build, applying a hands-on, detail-oriented approach that encourages deep understanding of engineering challenges. </p><p>He credits Huang for teaching him the importance of focusing on fundamental architecture rather than relying on iterative testing alone. Wang highlights the impact of generative AI (GenAI) on Bloomreach, explaining how AI-driven search is essential to understanding human language and user intent. As GenAI reshapes application development, Wang stresses the need for engineers to adopt new skills in AI manipulation, while still maintaining traditional coding expertise. He advocates for continuous learning, acknowledging the challenge of staying updated in a rapidly evolving field. Wang himself reads extensively to keep pace with innovations, underscoring the importance of staying curious and adaptable in today’s tech landscape. </p><p>Learn more from The New Stack about Entrepreneurship for Engineers: </p><p><a href="https://thenewstack.io/entrepreneurship-for-engineers-how-to-grow-into-leadership/">How to Grow into Leadership </a></p><p><a href="https://thenewstack.io/engineering-leaders-switch-to-wartime-management-now/">Engineering Leaders: Switch to Wartime Management Now </a></p><p><a href="https://thenewstack.io/how-teleports-leader-transitioned-from-engineer-to-ceo/">How Teleport’s Leader Transitioned from Engineer to CEO </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></content:encoded>
      <enclosure length="43897042" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/30ecc140-c048-4fee-ad2d-99c2d9b7b236/audio/61e7767c-d4f4-472b-8597-39ffbcfb1883/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What a CTO Learned at Nvidia About Managing Engineers</itunes:title>
      <itunes:author>Xun Wang, Alex Williams</itunes:author>
      <itunes:duration>00:45:43</itunes:duration>
      <itunes:summary>In this New Stack Makers podcast, Xun Wang, CTO of Bloomreach, brings insights from his time at Nvidia, particularly lessons from its founder, Jensen Huang, to his current role in e-commerce personalization. Wang emphasizes structuring organizations to reflect the architecture of the products they build, applying a hands-on, detail-oriented approach that encourages deep understanding of engineering challenges. </itunes:summary>
      <itunes:subtitle>In this New Stack Makers podcast, Xun Wang, CTO of Bloomreach, brings insights from his time at Nvidia, particularly lessons from its founder, Jensen Huang, to his current role in e-commerce personalization. Wang emphasizes structuring organizations to reflect the architecture of the products they build, applying a hands-on, detail-oriented approach that encourages deep understanding of engineering challenges. </itunes:subtitle>
      <itunes:keywords>xun wang, software developer, ai, the new stack, leadership, tech, developer podcast, the new stack makers, software engineer, engineer leadership, nvidia, jensen huang</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1490</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">59c3ea1d-b0c8-4eea-ae56-8dc979bfc845</guid>
      <title>How to Find Success with Code Reviews</title>
      <description><![CDATA[<p>Code reviews can be highly beneficial but tricky to execute well due to the human factors involved, says Adrienne Braganza Tacke, author of <em>Looks Good to Me: Actionable Advice for Constructive Code Review</em>. In a recent conversation with <em>The New Stack</em>, Tacke identified three challenges teams must address for successful code reviews: ambiguity, subjectivity, and ego.</p><p>Ambiguity arises when the goals or expectations for the code are unclear, leading to miscommunication and rework. Tacke emphasizes the need for clarity and explicit communication throughout the review process. Subjectivity, the second challenge, can derail reviews when personal preferences overshadow objective evaluation. Reviewers should justify their suggestions based on technical merit rather than opinion. Finally, ego can get in the way, with developers feeling attached to their code. Both reviewers and submitters must check their egos to foster a constructive dialogue.</p><p>Tacke encourages programmers to first review their own work, as self-checks can enhance the quality of the code before it reaches the reviewer. Ultimately, code reviews can improve code quality, mentor developers, and strengthen team knowledge. </p><p>Learn more from The New Stack about code reviews:</p><p><a href="https://thenewstack.io/the-anatomy-of-slow-code-reviews/">The Anatomy of Slow Code Reviews </a></p><p><a href="https://thenewstack.io/one-company-rethinks-diff-to-cut-code-review-times/">One Company Rethinks Diff to Cut Code Review Times</a></p><p><a href="https://thenewstack.io/how-good-is-your-code-review-process/">How Good Is Your Code Review Process?</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></description>
      <pubDate>Thu, 19 Sep 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Adrienne Tacke, Loraine Lawson)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-to-find-success-with-code-reviews-yojcmWsq</link>
      <content:encoded><![CDATA[<p>Code reviews can be highly beneficial but tricky to execute well due to the human factors involved, says Adrienne Braganza Tacke, author of <em>Looks Good to Me: Actionable Advice for Constructive Code Review</em>. In a recent conversation with <em>The New Stack</em>, Tacke identified three challenges teams must address for successful code reviews: ambiguity, subjectivity, and ego.</p><p>Ambiguity arises when the goals or expectations for the code are unclear, leading to miscommunication and rework. Tacke emphasizes the need for clarity and explicit communication throughout the review process. Subjectivity, the second challenge, can derail reviews when personal preferences overshadow objective evaluation. Reviewers should justify their suggestions based on technical merit rather than opinion. Finally, ego can get in the way, with developers feeling attached to their code. Both reviewers and submitters must check their egos to foster a constructive dialogue.</p><p>Tacke encourages programmers to first review their own work, as self-checks can enhance the quality of the code before it reaches the reviewer. Ultimately, code reviews can improve code quality, mentor developers, and strengthen team knowledge. </p><p>Learn more from The New Stack about code reviews:</p><p><a href="https://thenewstack.io/the-anatomy-of-slow-code-reviews/">The Anatomy of Slow Code Reviews </a></p><p><a href="https://thenewstack.io/one-company-rethinks-diff-to-cut-code-review-times/">One Company Rethinks Diff to Cut Code Review Times</a></p><p><a href="https://thenewstack.io/how-good-is-your-code-review-process/">How Good Is Your Code Review Process?</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></content:encoded>
      <enclosure length="32904715" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/8314d0a2-b76a-4484-adfe-a3b7083baad1/audio/906eb8a4-a6f4-41c4-af91-093b720bb8df/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How to Find Success with Code Reviews</itunes:title>
      <itunes:author>Adrienne Tacke, Loraine Lawson</itunes:author>
      <itunes:duration>00:34:16</itunes:duration>
      <itunes:summary>Code reviews can be highly beneficial but tricky to execute well due to the human factors involved, says Adrienne Braganza Tacke, author of Looks Good to Me: Actionable Advice for Constructive Code Review. In a recent conversation with The New Stack, Tacke identified three challenges teams must address for successful code reviews: ambiguity, subjectivity, and ego. </itunes:summary>
      <itunes:subtitle>Code reviews can be highly beneficial but tricky to execute well due to the human factors involved, says Adrienne Braganza Tacke, author of Looks Good to Me: Actionable Advice for Constructive Code Review. In a recent conversation with The New Stack, Tacke identified three challenges teams must address for successful code reviews: ambiguity, subjectivity, and ego. </itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, adrienne tacke, loraine lawson, devops, devops podcast, tech, the new stack makers, software engineer, code review, platform engineering</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1489</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">60f2489e-31b2-4b56-8a95-b8ca083c5fdb</guid>
      <title>How Apache Iceberg and Flink Can Ease Developer Pain</title>
      <description><![CDATA[<p>In this New Stack Makers episode, Adi Polak, director of advocacy and developer experience engineering at Confluent, discusses the operational and analytical estates in data infrastructure. The operational estate focuses on fast, low-latency event-driven applications, while the analytical estate handles long-running data crunching tasks. Challenges arise from "schema evolution": upstream operational changes impact downstream analytics, creating complexity for developers. </p><p>Apache Iceberg and Flink help mitigate these issues. Iceberg, a table format developed by Netflix, optimizes querying by managing file relationships within a data lake, reducing processing time and errors. It has been widely adopted by major companies like Airbnb and LinkedIn. </p><p>Apache Flink, a versatile data processing framework, is driving two key trends: shifting some batch processing tasks into stream processing and transitioning microservices into Flink streaming applications. This approach enhances system reliability, lowers latency, and meets customer demands for real-time data, like instant flight status updates. Together, Iceberg and Flink streamline data infrastructure, addressing developer pain points and improving efficiency. </p><p>Learn more from The New Stack about Apache Iceberg and Flink:</p><p><a href="https://thenewstack.io/has-your-data-lakehouse-frozen-over-thaw-iceberg/">Unfreeze Apache Iceberg to Thaw Your Data Lakehouse</a></p><p><a href="https://thenewstack.io/apache-flink-2023-retrospective-and-glimpse-into-the-future/">Apache Flink: 2023 Retrospective and Glimpse into the Future </a></p><p><a href="https://thenewstack.io/4-reasons-why-developers-should-use-apache-flink/">4 Reasons Why Developers Should Use Apache Flink </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></description>
      <pubDate>Thu, 12 Sep 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Adi Polak, Confluent, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-apache-iceberg-and-flink-can-ease-developer-pain-8SjInCpv</link>
      <content:encoded><![CDATA[<p>In this New Stack Makers episode, Adi Polak, director of advocacy and developer experience engineering at Confluent, discusses the operational and analytical estates in data infrastructure. The operational estate focuses on fast, low-latency event-driven applications, while the analytical estate handles long-running data crunching tasks. Challenges arise from "schema evolution": upstream operational changes impact downstream analytics, creating complexity for developers. </p><p>Apache Iceberg and Flink help mitigate these issues. Iceberg, a table format developed by Netflix, optimizes querying by managing file relationships within a data lake, reducing processing time and errors. It has been widely adopted by major companies like Airbnb and LinkedIn. </p><p>Apache Flink, a versatile data processing framework, is driving two key trends: shifting some batch processing tasks into stream processing and transitioning microservices into Flink streaming applications. This approach enhances system reliability, lowers latency, and meets customer demands for real-time data, like instant flight status updates. Together, Iceberg and Flink streamline data infrastructure, addressing developer pain points and improving efficiency. </p><p>Learn more from The New Stack about Apache Iceberg and Flink:</p><p><a href="https://thenewstack.io/has-your-data-lakehouse-frozen-over-thaw-iceberg/">Unfreeze Apache Iceberg to Thaw Your Data Lakehouse</a></p><p><a href="https://thenewstack.io/apache-flink-2023-retrospective-and-glimpse-into-the-future/">Apache Flink: 2023 Retrospective and Glimpse into the Future </a></p><p><a href="https://thenewstack.io/4-reasons-why-developers-should-use-apache-flink/">4 Reasons Why Developers Should Use Apache Flink </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="45255827" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/6864b87e-b20f-4228-9308-bd5ce0db5c61/audio/083e99cd-2378-4e5a-ab4a-538a39c7f503/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Apache Iceberg and Flink Can Ease Developer Pain</itunes:title>
      <itunes:author>Adi Polak, Confluent, Alex Williams</itunes:author>
      <itunes:duration>00:47:08</itunes:duration>
      <itunes:summary>In this New Stack Makers episode, Adi Polak, director of advocacy and developer experience engineering at Confluent, discusses the operational and analytical estates in data infrastructure. The operational estate focuses on fast, low-latency event-driven applications, while the analytical estate handles long-running data crunching tasks. Challenges arise from &quot;schema evolution&quot;: upstream operational changes impact downstream analytics, creating complexity for developers. </itunes:summary>
      <itunes:subtitle>In this New Stack Makers episode, Adi Polak, director of advocacy and developer experience engineering at Confluent, discusses the operational and analytical estates in data infrastructure. The operational estate focuses on fast, low-latency event-driven applications, while the analytical estate handles long-running data crunching tasks. Challenges arise from &quot;schema evolution&quot;: upstream operational changes impact downstream analytics, creating complexity for developers. </itunes:subtitle>
      <itunes:keywords>tech podcast, iceberg, the new stack, confluent, tech, the new stack makers, software engineer, apache iceberg, apache flink</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1488</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">669db0e2-abb9-4494-9f27-537c9895a800</guid>
      <title>How Heroku Is Positioned to Help Ops Engineers in the GenAI Era</title>
      <description><![CDATA[<p>Bob Wise, CEO of Heroku, discussed the impact of generative AI (GenAI) coding tools on software development in a recent episode of The New Stack Makers. He compared the rise of these tools to adding an "infinite number of interns" to development teams, noting that while they accelerate code writing, they don't yet simplify testing, deployment, or production operations. Wise likened this to the early days of Kubernetes, which focused on improving operations rather than the frontend experience. He emphasized that Kubernetes' success was due to its focus on easing the operational burden, something current GenAI tools have yet to achieve.</p><p>Heroku, acquired by Salesforce in 2010, is positioned to benefit from these changes by helping teams transition to more automated systems. Wise highlighted Heroku’s strategic bet on Postgres, a database technology that's gaining traction, especially for GenAI workloads. He also discussed Heroku's ongoing migration to Kubernetes, aligning with industry standards to enhance its platform.</p><p>Learn more from The New Stack about Heroku: </p><p><a href="https://thenewstack.io/the-data-stack-journey-lessons-from-architecting-stacks-at-heroku-and-mattermost/">The Data Stack Journey: Lessons from Architecting Stacks at Heroku and Mattermost</a></p><p><a href="https://thenewstack.io/kubernetes-and-the-next-generation-of-paas/">Kubernetes and the Next Generation of PaaS </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 5 Sep 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Bob Wise, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-heroku-is-positioned-to-help-ops-engineers-in-the-genai-era-CtwStc4c</link>
      <content:encoded><![CDATA[<p>Bob Wise, CEO of Heroku, discussed the impact of generative AI (GenAI) coding tools on software development in a recent episode of The New Stack Makers. He compared the rise of these tools to adding an "infinite number of interns" to development teams, noting that while they accelerate code writing, they don't yet simplify testing, deployment, or production operations. Wise likened this to the early days of Kubernetes, which focused on improving operations rather than the frontend experience. He emphasized that Kubernetes' success was due to its focus on easing the operational burden, something current GenAI tools have yet to achieve.</p><p>Heroku, acquired by Salesforce in 2010, is positioned to benefit from these changes by helping teams transition to more automated systems. Wise highlighted Heroku’s strategic bet on Postgres, a database technology that's gaining traction, especially for GenAI workloads. He also discussed Heroku's ongoing migration to Kubernetes, aligning with industry standards to enhance its platform.</p><p>Learn more from The New Stack about Heroku:</p><p><a href="https://thenewstack.io/the-data-stack-journey-lessons-from-architecting-stacks-at-heroku-and-mattermost/">The Data Stack Journey: Lessons from Architecting Stacks at Heroku and Mattermost</a></p><p><a href="https://thenewstack.io/kubernetes-and-the-next-generation-of-paas/">Kubernetes and the Next Generation of PaaS</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="40182638" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/e44fefde-ece4-4437-84a5-ed481536aa36/audio/1ea6b1ed-49cf-4763-bd1d-dfb4e38372a7/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Heroku Is Positioned to Help Ops Engineers in the GenAI Era</itunes:title>
      <itunes:author>Bob Wise, Alex Williams</itunes:author>
      <itunes:duration>00:41:51</itunes:duration>
      <itunes:summary>Bob Wise, CEO of Heroku, discussed the impact of generative AI (GenAI) coding tools on software development in a recent episode of The New Stack Makers. He compared the rise of these tools to adding an &quot;infinite number of interns&quot; to development teams, noting that while they accelerate code writing, they don&apos;t yet simplify testing, deployment, or production operations. Wise likened this to the early days of Kubernetes, which focused on improving operations rather than the frontend experience. He emphasized that Kubernetes&apos; success was due to its focus on easing the operational burden, something current GenAI tools have yet to achieve.</itunes:summary>
      <itunes:subtitle>Bob Wise, CEO of Heroku, discussed the impact of generative AI (GenAI) coding tools on software development in a recent episode of The New Stack Makers. He compared the rise of these tools to adding an &quot;infinite number of interns&quot; to development teams, noting that while they accelerate code writing, they don&apos;t yet simplify testing, deployment, or production operations. Wise likened this to the early days of Kubernetes, which focused on improving operations rather than the frontend experience. He emphasized that Kubernetes&apos; success was due to its focus on easing the operational burden, something current GenAI tools have yet to achieve.</itunes:subtitle>
      <itunes:keywords>buildpacks, software developer, tech podcast, alex williams, the new stack, cloud native, devops podcast, bob wise, tech, developer podcast, software engineer, platform as a service, heroku</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1487</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">bc79cde7-9d4b-4050-98cc-da1ce65e65b6</guid>
      <title>OpenJS Foundation’s Leader Details the Threats to Open Source</title>
      <description><![CDATA[<p>After the XZ Utils backdoor vulnerability was uncovered in March, the OpenJS Foundation saw a surge in inquiries from potential open source JavaScript contributors. Robin Ginn, executive director of the foundation, noted that volunteer-led JavaScript communities often face challenges in managing these contributions. The discovery that a single contributor, "Jia Tan," planted the backdoor heightened vigilance, especially when new contributors requested admin privileges. Ginn emphasized that trust is not synonymous with security, especially in open source projects where maintainers must be vigilant about who can access their repositories.</p><p>The XZ vulnerability highlighted broader concerns about the security of open source software, particularly in projects with only a single maintainer. Despite receiving a significant grant from Germany's Sovereign Tech Fund, the foundation remains under-resourced, with just two full-time staffers supporting 35 projects. Ginn urged companies that rely on open source software to invest in it by hiring maintainers, ensuring these critical projects are properly supported.</p><p>Learn more from The New Stack about open source vulnerability:</p><p><a href="https://thenewstack.io/linux-xz-backdoor-damage-could-be-greater-than-feared/">Linux xz Backdoor Damage Could Be Greater Than Feared</a></p><p><a href="https://thenewstack.io/unzipping-the-xz-backdoor-and-its-lessons-for-open-source/">Unzipping the XZ Backdoor and Its Lessons for Open Source</a></p><p><a href="https://thenewstack.io/linux-xz-and-the-great-flaws-in-open-source/">Linux xz and the Great Flaws in Open Source</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 29 Aug 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Robin Ginn, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/openjs-foundations-leader-details-the-threats-to-open-source-fSd9ne_j</link>
      <content:encoded><![CDATA[<p>After the XZ Utils backdoor vulnerability was uncovered in March, the OpenJS Foundation saw a surge in inquiries from potential open source JavaScript contributors. Robin Ginn, executive director of the foundation, noted that volunteer-led JavaScript communities often face challenges in managing these contributions. The discovery that a single contributor, "Jia Tan," planted the backdoor heightened vigilance, especially when new contributors requested admin privileges. Ginn emphasized that trust is not synonymous with security, especially in open source projects where maintainers must be vigilant about who can access their repositories.</p><p>The XZ vulnerability highlighted broader concerns about the security of open source software, particularly in projects with only a single maintainer. Despite receiving a significant grant from Germany's Sovereign Tech Fund, the foundation remains under-resourced, with just two full-time staffers supporting 35 projects. Ginn urged companies that rely on open source software to invest in it by hiring maintainers, ensuring these critical projects are properly supported.</p><p>Learn more from The New Stack about open source vulnerability:</p><p><a href="https://thenewstack.io/linux-xz-backdoor-damage-could-be-greater-than-feared/">Linux xz Backdoor Damage Could Be Greater Than Feared</a></p><p><a href="https://thenewstack.io/unzipping-the-xz-backdoor-and-its-lessons-for-open-source/">Unzipping the XZ Backdoor and Its Lessons for Open Source</a></p><p><a href="https://thenewstack.io/linux-xz-and-the-great-flaws-in-open-source/">Linux xz and the Great Flaws in Open Source</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="27114309" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/f8bbab7e-2065-4f81-a516-a2a15a106043/audio/3427e30f-bbe1-45e9-bffa-93bb1278ada0/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>OpenJS Foundation’s Leader Details the Threats to Open Source</itunes:title>
      <itunes:author>Robin Ginn, Alex Williams</itunes:author>
      <itunes:duration>00:28:14</itunes:duration>
      <itunes:summary>After the XZ Utils backdoor vulnerability was uncovered in March, the OpenJS Foundation saw a surge in inquiries from potential open source JavaScript contributors. Robin Ginn, executive director of the foundation, noted that volunteer-led JavaScript communities often face challenges in managing these contributions. The discovery that a single contributor, &quot;Jia Tan,&quot; planted the backdoor heightened vigilance, especially when new contributors requested admin privileges. Ginn emphasized that trust is not synonymous with security, especially in open source projects where maintainers must be vigilant about who can access their repositories.</itunes:summary>
      <itunes:subtitle>After the XZ Utils backdoor vulnerability was uncovered in March, the OpenJS Foundation saw a surge in inquiries from potential open source JavaScript contributors. Robin Ginn, executive director of the foundation, noted that volunteer-led JavaScript communities often face challenges in managing these contributions. The discovery that a single contributor, &quot;Jia Tan,&quot; planted the backdoor heightened vigilance, especially when new contributors requested admin privileges. Ginn emphasized that trust is not synonymous with security, especially in open source projects where maintainers must be vigilant about who can access their repositories.</itunes:subtitle>
      <itunes:keywords>software developer, java script, xz utils backdoor, openjs, the new stack, linux, vulnerability, robin ginn, tech, developer podcast, the new stack makers, software engineer, open source</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1486</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6e1e0705-a9c9-468b-bded-d8fddc8e4a7e</guid>
      <title>What’s the Future for Software Developers?</title>
      <description><![CDATA[<p>Paige Bailey, who began coding at age 9 in rural Texas, now leads the GenAI developer experience at Google. In a conversation with Chris Pirillo on The New Stack Makers, Bailey reflected on the evolving role of software development in the era of generative AI. While she once urged her nieces and nephews to pursue computer science degrees, Bailey now believes that critical thinking and problem-solving may be more crucial for future tech careers.</p><p>She emphasized that generative AI is democratizing software development, making it more accessible and enabling developers to focus on creative tasks rather than the minutiae of coding. Bailey's experience at Google highlights this shift, as she now acts more as a reviewer and overseer of AI-generated code. She sees GenAI not as a replacement for developers but as a tool to accelerate their creativity and tackle longstanding backlogs. Bailey believes the key is ensuring everyone understands how to effectively apply generative AI to their work.</p><p>Learn more from The New Stack about the future of development:</p><p><a href="https://thenewstack.io/7-ways-to-future-proof-your-developer-job-in-the-age-of-ai/">7 Ways to Future-Proof Your Developer Job in the Age of AI</a></p><p><a href="https://thenewstack.io/the-future-of-developer-careers/">The Future of Developer Careers</a></p><p><a href="https://thenewstack.io/4-forecasts-for-the-future-of-developer-relations/">4 Forecasts for the Future of Developer Relations</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 22 Aug 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Paige Bailey, Chris Pirillo)</author>
      <link>https://thenewstack.simplecast.com/episodes/whats-the-future-for-software-developers-xcFQ7Ls2</link>
      <content:encoded><![CDATA[<p>Paige Bailey, who began coding at age 9 in rural Texas, now leads the GenAI developer experience at Google. In a conversation with Chris Pirillo on The New Stack Makers, Bailey reflected on the evolving role of software development in the era of generative AI. While she once urged her nieces and nephews to pursue computer science degrees, Bailey now believes that critical thinking and problem-solving may be more crucial for future tech careers.</p><p>She emphasized that generative AI is democratizing software development, making it more accessible and enabling developers to focus on creative tasks rather than the minutiae of coding. Bailey's experience at Google highlights this shift, as she now acts more as a reviewer and overseer of AI-generated code. She sees GenAI not as a replacement for developers but as a tool to accelerate their creativity and tackle longstanding backlogs. Bailey believes the key is ensuring everyone understands how to effectively apply generative AI to their work.</p><p>Learn more from The New Stack about the future of development:</p><p><a href="https://thenewstack.io/7-ways-to-future-proof-your-developer-job-in-the-age-of-ai/">7 Ways to Future-Proof Your Developer Job in the Age of AI</a></p><p><a href="https://thenewstack.io/the-future-of-developer-careers/">The Future of Developer Careers</a></p><p><a href="https://thenewstack.io/4-forecasts-for-the-future-of-developer-relations/">4 Forecasts for the Future of Developer Relations</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="31071964" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/5dfdb933-3d32-4a81-8c9d-75df172b26e8/audio/a11b8ffe-0f2b-4bf0-9f7d-d0a70d277c22/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What’s the Future for Software Developers?</itunes:title>
      <itunes:author>Paige Bailey, Chris Pirillo</itunes:author>
      <itunes:duration>00:32:21</itunes:duration>
      <itunes:summary>Paige Bailey, who began coding at age 9 in rural Texas, now leads the GenAI developer experience at Google. In a conversation with Chris Pirillo on The New Stack Makers, Bailey reflected on the evolving role of software development in the era of generative AI. While she once urged her nieces and nephews to pursue computer science degrees, Bailey now believes that critical thinking and problem-solving may be more crucial for future tech careers. </itunes:summary>
      <itunes:subtitle>Paige Bailey, who began coding at age 9 in rural Texas, now leads the GenAI developer experience at Google. In a conversation with Chris Pirillo on The New Stack Makers, Bailey reflected on the evolving role of software development in the era of generative AI. While she once urged her nieces and nephews to pursue computer science degrees, Bailey now believes that critical thinking and problem-solving may be more crucial for future tech careers. </itunes:subtitle>
      <itunes:keywords>generative ai, paige bailey, software developer, tech podcast, the new stack, chris pirillo, tech, google genai, the new stack makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1485</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b0cdc53c-3f90-4f1a-bdc7-02eb3fc2d10c</guid>
      <title>Want to Create Software Sustainably? Anne Currie’s Got Ideas</title>
      <description><![CDATA[<p>Anne Currie, a leading expert in sustainable tech and part of the Green Software Foundation, discusses practical steps for building resilient, sustainable software in an episode of The New Stack Makers. With 30 years of experience, Currie co-authored <i>Building Green Software</i>, emphasizing the tech industry's role in the energy transition. She highlights the complexity of adapting technology to renewable energy, involving extensive research and debunking misinformation. Currie discusses the importance of energy proportionality—the idea that increased utilization improves a computer's energy efficiency—and how this concept aligns with modern DevOps practices that reduce carbon emissions while enhancing speed, cost efficiency, and security.</p><p>Currie also emphasizes architecting systems to operate on renewable power and draws parallels between managing variable grid power and internet bandwidth. Using examples like video conferencing, she illustrates how software can adapt to fluctuating resources. The episode also touches on potential pitfalls like greenwashing and the challenges in accurately naming concepts like energy proportionality.</p><p>Learn more from The New Stack about sustainability:</p><p><a href="https://thenewstack.io/sustainability-how-did-amazon-azure-google-perform-in-2023/">Sustainability: How Did Amazon, Azure, Google Perform in 2023?</a></p><p><a href="https://thenewstack.io/sustainability-focus-cloud-efficiency-not-carbon-emissions/">Sustainability Focus: Cloud Efficiency, Not Carbon Emissions</a></p><p><a href="https://thenewstack.io/developers-should-press-cloud-providers-on-sustainability/">Developers Should Press Cloud Providers on Sustainability</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 15 Aug 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Anne Currie, Charles Humble)</author>
      <link>https://thenewstack.simplecast.com/episodes/want-to-create-software-sustainably-anne-curries-got-ideas-Kd4ozUSQ</link>
      <content:encoded><![CDATA[<p>Anne Currie, a leading expert in sustainable tech and part of the Green Software Foundation, discusses practical steps for building resilient, sustainable software in an episode of The New Stack Makers. With 30 years of experience, Currie co-authored <i>Building Green Software</i>, emphasizing the tech industry's role in the energy transition. She highlights the complexity of adapting technology to renewable energy, involving extensive research and debunking misinformation. Currie discusses the importance of energy proportionality—the idea that increased utilization improves a computer's energy efficiency—and how this concept aligns with modern DevOps practices that reduce carbon emissions while enhancing speed, cost efficiency, and security.</p><p>Currie also emphasizes architecting systems to operate on renewable power and draws parallels between managing variable grid power and internet bandwidth. Using examples like video conferencing, she illustrates how software can adapt to fluctuating resources. The episode also touches on potential pitfalls like greenwashing and the challenges in accurately naming concepts like energy proportionality.</p><p>Learn more from The New Stack about sustainability:</p><p><a href="https://thenewstack.io/sustainability-how-did-amazon-azure-google-perform-in-2023/">Sustainability: How Did Amazon, Azure, Google Perform in 2023?</a></p><p><a href="https://thenewstack.io/sustainability-focus-cloud-efficiency-not-carbon-emissions/">Sustainability Focus: Cloud Efficiency, Not Carbon Emissions</a></p><p><a href="https://thenewstack.io/developers-should-press-cloud-providers-on-sustainability/">Developers Should Press Cloud Providers on Sustainability</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="40228614" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/fd0a0c16-da9f-400c-878b-48a6d40acb89/audio/01732718-bdec-44b7-a3aa-e06c68c8c6d8/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Want to Create Software Sustainably? Anne Currie’s Got Ideas</itunes:title>
      <itunes:author>Anne Currie, Charles Humble</itunes:author>
      <itunes:duration>00:41:54</itunes:duration>
      <itunes:summary>Anne Currie, a leading expert in sustainable tech and part of the Green Software Foundation, discusses practical steps for building resilient, sustainable software in an episode of The New Stack Makers. With 30 years of experience, Currie co-authored Building Green Software, emphasizing the tech industry&apos;s role in the energy transition. She highlights the complexity of adapting technology to renewable energy, involving extensive research and debunking misinformation. Currie discusses the importance of energy proportionality—the idea that increased utilization improves a computer&apos;s energy efficiency—and how this concept aligns with modern DevOps practices that reduce carbon emissions while enhancing speed, cost efficiency, and security.</itunes:summary>
      <itunes:subtitle>Anne Currie, a leading expert in sustainable tech and part of the Green Software Foundation, discusses practical steps for building resilient, sustainable software in an episode of The New Stack Makers. With 30 years of experience, Currie co-authored Building Green Software, emphasizing the tech industry&apos;s role in the energy transition. She highlights the complexity of adapting technology to renewable energy, involving extensive research and debunking misinformation. Currie discusses the importance of energy proportionality—the idea that increased utilization improves a computer&apos;s energy efficiency—and how this concept aligns with modern DevOps practices that reduce carbon emissions while enhancing speed, cost efficiency, and security.</itunes:subtitle>
      <itunes:keywords>sustainable tech, green software, software developer, tech podcast, the new stack, sustainability development, developer podcast, software engineer, anne currie, charles humble</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1484</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3d53ecd2-8c10-4738-91c0-8ba6903611f5</guid>
      <title>VMware’s Golden Path</title>
      <description><![CDATA[<p>In an era marked by complexity, the golden path is essential for software architects, asserts James Watters, senior director of R&D at VMware Tanzu, Broadcom. This approach, emphasizing fewer application patterns, simplifies life for security personnel, developers, and infrastructure teams. VMware defines the golden path as streamlining software development, crucial in today's economic climate. Watters highlights this in the Broadcom report: State of Cloud Native App Platforms 2024, noting that 55% of organizations favor this method for its consistency and security.</p><p>Watters, a pioneer in platform as a service since 2009, helped establish Cloud Foundry and now drives VMware Tanzu. Tanzu's golden operations offer standardized, consistent processes across platforms, crucial for efficiency and security. Watters advocates for minimal DIY in favor of operational consistency, providing commands for building, deploying, and scaling applications.</p><p>Tanzu’s focus is on integrating AI to enhance user interfaces and data access, impacting platform engineering significantly in the coming years. This integration aims to offer a better developer experience while maintaining security and efficiency.</p><p>Learn more from The New Stack about golden paths:</p><p><a href="https://thenewstack.io/golden-paths-start-with-a-shift-left/">Golden Paths Start with a Shift Left</a></p><p><a href="https://thenewstack.io/platform-engineering-not-working-out-youre-doing-it-wrong/">Platform Engineering Not Working Out? You’re Doing It Wrong.</a></p><p><a href="https://thenewstack.io/how-to-pave-golden-paths-that-actually-go-somewhere/">How to Pave Golden Paths That Actually Go Somewhere</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 8 Aug 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (James Watters, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/vmwares-golden-path-fVFNSF1G</link>
      <content:encoded><![CDATA[<p>In an era marked by complexity, the golden path is essential for software architects, asserts James Watters, senior director of R&D at VMware Tanzu, Broadcom. This approach, emphasizing fewer application patterns, simplifies life for security personnel, developers, and infrastructure teams. VMware defines the golden path as streamlining software development, crucial in today's economic climate. Watters highlights this in the Broadcom report: State of Cloud Native App Platforms 2024, noting that 55% of organizations favor this method for its consistency and security.</p><p>Watters, a pioneer in platform as a service since 2009, helped establish Cloud Foundry and now drives VMware Tanzu. Tanzu's golden operations offer standardized, consistent processes across platforms, crucial for efficiency and security. Watters advocates for minimal DIY in favor of operational consistency, providing commands for building, deploying, and scaling applications.</p><p>Tanzu’s focus is on integrating AI to enhance user interfaces and data access, impacting platform engineering significantly in the coming years. This integration aims to offer a better developer experience while maintaining security and efficiency.</p><p>Learn more from The New Stack about golden paths:</p><p><a href="https://thenewstack.io/golden-paths-start-with-a-shift-left/">Golden Paths Start with a Shift Left</a></p><p><a href="https://thenewstack.io/platform-engineering-not-working-out-youre-doing-it-wrong/">Platform Engineering Not Working Out? You’re Doing It Wrong.</a></p><p><a href="https://thenewstack.io/how-to-pave-golden-paths-that-actually-go-somewhere/">How to Pave Golden Paths That Actually Go Somewhere</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="24504990" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/f69dcdba-eb63-4f23-87f4-e510dfe47f5f/audio/a1a90f49-9077-482d-a5c7-ccf0267aac33/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>VMware’s Golden Path</itunes:title>
      <itunes:author>James Watters, Alex Williams</itunes:author>
      <itunes:duration>00:25:31</itunes:duration>
      <itunes:summary>In an era marked by complexity, the golden path is essential for software architects, asserts James Watters, senior director of R&amp;D at VMware Tanzu, Broadcom. This approach, emphasizing fewer application patterns, simplifies life for security personnel, developers, and infrastructure teams. VMware defines the golden path as streamlining software development, crucial in today&apos;s economic climate. Watters highlights this in the Broadcom report: State of Cloud Native App Platforms 2024, noting that 55% of organizations favor this method for its consistency and security. </itunes:summary>
      <itunes:subtitle>In an era marked by complexity, the golden path is essential for software architects, asserts James Watters, senior director of R&amp;D at VMware Tanzu, Broadcom. This approach, emphasizing fewer application patterns, simplifies life for security personnel, developers, and infrastructure teams. VMware defines the golden path as streamlining software development, crucial in today&apos;s economic climate. Watters highlights this in the Broadcom report: State of Cloud Native App Platforms 2024, noting that 55% of organizations favor this method for its consistency and security. </itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, cloud native, tech, broadcom, developer podcast, james watters, the new stack makers, software engineer, platform engineering, vmware tanzu</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1483</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">83d4e561-6674-4cd7-a597-dd6d6419d861</guid>
      <title>Setting Microservices Up for Success: Real-World Advice</title>
      <description><![CDATA[<p>Maintaining and ensuring the success of a microservice-based system can be challenging. Sarah Wells, a seasoned tech consultant with over 20 years of experience, offers valuable insights in her book "Enabling Microservices Success" and a discussion on The New Stack Makers podcast. Drawing from her tenure at the Financial Times (FT), Wells illustrates how transitioning to microservices and adopting DevOps and SRE practices enabled FT to accelerate software releases from 12 annually to over 20,000.</p><p>This transformation required merging IT organizations, investing in automation, and fostering team autonomy. Wells emphasizes that successful microservices adoption depends not only on developer expertise but also on organizational structures. She highlights the importance of continuous delivery and proactive communication, especially during critical periods like major news events. Additionally, she discusses the evolving roles of senior engineers and the need for flexibility in defining architectural responsibilities. Wells advocates for "engineering enablement" over "platform teams" to better support effective service management and evolution.</p><p>Learn more from The New Stack about enabling successful outcomes of microservices:</p><p><a href="https://thenewstack.io/microservices/what-is-microservices-architecture/">What Is Microservices Architecture?</a></p><p><a href="https://thenewstack.io/4-strategies-for-migrating-monolithic-apps-to-microservices/">4 Strategies for Migrating Monolithic Apps to Microservices</a></p><p><a href="https://thenewstack.io/continuous-improvement-metrics-for-scaling-engineering-teams/">Continuous Improvement Metrics for Scaling Engineering Teams</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 1 Aug 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Sarah Wells, Charles Humble)</author>
      <link>https://thenewstack.simplecast.com/episodes/setting-microservices-up-for-success-real-world-advice-oIujHPr0</link>
      <content:encoded><![CDATA[<p>Maintaining and ensuring the success of a microservice-based system can be challenging. Sarah Wells, a seasoned tech consultant with over 20 years of experience, offers valuable insights in her book "Enabling Microservices Success" and a discussion on The New Stack Makers podcast. Drawing from her tenure at the Financial Times (FT), Wells illustrates how transitioning to microservices and adopting DevOps and SRE practices enabled FT to accelerate software releases from 12 annually to over 20,000.</p><p>This transformation required merging IT organizations, investing in automation, and fostering team autonomy. Wells emphasizes that successful microservices adoption depends not only on developer expertise but also on organizational structures. She highlights the importance of continuous delivery and proactive communication, especially during critical periods like major news events. Additionally, she discusses the evolving roles of senior engineers and the need for flexibility in defining architectural responsibilities. Wells advocates for "engineering enablement" over "platform teams" to better support effective service management and evolution.</p><p>Learn more from The New Stack about enabling successful outcomes of microservices:</p><p><a href="https://thenewstack.io/microservices/what-is-microservices-architecture/">What Is Microservices Architecture?</a></p><p><a href="https://thenewstack.io/4-strategies-for-migrating-monolithic-apps-to-microservices/">4 Strategies for Migrating Monolithic Apps to Microservices</a></p><p><a href="https://thenewstack.io/continuous-improvement-metrics-for-scaling-engineering-teams/">Continuous Improvement Metrics for Scaling Engineering Teams</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="37450857" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/d7ab7d7d-c37d-445b-adde-49604351703d/audio/c4f00b53-115c-481c-ab7b-c1ab9f7f924c/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Setting Microservices Up for Success: Real-World Advice</itunes:title>
      <itunes:author>Sarah Wells, Charles Humble</itunes:author>
      <itunes:duration>00:39:00</itunes:duration>
      <itunes:summary>Maintaining and ensuring the success of a microservice-based system can be challenging. Sarah Wells, a seasoned tech consultant with over 20 years of experience, offers valuable insights in her book &quot;Enabling Microservices Success&quot; and a discussion on The New Stack Makers podcast. Drawing from her tenure at the Financial Times (FT), Wells illustrates how transitioning to microservices and adopting DevOps and SRE practices enabled FT to accelerate software releases from 12 annually to over 20,000. </itunes:summary>
      <itunes:subtitle>Maintaining and ensuring the success of a microservice-based system can be challenging. Sarah Wells, a seasoned tech consultant with over 20 years of experience, offers valuable insights in her book &quot;Enabling Microservices Success&quot; and a discussion on The New Stack Makers podcast. Drawing from her tenure at the Financial Times (FT), Wells illustrates how transitioning to microservices and adopting DevOps and SRE practices enabled FT to accelerate software releases from 12 annually to over 20,000. </itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, devops, tech, developer podcast, the new stack makers, software engineer, sarah wells, automation, microservices, charles humble</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1482</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a9d7f6db-c3c2-491b-afb3-63c3ffbf4f75</guid>
      <title>How OpenTofu Happened — and What’s Next?</title>
      <description><![CDATA[<p>In August 2023, the open source community rallied to create OpenTofu, an alternative to Terraform, after HashiCorp, now owned by IBM, adopted a restrictive Business Source License for Terraform. Ohad Maislish, co-founder and CEO of env0, explained on The New Stack Makers how this move sparked the initiative. A few hours after HashiCorp's license change, Maislish secured the domain opentf.org and began developing the new project, eventually named OpenTofu, which was donated to The Linux Foundation to ensure its license couldn't be altered.</p><p>Maislish highlighted the importance of distinguishing between vendor-backed and foundation-backed open source projects to avoid sudden licensing changes. Before coding, the community created a manifesto, gathering significant support and pledges, but received no response from HashiCorp. Consequently, they proceeded with the fork and development of OpenTofu. Despite accusations of intellectual property theft from HashiCorp, OpenTofu gained traction and was adopted by organizations like Oracle. The community continues to prioritize user feedback through GitHub.</p><p>Learn more from The New Stack about OpenTofu: </p><p><a href="https://thenewstack.io/opentofu-vs-hashicorp-takes-center-stage-at-open-source-summit/">OpenTofu vs. HashiCorp Takes Center Stage at Open Source Summit </a></p><p><a href="https://thenewstack.io/opentofu-amiable-to-a-terraform-reconciliation/">OpenTofu Amiable to a Terraform Reconciliation </a></p><p><a href="https://thenewstack.io/opentofu-1-6-general-availability-open-source-infrastructure-as-code/">OpenTofu 1.6 General Availability: Open Source Infrastructure as Code </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></description>
      <pubDate>Thu, 25 Jul 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Ohad Maislish, env0, Chris Pirillo, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-opentofu-happened-and-whats-next-5iA_bs8d</link>
      <content:encoded><![CDATA[<p>In August 2023, the open source community rallied to create OpenTofu, an alternative to Terraform, after HashiCorp, now owned by IBM, adopted a restrictive Business Source License for Terraform. Ohad Maislish, co-founder and CEO of env0, explained on The New Stack Makers how this move sparked the initiative. A few hours after HashiCorp's license change, Maislish secured the domain opentf.org and began developing the new project, eventually named OpenTofu, which was donated to The Linux Foundation to ensure its license couldn't be altered.</p><p>Maislish highlighted the importance of distinguishing between vendor-backed and foundation-backed open source projects to avoid sudden licensing changes. Before coding, the community created a manifesto, gathering significant support and pledges, but received no response from HashiCorp. Consequently, they proceeded with the fork and development of OpenTofu. Despite accusations of intellectual property theft from HashiCorp, OpenTofu gained traction and was adopted by organizations like Oracle. The community continues to prioritize user feedback through GitHub.</p><p>Learn more from The New Stack about OpenTofu: </p><p><a href="https://thenewstack.io/opentofu-vs-hashicorp-takes-center-stage-at-open-source-summit/">OpenTofu vs. HashiCorp Takes Center Stage at Open Source Summit </a></p><p><a href="https://thenewstack.io/opentofu-amiable-to-a-terraform-reconciliation/">OpenTofu Amiable to a Terraform Reconciliation </a></p><p><a href="https://thenewstack.io/opentofu-1-6-general-availability-open-source-infrastructure-as-code/">OpenTofu 1.6 General Availability: Open Source Infrastructure as Code </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></content:encoded>
      <enclosure length="28328062" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/18cde9f8-e86c-4bf9-bf96-4a217622aa8e/audio/321ff918-bc3a-475e-bd4c-e8d2f40742da/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How OpenTofu Happened — and What’s Next?</itunes:title>
      <itunes:author>Ohad Maislish, env0, Chris Pirillo, The New Stack</itunes:author>
      <itunes:duration>00:29:30</itunes:duration>
      <itunes:summary>In August 2023, the open source community rallied to create OpenTofu, an alternative to Terraform, after HashiCorp, now owned by IBM, adopted a restrictive Business Source License for Terraform. Ohad Maislish, co-founder and CEO of env0, explained on The New Stack Makers how this move sparked the initiative. A few hours after HashiCorp&apos;s license change, Maislish secured the domain opentf.org and began developing the new project, eventually named OpenTofu, which was donated to The Linux Foundation to ensure its license couldn&apos;t be altered.</itunes:summary>
      <itunes:subtitle>In August 2023, the open source community rallied to create OpenTofu, an alternative to Terraform, after HashiCorp, now owned by IBM, adopted a restrictive Business Source License for Terraform. Ohad Maislish, co-founder and CEO of env0, explained on The New Stack Makers how this move sparked the initiative. A few hours after HashiCorp&apos;s license change, Maislish secured the domain opentf.org and began developing the new project, eventually named OpenTofu, which was donated to The Linux Foundation to ensure its license couldn&apos;t be altered.</itunes:subtitle>
      <itunes:keywords>open tofu, software developer, tech podcast, the new stack, ohad maislish, chris pirillo, tech, developer podcast, env0, infrastructure as a service, the new stack makers, software engineer, open source</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1473</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c309a5b2-ad08-40d6-9fe2-d91f56727909</guid>
      <title>The Fediverse: What It Is, Why It’s Promising, What’s Next</title>
      <description><![CDATA[<p>In the early days, the internet was a decentralized space created by enthusiasts. However, it has since transformed into a centralized, commerce-driven entity dominated by a few major players. The promise of the fediverse, a decentralized social networking concept, offers a refreshing alternative.</p><p>Evan Prodromou, OpenEarth Foundation's director of open technology, has been advocating for decentralized social networks since 2008, starting with his creation, Identi.ca. Unlike Twitter, Identi.ca was open source and federated, allowing independent networks to interconnect.</p><p>Prodromou, a co-author of ActivityPub—the W3C standard for decentralized networking used by platforms like Mastodon—discusses the evolution of the fediverse on The New Stack Makers podcast. He notes that small social networks dwindled to a few giants, such as Twitter and Facebook, which rarely interconnected. The acquisition of Twitter by Elon Musk disrupted the established norms, prompting users to reconsider their dependence on centralized platforms.</p><p>The fediverse aims to address these issues by allowing users to maintain relationships across different instances, ensuring a smoother transition between networks. 
This decentralization fosters community management and better control over social interactions.</p><p>Check out the full podcast episode to explore how tech giants like Meta are engaging with the fediverse and how to join decentralized social networks.</p><p>Learn more from The New Stack about the fediverse:</p><p><a href="https://thenewstack.io/fediforum-showcases-new-fediverse-apps-and-developer-network/">FediForum Showcases New Fediverse Apps and Developer Network</a></p><p><a href="https://thenewstack.io/one-login-towards-a-single-fediverse-identity-on-activitypub/">One Login: Towards a Single Fediverse Identity on ActivityPub</a></p><p><a href="https://thenewstack.io/web-dev-2024-fediverse-ramps-up-more-ai-less-javascript/">Web Dev 2024: Fediverse Ramps Up, More AI, Less JavaScript</a></p><p>Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/</p>
]]></description>
      <pubDate>Thu, 18 Jul 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Evan Prodromou, Chris Pirillo)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-fediverse-what-it-is-why-its-promising-whats-next-YpdCiuYB</link>
      <content:encoded><![CDATA[<p>In the early days, the internet was a decentralized space created by enthusiasts. However, it has since transformed into a centralized, commerce-driven entity dominated by a few major players. The promise of the fediverse, a decentralized social networking concept, offers a refreshing alternative.</p><p>Evan Prodromou, OpenEarth Foundation's director of open technology, has been advocating for decentralized social networks since 2008, starting with his creation, Identi.ca. Unlike Twitter, Identi.ca was open source and federated, allowing independent networks to interconnect.</p><p>Prodromou, a co-author of ActivityPub—the W3C standard for decentralized networking used by platforms like Mastodon—discusses the evolution of the fediverse on The New Stack Makers podcast. He notes that small social networks dwindled to a few giants, such as Twitter and Facebook, which rarely interconnected. The acquisition of Twitter by Elon Musk disrupted the established norms, prompting users to reconsider their dependence on centralized platforms.</p><p>The fediverse aims to address these issues by allowing users to maintain relationships across different instances, ensuring a smoother transition between networks. 
This decentralization fosters community management and better control over social interactions.</p><p>Check out the full podcast episode to explore how tech giants like Meta are engaging with the fediverse and how to join decentralized social networks.</p><p>Learn more from The New Stack about the fediverse:</p><p><a href="https://thenewstack.io/fediforum-showcases-new-fediverse-apps-and-developer-network/">FediForum Showcases New Fediverse Apps and Developer Network</a></p><p><a href="https://thenewstack.io/one-login-towards-a-single-fediverse-identity-on-activitypub/">One Login: Towards a Single Fediverse Identity on ActivityPub</a></p><p><a href="https://thenewstack.io/web-dev-2024-fediverse-ramps-up-more-ai-less-javascript/">Web Dev 2024: Fediverse Ramps Up, More AI, Less JavaScript</a></p><p>Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/</p>
]]></content:encoded>
      <enclosure length="39014025" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/fc391adf-006d-4579-bb2b-020e0698bf46/audio/4d48fb51-5dc9-4f2b-b9c7-fb42cc8e7d0c/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The Fediverse: What It Is, Why It’s Promising, What’s Next</itunes:title>
      <itunes:author>Evan Prodromou, Chris Pirillo</itunes:author>
      <itunes:duration>00:40:38</itunes:duration>
      <itunes:summary>In the early days, the internet was a decentralized space created by enthusiasts. However, it has since transformed into a centralized, commerce-driven entity dominated by a few major players. The promise of the fediverse, a decentralized social networking concept, offers a refreshing alternative.

Evan Prodromou, OpenEarth Foundation&apos;s director of open technology, has been advocating for decentralized social networks since 2008, starting with his creation, Identi.ca. Unlike Twitter, Identi.ca was open source and federated, allowing independent networks to interconnect.

Prodromou, a co-author of ActivityPub—the W3C standard for decentralized networking used by platforms like Mastodon—discusses the evolution of the fediverse on The New Stack Makers podcast.</itunes:summary>
      <itunes:subtitle>In the early days, the internet was a decentralized space created by enthusiasts. However, it has since transformed into a centralized, commerce-driven entity dominated by a few major players. The promise of the fediverse, a decentralized social networking concept, offers a refreshing alternative.

Evan Prodromou, OpenEarth Foundation&apos;s director of open technology, has been advocating for decentralized social networks since 2008, starting with his creation, Identi.ca. Unlike Twitter, Identi.ca was open source and federated, allowing independent networks to interconnect.

Prodromou, a co-author of ActivityPub—the W3C standard for decentralized networking used by platforms like Mastodon—discusses the evolution of the fediverse on The New Stack Makers podcast.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, fediverse, the new stack, federated social platforms, chris pirillo, tech, evan prodromou, the new stack makers, software engineer, identi.ca</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1481</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">da13a999-364d-427b-a53f-a5c00923285e</guid>
      <title>Why Framework’s ‘Right to Repair’ Ethos Is Gaining Fans</title>
      <description><![CDATA[<p>In a recent episode of The New Stack Makers, recorded at the Open Source Summit North America, Matt Hartley, Linux support lead at Framework, discusses the importance of the "right to repair" movement. This initiative seeks to allow consumers to repair and upgrade their own electronic devices, countering the trend of disposable electronics that contribute to environmental damage. Framework, a company offering modular and customizable laptops, embodies this philosophy by enabling users to replace outdated components easily.</p><p>Hartley, interviewed by Chris Pirillo, highlights how Framework’s approach helps reduce electronic waste, likening obsolete electronics to a form of "technical debt." He shares his personal struggle with old devices, like an ASUS Eee, illustrating the need for repairable technology. Hartley also describes his role in fostering a DIY community, collaborating closely with Fedora Linux maintainers and creating user-friendly support scripts. Framework’s community is actively contributing to the platform, developing new features and hardware integrations.</p><p>The episode underscores the growing momentum of the right to repair movement, advocating for consumer empowerment and environmental sustainability.</p><p>Learn more from The New Stack about repairing and upgrading devices: </p><p><a href="https://thenewstack.io/new-linux-laptops-come-with-right-to-repair-and-more/">New Linux Laptops Come with Right-to-Repair and More</a></p><p><a href="https://thenewstack.io/troubling-tech-trends-the-dark-side-of-ces-2024/">Troubling Tech Trends: The Dark Side of CES 2024</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></description>
      <pubDate>Thu, 11 Jul 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Matt Hartley, Chris Pirillo, Framework)</author>
      <link>https://thenewstack.simplecast.com/episodes/why-frameworks-right-to-repair-ethos-is-gaining-fans-mMjNVKKq</link>
      <content:encoded><![CDATA[<p>In a recent episode of The New Stack Makers, recorded at the Open Source Summit North America, Matt Hartley, Linux support lead at Framework, discusses the importance of the "right to repair" movement. This initiative seeks to allow consumers to repair and upgrade their own electronic devices, countering the trend of disposable electronics that contribute to environmental damage. Framework, a company offering modular and customizable laptops, embodies this philosophy by enabling users to replace outdated components easily.</p><p>Hartley, interviewed by Chris Pirillo, highlights how Framework’s approach helps reduce electronic waste, likening obsolete electronics to a form of "technical debt." He shares his personal struggle with old devices, like an ASUS Eee, illustrating the need for repairable technology. Hartley also describes his role in fostering a DIY community, collaborating closely with Fedora Linux maintainers and creating user-friendly support scripts. Framework’s community is actively contributing to the platform, developing new features and hardware integrations.</p><p>The episode underscores the growing momentum of the right to repair movement, advocating for consumer empowerment and environmental sustainability.</p><p>Learn more from The New Stack about repairing and upgrading devices: </p><p><a href="https://thenewstack.io/new-linux-laptops-come-with-right-to-repair-and-more/">New Linux Laptops Come with Right-to-Repair and More</a></p><p><a href="https://thenewstack.io/troubling-tech-trends-the-dark-side-of-ces-2024/">Troubling Tech Trends: The Dark Side of CES 2024</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></content:encoded>
      <enclosure length="18186283" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/477f7243-86d7-4476-bc0d-dc903b272888/audio/444b0c7e-113e-447a-b64d-a65d106679b0/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Why Framework’s ‘Right to Repair’ Ethos Is Gaining Fans</itunes:title>
      <itunes:author>Matt Hartley, Chris Pirillo, Framework</itunes:author>
      <itunes:duration>00:18:56</itunes:duration>
      <itunes:summary>In a recent episode of The New Stack Makers, recorded at the Open Source Summit North America, Matt Hartley, Linux support lead at Framework, discusses the importance of the &quot;right to repair&quot; movement. This initiative seeks to allow consumers to repair and upgrade their own electronic devices, countering the trend of disposable electronics that contribute to environmental damage. Framework, a company offering modular and customizable laptops, embodies this philosophy by enabling users to replace outdated components easily.</itunes:summary>
      <itunes:subtitle>In a recent episode of The New Stack Makers, recorded at the Open Source Summit North America, Matt Hartley, Linux support lead at Framework, discusses the importance of the &quot;right to repair&quot; movement. This initiative seeks to allow consumers to repair and upgrade their own electronic devices, countering the trend of disposable electronics that contribute to environmental damage. Framework, a company offering modular and customizable laptops, embodies this philosophy by enabling users to replace outdated components easily.</itunes:subtitle>
      <itunes:keywords>matt hartley, framework, software developer, the new stack, sustainability, right to repair, chris pirillo, tech, developer podcast, the new stack makers, software engineer, open source</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1480</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8f97a69c-eb77-4f99-8977-8a98903abce9</guid>
      <title>What’s the Future of Distributed Ledgers?</title>
      <description><![CDATA[<p>Blockchain technology continues to drive innovation despite declining hype, with Distributed Ledgers (DLTs) offering secure, decentralized digital asset transactions. In an On the Road episode of The New Stack Makers recorded at Open Source Summit North America, Andrew Aitken of Hedera and Dr. Leemon Baird of Swirlds Labs discussed DLTs with Alex Williams. </p><p>Baird highlighted the Hashgraph Consensus Algorithm, an efficient, secure distributed consensus mechanism he created, leveraging a hashgraph data structure and gossip protocol for rapid, robust transaction sharing among network nodes. This algorithm, which has been open source under the Apache 2.0 license for nine months, aims to maintain decentralization by involving 32 global organizations in its governance. Aitken emphasized building an ecosystem of DLT contributors, adhering to open source best practices, and developing cross-chain applications and more wallets to enhance exchange capabilities. This collaborative approach seeks to ensure transparency in both governance and software development. For more insights into DLT’s 2.0 era, listen to the full episode.</p><p>Learn more from The New Stack about Distributed Ledgers (DLTs) </p><p><a href="https://thenewstack.io/iota-distributed-ledger-beyond-blockchain-for-supply-chains/">IOTA Distributed Ledger: Beyond Blockchain for Supply Chains </a></p><p><a href="https://thenewstack.io/why-i-changed-my-mind-about-blockchain/">Why I Changed My Mind About Blockchain </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></description>
      <pubDate>Tue, 02 Jul 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Dr. Leemon Baird, Andrew Aitken, Hedera, Swirlds Labs, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/whats-the-future-of-distributed-ledgers-frNCF_uO</link>
      <content:encoded><![CDATA[<p>Blockchain technology continues to drive innovation despite declining hype, with Distributed Ledgers (DLTs) offering secure, decentralized digital asset transactions. In an On the Road episode of The New Stack Makers recorded at Open Source Summit North America, Andrew Aitken of Hedera and Dr. Leemon Baird of Swirlds Labs discussed DLTs with Alex Williams. </p><p>Baird highlighted the Hashgraph Consensus Algorithm, an efficient, secure distributed consensus mechanism he created, leveraging a hashgraph data structure and gossip protocol for rapid, robust transaction sharing among network nodes. This algorithm, which has been open source under the Apache 2.0 license for nine months, aims to maintain decentralization by involving 32 global organizations in its governance. Aitken emphasized building an ecosystem of DLT contributors, adhering to open source best practices, and developing cross-chain applications and more wallets to enhance exchange capabilities. This collaborative approach seeks to ensure transparency in both governance and software development. For more insights into DLT’s 2.0 era, listen to the full episode.</p><p>Learn more from The New Stack about Distributed Ledgers (DLTs) </p><p><a href="https://thenewstack.io/iota-distributed-ledger-beyond-blockchain-for-supply-chains/">IOTA Distributed Ledger: Beyond Blockchain for Supply Chains </a></p><p><a href="https://thenewstack.io/why-i-changed-my-mind-about-blockchain/">Why I Changed My Mind About Blockchain </a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="22614142" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/45996fc4-a1b6-4d22-b419-bd9db9c28428/audio/aaef5183-d8cd-4ff2-bba9-9c026e0506a1/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What’s the Future of Distributed Ledgers?</itunes:title>
      <itunes:author>Dr. Leemon Baird, Andrew Aitken, Hedera, Swirlds Labs, Alex Williams</itunes:author>
      <itunes:duration>00:23:33</itunes:duration>
      <itunes:summary>Blockchain technology continues to drive innovation despite declining hype, with Distributed Ledgers (DLTs) offering secure, decentralized digital asset transactions. In an On the Road episode of The New Stack Makers recorded at Open Source Summit North America, Andrew Aitken of Hedera and Dr. Leemon Baird of Swirlds Labs discussed DLTs with Alex Williams. </itunes:summary>
      <itunes:subtitle>Blockchain technology continues to drive innovation despite declining hype, with Distributed Ledgers (DLTs) offering secure, decentralized digital asset transactions. In an On the Road episode of The New Stack Makers recorded at Open Source Summit North America, Andrew Aitken of Hedera and Dr. Leemon Baird of Swirlds Labs discussed DLTs with Alex Williams. </itunes:subtitle>
      <itunes:keywords>distributed ledgers (dlt), software developer, tech podcast, alex williams, the new stack, tech, developer podcast, software engineer, open source, hedera, hashgraph consensus algorithm, andrew aitken, blockchain, swirlds labs, dr. leemon baird</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1479</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6d238a7f-f65d-4e3d-8ceb-52b7c0717f32</guid>
      <title>Linux xz and the Great Flaws in Open Source</title>
      <description><![CDATA[<p>The Linux xz utils backdoor exploit, discussed in an interview at the Open Source Summit 2024 on The New Stack Makers with John Kjell, director of open source at TestifySec, highlights critical vulnerabilities in the open-source ecosystem. This exploit involved a maintainer of the Linux xz utils project adding malicious code to a new release, discovered by a Microsoft engineer. This breach demonstrates the high trust placed in maintainers and how this trust can be exploited. Kjell explains that the backdoor allowed remote code execution or unauthorized server access through SSH connections.</p><p>The exploit reveals a significant flaw: the human element in open source. Maintainers, often under pressure from company executives to quickly address vulnerabilities and updates, can become targets for social engineering. Attackers built trust within the community by contributing to projects over time, eventually gaining maintainer status and inserting malicious code. This scenario underscores the economic pressures on open source, where maintainers work unpaid and face demands from large organizations, exposing the fragility of the open-source supply chain. Despite these challenges, the community's resilience is also evident in their rapid response to such threats.</p><p>Learn more from The New Stack about Linux xz utils: </p><p><a href="https://thenewstack.io/linux-xz-backdoor-damage-could-be-greater-than-feared/">Linux xz Backdoor Damage Could Be Greater Than Feared</a></p><p><a href="https://thenewstack.io/unzipping-the-xz-backdoor-and-its-lessons-for-open-source/">Unzipping the XZ Backdoor and Its Lessons for Open Source</a></p><p><a href="https://thenewstack.io/the-linux-xz-backdoor-episode-an-open-source-mystery/">The Linux xz Backdoor Episode: An Open Source Mystery</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 27 Jun 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (John Kjell, TestifySec, Chris Pirillo)</author>
      <link>https://thenewstack.simplecast.com/episodes/linux-xz-and-the-great-flaws-in-open-source-VLv3s1Nl</link>
      <content:encoded><![CDATA[<p>The Linux xz utils backdoor exploit, discussed in an interview at the Open Source Summit 2024 on The New Stack Makers with John Kjell, director of open source at TestifySec, highlights critical vulnerabilities in the open-source ecosystem. This exploit involved a maintainer of the Linux xz utils project adding malicious code to a new release, discovered by a Microsoft engineer. This breach demonstrates the high trust placed in maintainers and how this trust can be exploited. Kjell explains that the backdoor allowed remote code execution or unauthorized server access through SSH connections.</p><p>The exploit reveals a significant flaw: the human element in open source. Maintainers, often under pressure from company executives to quickly address vulnerabilities and updates, can become targets for social engineering. Attackers built trust within the community by contributing to projects over time, eventually gaining maintainer status and inserting malicious code. This scenario underscores the economic pressures on open source, where maintainers work unpaid and face demands from large organizations, exposing the fragility of the open-source supply chain. Despite these challenges, the community's resilience is also evident in their rapid response to such threats.</p><p>Learn more from The New Stack about Linux xz utils: </p><p><a href="https://thenewstack.io/linux-xz-backdoor-damage-could-be-greater-than-feared/">Linux xz Backdoor Damage Could Be Greater Than Feared</a></p><p><a href="https://thenewstack.io/unzipping-the-xz-backdoor-and-its-lessons-for-open-source/">Unzipping the XZ Backdoor and Its Lessons for Open Source</a></p><p><a href="https://thenewstack.io/the-linux-xz-backdoor-episode-an-open-source-mystery/">The Linux xz Backdoor Episode: An Open Source Mystery</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="12230364" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/e87c7c9e-5dfd-4d6e-a112-7eef02efebbd/audio/d6ae6fc1-2e5d-40a0-9812-422c8e376a14/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Linux xz and the Great Flaws in Open Source</itunes:title>
      <itunes:author>John Kjell, TestifySec, Chris Pirillo</itunes:author>
      <itunes:duration>00:12:44</itunes:duration>
      <itunes:summary>The Linux xz utils backdoor exploit, discussed in an interview at the Open Source Summit 2024 on The New Stack Makers with John Kjell, director of open source at TestifySec, highlights critical vulnerabilities in the open-source ecosystem. This exploit involved a maintainer of the Linux xz utils project adding malicious code to a new release, discovered by a Microsoft engineer. This breach demonstrates the high trust placed in maintainers and how this trust can be exploited. Kjell explains that the backdoor allowed remote code execution or unauthorized server access through SSH connections.</itunes:summary>
      <itunes:subtitle>The Linux xz utils backdoor exploit, discussed in an interview at the Open Source Summit 2024 on The New Stack Makers with John Kjell, director of open source at TestifySec, highlights critical vulnerabilities in the open-source ecosystem. This exploit involved a maintainer of the Linux xz utils project adding malicious code to a new release, discovered by a Microsoft engineer. This breach demonstrates the high trust placed in maintainers and how this trust can be exploited. Kjell explains that the backdoor allowed remote code execution or unauthorized server access through SSH connections.</itunes:subtitle>
      <itunes:keywords>supply chain management, linux xz utils, tech podcast, testifysec, the new stack, chris pirillo, developer podcast, john kjell, the new stack makers, software engineer, open source, security</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1478</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e682c9a6-4c4a-4131-9a9f-08922b3ec604</guid>
      <title>How Amazon Bedrock Helps Build GenAI Apps in Python</title>
      <description><![CDATA[<p>Suman Debnath, principal developer advocate for machine learning at Amazon Web Services, emphasized the advantages of using Python in machine learning during a New Stack Makers episode recorded at PyCon US. He noted Python's ease of use and its foundational role in the data science ecosystem as key reasons for its popularity. However, Debnath highlighted that building generative AI applications doesn't necessarily require deep data science expertise or Python. </p><p>Amazon Bedrock, AWS’s generative AI framework introduced in September, exemplifies this flexibility by allowing developers to use any programming language via an API-based service. Bedrock supports various languages like Python, C, C++, and Java, enabling developers to leverage large language models without intricate knowledge of machine learning. It also integrates well with open-source libraries such as LangChain and LlamaIndex. Debnath recommends visiting the community AWS platform and GitHub for resources on getting started with Bedrock. The episode includes a demonstration of Bedrock's capabilities and its benefits for Python users.</p><p> </p><p>Learn More from The New Stack on Amazon Bedrock: </p><p><a href="https://thenewstack.io/amazon-bedrock-expands-palette-of-large-language-models/">Amazon Bedrock Expands Palette of Large Language Models </a></p><p><a href="https://thenewstack.io/build-a-qa-application-with-amazon-bedrock-and-amazon-titan/">Build a Q&A Application with Amazon Bedrock and Amazon Titan </a></p><p><a href="https://thenewstack.io/10-key-products-for-building-llm-based-apps-on-aws/">10 Key Products for Building LLM-Based Apps on AWS</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p><p> </p>
]]></description>
      <pubDate>Thu, 20 Jun 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Suman Debnath, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-amazon-bedrock-helps-build-genai-apps-in-python-lkzy1IEg</link>
      <content:encoded><![CDATA[<p>Suman Debnath, principal developer advocate for machine learning at Amazon Web Services, emphasized the advantages of using Python in machine learning during a New Stack Makers episode recorded at PyCon US. He noted Python's ease of use and its foundational role in the data science ecosystem as key reasons for its popularity. However, Debnath highlighted that building generative AI applications doesn't necessarily require deep data science expertise or Python. </p><p>Amazon Bedrock, AWS’s generative AI framework introduced in September, exemplifies this flexibility by allowing developers to use any programming language via an API-based service. Bedrock supports various languages like Python, C, C++, and Java, enabling developers to leverage large language models without intricate knowledge of machine learning. It also integrates well with open-source libraries such as LangChain and LlamaIndex. Debnath recommends visiting the community AWS platform and GitHub for resources on getting started with Bedrock. The episode includes a demonstration of Bedrock's capabilities and its benefits for Python users.</p><p> </p><p>Learn More from The New Stack on Amazon Bedrock: </p><p><a href="https://thenewstack.io/amazon-bedrock-expands-palette-of-large-language-models/">Amazon Bedrock Expands Palette of Large Language Models </a></p><p><a href="https://thenewstack.io/build-a-qa-application-with-amazon-bedrock-and-amazon-titan/">Build a Q&A Application with Amazon Bedrock and Amazon Titan </a></p><p><a href="https://thenewstack.io/10-key-products-for-building-llm-based-apps-on-aws/">10 Key Products for Building LLM-Based Apps on AWS</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p><p> </p>
]]></content:encoded>
      <enclosure length="5792957" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/603bc9ee-b0ea-422b-80e8-eed27cdddf9a/audio/0c84683f-6420-4445-91b0-a827ec6ab841/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Amazon Bedrock Helps Build GenAI Apps in Python</itunes:title>
      <itunes:author>Suman Debnath, Heather Joslyn</itunes:author>
      <itunes:duration>00:06:02</itunes:duration>
      <itunes:summary>Suman Debnath, principal developer advocate for machine learning at Amazon Web Services, emphasized the advantages of using Python in machine learning during a New Stack Makers episode recorded at PyCon US. He noted Python&apos;s ease of use and its foundational role in the data science ecosystem as key reasons for its popularity. However, Debnath highlighted that building generative AI applications doesn&apos;t necessarily require deep data science expertise or Python.</itunes:summary>
      <itunes:subtitle>Suman Debnath, principal developer advocate for machine learning at Amazon Web Services, emphasized the advantages of using Python in machine learning during a New Stack Makers episode recorded at PyCon US. He noted Python&apos;s ease of use and its foundational role in the data science ecosystem as key reasons for its popularity. However, Debnath highlighted that building generative AI applications doesn&apos;t necessarily require deep data science expertise or Python.</itunes:subtitle>
      <itunes:keywords>generative ai, software developer, python, tech podcast, the new stack, amazon web services, tech, developer podcast, the new stack makers, software engineer, suman debnath, aws, amazon bedrock</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1477</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">045a2198-be85-43dd-9427-abd2a9f637a2</guid>
      <title>How to Start Building in Python with Amazon Q Developer</title>
      <description><![CDATA[<p>Nathan Peck, a senior developer advocate for generative AI at Amazon Web Services (AWS), shares his experiences working with Python in a recent episode of The New Stack Makers, recorded at PyCon US. Although not a Python expert, Peck frequently deals with Python scripts in his role, often assisting colleagues in running scripts as cron jobs. He highlights the challenge of being a T-shaped developer, possessing broad knowledge across multiple languages and frameworks but deep expertise in only a few.</p><p>Peck introduces Amazon Q, a generative AI coding assistant launched by AWS in November, and demonstrates its capabilities. The assistant can be integrated into an integrated development environment (IDE) like VS Code. It assists in explaining, refactoring, fixing, and even developing new features for Python codebases. Peck emphasizes Amazon Q's ability to surface best practices from extensive AWS documentation, making it easier for developers to navigate and apply.</p><p>Amazon Q Developer is available for free to users with an AWS Builder ID, without requiring an AWS cloud account. Peck's demo showcases how this tool can simplify and enhance the coding experience, especially for those handling complex or unfamiliar codebases.</p><p>Learn more from The New Stack about Amazon Q and Amazon’s Generative AI strategy:</p><p><a href="https://thenewstack.io/amazon-q-a-genai-to-understand-aws-and-your-business-docs/">Amazon Q, a GenAI to Understand AWS (and Your Business Docs)</a></p><p><a href="https://thenewstack.io/decoding-amazons-generative-ai-strategy/">Decoding Amazon’s Generative AI Strategy</a></p><p><a href="https://thenewstack.io/responsible-ai-at-amazon-web-services-qa-with-diya-wynn/">Responsible AI at Amazon Web Services: Q&A with Diya Wynn</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></description>
      <pubDate>Thu, 13 Jun 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Nathan Peck, AWS, Heather Joslyn, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-to-start-building-in-python-with-amazon-q-developer-77PImD_x</link>
      <content:encoded><![CDATA[<p>Nathan Peck, a senior developer advocate for generative AI at Amazon Web Services (AWS), shares his experiences working with Python in a recent episode of The New Stack Makers, recorded at PyCon US. Although not a Python expert, Peck frequently deals with Python scripts in his role, often assisting colleagues in running scripts as cron jobs. He highlights the challenge of being a T-shaped developer, possessing broad knowledge across multiple languages and frameworks but deep expertise in only a few.</p><p>Peck introduces Amazon Q, a generative AI coding assistant launched by AWS in November, and demonstrates its capabilities. The assistant can be integrated into an integrated development environment (IDE) like VS Code. It assists in explaining, refactoring, fixing, and even developing new features for Python codebases. Peck emphasizes Amazon Q's ability to surface best practices from extensive AWS documentation, making it easier for developers to navigate and apply.</p><p>Amazon Q Developer is available for free to users with an AWS Builder ID, without requiring an AWS cloud account. Peck's demo showcases how this tool can simplify and enhance the coding experience, especially for those handling complex or unfamiliar codebases.</p><p>Learn more from The New Stack about Amazon Q and Amazon’s Generative AI strategy:</p><p><a href="https://thenewstack.io/amazon-q-a-genai-to-understand-aws-and-your-business-docs/">Amazon Q, a GenAI to Understand AWS (and Your Business Docs)</a></p><p><a href="https://thenewstack.io/decoding-amazons-generative-ai-strategy/">Decoding Amazon’s Generative AI Strategy</a></p><p><a href="https://thenewstack.io/responsible-ai-at-amazon-web-services-qa-with-diya-wynn/">Responsible AI at Amazon Web Services: Q&A with Diya Wynn</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></content:encoded>
      <enclosure length="9324777" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/1a2be367-bdb6-45c2-a4ae-660f2d6f8440/audio/59a945d9-ae32-4e66-95a4-d42f7d9046e4/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How to Start Building in Python with Amazon Q Developer</itunes:title>
      <itunes:author>Nathan Peck, AWS, Heather Joslyn, The New Stack</itunes:author>
      <itunes:duration>00:09:42</itunes:duration>
      <itunes:summary>Nathan Peck, a senior developer advocate for generative AI at Amazon Web Services (AWS), shares his experiences working with Python in a recent episode of The New Stack Makers, recorded at PyCon US. Although not a Python expert, Peck frequently deals with Python scripts in his role, often assisting colleagues in running scripts as cron jobs. He highlights the challenge of being a T-shaped developer, possessing broad knowledge across multiple languages and frameworks but deep expertise in only a few.</itunes:summary>
      <itunes:subtitle>Nathan Peck, a senior developer advocate for generative AI at Amazon Web Services (AWS), shares his experiences working with Python in a recent episode of The New Stack Makers, recorded at PyCon US. Although not a Python expert, Peck frequently deals with Python scripts in his role, often assisting colleagues in running scripts as cron jobs. He highlights the challenge of being a T-shaped developer, possessing broad knowledge across multiple languages and frameworks but deep expertise in only a few.</itunes:subtitle>
      <itunes:keywords>generative ai, software developer, python, tech podcast, the new stack, amazon q, tech, developer podcast, the new stack makers, software engineer, pycon, aws, aws builder id, amazon q developer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1476</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">0b147b43-7cc7-4ea9-bd80-2feca7ac490d</guid>
      <title>Who’s Keeping the Python Ecosystem Safe?</title>
      <description><![CDATA[<p>Mike Fiedler, a PyPI safety and security engineer at the Python Software Foundation, prefers the title “code gardener,” reflecting his role in maintaining and securing open source projects. Recorded at PyCon US, Fiedler explains his task of “pulling the weeds” in code—handling unglamorous but crucial aspects of open source contributions. Since August, funded by Amazon Web Services, Fiedler has focused on enhancing the security of the Python Package Index (PyPI). His efforts include ensuring that both packages and the pipeline are secure, emphasizing the importance of vetting third-party modules before deployment.</p><p>One of Fiedler’s significant initiatives was enforcing mandatory two-factor authentication (2FA) for all PyPI user accounts by January 1, following a community awareness campaign. This transition was smooth, thanks to proactive outreach. Additionally, the foundation collaborates with security researchers and the public to report and address malicious packages.</p><p>In late 2023, a security audit by Trail of Bits, funded by the Open Technology Fund, identified and quickly resolved medium-severity vulnerabilities, increasing PyPI's overall security. More details on Fiedler's work are available in the full interview video.</p><p>Learn more from The New Stack about PyPI:</p><p><a href="https://thenewstack.io/pypi-strives-to-pull-itself-out-of-trouble/">PyPI Strives to Pull Itself Out of Trouble</a></p><p><a href="https://thenewstack.io/how-python-is-evolving/">How Python Is Evolving</a></p><p><a href="https://thenewstack.io/poisoned-lolip0p-pypi-packages/">Poisoned Lolip0p PyPI Packages</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Thu, 06 Jun 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Mike Fiedler, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/whos-keeping-the-python-ecosystem-safe-W7b5_5zV</link>
      <content:encoded><![CDATA[<p>Mike Fiedler, a PyPI safety and security engineer at the Python Software Foundation, prefers the title “code gardener,” reflecting his role in maintaining and securing open source projects. Recorded at PyCon US, Fiedler explains his task of “pulling the weeds” in code—handling unglamorous but crucial aspects of open source contributions. Since August, funded by Amazon Web Services, Fiedler has focused on enhancing the security of the Python Package Index (PyPI). His efforts include ensuring that both packages and the pipeline are secure, emphasizing the importance of vetting third-party modules before deployment.</p><p>One of Fiedler’s significant initiatives was enforcing mandatory two-factor authentication (2FA) for all PyPI user accounts by January 1, following a community awareness campaign. This transition was smooth, thanks to proactive outreach. Additionally, the foundation collaborates with security researchers and the public to report and address malicious packages.</p><p>In late 2023, a security audit by Trail of Bits, funded by the Open Technology Fund, identified and quickly resolved medium-severity vulnerabilities, increasing PyPI's overall security. More details on Fiedler's work are available in the full interview video.</p><p>Learn more from The New Stack about PyPI:</p><p><a href="https://thenewstack.io/pypi-strives-to-pull-itself-out-of-trouble/">PyPI Strives to Pull Itself Out of Trouble</a></p><p><a href="https://thenewstack.io/how-python-is-evolving/">How Python Is Evolving</a></p><p><a href="https://thenewstack.io/poisoned-lolip0p-pypi-packages/">Poisoned Lolip0p PyPI Packages</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="17428586" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/a3b54c54-66d0-4523-bc1a-6262c24da771/audio/4c67b979-95dc-492d-9aee-a25e723174ca/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Who’s Keeping the Python Ecosystem Safe?</itunes:title>
      <itunes:author>Mike Fiedler, Heather Joslyn</itunes:author>
      <itunes:duration>00:18:09</itunes:duration>
      <itunes:summary>Mike Fiedler, a PyPI safety and security engineer at the Python Software Foundation, prefers the title “code gardener,” reflecting his role in maintaining and securing open source projects. Recorded at PyCon US in May, Fiedler explains his task of “pulling the weeds” in code—handling unglamorous but crucial aspects of open source contributions. Since August, funded by Amazon Web Services, Fiedler has focused on enhancing the security of the Python Package Index (PyPI). His efforts include ensuring that both packages and the pipeline are secure, emphasizing the importance of vetting third-party modules before deployment.</itunes:summary>
      <itunes:subtitle>Mike Fiedler, a PyPI safety and security engineer at the Python Software Foundation, prefers the title “code gardener,” reflecting his role in maintaining and securing open source projects. Recorded at PyCon US in May, Fiedler explains his task of “pulling the weeds” in code—handling unglamorous but crucial aspects of open source contributions. Since August, funded by Amazon Web Services, Fiedler has focused on enhancing the security of the Python Package Index (PyPI). His efforts include ensuring that both packages and the pipeline are secure, emphasizing the importance of vetting third-party modules before deployment.</itunes:subtitle>
      <itunes:keywords>mike fiedler, python, tech podcast, the new stack, heather joslyn, pypi, tech, the new stack makers, software engineer, security</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1475</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1a068202-de78-4b5c-aaed-f830f14b48bb</guid>
      <title>How Training Data Differentiates Falcon, the LLM from the UAE</title>
      <description><![CDATA[<p>The name "Falcon" for the UAE’s large language model (LLM) symbolizes the national bird's qualities of courage and perseverance, reflecting the vision of the Technology Innovation Institute (TII) in Abu Dhabi. TII, launched in 2020, addresses AI’s rapid advancements and unintended consequences by fostering an open-source approach to enhance community understanding and control of AI. In this episode of The New Stack Makers, Dr. Hakim Hacid, Executive Director and Acting Chief Researcher at the Technology Innovation Institute, emphasized the importance of perseverance and innovation in overcoming challenges. Falcon gained attention for being the first truly open model with capabilities matching many closed-source models, opening new possibilities for practitioners and industry. </p><p>Last June, Falcon introduced a 40-billion-parameter model, outperforming the LLaMA-65B, with smaller models enabling local inference without the cloud. The latest 180-billion-parameter model, trained on 3.5 trillion tokens, illustrates Falcon’s commitment to quality and efficiency over sheer size. Falcon’s distinctiveness lies in its data quality, utilizing over 80% RefinedWeb data, based on CommonCrawl, which ensures cleaner and deduplicated data, resulting in high-quality outcomes. 
This data-centric approach, combined with powerful computational resources, sets Falcon apart in the AI landscape.</p><p> </p><p>Learn more from The New Stack about Open Source AI: </p><p><a href="https://thenewstack.io/open-source-initiative-hits-the-road-to-define-open-source-ai/">Open Source Initiative Hits the Road to Define Open Source AI </a></p><p><a href="https://thenewstack.io/linus-torvalds-on-security-ai-open-source-and-trust/"> Linus Torvalds on Security, AI, Open Source and Trust</a></p><p><a href="https://thenewstack.io/transparency-and-community-an-open-source-vision-for-ai/">Transparency and Community: An Open Source Vision for AI</a> </p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></description>
      <pubDate>Thu, 30 May 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Dr. Hakim Hacid, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-training-data-differentiates-falcon-the-llm-from-the-uae-zqDmvyOu</link>
      <content:encoded><![CDATA[<p>The name "Falcon" for the UAE’s large language model (LLM) symbolizes the national bird's qualities of courage and perseverance, reflecting the vision of the Technology Innovation Institute (TII) in Abu Dhabi. TII, launched in 2020, addresses AI’s rapid advancements and unintended consequences by fostering an open-source approach to enhance community understanding and control of AI. In this episode of The New Stack Makers, Dr. Hakim Hacid, Executive Director and Acting Chief Researcher at the Technology Innovation Institute, emphasized the importance of perseverance and innovation in overcoming challenges. Falcon gained attention for being the first truly open model with capabilities matching many closed-source models, opening new possibilities for practitioners and industry. </p><p>Last June, Falcon introduced a 40-billion-parameter model, outperforming the LLaMA-65B, with smaller models enabling local inference without the cloud. The latest 180-billion-parameter model, trained on 3.5 trillion tokens, illustrates Falcon’s commitment to quality and efficiency over sheer size. Falcon’s distinctiveness lies in its data quality, utilizing over 80% RefinedWeb data, based on CommonCrawl, which ensures cleaner and deduplicated data, resulting in high-quality outcomes. 
This data-centric approach, combined with powerful computational resources, sets Falcon apart in the AI landscape.</p><p> </p><p>Learn more from The New Stack about Open Source AI: </p><p><a href="https://thenewstack.io/open-source-initiative-hits-the-road-to-define-open-source-ai/">Open Source Initiative Hits the Road to Define Open Source AI </a></p><p><a href="https://thenewstack.io/linus-torvalds-on-security-ai-open-source-and-trust/"> Linus Torvalds on Security, AI, Open Source and Trust</a></p><p><a href="https://thenewstack.io/transparency-and-community-an-open-source-vision-for-ai/">Transparency and Community: An Open Source Vision for AI</a> </p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a> </p>
]]></content:encoded>
      <enclosure length="22528043" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/533cfe64-9c32-4eb9-8cbe-4f87123b4748/audio/5da2ccb3-63cc-4a87-959b-8b6a48245499/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Training Data Differentiates Falcon, the LLM from the UAE</itunes:title>
      <itunes:author>Dr. Hakim Hacid, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:23:27</itunes:duration>
      <itunes:summary>The name &quot;Falcon&quot; for the UAE’s large language model (LLM) symbolizes the national bird&apos;s qualities of courage and perseverance, reflecting the vision of the Technology Innovation Institute (TII) in Abu Dhabi. TII, launched in 2020, addresses AI’s rapid advancements and unintended consequences by fostering an open-source approach to enhance community understanding and control of AI. In this episode of The New Stack Makers, Dr. Hakim Hacid, Executive Director and Acting Chief Researcher at the Technology Innovation Institute, emphasized the importance of perseverance and innovation in overcoming challenges. Falcon gained attention for being the first truly open model with capabilities matching many closed-source models, opening new possibilities for practitioners and industry. </itunes:summary>
      <itunes:subtitle>The name &quot;Falcon&quot; for the UAE’s large language model (LLM) symbolizes the national bird&apos;s qualities of courage and perseverance, reflecting the vision of the Technology Innovation Institute (TII) in Abu Dhabi. TII, launched in 2020, addresses AI’s rapid advancements and unintended consequences by fostering an open-source approach to enhance community understanding and control of AI. In this episode of The New Stack Makers, Dr. Hakim Hacid, Executive Director and Acting Chief Researcher at the Technology Innovation Institute, emphasized the importance of perseverance and innovation in overcoming challenges. Falcon gained attention for being the first truly open model with capabilities matching many closed-source models, opening new possibilities for practitioners and industry. </itunes:subtitle>
      <itunes:keywords>software developer, open source ai, tech podcast, the new stack, dr. hakim hacid, tech, developer podcast, the new stack makers, software engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1474</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">40018c89-9ef3-4765-b874-61e0ef8553a4</guid>
      <title>Out with C and C++, In with Memory Safety</title>
      <description><![CDATA[<p>Crash-level bugs continue to pose a significant challenge due to the lack of memory safety in programming languages, an issue persisting since the punch card era. This enduring problem, described as "the Joker to the Batman" by Anil Dash, VP of developer experience at Fastly, is highlighted in a recent episode of The New Stack Makers. The White House has emphasized memory safety, advocating for the adoption of memory-safe programming languages and better software measurability. The Office of the National Cyber Director (ONCD) noted that languages like C and C++ lack memory safety traits and are prevalent in critical systems. They recommend using memory-safe languages, such as Java, C#, and Rust, to develop secure software. Memory safety is particularly crucial for the US government due to the high stakes, especially in space exploration, where reliability standards are exceptionally stringent. Dash underscores the importance of resilience and predictability in missions that may outlast their creators, necessitating rigorous memory safety practices.</p><p>Learn more from The New Stack about Memory Safety:</p><p><a href="https://thenewstack.io/white-house-warns-against-using-memory-unsafe-languages/">White House Warns Against Using Memory-Unsafe Languages </a></p><p><a href="https://thenewstack.io/can-c-be-saved-bjarne-stroustrup-on-ensuring-memory-safety/">Can C++ Be Saved? Bjarne Stroustrup on Ensuring Memory Safety</a></p><p><a href="https://thenewstack.io/bjarne-stroustrups-plan-for-bringing-safety-to-c/">Bjarne Stroustrup's Plan for Bringing Safety to C++</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></description>
      <pubDate>Wed, 22 May 2024 18:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Anil Dash, Fastly, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/out-with-c-and-c-in-with-memory-safety-Hqbpz3W5</link>
      <content:encoded><![CDATA[<p>Crash-level bugs continue to pose a significant challenge due to the lack of memory safety in programming languages, an issue persisting since the punch card era. This enduring problem, described as "the Joker to the Batman" by Anil Dash, VP of developer experience at Fastly, is highlighted in a recent episode of The New Stack Makers. The White House has emphasized memory safety, advocating for the adoption of memory-safe programming languages and better software measurability. The Office of the National Cyber Director (ONCD) noted that languages like C and C++ lack memory safety traits and are prevalent in critical systems. They recommend using memory-safe languages, such as Java, C#, and Rust, to develop secure software. Memory safety is particularly crucial for the US government due to the high stakes, especially in space exploration, where reliability standards are exceptionally stringent. Dash underscores the importance of resilience and predictability in missions that may outlast their creators, necessitating rigorous memory safety practices.</p><p>Learn more from The New Stack about Memory Safety:</p><p><a href="https://thenewstack.io/white-house-warns-against-using-memory-unsafe-languages/">White House Warns Against Using Memory-Unsafe Languages </a></p><p><a href="https://thenewstack.io/can-c-be-saved-bjarne-stroustrup-on-ensuring-memory-safety/">Can C++ Be Saved? Bjarne Stroustrup on Ensuring Memory Safety</a></p><p><a href="https://thenewstack.io/bjarne-stroustrups-plan-for-bringing-safety-to-c/">Bjarne Stroustrup's Plan for Bringing Safety to C++</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game. </a></p>
]]></content:encoded>
      <enclosure length="34870796" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/8058f561-78b3-477d-8a83-799e7a03fb00/audio/ba7b6128-a718-45aa-b3c0-7e89b4d01c4c/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Out with C and C++, In with Memory Safety</itunes:title>
      <itunes:author>Anil Dash, Fastly, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:36:19</itunes:duration>
      <itunes:summary>Crash-level bugs continue to pose a significant challenge due to the lack of memory safety in programming languages, an issue persisting since the punch card era. This enduring problem, described as &quot;the Joker to the Batman&quot; by Anil Dash, VP of developer experience at Fastly, is highlighted in a recent episode of The New Stack Makers.</itunes:summary>
      <itunes:subtitle>Crash-level bugs continue to pose a significant challenge due to the lack of memory safety in programming languages, an issue persisting since the punch card era. This enduring problem, described as &quot;the Joker to the Batman&quot; by Anil Dash, VP of developer experience at Fastly, is highlighted in a recent episode of The New Stack Makers.</itunes:subtitle>
      <itunes:keywords>memory safety, software developer, tech podcast, governance, fastly, c++, developer podcast, the new stack makers, software engineer, anil dash</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1471</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b6f9961f-da10-430c-9932-a16c3a66efc8</guid>
      <title>How Open Source and Time Series Data Fit Together</title>
      <description><![CDATA[<p>In the push to integrate data into development, time series databases have gained significant importance. These databases capture time-stamped data from servers and sensors, enabling the collection and storage of valuable information. InfluxDB, a leading open-source time series database technology by InfluxData, has partnered with Amazon Web Services (AWS) to offer a managed open-source service for time series databases. </p><p>Brad Bebee, General Manager of Amazon Neptune and Amazon Timestream, highlighted the challenges faced by customers managing open-source Influx database instances, despite appreciating its API and performance. To address this, AWS initiated a private beta offering a managed service tailored to customer needs. Paul Dix, Co-founder and CTO of InfluxData, joined Bebee and highlighted InfluxDB's prized utility in tracking measurements, metrics, and sensor data in real time. </p><p>AWS's Timestream complements this by providing managed time series database services, including Timestream for LiveAnalytics and Timestream for InfluxDB. Bebee emphasized the growing relevance of time series data and customers' preference for managed open-source databases, aligning with AWS's strategy of offering such services. This partnership aims to simplify database management and enhance performance for customers utilizing time series databases.</p><p>Learn more from The New Stack about time series databases:</p><p><a href="https://thenewstack.io/what-are-time-series-databases-and-why-do-you-need-them/">What Are Time Series Databases, and Why Do You Need Them?</a></p><p><a href="https://thenewstack.io/amazon-timestream-managed-influxdb-for-time-series-data/">Amazon Timestream: Managed InfluxDB for Time Series Data</a></p><p><a href="https://thenewstack.io/install-the-influxdb-time-series-database-on-ubuntu-server-22-04/">Install the InfluxDB Time-Series Database on Ubuntu Server 22.04</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
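As a rough illustration of the workload described above (a minimal in-memory sketch in Python; the function names are invented for this example and are not InfluxDB's or Timestream's API), a time series store keeps timestamped measurements and answers time-range queries:

```python
from datetime import datetime, timedelta

# Minimal in-memory sketch of what a time series database stores:
# timestamped measurements, queried by time range. All names here
# are illustrative, not a real client library.
points = []

def write(measurement, value, ts):
    points.append((ts, measurement, value))

def query(measurement, start, end):
    # Return (timestamp, value) pairs in [start, end), time-ordered.
    return [(ts, v) for ts, m, v in sorted(points)
            if m == measurement and start <= ts < end]

t0 = datetime(2024, 5, 16, 12, 0, 0)
for i in range(5):
    write("cpu_load", 0.2 + 0.1 * i, t0 + timedelta(minutes=i))

recent = query("cpu_load", t0 + timedelta(minutes=2), t0 + timedelta(minutes=5))
print(len(recent))  # 3 points in the last three minutes of the range
```

A managed service adds durability, retention policies, and indexing on top of this basic model, but the write-by-timestamp, query-by-range shape is the same.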
]]></description>
      <pubDate>Thu, 16 May 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Paul Dix, Bradley Bebee, InfluxData, Alex Williams, Amazon Web Services)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-open-source-and-time-series-data-fit-together-LDTFT3CT</link>
      <content:encoded><![CDATA[<p>In the push to integrate data into development, time series databases have gained significant importance. These databases capture time-stamped data from servers and sensors, enabling the collection and storage of valuable information. InfluxDB, a leading open-source time series database technology by InfluxData, has partnered with Amazon Web Services (AWS) to offer a managed open-source service for time series databases. </p><p>Brad Bebee, General Manager of Amazon Neptune and Amazon Timestream, highlighted the challenges faced by customers managing open-source Influx database instances, despite appreciating its API and performance. To address this, AWS initiated a private beta offering a managed service tailored to customer needs. Paul Dix, Co-founder and CTO of InfluxData, joined Bebee and highlighted InfluxDB's prized utility in tracking measurements, metrics, and sensor data in real time. </p><p>AWS's Timestream complements this by providing managed time series database services, including Timestream for LiveAnalytics and Timestream for InfluxDB. Bebee emphasized the growing relevance of time series data and customers' preference for managed open-source databases, aligning with AWS's strategy of offering such services. This partnership aims to simplify database management and enhance performance for customers utilizing time series databases.</p><p>Learn more from The New Stack about time series databases:</p><p><a href="https://thenewstack.io/what-are-time-series-databases-and-why-do-you-need-them/">What Are Time Series Databases, and Why Do You Need Them?</a></p><p><a href="https://thenewstack.io/amazon-timestream-managed-influxdb-for-time-series-data/">Amazon Timestream: Managed InfluxDB for Time Series Data</a></p><p><a href="https://thenewstack.io/install-the-influxdb-time-series-database-on-ubuntu-server-22-04/">Install the InfluxDB Time-Series Database on Ubuntu Server 22.04</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="20378968" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/dc7507db-1ce1-432a-a5d1-c6b90a4ebe50/audio/fcda5e6b-3b97-4b7e-98ca-4c9c8ce0b73b/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Open Source and Time Series Data Fit Together</itunes:title>
      <itunes:author>Paul Dix, Bradley Bebee, InfluxData, Alex Williams, Amazon Web Services</itunes:author>
      <itunes:duration>00:21:13</itunes:duration>
      <itunes:summary>In the push to integrate data into development, time series databases have gained significant importance. These databases capture time-stamped data from servers and sensors, enabling the collection and storage of valuable information. InfluxDB, a leading open-source time series database technology by InfluxData, has partnered with Amazon Web Services (AWS) to offer a managed open-source service for time series databases.

Amazon&apos;s Brad Bebee highlighted the challenges faced by customers managing open-source Influx database instances, despite appreciating its API and performance. To address this, AWS initiated a private beta offering a managed service tailored to customer needs. InfluxDB, founded in 2013, is prized for its utility in tracking measurements, metrics, and sensor data in real-time.</itunes:summary>
      <itunes:subtitle>In the push to integrate data into development, time series databases have gained significant importance. These databases capture time-stamped data from servers and sensors, enabling the collection and storage of valuable information. InfluxDB, a leading open-source time series database technology by InfluxData, has partnered with Amazon Web Services (AWS) to offer a managed open-source service for time series databases.

Amazon&apos;s Brad Bebee highlighted the challenges faced by customers managing open-source Influx database instances, despite appreciating its API and performance. To address this, AWS initiated a private beta offering a managed service tailored to customer needs. InfluxDB, founded in 2013, is prized for its utility in tracking measurements, metrics, and sensor data in real-time.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, databases, paul dix, amazon web services, tech, time series databases, the new stack makers, bradley bebee, influxdata</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1472</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2361a1f5-8ed6-4fd8-b424-abddffdaefd6</guid>
      <title>Postgres is Now a Vector Database, Too</title>
      <description><![CDATA[<p>Amazon Web Services (AWS) offers pgvector, an open-source extension that brings generative AI and vector capabilities to PostgreSQL databases. Sirish Chandrasekaran, General Manager of Amazon Relational Database Services, explained at Open Source Summit 2024 in Seattle that pgvector allows users to store vector types in Postgres and perform similarity searches, a key feature for generative AI applications. </p><p>The extension, developed by Andrew Kane and offered by AWS in services like Aurora and RDS, originally used an indexing scheme called IVFFlat but has since adopted Hierarchical Navigable Small World (HNSW) for improved query performance. </p><p>HNSW offers a graph-based approach, enhancing the ability to find nearest neighbors efficiently, which is crucial for generative AI tasks. AWS emphasizes customer feedback and continuous innovation in the rapidly evolving field of generative AI, aiming to stay responsive and adaptive to customer needs.</p><p>Learn more from The New Stack about vector databases:</p><p><a href="https://thenewstack.io/top-5-vector-database-solutions-for-your-ai-project/">Top 5 Vector Database Solutions for Your AI Project</a></p><p><a href="https://thenewstack.io/vector-databases-are-having-a-moment-a-chat-with-pinecone/">Vector Databases Are Having a Moment – A Chat with Pinecone</a></p><p><a href="https://thenewstack.io/why-vector-size-matters/">Why Vector Size Matters</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
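The similarity search described above can be sketched in plain Python (no database; the vectors and document names are invented for this example, and it only mirrors the cosine-distance ordering that pgvector exposes in SQL, not its API):

```python
import math

# Illustrative sketch of nearest-neighbor search by cosine distance,
# the kind of ordering a vector-enabled database performs over stored
# embeddings. Vectors and names below are made up for the example.
def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

embeddings = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}
query = [1.0, 0.05, 0.0]

# Conceptually "ORDER BY distance LIMIT 1", done by hand:
nearest = min(embeddings, key=lambda k: cosine_distance(embeddings[k], query))
print(nearest)  # doc_a
```

Indexes such as IVFFlat and HNSW exist to avoid this brute-force scan over every stored vector while returning (approximately) the same nearest neighbors.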
]]></description>
      <pubDate>Thu, 9 May 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Sirish Chandrasekaran, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/postgres-is-now-a-vector-database-too-nWn2lj6X</link>
      <content:encoded><![CDATA[<p>Amazon Web Services (AWS) offers pgvector, an open-source extension that brings generative AI and vector capabilities to PostgreSQL databases. Sirish Chandrasekaran, General Manager of Amazon Relational Database Services, explained at Open Source Summit 2024 in Seattle that pgvector allows users to store vector types in Postgres and perform similarity searches, a key feature for generative AI applications. </p><p>The extension, developed by Andrew Kane and offered by AWS in services like Aurora and RDS, originally used an indexing scheme called IVFFlat but has since adopted Hierarchical Navigable Small World (HNSW) for improved query performance. </p><p>HNSW offers a graph-based approach, enhancing the ability to find nearest neighbors efficiently, which is crucial for generative AI tasks. AWS emphasizes customer feedback and continuous innovation in the rapidly evolving field of generative AI, aiming to stay responsive and adaptive to customer needs.</p><p>Learn more from The New Stack about vector databases:</p><p><a href="https://thenewstack.io/top-5-vector-database-solutions-for-your-ai-project/">Top 5 Vector Database Solutions for Your AI Project</a></p><p><a href="https://thenewstack.io/vector-databases-are-having-a-moment-a-chat-with-pinecone/">Vector Databases Are Having a Moment – A Chat with Pinecone</a></p><p><a href="https://thenewstack.io/why-vector-size-matters/">Why Vector Size Matters</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="17218771" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/c8a2c48b-7493-44a7-9f72-dcb122c0a184/audio/92635c8d-c857-41e5-a2c0-e1be7e5b7155/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Postgres is Now a Vector Database, Too</itunes:title>
      <itunes:author>Sirish Chandrasekaran, Alex Williams</itunes:author>
      <itunes:duration>00:17:56</itunes:duration>
      <itunes:summary>Amazon Web Services (AWS) offers pgvector, an open-source extension that brings generative AI and vector capabilities to PostgreSQL databases. Sirish Chandrasekaran, General Manager of Amazon Relational Database Services, explained at Open Source Summit 2024 in Seattle that pgvector allows users to store vector types in Postgres and perform similarity searches, a key feature for generative AI applications.</itunes:summary>
      <itunes:subtitle>Amazon Web Services (AWS) offers pgvector, an open-source extension that brings generative AI and vector capabilities to PostgreSQL databases. Sirish Chandrasekaran, General Manager of Amazon Relational Database Services, explained at Open Source Summit 2024 in Seattle that pgvector allows users to store vector types in Postgres and perform similarity searches, a key feature for generative AI applications.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, postgres, developer podcast, sirish chandrasekaran, vector database, the new stack makers, software engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1470</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">41dc0505-b4cd-48d0-96de-258cb7fdf0a3</guid>
      <title>Valkey: A Redis Fork with a Future</title>
      <description><![CDATA[<p>Valkey, a Redis fork supported by the Linux Foundation, challenges Redis' new license. In this episode, Madelyn Olson, a lead contributor to the Valkey project and former Redis core contributor, along with Ping Xie, Staff Software Engineer at Google, and Dmitry Polyakovsky, Consulting Member of Technical Staff at Oracle, highlight concerns about the shift to a more restrictive license at Open Source Summit 2024 in Seattle. </p><p>Despite Redis' free license for end users, many contributors may not support it. Valkey, with significant industry backing, prioritizes continuity and a smooth transition for Redis users. AWS, along with Google and Oracle maintainers, emphasizes the importance of open, permissive licenses for large tech companies. Valkey plans incremental updates and module development in Rust to enhance functionality and attract more engineers. The focus remains on compatibility, continuity, and consolidating client behaviors for a robust ecosystem. </p><p>Learn more from The New Stack about the Valkey project and changes to open source licensing:</p><p><a href="https://thenewstack.io/linux-foundation-forks-the-open-source-redis-as-valkey/">Linux Foundation Backs 'Valkey' Open Source Fork of Redis</a></p><p><a href="https://thenewstack.io/redis-pulls-back-on-open-source-licensing-citing-stingy-cloud-services/">Redis Pulls Back on Open Source Licensing, Citing Stingy Cloud Services</a></p><p><a href="https://thenewstack.io/hashicorp-abandons-open-source-for-business-source-license/">HashiCorp's Licensing Change Is Only the Latest Challenge to Open Source</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></description>
      <pubDate>Thu, 2 May 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Madelyn Olson, Dmitry Polyakovsky, Ping Xie, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/valkey-a-redis-fork-with-a-future-KtKc56_0</link>
      <content:encoded><![CDATA[<p>Valkey, a Redis fork supported by the Linux Foundation, challenges Redis' new license. In this episode, Madelyn Olson, a lead contributor to the Valkey project and former Redis core contributor, along with Ping Xie, Staff Software Engineer at Google, and Dmitry Polyakovsky, Consulting Member of Technical Staff at Oracle, highlight concerns about the shift to a more restrictive license at Open Source Summit 2024 in Seattle. </p><p>Despite Redis' free license for end users, many contributors may not support it. Valkey, with significant industry backing, prioritizes continuity and a smooth transition for Redis users. AWS, along with Google and Oracle maintainers, emphasizes the importance of open, permissive licenses for large tech companies. Valkey plans incremental updates and module development in Rust to enhance functionality and attract more engineers. The focus remains on compatibility, continuity, and consolidating client behaviors for a robust ecosystem. </p><p>Learn more from The New Stack about the Valkey project and changes to open source licensing:</p><p><a href="https://thenewstack.io/linux-foundation-forks-the-open-source-redis-as-valkey/">Linux Foundation Backs 'Valkey' Open Source Fork of Redis</a></p><p><a href="https://thenewstack.io/redis-pulls-back-on-open-source-licensing-citing-stingy-cloud-services/">Redis Pulls Back on Open Source Licensing, Citing Stingy Cloud Services</a></p><p><a href="https://thenewstack.io/hashicorp-abandons-open-source-for-business-source-license/">HashiCorp's Licensing Change Is Only the Latest Challenge to Open Source</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="16924110" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/3b1a20b2-9d03-4354-84e8-0670abdbb398/audio/2af9e758-9134-4420-8693-586f1607b3d3/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Valkey: A Redis Fork with a Future</itunes:title>
      <itunes:author>Madelyn Olson, Dmitry Polyakovsky, Ping Xie, Alex Williams</itunes:author>
      <itunes:duration>00:17:37</itunes:duration>
      <itunes:summary>Valkey, a Redis fork supported by the Linux Foundation, challenges Redis&apos; new license. In this episode, Madelyn Olson, a lead contributor to the Valkey project and former Redis core contributor, along with Ping Xie, Staff Software Engineer at Google, and Dmitry Polyakovsky, Consulting Member of Technical Staff at Oracle, highlight concerns about the shift to a more restrictive license at Open Source Summit 2024 in Seattle.</itunes:summary>
      <itunes:subtitle>Valkey, a Redis fork supported by the Linux Foundation, challenges Redis&apos; new license. In this episode, Madelyn Olson, a lead contributor to the Valkey project and former Redis core contributor, along with Ping Xie, Staff Software Engineer at Google, and Dmitry Polyakovsky, Consulting Member of Technical Staff at Oracle, highlight concerns about the shift to a more restrictive license at Open Source Summit 2024 in Seattle.</itunes:subtitle>
      <itunes:keywords>alex williams, linux, ping xie, dmitry polyakovsky, valkey project, madelyn olson, redis</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1469</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b8da1089-022c-46e3-ac00-978d628d5e4e</guid>
      <title>Kubernetes Gets Back to Scaling with Virtual Clusters</title>
      <description><![CDATA[<p>A virtual cluster, described by Loft Labs CEO Lukas Gentele at KubeCon + CloudNativeCon Paris, is a Kubernetes control plane running inside a container within another Kubernetes cluster. In this New Stack Makers episode, Gentele explained that this approach eliminates the need for numerous separate control planes, allowing VMs to run in lightweight, quickly deployable containers. Loft Labs' open-sourced vcluster technology enables virtual clusters to spin up in about six seconds, significantly faster than traditional Kubernetes clusters that can take over 30 minutes to start in services like Amazon EKS or Google GKE.</p><p>The integration of vCluster into Rancher at KubeCon Paris enables users to manage virtual clusters alongside real clusters seamlessly. This innovation addresses challenges faced by companies managing multiple applications and clusters, advocating for a multi-tenant cluster approach for improved sharing and security, contrary to the trend of isolated single-tenant clusters that emerged due to complexities in cluster sharing within Kubernetes.</p><p>Learn more from The New Stack about virtual clusters:</p><p><a href="https://thenewstack.io/vcluster-to-the-rescue/">Vcluster to the Rescue</a></p><p><a href="https://thenewstack.io/navigating-the-trade-offs-of-scaling-kubernetes-dev-environments/">Navigating the Trade-Offs of Scaling Kubernetes Dev Environments</a></p><p><a href="https://thenewstack.io/managing-kubernetes-clusters-for-platform-engineers/">Managing Kubernetes Clusters for Platform Engineers</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 25 Apr 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Lukas Gentele, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/kubernetes-gets-back-to-scaling-with-virtual-clusters-nZR8IXzz</link>
      <content:encoded><![CDATA[<p>A virtual cluster, described by Loft Labs CEO Lukas Gentele at KubeCon + CloudNativeCon Paris, is a Kubernetes control plane running inside a container within another Kubernetes cluster. In this New Stack Makers episode, Gentele explained that this approach eliminates the need for numerous separate control planes, allowing VMs to run in lightweight, quickly deployable containers. Loft Labs' open-sourced vcluster technology enables virtual clusters to spin up in about six seconds, significantly faster than traditional Kubernetes clusters that can take over 30 minutes to start in services like Amazon EKS or Google GKE.</p><p>The integration of vCluster into Rancher at KubeCon Paris enables users to manage virtual clusters alongside real clusters seamlessly. This innovation addresses challenges faced by companies managing multiple applications and clusters, advocating for a multi-tenant cluster approach for improved sharing and security, contrary to the trend of isolated single-tenant clusters that emerged due to complexities in cluster sharing within Kubernetes.</p><p>Learn more from The New Stack about virtual clusters:</p><p><a href="https://thenewstack.io/vcluster-to-the-rescue/">Vcluster to the Rescue</a></p><p><a href="https://thenewstack.io/navigating-the-trade-offs-of-scaling-kubernetes-dev-environments/">Navigating the Trade-Offs of Scaling Kubernetes Dev Environments</a></p><p><a href="https://thenewstack.io/managing-kubernetes-clusters-for-platform-engineers/">Managing Kubernetes Clusters for Platform Engineers</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="22550236" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/b23207ec-0dcd-4137-babd-2cdcb1f5493b/audio/f6b664fb-0f07-4ec4-ace6-d0f94fc67a0c/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Kubernetes Gets Back to Scaling with Virtual Clusters</itunes:title>
      <itunes:author>Lukas Gentele, Alex Williams</itunes:author>
      <itunes:duration>00:23:29</itunes:duration>
      <itunes:summary>A virtual cluster, described by Loft Labs CEO Lukas Gentele at KubeCon + CloudNativeCon Paris, is a Kubernetes control plane running inside a container within another Kubernetes cluster. In this New Stack Makers episode, Gentele explained that this approach eliminates the need for numerous separate control planes, allowing VMs to run in lightweight, quickly deployable containers. Loft Labs&apos; open-sourced vcluster technology enables virtual clusters to spin up in about six seconds, significantly faster than traditional Kubernetes clusters that can take over 30 minutes to start in services like Amazon EKS or Google GKE.</itunes:summary>
      <itunes:subtitle>A virtual cluster, described by Loft Labs CEO Lukas Gentele at KubeCon + CloudNativeCon Paris, is a Kubernetes control plane running inside a container within another Kubernetes cluster. In this New Stack Makers episode, Gentele explained that this approach eliminates the need for numerous separate control planes, allowing VMs to run in lightweight, quickly deployable containers. Loft Labs&apos; open-sourced vcluster technology enables virtual clusters to spin up in about six seconds, significantly faster than traditional Kubernetes clusters that can take over 30 minutes to start in services like Amazon EKS or Google GKE.</itunes:subtitle>
      <itunes:keywords>cluster, tech podcast, the new stack, vcluster, kubernetes, the new stack makers, software engineer, lukas gentele, rancher</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1468</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">82f3af68-2e5b-4f05-8b50-8eb09f1342f4</guid>
      <title>How Giant Swarm Is Helping to Support the Future of Flux</title>
      <description><![CDATA[<p>When Weaveworks, known for pioneering "GitOps," shut down, concerns arose about the future of Flux, a critical open-source project. However, in this episode of The New Stack Makers podcast, recorded at Open Source Summit in Paris, Puja Abbassi, Giant Swarm's VP of Product, reassured Alex Williams, Founder and Publisher of The New Stack, that Flux's maintenance is secure. </p><p>Major companies like Microsoft Azure and GitLab have pledged support. Giant Swarm, an avid Flux user, also contributes to its development, ensuring its vitality alongside related projects like infrastructure code plugins and UI improvements. Abbassi highlighted the importance of considering a project's sustainability and integration capabilities when choosing open-source tools. He noted Argo CD's advantage in UI, emphasizing that projects like Flux must evolve to meet user expectations and avoid being overshadowed. This underscores the crucial role of community support, diversity, and compatibility within the Cloud Native Computing Foundation's ecosystem for long-term tool adoption.</p><p>Learn more from The New Stack about Flux:</p><p><a href="https://thenewstack.io/end-of-an-era-weaveworks-closes-shop-amid-cloud-native-turbulence/">End of an Era: Weaveworks Closes Shop Amid Cloud Native Turbulence</a></p><p><a href="https://thenewstack.io/why-flux-isnt-dying-after-weaveworks/">Why Flux Isn't Dying after Weaveworks</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Mon, 22 Apr 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Puja Abbassi, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-giant-swarm-is-helping-to-support-the-future-of-flux-ZZPXYUEV</link>
      <content:encoded><![CDATA[<p>When Weaveworks, known for pioneering "GitOps," shut down, concerns arose about the future of Flux, a critical open-source project. However, in this episode of The New Stack Makers podcast, recorded at Open Source Summit in Paris, Puja Abbassi, Giant Swarm's VP of Product, reassured Alex Williams, Founder and Publisher of The New Stack, that Flux's maintenance is secure. </p><p>Major companies like Microsoft Azure and GitLab have pledged support. Giant Swarm, an avid Flux user, also contributes to its development, ensuring its vitality alongside related projects like infrastructure code plugins and UI improvements. Abbassi highlighted the importance of considering a project's sustainability and integration capabilities when choosing open-source tools. He noted Argo CD's advantage in UI, emphasizing that projects like Flux must evolve to meet user expectations and avoid being overshadowed. This underscores the crucial role of community support, diversity, and compatibility within the Cloud Native Computing Foundation's ecosystem for long-term tool adoption.</p><p>Learn more from The New Stack about Flux:</p><p><a href="https://thenewstack.io/end-of-an-era-weaveworks-closes-shop-amid-cloud-native-turbulence/">End of an Era: Weaveworks Closes Shop Amid Cloud Native Turbulence</a></p><p><a href="https://thenewstack.io/why-flux-isnt-dying-after-weaveworks/">Why Flux Isn't Dying after Weaveworks</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="27513929" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/aa8f9b30-c25d-4370-bfcc-05b0b68ad5e8/audio/a1316b38-7e60-4870-aa24-060ebd962694/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Giant Swarm Is Helping to Support the Future of Flux</itunes:title>
      <itunes:author>Puja Abbassi, Alex Williams</itunes:author>
      <itunes:duration>00:28:39</itunes:duration>
      <itunes:summary>When Weaveworks, known for pioneering &quot;GitOps,&quot; shut down, concerns arose about the future of Flux, a critical open-source project. However, in this episode of The New Stack Makers podcast, recorded at Open Source Summit in Paris, Puja Abbassi, Giant Swarm&apos;s VP of Product, reassured Alex Williams, Founder and Publisher of The New Stack, that Flux&apos;s maintenance is secure.</itunes:summary>
      <itunes:subtitle>When Weaveworks, known for pioneering &quot;GitOps,&quot; shut down, concerns arose about the future of Flux, a critical open-source project. However, in this episode of The New Stack Makers podcast, recorded at Open Source Summit in Paris, Puja Abbassi, Giant Swarm&apos;s VP of Product, reassured Alex Williams, Founder and Publisher of The New Stack, that Flux&apos;s maintenance is secure.</itunes:subtitle>
      <itunes:keywords>flux, software developer, giant swarm, ai, tech podcast, gitops, tech, developer podcast, the new stack makers, software engineer, puja abbassi, ai podcast</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1466</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6077dd96-1de4-4b3e-b2f1-2ffa85c101bf</guid>
      <title>AI, LLMs and Security: How to Deal with the New Threats</title>
      <description><![CDATA[<p>The use of large language models (LLMs) has become widespread, but there are significant security risks associated with them. LLMs with millions or billions of parameters are complex and challenging to fully scrutinize, making them susceptible to exploitation by attackers who can find loopholes or vulnerabilities. On an episode of The New Stack Makers, Chris Pirillo, Tech Evangelist, and Lance Seidman, Backend Engineer at Atomic Form, discussed these security challenges, emphasizing the need for human oversight to protect AI systems.</p><p>One example highlighted was malicious AI models on Hugging Face, which exploited the Python pickle module to execute arbitrary commands on users' machines. To mitigate such risks, Hugging Face implemented security scanners to check every file for security threats. However, human vigilance remains crucial in identifying and addressing potential exploits.</p><p>Seidman also stressed the importance of technical safeguards and a culture of security awareness within the AI community. Developers should prioritize security throughout the development life cycle to stay ahead of evolving threats. Overall, the message is clear: while AI offers remarkable capabilities, it requires careful management and oversight to prevent misuse and protect against security breaches.</p><p>Learn more from The New Stack about AI and security:</p><p><a href="https://thenewstack.io/artificial-intelligence-stopping-the-big-unknown-in-application-data-security/">Artificial Intelligence: Stopping the Big Unknown in Application, Data Security</a></p><p><a href="https://thenewstack.io/cyberattacks-ai-and-multicloud-hit-cybersecurity-in-2023/">Cyberattacks, AI and Multicloud Hit Cybersecurity in 2023</a></p><p><a href="https://thenewstack.io/will-generative-ai-kill-devsecops/">Will Generative AI Kill DevSecOps?</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 11 Apr 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Chris Pirillo, Lance Seidman)</author>
      <link>https://thenewstack.simplecast.com/episodes/ai-llms-and-security-how-to-deal-with-the-new-threats-_CCduj2Y</link>
      <content:encoded><![CDATA[<p>The use of large language models (LLMs) has become widespread, but there are significant security risks associated with them. LLMs with millions or billions of parameters are complex and challenging to fully scrutinize, making them susceptible to exploitation by attackers who can find loopholes or vulnerabilities. On an episode of The New Stack Makers, Chris Pirillo, Tech Evangelist, and Lance Seidman, Backend Engineer at Atomic Form, discussed these security challenges, emphasizing the need for human oversight to protect AI systems.</p><p>One example highlighted was malicious AI models on Hugging Face, which exploited the Python pickle module to execute arbitrary commands on users' machines. To mitigate such risks, Hugging Face implemented security scanners that check every file for security threats. However, human vigilance remains crucial in identifying and addressing potential exploits.</p><p>Seidman also stressed the importance of technical safeguards and a culture of security awareness within the AI community. Developers should prioritize security throughout the development life cycle to stay ahead of evolving threats. Overall, the message is clear: while AI offers remarkable capabilities, it requires careful management and oversight to prevent misuse and protect against security breaches.</p><p>Learn more from The New Stack about AI and security:</p><p><a href="https://thenewstack.io/artificial-intelligence-stopping-the-big-unknown-in-application-data-security/">Artificial Intelligence: Stopping the Big Unknown in Application, Data Security</a></p><p><a href="https://thenewstack.io/cyberattacks-ai-and-multicloud-hit-cybersecurity-in-2023/">Cyberattacks, AI and Multicloud Hit Cybersecurity in 2023</a></p><p><a href="https://thenewstack.io/will-generative-ai-kill-devsecops/">Will Generative AI Kill DevSecOps?</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="36022274" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/51666990-def8-4cd0-a062-973fa7cdf48e/audio/a1fdc067-165e-49b4-a737-0b16aea583ae/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>AI, LLMs and Security: How to Deal with the New Threats</itunes:title>
      <itunes:author>Chris Pirillo, Lance Seidman</itunes:author>
      <itunes:duration>00:37:31</itunes:duration>
      <itunes:summary>The use of large language models (LLMs) has become widespread, but there are significant security risks associated with them. LLMs with millions or billions of parameters are complex and challenging to fully scrutinize, making them susceptible to exploitation by attackers who can find loopholes or vulnerabilities. On an episode of The New Stack Makers, Chris Pirillo, Tech Evangelist, and Lance Seidman, Backend Engineer at Atomic Form, discussed these security challenges, emphasizing the need for human oversight to protect AI systems.</itunes:summary>
      <itunes:subtitle>The use of large language models (LLMs) has become widespread, but there are significant security risks associated with them. LLMs with millions or billions of parameters are complex and challenging to fully scrutinize, making them susceptible to exploitation by attackers who can find loopholes or vulnerabilities. On an episode of The New Stack Makers, Chris Pirillo, Tech Evangelist, and Lance Seidman, Backend Engineer at Atomic Form, discussed these security challenges, emphasizing the need for human oversight to protect AI systems.</itunes:subtitle>
      <itunes:keywords>software developer, breach, ai, tech podcast, the new stack, chris pirillo, lance seidman, tech, developer podcast, the new stack makers, software engineer, hugging face, ai podcast, security</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1464</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a6a62114-afba-4dc6-ae04-b16d1f774fc3</guid>
      <title>How Kubernetes Faces a New Reality with the AI Engineer</title>
      <description><![CDATA[<p>The Kubernetes community primarily focuses on improving the development and operations experience for applications and infrastructure, emphasizing DevOps and developer-centric approaches. In contrast, the data science community historically moved at a slower pace. However, with the emergence of the AI engineer persona, the pace of advancement in data science has accelerated significantly.</p><p>Alex Williams, founder and publisher of The New Stack, co-hosted a discussion with Sanjeev Mohan, an independent analyst, that highlighted the challenges faced by data-related tasks on Kubernetes due to the stateful nature of data. Unlike applications, restarting a database node after a failure may lead to inconsistent states and data loss. This discrepancy in pace and needs between developers and data scientists led to Kubernetes and the Cloud Native Computing Foundation initially overlooking data science.</p><p>Nevertheless, Mohan noted that the pace of data engineers has increased as they explore new AI applications and workloads. Kubernetes now plays a crucial role in supporting these advancements by helping manage resources efficiently, especially considering the high cost of training large language models (LLMs) and using GPUs for AI workloads. Mohan also discussed the evolving landscape of AI frameworks and the importance of aligning business use cases with AI strategies.</p><p>Learn more from The New Stack about data development and DevOps:</p><p><a href="https://thenewstack.io/ai-will-drive-streaming-data-adoption-says-redpanda-survey/">AI Will Drive Streaming Data Use - But Not Yet, Report Says</a></p><p><a href="https://thenewstack.io/the-paradigm-shift-from-model-centric-to-data-centric-ai/">The Paradigm Shift from Model-Centric to Data-Centric AI</a></p><p><a href="https://thenewstack.io/ai-development-needs-to-focus-more-on-data-less-on-models/">AI Development Needs to Focus More on Data, Less on Models</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 4 Apr 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Sanjeev Mohan, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-kubernetes-faces-a-new-reality-with-the-ai-engineer-CLCUovra</link>
      <content:encoded><![CDATA[<p>The Kubernetes community primarily focuses on improving the development and operations experience for applications and infrastructure, emphasizing DevOps and developer-centric approaches. In contrast, the data science community historically moved at a slower pace. However, with the emergence of the AI engineer persona, the pace of advancement in data science has accelerated significantly.</p><p>Alex Williams, founder and publisher of The New Stack, co-hosted a discussion with Sanjeev Mohan, an independent analyst, that highlighted the challenges faced by data-related tasks on Kubernetes due to the stateful nature of data. Unlike applications, restarting a database node after a failure may lead to inconsistent states and data loss. This discrepancy in pace and needs between developers and data scientists led to Kubernetes and the Cloud Native Computing Foundation initially overlooking data science.</p><p>Nevertheless, Mohan noted that the pace of data engineers has increased as they explore new AI applications and workloads. Kubernetes now plays a crucial role in supporting these advancements by helping manage resources efficiently, especially considering the high cost of training large language models (LLMs) and using GPUs for AI workloads. Mohan also discussed the evolving landscape of AI frameworks and the importance of aligning business use cases with AI strategies.</p><p>Learn more from The New Stack about data development and DevOps:</p><p><a href="https://thenewstack.io/ai-will-drive-streaming-data-adoption-says-redpanda-survey/">AI Will Drive Streaming Data Use - But Not Yet, Report Says</a></p><p><a href="https://thenewstack.io/the-paradigm-shift-from-model-centric-to-data-centric-ai/">The Paradigm Shift from Model-Centric to Data-Centric AI</a></p><p><a href="https://thenewstack.io/ai-development-needs-to-focus-more-on-data-less-on-models/">AI Development Needs to Focus More on Data, Less on Models</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="28308001" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/183a3173-ea78-401c-bc69-a9c40dbc64a8/audio/34dc95f2-93ec-4c48-8e54-ceee99aa32d6/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Kubernetes Faces a New Reality with the AI Engineer</itunes:title>
      <itunes:author>Sanjeev Mohan, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:29:29</itunes:duration>
      <itunes:summary>The Kubernetes community primarily focuses on improving the development and operations experience for applications and infrastructure, emphasizing DevOps and developer-centric approaches. In contrast, the data science community historically moved at a slower pace. However, with the emergence of the AI engineer persona, the pace of advancement in data science has accelerated significantly. 

Alex Williams, founder and publisher of The New Stack co-hosted a discussion with Sanjeev Mohan, an independent analyst, which highlighted the challenges faced by data-related tasks on Kubernetes due to the stateful nature of data. Unlike applications, restarting a database node after a failure may lead to inconsistent states and data loss. This discrepancy in pace and needs between developers and data scientists led to Kubernetes and the Cloud Native Computing Foundation initially overlooking data science. </itunes:summary>
      <itunes:subtitle>The Kubernetes community primarily focuses on improving the development and operations experience for applications and infrastructure, emphasizing DevOps and developer-centric approaches. In contrast, the data science community historically moved at a slower pace. However, with the emergence of the AI engineer persona, the pace of advancement in data science has accelerated significantly. 

Alex Williams, founder and publisher of The New Stack co-hosted a discussion with Sanjeev Mohan, an independent analyst, which highlighted the challenges faced by data-related tasks on Kubernetes due to the stateful nature of data. Unlike applications, restarting a database node after a failure may lead to inconsistent states and data loss. This discrepancy in pace and needs between developers and data scientists led to Kubernetes and the Cloud Native Computing Foundation initially overlooking data science. </itunes:subtitle>
      <itunes:keywords>data, software developer, ai, tech podcast, alex williams, the new stack, devops, devops podcast, tech, ai development, developer podcast, data science, the new stack makers, software engineer, sanjeev mohan, data development</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1465</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">fd8fcf51-8fff-48dd-931e-8fd89e594546</guid>
      <title>LLM Observability: The Breakdown</title>
      <description><![CDATA[<p>LLM observability focuses on maximizing the utility of large language models (LLMs) by monitoring key metrics and signals. Alex Williams, Founder and Publisher of The New Stack, and Janakiram MSV, Principal of Janakiram & Associates and an analyst and writer for The New Stack, discuss the emergence of the LLM stack, which encompasses various components like LLMs, vector databases, embedding models, retrieval systems, read anchor models, and more. The objective of LLM observability is to ensure that users can extract desired outcomes effectively from this complex ecosystem.</p><p>Similar to infrastructure observability in DevOps and SRE practices, LLM observability aims to provide insights into the LLM stack's performance. This includes monitoring metrics specific to LLMs, such as GPU/CPU usage, storage, model serving, change agents in applications, hallucinations, span traces, relevance, retrieval models, latency, monitoring, and user feedback. MSV emphasizes the importance of monitoring resource usage, model catalog synchronization with external providers like Hugging Face, vector database availability, and the inference engine's functionality.</p><p>He also mentions peer companies in the LLM observability space like Datadog, New Relic, Signoz, Dynatrace, LangChain (LangSmith), Arize.ai (Phoenix), and Truera, hinting at a deeper exploration in a future episode of The New Stack Makers.</p><p>Learn more from The New Stack about LLMs and observability:</p><p><a href="https://thenewstack.io/observability-in-2024-more-opentelemetry-less-confusion/">Observability in 2024: More OpenTelemetry, Less Confusion</a></p><p><a href="https://thenewstack.io/how-ai-can-supercharge-observability/">How AI Can Supercharge Observability</a></p><p><a href="https://thenewstack.io/next-gen-observability-monitoring-and-analytics-in-platform-engineering/">Next-Gen Observability: Monitoring and Analytics in Platform Engineering</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 28 Mar 2024 14:45:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Janakiram MSV, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/llm-observability-the-breakdown-WdkHy_T8</link>
      <content:encoded><![CDATA[<p>LLM observability focuses on maximizing the utility of large language models (LLMs) by monitoring key metrics and signals. Alex Williams, Founder and Publisher of The New Stack, and Janakiram MSV, Principal of Janakiram & Associates and an analyst and writer for The New Stack, discuss the emergence of the LLM stack, which encompasses various components like LLMs, vector databases, embedding models, retrieval systems, read anchor models, and more. The objective of LLM observability is to ensure that users can extract desired outcomes effectively from this complex ecosystem.</p><p>Similar to infrastructure observability in DevOps and SRE practices, LLM observability aims to provide insights into the LLM stack's performance. This includes monitoring metrics specific to LLMs, such as GPU/CPU usage, storage, model serving, change agents in applications, hallucinations, span traces, relevance, retrieval models, latency, monitoring, and user feedback. MSV emphasizes the importance of monitoring resource usage, model catalog synchronization with external providers like Hugging Face, vector database availability, and the inference engine's functionality.</p><p>He also mentions peer companies in the LLM observability space like Datadog, New Relic, Signoz, Dynatrace, LangChain (LangSmith), Arize.ai (Phoenix), and Truera, hinting at a deeper exploration in a future episode of The New Stack Makers.</p><p>Learn more from The New Stack about LLMs and observability:</p><p><a href="https://thenewstack.io/observability-in-2024-more-opentelemetry-less-confusion/">Observability in 2024: More OpenTelemetry, Less Confusion</a></p><p><a href="https://thenewstack.io/how-ai-can-supercharge-observability/">How AI Can Supercharge Observability</a></p><p><a href="https://thenewstack.io/next-gen-observability-monitoring-and-analytics-in-platform-engineering/">Next-Gen Observability: Monitoring and Analytics in Platform Engineering</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="24824311" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/354cc2ac-0c96-4743-9e98-2c99b1d54f9a/audio/8dcc9d55-31ef-4c2a-9b87-8b0b21e5071f/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>LLM Observability: The Breakdown</itunes:title>
      <itunes:author>Janakiram MSV, Alex Williams</itunes:author>
      <itunes:duration>00:25:51</itunes:duration>
      <itunes:summary>LLM observability focuses on maximizing the utility of large language models (LLMs) by monitoring key metrics and signals. Alex Williams, Founder and Publisher of The New Stack, and Janakiram MSV, Principal of Janakiram &amp; Associates and an analyst and writer for The New Stack, discuss the emergence of the LLM stack, which encompasses various components like LLMs, vector databases, embedding models, retrieval systems, read anchor models, and more. The objective of LLM observability is to ensure that users can extract desired outcomes effectively from this complex ecosystem.</itunes:summary>
      <itunes:subtitle>LLM observability focuses on maximizing the utility of large language models (LLMs) by monitoring key metrics and signals. Alex Williams, Founder and Publisher of The New Stack, and Janakiram MSV, Principal of Janakiram &amp; Associates and an analyst and writer for The New Stack, discuss the emergence of the LLM stack, which encompasses various components like LLMs, vector databases, embedding models, retrieval systems, read anchor models, and more. The objective of LLM observability is to ensure that users can extract desired outcomes effectively from this complex ecosystem.</itunes:subtitle>
      <itunes:keywords>machine learning, software developer, ai, tech podcast, alex williams, the new stack, llm observability, monitoring, janakiram msv, the new stack makers, observability</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1463</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">aa242199-b4ff-4d9d-9d04-5f512bf5fd77</guid>
      <title>Why Software Developers Should Be Thinking About the Climate</title>
      <description><![CDATA[<p>In a conversation on The New Stack Makers, Alex Williams, TNS founder and publisher, and Charles Humble, an industry expert who has served as a software engineer, architect and CTO and is now a podcaster, author and consultant at Conissaunce Ltd., discussed why software developers and engineers should care about their impact on climate change. Humble emphasized that building software sustainably starts with better operations, leading to cost savings and improved security. He cited past successes in combating environmental issues like acid rain and the ozone hole through international agreements and emissions reduction strategies.</p><p>Despite modest growth since 2010, data centers remain significant electricity consumers, comparable to countries like Brazil. The power-intensive nature of AI models exacerbates these challenges and may lead to scarcity issues. Humble mentioned the Green Software Foundation's Maturity Matrix with goals for carbon-free data centers and longer device lifespans, discussing their validity and the role of regulation in achieving them. Overall, software development's environmental impact, primarily carbon emissions, necessitates proactive measures and industry-wide collaboration.</p><p>Learn more from The New Stack about sustainability:</p><p><a href="https://thenewstack.io/what-is-greenops-putting-a-sustainable-focus-on-finops/">What is GreenOps? Putting a Sustainable Focus on FinOps</a></p><p><a href="https://thenewstack.io/unraveling-the-costs-of-bad-code-in-software-development/">Unraveling the Costs of Bad Code in Software Development</a></p><p><a href="https://thenewstack.io/can-reducing-cloud-waste-help-save-the-planet/">Can Reducing Cloud Waste Help Save the Planet?</a></p><p><a href="https://thenewstack.io/how-to-build-open-source-sustainability/">How to Build Open Source Sustainability</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 21 Mar 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Charles Humble, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/why-software-developers-should-be-thinking-about-the-climate-3oJUPMnZ</link>
      <content:encoded><![CDATA[<p>In a conversation on The New Stack Makers, Alex Williams, TNS founder and publisher, and Charles Humble, an industry expert who has served as a software engineer, architect and CTO and is now a podcaster, author and consultant at Conissaunce Ltd., discussed why software developers and engineers should care about their impact on climate change. Humble emphasized that building software sustainably starts with better operations, leading to cost savings and improved security. He cited past successes in combating environmental issues like acid rain and the ozone hole through international agreements and emissions reduction strategies.</p><p>Despite modest growth since 2010, data centers remain significant electricity consumers, comparable to countries like Brazil. The power-intensive nature of AI models exacerbates these challenges and may lead to scarcity issues. Humble mentioned the Green Software Foundation's Maturity Matrix with goals for carbon-free data centers and longer device lifespans, discussing their validity and the role of regulation in achieving them. Overall, software development's environmental impact, primarily carbon emissions, necessitates proactive measures and industry-wide collaboration.</p><p>Learn more from The New Stack about sustainability:</p><p><a href="https://thenewstack.io/what-is-greenops-putting-a-sustainable-focus-on-finops/">What is GreenOps? Putting a Sustainable Focus on FinOps</a></p><p><a href="https://thenewstack.io/unraveling-the-costs-of-bad-code-in-software-development/">Unraveling the Costs of Bad Code in Software Development</a></p><p><a href="https://thenewstack.io/can-reducing-cloud-waste-help-save-the-planet/">Can Reducing Cloud Waste Help Save the Planet?</a></p><p><a href="https://thenewstack.io/how-to-build-open-source-sustainability/">How to Build Open Source Sustainability</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="37362669" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/b47ad6a1-b556-4a2a-8b98-517f0030ee70/audio/f145230a-4f9d-4ee9-a4f3-a91493be2ffd/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Why Software Developers Should Be Thinking About the Climate</itunes:title>
      <itunes:author>Charles Humble, Alex Williams</itunes:author>
      <itunes:duration>00:38:55</itunes:duration>
      <itunes:summary>In a conversation on The New Stack Makers podcast, Alex Williams, TNS founder and publisher, and Charles Humble, an industry expert who has served as a software engineer, architect and CTO and is now a podcaster, author and consultant at Conissaunce Ltd., discussed why software developers and engineers should care about their impact on climate change. Humble emphasized that building software sustainably starts with better operations, leading to cost savings and improved security. He cited past successes in combating environmental issues like acid rain and the ozone hole through international agreements and emissions reduction strategies.</itunes:summary>
      <itunes:subtitle>In a conversation on The New Stack Makers podcast, Alex Williams, TNS founder and publisher, and Charles Humble, an industry expert who has served as a software engineer, architect and CTO and is now a podcaster, author and consultant at Conissaunce Ltd., discussed why software developers and engineers should care about their impact on climate change. Humble emphasized that building software sustainably starts with better operations, leading to cost savings and improved security. He cited past successes in combating environmental issues like acid rain and the ozone hole through international agreements and emissions reduction strategies.</itunes:subtitle>
      <itunes:keywords>ai, tech podcast, the new stack, sustainability, tech, ai development, developer podcast, the new stack makers, software development cost, charles humble</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1462</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8ab283de-50e2-4a45-9083-c2036afdee9d</guid>
      <title>Nvidia’s Superchips for AI: ‘Radical,’ but a Work in Progress</title>
      <description><![CDATA[<p>This New Stack Makers podcast, co-hosted by Alex Williams, TNS founder and publisher, and Adrian Cockcroft, Partner and Analyst at OrionX.net, discussed Nvidia's GH200 Grace Hopper superchip. Industry expert Sunil Mallya, Co-founder and CTO of Flip AI, weighed in on how it is revolutionizing the hardware industry for AI workloads by centralizing GPU communication, reducing networking overhead, and creating a more efficient system.</p><p>Mallya noted that despite its innovative design, challenges remain in adoption due to interface issues and the need for software to catch up with hardware advancements. However, optimism persists for the future of AI-focused chips, with Nvidia leading the charge in creating large-scale coherent memory systems. Meanwhile, Flip AI, a DevOps large language model, aims to interpret observability data to troubleshoot incidents effectively across various cloud platforms. While discussing the latest chip innovations and challenges in training large language models, the episode sheds light on the evolving landscape of AI hardware and software integration.</p><p>Learn more from The New Stack about Nvidia and the future of chip design:</p><p><a href="https://thenewstack.io/nvidia-wants-to-rewrite-the-software-development-stack/">Nvidia Wants to Rewrite the Software Development Stack</a></p><p><a href="https://thenewstack.io/nvidia-gpu-dominance-at-a-crossroads/">Nvidia GPU Dominance at a Crossroads</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Thu, 14 Mar 2024 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Alex Williams, Adrian Cockcroft, Sunil Mallya)</author>
      <link>https://thenewstack.simplecast.com/episodes/nvidias-superchips-for-ai-radical-but-a-work-in-progress-wE117ETZ</link>
      <content:encoded><![CDATA[<p>This New Stack Makers podcast, co-hosted by Alex Williams, TNS founder and publisher, and Adrian Cockcroft, Partner and Analyst at OrionX.net, discussed Nvidia's GH200 Grace Hopper superchip. Industry expert Sunil Mallya, Co-founder and CTO of Flip AI, weighed in on how it is revolutionizing the hardware industry for AI workloads by centralizing GPU communication, reducing networking overhead, and creating a more efficient system.</p><p>Mallya noted that despite its innovative design, challenges remain in adoption due to interface issues and the need for software to catch up with hardware advancements. However, optimism persists for the future of AI-focused chips, with Nvidia leading the charge in creating large-scale coherent memory systems. Meanwhile, Flip AI, a DevOps large language model, aims to interpret observability data to troubleshoot incidents effectively across various cloud platforms. While discussing the latest chip innovations and challenges in training large language models, the episode sheds light on the evolving landscape of AI hardware and software integration.</p><p>Learn more from The New Stack about Nvidia and the future of chip design:</p><p><a href="https://thenewstack.io/nvidia-wants-to-rewrite-the-software-development-stack/">Nvidia Wants to Rewrite the Software Development Stack</a></p><p><a href="https://thenewstack.io/nvidia-gpu-dominance-at-a-crossroads/">Nvidia GPU Dominance at a Crossroads</a></p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="38172256" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/c27ff629-b50f-4c9d-a4b8-154adb18e4a5/audio/21249ddc-8d38-4138-8e5d-612df9cfb2b5/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Nvidia’s Superchips for AI: ‘Radical,’ but a Work in Progress</itunes:title>
      <itunes:author>Alex Williams, Adrian Cockcroft, Sunil Mallya</itunes:author>
      <itunes:duration>00:39:45</itunes:duration>
      <itunes:summary>This New Stack Makers podcast co-hosted by Alex Williams, TNS founder and publisher, and Adrian Cockcroft, Partner and Analyst at OrionX.net, discussed Nvidia&apos;s GH200 Grace Hopper superchip. Industry expert Sunil Mallya, Co-founder and CTO of Flip AI, also weighed in on how it is revolutionizing the hardware industry for AI workloads by centralizing GPU communication, reducing networking overhead, and creating a more efficient system. </itunes:summary>
      <itunes:subtitle>This New Stack Makers podcast co-hosted by Alex Williams, TNS founder and publisher, and Adrian Cockcroft, Partner and Analyst at OrionX.net, discussed Nvidia&apos;s GH200 Grace Hopper superchip. Industry expert Sunil Mallya, Co-founder and CTO of Flip AI, also weighed in on how it is revolutionizing the hardware industry for AI workloads by centralizing GPU communication, reducing networking overhead, and creating a more efficient system. </itunes:subtitle>
      <itunes:keywords>adrian cockcroft, software developer, architecture, ai, tech podcast, alex williams, the new stack, devops, ai workloads, tech, developer podcast, incident management, the new stack makers, software engineer, chip design, nvidia, sunil mallya, nvidia gh200 grace hopper, observability</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1461</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">39a49aad-2bc7-453b-bdd6-35f7039b8330</guid>
      <title>Is GitHub Copilot Dependable? These Demos Aren’t Promising</title>
      <description><![CDATA[<p>This New Stack Makers podcast co-hosted by TNS founder and publisher, Alex Williams and Joan Westenberg, founder and writer of Joan’s Index, discussed Copilot. Westenberg highlighted its integration with Microsoft 365 and its role as a coding assistant, showcasing its potential to streamline various tasks. </p><p>However, she also revealed its limitations, particularly in reliability. Despite being designed to assist with tasks across Microsoft 365, Copilot's performance fell short during Westenberg's tests, failing to retrieve necessary information from her email and Microsoft Teams meetings. While Copilot proves useful for coding, providing helpful code snippets, its effectiveness diminishes for more complex projects. Westenberg's demonstrations underscored both the strengths and weaknesses of Copilot, emphasizing the need for improvement, especially in reliability, to fulfill its promise as a versatile work companion.</p><p> </p><p>Learn more from The New Stack about Copilot </p><p><a href="https://thenewstack.io/microsoft-one-ups-google-with-copilot-stack-for-developers/   "><strong>Microsoft One-ups Google with Copilot Stack for Developers </strong></a></p><p><a href="https://thenewstack.io/copilot-enterprise-introduces-search-and-customized-best-practices/ "><strong>Copilot Enterprise Introduces Search and Customized Best Practices </strong></a></p><p> </p><p><a href="https://thenewstack.io/newsletter/ ">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p><p> </p>
]]></description>
      <pubDate>Thu, 7 Mar 2024 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Joan Westenberg, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/is-github-copilot-dependable-these-demos-arent-promising-yfBop0o9</link>
      <content:encoded><![CDATA[<p>This New Stack Makers podcast co-hosted by TNS founder and publisher, Alex Williams and Joan Westenberg, founder and writer of Joan’s Index, discussed Copilot. Westenberg highlighted its integration with Microsoft 365 and its role as a coding assistant, showcasing its potential to streamline various tasks. </p><p>However, she also revealed its limitations, particularly in reliability. Despite being designed to assist with tasks across Microsoft 365, Copilot's performance fell short during Westenberg's tests, failing to retrieve necessary information from her email and Microsoft Teams meetings. While Copilot proves useful for coding, providing helpful code snippets, its effectiveness diminishes for more complex projects. Westenberg's demonstrations underscored both the strengths and weaknesses of Copilot, emphasizing the need for improvement, especially in reliability, to fulfill its promise as a versatile work companion.</p><p> </p><p>Learn more from The New Stack about Copilot </p><p><a href="https://thenewstack.io/microsoft-one-ups-google-with-copilot-stack-for-developers/   "><strong>Microsoft One-ups Google with Copilot Stack for Developers </strong></a></p><p><a href="https://thenewstack.io/copilot-enterprise-introduces-search-and-customized-best-practices/ "><strong>Copilot Enterprise Introduces Search and Customized Best Practices </strong></a></p><p> </p><p><a href="https://thenewstack.io/newsletter/ ">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p><p> </p>
]]></content:encoded>
      <enclosure length="28385742" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/8afcf993-aabc-4dfe-aacd-418a5de7c6f4/audio/7cf23ae2-1266-4d20-a7f4-593798fae73d/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Is GitHub Copilot Dependable? These Demos Aren’t Promising</itunes:title>
      <itunes:author>Joan Westenberg, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:29:34</itunes:duration>
      <itunes:summary>This New Stack Makers podcast co-hosted by TNS founder and publisher, Alex Williams and Joan Westenberg, founder and writer of Joan’s Index, discussed Copilot. Westenberg highlighted its integration with Microsoft 365 and its role as a coding assistant, showcasing its potential to streamline various tasks. </itunes:summary>
      <itunes:subtitle>This New Stack Makers podcast co-hosted by TNS founder and publisher, Alex Williams and Joan Westenberg, founder and writer of Joan’s Index, discussed Copilot. Westenberg highlighted its integration with Microsoft 365 and its role as a coding assistant, showcasing its potential to streamline various tasks. </itunes:subtitle>
      <itunes:keywords>generative ai, joan westenberg, software developer, copilot, tech podcast, the new stack, devops, tech, developer podcast, the new stack makers, software engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1460</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">930429e4-6a5f-4dab-9f31-1f0131f0279d</guid>
      <title>The New Monitoring for Services That Feed from LLMs</title>
      <description><![CDATA[<p>This New Stack Makers podcast co-hosted by Adrian Cockcroft, analyst at OrionX.net, and TNS founder and publisher, Alex Williams, discusses the importance of monitoring services that use Large Language Models (LLMs) and the emergence of tools like LangChain and LangSmith to address this need. Cockcroft, formerly of Netflix and now working with The New Stack, highlights the significance of monitoring AI apps built on LLMs and the challenges posed by slow and expensive LLM API calls. LangChain acts as middleware, connecting LLMs with services, much as Java Database Connectivity (JDBC) connects applications with databases. LangChain's monitoring capabilities led to the development of LangSmith, a dedicated monitoring tool. Another tool, LangKit by WhyLabs, offers similar functionality but is less tightly integrated. This reflects the typical evolution of open source projects into commercial products. LangChain recently secured funding, indicating growing interest in such monitoring solutions. Cockcroft emphasizes the importance of enterprise-level support and tooling for integrating these solutions into commercial environments. The discussion underscores the evolving landscape of monitoring services powered by LLMs and the emergence of specialized tools to address the associated challenges.</p><p> </p><p>Learn more from The New Stack about LangChain: </p><p><a href="https://thenewstack.io/langchain-the-trendiest-web-framework-of-2023-thanks-to-ai/ "><strong>LangChain: The Trendiest Web Framework of 2023, Thanks to AI </strong></a></p><p><strong>How Retool AI Differs from LangChain (Hint: It's Automation) </strong></p><p> </p><p><a href="https://thenewstack.io/newsletter/"><strong>Join our community of newsletter subscribers to stay on top of the news and at the top of your game</strong></a>. </p><p> </p>
]]></description>
      <pubDate>Wed, 28 Feb 2024 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Adrian Cockcroft, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-new-monitoring-for-services-that-feed-from-llms-8flOsg30</link>
      <content:encoded><![CDATA[<p>This New Stack Makers podcast co-hosted by Adrian Cockcroft, analyst at OrionX.net, and TNS founder and publisher, Alex Williams, discusses the importance of monitoring services that use Large Language Models (LLMs) and the emergence of tools like LangChain and LangSmith to address this need. Cockcroft, formerly of Netflix and now working with The New Stack, highlights the significance of monitoring AI apps built on LLMs and the challenges posed by slow and expensive LLM API calls. LangChain acts as middleware, connecting LLMs with services, much as Java Database Connectivity (JDBC) connects applications with databases. LangChain's monitoring capabilities led to the development of LangSmith, a dedicated monitoring tool. Another tool, LangKit by WhyLabs, offers similar functionality but is less tightly integrated. This reflects the typical evolution of open source projects into commercial products. LangChain recently secured funding, indicating growing interest in such monitoring solutions. Cockcroft emphasizes the importance of enterprise-level support and tooling for integrating these solutions into commercial environments. The discussion underscores the evolving landscape of monitoring services powered by LLMs and the emergence of specialized tools to address the associated challenges.</p><p> </p><p>Learn more from The New Stack about LangChain: </p><p><a href="https://thenewstack.io/langchain-the-trendiest-web-framework-of-2023-thanks-to-ai/ "><strong>LangChain: The Trendiest Web Framework of 2023, Thanks to AI </strong></a></p><p><strong>How Retool AI Differs from LangChain (Hint: It's Automation) </strong></p><p> </p><p><a href="https://thenewstack.io/newsletter/"><strong>Join our community of newsletter subscribers to stay on top of the news and at the top of your game</strong></a>. </p><p> </p>
]]></content:encoded>
      <enclosure length="25971191" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/940fbd39-3936-441f-8087-4ce99b7be568/audio/050961ea-530c-49a0-ae27-046982856d34/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The New Monitoring for Services That Feed from LLMs</itunes:title>
      <itunes:author>Adrian Cockcroft, Alex Williams</itunes:author>
      <itunes:duration>00:27:03</itunes:duration>
      <itunes:summary>This New Stack Makers podcast co-hosted by Adrian Cockcroft, analyst at OrionX.net, and TNS founder and publisher, Alex Williams, discusses the importance of monitoring services that use Large Language Models (LLMs) and the emergence of tools like LangChain and LangSmith to address this need. Cockcroft, formerly of Netflix and now working with The New Stack, highlights the significance of monitoring AI apps built on LLMs and the challenges posed by slow and expensive LLM API calls.</itunes:summary>
      <itunes:subtitle>This New Stack Makers podcast co-hosted by Adrian Cockcroft, analyst at OrionX.net, and TNS founder and publisher, Alex Williams, discusses the importance of monitoring services that use Large Language Models (LLMs) and the emergence of tools like LangChain and LangSmith to address this need. Cockcroft, formerly of Netflix and now working with The New Stack, highlights the significance of monitoring AI apps built on LLMs and the challenges posed by slow and expensive LLM API calls.</itunes:subtitle>
      <itunes:keywords>adrian cockcroft, software developer, ai, tech podcast, the new stack, tech, developer podcast, ai models, large language models, the new stack makers, software engineer, ai podcast</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1459</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e99b35a1-dc9e-406f-924d-5ee48e40b1cf</guid>
      <title>How Platform Engineering Supports SRE</title>
      <description><![CDATA[<p>In this New Stack Makers podcast, Martin Parker, a solutions architect for UST, spoke with TNS editor-in-chief, Heather Joslyn and discussed the significance of internal developer platforms (IDPs), emphasizing benefits beyond frontend developers to backend engineers and site reliability engineers (SREs). </p><p>Parker highlighted the role of IDPs in automating repetitive tasks, allowing SREs to focus on optimizing application performance. Standardization is key, ensuring observability and monitoring solutions align with best practices and cater to SRE needs. By providing standardized service level indicators (SLIs) and key performance indicators (KPIs), IDPs enable SREs to maintain reliability efficiently. Parker stresses the importance of avoiding siloed solutions by establishing standardized practices and tools for effective monitoring and incident response. Overall, the deployment of IDPs aims to streamline operations, reduce incidents, and enhance organizational value by empowering SREs to concentrate on system maintenance and improvements.</p><p>Learn more from The New Stack about UST: </p><p><a href="https://thenewstack.io/cloud-cost-unit-economics-a-modern-profitability-model/">Cloud Cost-Unit Economics: A Modern Profitability Model </a></p><p><a href="https://thenewstack.io/cloud-native-users-struggle-to-achieve-benefits-report-says/ ">Cloud Native Users Struggle to Achieve Benefits, Report Says</a> </p><p><a href="https://thenewstack.io/newsletter/ ">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></description>
      <pubDate>Wed, 7 Feb 2024 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (UST, The New Stack, Martin Parker, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-platform-engineering-supports-sre-_NWRxL40</link>
      <content:encoded><![CDATA[<p>In this New Stack Makers podcast, Martin Parker, a solutions architect for UST, spoke with TNS editor-in-chief, Heather Joslyn and discussed the significance of internal developer platforms (IDPs), emphasizing benefits beyond frontend developers to backend engineers and site reliability engineers (SREs). </p><p>Parker highlighted the role of IDPs in automating repetitive tasks, allowing SREs to focus on optimizing application performance. Standardization is key, ensuring observability and monitoring solutions align with best practices and cater to SRE needs. By providing standardized service level indicators (SLIs) and key performance indicators (KPIs), IDPs enable SREs to maintain reliability efficiently. Parker stresses the importance of avoiding siloed solutions by establishing standardized practices and tools for effective monitoring and incident response. Overall, the deployment of IDPs aims to streamline operations, reduce incidents, and enhance organizational value by empowering SREs to concentrate on system maintenance and improvements.</p><p>Learn more from The New Stack about UST: </p><p><a href="https://thenewstack.io/cloud-cost-unit-economics-a-modern-profitability-model/">Cloud Cost-Unit Economics: A Modern Profitability Model </a></p><p><a href="https://thenewstack.io/cloud-native-users-struggle-to-achieve-benefits-report-says/ ">Cloud Native Users Struggle to Achieve Benefits, Report Says</a> </p><p><a href="https://thenewstack.io/newsletter/ ">Join our community of newsletter subscribers to stay on top of the news and at the top of your game</a>. </p>
]]></content:encoded>
      <enclosure length="18116066" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/f5076cfb-3789-4350-8f31-b8ebf29fb229/audio/a21245c4-78d1-4ef2-9020-c9c468bf4c34/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Platform Engineering Supports SRE</itunes:title>
      <itunes:author>UST, The New Stack, Martin Parker, Heather Joslyn</itunes:author>
      <itunes:duration>00:18:52</itunes:duration>
      <itunes:summary>In this New Stack Makers podcast, Martin Parker, a solutions architect for UST, spoke with TNS editor-in-chief, Heather Joslyn and discussed the significance of internal developer platforms (IDPs), emphasizing benefits beyond frontend developers to backend engineers and site reliability engineers (SREs). </itunes:summary>
      <itunes:subtitle>In this New Stack Makers podcast, Martin Parker, a solutions architect for UST, spoke with TNS editor-in-chief, Heather Joslyn and discussed the significance of internal developer platforms (IDPs), emphasizing benefits beyond frontend developers to backend engineers and site reliability engineers (SREs). </itunes:subtitle>
      <itunes:keywords>software developer, martin parker, tech podcast, ust, the new stack, devops, devops podcast, tech, developer podcast, internal developer platform, the new stack makers, software engineer, platform engineering</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1458</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">960314ba-ef8d-4c5c-ad18-b2aab154da04</guid>
      <title>Internal Developer Platforms: Helping Teams Limit Scope</title>
      <description><![CDATA[<p>In this New Stack Makers podcast, Ben Wilcock, a senior technical marketing architect for Tanzu, spoke with TNS editor-in-chief, Heather Joslyn and discussed the challenges organizations face when building internal developer platforms, particularly the issue of scope, at KubeCon + CloudNativeCon North America. </p><p>He emphasized the difficulty for platform engineering teams to select and integrate various Kubernetes projects amid a plethora of options. Wilcock highlights the complexity of tracking software updates, new features, and dependencies once choices are made. He underscores the advantage of having a standardized approach to software deployment, preventing errors caused by diverse mechanisms. </p><p>Tanzu aims to simplify the adoption of platform engineering and internal developer platforms, offering a turnkey approach with the Tanzu Application Platform. This platform is designed to be flexible, malleable, and functional out of the box. Additionally, Tanzu has introduced the Tanzu Developer Portal, providing a focal point for developers to share information and facilitating faster progress in platform engineering without the need to integrate numerous open source projects.</p><p> </p><p>Learn more from The New Stack about Tanzu and internal developer platforms:</p><p><a href="https://thenewstack.io/vmware-unveils-a-pile-of-new-data-services-for-its-cloud/ ">VMware Unveils a Pile of New Data Services for Its Cloud</a></p><p><a href="https://thenewstack.io/vmware-expands-tanzu-into-a-full-platform-engineering-environment/ ">VMware Expands Tanzu into a Full Platform Engineering Environment</a> </p><p><a href="https://thenewstack.io/vmware-targets-the-platform-engineer/ ">VMware Targets the Platform Engineer</a></p><p> </p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p><p> </p>
]]></description>
      <pubDate>Wed, 31 Jan 2024 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (vmware tanzu, vmware, The New Stack, Ben Wilcock, Broadcom, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/internal-developer-platforms-helping-teams-limit-scope-xQ1AMAej</link>
      <content:encoded><![CDATA[<p>In this New Stack Makers podcast, Ben Wilcock, a senior technical marketing architect for Tanzu, spoke with TNS editor-in-chief, Heather Joslyn and discussed the challenges organizations face when building internal developer platforms, particularly the issue of scope, at KubeCon + CloudNativeCon North America. </p><p>He emphasized the difficulty for platform engineering teams to select and integrate various Kubernetes projects amid a plethora of options. Wilcock highlights the complexity of tracking software updates, new features, and dependencies once choices are made. He underscores the advantage of having a standardized approach to software deployment, preventing errors caused by diverse mechanisms. </p><p>Tanzu aims to simplify the adoption of platform engineering and internal developer platforms, offering a turnkey approach with the Tanzu Application Platform. This platform is designed to be flexible, malleable, and functional out of the box. Additionally, Tanzu has introduced the Tanzu Developer Portal, providing a focal point for developers to share information and facilitating faster progress in platform engineering without the need to integrate numerous open source projects.</p><p> </p><p>Learn more from The New Stack about Tanzu and internal developer platforms:</p><p><a href="https://thenewstack.io/vmware-unveils-a-pile-of-new-data-services-for-its-cloud/ ">VMware Unveils a Pile of New Data Services for Its Cloud</a></p><p><a href="https://thenewstack.io/vmware-expands-tanzu-into-a-full-platform-engineering-environment/ ">VMware Expands Tanzu into a Full Platform Engineering Environment</a> </p><p><a href="https://thenewstack.io/vmware-targets-the-platform-engineer/ ">VMware Targets the Platform Engineer</a></p><p> </p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p><p> </p>
]]></content:encoded>
      <enclosure length="14782516" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/8023e66f-fbdb-4de3-af3b-975c191259b0/audio/ae7c7255-5645-4afe-a6f4-868cb8a86f9d/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Internal Developer Platforms: Helping Teams Limit Scope</itunes:title>
      <itunes:author>vmware tanzu, vmware, The New Stack, Ben Wilcock, Broadcom, Heather Joslyn</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/3fe526cf-3e97-43d0-a329-1ad6899b26d5/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:15:23</itunes:duration>
      <itunes:summary>In this New Stack Makers podcast, Ben Wilcock, a senior technical marketing architect for Tanzu, spoke with TNS editor-in-chief, Heather Joslyn and discussed the challenges organizations face when building internal developer platforms, particularly the issue of scope, at KubeCon + CloudNativeCon North America. </itunes:summary>
      <itunes:subtitle>In this New Stack Makers podcast, Ben Wilcock, a senior technical marketing architect for Tanzu, spoke with TNS editor-in-chief, Heather Joslyn and discussed the challenges organizations face when building internal developer platforms, particularly the issue of scope, at KubeCon + CloudNativeCon North America. </itunes:subtitle>
      <itunes:keywords>vmware, software developer, ben wilcock, tech podcast, the new stack, tanzu, devops, devops podcast, scope, tech, developer podcast, kubecon na, internal developer platform, the new stack makers, software engineer, kubecon 2023, kubecon, vmware tanzu</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1457</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d1935b26-f9c7-4146-b4a7-62ed83483827</guid>
      <title>How the Kubernetes Gateway API Beats Network Ingress</title>
      <description><![CDATA[<p>In this New Stack Makers podcast, Mike Stefaniak, senior product manager at NGINX, and Kate Osborn, a software engineer at NGINX, discuss the challenges associated with network ingress in Kubernetes clusters and introduce the Kubernetes Gateway API as a solution. Stefaniak highlights the issues that arise when multiple teams work on the same ingress, leading to friction and incidents. NGINX has also introduced the NGINX Gateway Fabric, implementing the Kubernetes Gateway API as an alternative to network ingress. </p><p>The Kubernetes Gateway API, proposed four years ago and recently made generally available, offers advantages such as extensibility. It allows referencing policies with custom resource definitions for better validation, avoiding the need for annotations. Each resource has an associated role, enabling clean application of role-based access control policies for enhanced security.</p><p>While network ingress is prevalent and mature, the Kubernetes Gateway API is expected to find adoption in greenfield projects initially. It has the potential to unite North-South and East-West traffic, offering a role-oriented API for comprehensive control over cluster traffic. The episode encourages exploring the Kubernetes Gateway API and engaging with the community to contribute to its development.</p><p>Learn more from The New Stack about NGINX and the open source Kubernetes Gateway API:</p><p><a href="https://thenewstack.io/kubernetes-api-gateway-1-0-goes-live-as-maintainers-plan-for-the-future/"><strong>Kubernetes API Gateway 1.0 Goes Live, as Maintainers Plan for The Future </strong></a></p><p><a href="https://thenewstack.io/api-gateway-ingress-controller-or-service-mesh-when-to-use-what-and-why/ ">API Gateway, Ingress Controller or Service Mesh: When to Use What and Why </a></p><p><a href="https://thenewstack.io/ingress-controllers-or-the-kubernetes-gateway-api-which-is-right-for-you/ ">Ingress Controllers or the Kubernetes Gateway API? Which is Right for You? </a></p><p> </p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Tue, 23 Jan 2024 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (the new stack, nginx, Kate Osborn, Mike Stefaniak, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-the-kubernetes-gateway-api-beats-network-ingress-VWE1ebyg</link>
      <content:encoded><![CDATA[<p>In this New Stack Makers podcast, Mike Stefaniak, senior product manager at NGINX, and Kate Osborn, a software engineer at NGINX, discuss the challenges associated with network ingress in Kubernetes clusters and introduce the Kubernetes Gateway API as a solution. Stefaniak highlights the issues that arise when multiple teams work on the same ingress, leading to friction and incidents. NGINX has also introduced the NGINX Gateway Fabric, implementing the Kubernetes Gateway API as an alternative to network ingress. </p><p>The Kubernetes Gateway API, proposed four years ago and recently made generally available, offers advantages such as extensibility. It allows referencing policies with custom resource definitions for better validation, avoiding the need for annotations. Each resource has an associated role, enabling clean application of role-based access control policies for enhanced security.</p><p>While network ingress is prevalent and mature, the Kubernetes Gateway API is expected to find adoption in greenfield projects initially. It has the potential to unite North-South and East-West traffic, offering a role-oriented API for comprehensive control over cluster traffic. The episode encourages exploring the Kubernetes Gateway API and engaging with the community to contribute to its development.</p><p>Learn more from The New Stack about NGINX and the open source Kubernetes Gateway API:</p><p><a href="https://thenewstack.io/kubernetes-api-gateway-1-0-goes-live-as-maintainers-plan-for-the-future/"><strong>Kubernetes API Gateway 1.0 Goes Live, as Maintainers Plan for The Future </strong></a></p><p><a href="https://thenewstack.io/api-gateway-ingress-controller-or-service-mesh-when-to-use-what-and-why/ ">API Gateway, Ingress Controller or Service Mesh: When to Use What and Why </a></p><p><a href="https://thenewstack.io/ingress-controllers-or-the-kubernetes-gateway-api-which-is-right-for-you/ ">Ingress Controllers or the Kubernetes Gateway API? Which is Right for You? </a></p><p> </p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="14449728" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/eb039999-cf12-458b-a025-a1f86f87b7ac/audio/c68d67bd-b216-4bcb-834b-bcfefd3ca8d8/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How the Kubernetes Gateway API Beats Network Ingress</itunes:title>
      <itunes:author>the new stack, nginx, Kate Osborn, Mike Stefaniak, Heather Joslyn</itunes:author>
      <itunes:duration>00:15:03</itunes:duration>
      <itunes:summary>Mike Stefaniak, senior product manager at NGINX, and Kate Osborn, a software engineer at NGINX, discuss with The New Stack editor-in-chief, Heather Joslyn, the challenges associated with network ingress in Kubernetes clusters and introduce the Kubernetes Gateway API as a solution. NGINX has also introduced the NGINX Gateway Fabric, implementing the Kubernetes Gateway API as an alternative to network ingress. </itunes:summary>
      <itunes:subtitle>Mike Stefaniak, senior product manager at NGINX, and Kate Osborn, a software engineer at NGINX, discuss with The New Stack editor-in-chief, Heather Joslyn, the challenges associated with network ingress in Kubernetes clusters and introduce the Kubernetes Gateway API as a solution. NGINX has also introduced the NGINX Gateway Fabric, implementing the Kubernetes Gateway API as an alternative to network ingress. </itunes:subtitle>
      <itunes:keywords>nginx, ingress controllers, software developer, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, kubecon na, kubernetes, kubernetes gateway api, the new stack makers, ingress, software engineer, kubecon</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1456</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">81774d12-5262-461d-bbc9-5af772802fa6</guid>
      <title>What You Can Do with Vector Search</title>
      <description><![CDATA[<p>TNS publisher Alex Williams spoke with Ben Kramer, co-founder and CTO of Monterey.ai, and Cole Hoffer, Senior Software Engineer at Monterey.ai, to discuss how the company utilizes vector search to analyze user voices, feedback, reviews, bug reports, and support tickets from various channels to provide product development recommendations. Monterey.ai connects customer feedback to the development process, bridging customer support and leadership to align with user needs. Figma and Comcast are among the companies using this approach. </p><p>In this interview, Kramer discussed the challenges of building Large Language Model (LLM) based products and the importance of diverse skills in AI web companies and how Monterey employs Zilliz for vector search, leveraging Milvus, an open-source vector database. </p><p>Kramer highlighted Zilliz's flexibility, underlying Milvus technology, and choice of algorithms for semantic search. The decision to choose Zilliz was influenced by its performance in the company's use case, privacy and security features, and ease of integration into their private network. The cloud-managed solution and Zilliz's ability to meet their needs were crucial factors for Monterey AI, given its small team and preference to avoid managing infrastructure.</p><p>Learn more from The New Stack about Zilliz and vector database search:</p><p><a href="https://thenewstack.io/improving-chatgpts-ability-to-understand-ambiguous-prompts/">Improving ChatGPT’s Ability to Understand Ambiguous Prompts</a></p><p><a href="https://thenewstack.io/create-a-movie-recommendation-engine-with-milvus-and-python/">Create a Movie Recommendation Engine with Milvus and Python</a></p><p><a href="https://thenewstack.io/using-a-vector-database-to-search-white-house-speeches/">Using a Vector Database to Search White House Speeches</a></p><p> </p><p>Join our community of newsletter subscribers to stay on top of the news and at the top of your game. <a href="https://thenewstack.io/newsletter/" target="_blank">https://thenewstack.io/newsletter/</a></p><p> </p>
]]></description>
      <pubDate>Wed, 17 Jan 2024 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Monterey.ai, The New Stack, Zilliz, Alex Williams, Ben Kramer, Cole Hoffer)</author>
      <link>https://thenewstack.simplecast.com/episodes/what-you-can-do-with-vector-search-1pQyUxX_</link>
      <content:encoded><![CDATA[<p>TNS publisher Alex Williams spoke with Ben Kramer, co-founder and CTO of Monterey.ai, and Cole Hoffer, Senior Software Engineer at Monterey.ai, to discuss how the company utilizes vector search to analyze user voices, feedback, reviews, bug reports, and support tickets from various channels to provide product development recommendations. Monterey.ai connects customer feedback to the development process, bridging customer support and leadership to align with user needs. Figma and Comcast are among the companies using this approach.</p><p>In this interview, Kramer discussed the challenges of building Large Language Model (LLM)-based products, the importance of diverse skills in AI companies, and how Monterey employs Zilliz for vector search, leveraging Milvus, an open-source vector database.</p><p>Kramer highlighted Zilliz's flexibility, underlying Milvus technology, and choice of algorithms for semantic search. The decision to choose Zilliz was influenced by its performance in the company's use case, its privacy and security features, and its ease of integration into their private network. The cloud-managed solution and Zilliz's ability to meet their needs were crucial factors for Monterey.ai, given its small team and preference to avoid managing infrastructure.</p><p>Learn more from The New Stack about Zilliz and vector database search:</p><p><a href="https://thenewstack.io/improving-chatgpts-ability-to-understand-ambiguous-prompts/">Improving ChatGPT’s Ability to Understand Ambiguous Prompts</a></p><p><a href="https://thenewstack.io/create-a-movie-recommendation-engine-with-milvus-and-python/">Create a Movie Recommendation Engine with Milvus and Python</a></p><p><a href="https://thenewstack.io/using-a-vector-database-to-search-white-house-speeches/">Using a Vector Database to Search White House Speeches</a></p><p>Join our community of newsletter subscribers to stay on top of the news and at the top of your game: <a href="https://thenewstack.io/newsletter/" target="_blank">https://thenewstack.io/newsletter/</a></p>
]]></content:encoded>
      <enclosure length="24459433" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/a2354f09-9d3a-4fe1-b681-d88e5bc28bce/audio/9fe62872-46a6-4ead-b093-91c15e66b4e7/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What You Can Do with Vector Search</itunes:title>
      <itunes:author>Monterey.ai, The New Stack, Zilliz, Alex Williams, Ben Kramer, Cole Hoffer</itunes:author>
      <itunes:duration>00:25:28</itunes:duration>
      <itunes:summary>Ben Kramer, co-founder and CTO at Monterey.ai, and Cole Hoffer, Senior Software Engineer at Monterey.ai, discuss how the company utilizes vector search to analyze user voices, feedback, reviews, bug reports, and support tickets from various channels to provide product development recommendations in a podcast with The New Stack Makers.</itunes:summary>
      <itunes:subtitle>Ben Kramer, co-founder and CTO at Monterey.ai, and Cole Hoffer, Senior Software Engineer at Monterey.ai, discuss how the company utilizes vector search to analyze user voices, feedback, reviews, bug reports, and support tickets from various channels to provide product development recommendations in a podcast with The New Stack Makers.</itunes:subtitle>
      <itunes:keywords>monterey, ben kramer, software developer, tech podcast, the new stack, devops, cole hoffer, devops podcast, tech, developer podcast, monterey.ai, the new stack makers, software engineer, zilliz</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1455</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">81fe0286-5292-4d59-80a5-6220ce12e5d4</guid>
      <title>How Ethical Hacking Tricks Can Protect Your APIs and Apps</title>
      <description><![CDATA[<p>TNS host Heather Joslyn sits down with Ron Masas to discuss trade-offs when it comes to creating fast, secure applications and APIs. He notes a common issue of neglecting documentation and validation, leading to vulnerabilities. Weak authorization is a recurring problem, with instances where changing an invoice ID could expose another user's data.</p><p>Masas, an ethical hacker, highlights the risk posed by "zombie" APIs—applications that have become disused but remain potential targets. He suggests investigating frameworks, checking default configurations, and maintaining robust logging to enhance security. Collaboration between developers and security teams is crucial, with "security champions" in development teams and nuanced communication about vulnerabilities from security teams being essential elements for robust cybersecurity.</p><p>For further details, the podcast discusses case studies involving TikTok and Digital Ocean, Masas's views on AI and development, and anticipated security challenges.</p><p>Learn more from The New Stack about Imperva and API security:</p><p><a href="https://thenewstack.io/what-developers-need-to-know-about-business-logic-attacks/">What Developers Need to Know about Business Logic Attacks</a></p><p><a href="https://thenewstack.io/why-your-apis-arent-safe-and-what-to-do-about-it/">Why Your APIs Aren’t Safe — and What to Do about It</a></p><p><a href="https://thenewstack.io/the-limits-of-shift-left-whats-next-for-developer-security/">The Limits of Shift-Left: What’s Next for Developer Security</a></p><p> </p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></description>
      <pubDate>Wed, 10 Jan 2024 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Ron Masas, Imperva, Heather Joslyn, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-ethical-hacking-tricks-can-protect-your-apis-and-apps-yESOCfZF</link>
      <content:encoded><![CDATA[<p>TNS host Heather Joslyn sits down with Ron Masas to discuss trade-offs when it comes to creating fast, secure applications and APIs. He notes a common issue of neglecting documentation and validation, leading to vulnerabilities. Weak authorization is a recurring problem, with instances where changing an invoice ID could expose another user's data.</p><p>Masas, an ethical hacker, highlights the risk posed by "zombie" APIs—applications that have become disused but remain potential targets. He suggests investigating frameworks, checking default configurations, and maintaining robust logging to enhance security. Collaboration between developers and security teams is crucial, with "security champions" in development teams and nuanced communication about vulnerabilities from security teams being essential elements for robust cybersecurity.</p><p>For further details, the podcast discusses case studies involving TikTok and Digital Ocean, Masas's views on AI and development, and anticipated security challenges.</p><p>Learn more from The New Stack about Imperva and API security:</p><p><a href="https://thenewstack.io/what-developers-need-to-know-about-business-logic-attacks/">What Developers Need to Know about Business Logic Attacks</a></p><p><a href="https://thenewstack.io/why-your-apis-arent-safe-and-what-to-do-about-it/">Why Your APIs Aren’t Safe — and What to Do about It</a></p><p><a href="https://thenewstack.io/the-limits-of-shift-left-whats-next-for-developer-security/">The Limits of Shift-Left: What’s Next for Developer Security</a></p><p> </p><p><a href="https://thenewstack.io/newsletter/">Join our community of newsletter subscribers to stay on top of the news and at the top of your game.</a></p>
]]></content:encoded>
      <enclosure length="15685634" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/06a091da-1069-460b-b024-8feb5323c461/audio/f942fe4f-7cdf-4bfd-a72c-60234b0c7e66/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Ethical Hacking Tricks Can Protect Your APIs and Apps</itunes:title>
      <itunes:author>Ron Masas, Imperva, Heather Joslyn, The New Stack</itunes:author>
      <itunes:duration>00:16:20</itunes:duration>
      <itunes:summary>Ron Masas, lead vulnerability researcher at Imperva, emphasizes the trade-off between building fast and creating secure applications and APIs in a podcast with The New Stack Makers.</itunes:summary>
      <itunes:subtitle>Ron Masas, lead vulnerability researcher at Imperva, emphasizes the trade-off between building fast and creating secure applications and APIs in a podcast with The New Stack Makers.</itunes:subtitle>
      <itunes:keywords>ethical hacking, tech news, software developer, developer news, ron masas, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, imperva, the new stack makers, software engineer, apis, api security</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1454</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">bbf01f17-d785-4342-a92b-7efac1aabc4d</guid>
      <title>2023 Top Episodes - What’s Platform Engineering?</title>
      <description><![CDATA[<p>Platform engineering “is the art of designing and binding all of the different tech and tools that you have inside of an organization into a golden path that enables self service for developers and reduces cognitive load,” said Kaspar Von Grünberg, founder and CEO of Humanitec, in this episode of The New Stack Makers podcast.</p><p>This structure is important for individual contributors, Grünberg said, as well as backend engineers: “if you look at the operation teams, it reduces their burden to do repetitive things. And so platform engineers build and design internal developer platforms, and help and serve users."</p><p>This conversation, hosted by Heather Joslyn, TNS features editor, dove into platform engineering: what it is, how it works, the problems it is intended to solve, and how to get started in building a platform engineering operation in your organization. It also debunks some key fallacies around the concept.</p><p>Learn more from The New Stack about Platform Engineering and Humanitec:</p><p><a href="https://thenewstack.io/platform-engineering/">Platform Engineering Overview, News, and Trends</a></p><p><a href="https://thenewstack.io/the-hype-train-is-over-platform-engineering-is-here-to-stay/">The Hype Train Is Over. Platform Engineering Is Here to Stay</a></p><p><a href="https://thenewstack.io/9-steps-to-platform-engineering-hell/">9 Steps to Platform Engineering Hell</a></p>
]]></description>
      <pubDate>Wed, 03 Jan 2024 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Heather Joslyn, Humanitec, Kaspar Von Grünberg)</author>
      <link>https://thenewstack.simplecast.com/episodes/2023-top-episodes-whats-platform-engineering-uFNPtRha</link>
      <content:encoded><![CDATA[<p>Platform engineering “is the art of designing and binding all of the different tech and tools that you have inside of an organization into a golden path that enables self service for developers and reduces cognitive load,” said Kaspar Von Grünberg, founder and CEO of Humanitec, in this episode of The New Stack Makers podcast.</p><p>This structure is important for individual contributors, Grünberg said, as well as backend engineers: “if you look at the operation teams, it reduces their burden to do repetitive things. And so platform engineers build and design internal developer platforms, and help and serve users."</p><p>This conversation, hosted by Heather Joslyn, TNS features editor, dove into platform engineering: what it is, how it works, the problems it is intended to solve, and how to get started in building a platform engineering operation in your organization. It also debunks some key fallacies around the concept.</p><p>Learn more from The New Stack about Platform Engineering and Humanitec:</p><p><a href="https://thenewstack.io/platform-engineering/">Platform Engineering Overview, News, and Trends</a></p><p><a href="https://thenewstack.io/the-hype-train-is-over-platform-engineering-is-here-to-stay/">The Hype Train Is Over. Platform Engineering Is Here to Stay</a></p><p><a href="https://thenewstack.io/9-steps-to-platform-engineering-hell/">9 Steps to Platform Engineering Hell</a></p>
]]></content:encoded>
      <enclosure length="22797698" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/cdd7f6ae-0fdc-462c-84bd-0c04d08d068d/audio/11f22b93-a1b4-4670-9d01-6eb882eb7942/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>2023 Top Episodes - What’s Platform Engineering?</itunes:title>
      <itunes:author>The New Stack, Heather Joslyn, Humanitec, Kaspar Von Grünberg</itunes:author>
      <itunes:duration>00:23:44</itunes:duration>
      <itunes:summary>As we begin a new year, we are revisiting some of our most popular episodes of 2023. Kaspar Von Grünberg joined TNS host Heather Joslyn for a discussion about Platform Engineering that is still just as relevant today.</itunes:summary>
      <itunes:subtitle>As we begin a new year, we are revisiting some of our most popular episodes of 2023. Kaspar Von Grünberg joined TNS host Heather Joslyn for a discussion about Platform Engineering that is still just as relevant today.</itunes:subtitle>
      <itunes:keywords>software developer, software engineering, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, developers, humanitec, the new stack makers, software engineer, platform engineering</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1453</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7348cc70-45ce-440c-83e0-8fe105d44c99</guid>
      <title>2023 Top Episodes - The End of Programming is Nigh</title>
      <description><![CDATA[<p>Is the end of programming nigh? That's the big question posed in this episode recorded earlier in 2023. It was very popular among listeners, and with the topic being as relevant as ever, we wanted to wrap up the year by highlighting this conversation again.</p><p>If you ask Matt Welsh, he'd say yes, the end of programming is upon us. As Richard MacManus <a href="https://thenewstack.io/coding-sucks-anyway-matt-welsh-on-the-end-of-programming/" target="_blank">wrote on The New Stack</a>, Welsh is a former professor of computer science at Harvard who spoke at a virtual meetup of the <a href="http://www.chicagoacm.org/" target="_blank">Chicago Association for Computing Machinery</a> (ACM), explaining his thesis that ChatGPT and GitHub Copilot represent the beginning of the end of programming.</p><p>Welsh joined us on The New Stack Makers to discuss his perspectives about the end of programming and answer questions about the future of computer science, distributed computing, and more.</p><p>Welsh is now the founder of <a href="https://www.fixie.ai/">fixie.ai</a>, a platform the company is building to let businesses develop applications on top of large language models and extend them with different capabilities.</p><p>For 40 to 50 years, programming language design has had one goal: make it easier to write programs, Welsh said in the interview.</p><p>Still, programming languages are complex, Welsh said, and no amount of work is going to make them simple.</p><p>Learn more from The New Stack about AI and the future of software development:</p><p><a href="https://thenewstack.io/top-5-large-language-models-and-how-to-use-them-effectively/">Top 5 Large Language Models and How to Use Them Effectively</a></p><p><a href="https://thenewstack.io/30-non-trivial-ways-for-developer-to-use-gpt4/">30 Non-Trivial Ways for Developers to Use GPT-4</a></p><p><a href="https://thenewstack.io/developer-tips-in-ai-prompt-engineering/">Developer Tips in AI Prompt Engineering</a></p>
]]></description>
      <pubDate>Wed, 27 Dec 2023 22:44:18 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Matt Welsh, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/2023-top-episodes-the-end-of-programming-is-nigh-LJbKs57x</link>
      <content:encoded><![CDATA[<p>Is the end of programming nigh? That's the big question posed in this episode recorded earlier in 2023. It was very popular among listeners, and with the topic being as relevant as ever, we wanted to wrap up the year by highlighting this conversation again.</p><p>If you ask Matt Welsh, he'd say yes, the end of programming is upon us. As Richard MacManus <a href="https://thenewstack.io/coding-sucks-anyway-matt-welsh-on-the-end-of-programming/" target="_blank">wrote on The New Stack</a>, Welsh is a former professor of computer science at Harvard who spoke at a virtual meetup of the <a href="http://www.chicagoacm.org/" target="_blank">Chicago Association for Computing Machinery</a> (ACM), explaining his thesis that ChatGPT and GitHub Copilot represent the beginning of the end of programming.</p><p>Welsh joined us on The New Stack Makers to discuss his perspectives about the end of programming and answer questions about the future of computer science, distributed computing, and more.</p><p>Welsh is now the founder of <a href="https://www.fixie.ai/">fixie.ai</a>, a platform the company is building to let businesses develop applications on top of large language models and extend them with different capabilities.</p><p>For 40 to 50 years, programming language design has had one goal: make it easier to write programs, Welsh said in the interview.</p><p>Still, programming languages are complex, Welsh said, and no amount of work is going to make them simple.</p><p>Learn more from The New Stack about AI and the future of software development:</p><p><a href="https://thenewstack.io/top-5-large-language-models-and-how-to-use-them-effectively/">Top 5 Large Language Models and How to Use Them Effectively</a></p><p><a href="https://thenewstack.io/30-non-trivial-ways-for-developer-to-use-gpt4/">30 Non-Trivial Ways for Developers to Use GPT-4</a></p><p><a href="https://thenewstack.io/developer-tips-in-ai-prompt-engineering/">Developer Tips in AI Prompt Engineering</a></p>
]]></content:encoded>
      <enclosure length="30712591" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/37a62f18-ba74-4a51-b02c-714c6ae1dc0c/audio/e0ddcf7f-a214-4383-9957-487a81abdb45/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>2023 Top Episodes - The End of Programming is Nigh</itunes:title>
      <itunes:author>The New Stack, Matt Welsh, Alex Williams</itunes:author>
      <itunes:duration>00:31:59</itunes:duration>
      <itunes:summary>As we wrap up the year, we are revisiting some of our most popular episodes of 2023.
Is the end of programming nigh? If you ask Matt Welsh, he&apos;d say yes. TNS host Alex Williams is joined by Welsh, a former professor of computer science at Harvard and the founder of fixie.ai, to discuss whether we are witnessing the beginning of the end of programming.</itunes:summary>
      <itunes:subtitle>As we wrap up the year, we are revisiting some of our most popular episodes of 2023.
Is the end of programming nigh? If you ask Matt Welsh, he&apos;d say yes. TNS host Alex Williams is joined by Welsh, a former professor of computer science at Harvard and the founder of fixie.ai, to discuss whether we are witnessing the beginning of the end of programming.</itunes:subtitle>
      <itunes:keywords>tech news, coding, software developer, developer news, tech podcast, the new stack, devops, programming, devops podcast, tech, developer podcast, developers, software development, programmers, the new stack makers, software engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1452</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">da6f70cb-096c-48b2-9523-abe77767a6a1</guid>
      <title>The New Age of Virtualization</title>
      <description><![CDATA[<p>KubeVirt, a relatively new capability within Kubernetes, signifies a shift in the virtualization landscape, allowing operations teams to run KVM virtual machines nested in containers behind the Kubernetes API. This integration means that the Kubernetes API now encompasses the concept of virtual machines, enabling VM-based workloads to operate seamlessly within a cluster behind the API. This development addresses the challenge of transitioning traditional virtualized environments into cloud-native settings, where certain applications may resist containerization or require substantial investments for adaptation.</p><p>The emerging era of virtualization simplifies the execution of virtual machines without concern for the underlying infrastructure, presenting various opportunities and use cases. Noteworthy advantages include simplified migration of legacy applications without the need for containerization, thereby reducing associated costs.</p><p>KubeVirt 1.1, discussed at KubeCon in Chicago by Red Hat's Vladik Romanovsky and Nvidia's Ryan Hallisey, introduces features like memory hotplug and vCPU hotplug, emphasizing the stability of KubeVirt. The platform's stability now allows for the implementation of features that were previously constrained.</p><p>Learn more from The New Stack about KubeVirt and the Cloud Native Computing Foundation:</p><p><a href="https://thenewstack.io/the-future-of-vms-on-kubernetes-building-on-kubevirt/">The Future of VMs on Kubernetes: Building on KubeVirt</a></p><p><a href="https://thenewstack.io/a-platform-for-kubernetes/">A Platform for Kubernetes</a></p><p><a href="https://thenewstack.io/scaling-open-source-community-by-getting-closer-to-users/">Scaling Open Source Community by Getting Closer to Users</a></p>
]]></description>
      <pubDate>Thu, 21 Dec 2023 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The Cloud Native Computing Foundation, The New Stack, Ryan Hallisey, Vladik Romanovsky, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-new-age-of-virtualization-eYIXkghr</link>
      <content:encoded><![CDATA[<p>KubeVirt, a relatively new capability within Kubernetes, signifies a shift in the virtualization landscape, allowing operations teams to run KVM virtual machines nested in containers behind the Kubernetes API. This integration means that the Kubernetes API now encompasses the concept of virtual machines, enabling VM-based workloads to operate seamlessly within a cluster behind the API. This development addresses the challenge of transitioning traditional virtualized environments into cloud-native settings, where certain applications may resist containerization or require substantial investments for adaptation.</p><p>The emerging era of virtualization simplifies the execution of virtual machines without concern for the underlying infrastructure, presenting various opportunities and use cases. Noteworthy advantages include simplified migration of legacy applications without the need for containerization, thereby reducing associated costs.</p><p>KubeVirt 1.1, discussed at KubeCon in Chicago by Red Hat's Vladik Romanovsky and Nvidia's Ryan Hallisey, introduces features like memory hotplug and vCPU hotplug, emphasizing the stability of KubeVirt. The platform's stability now allows for the implementation of features that were previously constrained.</p><p>Learn more from The New Stack about KubeVirt and the Cloud Native Computing Foundation:</p><p><a href="https://thenewstack.io/the-future-of-vms-on-kubernetes-building-on-kubevirt/">The Future of VMs on Kubernetes: Building on KubeVirt</a></p><p><a href="https://thenewstack.io/a-platform-for-kubernetes/">A Platform for Kubernetes</a></p><p><a href="https://thenewstack.io/scaling-open-source-community-by-getting-closer-to-users/">Scaling Open Source Community by Getting Closer to Users</a></p>
]]></content:encoded>
      <enclosure length="15729519" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/b1ab7554-9fa9-4de4-b3bf-0a7a5db1f2a3/audio/a913cc79-3076-4010-8c0a-04db3b6a13e6/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The New Age of Virtualization</itunes:title>
      <itunes:author>The Cloud Native Computing Foundation, The New Stack, Ryan Hallisey, Vladik Romanovsky, Alex Williams</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/a8caf205-d5e5-4a25-b6fa-0595cee12011/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:16:23</itunes:duration>
      <itunes:summary>This discussion from KubeCon, hosted by Alex Williams, with Red Hat&apos;s Vladik Romanovsky and Nvidia&apos;s Ryan Hallisey focuses on KubeVirt 1.1. We get introduced to new features and find out about the stability of KubeVirt.</itunes:summary>
      <itunes:subtitle>This discussion from KubeCon, hosted by Alex Williams, with Red Hat&apos;s Vladik Romanovsky and Nvidia&apos;s Ryan Hallisey focuses on KubeVirt 1.1. We get introduced to new features and find out about the stability of KubeVirt.</itunes:subtitle>
      <itunes:keywords>virtual machines, tech news, kubevirt, software developer, developer news, tech podcast, the new stack, devops, cloud native, devops podcast, tech, developer podcast, kubernetes, the new stack makers, software engineer, the cloud native computing foundation, cncf, cloud computing, kubecon</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1451</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e018f585-7b4f-4fc0-a164-2ea76bf9879d</guid>
      <title>Kubernetes Goes Mainstream? With Calico, Yes</title>
      <description><![CDATA[<p>The Kubernetes landscape is evolving, shifting from the domain of visionaries and early adopters to a more mainstream audience. Tigera, represented by CEO Ratan Tipirneni at KubeCon North America in Chicago, recognizes the changing dynamics and the demand for simplified Kubernetes solutions. Tigera's open-source Calico security platform has been updated with a focus on mainstream users, presenting a cohesive and user-friendly solution. This update encompasses five key capabilities: vulnerability scoring, configuration hardening, runtime security, network security, and observability.</p><p>The aim is to provide users with a comprehensive view of their cluster's security through a zero to 100 scoring system, tracked over time. Tigera's recommendation engine suggests actions to enhance overall security based on the risk profile, evaluating factors such as egress traffic controls and workload isolation within dynamic Kubernetes environments. Tigera emphasizes the importance of understanding the actual flow of data across the network, using empirical data and observed behavior to build accurate security measures rather than relying on projections. This approach addresses the evolving needs of customers who seek not just vulnerability scores but insights into runtime behavior for a more robust security profile.</p><p>Learn more from The New Stack about Tigera and Cloud Native Security:</p><p><a href="https://thenewstack.io/cloud-native-network-security-whos-responsible/">Cloud Native Network Security: Who’s Responsible?</a></p><p><a href="https://thenewstack.io/turbocharging-host-workloads-with-calico-ebpf-and-xdp/">Turbocharging Host Workloads with Calico eBPF and XDP</a></p><p><a href="https://thenewstack.io/3-observability-best-practices-for-cloud-native-app-security/">3 Observability Best Practices for Cloud Native App Security</a></p>
]]></description>
      <pubDate>Wed, 13 Dec 2023 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Ratan Tipirneni, Tigera, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/kubernetes-goes-mainstream-with-calico-yes-BWfyf_TE</link>
      <content:encoded><![CDATA[<p>The Kubernetes landscape is evolving, shifting from the domain of visionaries and early adopters to a more mainstream audience. Tigera, represented by CEO Ratan Tipirneni at KubeCon North America in Chicago, recognizes the changing dynamics and the demand for simplified Kubernetes solutions. Tigera's open-source Calico security platform has been updated with a focus on mainstream users, presenting a cohesive and user-friendly solution. This update encompasses five key capabilities: vulnerability scoring, configuration hardening, runtime security, network security, and observability.</p><p>The aim is to provide users with a comprehensive view of their cluster's security through a zero to 100 scoring system, tracked over time. Tigera's recommendation engine suggests actions to enhance overall security based on the risk profile, evaluating factors such as egress traffic controls and workload isolation within dynamic Kubernetes environments. Tigera emphasizes the importance of understanding the actual flow of data across the network, using empirical data and observed behavior to build accurate security measures rather than relying on projections. This approach addresses the evolving needs of customers who seek not just vulnerability scores but insights into runtime behavior for a more robust security profile.</p><p>Learn more from The New Stack about Tigera and Cloud Native Security:</p><p><a href="https://thenewstack.io/cloud-native-network-security-whos-responsible/">Cloud Native Network Security: Who’s Responsible?</a></p><p><a href="https://thenewstack.io/turbocharging-host-workloads-with-calico-ebpf-and-xdp/">Turbocharging Host Workloads with Calico eBPF and XDP</a></p><p><a href="https://thenewstack.io/3-observability-best-practices-for-cloud-native-app-security/">3 Observability Best Practices for Cloud Native App Security</a></p>
]]></content:encoded>
      <enclosure length="19340269" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/55218984-a706-4af4-a852-1a12d925a147/audio/5c411861-e97c-421a-afed-33b1be6efd54/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Kubernetes Goes Mainstream? With Calico, Yes</itunes:title>
      <itunes:author>Ratan Tipirneni, Tigera, The New Stack, Alex Williams</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/1312a608-3b60-4cf6-bcb6-6c2bbd833669/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:20:08</itunes:duration>
      <itunes:summary>The Kubernetes landscape is shifting from the domain of visionaries and early adopters to a more mainstream audience. TNS host Alex Williams discusses this evolution with Ratan Tipirneni, CEO of Tigera, at KubeCon.</itunes:summary>
      <itunes:subtitle>The Kubernetes landscape is shifting from the domain of visionaries and early adopters to a more mainstream audience. TNS host Alex Williams discusses this evolution with Ratan Tipirneni, CEO of Tigera, at KubeCon.</itunes:subtitle>
      <itunes:keywords>ratan tipirneni, software developer, cybersecurity, tech podcast, the new stack, devops, cloud native, devops podcast, tech, developer podcast, kubernetes, tigera, the new stack makers, software engineer, cloud computing, kubecon, cloud security</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1450</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6cf9d849-e117-4c06-84e6-ea8e26591d8d</guid>
      <title>Hello, GitOps -- Boeing&apos;s Open Source Push</title>
      <description><![CDATA[<p>Boeing, with around 6,000 engineers, is emphasizing open source engagement by focusing on three main themes, according to Damani Corbin, who heads Boeing's Open Source office. He joined our host, Alex Williams, for a discussion at KubeCon+CloudNativeCon in Chicago.</p><p>The first priority Corbin talks about is simplifying the consumption of open source software for developers. Second, Boeing aims to facilitate developer contributions to open source projects, fostering involvement in communities like the Cloud Native Computing Foundation and the Linux Foundation. The third theme involves identifying opportunities for "inner sourcing" to share internally developed solutions across different groups.</p><p>Boeing is actively working to break down barriers and encourage code reuse across the organization, promoting participation in open source initiatives. Corbin highlights the importance of separating business-critical components from those that can be shared with the community, prioritizing security and extending efforts to enhance open source security practices. The organization is consolidating its open source strategy by collaborating with legal and information security teams.</p><p>Corbin emphasizes the goal of making open source involvement accessible and attractive, with a phased approach to encourage meaningful contributions and ultimately enabling the compensation of engineers for open source work in the future.</p><p>Learn more from The New Stack about Boeing and CNCF open source projects:</p><p><a href="https://thenewstack.io/how-boeing-uses-cloud-native/">How Boeing Uses Cloud Native</a></p><p><a href="https://thenewstack.io/how-open-source-has-turned-the-tables-on-enterprise-software/">How Open Source Has Turned the Tables on Enterprise Software</a></p><p><a href="https://thenewstack.io/scaling-open-source-community-by-getting-closer-to-users/">Scaling Open Source Community by Getting Closer to Users</a></p><p><a href="https://thenewstack.io/mercedes-benz-4-reasons-to-sponsor-open-source-projects/">Mercedes-Benz: 4 Reasons to Sponsor Open Source Projects</a></p>
]]></description>
      <pubDate>Tue, 12 Dec 2023 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The Cloud Native Computing Foundation, The New Stack, Damani Corbin, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/hello-gitops-boeings-open-source-push-gF_BMCsC</link>
      <content:encoded><![CDATA[<p>Boeing, with around 6,000 engineers, is emphasizing open source engagement by focusing on three main themes, according to Damani Corbin, who heads Boeing's Open Source office. He joined our host, Alex Williams, for a discussion at KubeCon+CloudNativeCon in Chicago.</p><p>The first priority Corbin talks about is simplifying the consumption of open source software for developers. Second, Boeing aims to facilitate developer contributions to open source projects, fostering involvement in communities like the Cloud Native Computing Foundation and the Linux Foundation. The third theme involves identifying opportunities for "inner sourcing" to share internally developed solutions across different groups.</p><p>Boeing is actively working to break down barriers and encourage code reuse across the organization, promoting participation in open source initiatives. Corbin highlights the importance of separating business-critical components from those that can be shared with the community, prioritizing security and extending efforts to enhance open source security practices. The organization is consolidating its open source strategy by collaborating with legal and information security teams.</p><p>Corbin emphasizes the goal of making open source involvement accessible and attractive, with a phased approach to encourage meaningful contributions and ultimately enabling the compensation of engineers for open source work in the future.</p><p>Learn more from The New Stack about Boeing and CNCF open source projects:</p><p><a href="https://thenewstack.io/how-boeing-uses-cloud-native/">How Boeing Uses Cloud Native</a></p><p><a href="https://thenewstack.io/how-open-source-has-turned-the-tables-on-enterprise-software/">How Open Source Has Turned the Tables on Enterprise Software</a></p><p><a href="https://thenewstack.io/scaling-open-source-community-by-getting-closer-to-users/">Scaling Open Source Community by Getting Closer to Users</a></p><p><a href="https://thenewstack.io/mercedes-benz-4-reasons-to-sponsor-open-source-projects/">Mercedes-Benz: 4 Reasons to Sponsor Open Source Projects</a></p>
]]></content:encoded>
      <enclosure length="18466316" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/858d95d0-e28a-4937-80e4-b6a64cf9fc89/audio/290990b7-c173-4df5-8197-204062f05e5e/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Hello, GitOps -- Boeing&apos;s Open Source Push</itunes:title>
      <itunes:author>The Cloud Native Computing Foundation, The New Stack, Damani Corbin, Alex Williams</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/976da1ac-bbb0-430c-885b-990c3910c8ac/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:19:14</itunes:duration>
      <itunes:summary>Take flight on Boeing&apos;s open-source journey with Damani Corbin, the architect behind Boeing&apos;s Open Source office, as he speaks with TNS host Alex Williams at KubeCon. Learn how they foster collaboration, contribute to projects, and prioritize code reuse for a secure, community-driven approach.</itunes:summary>
      <itunes:subtitle>Take flight on Boeing&apos;s open-source journey with Damani Corbin, the architect behind Boeing&apos;s Open Source office, as he speaks with TNS host Alex Williams at KubeCon. Learn how they foster collaboration, contribute to projects, and prioritize code reuse for a secure, community-driven approach.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, devops, cloud native, devops podcast, tech, developer podcast, developers, software development, the new stack makers, software engineer, boeing, open source, the cloud native computing foundation, cncf, cloud computing, kubecon</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1449</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9e8788f5-09be-4a55-9907-1872d2b5c278</guid>
      <title>How AWS Supports Open Source Work in the Kubernetes Universe</title>
      <description><![CDATA[<p>At KubeCon + CloudNativeCon North America 2022, Amazon Web Services (AWS) revealed plans to mirror Kubernetes assets hosted on Google Cloud, addressing the Cloud Native Computing Foundation's (CNCF) egress costs. A year later, the project, led by AWS's Davanum Srinivas, redirects image requests to the nearest cloud provider, reducing egress costs for users.</p><p>AWS's Todd Neal and Jonathan Innis discussed this on The New Stack Makers podcast recorded at KubeCon North America 2023. Neal explained the registry's functionality, allowing users to pull images directly from the respective cloud provider, avoiding egress costs.</p><p>The discussion also highlighted AWS's recent open source contributions, including beta features in kubectl, a prerelease of containerd 2.0, and Microsoft's support for Karpenter on Azure. Karpenter, an AWS-developed Kubernetes cluster autoscaler, simplifies node group configuration, dynamically selecting instance types and availability zones based on running pods.</p><p>The AWS team encouraged developers to contribute to Kubernetes ecosystem projects and join the sig-node CI subproject to enhance kubelet reliability. The conversation in this episode emphasized the benefits of open development for rapid feedback and community collaboration.</p><p>Learn more from The New Stack about AWS and Open Source:</p><p><a href="https://thenewstack.io/how-powertools-for-aws-lambda-grew-via-40-volunteers/">Powertools for AWS Lambda Grows with Help of Volunteers</a></p><p><a href="https://thenewstack.io/amazon-web-services-open-sources-a-kvm-based-fuzzing-framework/">Amazon Web Services Open Sources a KVM-Based Fuzzing Framework</a></p><p><a href="https://thenewstack.io/aws-why-we-support-sustainable-open-source/">AWS: Why We Support Sustainable Open Source</a></p>
]]></description>
      <pubDate>Thu, 7 Dec 2023 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Todd Neal, Jonathan Innis, Amazon Web Services, The New Stack, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-aws-supports-open-source-work-in-the-kubernetes-universe-ZrBEQNx7</link>
      <content:encoded><![CDATA[<p>At KubeCon + CloudNativeCon North America 2022, Amazon Web Services (AWS) revealed plans to mirror Kubernetes assets hosted on Google Cloud, addressing the Cloud Native Computing Foundation's (CNCF) egress costs. A year later, the project, led by AWS's Davanum Srinivas, redirects image requests to the nearest cloud provider, reducing egress costs for users.</p><p>AWS's Todd Neal and Jonathan Innis discussed this on The New Stack Makers podcast recorded at KubeCon North America 2023. Neal explained the registry's functionality, allowing users to pull images directly from the respective cloud provider, avoiding egress costs.</p><p>The discussion also highlighted AWS's recent open source contributions, including beta features in kubectl, a prerelease of containerd 2.0, and Microsoft's support for Karpenter on Azure. Karpenter, an AWS-developed Kubernetes cluster autoscaler, simplifies node group configuration, dynamically selecting instance types and availability zones based on running pods.</p><p>The AWS team encouraged developers to contribute to Kubernetes ecosystem projects and join the sig-node CI subproject to enhance kubelet reliability. The conversation in this episode emphasized the benefits of open development for rapid feedback and community collaboration.</p><p>Learn more from The New Stack about AWS and Open Source:</p><p><a href="https://thenewstack.io/how-powertools-for-aws-lambda-grew-via-40-volunteers/">Powertools for AWS Lambda Grows with Help of Volunteers</a></p><p><a href="https://thenewstack.io/amazon-web-services-open-sources-a-kvm-based-fuzzing-framework/">Amazon Web Services Open Sources a KVM-Based Fuzzing Framework</a></p><p><a href="https://thenewstack.io/aws-why-we-support-sustainable-open-source/">AWS: Why We Support Sustainable Open Source</a></p>
]]></content:encoded>
      <enclosure length="17047345" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/8e85f0ad-deb8-40ad-8387-8c0e8b2ce9e5/audio/c7e6ed39-8b43-490b-b9c0-8ae947c900cb/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How AWS Supports Open Source Work in the Kubernetes Universe</itunes:title>
      <itunes:author>Todd Neal, Jonathan Innis, Amazon Web Services, The New Stack, Heather Joslyn</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/7b821dec-b136-407a-9d66-3ec37a1bd8b0/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:17:45</itunes:duration>
      <itunes:summary>Engineers from Amazon Web Services provide updates on kubectl, containerd, Karpenter and more from KubeCon.</itunes:summary>
      <itunes:subtitle>Engineers from Amazon Web Services provide updates on kubectl, containerd, Karpenter and more from KubeCon.</itunes:subtitle>
      <itunes:keywords>tech news, kubecon north america, software developer, software engineering, tech podcast, the new stack, devops, cloud native, devops podcast, tech, developer podcast, software development, kubernetes, the new stack makers, software engineer, open source, cloud computing, kubecon, aws</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1448</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c4ca3470-b181-400e-ab98-b35bfdc64c26</guid>
      <title>2024 Forecast: What Can Developers Expect in the New Year?</title>
      <description><![CDATA[<p>In the past year, developers have faced both promise and uncertainty, particularly in the realm of generative AI. Heath Newburn, global field CTO for PagerDuty, joins TNS host Heather Joslyn to talk about the impact AI and other topics will have on developers in 2024.</p><p>Newburn anticipates a growing emphasis on DevSecOps in response to high-profile cyber incidents, noting a shift in executive attitudes toward security spending. The rise of automation-centric tools like Backstage signals a changing landscape in the link between development and operations tools. Notably, there's a move from focusing on efficiency gains to achieving new outcomes, with organizations seeking innovative products rather than marginal coding speed improvements.</p><p>Newburn highlights the importance of experimentation, encouraging organizations to identify areas for trial and error, learning swiftly from failures. The upcoming year is predicted to favor organizations capable of rapid experimentation and information gathering over perfection in code writing.</p><p>Listen to the full podcast episode as Newburn further discusses his predictions related to platform engineering, remote work, and the continued impact of generative AI.</p><p>Learn more from The New Stack about PagerDuty and trends in software development:</p><p><a href="https://thenewstack.io/how-ai-and-automation-can-improve-operational-resiliency/">How AI and Automation Can Improve Operational Resiliency</a></p><p><a href="https://thenewstack.io/why-infrastructure-as-code-is-vital-for-modern-devops/">Why Infrastructure as Code Is Vital for Modern DevOps</a></p><p><a href="https://thenewstack.io/operationalizing-ai-accelerating-automation-dataops-aiops/">Operationalizing AI: Accelerating Automation, DataOps, AIOps</a></p>
]]></description>
      <pubDate>Wed, 6 Dec 2023 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (PagerDuty, The New Stack, Heath Newburn, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/2024-forecast-what-can-developers-expect-in-the-new-year-ndSmX1Ys</link>
      <content:encoded><![CDATA[<p>In the past year, developers have faced both promise and uncertainty, particularly in the realm of generative AI. Heath Newburn, global field CTO for PagerDuty, joins TNS host Heather Joslyn to talk about the impact AI and other topics will have on developers in 2024.</p><p>Newburn anticipates a growing emphasis on DevSecOps in response to high-profile cyber incidents, noting a shift in executive attitudes toward security spending. The rise of automation-centric tools like Backstage signals a changing landscape in the link between development and operations tools. Notably, there's a move from focusing on efficiency gains to achieving new outcomes, with organizations seeking innovative products rather than marginal coding speed improvements.</p><p>Newburn highlights the importance of experimentation, encouraging organizations to identify areas for trial and error, learning swiftly from failures. The upcoming year is predicted to favor organizations capable of rapid experimentation and information gathering over perfection in code writing.</p><p>Listen to the full podcast episode as Newburn further discusses his predictions related to platform engineering, remote work, and the continued impact of generative AI.</p><p>Learn more from The New Stack about PagerDuty and trends in software development:</p><p><a href="https://thenewstack.io/how-ai-and-automation-can-improve-operational-resiliency/">How AI and Automation Can Improve Operational Resiliency</a></p><p><a href="https://thenewstack.io/why-infrastructure-as-code-is-vital-for-modern-devops/">Why Infrastructure as Code Is Vital for Modern DevOps</a></p><p><a href="https://thenewstack.io/operationalizing-ai-accelerating-automation-dataops-aiops/">Operationalizing AI: Accelerating Automation, DataOps, AIOps</a></p>
]]></content:encoded>
      <enclosure length="21387015" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/d5d5af30-def4-438d-a881-106b3d7528df/audio/13d7ee4f-80f7-4c6b-aa68-b7f0469bc361/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>2024 Forecast: What Can Developers Expect in the New Year?</itunes:title>
      <itunes:author>PagerDuty, The New Stack, Heath Newburn, Heather Joslyn</itunes:author>
      <itunes:duration>00:22:16</itunes:duration>
      <itunes:summary>Predictions about generative AI, developer productivity, DevSecOps and more are on the mind of Heath Newburn, global field CTO of PagerDuty, in this episode of The New Stack Makers.</itunes:summary>
      <itunes:subtitle>Predictions about generative AI, developer productivity, DevSecOps and more are on the mind of Heath Newburn, global field CTO of PagerDuty, in this episode of The New Stack Makers.</itunes:subtitle>
      <itunes:keywords>generative ai, coding, software developer, tech podcast, the new stack, devops, devops podcast, tech, pagerduty, developer podcast, developers, artificial intelligence, the new stack makers, software engineer, platform engineering, automation, devsecops</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1447</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b40de75f-c1d8-48ba-b3fc-458b0a1014f9</guid>
      <title>How to Know If You’re Building the Right Internal Tools</title>
      <description><![CDATA[<p>In this episode of The New Stack Makers, Rob Skillington, co-founder and CTO of Chronosphere, discusses the challenges engineers face in building tools for their organizations. Skillington emphasizes that the "build or buy" decision oversimplifies the issue of tooling and suggests that understanding the abstractions of a project is crucial. Engineers should consider where to build and where to buy, creating solutions that address the entire problem. Skillington advises against short-term thinking, urging innovators to consider the long-term landscape.</p><p>Drawing from his experience at Uber, Skillington highlights the importance of knowing the audience and customer base, even when they are colleagues. He shares a lesson learned when building a visualization platform for engineers at Uber, where understanding user adoption as a key performance indicator upfront could have improved the project's outcome.</p><p>Skillington also addresses the "not invented here syndrome," noting its prevalence in organizations like Microsoft and its potential impact on tool adoption. He suggests that younger companies, like Uber, may be more inclined to explore external solutions rather than building everything in-house. The conversation provides insights into Skillington's experiences and the considerations involved in developing internal tools and platforms.</p><p>Learn more from The New Stack about Software Engineering, Observability, and Chronosphere:</p><p><a href="https://thenewstack.io/cloud-native-observability-fighting-rising-costs-incidents/">Cloud Native Observability: Fighting Rising Costs, Incidents</a></p><p><a href="https://thenewstack.io/a-guide-to-measuring-developer-productivity/">A Guide to Measuring Developer Productivity </a></p><p><a href="https://thenewstack.io/4-key-observability-best-practices/">4 Key Observability Best Practices</a></p>
]]></description>
      <pubDate>Tue, 5 Dec 2023 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Chronosphere, Rob Skillington, Heather Joslyn, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-to-know-if-youre-building-the-right-internal-tools-q2tZg5oh</link>
      <content:encoded><![CDATA[<p>In this episode of The New Stack Makers, Rob Skillington, co-founder and CTO of Chronosphere, discusses the challenges engineers face in building tools for their organizations. Skillington emphasizes that the "build or buy" decision oversimplifies the issue of tooling and suggests that understanding the abstractions of a project is crucial. Engineers should consider where to build and where to buy, creating solutions that address the entire problem. Skillington advises against short-term thinking, urging innovators to consider the long-term landscape.</p><p>Drawing from his experience at Uber, Skillington highlights the importance of knowing the audience and customer base, even when they are colleagues. He shares a lesson learned when building a visualization platform for engineers at Uber, where understanding user adoption as a key performance indicator upfront could have improved the project's outcome.</p><p>Skillington also addresses the "not invented here syndrome," noting its prevalence in organizations like Microsoft and its potential impact on tool adoption. He suggests that younger companies, like Uber, may be more inclined to explore external solutions rather than building everything in-house. The conversation provides insights into Skillington's experiences and the considerations involved in developing internal tools and platforms.</p><p>Learn more from The New Stack about Software Engineering, Observability, and Chronosphere:</p><p><a href="https://thenewstack.io/cloud-native-observability-fighting-rising-costs-incidents/">Cloud Native Observability: Fighting Rising Costs, Incidents</a></p><p><a href="https://thenewstack.io/a-guide-to-measuring-developer-productivity/">A Guide to Measuring Developer Productivity </a></p><p><a href="https://thenewstack.io/4-key-observability-best-practices/">4 Key Observability Best Practices</a></p>
]]></content:encoded>
      <enclosure length="19314773" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/e7ba34c3-e9b4-4182-be21-b924588c891a/audio/32a64bbe-cdb7-482f-b58b-0bafa9a3f1db/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How to Know If You’re Building the Right Internal Tools</itunes:title>
      <itunes:author>Chronosphere, Rob Skillington, Heather Joslyn, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/f2f2f083-4906-4204-acd5-e953dd2d938b/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:20:07</itunes:duration>
      <itunes:summary>In this episode of The New Stack Makers, the co-founder of Chronosphere, Rob Skillington, joins us at KubeCon in Chicago to share what he’s learned from building platforms and tools for his colleagues.</itunes:summary>
      <itunes:subtitle>In this episode of The New Stack Makers, the co-founder of Chronosphere, Rob Skillington, joins us at KubeCon in Chicago to share what he’s learned from building platforms and tools for his colleagues.</itunes:subtitle>
      <itunes:keywords>kubecon north america, software developer, software engineering, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, developers, cloud native con, rob skillington, the new stack makers, software engineer, platform engineering, kubecon, chronosphere</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1446</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d73a7408-d383-403c-b978-4534f035c6b5</guid>
      <title>Hey Programming Language Developer -- Get Over Yourself</title>
      <description><![CDATA[<p>Jean Yang, founder of API observability company Akita Software, emphasizes that programming languages should be shaped by software development needs and data, rather than philosophical ideals. Yang, a former assistant professor at Carnegie Mellon University, believes that programming tools and processes should be influenced by actual use and data, prioritizing the developer experience over the language creator's beliefs. With a background in programming languages, Yang advocates for a shift away from the outdated notion that language developers are building solely for themselves.</p><p>In this discussion on The New Stack Makers, Yang underscores the importance of understanding the reality of developers' needs, especially as developer tools have evolved into a full-time industry. She argues for a focus on UX design and product fundamentals in developing tools, moving beyond the traditional mindset where developer tools were considered side projects.</p><p>Yang founded Akita to address the challenges of building reliable software systems in a world dominated by APIs and microservices. The company transitioned to API observability, recognizing the crucial role APIs play in enhancing the understandability of complex systems. Yang's commitment to improving software correctness and the belief in APIs as key to abstraction and ease of monitoring align with Postman's direction after acquiring Akita. Postman aims to serve developers worldwide, emphasizing the significance of APIs in complex systems.</p><p>Check out more episodes from The Tech Founder Odyssey series:</p><p><a href="https://thenewstack.io/how-byteboards-ceo-decided-to-fix-the-tech-interview/" target="_blank">How Byteboard’s CEO Decided to Fix the Broken Tech Interview</a></p><p><a href="https://thenewstack.io/a-lifelong-maker-tackles-a-developer-onboarding-problem/" target="_blank">A Lifelong ‘Maker’ Tackles a Developer Onboarding Problem</a></p><p><a href="https://thenewstack.io/how-teleports-leader-transitioned-from-engineer-to-ceo/" target="_blank">How Teleport’s Leader Transitioned from Engineer to CEO</a></p>
]]></description>
      <pubDate>Thu, 30 Nov 2023 19:10:05 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Jean Yang, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/hey-programming-language-developer-get-over-yourself-eDzKG393</link>
      <content:encoded><![CDATA[<p>Jean Yang, founder of API observability company Akita Software, emphasizes that programming languages should be shaped by software development needs and data, rather than philosophical ideals. Yang, a former assistant professor at Carnegie Mellon University, believes that programming tools and processes should be influenced by actual use and data, prioritizing the developer experience over the language creator's beliefs. With a background in programming languages, Yang advocates for a shift away from the outdated notion that language developers are building solely for themselves.</p><p>In this discussion on The New Stack Makers, Yang underscores the importance of understanding the reality of developers' needs, especially as developer tools have evolved into a full-time industry. She argues for a focus on UX design and product fundamentals in developing tools, moving beyond the traditional mindset where developer tools were considered side projects.</p><p>Yang founded Akita to address the challenges of building reliable software systems in a world dominated by APIs and microservices. The company transitioned to API observability, recognizing the crucial role APIs play in enhancing the understandability of complex systems. Yang's commitment to improving software correctness and the belief in APIs as key to abstraction and ease of monitoring align with Postman's direction after acquiring Akita. Postman aims to serve developers worldwide, emphasizing the significance of APIs in complex systems.</p><p>Check out more episodes from The Tech Founder Odyssey series:</p><p><a href="https://thenewstack.io/how-byteboards-ceo-decided-to-fix-the-tech-interview/" target="_blank">How Byteboard’s CEO Decided to Fix the Broken Tech Interview</a></p><p><a href="https://thenewstack.io/a-lifelong-maker-tackles-a-developer-onboarding-problem/" target="_blank">A Lifelong ‘Maker’ Tackles a Developer Onboarding Problem</a></p><p><a href="https://thenewstack.io/how-teleports-leader-transitioned-from-engineer-to-ceo/" target="_blank">How Teleport’s Leader Transitioned from Engineer to CEO</a></p>
]]></content:encoded>
      <enclosure length="25142250" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/3ccac9d1-f31f-4d53-b2e1-36cd9d428c56/audio/47fbd98c-bd76-408e-92b0-15bb44477a60/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Hey Programming Language Developer -- Get Over Yourself</itunes:title>
      <itunes:author>The New Stack, Jean Yang, Alex Williams</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/e02734ee-32cd-46c9-a3f2-60b12ecefd0c/3000x3000/the-tech-odyssey-logo-white-bg.jpg?aid=rss_feed"/>
      <itunes:duration>00:26:10</itunes:duration>
      <itunes:summary>In this special edition of The Tech Founder Odyssey, Jean Yang of Akita Software chats with TNS host Alex Williams about advocating for a pragmatic and data-driven approach to shape programming languages and tools in response to real-world developer needs.</itunes:summary>
      <itunes:subtitle>In this special edition of The Tech Founder Odyssey, Jean Yang of Akita Software chats with TNS host Alex Williams about advocating for a pragmatic and data-driven approach to shape programming languages and tools in response to real-world developer needs.</itunes:subtitle>
      <itunes:keywords>software developer, programming languages, tech podcast, the new stack, devops, programming, devops podcast, tech, developer podcast, software development, akita software, the new stack makers, software engineer, jean yang, api, postman, observability, tech founder odyssey</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1445</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f5df3075-cb98-49c7-a57a-a54b9095fc4c</guid>
      <title>Docker CTO Explains How Docker Can Support AI Efforts</title>
      <description><![CDATA[<p>Docker CTO Justin Cormack reveals that Docker has been a go-to tool for data scientists in AI and machine learning for years, primarily in specialized areas like image processing and prediction models. However, the release of OpenAI's ChatGPT last year sparked a significant surge in Docker's popularity within the AI community.</p><p>The focus shifted to large language models (LLMs), with a growing interest in the retrieval-augmented generation (RAG) stack. Docker's collaboration with Ollama enables developers to run Llama 2 and Code Llama locally, simplifying the process of starting and experimenting with AI applications. Additionally, partnerships with Neo4j and LangChain allow for enhanced support in storing and retrieving data for LLMs.</p><p>Cormack emphasizes the simplicity of getting started locally, addressing challenges related to GPU shortages in the cloud. Docker's efforts also include building an AI solution using its data, aiming to assist users in Dockerizing applications through an interactive notebook in Visual Studio Code. This tool leverages LLMs to analyze applications, suggest improvements, and generate Dockerfiles tailored to specific languages and applications.</p><p>Docker's integration with AI technologies demonstrates a commitment to making AI and Docker more accessible and user-friendly.</p><p>Learn more from The New Stack about AI and Docker:</p><p><a href="https://thenewstack.io/ai/">Artificial Intelligence News, Analysis, and Resources</a></p><p><a href="https://thenewstack.io/will-genai-take-jobs-no-says-docker-ceo/" target="_blank">Will GenAI Take Jobs? No, Says Docker CEO</a></p><p><a href="https://thenewstack.io/debugging-containers-in-kubernetes-its-complicated/" target="_blank">Debugging Containers in Kubernetes — It’s Complicated</a></p>
]]></description>
      <pubDate>Tue, 28 Nov 2023 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Docker, The New Stack, Justin Cormack, Loraine Lawson)</author>
      <link>https://thenewstack.simplecast.com/episodes/docker-cto-explains-how-docker-can-support-ai-efforts-48tjZ420</link>
      <content:encoded><![CDATA[<p>Docker CTO Justin Cormack reveals that Docker has been a go-to tool for data scientists in AI and machine learning for years, primarily in specialized areas like image processing and prediction models. However, the release of OpenAI's ChatGPT last year sparked a significant surge in Docker's popularity within the AI community.</p><p>The focus shifted to large language models (LLMs), with a growing interest in the retrieval-augmented generation (RAG) stack. Docker's collaboration with Ollama enables developers to run Llama 2 and Code Llama locally, simplifying the process of starting and experimenting with AI applications. Additionally, partnerships with Neo4j and LangChain allow for enhanced support in storing and retrieving data for LLMs.</p><p>Cormack emphasizes the simplicity of getting started locally, addressing challenges related to GPU shortages in the cloud. Docker's efforts also include building an AI solution using its data, aiming to assist users in Dockerizing applications through an interactive notebook in Visual Studio Code. This tool leverages LLMs to analyze applications, suggest improvements, and generate Dockerfiles tailored to specific languages and applications.</p><p>Docker's integration with AI technologies demonstrates a commitment to making AI and Docker more accessible and user-friendly.</p><p>Learn more from The New Stack about AI and Docker:</p><p><a href="https://thenewstack.io/ai/">Artificial Intelligence News, Analysis, and Resources</a></p><p><a href="https://thenewstack.io/will-genai-take-jobs-no-says-docker-ceo/" target="_blank">Will GenAI Take Jobs? No, Says Docker CEO</a></p><p><a href="https://thenewstack.io/debugging-containers-in-kubernetes-its-complicated/" target="_blank">Debugging Containers in Kubernetes — It’s Complicated</a></p>
]]></content:encoded>
      <enclosure length="11982097" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/72daf389-a13e-4d28-9eec-6ac01c64b6af/audio/f17d1537-3cc2-4f22-925b-e13b8b888e6f/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Docker CTO Explains How Docker Can Support AI Efforts</itunes:title>
      <itunes:author>Docker, The New Stack, Justin Cormack, Loraine Lawson</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/1bf881d8-e53e-4e74-bf61-72b9a1581d0a/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:12:28</itunes:duration>
      <itunes:summary>In this episode of The New Stack Makers, Docker CTO Justin Cormack explains how Docker is making AI models easier to deploy locally.</itunes:summary>
      <itunes:subtitle>In this episode of The New Stack Makers, Docker CTO Justin Cormack explains how Docker is making AI models easier to deploy locally.</itunes:subtitle>
      <itunes:keywords>machine learning, dockercon, software developer, tech podcast, the new stack, devops, data engineering, devops podcast, tech, developer podcast, artificial intelligence, docker, data science, large language models, the new stack makers, software engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1444</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">0e138867-9205-4959-9902-0866bf903a5e</guid>
      <title>What Does Open Mean in AI?</title>
      <description><![CDATA[<p>In this episode, Stefano Maffulli, Executive Director of the Open Source Initiative, discusses the need for a new definition as AI differs significantly from open source software. The complexity arises from the unique nature of AI, particularly large language models and transformers, which challenge traditional copyright frameworks. Maffulli emphasizes the urgency of establishing a definition for open source AI and discusses an ongoing effort to release a set of principles by the year's end.</p><p>The concept of "open" in the context of AI is undergoing a significant transformation, reminiscent of the early days of open source. The recent upheaval at OpenAI, resulting in the removal of CEO Sam Altman, reflects a profound shift in the technology community, prompting a reconsideration of the definition of "open" in the realm of AI.</p><p>The conversation highlights the parallels between the current AI debate and the early days of software development, emphasizing the necessity for a cohesive approach to navigate the evolving landscape. Altman's ousting underscores a clash of belief systems within OpenAI, with a "safetyist" community advocating caution and transparency, while Altman leans towards experimentation. The historical significance of open source, with a focus on trust preservation over technical superiority, serves as a guide for defining "open" and "AI" in a rapidly changing environment.</p><p>Learn more from The New Stack about AI and Open Source:</p><p><a href="https://thenewstack.io/ai/">Artificial Intelligence News, Analysis, and Resources</a></p><p><a href="https://thenewstack.io/open-source-development-threatened-in-europe/">Open Source Development Threatened in Europe</a></p><p><a href="https://thenewstack.io/the-ai-engineer-foundation-open-source-for-the-future-of-ai/">The AI Engineer Foundation: Open Source for the Future of AI</a></p>
]]></description>
      <pubDate>Wed, 22 Nov 2023 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Stefano Maffulli, Open Source Initiative, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/what-does-open-mean-in-ai-BrRNRM3v</link>
      <content:encoded><![CDATA[<p>In this episode, Stefano Maffulli, Executive Director of the Open Source Initiative, discusses the need for a new definition as AI differs significantly from open source software. The complexity arises from the unique nature of AI, particularly large language models and transformers, which challenge traditional copyright frameworks. Maffulli emphasizes the urgency of establishing a definition for open source AI and discusses an ongoing effort to release a set of principles by the year's end.</p><p>The concept of "open" in the context of AI is undergoing a significant transformation, reminiscent of the early days of open source. The recent upheaval at OpenAI, resulting in the removal of CEO Sam Altman, reflects a profound shift in the technology community, prompting a reconsideration of the definition of "open" in the realm of AI.</p><p>The conversation highlights the parallels between the current AI debate and the early days of software development, emphasizing the necessity for a cohesive approach to navigate the evolving landscape. Altman's ousting underscores a clash of belief systems within OpenAI, with a "safetyist" community advocating caution and transparency, while Altman leans towards experimentation. The historical significance of open source, with a focus on trust preservation over technical superiority, serves as a guide for defining "open" and "AI" in a rapidly changing environment.</p><p>Learn more from The New Stack about AI and Open Source:</p><p><a href="https://thenewstack.io/ai/">Artificial Intelligence News, Analysis, and Resources</a></p><p><a href="https://thenewstack.io/open-source-development-threatened-in-europe/">Open Source Development Threatened in Europe</a></p><p><a href="https://thenewstack.io/the-ai-engineer-foundation-open-source-for-the-future-of-ai/">The AI Engineer Foundation: Open Source for the Future of AI</a></p>
]]></content:encoded>
      <enclosure length="21748132" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/28f29026-3992-48be-b372-7955c3c89cc4/audio/ca0556ed-d6fe-4d1f-949b-b6cb7c02526b/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What Does Open Mean in AI?</itunes:title>
      <itunes:author>The New Stack, Stefano Maffulli, Open Source Initiative, Alex Williams</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/ba2f9591-959b-4e75-9e68-a9ace50b1acf/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:22:39</itunes:duration>
      <itunes:summary>The concept of &quot;open&quot; in the context of AI is undergoing a significant transformation, reminiscent of the early days of open source. The Executive Director of the Open Source Initiative sat down at the Open Source Summit to discuss this topic with TNS host Alex Williams.</itunes:summary>
      <itunes:subtitle>The concept of &quot;open&quot; in the context of AI is undergoing a significant transformation, reminiscent of the early days of open source. The Executive Director of the Open Source Initiative sat down at the Open Source Summit to discuss this topic with TNS host Alex Williams.</itunes:subtitle>
      <itunes:keywords>tech news, software developer, ai, tech podcast, technology, the new stack, stefano maffulli, devops, devops podcast, tech, developer podcast, artificial intelligence, the new stack makers, software engineer, open source summit eu, open source</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1443</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">73a51fbf-037c-40fc-8446-68bd0db3bd1c</guid>
      <title>Debugging Containers in Kubernetes</title>
      <description><![CDATA[<p>DockerCon showcased a commitment to enhancing the developer experience, with a particular focus on addressing the challenge of debugging containers in Kubernetes. The newly launched Docker Debug offers a language-independent toolbox for debugging both local and remote containerized applications.</p><p>By abstracting Kubernetes concepts like pods and namespaces, Docker aims to simplify debugging processes and shift the focus from container layers to the application itself. Our guest, Docker Principal Engineer Ivan Pedrazas, emphasized the need to eliminate unnecessary complexities in debugging, especially in the context of Kubernetes, where developers grapple with unfamiliar concerns exposed by the API.</p><p>Another Docker project, Tape, simplifies deployment by consolidating Kubernetes artifacts into a single package, streamlining the process for developers. The ultimate goal is to facilitate debugging of slim containers with minimal dependencies, optimizing security and user experience in Kubernetes development.</p><p>While progress is being made, bridging the gap between developer practices and platform engineering expectations remains an ongoing challenge.</p><p>Learn more from The New Stack about Kubernetes and Docker:</p><p><a href="https://thenewstack.io/kubernetes/">Kubernetes Overview, News, and Trends</a></p><p><a href="https://thenewstack.io/docker-rolls-out-3-tools-to-speed-and-ease-development/">Docker Rolls out 3 Tools to Speed and Ease Development</a></p><p><a href="https://thenewstack.io/will-genai-take-jobs-no-says-docker-ceo/" target="_blank">Will GenAI Take Jobs? No, Says Docker CEO</a></p>
]]></description>
      <pubDate>Tue, 21 Nov 2023 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Docker, The New Stack, Ivan Pedrazas, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/debugging-containers-in-kubernetes-LuhSwCaN</link>
      <content:encoded><![CDATA[<p>DockerCon showcased a commitment to enhancing the developer experience, with a particular focus on addressing the challenge of debugging containers in Kubernetes. The newly launched Docker Debug offers a language-independent toolbox for debugging both local and remote containerized applications.</p><p>By abstracting Kubernetes concepts like pods and namespaces, Docker aims to simplify debugging processes and shift the focus from container layers to the application itself. Our guest, Docker Principal Engineer Ivan Pedrazas, emphasized the need to eliminate unnecessary complexities in debugging, especially in the context of Kubernetes, where developers grapple with unfamiliar concerns exposed by the API.</p><p>Another Docker project, Tape, simplifies deployment by consolidating Kubernetes artifacts into a single package, streamlining the process for developers. The ultimate goal is to facilitate debugging of slim containers with minimal dependencies, optimizing security and user experience in Kubernetes development.</p><p>While progress is being made, bridging the gap between developer practices and platform engineering expectations remains an ongoing challenge.</p><p>Learn more from The New Stack about Kubernetes and Docker:</p><p><a href="https://thenewstack.io/kubernetes/">Kubernetes Overview, News, and Trends</a></p><p><a href="https://thenewstack.io/docker-rolls-out-3-tools-to-speed-and-ease-development/">Docker Rolls out 3 Tools to Speed and Ease Development</a></p><p><a href="https://thenewstack.io/will-genai-take-jobs-no-says-docker-ceo/" target="_blank">Will GenAI Take Jobs? No, Says Docker CEO</a></p>
]]></content:encoded>
      <enclosure length="15193278" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/49abed81-87e2-4a4f-a0cb-bdcadaa2c66f/audio/2da66b8e-9600-492f-a63c-7dd43fdf11c6/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Debugging Containers in Kubernetes</itunes:title>
      <itunes:author>Docker, The New Stack, Ivan Pedrazas, Alex Williams</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/bd580e5e-894b-47ab-852b-3b8996a7ddee/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:15:49</itunes:duration>
      <itunes:summary>We talked to Ivan Pedrazas at DockerCon to learn about Docker Debug, a new tool to speed up the debugging process.</itunes:summary>
      <itunes:subtitle>We talked to Ivan Pedrazas at DockerCon to learn about Docker Debug, a new tool to speed up the debugging process.</itunes:subtitle>
      <itunes:keywords>dockercon, software developer, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, developers, docker, software development, kubernetes, the new stack makers, software engineer, platform engineering, software testing, dockercon 23</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1442</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">30366a60-dea6-421b-9d23-07ac7e01bb0d</guid>
      <title>Integrating a Data Warehouse and a Data Lake</title>
      <description><![CDATA[<p>TNS host Alex Williams is joined by Florian Valeye, a data engineer at Back Market, to shed light on the evolving landscape of data engineering, particularly focusing on Delta Lake and his contributions to open source communities. As a member of the Delta Lake community, Valeye discusses the intersection of data warehouses and data lakes, emphasizing the need for a unified platform that breaks down traditional barriers.</p><p>Delta Lake, initially created by Databricks and now under the Linux Foundation, aims to enhance reliability, performance, and quality in data lakes. Valeye explains how Delta Lake addresses the challenges posed by the separation of data warehouses and data lakes, emphasizing the importance of providing ACID transactions, real-time processing, and scalable metadata.</p><p>Valeye's involvement in Delta Lake began as a response to the challenges faced at Back Market, a global marketplace for refurbished devices. The platform manages large datasets, and Delta Lake proved to be a pivotal solution in optimizing ETL processes and facilitating communication between data scientists and data engineers.</p><p>The conversation delves into Valeye's journey with Delta Lake, his introduction to the Rust programming language, and his role as a maintainer of the Rust-based library for Delta Lake. Valeye emphasizes Rust's importance in providing a high-level API with reliability and efficiency, offering a balanced approach for developers.</p><p>Looking ahead, Valeye envisions Delta Lake evolving beyond traditional data engineering, becoming a platform that seamlessly connects data scientists and engineers. He anticipates improvements in data storage optimization and envisions Delta Lake serving as a standard format for machine learning and AI applications.</p><p>The conversation concludes with Valeye reflecting on his future contributions, expressing a passion for Rust programming and an eagerness to explore evolving projects in the open-source community.</p><p>Learn more from The New Stack about Delta Lake and The Linux Foundation:</p><p><a href="https://thenewstack.io/delta-lake-a-layer-to-ensure-data-quality/">Delta Lake: A Layer to Ensure Data Quality</a></p><p><a href="https://thenewstack.io/data-2023-revenge-of-the-sql-nerds/">Data in 2023: Revenge of the SQL Nerds</a></p><p><a href="https://thenewstack.io/what-do-you-know-about-your-linux-system/">What Do You Know about Your Linux System?</a></p>
]]></description>
      <pubDate>Thu, 16 Nov 2023 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The Linux Foundation, Florian Valeye, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/integrating-a-data-warehouse-and-a-data-lake-q3RdFLyw</link>
      <content:encoded><![CDATA[<p>TNS host Alex Williams is joined by Florian Valeye, a data engineer at Back Market, to shed light on the evolving landscape of data engineering, particularly focusing on Delta Lake and his contributions to open source communities. As a member of the Delta Lake community, Valeye discusses the intersection of data warehouses and data lakes, emphasizing the need for a unified platform that breaks down traditional barriers.</p><p>Delta Lake, initially created by Databricks and now under the Linux Foundation, aims to enhance reliability, performance, and quality in data lakes. Valeye explains how Delta Lake addresses the challenges posed by the separation of data warehouses and data lakes, emphasizing the importance of providing ACID transactions, real-time processing, and scalable metadata.</p><p>Valeye's involvement in Delta Lake began as a response to the challenges faced at Back Market, a global marketplace for refurbished devices. The platform manages large datasets, and Delta Lake proved to be a pivotal solution in optimizing ETL processes and facilitating communication between data scientists and data engineers.</p><p>The conversation delves into Valeye's journey with Delta Lake, his introduction to the Rust programming language, and his role as a maintainer of the Rust-based library for Delta Lake. Valeye emphasizes Rust's importance in providing a high-level API with reliability and efficiency, offering a balanced approach for developers.</p><p>Looking ahead, Valeye envisions Delta Lake evolving beyond traditional data engineering, becoming a platform that seamlessly connects data scientists and engineers. He anticipates improvements in data storage optimization and envisions Delta Lake serving as a standard format for machine learning and AI applications.</p><p>The conversation concludes with Valeye reflecting on his future contributions, expressing a passion for Rust programming and an eagerness to explore evolving projects in the open-source community.</p><p>Learn more from The New Stack about Delta Lake and The Linux Foundation:</p><p><a href="https://thenewstack.io/delta-lake-a-layer-to-ensure-data-quality/">Delta Lake: A Layer to Ensure Data Quality</a></p><p><a href="https://thenewstack.io/data-2023-revenge-of-the-sql-nerds/">Data in 2023: Revenge of the SQL Nerds</a></p><p><a href="https://thenewstack.io/what-do-you-know-about-your-linux-system/">What Do You Know about Your Linux System?</a></p>
]]></content:encoded>
      <enclosure length="20159887" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/10767bb7-8133-4550-9c12-b8cdf1a35f64/audio/08afe06d-19e5-4552-9601-a6087c56a014/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Integrating a Data Warehouse and a Data Lake</itunes:title>
      <itunes:author>The Linux Foundation, Florian Valeye, The New Stack, Alex Williams</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/2271cf24-882e-459b-ae60-ca4bf08a3a36/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:20:59</itunes:duration>
      <itunes:summary>This conversation from the Open Source Summit in Spain provides valuable insights into the significance of Delta Lake, the role of Rust in data engineering, and the collaborative nature of open-source communities.</itunes:summary>
      <itunes:subtitle>This conversation from the Open Source Summit in Spain provides valuable insights into the significance of Delta Lake, the role of Rust in data engineering, and the collaborative nature of open-source communities.</itunes:subtitle>
      <itunes:keywords>the linux foundation, software developer, rust, data engineer, tech podcast, the new stack, devops, data engineering, devops podcast, tech, developer podcast, the new stack makers, software engineer, delta lake, open source, rust programming, open source summit</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1441</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f07490d0-62f9-4fb1-84ec-035447c98c18</guid>
      <title>WebAssembly&apos;s Status in Computing</title>
      <description><![CDATA[<p>Liam Crilly, Senior Director of Product Management at NGINX, discussed the potential of WebAssembly (Wasm) during this recording at the Open Source Summit in Bilbao, Spain. With over three decades of experience, Crilly highlighted WebAssembly's promise of universal portability, allowing developers to build once and run anywhere across a network of devices.</p><p>While Wasm is more mature on the client side in browsers, its deployment on the server side is less developed, lacking sufficient runtimes and toolchains. Crilly noted that WebAssembly acts as a powerful compiler target, enabling the generation of well-optimized instruction set code. Despite the need for a virtual machine, WebAssembly's abstraction layer eliminates hardware-specific concerns, providing near-native compute performance through additional layers of optimization.</p><p>Learn more from The New Stack about WebAssembly and NGINX:</p><p><a href="https://thenewstack.io/webassembly/">WebAssembly Overview, News and Trends</a></p><p><a href="https://thenewstack.io/why-webassembly-will-disrupt-the-operating-system/">Why WebAssembly Will Disrupt the Operating System</a></p><p><a href="https://thenewstack.io/true-portability-is-the-killer-use-case-for-webassembly/">True Portability Is the Killer Use Case for WebAssembly</a></p><p><a href="https://thenewstack.io/four-factors-of-a-webassembly-native-world/">4 Factors of a WebAssembly Native World</a></p>
]]></description>
      <pubDate>Tue, 14 Nov 2023 18:45:02 +0000</pubDate>
      <author>podcasts@thenewstack.io (Liam Crilly, Bruce Gain, NGINX, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/webassemblys-status-in-computing-VoSyC6jr</link>
      <content:encoded><![CDATA[<p>Liam Crilly, Senior Director of Product Management at NGINX, discussed the potential of WebAssembly (Wasm) during this recording at the Open Source Summit in Bilbao, Spain. With over three decades of experience, Crilly highlighted WebAssembly's promise of universal portability, allowing developers to build once and run anywhere across a network of devices.</p><p>While Wasm is more mature on the client side in browsers, its deployment on the server side is less developed, lacking sufficient runtimes and toolchains. Crilly noted that WebAssembly acts as a powerful compiler target, enabling the generation of well-optimized instruction set code. Despite the need for a virtual machine, WebAssembly's abstraction layer eliminates hardware-specific concerns, providing near-native compute performance through additional layers of optimization.</p><p>Learn more from The New Stack about WebAssembly and NGINX:</p><p><a href="https://thenewstack.io/webassembly/">WebAssembly Overview, News and Trends</a></p><p><a href="https://thenewstack.io/why-webassembly-will-disrupt-the-operating-system/">Why WebAssembly Will Disrupt the Operating System</a></p><p><a href="https://thenewstack.io/true-portability-is-the-killer-use-case-for-webassembly/">True Portability Is the Killer Use Case for WebAssembly</a></p><p><a href="https://thenewstack.io/four-factors-of-a-webassembly-native-world/">4 Factors of a WebAssembly Native World</a></p>
]]></content:encoded>
      <enclosure length="22731590" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/d6fdbb16-99a9-475a-b349-1a07e7ae207e/audio/d544cb1c-6422-4f67-a19e-efbacddca57b/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>WebAssembly&apos;s Status in Computing</itunes:title>
      <itunes:author>Liam Crilly, Bruce Gain, NGINX, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/d96ef88d-8067-4319-bc8d-2e8d219cbb69/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:23:40</itunes:duration>
      <itunes:summary>Liam Crilly of NGINX joins TNS host Bruce Gain at the Open Source Summit to share his unique perspective on WebAssembly.</itunes:summary>
      <itunes:subtitle>Liam Crilly of NGINX joins TNS host Bruce Gain at the Open Source Summit to share his unique perspective on WebAssembly.</itunes:subtitle>
      <itunes:keywords>nginx, software developer, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, developers, webassembly, the new stack makers, software engineer, open source, oss eu, open source summit</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1440</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">867c0571-52ad-4759-9606-0f88b536334f</guid>
      <title>PostgreSQL Takes a New Turn</title>
      <description><![CDATA[<p>Jonathan Katz, a principal product manager at Amazon Web Services, discusses the evolution of PostgreSQL in an episode of The New Stack Makers. He notes that PostgreSQL's uses have expanded significantly since its inception and now cover a wide range of applications and workloads. Initially considered niche, it faced competition from both open-source and commercial relational database systems. Katz's involvement in the PostgreSQL community began when he was an app developer; he later contributed by organizing events.</p><p>PostgreSQL originated from academic research at the University of California at Berkeley in the mid-1980s, becoming an open-source project in 1994. In the mid-1990s, proprietary databases like Oracle, IBM DB2, and Microsoft SQL Server dominated the market, while open-source alternatives such as MySQL, and later MariaDB and SQLite, emerged.</p><p>PostgreSQL 16 introduces logical replication from standby servers, enhancing scalability by offloading work from the primary server. The meticulous design process within the PostgreSQL community leads to stable and reliable features. Katz mentions the development of Direct I/O as a long-term feature to reduce latency and improve data writing performance, although it will take several years to implement.</p><p>Amazon Web Services built Amazon RDS for PostgreSQL to simplify application development. This managed service handles operational tasks such as deployment, backups, and monitoring, allowing developers to focus on their applications. Amazon RDS supports multiple PostgreSQL releases, making it easier for businesses to manage and maintain their databases.</p><p>Learn more from The New Stack about PostgreSQL and AWS:</p><p><a href="https://thenewstack.io/postgresql-16-expands-analytics-capabilities/">PostgreSQL 16 Expands Analytics Capabilities</a></p><p><a href="https://thenewstack.io/how-powertools-for-aws-lambda-grew-via-40-volunteers/">Powertools for AWS Lambda Grows with Help of Volunteers</a></p><p><a href="https://thenewstack.io/how-donating-open-source-code-can-advance-your-career/">How Donating Open Source Code Can Advance Your Career</a></p>
]]></description>
      <pubDate>Wed, 08 Nov 2023 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Amazon Web Services, Jonathan Katz, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/postgresql-takes-a-new-turn-3XPJPDyP</link>
      <content:encoded><![CDATA[<p>Jonathan Katz, a principal product manager at Amazon Web Services, discusses the evolution of PostgreSQL in an episode of The New Stack Makers. He notes that PostgreSQL's uses have expanded significantly since its inception and now cover a wide range of applications and workloads. Initially considered niche, it faced competition from both open-source and commercial relational database systems. Katz's involvement in the PostgreSQL community began when he was an app developer; he later contributed by organizing events.</p><p>PostgreSQL originated from academic research at the University of California at Berkeley in the mid-1980s, becoming an open-source project in 1994. In the mid-1990s, proprietary databases like Oracle, IBM DB2, and Microsoft SQL Server dominated the market, while open-source alternatives such as MySQL, and later MariaDB and SQLite, emerged.</p><p>PostgreSQL 16 introduces logical replication from standby servers, enhancing scalability by offloading work from the primary server. The meticulous design process within the PostgreSQL community leads to stable and reliable features. Katz mentions the development of Direct I/O as a long-term feature to reduce latency and improve data writing performance, although it will take several years to implement.</p><p>Amazon Web Services built Amazon RDS for PostgreSQL to simplify application development. This managed service handles operational tasks such as deployment, backups, and monitoring, allowing developers to focus on their applications. Amazon RDS supports multiple PostgreSQL releases, making it easier for businesses to manage and maintain their databases.</p><p>Learn more from The New Stack about PostgreSQL and AWS:</p><p><a href="https://thenewstack.io/postgresql-16-expands-analytics-capabilities/">PostgreSQL 16 Expands Analytics Capabilities</a></p><p><a href="https://thenewstack.io/how-powertools-for-aws-lambda-grew-via-40-volunteers/">Powertools for AWS Lambda Grows with Help of Volunteers</a></p><p><a href="https://thenewstack.io/how-donating-open-source-code-can-advance-your-career/">How Donating Open Source Code Can Advance Your Career</a></p>
]]></content:encoded>
      <enclosure length="20286528" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/9a2cf986-ef75-44b1-9cc5-459b7a21bce7/audio/36e2d69e-7cb4-4f2e-ad51-312d464f9423/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>PostgreSQL Takes a New Turn</itunes:title>
      <itunes:author>The New Stack, Amazon Web Services, Jonathan Katz, Alex Williams</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/4fe7cd39-95cb-4cb9-aa7c-66812939280f/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:21:07</itunes:duration>
      <itunes:summary>TNS host Alex Williams talks to Jonathan Katz of AWS about the evolution of PostgreSQL at the Open Source Summit in Spain.</itunes:summary>
      <itunes:subtitle>TNS host Alex Williams talks to Jonathan Katz of AWS about the evolution of PostgreSQL at the Open Source Summit in Spain.</itunes:subtitle>
      <itunes:keywords>software developer, open source summit europe, tech podcast, the new stack, devops, devops podcast, amazon web services, tech, developer podcast, the new stack makers, software engineer, open source, oss eu, database, application development, aws, postgresql, open source summit</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1439</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5db75fb4-7f00-4cda-8b05-d3c4b12ba065</guid>
      <title>The Limits of Shift-Left: What’s Next for Developer Security</title>
      <description><![CDATA[<p>The practice of "shift left," which involves moving security concerns to the code level and increasing developers' responsibility for security, is facing a backlash, with both developers and security professionals expressing concerns. Peter Klimek, director of technology at Imperva, discusses the reasons behind this backlash in this episode.</p><p>Some organizations may have exhausted the benefits of shift left, while the main challenge for many isn't finding vulnerabilities but finding time to address them. Attackers are now targeting business logic vulnerabilities rather than vulnerable dependencies, which shift-left tools are better at identifying. These business logic vulnerabilities are often tied to authorization decisions, making them harder to address through code-level tools. Additionally, attacks increasingly target the front end, such as APIs and shopping carts.</p><p>Klimek emphasizes the need for development and security teams to collaborate and advocates for using DORA metrics to assess the impact of security efforts on the development pipeline. Some organizations may reach a point where the tools added to the development lifecycle become counterproductive, he notes. DORA metrics can help determine when this occurs and provide valuable insights for security teams.</p><p>Learn more from The New Stack about Developer Security and Imperva:</p><p><a href="https://thenewstack.io/why-your-apis-arent-safe-and-what-to-do-about-it/">Why Your APIs Aren’t Safe — and What to Do about It</a></p><p><a href="https://thenewstack.io/what-developers-need-to-know-about-business-logic-attacks/">What Developers Need to Know about Business Logic Attacks</a></p><p><a href="https://thenewstack.io/are-your-development-practices-introducing-api-security-risks/">Are Your Development Practices Introducing API Security Risks?</a></p>
]]></description>
      <pubDate>Tue, 7 Nov 2023 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Imperva, Loraine Lawson, Peter Klimek, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-limits-of-shift-left-whats-next-for-developer-security-GISOEYdk</link>
      <content:encoded><![CDATA[<p>The practice of "shift left," which involves moving security concerns to the code level and increasing developers' responsibility for security, is facing a backlash, with both developers and security professionals expressing concerns. Peter Klimek, director of technology at Imperva, discusses the reasons behind this backlash in this episode.</p><p>Some organizations may have exhausted the benefits of shift left, while the main challenge for many isn't finding vulnerabilities but finding time to address them. Attackers are now targeting business logic vulnerabilities rather than vulnerable dependencies, which shift-left tools are better at identifying. These business logic vulnerabilities are often tied to authorization decisions, making them harder to address through code-level tools. Additionally, attacks increasingly target the front end, such as APIs and shopping carts.</p><p>Klimek emphasizes the need for development and security teams to collaborate and advocates for using DORA metrics to assess the impact of security efforts on the development pipeline. Some organizations may reach a point where the tools added to the development lifecycle become counterproductive, he notes. DORA metrics can help determine when this occurs and provide valuable insights for security teams.</p><p>Learn more from The New Stack about Developer Security and Imperva:</p><p><a href="https://thenewstack.io/why-your-apis-arent-safe-and-what-to-do-about-it/">Why Your APIs Aren’t Safe — and What to Do about It</a></p><p><a href="https://thenewstack.io/what-developers-need-to-know-about-business-logic-attacks/">What Developers Need to Know about Business Logic Attacks</a></p><p><a href="https://thenewstack.io/are-your-development-practices-introducing-api-security-risks/">Are Your Development Practices Introducing API Security Risks?</a></p>
]]></content:encoded>
      <enclosure length="21778225" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/8205d6e5-7ea7-41c9-b465-3e56c88bbfa1/audio/ee998cf5-d706-443a-b585-22d80a0a5047/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The Limits of Shift-Left: What’s Next for Developer Security</itunes:title>
      <itunes:author>Imperva, Loraine Lawson, Peter Klimek, The New Stack</itunes:author>
      <itunes:duration>00:22:41</itunes:duration>
      <itunes:summary>Peter Klimek of Imperva joins us to discuss how &quot;shift left&quot; is experiencing a bit of a backlash. And it&apos;s not just from developers, but from security professionals as well.</itunes:summary>
      <itunes:subtitle>Peter Klimek of Imperva joins us to discuss how &quot;shift left&quot; is experiencing a bit of a backlash. And it&apos;s not just from developers, but from security professionals as well.</itunes:subtitle>
      <itunes:keywords>shift left, software developer, cybersecurity, it security, software engineering, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, software development, imperva, the new stack makers, software engineer, api, developer security, dora metrics</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1438</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e4d55936-d2e1-40f2-adc1-39fabf361943</guid>
      <title>How AI and Automation Can Improve Operational Resiliency</title>
      <description><![CDATA[<p>Operational resiliency, as explained by Dormain Drewitz of PagerDuty, involves the ability to bounce back and recover from setbacks, not only technically but also in terms of organizational recovery. True resiliency means maintaining the willingness to take risks even after facing challenges. In a conversation with Heather Joslyn on the New Stack Makers podcast, Drewitz discussed the role of AI and automation in achieving operational resiliency, especially in a context where teams are under pressure to be more productive.</p><p>Automation, including generative AI code completion tools, is increasingly used to boost developer productivity. However, this may lead to shifting bottlenecks from developers to operations, creating new challenges. Drewitz emphasized the importance of considering the entire value chain and identifying areas where AI and automation can assist. For instance, automating repetitive tasks in incident response, such as checking APIs, closing ports, or database checks, can significantly reduce interruptions and productivity losses.</p><p>PagerDuty's AI-powered platform leverages generative AI to automate tasks and create runbooks for incident handling, allowing engineers to focus on resolving root causes and restoring services. This includes drafting status updates and incident postmortem reports, streamlining incident response and saving time. 
Having an operations platform that can generate draft reports at the push of a button simplifies the process, making it easier to review and edit without starting from scratch.</p><p>Learn more from The New Stack about AI, Automation, Incident Response, and PagerDuty:</p><p><a href="https://thenewstack.io/operationalizing-ai-accelerating-automation-dataops-aiops/">Operationalizing AI: Accelerating Automation, DataOps, AIOps</a></p><p><a href="https://thenewstack.io/three-ways-automation-can-improve-workplace-culture/">Three Ways Automation Can Improve Workplace Culture</a></p><p><a href="https://thenewstack.io/incident-response-three-ts-to-rule-them-all/">Incident Response: Three Ts to Rule Them All</a></p><p><a href="https://thenewstack.io/four-ways-to-win-executive-buy-in-for-automation/">Four Ways to Win Executive Buy-In for Automation</a></p>
]]></description>
      <pubDate>Fri, 3 Nov 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, PagerDuty, Dormain Drewitz, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-ai-and-automation-can-improve-operational-resiliency-GdQMCnwD</link>
      <content:encoded><![CDATA[<p>Operational resiliency, as explained by Dormain Drewitz of PagerDuty, involves the ability to bounce back and recover from setbacks, not only technically but also in terms of organizational recovery. True resiliency means maintaining the willingness to take risks even after facing challenges. In a conversation with Heather Joslyn on the New Stack Makers podcast, Drewitz discussed the role of AI and automation in achieving operational resiliency, especially in a context where teams are under pressure to be more productive.</p><p>Automation, including generative AI code completion tools, is increasingly used to boost developer productivity. However, this may lead to shifting bottlenecks from developers to operations, creating new challenges. Drewitz emphasized the importance of considering the entire value chain and identifying areas where AI and automation can assist. For instance, automating repetitive tasks in incident response, such as checking APIs, closing ports, or database checks, can significantly reduce interruptions and productivity losses.</p><p>PagerDuty's AI-powered platform leverages generative AI to automate tasks and create runbooks for incident handling, allowing engineers to focus on resolving root causes and restoring services. This includes drafting status updates and incident postmortem reports, streamlining incident response and saving time. 
Having an operations platform that can generate draft reports at the push of a button simplifies the process, making it easier to review and edit without starting from scratch.</p><p>Learn more from The New Stack about AI, Automation, Incident Response, and PagerDuty:</p><p><a href="https://thenewstack.io/operationalizing-ai-accelerating-automation-dataops-aiops/">Operationalizing AI: Accelerating Automation, DataOps, AIOps</a></p><p><a href="https://thenewstack.io/three-ways-automation-can-improve-workplace-culture/">Three Ways Automation Can Improve Workplace Culture</a></p><p><a href="https://thenewstack.io/incident-response-three-ts-to-rule-them-all/">Incident Response: Three Ts to Rule Them All</a></p><p><a href="https://thenewstack.io/four-ways-to-win-executive-buy-in-for-automation/">Four Ways to Win Executive Buy-In for Automation</a></p>
]]></content:encoded>
      <enclosure length="20039097" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/e2cc0d2b-3891-4bff-bd52-b8221261d1a9/audio/7fd6e05a-115f-44b5-8a07-6254effb7705/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How AI and Automation Can Improve Operational Resiliency</itunes:title>
      <itunes:author>The New Stack, PagerDuty, Dormain Drewitz, Heather Joslyn</itunes:author>
      <itunes:duration>00:20:52</itunes:duration>
      <itunes:summary>Dormain Drewitz of PagerDuty joins us to talk about how using automation fueled by AI can help companies bounce back from incidents faster and gain the confidence to keep taking risks.</itunes:summary>
      <itunes:subtitle>Dormain Drewitz of PagerDuty joins us to talk about how using automation fueled by AI can help companies bounce back from incidents faster and gain the confidence to keep taking risks.</itunes:subtitle>
      <itunes:keywords>software developer, software engineering, tech podcast, the new stack, devops, devops podcast, tech, pagerduty, developer podcast, artificial intelligence, software development, the new stack makers, software engineer, automation</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1437</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c2fc9c22-1945-4022-b270-c1625274fed3</guid>
      <title>Will GenAI Take Developer Jobs? Docker CEO Weighs In</title>
      <description><![CDATA[<p>In this episode, Scott Johnston, CEO of Docker, highlights the evolving role of developers, emphasizing their increasing importance in architectural decision-making and tool development for applications. This shift in prioritizing a great developer experience and rapid tool development has led to substantial spending in the industry.</p><p>Johnston expressed confidence that integrating generative AI into the developer experience will drive business growth and expand the customer base. He downplayed concerns about AI taking jobs, explaining that it would alleviate repetitive tasks, enabling developers to focus on more complex problem-solving. Johnston likened this evolution to expanding bike lanes in a city: just as more lanes bring more bike traffic, greater development speed and efficiency bring more apps.</p><p>In his talk with TNS host Alex Williams, Johnston emphasized that each advancement in programming languages and tools has expanded the developer market and driven greater demand for applications. Notably, the demand for over 750 million apps in the next two years, as reported by IDC, demonstrates the ever-increasing appetite for creative solutions from developers.</p><p>Overall, Johnston sees the integration of generative AI and increasing development velocity as a multifaceted expansion that benefits developers and meets growing demand for applications in the market.</p><p>Learn more from The New Stack about Generative AI and Docker:</p><p><a href="https://thenewstack.io/ai/">Generative AI News, Analysis, and Resources</a></p><p><a href="https://thenewstack.io/docker-launches-genai-stack-and-ai-assistant-at-dockercon/">Docker Launches GenAI Stack and AI Assistant at DockerCon</a></p><p><a href="https://thenewstack.io/docker-rolls-out-3-tools-to-speed-and-ease-development/">Docker Rolls out 3 Tools to Speed and Ease Development</a></p>
]]></description>
      <pubDate>Thu, 2 Nov 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Docker, Alex Williams, The New Stack, Scott Johnston)</author>
      <link>https://thenewstack.simplecast.com/episodes/will-genai-take-developer-jobs-docker-ceo-weighs-in-CkXbFMkB</link>
      <content:encoded><![CDATA[<p>In this episode, Scott Johnston, CEO of Docker, highlights the evolving role of developers, emphasizing their increasing importance in architectural decision-making and tool development for applications. This shift in prioritizing a great developer experience and rapid tool development has led to substantial spending in the industry.</p><p>Johnston expressed confidence that integrating generative AI into the developer experience will drive business growth and expand the customer base. He downplayed concerns about AI taking jobs, explaining that it would alleviate repetitive tasks, enabling developers to focus on more complex problem-solving. Johnston likened this evolution to expanding bike lanes in a city: just as more lanes bring more bike traffic, greater development speed and efficiency bring more apps.</p><p>In his talk with TNS host Alex Williams, Johnston emphasized that each advancement in programming languages and tools has expanded the developer market and driven greater demand for applications. Notably, the demand for over 750 million apps in the next two years, as reported by IDC, demonstrates the ever-increasing appetite for creative solutions from developers.</p><p>Overall, Johnston sees the integration of generative AI and increasing development velocity as a multifaceted expansion that benefits developers and meets growing demand for applications in the market.</p><p>Learn more from The New Stack about Generative AI and Docker:</p><p><a href="https://thenewstack.io/ai/">Generative AI News, Analysis, and Resources</a></p><p><a href="https://thenewstack.io/docker-launches-genai-stack-and-ai-assistant-at-dockercon/">Docker Launches GenAI Stack and AI Assistant at DockerCon</a></p><p><a href="https://thenewstack.io/docker-rolls-out-3-tools-to-speed-and-ease-development/">Docker Rolls out 3 Tools to Speed and Ease Development</a></p>
]]></content:encoded>
      <enclosure length="20607521" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/4c2434c9-6d6c-48cf-9056-258640fd4ba2/audio/d8213ff8-acdf-482f-9ca4-5161fb4d3ee6/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Will GenAI Take Developer Jobs? Docker CEO Weighs In</itunes:title>
      <itunes:author>Docker, Alex Williams, The New Stack, Scott Johnston</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/f24ee29b-3fb6-468f-8e25-15aa0030f7f9/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:21:27</itunes:duration>
      <itunes:summary>TNS host Alex Williams sits down with the CEO of Docker at DockerCon to discuss generative AI and the developer experience.</itunes:summary>
      <itunes:subtitle>TNS host Alex Williams sits down with the CEO of Docker at DockerCon to discuss generative AI and the developer experience.</itunes:subtitle>
      <itunes:keywords>generative ai, dockercon, software developer, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, docker, software development, the new stack makers, software engineer, dockercon 2023</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1436</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">be1326e1-0571-4765-894d-0a11fec24e93</guid>
      <title>Powertools for AWS Lambda Grows with Help of Volunteers</title>
      <description><![CDATA[<p>This episode of The New Stack Makers was recorded on the road at the Linux Foundation’s Open Source Summit Europe in Bilbao, Spain. A pair of technologists from Amazon Web Services (AWS) join us to discuss the development of Powertools for AWS Lambda. Andrea Amorosi, a senior solutions architect at AWS, and Leandro Damascena, a specialist solutions architect, share insights into how Powertools evolved from an observability tool to support more advanced use cases like ensuring workload safety, batch processing, streaming data, and idempotency.</p><p>Powertools primarily supports Python, TypeScript, Java, and .NET. The latest feature, idempotency for TypeScript, was introduced to help customers achieve best practices for developing resilient and fault-tolerant workloads. By integrating these best practices during the development phase, Powertools reduces the need for costly re-architecting and rewriting of code.</p><p>The success of Powertools can be attributed to its strong open source community, which fosters collaboration and contributions from users. AWS ensures transparency by conducting all project activities in the open, allowing anyone to understand and influence feature prioritization and contribute in various ways. Furthermore, the project's international support team offers assistance in multiple languages and time zones.</p><p>A noteworthy aspect is that 40% of new Powertools features have been contributed by the community, providing contributors with valuable networking opportunities at a prominent tech giant like AWS. 
Overall, Powertools demonstrates how open source principles can thrive within a major corporation, offering benefits to both the company and the open source community.</p><p>Learn more from The New Stack about Powertools, Lambda, and Amazon Web Services:</p><p><a href="https://thenewstack.io/aws-offers-a-typescript-interface-for-lambda-observability/">AWS Offers a TypeScript Interface for Lambda Observability</a></p><p><a href="https://thenewstack.io/how-donating-open-source-code-can-advance-your-career/">How Donating Open Source Code Can Advance Your Career</a></p><p><a href="https://thenewstack.io/turn-aws-lambda-functions-stateful-with-amazon-elastic-file-system/">Turn AWS Lambda Functions Stateful with Amazon Elastic File System</a></p>
]]></description>
      <pubDate>Wed, 1 Nov 2023 14:58:41 +0000</pubDate>
      <author>podcasts@thenewstack.io (Amazon Web Services, The New Stack, AWS, Andrea Amorosi, Leandro Damascena, Jennifer Riggins)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-powertools-for-aws-lambda-grew-via-40-volunteers-eGOswsaG</link>
      <content:encoded><![CDATA[<p>This episode of The New Stack Makers was recorded on the road at the Linux Foundation’s Open Source Summit Europe in Bilbao, Spain. A pair of technologists from Amazon Web Services (AWS) join us to discuss the development of Powertools for AWS Lambda. Andrea Amorosi, a senior solutions architect at AWS, and Leandro Damascena, a specialist solutions architect, share insights into how Powertools evolved from an observability tool to support more advanced use cases like ensuring workload safety, batch processing, streaming data, and idempotency.</p><p>Powertools primarily supports Python, TypeScript, Java, and .NET. The latest feature, idempotency for TypeScript, was introduced to help customers achieve best practices for developing resilient and fault-tolerant workloads. By integrating these best practices during the development phase, Powertools reduces the need for costly re-architecting and rewriting of code.</p><p>The success of Powertools can be attributed to its strong open source community, which fosters collaboration and contributions from users. AWS ensures transparency by conducting all project activities in the open, allowing anyone to understand and influence feature prioritization and contribute in various ways. Furthermore, the project's international support team offers assistance in multiple languages and time zones.</p><p>A noteworthy aspect is that 40% of new Powertools features have been contributed by the community, providing contributors with valuable networking opportunities at a prominent tech giant like AWS. 
Overall, Powertools demonstrates how open source principles can thrive within a major corporation, offering benefits to both the company and the open source community.</p><p>Learn more from The New Stack about Powertools, Lambda, and Amazon Web Services:</p><p><a href="https://thenewstack.io/aws-offers-a-typescript-interface-for-lambda-observability/">AWS Offers a TypeScript Interface for Lambda Observability</a></p><p><a href="https://thenewstack.io/how-donating-open-source-code-can-advance-your-career/">How Donating Open Source Code Can Advance Your Career</a></p><p><a href="https://thenewstack.io/turn-aws-lambda-functions-stateful-with-amazon-elastic-file-system/">Turn AWS Lambda Functions Stateful with Amazon Elastic File System</a></p>
]]></content:encoded>
      <enclosure length="17016416" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/639a112b-ec70-44ff-8f77-77e6f9ad3c32/audio/7fe1faf9-f86a-48b7-ac13-b1f4d1d9ce82/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Powertools for AWS Lambda Grows with Help of Volunteers</itunes:title>
      <itunes:author>Amazon Web Services, The New Stack, AWS, Andrea Amorosi, Leandro Damascena, Jennifer Riggins</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/c693031d-9369-40f2-a053-b87052d219e6/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:17:43</itunes:duration>
      <itunes:summary>Two solutions architects from AWS share how Powertools for AWS Lambda is seeing increased adoption with the open source community backing it. This episode was recorded at Open Source Summit EU.</itunes:summary>
      <itunes:subtitle>Two solutions architects from AWS share how Powertools for AWS Lambda is seeing increased adoption with the open source community backing it. This episode was recorded at Open Source Summit EU.</itunes:subtitle>
      <itunes:keywords>software developer, software engineering, tech podcast, the new stack, python programming, devops, devops podcast, amazon web services, tech, developer podcast, software development, the new stack makers, software engineer, open source, typescript, aws, aws lambda</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1435</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">efa1db87-a44b-4fa3-b02e-dbc18b906060</guid>
      <title>What Will Be Hot at KubeCon in Chicago?</title>
      <description><![CDATA[<p>KubeCon 2023 is set to feature three hot topics, according to Taylor Dolezal from the Cloud Native Computing Foundation. Firstly, GenAI and Large Language Models (LLMs) are taking the spotlight, particularly regarding their security and integration with legacy infrastructure. Platform engineering is also on the rise, with over 25 sessions at KubeCon Chicago focusing on its definition and how it benefits internal product teams by fostering a culture of product proliferation. Lastly, WebAssembly is emerging as a significant topic, with a dedicated day during the conference week. It is maturing and finding its place, potentially complementing containers, especially in edge computing scenarios. Wasm allows for efficient data processing before data reaches the cloud, adding depth to architectural possibilities.</p><p>Overall, these three trends are expected to dominate discussions and presentations at KubeCon NA 2023, offering insights into the future of cloud-native technology.</p><p>See what came out of the last KubeCon event in Amsterdam earlier this year:</p><p><a href="https://thenewstack.io/ai-talk-at-kubecon/" target="_blank">AI Talk at KubeCon</a></p><p><a href="https://thenewstack.io/dont-force-containers-and-disrupt-workflows/" target="_blank">Don’t Force Containers and Disrupt Workflows</a></p><p><a href="https://thenewstack.io/a-boring-kubernetes-release/" target="_blank">A Boring Kubernetes Release</a></p>
]]></description>
      <pubDate>Tue, 31 Oct 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Alex Williams, Cloud Native Computing Foundation, Taylor Dolezal)</author>
      <link>https://thenewstack.simplecast.com/episodes/what-will-be-hot-at-kubecon-in-chicago-eKQKHViI</link>
      <content:encoded><![CDATA[<p>KubeCon 2023 is set to feature three hot topics, according to Taylor Dolezal from the Cloud Native Computing Foundation. Firstly, GenAI and Large Language Models (LLMs) are taking the spotlight, particularly regarding their security and integration with legacy infrastructure. Platform engineering is also on the rise, with over 25 sessions at KubeCon Chicago focusing on its definition and how it benefits internal product teams by fostering a culture of product proliferation. Lastly, WebAssembly is emerging as a significant topic, with a dedicated day during the conference week. It is maturing and finding its place, potentially complementing containers, especially in edge computing scenarios. Wasm allows for efficient data processing before data reaches the cloud, adding depth to architectural possibilities.</p><p>Overall, these three trends are expected to dominate discussions and presentations at KubeCon NA 2023, offering insights into the future of cloud-native technology.</p><p>See what came out of the last KubeCon event in Amsterdam earlier this year:</p><p><a href="https://thenewstack.io/ai-talk-at-kubecon/" target="_blank">AI Talk at KubeCon</a></p><p><a href="https://thenewstack.io/dont-force-containers-and-disrupt-workflows/" target="_blank">Don’t Force Containers and Disrupt Workflows</a></p><p><a href="https://thenewstack.io/a-boring-kubernetes-release/" target="_blank">A Boring Kubernetes Release</a></p>
]]></content:encoded>
      <enclosure length="21148778" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/8f64f725-542b-4dc2-803f-b40ff65ce5c9/audio/b784ba00-31b9-44de-9d52-08437cd7caa7/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What Will Be Hot at KubeCon in Chicago?</itunes:title>
      <itunes:author>The New Stack, Alex Williams, Cloud Native Computing Foundation, Taylor Dolezal</itunes:author>
      <itunes:duration>00:22:01</itunes:duration>
      <itunes:summary>KubeCon + CloudNativeCon coming in hot! We spoke with Cloud Native Computing Foundation’s Taylor Dolezal to find out which topics at the event in Chicago are going to be hot, hot, hot.</itunes:summary>
      <itunes:subtitle>KubeCon + CloudNativeCon coming in hot! We spoke with Cloud Native Computing Foundation’s Taylor Dolezal to find out which topics at the event in Chicago are going to be hot, hot, hot.</itunes:subtitle>
      <itunes:keywords>generative ai, cloud native computing foundation, kubecon na 2023, software developer, tech podcast, the new stack, devops, cloud native, devops podcast, tech, developer podcast, kubecon na, webassembly, kubernetes, large language models, the new stack makers, software engineer, platform engineering, cncf, kubecon</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1434</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3fbd9a44-c54c-4dcd-95a0-afc1f6744fe0</guid>
      <title>How Will AI Enhance Platform Engineering and DevEx?</title>
      <description><![CDATA[<p>Digital.ai, an AI-powered DevSecOps platform, serves large enterprises such as financial institutions, insurance companies, and gaming firms. The primary challenge faced by these clients is scaling their DevOps practices across vast organizations. They aim to combine modern development methodologies like agile DevOps with the need for speed and intimacy with end-users on a large scale.</p><p>This episode features a discussion between Wing To of Digital.ai and TNS host Heather Joslyn about platform engineering and the role of AI in enhancing automation. It delves into the dilemma of whether increased code production and release frequency driven by DevOps practices are inherently beneficial. Additionally, it explores the emerging challenge of AI-assisted development and how large enterprises are striving to realize productivity gains across their organizations.</p><p>Digital.ai is focused on incorporating AI into automation to assist developers in creating and delivering code while helping organizations derive more business value from their software in production. The company employs templates to capture and replicate key aspects of software delivery processes and uses AI to automate the rapid setup of developer environments and tooling. These efforts contribute to the concept of the internal developer platform, which consists of multiple toolsets for tasks like creating pipelines and setting up various components.</p><p>Learn more from The New Stack about Platform Engineering, DevSecOps and Digital.ai:</p><p><a href="https://thenewstack.io/platform-engineering/">Platform Engineering Overview, News, and Trends</a></p><p><a href="https://thenewstack.io/platform-engineering/sre-vs-devops-vs-platform-engineering/">SRE vs. DevOps vs. Platform Engineering</a></p><p><a href="https://thenewstack.io/meet-the-new-devsecops/">Meet the New DevSecOps</a></p>
]]></description>
      <pubDate>Fri, 27 Oct 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Wing To, Digital.ai, The New Stack, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-will-ai-enhance-platform-engineering-and-devex-obZD9ueJ</link>
      <content:encoded><![CDATA[<p>Digital.ai, an AI-powered DevSecOps platform, serves large enterprises such as financial institutions, insurance companies, and gaming firms. The primary challenge faced by these clients is scaling their DevOps practices across vast organizations. They aim to combine modern development methodologies like agile DevOps with the need for speed and intimacy with end-users on a large scale.</p><p>This episode features a discussion between Wing To of Digital.ai and TNS host Heather Joslyn about platform engineering and the role of AI in enhancing automation. It delves into the question of whether increased code production and release frequency driven by DevOps practices are inherently beneficial. Additionally, it explores the emerging challenge of AI-assisted development and how large enterprises are striving to realize productivity gains across their organizations.</p><p>Digital.ai is focused on incorporating AI into automation to assist developers in creating and delivering code while helping organizations derive more business value from their software in production. The company employs templates to capture and replicate key aspects of software delivery processes and uses AI to automate the rapid setup of developer environments and tooling. These efforts contribute to the concept of the internal developer platform, which consists of multiple toolsets for tasks like creating pipelines and setting up various components.</p><p>Learn more from The New Stack about Platform Engineering, DevSecOps and Digital.ai:</p><p><a href="https://thenewstack.io/platform-engineering/">Platform Engineering Overview, News, and Trends</a></p><p><a href="https://thenewstack.io/platform-engineering/sre-vs-devops-vs-platform-engineering/">SRE vs. DevOps vs. Platform Engineering</a></p><p><a href="https://thenewstack.io/meet-the-new-devsecops/">Meet the New DevSecOps</a></p>
]]></content:encoded>
      <enclosure length="19373288" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/b4117552-299e-4986-b233-3a318a8a92be/audio/b6b5dbb2-b842-4c82-9cd7-eb488d524c1a/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Will AI Enhance Platform Engineering and DevEx?</itunes:title>
      <itunes:author>Wing To, Digital.ai, The New Stack, Heather Joslyn</itunes:author>
      <itunes:duration>00:20:10</itunes:duration>
      <itunes:summary>Wing To of Digital.ai joins us to chat about how scaling and generating value from increased developer productivity are big challenges for many companies that adopt DevOps. We also discuss the latest AI-driven approaches in platform engineering.</itunes:summary>
      <itunes:subtitle>Wing To of Digital.ai joins us to chat about how scaling and generating value from increased developer productivity are big challenges for many companies that adopt DevOps. We also discuss the latest AI-driven approaches in platform engineering.</itunes:subtitle>
      <itunes:keywords>software developer, software engineering, tech podcast, the new stack, devops, devops podcast, tech, wing to, developer podcast, developers, artificial intelligence, software development, the new stack makers, software engineer, platform engineering, digital.ai, automation, devsecops</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1433</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">760b20d8-74d5-4eb7-bd16-f5a4742cfaff</guid>
      <title>Why the Cloud Makes Forecasts Difficult and How FinOps Helps</title>
      <description><![CDATA[<p>Moving workloads to the cloud presents cost prediction challenges. Traditional setups with on-premises hardware offer predictability, but cloud costs are usage-based and granular. In this podcast episode, Matt Stellpflug, a senior FinOps specialist at ProsperOps, discusses the complexities of forecasting cloud expenses with TNS host Heather Joslyn.</p><p>Cloud users face fluctuating costs due to continuous deployments and changing workloads. There are additional expenses for data access and transfer. Stellpflug emphasizes the importance of establishing reference workloads and benchmarks for accurate forecasting.</p><p>Engineers play a vital role in FinOps initiatives since they ensure application availability and system integrity. Stellpflug suggests collaborating with engineering teams to identify essential metrics. He co-authored an "Engineer's Guide to Cloud Cost Optimization," highlighting the distinction between resource and rate optimization. Best practices involve addressing high-impact, low-risk areas first, engaging subject matter experts for complex issues, and maintaining momentum. This episode also provides further insights into implementing FinOps for effective cloud cost management.</p><p>Learn more from The New Stack about FinOps and ProsperOps:</p><p><a href="https://thenewstack.io/finops/">FinOps Overview, News, and Trends</a></p><p><a href="https://thenewstack.io/prosperops-wants-to-automate-your-finops-strategy/">ProsperOps Wants to Automate Your FinOps Strategy</a></p><p><a href="https://thenewstack.io/engineers-guide-to-cloud-cost-optimization-manual-diy-optimization/">Engineer’s Guide to Cloud Cost Optimization: Manual DIY Optimization</a></p><p><a href="https://thenewstack.io/engineers-guide-to-cloud-cost-optimization-engineering-resources-in-the-cloud/">Engineer’s Guide to Cloud Cost Optimization: Engineering Resources in the Cloud</a></p><p><a href="https://thenewstack.io/engineers-guide-to-cloud-cost-optimization-prioritize-cloud-rate-optimization/">Engineer’s Guide to Cloud Cost Optimization: Prioritize Cloud Rate Optimization</a></p>
]]></description>
      <pubDate>Thu, 26 Oct 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, ProsperOps, Matt Stellpflug, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/why-the-cloud-makes-forecasts-difficult-and-how-finops-helps-2lHAX1dn</link>
      <content:encoded><![CDATA[<p>Moving workloads to the cloud presents cost prediction challenges. Traditional setups with on-premises hardware offer predictability, but cloud costs are usage-based and granular. In this podcast episode, Matt Stellpflug, a senior FinOps specialist at ProsperOps, discusses the complexities of forecasting cloud expenses with TNS host Heather Joslyn.</p><p>Cloud users face fluctuating costs due to continuous deployments and changing workloads. There are additional expenses for data access and transfer. Stellpflug emphasizes the importance of establishing reference workloads and benchmarks for accurate forecasting.</p><p>Engineers play a vital role in FinOps initiatives since they ensure application availability and system integrity. Stellpflug suggests collaborating with engineering teams to identify essential metrics. He co-authored an "Engineer's Guide to Cloud Cost Optimization," highlighting the distinction between resource and rate optimization. Best practices involve addressing high-impact, low-risk areas first, engaging subject matter experts for complex issues, and maintaining momentum. This episode also provides further insights into implementing FinOps for effective cloud cost management.</p><p>Learn more from The New Stack about FinOps and ProsperOps:</p><p><a href="https://thenewstack.io/finops/">FinOps Overview, News, and Trends</a></p><p><a href="https://thenewstack.io/prosperops-wants-to-automate-your-finops-strategy/">ProsperOps Wants to Automate Your FinOps Strategy</a></p><p><a href="https://thenewstack.io/engineers-guide-to-cloud-cost-optimization-manual-diy-optimization/">Engineer’s Guide to Cloud Cost Optimization: Manual DIY Optimization</a></p><p><a href="https://thenewstack.io/engineers-guide-to-cloud-cost-optimization-engineering-resources-in-the-cloud/">Engineer’s Guide to Cloud Cost Optimization: Engineering Resources in the Cloud</a></p><p><a href="https://thenewstack.io/engineers-guide-to-cloud-cost-optimization-prioritize-cloud-rate-optimization/">Engineer’s Guide to Cloud Cost Optimization: Prioritize Cloud Rate Optimization</a></p>
]]></content:encoded>
      <enclosure length="12993559" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/d87e9514-8d14-4983-b912-b7d3bd67aba9/audio/0cff15c6-ddce-4f51-a5c8-e2e0fd63e5a2/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Why the Cloud Makes Forecasts Difficult and How FinOps Helps</itunes:title>
      <itunes:author>The New Stack, ProsperOps, Matt Stellpflug, Heather Joslyn</itunes:author>
      <itunes:duration>00:13:32</itunes:duration>
      <itunes:summary>Matt Stellpflug of ProsperOps explains how software engineers can help establish benchmarks and share relevant metrics so their organizations can have an easier time predicting and containing cloud computing costs.</itunes:summary>
      <itunes:subtitle>Matt Stellpflug of ProsperOps explains how software engineers can help establish benchmarks and share relevant metrics so their organizations can have an easier time predicting and containing cloud computing costs.</itunes:subtitle>
      <itunes:keywords>software developer, prosperops, software engineering, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, finops, the new stack makers, software engineer, cloud computing</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1432</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1e29189a-2131-4db3-b7a1-7261bb7f6fad</guid>
      <title>How to Be a Better Ally in Open Source Communities</title>
      <description><![CDATA[<p>In her keynote address at the Linux Foundation's Open Source Summit Europe, Fatima Sarah Khalid emphasized that being an ally is more than just superficial gestures like wearing pronouns on badges or correctly pronouncing coworkers' names. True allyship involves taking meaningful actions to support and uplift individuals from underrepresented or marginalized backgrounds. This support is essential, not only in obvious ways but also in everyday interactions, which collectively create a more inclusive community.</p><p>Open source communities typically lack diversity, with only a small percentage of women, non-binary contributors, and individuals from underrepresented backgrounds. Khalid stressed the importance of improving diversity and inclusion through various means, including using inclusive language, facilitating asynchronous communication to accommodate global contributors, and welcoming non-technical contributions such as documentation.</p><p>Khalid also provided insights on making open source events more inclusive, like welcoming newcomers and marginalized groups, providing quiet spaces and enforcing a code of conduct, and partnering newcomers with mentors. Moreover, she highlighted GitLab's unique approach to allyship within the organization, including the Ally Lab, which pairs employees from different backgrounds to learn about and understand each other's experiences.</p><p>To encourage the audience to embrace allyship, Khalid shared a set of commitments to keep in mind, such as educating oneself about the experiences of marginalized groups, speaking up against inappropriate behavior, using one's voice to amplify marginalized voices, donating to support such groups, and advocating for equity and justice through social networks and connections. She also shared real-life examples of allyship, illustrating how meaningful actions can create positive change in communities.</p><p>Khalid's discussion with host Jennifer Riggins emphasizes the significance of meaningful, everyday actions to promote allyship in open source communities and organizations, ultimately contributing to a more diverse, inclusive, and equitable tech industry.</p><p>Learn more from The New Stack about Open Source, Allyship, and GitLab:</p><p><a href="https://thenewstack.io/embracing-open-source-for-greater-business-impact/">Embracing Open Source for Greater Business Impact</a></p><p><a href="https://thenewstack.io/leadership-and-inclusion-in-the-open-source-community/">Leadership and Inclusion in the Open Source Community</a></p><p><a href="https://thenewstack.io/how-implicit-bias-impacts-open-source-diversity-and-inclusion/">How Implicit Bias Impacts Open Source Diversity and Inclusion</a></p><p><a href="https://thenewstack.io/investing-in-the-next-generation-of-tech-talent/">Investing in the Next Generation of Tech Talent</a></p>
]]></description>
      <pubDate>Wed, 25 Oct 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, GitLab, Fatima Sarah Khalid, Jennifer Riggins)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-to-be-a-better-ally-in-open-source-communities-HV9Xnx0p</link>
      <content:encoded><![CDATA[<p>In her keynote address at the Linux Foundation's Open Source Summit Europe, Fatima Sarah Khalid emphasized that being an ally is more than just superficial gestures like wearing pronouns on badges or correctly pronouncing coworkers' names. True allyship involves taking meaningful actions to support and uplift individuals from underrepresented or marginalized backgrounds. This support is essential, not only in obvious ways but also in everyday interactions, which collectively create a more inclusive community.</p><p>Open source communities typically lack diversity, with only a small percentage of women, non-binary contributors, and individuals from underrepresented backgrounds. Khalid stressed the importance of improving diversity and inclusion through various means, including using inclusive language, facilitating asynchronous communication to accommodate global contributors, and welcoming non-technical contributions such as documentation.</p><p>Khalid also provided insights on making open source events more inclusive, like welcoming newcomers and marginalized groups, providing quiet spaces and enforcing a code of conduct, and partnering newcomers with mentors. Moreover, she highlighted GitLab's unique approach to allyship within the organization, including the Ally Lab, which pairs employees from different backgrounds to learn about and understand each other's experiences.</p><p>To encourage the audience to embrace allyship, Khalid shared a set of commitments to keep in mind, such as educating oneself about the experiences of marginalized groups, speaking up against inappropriate behavior, using one's voice to amplify marginalized voices, donating to support such groups, and advocating for equity and justice through social networks and connections. She also shared real-life examples of allyship, illustrating how meaningful actions can create positive change in communities.</p><p>Khalid's discussion with host Jennifer Riggins emphasizes the significance of meaningful, everyday actions to promote allyship in open source communities and organizations, ultimately contributing to a more diverse, inclusive, and equitable tech industry.</p><p>Learn more from The New Stack about Open Source, Allyship, and GitLab:</p><p><a href="https://thenewstack.io/embracing-open-source-for-greater-business-impact/">Embracing Open Source for Greater Business Impact</a></p><p><a href="https://thenewstack.io/leadership-and-inclusion-in-the-open-source-community/">Leadership and Inclusion in the Open Source Community</a></p><p><a href="https://thenewstack.io/how-implicit-bias-impacts-open-source-diversity-and-inclusion/">How Implicit Bias Impacts Open Source Diversity and Inclusion</a></p><p><a href="https://thenewstack.io/investing-in-the-next-generation-of-tech-talent/">Investing in the Next Generation of Tech Talent</a></p>
]]></content:encoded>
      <enclosure length="15967338" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/a44d58fb-1f54-4363-ac42-c360a48e3160/audio/0973f0ba-9f81-479a-8de0-aa3000c5c2ff/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How to Be a Better Ally in Open Source Communities</itunes:title>
      <itunes:author>The New Stack, GitLab, Fatima Sarah Khalid, Jennifer Riggins</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/a1fbe6a1-077b-435d-817c-09bf08d35154/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:16:37</itunes:duration>
      <itunes:summary>Fatima Sarah Khalid, a developer advocate at GitLab, joined us at the Open Source Summit in Spain to dive into the topic of allyship in open source communities and beyond.</itunes:summary>
      <itunes:subtitle>Fatima Sarah Khalid, a developer advocate at GitLab, joined us at the Open Source Summit in Spain to dive into the topic of allyship in open source communities and beyond.</itunes:subtitle>
      <itunes:keywords>allyship, software developer, software engineering, open source summit europe, tech podcast, the new stack, devops, devops podcast, tech, ally, developer podcast, the new stack makers, software engineer, open source, osseu2023, gitlab, open source summit</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1431</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b6728df2-7cbf-4228-a7de-7a96ecbc780e</guid>
      <title>Open Source Development Threatened in Europe</title>
      <description><![CDATA[<p>In a recent conversation at the Open Source Summit in Bilbao, Spain, Gabriel Colombo, the General Manager of the Linux Foundation Europe and the Executive Director of the Fintech Open Source Foundation, discussed the potential impact of the Cyber Resilience Act (CRA) on the open source community. The conversation shed light on the challenges and opportunities that the CRA presents to open source and how individuals and organizations can respond.</p><p>The conversation began by addressing the Cyber Resilience Act and its significance. Gabriel Colombo explained that while the Act is being touted as a measure to bolster cybersecurity and national security, it could have unintended consequences for the open source ecosystem, particularly in Europe. The Act, currently in the legislative process, aims to address cybersecurity concerns but could inadvertently hinder open source development and collaboration.</p><p>Jim Zemlin, the Executive Director of the Linux Foundation, had previously mentioned the importance of forks in open source development, emphasizing that they are a healthy aspect of the ecosystem. However, Colombo pointed out that the CRA could create a sense of unease, as it might deter people and companies from participating in open source projects or using open source software due to potential legal liabilities.</p><p>To grasp the implications of the CRA, Colombo explained some of the key provisions. The initial drafts of the Act proposed potential liability for individual developers, open source foundations, and package managers. This raised concerns about the open source supply chain's potential vulnerability and the distribution of liability.</p><p>As the Act evolves, the liability landscape has shifted somewhat. Individual developers may not be held liable unless they consistently receive donations from commercial companies. However, for open source foundations, especially those accepting recurring donations from commercial entities, there remains a concern about potential liabilities and the need to conform to the CRA's requirements.</p><p>Colombo emphasized that this issue isn't limited to Europe. It could impact the entire global open source ecosystem and affect the ability of European developers and small to medium-sized businesses to participate effectively.</p><p>The conversation highlighted the challenges open source communities face when engaging with policymakers. Open source is not structured like traditional corporations or industry consortiums, making it more challenging to present a unified front. Additionally, the legislative process can be slow and complex, which may not align with the rapid pace of technology development.</p><p>The lack of proactive engagement from the European Commission and the absence of open source communities in the initial consultations on the Act are concerning. The understanding of open source, its nuances, and the role it plays in the broader software supply chain appears limited within policy-making circles.</p><p><strong>What Can Be Done?</strong></p><p>Gabriel Colombo stressed the importance of awareness and education. It is vital for individuals, businesses, and open source foundations to understand the implications of the CRA. The Linux Foundation and other organizations have launched campaigns to provide information and resources to help stakeholders comprehend the Act's potential impact.</p><p>Being vocal and advocating for open source within your network, organization, and through public affairs channels can also make a difference. Engagement with policymakers, especially as the Act progresses through the legislative process, is crucial. Colombo encouraged businesses to emphasize the significance of open source in their operations and supply chains, making policymakers aware of how the CRA might affect their activities.</p><p>In the face of the Cyber Resilience Act, the open source community must unite and actively engage with policymakers. It's essential to educate and raise awareness about the potential impact of the Act and advocate for a balanced approach that strengthens cybersecurity without stifling open source innovation.</p><p>The Act's development is ongoing, and there is time for stakeholders to make their voices heard. With a united effort, the open source community can help shape the legislation to ensure that open source remains vibrant and resilient in the face of evolving cybersecurity challenges.</p><p>Learn more from The New Stack about open source and Linux Foundation Europe:</p><p><a href="https://thenewstack.io/open-source-summit-introducing-linux-foundation-europe/">At Open Source Summit: Introducing Linux Foundation Europe</a></p><p><a href="https://thenewstack.io/making-europes-romantic-open-source-world-more-practical/">Making Europe's 'Romantic' Open Source World More Practical</a></p><p><a href="https://thenewstack.io/embracing-open-source-for-greater-business-impact/">Embracing Open Source for Greater Business Impact</a></p>
]]></description>
      <pubDate>Thu, 19 Oct 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The Linux Foundation, The New Stack, Gabriel Colombo, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/open-source-development-threatened-in-europe-ahltfWFa</link>
      <content:encoded><![CDATA[<p>In a recent conversation at the Open Source Summit in Bilbao, Spain, Gabriel Colombo, the General Manager of the Linux Foundation Europe and the Executive Director of the Fintech Open Source Foundation, discussed the potential impact of the Cyber Resilience Act (CRA) on the open source community. The conversation shed light on the challenges and opportunities that the CRA presents to open source and how individuals and organizations can respond.</p><p>The conversation began by addressing the Cyber Resilience Act and its significance. Gabriel Colombo explained that while the Act is being touted as a measure to bolster cybersecurity and national security, it could have unintended consequences for the open source ecosystem, particularly in Europe. The Act, currently in the legislative process, aims to address cybersecurity concerns but could inadvertently hinder open source development and collaboration.</p><p>Jim Zemlin, the Executive Director of the Linux Foundation, had previously mentioned the importance of forks in open source development, emphasizing that they are a healthy aspect of the ecosystem. However, Colombo pointed out that the CRA could create a sense of unease, as it might deter people and companies from participating in open source projects or using open source software due to potential legal liabilities.</p><p>To grasp the implications of the CRA, Colombo explained some of the key provisions. The initial drafts of the Act proposed potential liability for individual developers, open source foundations, and package managers. This raised concerns about the open source supply chain's potential vulnerability and the distribution of liability.</p><p>As the Act evolves, the liability landscape has shifted somewhat. Individual developers may not be held liable unless they consistently receive donations from commercial companies. However, for open source foundations, especially those accepting recurring donations from commercial entities, there remains a concern about potential liabilities and the need to conform to the CRA's requirements.</p><p>Colombo emphasized that this issue isn't limited to Europe. It could impact the entire global open source ecosystem and affect the ability of European developers and small to medium-sized businesses to participate effectively.</p><p>The conversation highlighted the challenges open source communities face when engaging with policymakers. Open source is not structured like traditional corporations or industry consortiums, making it more challenging to present a unified front. Additionally, the legislative process can be slow and complex, which may not align with the rapid pace of technology development.</p><p>The lack of proactive engagement from the European Commission and the absence of open source communities in the initial consultations on the Act are concerning. The understanding of open source, its nuances, and the role it plays in the broader software supply chain appears limited within policy-making circles.</p><p><strong>What Can Be Done?</strong></p><p>Gabriel Colombo stressed the importance of awareness and education. It is vital for individuals, businesses, and open source foundations to understand the implications of the CRA. The Linux Foundation and other organizations have launched campaigns to provide information and resources to help stakeholders comprehend the Act's potential impact.</p><p>Being vocal and advocating for open source within your network, organization, and through public affairs channels can also make a difference. Engagement with policymakers, especially as the Act progresses through the legislative process, is crucial. Colombo encouraged businesses to emphasize the significance of open source in their operations and supply chains, making policymakers aware of how the CRA might affect their activities.</p><p>In the face of the Cyber Resilience Act, the open source community must unite and actively engage with policymakers. It's essential to educate and raise awareness about the potential impact of the Act and advocate for a balanced approach that strengthens cybersecurity without stifling open source innovation.</p><p>The Act's development is ongoing, and there is time for stakeholders to make their voices heard. With a united effort, the open source community can help shape the legislation to ensure that open source remains vibrant and resilient in the face of evolving cybersecurity challenges.</p><p>Learn more from The New Stack about open source and Linux Foundation Europe:</p><p><a href="https://thenewstack.io/open-source-summit-introducing-linux-foundation-europe/">At Open Source Summit: Introducing Linux Foundation Europe</a></p><p><a href="https://thenewstack.io/making-europes-romantic-open-source-world-more-practical/">Making Europe's 'Romantic' Open Source World More Practical</a></p><p><a href="https://thenewstack.io/embracing-open-source-for-greater-business-impact/">Embracing Open Source for Greater Business Impact</a></p>
]]></content:encoded>
      <enclosure length="19495426" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/4d8a35ee-c210-47ab-b5bc-b36cf32293cc/audio/352fe482-88bc-46e3-9772-ca67869be225/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Open Source Development Threatened in Europe</itunes:title>
      <itunes:author>The Linux Foundation, The New Stack, Gabriel Colombo, Alex Williams</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/f9ef3c3d-4139-423f-8d71-22aa22493b45/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:20:18</itunes:duration>
      <itunes:summary>At the Open Source Summit in Europe, we caught up with Gabriel Colombo, the General Manager of the Linux Foundation Europe to discuss the potential impact of the Cyber Resilience Act (CRA) on the open source community. He and TNS host Alex Williams talked about the challenges and opportunities that the CRA presents to open source and how individuals and organizations can respond.</itunes:summary>
      <itunes:subtitle>At the Open Source Summit in Europe, we caught up with Gabriel Colombo, the General Manager of the Linux Foundation Europe to discuss the potential impact of the Cyber Resilience Act (CRA) on the open source community. He and TNS host Alex Williams talked about the challenges and opportunities that the CRA presents to open source and how individuals and organizations can respond.</itunes:subtitle>
      <itunes:keywords>linux foundation europe, software developer, cybersecurity, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, software engineer, open source, linux foundation</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1430</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2c78afb0-bd52-4788-ba48-1644d8b827a0</guid>
      <title>How to Get Your Organization Started with FinOps</title>
      <description><![CDATA[<p>In this episode of The New Stack Makers podcast, Uma Daniel, a product manager at UST, discusses the current complexities in the global economy, marked by low unemployment except in the tech industry, high inflation, high interest rates, a volatile stock market, and the looming threat of recession. Amid these challenges, organizations are seeking ways to enhance their operational efficiency.</p><p>Daniel introduces the concept of FinOps, which goes beyond just managing cloud costs. Instead, it focuses on leveraging the cloud to generate revenue. This represents a cultural shift in many organizations, emphasizing the need for a mindset change across different departments, including business, finance, and procurement.</p><p>She dispels misconceptions, such as the belief that only certain teams should be involved in the FinOps process. Daniel stresses that it's a collaborative effort involving various teams, and it's best to adopt FinOps at the beginning of a cloud journey. Once an organization is already established in the cloud, implementing FinOps becomes more challenging.</p><p>To foster collaboration, Daniel suggests identifying team members willing to champion FinOps and forming cross-functional teams to lead the initiative. Regular committee meetings and the establishment of generic policies, such as project budgets, help control cloud spending.</p><p>This episode, hosted by Heather Joslyn, provides insights into how to initiate and implement a FinOps strategy and highlights common ways in which organizations waste cloud resources.</p><p>Learn more from The New Stack about FinOps and UST:</p><p><a href="https://thenewstack.io/cloud-cost-unit-economics-a-modern-profitability-model/">Cloud Cost-Unit Economics — A Modern Profitability Model</a></p><p><a href="https://thenewstack.io/what-is-finops-understanding-finops-best-practices-for-cloud/">What Is FinOps? Understanding FinOps Best Practices for Cloud</a></p><p><a href="https://thenewstack.io/very-large-enterprises-need-a-different-approach-to-finops/">Very Large Enterprises Need a Different Approach to FinOps</a></p>
]]></description>
      <pubDate>Wed, 18 Oct 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Uma Daniel, UST, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-to-get-your-organization-started-with-finops-hz9VJI7U</link>
      <content:encoded><![CDATA[<p>In this episode of The New Stack Makers podcast, Uma Daniel, a product manager at UST, discusses the current complexities in the global economy, marked by low unemployment except in the tech industry, high inflation, high interest rates, a volatile stock market, and the looming threat of recession. Amid these challenges, organizations are seeking ways to enhance their operational efficiency.</p><p>Daniel introduces the concept of FinOps, which goes beyond just managing cloud costs. Instead, it focuses on leveraging the cloud to generate revenue. This represents a cultural shift in many organizations, emphasizing the need for a mindset change across different departments, including business, finance, and procurement.</p><p>She dispels misconceptions, such as the belief that only certain teams should be involved in the FinOps process. Daniel stresses that it's a collaborative effort involving various teams, and it's best to adopt FinOps at the beginning of a cloud journey. Once an organization is already established in the cloud, implementing FinOps becomes more challenging.</p><p>To foster collaboration, Daniel suggests identifying team members willing to champion FinOps and forming cross-functional teams to lead the initiative. Regular committee meetings and the establishment of generic policies, such as project budgets, help control cloud spending.</p><p>This episode, hosted by Heather Joslyn, provides insights into how to initiate and implement a FinOps strategy and highlights common ways in which organizations waste cloud resources.</p><p>Learn more from The New Stack about FinOps and UST:</p><p><a href="https://thenewstack.io/cloud-cost-unit-economics-a-modern-profitability-model/">Cloud Cost-Unit Economics — A Modern Profitability Model</a></p><p><a href="https://thenewstack.io/what-is-finops-understanding-finops-best-practices-for-cloud/">What Is FinOps? 
Understanding FinOps Best Practices for Cloud</a></p><p><a href="https://thenewstack.io/very-large-enterprises-need-a-different-approach-to-finops/">Very Large Enterprises Need a Different Approach to FinOps</a></p>
]]></content:encoded>
      <enclosure length="22294404" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/cc0fca6e-6476-449b-af76-5285846fdd6a/audio/a05d8d50-9249-4c1c-969d-853073dd8ea2/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How to Get Your Organization Started with FinOps</itunes:title>
      <itunes:author>The New Stack, Uma Daniel, UST, Heather Joslyn</itunes:author>
      <itunes:duration>00:23:13</itunes:duration>
      <itunes:summary>Uma Daniel of UST joins the show to help explain how organizations can use FinOps to start cutting cloud costs and using the cloud to make money.</itunes:summary>
      <itunes:subtitle>Uma Daniel of UST joins the show to help explain how organizations can use FinOps to start cutting cloud costs and using the cloud to make money.</itunes:subtitle>
      <itunes:keywords>software developer, software engineering, tech podcast, ust, the new stack, devops, devops podcast, tech, developer podcast, finops, the new stack makers, software engineer, cloud computing</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1429</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2a602a29-6f8c-4d01-b987-d703e0a35dd6</guid>
      <title>What’s Next in Building Better Generative AI Applications?</title>
      <description><![CDATA[<p>Since the release of OpenAI's ChatGPT in late 2022, various industries have been actively exploring its applications. Madhukar Kumar, CMO of SingleStore, discussed his experiments with large language models (LLMs) in this podcast episode with TNS host Heather Joslyn. He mentioned a specific LLM called Gorilla, which is trained on APIs and can generate API calls for specific tasks. Kumar also talked about SingleStore Now, an AI conference, where they plan to teach attendees how to build generative AI applications from scratch, focusing on enterprise applications.</p><p>Kumar highlighted a limitation of current LLMs: they are "frozen in time" and cannot provide real-time information. To address this, a method called "retrieval augmented generation" (RAG) has emerged. SingleStore is using RAG to keep LLMs updated. In this approach, a user query is first matched against up-to-date enterprise data to provide context, and then the LLM is tasked with generating answers based on this context. This method aims to prevent the generation of factually incorrect responses and relies on storing data as vectors for efficient real-time processing, which SingleStore enables.</p><p>This strategy ensures that LLMs can provide current and contextually accurate information, making AI applications more reliable and responsive for enterprises.</p><p>Learn more from The New Stack about LLMs and SingleStore:</p><p><a href="https://thenewstack.io/top-5-large-language-models-and-how-to-use-them-effectively/">Top 5 Large Language Models and How to Use Them Effectively</a></p><p><a href="https://thenewstack.io/using-chatgpt-for-questions-specific-to-your-company-data/">Using ChatGPT for Questions Specific to Your Company Data</a></p><p><a href="https://thenewstack.io/6-reasons-private-llms-are-key-for-enterprises/">6 Reasons Private LLMs Are Key for Enterprises</a></p>
]]></description>
      <pubDate>Thu, 12 Oct 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, SingleStore, Madhukar Kumar, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/whats-next-in-building-better-generative-ai-applications-4_VgqvDZ</link>
      <content:encoded><![CDATA[<p>Since the release of OpenAI's ChatGPT in late 2022, various industries have been actively exploring its applications. Madhukar Kumar, CMO of SingleStore, discussed his experiments with large language models (LLMs) in this podcast episode with TNS host Heather Joslyn. He mentioned a specific LLM called Gorilla, which is trained on APIs and can generate API calls for specific tasks. Kumar also talked about SingleStore Now, an AI conference, where they plan to teach attendees how to build generative AI applications from scratch, focusing on enterprise applications.</p><p>Kumar highlighted a limitation of current LLMs: they are "frozen in time" and cannot provide real-time information. To address this, a method called "retrieval augmented generation" (RAG) has emerged. SingleStore is using RAG to keep LLMs updated. In this approach, a user query is first matched against up-to-date enterprise data to provide context, and then the LLM is tasked with generating answers based on this context. This method aims to prevent the generation of factually incorrect responses and relies on storing data as vectors for efficient real-time processing, which SingleStore enables.</p><p>This strategy ensures that LLMs can provide current and contextually accurate information, making AI applications more reliable and responsive for enterprises.</p><p>Learn more from The New Stack about LLMs and SingleStore:</p><p><a href="https://thenewstack.io/top-5-large-language-models-and-how-to-use-them-effectively/">Top 5 Large Language Models and How to Use Them Effectively</a></p><p><a href="https://thenewstack.io/using-chatgpt-for-questions-specific-to-your-company-data/">Using ChatGPT for Questions Specific to Your Company Data</a></p><p><a href="https://thenewstack.io/6-reasons-private-llms-are-key-for-enterprises/">6 Reasons Private LLMs Are Key for Enterprises</a></p>
]]></content:encoded>
      <enclosure length="11349725" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/d293f8f1-0442-40e5-bac3-4d8ab421d454/audio/27191b78-6d1a-40d9-9054-49670023f5d1/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What’s Next in Building Better Generative AI Applications?</itunes:title>
      <itunes:author>The New Stack, SingleStore, Madhukar Kumar, Heather Joslyn</itunes:author>
      <itunes:duration>00:11:49</itunes:duration>
      <itunes:summary>Madhukar Kumar of SingleStore joins us to talk about how the data for large language models is &quot;frozen in time&quot; and needs the context offered by retrieval augmented generation and vector databases to grow more useful.</itunes:summary>
      <itunes:subtitle>Madhukar Kumar of SingleStore joins us to talk about how the data for large language models is &quot;frozen in time&quot; and needs the context offered by retrieval augmented generation and vector databases to grow more useful.</itunes:subtitle>
      <itunes:keywords>generative ai, software developer, tech podcast, the new stack, databases, devops, devops podcast, tech, developer podcast, vector database, large language models, the new stack makers, software engineer, llms, apis, real time data, retrieval augmented generation</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1428</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">35b69061-fea5-4e38-99ab-4b6b674d532f</guid>
      <title>Cloud Native Observability: Fighting Rising Costs, Incidents</title>
      <description><![CDATA[<p>Observability in multi-cloud environments is becoming increasingly complex, as highlighted by Martin Mao, CEO and co-founder of Chronosphere. This challenge has two main components: a rise in customer-facing incidents, which demand significant engineering time for debugging, and the ineffectiveness and high cost of existing tools. These issues are creating a problematic return on investment for the industry.</p><p>Mao discussed these observability challenges on The New Stack Makers podcast with host Heather Joslyn, emphasizing the need to help teams prioritize alerts and encouraging a shift left approach for security responsibility among developers. With the adoption of distributed cloud architectures, organizations are not only dealing with a surge in data but also facing a cultural shift towards DevOps, where developers are expected to be more accountable for their software in production.</p><p>Historically, operations teams handled software in production, but in the cloud-native world, developers must take on these responsibilities themselves. Many current observability tools were designed for centralized operations teams, which creates a gap in addressing developer needs.</p><p>Mao suggests that cloud-native observability tools should empower developers to run and maintain their software in production, providing insights into the complex environments they work in. Moreover, observability tools can assist developers in understanding the intricacies of their software, such as its dependencies and operational aspects.</p><p>To streamline the data obtained from observability efforts and manage costs, Chronosphere introduced the "Observability Data Optimization Cycle." This framework starts with establishing centralized governance to set budgets for teams generating data. The goal is to optimize data usage to extract value without incurring unnecessary costs. 
This approach applies financial operations (FinOps) concepts to the observability space, helping organizations tackle the challenges of cloud-native observability.</p><p>Learn more from The New Stack about Observability and Chronosphere:</p><p><a href="https://thenewstack.io/observability/">Observability Overview, News and Trends</a></p><p><a href="https://thenewstack.io/4-key-observability-best-practices/">4 Key Observability Best Practices</a></p><p><a href="https://thenewstack.io/top-ways-to-reduce-your-observability-costs-part-1/">Top Ways to Reduce Your Observability Costs</a></p><p><a href="https://thenewstack.io/top-4-factors-for-cloud-native-observability-tool-selection/">Top 4 Factors for Cloud Native Observability Tool Selection</a></p>
]]></description>
      <pubDate>Wed, 11 Oct 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Chronosphere, The New Stack, Martin Mao, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/cloud-native-observability-fighting-rising-costs-incidents-svYdkdjR</link>
      <content:encoded><![CDATA[<p>Observability in multi-cloud environments is becoming increasingly complex, as highlighted by Martin Mao, CEO and co-founder of Chronosphere. This challenge has two main components: a rise in customer-facing incidents, which demand significant engineering time for debugging, and the ineffectiveness and high cost of existing tools. These issues are creating a problematic return on investment for the industry.</p><p>Mao discussed these observability challenges on The New Stack Makers podcast with host Heather Joslyn, emphasizing the need to help teams prioritize alerts and encouraging a shift left approach for security responsibility among developers. With the adoption of distributed cloud architectures, organizations are not only dealing with a surge in data but also facing a cultural shift towards DevOps, where developers are expected to be more accountable for their software in production.</p><p>Historically, operations teams handled software in production, but in the cloud-native world, developers must take on these responsibilities themselves. Many current observability tools were designed for centralized operations teams, which creates a gap in addressing developer needs.</p><p>Mao suggests that cloud-native observability tools should empower developers to run and maintain their software in production, providing insights into the complex environments they work in. Moreover, observability tools can assist developers in understanding the intricacies of their software, such as its dependencies and operational aspects.</p><p>To streamline the data obtained from observability efforts and manage costs, Chronosphere introduced the "Observability Data Optimization Cycle." This framework starts with establishing centralized governance to set budgets for teams generating data. The goal is to optimize data usage to extract value without incurring unnecessary costs. 
This approach applies financial operations (FinOps) concepts to the observability space, helping organizations tackle the challenges of cloud-native observability.</p><p>Learn more from The New Stack about Observability and Chronosphere:</p><p><a href="https://thenewstack.io/observability/">Observability Overview, News and Trends</a></p><p><a href="https://thenewstack.io/4-key-observability-best-practices/">4 Key Observability Best Practices</a></p><p><a href="https://thenewstack.io/top-ways-to-reduce-your-observability-costs-part-1/">Top Ways to Reduce Your Observability Costs</a></p><p><a href="https://thenewstack.io/top-4-factors-for-cloud-native-observability-tool-selection/">Top 4 Factors for Cloud Native Observability Tool Selection</a></p>
]]></content:encoded>
      <enclosure length="21185559" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/e99606eb-d204-4ac9-871b-66ac8f1df9f1/audio/a7f5dd98-4864-4c5a-89ea-5f61b3bd9718/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Cloud Native Observability: Fighting Rising Costs, Incidents</itunes:title>
      <itunes:author>Chronosphere, The New Stack, Martin Mao, Heather Joslyn</itunes:author>
      <itunes:duration>00:22:04</itunes:duration>
      <itunes:summary>Martin Mao of Chronosphere joins TNS host Heather Joslyn to address why many observability tools can&apos;t keep up with the needs of distributed cloud architecture.</itunes:summary>
      <itunes:subtitle>Martin Mao of Chronosphere joins TNS host Heather Joslyn to address why many observability tools can&apos;t keep up with the needs of distributed cloud architecture.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, devops, cloud native, devops podcast, tech, developer podcast, finops, the new stack makers, software engineer, cloud computing, observability, chronosphere</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1427</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">458d857c-d5b3-4e2c-ab96-61ab05ec0a3b</guid>
      <title>At Run Time: Driving Outcomes with a Platform Engineering Team</title>
      <description><![CDATA[<p>Platform engineering is gaining prominence due to the need for faster application deployment, which directly impacts business velocity. Valentina Alaria, Senior Director of Product at VMware, emphasizes that not all organizations pursuing platform engineering have the same goals, context, or pain points. VMware tailors its solutions to each organization's specific needs. Some organizations focus on rapid onboarding for junior developers, while others aim to reduce complexity and friction and to support larger development teams with fewer operational staff.</p><p>Platform engineering aims to streamline collaboration between developers and operations engineers. Developers want portable code and the ability to focus on coding without worrying about production requirements. Operations engineers and platform teams seek a seamless environment for deploying applications in different contexts.</p><p>Successful platform engineering initiatives involve strong collaboration models, fostering a cooperative approach rather than a siloed one. The goal is to create applications and value for the organization by facilitating effective interaction between developers and operations engineers.</p><p>This podcast episode, hosted by Alex Williams of TNS, also delves into VMware Tanzu's latest tools for supporting platform engineering.</p><p>Learn more from The New Stack about platform engineering and VMware Tanzu:</p><p><a href="https://thenewstack.io/platform-engineering/">Platform Engineering Overview, News and Trends</a></p><p><a href="https://thenewstack.io/6-patterns-for-platform-engineering-success/">6 Patterns for Platform Engineering Success</a></p><p><a href="https://thenewstack.io/a-guide-to-open-source-platform-engineering/">A Guide to Open Source Platform Engineering</a></p><p><a href="https://thenewstack.io/streamline-platform-engineering-with-kubernetes/">Streamline Platform Engineering with Kubernetes</a></p>
]]></description>
      <pubDate>Thu, 05 Oct 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Valentina Alaria, The New Stack, VMware, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/at-run-time-driving-outcomes-with-a-platform-engineering-team-j4wclhsu</link>
      <content:encoded><![CDATA[<p>Platform engineering is gaining prominence due to the need for faster application deployment, which directly impacts business velocity. Valentina Alaria, Senior Director of Product at VMware, emphasizes that not all organizations pursuing platform engineering have the same goals, context, or pain points. VMware tailors its solutions to each organization's specific needs. Some organizations focus on rapid onboarding for junior developers, while others aim to reduce complexity and friction and to support larger development teams with fewer operational staff.</p><p>Platform engineering aims to streamline collaboration between developers and operations engineers. Developers want portable code and the ability to focus on coding without worrying about production requirements. Operations engineers and platform teams seek a seamless environment for deploying applications in different contexts.</p><p>Successful platform engineering initiatives involve strong collaboration models, fostering a cooperative approach rather than a siloed one. The goal is to create applications and value for the organization by facilitating effective interaction between developers and operations engineers.</p><p>This podcast episode, hosted by Alex Williams of TNS, also delves into VMware Tanzu's latest tools for supporting platform engineering.</p><p>Learn more from The New Stack about platform engineering and VMware Tanzu:</p><p><a href="https://thenewstack.io/platform-engineering/">Platform Engineering Overview, News and Trends</a></p><p><a href="https://thenewstack.io/6-patterns-for-platform-engineering-success/">6 Patterns for Platform Engineering Success</a></p><p><a href="https://thenewstack.io/a-guide-to-open-source-platform-engineering/">A Guide to Open Source Platform Engineering</a></p><p><a href="https://thenewstack.io/streamline-platform-engineering-with-kubernetes/">Streamline Platform Engineering with Kubernetes</a></p>
]]></content:encoded>
      <enclosure length="28929088" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/b34ca0ad-7656-4323-9f84-258e685838a3/audio/8bfd9cd3-fd6b-495b-97ad-3015319ad3a7/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>At Run Time: Driving Outcomes with a Platform Engineering Team</itunes:title>
      <itunes:author>Valentina Alaria, The New Stack, VMware, Alex Williams</itunes:author>
      <itunes:duration>00:30:08</itunes:duration>
      <itunes:summary>Valentina Alaria of VMware joins TNS host Alex Williams to explore how developers and operations engineers intersect when platform engineering is introduced into an organization.</itunes:summary>
      <itunes:subtitle>Valentina Alaria of VMware joins TNS host Alex Williams to explore how developers and operations engineers intersect when platform engineering is introduced into an organization.</itunes:subtitle>
      <itunes:keywords>vmware, software developer, software engineering, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, developers, the new stack makers, software engineer, platform engineering</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1426</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c6580bc7-d48f-4a05-97b8-7ba891e05c2c</guid>
      <title>How One Open Source Project Derived from Another’s Limits</title>
      <description><![CDATA[<p>ByConity is an open source project that emerged from ByteDance's use of ClickHouse, an open source database system, to address its growing data volume. ByConity focuses on enhancing the separation of compute and storage, improving multitenancy support, and optimizing query performance in cloud-native environments.</p><p>Vini Jaiswal, a principal developer advocate at ByteDance, the parent company of TikTok, highlights the power of open source in fostering innovation and collaboration. She shares her personal experience of leveraging open source to solve problems quickly and efficiently. She emphasizes the importance of getting involved in open source, even for those who might be hesitant, and suggests starting by identifying a pain point and making small contributions.</p><p>ByConity's architecture, which separates compute and storage, offers benefits like preventing data lake corruption, read and write separation, elasticity, and scalability. Jaiswal also mentions her previous experience with open source during her time at Citibank, where she realized how open source accelerated digital transformations.</p><p>Throughout the conversation, Jaiswal underscores the strength of open source communities in collectively addressing challenges. 
She encourages listeners to embrace open source and start contributing, emphasizing how even small contributions can lead to significant impacts over time.</p><p>The episode also delves into Jaiswal's involvement with other open source projects, such as PyTorch, and explores the intersection of open source and generative AI.</p><p>Learn more from The New Stack about open source and cloud native environments:</p><p><a href="https://thenewstack.io/cloud-native/what-is-cloud-native-and-why-does-it-matter/">What Is 'Cloud Native' (and Why Does It Matter)?</a></p><p><a href="https://thenewstack.io/cloud-native/">Cloud Native Ecosystem News and Resources</a></p><p><a href="https://thenewstack.io/how-to-build-an-open-source-community/">How to Build an Open Source Community</a></p>
]]></description>
      <pubDate>Wed, 04 Oct 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Vini Jaiswal, ByteDance, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-one-open-source-project-derived-from-anothers-limits-WAauy25E</link>
      <content:encoded><![CDATA[<p>ByConity is an open source project that emerged from ByteDance's use of ClickHouse, an open source database system, to address its growing data volume. ByConity focuses on enhancing the separation of compute and storage, improving multitenancy support, and optimizing query performance in cloud-native environments.</p><p>Vini Jaiswal, a principal developer advocate at ByteDance, the parent company of TikTok, highlights the power of open source in fostering innovation and collaboration. She shares her personal experience of leveraging open source to solve problems quickly and efficiently. She emphasizes the importance of getting involved in open source, even for those who might be hesitant, and suggests starting by identifying a pain point and making small contributions.</p><p>ByConity's architecture, which separates compute and storage, offers benefits like preventing data lake corruption, read and write separation, elasticity, and scalability. Jaiswal also mentions her previous experience with open source during her time at Citibank, where she realized how open source accelerated digital transformations.</p><p>Throughout the conversation, Jaiswal underscores the strength of open source communities in collectively addressing challenges. 
She encourages listeners to embrace open source and start contributing, emphasizing how even small contributions can lead to significant impacts over time.</p><p>The episode also delves into Jaiswal's involvement with other open source projects, such as PyTorch, and explores the intersection of open source and generative AI.</p><p>Learn more from The New Stack about open source and cloud native environments:</p><p><a href="https://thenewstack.io/cloud-native/what-is-cloud-native-and-why-does-it-matter/">What Is 'Cloud Native' (and Why Does It Matter)?</a></p><p><a href="https://thenewstack.io/cloud-native/">Cloud Native Ecosystem News and Resources</a></p><p><a href="https://thenewstack.io/how-to-build-an-open-source-community/">How to Build an Open Source Community</a></p>
]]></content:encoded>
      <enclosure length="27678137" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/cdd5bbe8-b910-4a73-9373-e870b9df14f3/audio/276bede0-0ad4-4ee4-ae71-6b36403e0544/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How One Open Source Project Derived from Another’s Limits</itunes:title>
      <itunes:author>Vini Jaiswal, ByteDance, The New Stack, Alex Williams</itunes:author>
      <itunes:duration>00:28:49</itunes:duration>
      <itunes:summary>In this episode with TNS host Alex Williams, Vini Jaiswal, principal developer advocate at ByteDance (parent company of TikTok), discusses her journey in open source and her work on ByConity.</itunes:summary>
      <itunes:subtitle>In this episode with TNS host Alex Williams, Vini Jaiswal, principal developer advocate at ByteDance (parent company of TikTok), discusses her journey in open source and her work on ByConity.</itunes:subtitle>
      <itunes:keywords>vini jaiswal, software developer, software engineering, tech podcast, the new stack, devops, cloud native, devops podcast, tech, developer podcast, bytedance, software development, the new stack makers, software engineer, open source</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1425</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e50dcf30-2de5-4f7a-90af-b7216e909c39</guid>
      <title>The Golden Path to Platform Engineering</title>
      <description><![CDATA[<p>In this episode, we discuss the emergence and ascension of platform engineering, the role that Humanitec plays in helping organizations establish platforms for developers, and Backstage, a popular open source internal developer platform that Spotify developed for its own developers.</p><p>An IDP, our guest Kaspar Von Grünberg explained, is a standardized interface for developers to build applications using a golden path of vetted tools and libraries, allowing for a high degree of efficiency for both the developers themselves and the engineers who support them. An IDP can include an integration and delivery plane, a continuous integration registry, a platform orchestrator, observability tools and a resource plane.</p><p>"How you're consuming this is a little bit up to the individual preference of the user, and what the platform team has configured for you. So we're seeing some teams like to use a user interface and some teams like to use code-based interactions," Von Grünberg explained.</p><p>In some ways, an IDP is reminiscent of the platform-as-a-service packages of a decade ago. They were also designed to boost developer efficiency, though devs chafed at the limited number of tools they were allowed to use in these walled gardens. That was a mistake, Von Grünberg said.</p><p>Those platforms required developers to use a small set of pre-defined tools.</p><p>"We don't want to get back to those times, which is why we want to provide sensible defaults," Von Grünberg said. A good IDP will provide developers with "golden paths," or "paved roads," as Netflix calls them.</p><p>"Developers can stay on those paths if they want," Von Grünberg said. They can enjoy the security defaults and service-level agreements (SLAs) from the engineers. 
But developers are also free to leave the path and make low-level configurations on their own as well.</p><p>"Good platform engineering is never about covering all the use cases," he said.</p><p>Learn more from The New Stack about platform engineering and Humanitec:</p><p><a href="https://thenewstack.io/platform-engineering/">Platform Engineering Overview, News, and Trends</a></p><p><a href="https://thenewstack.io/how-to-pave-golden-paths-that-actually-go-somewhere/">How to Pave Golden Paths That Actually Go Somewhere</a></p><p><a href="https://thenewstack.io/build-your-idp-at-light-speed-with-a-platform-reference-architecture/">Build Your IDP at Light Speed with a Platform Reference Architecture</a></p>
]]></description>
      <pubDate>Wed, 27 Sep 2023 17:03:19 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Humanitec, Kaspar Von Grünberg, Joab Jackson)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-golden-path-to-platform-engineering-_q2Y_W0X</link>
      <content:encoded><![CDATA[<p>In this episode, we discuss the emergence and ascension of platform engineering, the role that Humanitec plays in helping organizations establish platforms for developers, and Backstage, a popular open source internal developer platform that Spotify developed for its own developers.</p><p>An IDP, our guest Kaspar Von Grünberg explained, is a standardized interface for developers to build applications using a golden path of vetted tools and libraries, allowing for a high degree of efficiency for both the developers themselves and the engineers who support them. An IDP can include an integration and delivery plane, a continuous integration registry, a platform orchestrator, observability tools and a resource plane.</p><p>"How you're consuming this is a little bit up to the individual preference of the user, and what the platform team has configured for you. So we're seeing some teams like to use a user interface and some teams like to use code-based interactions," Von Grünberg explained.</p><p>In some ways, an IDP is reminiscent of the platform-as-a-service packages of a decade ago. They were also designed to boost developer efficiency, though devs chafed at the limited number of tools they were allowed to use in these walled gardens. That was a mistake, Von Grünberg said.</p><p>Those platforms required developers to use a small set of pre-defined tools.</p><p>"We don't want to get back to those times, which is why we want to provide sensible defaults," Von Grünberg said. A good IDP will provide developers with "golden paths," or "paved roads," as Netflix calls them.</p><p>"Developers can stay on those paths if they want," Von Grünberg said. They can enjoy the security defaults and service-level agreements (SLAs) from the engineers. 
But developers are also free to leave the path and make low-level configurations on their own as well.</p><p>"Good platform engineering is never about covering all the use cases," he said.</p><p>Learn more from The New Stack about platform engineering and Humanitec:</p><p><a href="https://thenewstack.io/platform-engineering/">Platform Engineering Overview, News, and Trends</a></p><p><a href="https://thenewstack.io/how-to-pave-golden-paths-that-actually-go-somewhere/">How to Pave Golden Paths That Actually Go Somewhere</a></p><p><a href="https://thenewstack.io/build-your-idp-at-light-speed-with-a-platform-reference-architecture/">Build Your IDP at Light Speed with a Platform Reference Architecture</a></p>
]]></content:encoded>
      <enclosure length="14553800" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/7cdc7556-a771-46e7-88dc-d6983549a1b6/audio/9695d611-90a1-41f9-965c-108c9075c30a/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The Golden Path to Platform Engineering</itunes:title>
      <itunes:author>The New Stack, Humanitec, Kaspar Von Grünberg, Joab Jackson</itunes:author>
      <itunes:duration>00:15:09</itunes:duration>
      <itunes:summary>In this latest edition of The New Stack podcast, we speak with Kaspar Von Grünberg, CEO of platform services provider Humanitec, about the sudden rise of platform engineering.</itunes:summary>
      <itunes:subtitle>In this latest edition of The New Stack podcast, we speak with Kaspar Von Grünberg, CEO of platform services provider Humanitec, about the sudden rise of platform engineering.</itunes:subtitle>
      <itunes:keywords>software developer, software engineering, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, internal developer platform, humanitec, the new stack makers, software engineer, platform engineering, internal developer portal</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1424</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1967fd4b-42a1-4573-8f3f-0bfa42956ce6</guid>
      <title>Don&apos;t Listen to a Vendor About AI, Do the DevOps Redo</title>
      <description><![CDATA[<p>In this episode of The New Stack Makers, technologist and author John Willis emphasized caution when considering AI solutions from vendors. He advised against blindly following vendor recommendations for "one-size-fits-all" AI products, likening it to past advice that discouraged learning Java in favor of purchasing a product.</p><p>Willis stressed that DevOps serves as an example of how human expertise, not just products, solves problems. He urged C-level executives to first understand AI's intricacies and then make informed purchasing decisions, suggesting a "DevOps redo" to encourage experimentation and collaboration, similar to the early days of the DevOps movement.</p><p>Willis highlighted that early adopters of DevOps, like successful banks, heavily invested in developing their human capital. He cautioned against hasty product purchases, as the AI landscape is rife with startups that may quickly disappear or be acquired by larger companies.</p><p>Instead, Willis advocated for educating teams on effective data management techniques, including retrieval augmentation, to fine-tune large language models. He emphasized the need for data cleansing to build robust data pipelines and prevent LLMs from generating undesirable code or sensitive information.</p><p>According to Willis, the process becomes enjoyable when done correctly, especially for companies using LLMs at scale with retrieval augmentation.
To ensure success, he suggested adding governance and structure, including content moderation and red-teaming of data, which vendors may not prioritize in their offerings.</p><p>Learn more from The New Stack about DevOps and AI:</p><p><a href="https://thenewstack.io/aiops-is-devops-ready-for-an-infusion-of-artificial-intelligence/">AIOps: Is DevOps Ready for an Infusion of Artificial Intelligence?</a></p><p><a href="https://thenewstack.io/how-to-build-a-devops-engineer-in-just-six-months/">How to Build a DevOps Engineer in Just 6 Months</a></p><p><a href="https://thenewstack.io/power-up-your-devops-workflow-with-ai-and-chatgpt/">Power up Your DevOps Workflow with AI and ChatGPT</a></p>
]]></description>
      <pubDate>Thu, 21 Sep 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (John Willis, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/dont-listen-to-a-vendor-about-ai-ipmpZCoR</link>
      <content:encoded><![CDATA[<p>In this episode of The New Stack Makers, technologist and author John Willis emphasized caution when considering AI solutions from vendors. He advised against blindly following vendor recommendations for "one-size-fits-all" AI products, likening it to past advice that discouraged learning Java in favor of purchasing a product.</p><p>Willis stressed that DevOps serves as an example of how human expertise, not just products, solves problems. He urged C-level executives to first understand AI's intricacies and then make informed purchasing decisions, suggesting a "DevOps redo" to encourage experimentation and collaboration, similar to the early days of the DevOps movement.</p><p>Willis highlighted that early adopters of DevOps, like successful banks, heavily invested in developing their human capital. He cautioned against hasty product purchases, as the AI landscape is rife with startups that may quickly disappear or be acquired by larger companies.</p><p>Instead, Willis advocated for educating teams on effective data management techniques, including retrieval augmentation, to fine-tune large language models. He emphasized the need for data cleansing to build robust data pipelines and prevent LLMs from generating undesirable code or sensitive information.</p><p>According to Willis, the process becomes enjoyable when done correctly, especially for companies using LLMs at scale with retrieval augmentation.
To ensure success, he suggested adding governance and structure, including content moderation and red-teaming of data, which vendors may not prioritize in their offerings.</p><p>Learn more from The New Stack about DevOps and AI:</p><p><a href="https://thenewstack.io/aiops-is-devops-ready-for-an-infusion-of-artificial-intelligence/">AIOps: Is DevOps Ready for an Infusion of Artificial Intelligence?</a></p><p><a href="https://thenewstack.io/how-to-build-a-devops-engineer-in-just-six-months/">How to Build a DevOps Engineer in Just 6 Months</a></p><p><a href="https://thenewstack.io/power-up-your-devops-workflow-with-ai-and-chatgpt/">Power up Your DevOps Workflow with AI and ChatGPT</a></p>
]]></content:encoded>
      <enclosure length="31960964" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/db2653a5-e022-468a-b6a6-eda706548b8a/audio/4bbdcc6b-6c4d-4b07-b1c2-24118689fa3b/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Don&apos;t Listen to a Vendor About AI, Do the DevOps Redo</itunes:title>
      <itunes:author>John Willis, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:33:17</itunes:duration>
      <itunes:summary>Technologist and author John Willis, one of the pioneers of the DevOps movement, emphasizes caution when considering AI solutions from vendors.</itunes:summary>
      <itunes:subtitle>Technologist and author John Willis, one of the pioneers of the DevOps movement, emphasizes caution when considering AI solutions from vendors.</itunes:subtitle>
      <itunes:keywords>generative ai, botchagalupe technologies, software developer, john willis, ai, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, artificial intelligence, large language models, the new stack makers, software engineer, llms</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1423</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">12078aaf-339f-46fc-9216-66879e07cffc</guid>
      <title>How Apache Flink Delivers for Deliveroo</title>
      <description><![CDATA[<p>Deliveroo, a prominent food delivery company, relies on Apache Flink, a distributed processing engine, to enhance its three-sided marketplace, connecting delivery drivers, restaurants, and customers. Seeking to improve real-time data streaming and gain insights into customer behavior, Deliveroo transitioned to Flink, comparing it to alternatives like Apache Spark and Kafka Streams. Flink, with feature parity to their previous platform, offered stability and scalability. They initially experimented with Flink on Kubernetes but turned to the Amazon Managed Service for Apache Flink (MSF) for enhanced support and maintenance.</p><p>Engineers from Deliveroo, Felix Angell and Duc Anh Khu, emphasized the need for flexibility in data modeling to accommodate their fast-paced product development. However, that flexibility adds complexity, often requiring data model adjustments. They expressed the desire for a self-serve configuration feature in MSF, allowing easy customization of low-level settings and auto-scaling based on application metrics. This move to Flink and MSF has empowered Deliveroo to focus on core responsibilities like continuous integration and delivery while efficiently managing their data processing needs.</p><p>Learn more from The New Stack about Apache Flink and AWS:</p><p><a href="https://thenewstack.io/kinesis-kafka-and-amazon-managed-service-for-apache-flink/" target="_blank">Kinesis, Kafka and Amazon Managed Service for Apache Flink</a></p><p><a href="https://thenewstack.io/apache-flink-for-real-time-data-analysis/" target="_blank">Apache Flink for Real Time Data Analysis</a></p><p><a href="https://thenewstack.io/apache-flink-for-unbounded-data-streams/">Apache Flink for Unbounded Data Streams</a></p>
]]></description>
      <pubDate>Wed, 20 Sep 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Felix Angell, Duc Anh Khu, Deliveroo, Amazon Web Services, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-apache-flink-delivers-for-deliveroo-_8L_TGZ6</link>
      <content:encoded><![CDATA[<p>Deliveroo, a prominent food delivery company, relies on Apache Flink, a distributed processing engine, to enhance its three-sided marketplace, connecting delivery drivers, restaurants, and customers. Seeking to improve real-time data streaming and gain insights into customer behavior, Deliveroo transitioned to Flink, comparing it to alternatives like Apache Spark and Kafka Streams. Flink, with feature parity to their previous platform, offered stability and scalability. They initially experimented with Flink on Kubernetes but turned to the Amazon Managed Service for Apache Flink (MSF) for enhanced support and maintenance.</p><p>Engineers from Deliveroo, Felix Angell and Duc Anh Khu, emphasized the need for flexibility in data modeling to accommodate their fast-paced product development. However, that flexibility adds complexity, often requiring data model adjustments. They expressed the desire for a self-serve configuration feature in MSF, allowing easy customization of low-level settings and auto-scaling based on application metrics. This move to Flink and MSF has empowered Deliveroo to focus on core responsibilities like continuous integration and delivery while efficiently managing their data processing needs.</p><p>Learn more from The New Stack about Apache Flink and AWS:</p><p><a href="https://thenewstack.io/kinesis-kafka-and-amazon-managed-service-for-apache-flink/" target="_blank">Kinesis, Kafka and Amazon Managed Service for Apache Flink</a></p><p><a href="https://thenewstack.io/apache-flink-for-real-time-data-analysis/" target="_blank">Apache Flink for Real Time Data Analysis</a></p><p><a href="https://thenewstack.io/apache-flink-for-unbounded-data-streams/">Apache Flink for Unbounded Data Streams</a></p>
]]></content:encoded>
      <enclosure length="19814235" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/ac5452c3-712a-4029-be02-34c657e15ff5/audio/463a1da6-22c3-4c3e-b5ee-5dcf42b9a599/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Apache Flink Delivers for Deliveroo</itunes:title>
      <itunes:author>The New Stack, Felix Angell, Duc Anh Khu, Deliveroo, Amazon Web Services, Alex Williams</itunes:author>
      <itunes:duration>00:20:38</itunes:duration>
      <itunes:summary>TNS Host Alex Williams is joined by two software engineers from Deliveroo who share details about the company&apos;s transition to Apache Flink to improve real-time data streaming.</itunes:summary>
      <itunes:subtitle>TNS Host Alex Williams is joined by two software engineers from Deliveroo who share details about the company&apos;s transition to Apache Flink to improve real-time data streaming.</itunes:subtitle>
      <itunes:keywords>data processing, software developer, software engineering, tech podcast, the new stack, data management, devops, devops podcast, tech, deliveroo, developer podcast, the new stack makers, software engineer, data streaming, apache flink, real time data, aws</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1422</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c0032647-b94d-4ca4-9125-52f737635de5</guid>
      <title>A Microservices Outcome: Testing Boomed</title>
      <description><![CDATA[<p>Over the past five to ten years, the testing of microservices has seen significant growth. This surge in testing can be attributed to the increasing adoption of microservices and Kubernetes, which signify a shift away from monolithic application architectures. Bruno Lopes, a leader at Kubeshop, an incubator for Kubernetes projects, noted this trend. Kubeshop has initiated six Kubernetes projects, including Testkube, a Kubernetes-native testing framework led by Lopes.</p><p>This rise is making testing more accessible to a wider audience and enhancing the developer experience through automation. Developers now have more time to focus on innovation rather than manual testing. However, there is often a disconnect between development and testing, as developers move quickly, outpacing organizational adaptation to modern testing methods.</p><p>Lopes emphasized the importance of testing before production deployment and advocated for creating production-resembling testing environments that allow for rapid deployment without waiting for manual tests. This approach is particularly critical for Site Reliability Engineering (SRE) teams, who need to respond quickly to issues and minimize downtime for customers. In some cases, it's necessary to run tests within Kubernetes itself, a concept that may take time for companies to fully embrace as the developer experience continues to improve.</p><p>Learn more from The New Stack about Kubernetes, Testing and Testkube:</p><p><a href="https://thenewstack.io/testkube-cloud-native-testing-framework-for-kubernetes/">Testkube: A Cloud Native Testing Framework for Kubernetes</a></p><p><a href="https://thenewstack.io/top-5-challenges-in-modern-kubernetes-testing/">Top 5 Challenges in Modern Kubernetes Testing</a></p><p><a href="https://thenewstack.io/cloud-native/why-you-should-start-testing-in-the-cloud-native-way/">Why You Should Start Testing in the Cloud Native Way</a></p>
]]></description>
      <pubDate>Fri, 15 Sep 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Bruno Lopes, Kubeshop, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/a-microservices-outcome-testing-boomed-mm_KZukD</link>
      <content:encoded><![CDATA[<p>Over the past five to ten years, the testing of microservices has seen significant growth. This surge in testing can be attributed to the increasing adoption of microservices and Kubernetes, which signify a shift away from monolithic application architectures. Bruno Lopes, a leader at Kubeshop, an incubator for Kubernetes projects, noted this trend. Kubeshop has initiated six Kubernetes projects, including Testkube, a Kubernetes-native testing framework led by Lopes.</p><p>This rise is making testing more accessible to a wider audience and enhancing the developer experience through automation. Developers now have more time to focus on innovation rather than manual testing. However, there is often a disconnect between development and testing, as developers move quickly, outpacing organizational adaptation to modern testing methods.</p><p>Lopes emphasized the importance of testing before production deployment and advocated for creating production-resembling testing environments that allow for rapid deployment without waiting for manual tests. This approach is particularly critical for Site Reliability Engineering (SRE) teams, who need to respond quickly to issues and minimize downtime for customers. In some cases, it's necessary to run tests within Kubernetes itself, a concept that may take time for companies to fully embrace as the developer experience continues to improve.</p><p>Learn more from The New Stack about Kubernetes, Testing and Testkube:</p><p><a href="https://thenewstack.io/testkube-cloud-native-testing-framework-for-kubernetes/">Testkube: A Cloud Native Testing Framework for Kubernetes</a></p><p><a href="https://thenewstack.io/top-5-challenges-in-modern-kubernetes-testing/">Top 5 Challenges in Modern Kubernetes Testing</a></p><p><a href="https://thenewstack.io/cloud-native/why-you-should-start-testing-in-the-cloud-native-way/">Why You Should Start Testing in the Cloud Native Way</a></p>
]]></content:encoded>
      <enclosure length="20891315" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/43ba2e22-fcdc-4516-918f-5c5f4db4fbe5/audio/1bbf021e-c2aa-4a1d-acd4-64f26921bd72/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>A Microservices Outcome: Testing Boomed</itunes:title>
      <itunes:author>The New Stack, Bruno Lopes, Kubeshop, Alex Williams</itunes:author>
      <itunes:duration>00:21:45</itunes:duration>
      <itunes:summary>Increased adoption of microservices and Kubernetes has led to a boom in testing. Bruno Lopes of Kubeshop sits down with TNS host Alex Williams to discuss this rise in testing and what it means for the developer experience.</itunes:summary>
      <itunes:subtitle>Increased adoption of microservices and Kubernetes has led to a boom in testing. Bruno Lopes of Kubeshop sits down with TNS host Alex Williams to discuss this rise in testing and what it means for the developer experience.</itunes:subtitle>
      <itunes:keywords>software developer, software engineering, kubeshop, tech podcast, the new stack, devops, cloud native, devops podcast, tech, developer podcast, kubernetes, the new stack makers, software engineer, software testing, microservices</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1421</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f11d4acc-974b-44d1-9aed-13bc76629646</guid>
      <title>Kinesis, Kafka and Amazon Managed Service for Apache Flink</title>
      <description><![CDATA[<p>Apache Flink is an open-source framework and distributed processing engine designed for data analytics. It excels at handling tasks such as data joins, aggregations, and ETL (Extract, Transform, Load) operations. Moreover, it supports advanced real-time techniques like complex event processing.</p><p>In this episode, Deepthi Mohan and Nagesh Honnalii from AWS discussed Apache Flink and the Amazon Managed Service for Apache Flink (MSF) with our host, Alex Williams. MSF is a service that caters to customers with varying infrastructure preferences. Some prefer complete control, while others want AWS to handle all infrastructure-related aspects.</p><p>Use cases for MSF can be grouped into three categories. First, there's streaming ETL, which involves tasks like log aggregation for later auditing. Second, it supports real-time analytics, enabling customers to create dashboards for tasks like fraud detection. Third, it handles complex event processing, where data from multiple sources is joined and aggregated to extract meaningful insights.</p><p>The origins of MSF trace back to the evolution of real-time data services within AWS. In 2013, AWS introduced Amazon Kinesis, while the open-source community developed Apache Kafka. These services paved the way for MSF by highlighting the need for real-time data processing.</p><p>To provide more flexibility, AWS launched Kinesis Data Analytics in 2016, allowing customers to write code in JVM-based languages like Java and Scala. In 2018, AWS decided to incorporate Apache Flink into its Kinesis Data Analytics offering, leading to the birth of MSF.</p><p>Today, thousands of customers use MSF, and AWS continues to enhance its offerings in the real-time data processing space, including the launch of Amazon MSK (Managed Streaming for Apache Kafka). 
To align with its foundation on Flink, AWS rebranded Kinesis Data Analytics for Apache Flink to Amazon Managed Service for Apache Flink, making it clearer for customers.</p><p>Learn more from The New Stack about AWS and Apache Flink:</p><p><a href="https://thenewstack.io/apache-flink-for-real-time-data-analysis/" target="_blank">Apache Flink for Real Time Data Analysis</a></p><p><a href="https://thenewstack.io/apache-flink-for-unbounded-data-streams/">Apache Flink for Unbounded Data Streams</a></p><p><a href="https://thenewstack.io/3-reasons-why-you-need-apache-flink-for-stream-processing/">3 Reasons Why You Need Apache Flink for Stream Processing</a></p>
]]></description>
      <pubDate>Tue, 12 Sep 2023 19:35:19 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Deepthi Mohan, Nagesh Honnalii, Alex Williams, Amazon Web Services)</author>
      <link>https://thenewstack.simplecast.com/episodes/kinesis-kafka-and-amazon-managed-service-for-apache-flink-7SdDbC_v</link>
      <content:encoded><![CDATA[<p>Apache Flink is an open-source framework and distributed processing engine designed for data analytics. It excels at handling tasks such as data joins, aggregations, and ETL (Extract, Transform, Load) operations. Moreover, it supports advanced real-time techniques like complex event processing.</p><p>In this episode, Deepthi Mohan and Nagesh Honnalii from AWS discussed Apache Flink and the Amazon Managed Service for Apache Flink (MSF) with our host, Alex Williams. MSF is a service that caters to customers with varying infrastructure preferences. Some prefer complete control, while others want AWS to handle all infrastructure-related aspects.</p><p>Use cases for MSF can be grouped into three categories. First, there's streaming ETL, which involves tasks like log aggregation for later auditing. Second, it supports real-time analytics, enabling customers to create dashboards for tasks like fraud detection. Third, it handles complex event processing, where data from multiple sources is joined and aggregated to extract meaningful insights.</p><p>The origins of MSF trace back to the evolution of real-time data services within AWS. In 2013, AWS introduced Amazon Kinesis, while the open-source community developed Apache Kafka. These services paved the way for MSF by highlighting the need for real-time data processing.</p><p>To provide more flexibility, AWS launched Kinesis Data Analytics in 2016, allowing customers to write code in JVM-based languages like Java and Scala. In 2018, AWS decided to incorporate Apache Flink into its Kinesis Data Analytics offering, leading to the birth of MSF.</p><p>Today, thousands of customers use MSF, and AWS continues to enhance its offerings in the real-time data processing space, including the launch of Amazon MSK (Managed Streaming for Apache Kafka). 
To align with its foundation on Flink, AWS rebranded Kinesis Data Analytics for Apache Flink to Amazon Managed Service for Apache Flink, making it clearer for customers.</p><p>Learn more from The New Stack about AWS and Apache Flink:</p><p><a href="https://thenewstack.io/apache-flink-for-real-time-data-analysis/" target="_blank">Apache Flink for Real Time Data Analysis</a></p><p><a href="https://thenewstack.io/apache-flink-for-unbounded-data-streams/">Apache Flink for Unbounded Data Streams</a></p><p><a href="https://thenewstack.io/3-reasons-why-you-need-apache-flink-for-stream-processing/">3 Reasons Why You Need Apache Flink for Stream Processing</a></p>
]]></content:encoded>
      <enclosure length="26040991" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/888c6d1f-ffeb-4447-9c98-7677481539ab/audio/96e8083e-6104-41b7-b385-8d1866fa8208/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Kinesis, Kafka and Amazon Managed Service for Apache Flink</itunes:title>
      <itunes:author>The New Stack, Deepthi Mohan, Nagesh Honnalii, Alex Williams, Amazon Web Services</itunes:author>
      <itunes:duration>00:27:07</itunes:duration>
      <itunes:summary>A pair of experts from AWS talk to us about the emergence of Amazon Kinesis, and the eventual focus on Apache Flink as a data framework.</itunes:summary>
      <itunes:subtitle>A pair of experts from AWS talk to us about the emergence of Amazon Kinesis, and the eventual focus on Apache Flink as a data framework.</itunes:subtitle>
      <itunes:keywords>software developer, data analytics, software engineering, tech podcast, the new stack, devops, devops podcast, amazon web services, tech, developer podcast, the new stack makers, software engineer, open source, apache flink, real time data, aws</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1420</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">825adb58-362a-4e9d-84fb-7122e7232c76</guid>
      <title>What You Can Expect from a Developer Conference These Days</title>
      <description><![CDATA[<p>Modern developer conferences like the upcoming Infobip Shift Conference in Croatia are centered around themes. At this particular event for developers, you can expect much of the focus to be on the developer experience and artificial intelligence (AI).</p><p>Ivan Burazin, Chief Development Experience Officer at Infobip, joined us on the show and emphasized that developers spend a substantial portion of their time not coding, often losing 50 to 70% of their productive hours to non-coding activities, such as setting up environments, running tests, and building code. This highlights the importance of improving the developer experience to enhance productivity.</p><p>The developer experience has both internal and external dimensions. Externally, it impacts customer experience, while internally, it influences development velocity. A better developer experience translates to faster and more efficient coding.</p><p>The <a href="https://shift.infobip.com/">Shift Conference</a> will feature talks on six stages, one of which will focus on the developer experience, addressing its internal and external aspects. Additionally, AI will take center stage at another segment of the conference.</p><p>Although there may not be an abundance of true AI experts taking the stage, the focus will be on how individuals and companies can leverage AI to create products and services.
It's recognized that AI will play a pivotal role in the future of every industry, and the conference aims to explore practical applications and strategies for integrating AI into various businesses.</p><p>Overall, the Shift Conference aims to address the challenges developers face in optimizing their productivity and explore the growing importance of AI in shaping the future of businesses and products.</p><p>Learn more from The New Stack about the developer experience and InfoBip Shift:</p><p><a href="https://thenewstack.io/7-principles-and-10-tactics-to-make-you-a-10x-developer/">7 Principles and 10 Tactics to Make You a 10x Developer</a></p><p><a href="https://thenewstack.io/the-challenges-of-marketing-software-tools-to-developers/">The Challenges of Marketing Software Tools to Developers</a></p><p><a href="https://thenewstack.io/a-guide-to-better-developer-experience/">A Guide to Better Developer Experience</a></p>
]]></description>
      <pubDate>Wed, 06 Sep 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (InfoBip, Ivan Burazin, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/what-you-can-expect-from-a-developer-conference-these-days-FuXBupxq</link>
      <content:encoded><![CDATA[<p>Modern developer conferences like the upcoming Infobip Shift Conference in Croatia are centered around themes. At this particular event for developers, you can expect much of the focus to be on the developer experience and artificial intelligence (AI).</p><p>Ivan Burazin, Chief Development Experience Officer at Infobip, joined us on the show and emphasized that developers spend a substantial portion of their time not coding, often losing 50 to 70% of their productive hours to non-coding activities, such as setting up environments, running tests, and building code. This highlights the importance of improving the developer experience to enhance productivity.</p><p>The developer experience has both internal and external dimensions. Externally, it impacts customer experience, while internally, it influences development velocity. A better developer experience translates to faster and more efficient coding.</p><p>The <a href="https://shift.infobip.com/">Shift Conference</a> will feature talks on six stages, one of which will focus on the developer experience, addressing its internal and external aspects. Additionally, AI will take center stage at another segment of the conference.</p><p>Although there may not be an abundance of true AI experts taking the stage, the focus will be on how individuals and companies can leverage AI to create products and services.
It's recognized that AI will play a pivotal role in the future of every industry, and the conference aims to explore practical applications and strategies for integrating AI into various businesses.</p><p>Overall, the Shift Conference aims to address the challenges developers face in optimizing their productivity and explore the growing importance of AI in shaping the future of businesses and products.</p><p>Learn more from The New Stack about the developer experience and InfoBip Shift:</p><p><a href="https://thenewstack.io/7-principles-and-10-tactics-to-make-you-a-10x-developer/">7 Principles and 10 Tactics to Make You a 10x Developer</a></p><p><a href="https://thenewstack.io/the-challenges-of-marketing-software-tools-to-developers/">The Challenges of Marketing Software Tools to Developers</a></p><p><a href="https://thenewstack.io/a-guide-to-better-developer-experience/">A Guide to Better Developer Experience</a></p>
]]></content:encoded>
      <enclosure length="23709614" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/ecb53c9f-c255-4ff5-938f-4de26414444d/audio/45c1b196-70c1-4f4e-961f-7a3616c906b7/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What You Can Expect from a Developer Conference These Days</itunes:title>
      <itunes:author>InfoBip, Ivan Burazin, The New Stack, Alex Williams</itunes:author>
      <itunes:duration>00:24:41</itunes:duration>
      <itunes:summary>We spoke with the founder of the InfoBip Shift event about what to expect at this year&apos;s developer conference. Highlights include the developer experience and AI.</itunes:summary>
      <itunes:subtitle>We spoke with the founder of the InfoBip Shift event about what to expect at this year&apos;s developer conference. Highlights include the developer experience and AI.</itunes:subtitle>
      <itunes:keywords>developer conference, tech conference, infobip shift, software developer, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, developer experience, artificial intelligence, software development, ivan burazin, the new stack makers, infobip, software engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1419</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">da8b67e8-f3af-465f-91fd-e71abad568e1</guid>
      <title>Apache Flink for Real Time Data Analysis</title>
      <description><![CDATA[<p>This episode delves into Apache Flink, a versatile platform for executing both batch and real-time streaming data analysis tasks. This session marks the beginning of a three-part series unveiling Amazon Web Services' (AWS) new managed service built on Flink. Future episodes will explore this service in detail and examine customer experiences.</p><p>The podcast features insights from Danny Cranmer, a principal engineer at AWS and an Apache Flink PMC member and committer, along with Hong Teoh, a software development engineer at AWS.</p><p>Flink stands out as a high-level framework for defining data analytics jobs, accommodating both batch and streaming data sets. It offers APIs for building analysis jobs in various languages, including Java, Python, and SQL. Flink also provides a distributed job execution engine with fault tolerance and horizontal scaling capabilities.</p><p>One prominent use case is Extract-Transform-Load (ETL), where raw data is swiftly processed for specific workloads. Flink excels in delivering low-latency transformations for unbounded data streams. Additionally, Flink supports event-driven applications, responding immediately to triggers such as user requests for weather data.</p><p>Flink ensures exactly-once processing, critical for scenarios like financial transactions. 
It employs checkpoints to maintain data integrity in case of node failures.</p><p>The podcast also touches on AWS's role in supporting the open-source Flink project and the future outlook for this powerful data processing framework.</p><p>Learn more from The New Stack about Apache Flink:</p><p><a href="https://thenewstack.io/3-reasons-why-you-need-apache-flink-for-stream-processing/">3 Reasons Why You Need Apache Flink for Stream Processing</a></p><p><a href="https://thenewstack.io/apache-flink-for-unbounded-data-streams/">Apache Flink for Unbounded Data Streams</a></p><p><a href="https://thenewstack.io/8-real-time-data-best-practices/">8 Real-Time Data Best Practices</a></p>
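<p>The checkpoint idea described above can be illustrated with a toy Python sketch (our own simplification, not Flink's actual API): state is snapshotted together with the source offset, so after a simulated failure the job rolls back to the last snapshot and replays the stream without double-counting.</p>

```python
# Toy sketch of checkpoint-based exactly-once processing (not Flink's API).
# State is snapshotted together with the source offset; on failure the job
# restores the snapshot and replays the stream from that offset.

def run_with_checkpoints(stream, checkpoint_every=3, crash_at=None):
    state = {"sum": 0}             # running aggregate
    checkpoint = ({"sum": 0}, 0)   # (state snapshot, source offset)
    offset = 0
    while offset < len(stream):
        if crash_at is not None and offset == crash_at:
            # Simulated node failure: restore the last durable snapshot
            # and replay from its offset, as a recovered job would.
            state, offset = dict(checkpoint[0]), checkpoint[1]
            crash_at = None
            continue
        state["sum"] += stream[offset]
        offset += 1
        if offset % checkpoint_every == 0:
            checkpoint = (dict(state), offset)  # durable snapshot
    return state["sum"]
```

<p>Running the same stream with and without a simulated crash yields the same total, which is the essence of the exactly-once guarantee.</p>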
]]></description>
      <pubDate>Tue, 5 Sep 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Danny Cranmer, Hong Teoh, The New Stack, Amazon Web Services, Joab Jackson)</author>
      <link>https://thenewstack.simplecast.com/episodes/apache-flink-for-real-time-data-analysis-Bbiwp_su</link>
      <content:encoded><![CDATA[<p>This episode delves into Apache Flink, a versatile platform for executing both batch and real-time streaming data analysis tasks. This session marks the beginning of a three-part series unveiling Amazon Web Services' (AWS) new managed service built on Flink. Future episodes will explore this service in detail and examine customer experiences.</p><p>The podcast features insights from Danny Cranmer, a principal engineer at AWS and an Apache Flink PMC member and committer, along with Hong Teoh, a software development engineer at AWS.</p><p>Flink stands out as a high-level framework for defining data analytics jobs, accommodating both batch and streaming data sets. It offers APIs for building analysis jobs in various languages, including Java, Python, and SQL. Flink also provides a distributed job execution engine with fault tolerance and horizontal scaling capabilities.</p><p>One prominent use case is Extract-Transform-Load (ETL), where raw data is swiftly processed for specific workloads. Flink excels in delivering low-latency transformations for unbounded data streams. Additionally, Flink supports event-driven applications, responding immediately to triggers such as user requests for weather data.</p><p>Flink ensures exactly-once processing, critical for scenarios like financial transactions. 
It employs checkpoints to maintain data integrity in case of node failures.</p><p>The podcast also touches on AWS's role in supporting the open-source Flink project and the future outlook for this powerful data processing framework.</p><p>Learn more from The New Stack about Apache Flink:</p><p><a href="https://thenewstack.io/3-reasons-why-you-need-apache-flink-for-stream-processing/">3 Reasons Why You Need Apache Flink for Stream Processing</a></p><p><a href="https://thenewstack.io/apache-flink-for-unbounded-data-streams/">Apache Flink for Unbounded Data Streams</a></p><p><a href="https://thenewstack.io/8-real-time-data-best-practices/">8 Real-Time Data Best Practices</a></p>
]]></content:encoded>
      <enclosure length="22924269" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/70179c8d-ce11-42c6-9d94-2c14c1f71690/audio/85c3fad0-5978-4dd3-9ec2-639a7efb4821/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Apache Flink for Real Time Data Analysis</itunes:title>
      <itunes:author>Danny Cranmer, Hong Teoh, The New Stack, Amazon Web Services, Joab Jackson</itunes:author>
      <itunes:duration>00:23:52</itunes:duration>
      <itunes:summary>We explore Apache Flink, a platform for running both batch and real-time streaming data analysis jobs, with two experts from AWS.</itunes:summary>
      <itunes:subtitle>We explore Apache Flink, a platform for running both batch and real-time streaming data analysis jobs, with two experts from AWS.</itunes:subtitle>
      <itunes:keywords>tech news, software developer, data analytics, software engineering, tech podcast, the new stack, devops, software development, data science, open source, streaming data, apache flink, real time data</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1418</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">16bfc899-5f34-46dc-94f0-7b25bc29bf13</guid>
      <title>The First Thing to Tell an LLM</title>
      <description><![CDATA[<p>In an interview with The New Stack, renowned technologist Adrian Cockcroft discussed the process of fine-tuning Large Language Models (LLMs) through prompt engineering. Cockcroft, known for his roles at Netflix and Amazon Web Services, explained how to obtain tailored programming advice from an LLM. By crafting specific prompts like asking the model to provide code in the style of a certain expert programmer, such as Java's James Gosling, users can guide the AI's output.</p><p>Prompt engineering involves setting up conversations to bias the AI's responses. These prompts are becoming more advanced with plugins and loaded information that shape the model's behavior before use. Cockcroft highlighted the concept of fine-tuning, where models are adapted beyond what a prompt can contain. Companies are incorporating vast amounts of their internal data, like wiki pages and corporate documents, to train the model to understand their specific domain and processes.</p><p>Cockcroft pointed out the efficacy of ChatGPT within certain tasks, illustrated by his experience using it for data analysis and programming assistance. He also discussed the growing need for improved results from LLMs, which has led to the demand for vector databases. These databases store word meanings as vectors with associated weights, enabling fuzzy matching for enhanced information retrieval from LLMs. 
In essence, Cockcroft emphasized the multifaceted process of shaping and optimizing LLMs through prompt engineering and fine-tuning, reflecting the evolving landscape of AI-human interactions.</p><p>Learn more from The New Stack about LLMs and Prompt Engineering:</p><p><a href="https://thenewstack.io/top-5-large-language-models-and-how-to-use-them-effectively/">Top 5 Large Language Models and How to Use Them Effectively</a></p><p><a href="https://thenewstack.io/the-pros-and-con-of-customizing-large-language-models/">The Pros (And Con) of Customizing Large Language Models</a></p><p><a href="https://thenewstack.io/prompt-engineering-get-llms-to-generate-the-content-you-want/">Prompt Engineering: Get LLMs to Generate the Content You Want</a></p><p><a href="https://thenewstack.io/developer-tips-in-ai-prompt-engineering/">Developer Tips in AI Prompt Engineering</a></p>
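<p>The fuzzy matching Cockcroft describes can be sketched in a few lines of Python. This is a toy illustration (the bag-of-words "embedding" and sample documents are our own stand-ins, not a real embedding model or vector database): documents are stored as vectors, and a query retrieves the nearest one by cosine similarity.</p>

```python
# Toy sketch of the fuzzy lookup a vector database performs (not a real
# embedding model): each document is stored as a vector of word weights,
# and a query retrieves the nearest document by cosine similarity.
import math
from collections import Counter

def embed(text):
    """Bag-of-words stand-in for a learned embedding: word -> weight."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def nearest(query, docs):
    """Return the stored document most similar to the query."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

docs = [  # hypothetical stored documents
    "how to tune a large language model",
    "weekly cloud cost report",
    "vector databases store word meanings",
]
```

<p>Here a query about "storing word meanings as vectors" ranks the third document highest; real vector databases perform the same ranking over learned embeddings at much larger scale.</p>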
]]></description>
      <pubDate>Wed, 30 Aug 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Adrian Cockcroft, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-first-thing-to-tell-an-llm-FEHe0Cq_</link>
      <content:encoded><![CDATA[<p>In an interview with The New Stack, renowned technologist Adrian Cockcroft discussed the process of fine-tuning Large Language Models (LLMs) through prompt engineering. Cockcroft, known for his roles at Netflix and Amazon Web Services, explained how to obtain tailored programming advice from an LLM. By crafting specific prompts like asking the model to provide code in the style of a certain expert programmer, such as Java's James Gosling, users can guide the AI's output.</p><p>Prompt engineering involves setting up conversations to bias the AI's responses. These prompts are becoming more advanced with plugins and loaded information that shape the model's behavior before use. Cockcroft highlighted the concept of fine-tuning, where models are adapted beyond what a prompt can contain. Companies are incorporating vast amounts of their internal data, like wiki pages and corporate documents, to train the model to understand their specific domain and processes.</p><p>Cockcroft pointed out the efficacy of ChatGPT within certain tasks, illustrated by his experience using it for data analysis and programming assistance. He also discussed the growing need for improved results from LLMs, which has led to the demand for vector databases. These databases store word meanings as vectors with associated weights, enabling fuzzy matching for enhanced information retrieval from LLMs. 
In essence, Cockcroft emphasized the multifaceted process of shaping and optimizing LLMs through prompt engineering and fine-tuning, reflecting the evolving landscape of AI-human interactions.</p><p>Learn more from The New Stack about LLMs and Prompt Engineering:</p><p><a href="https://thenewstack.io/top-5-large-language-models-and-how-to-use-them-effectively/">Top 5 Large Language Models and How to Use Them Effectively</a></p><p><a href="https://thenewstack.io/the-pros-and-con-of-customizing-large-language-models/">The Pros (And Con) of Customizing Large Language Models</a></p><p><a href="https://thenewstack.io/prompt-engineering-get-llms-to-generate-the-content-you-want/">Prompt Engineering: Get LLMs to Generate the Content You Want</a></p><p><a href="https://thenewstack.io/developer-tips-in-ai-prompt-engineering/">Developer Tips in AI Prompt Engineering</a></p>
]]></content:encoded>
      <enclosure length="27679391" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/f0a962b9-1b73-472f-933c-7945e3034623/audio/c858dbc4-1267-4422-ad8b-78236022c8e8/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The First Thing to Tell an LLM</itunes:title>
      <itunes:author>Adrian Cockcroft, The New Stack, Alex Williams</itunes:author>
      <itunes:duration>00:28:49</itunes:duration>
      <itunes:summary>Technologist Adrian Cockcroft discusses the process of fine-tuning Large Language Models through prompt engineering with TNS Host Alex Williams.</itunes:summary>
      <itunes:subtitle>Technologist Adrian Cockcroft discusses the process of fine-tuning Large Language Models through prompt engineering with TNS Host Alex Williams.</itunes:subtitle>
      <itunes:keywords>software developer, software engineering, tech podcast, the new stack, devops, programming, tech, developer podcast, artificial intelligence, software development, large language models, software engineer, prompt engineering, adrian cockcroft</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1417</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3071ca3e-61be-42c3-9214-8006b571b36f</guid>
      <title>So You Want to Learn DevOps</title>
      <description><![CDATA[<p>TechWorld with Nana is one of the most popular resources for people looking to get into or advance a DevOps career. Nana Janashia, the creator of TechWorld with Nana, is a DevOps trainer and consultant who joined us to discuss why DevOps is needed now more than ever and why this is the perfect time to begin a career in DevOps.</p><p>Host Alex Williams and Nana go over the key concepts of DevOps. Then they talk about how the complexity of tools can sidetrack and complicate the learning process for those new to DevOps and why focusing on concepts rather than tools is the way to go. Before wrapping up the conversation, they also cover the best ways for people new to DevOps to get involved.</p><p>Nana's journey into DevOps commenced during her time as an engineer in Austria, where she began exploring Kubernetes. As inquiries from colleagues poured in, she recognized her knack for demystifying complex topics, catalyzing her passion for teaching. Viewers attest to switching to DevOps careers after watching her videos.</p><p>Throughout the conversation, we learned how people can discover the world of DevOps through TechWorld with Nana as an expert guide. With a large YouTube audience, online courses, workshops, and corporate training, Nana has empowered countless individuals in advancing their DevOps expertise. The six-month boot camps from TechWorld with Nana encompass a comprehensive curriculum, starting with fundamentals and culminating in hands-on programming abilities, Python automation, configuration management, and Prometheus-based monitoring.</p><p>Nana underscores that DevOps, still a relatively nascent profession, suffers from role ambiguity both among engineers and within companies aspiring to implement it. This confusion stems from differing workflows and environments when engineers switch jobs. 
Nana's insights bring clarity to these challenges, acknowledging the evolving chaos of the DevOps culture and its driving force for innovation in managing intricate distributed technologies.</p><p>Learn more about DevOps from TNS, Roadmap (our sister site), and TechWorld with Nana:</p><p><a href="https://www.techworld-with-nana.com/devops-bootcamp">TechWorld with Nana - DevOps Bootcamp</a></p><p><a href="https://www.techworld-with-nana.com/devsecops-bootcamp">TechWorld with Nana - DevSecOps Bootcamp</a></p><p><a href="https://roadmap.sh/devops">DevOps Learning Roadmap</a></p><p><a href="https://thenewstack.io/devops/">DevOps News, Trends, and Analysis</a></p>
]]></description>
      <pubDate>Thu, 24 Aug 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Nana Janashia, TechWorld with Nana, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/so-you-want-to-learn-devops-qzXeH7oK</link>
      <content:encoded><![CDATA[<p>TechWorld with Nana is one of the most popular resources for people looking to get into or advance a DevOps career. Nana Janashia, the creator of TechWorld with Nana, is a DevOps trainer and consultant who joined us to discuss why DevOps is needed now more than ever and why this is the perfect time to begin a career in DevOps.</p><p>Host Alex Williams and Nana go over the key concepts of DevOps. Then they talk about how the complexity of tools can sidetrack and complicate the learning process for those new to DevOps and why focusing on concepts rather than tools is the way to go. Before wrapping up the conversation, they also cover the best ways for people new to DevOps to get involved.</p><p>Nana's journey into DevOps commenced during her time as an engineer in Austria, where she began exploring Kubernetes. As inquiries from colleagues poured in, she recognized her knack for demystifying complex topics, catalyzing her passion for teaching. Viewers attest to switching to DevOps careers after watching her videos.</p><p>Throughout the conversation, we learned how people can discover the world of DevOps through TechWorld with Nana as an expert guide. With a large YouTube audience, online courses, workshops, and corporate training, Nana has empowered countless individuals in advancing their DevOps expertise. The six-month boot camps from TechWorld with Nana encompass a comprehensive curriculum, starting with fundamentals and culminating in hands-on programming abilities, Python automation, configuration management, and Prometheus-based monitoring.</p><p>Nana underscores that DevOps, still a relatively nascent profession, suffers from role ambiguity both among engineers and within companies aspiring to implement it. This confusion stems from differing workflows and environments when engineers switch jobs. 
Nana's insights bring clarity to these challenges, acknowledging the evolving chaos of the DevOps culture and its driving force for innovation in managing intricate distributed technologies.</p><p>Learn more about DevOps from TNS, Roadmap (our sister site), and TechWorld with Nana:</p><p><a href="https://www.techworld-with-nana.com/devops-bootcamp">TechWorld with Nana - DevOps Bootcamp</a></p><p><a href="https://www.techworld-with-nana.com/devsecops-bootcamp">TechWorld with Nana - DevSecOps Bootcamp</a></p><p><a href="https://roadmap.sh/devops">DevOps Learning Roadmap</a></p><p><a href="https://thenewstack.io/devops/">DevOps News, Trends, and Analysis</a></p>
]]></content:encoded>
      <enclosure length="28432135" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/114aafae-2073-4c28-8747-b3020f85ab02/audio/92c3587a-20bb-4ab5-a8a2-8891a79746c2/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>So You Want to Learn DevOps</itunes:title>
      <itunes:author>Nana Janashia, TechWorld with Nana, Alex Williams, The New Stack</itunes:author>
      <itunes:duration>00:29:36</itunes:duration>
      <itunes:summary>TNS Host Alex Williams is joined by special guest Nana Janashia, the creator and driving force behind the popular TechWorld with Nana community and resources all about DevOps.</itunes:summary>
      <itunes:subtitle>TNS Host Alex Williams is joined by special guest Nana Janashia, the creator and driving force behind the popular TechWorld with Nana community and resources all about DevOps.</itunes:subtitle>
      <itunes:keywords>techworld with nana, software developer, software engineering, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, developers, software engineer, devsecops</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1416</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2f17869e-06bc-40f3-8d0a-bfbce58b8dbc</guid>
      <title>Open Source AI and The Llama 2 Kerfuffle</title>
      <description><![CDATA[<p>Explore the complex intersection of AI and open source with insights from experts in this illuminating discussion. Amanda Brock, CEO of OpenUK, reveals the challenges in labeling AI as open source amidst legal ambiguities. The dialogue, led by TNS host Alex Williams, delves into the evolution of open source licensing, its departure from traditional models, and the complications arising from applying open source principles to AI, which encompasses sensitive data governed by privacy laws.</p><p>The focus turns to "Llama 2," a contentious example where Meta labeled their language model as open source, sparking confusion. Notable guests Erica Brescia, Managing Director at Redpoint Ventures, and Steven Vaughan-Nichols, founder of Open Source Watch, weigh in on this topic. Brock emphasizes that AI's complexity prevents it from aligning with the Open Source Definition, necessitating a clear distinction between open innovation and open source.</p><p>Amidst these debates, the Open Source Initiative (OSI) is crafting a new definition tailored for AI, sparking anticipation and discussion about its implications. The necessity for an evolved understanding of open source and its licenses is underscored, as the rapid evolution of technology challenges established norms. 
The journey concludes with reflections on vendors transitioning from open source licenses to Server Side Public License (SSPL) due to cloud-related considerations, raising questions about the future of open source in a dynamically changing tech landscape.</p><p>Learn more from The New Stack about open source and AI:</p><p><a href="https://thenewstack.io/open-source-may-yet-eat-googles-and-openais-ai-lunch/">Open Source May Yet Eat Google's and OpenAI's AI Lunch</a></p><p><a href="https://thenewstack.io/open-source-movement-emerging-in-ai-to-counter-greed/">Open Source Movement Emerging in AI To Counter Greed</a></p><p><a href="https://thenewstack.io/how-ai-can-learn-from-open-source-struggles/">How AI Can Learn from the Struggles of Open Source</a></p>
]]></description>
      <pubDate>Fri, 18 Aug 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Steven Vaughan-Nichols, Erica Brescia, Amanda Brock, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/open-source-ai-and-the-llama-2-kerfuffle-gCdp5lOZ</link>
      <content:encoded><![CDATA[<p>Explore the complex intersection of AI and open source with insights from experts in this illuminating discussion. Amanda Brock, CEO of OpenUK, reveals the challenges in labeling AI as open source amidst legal ambiguities. The dialogue, led by TNS host Alex Williams, delves into the evolution of open source licensing, its departure from traditional models, and the complications arising from applying open source principles to AI, which encompasses sensitive data governed by privacy laws.</p><p>The focus turns to "Llama 2," a contentious example where Meta labeled their language model as open source, sparking confusion. Notable guests Erica Brescia, Managing Director at Redpoint Ventures, and Steven Vaughan-Nichols, founder of Open Source Watch, weigh in on this topic. Brock emphasizes that AI's complexity prevents it from aligning with the Open Source Definition, necessitating a clear distinction between open innovation and open source.</p><p>Amidst these debates, the Open Source Initiative (OSI) is crafting a new definition tailored for AI, sparking anticipation and discussion about its implications. The necessity for an evolved understanding of open source and its licenses is underscored, as the rapid evolution of technology challenges established norms. 
The journey concludes with reflections on vendors transitioning from open source licenses to Server Side Public License (SSPL) due to cloud-related considerations, raising questions about the future of open source in a dynamically changing tech landscape.</p><p>Learn more from The New Stack about open source and AI:</p><p><a href="https://thenewstack.io/open-source-may-yet-eat-googles-and-openais-ai-lunch/">Open Source May Yet Eat Google's and OpenAI's AI Lunch</a></p><p><a href="https://thenewstack.io/open-source-movement-emerging-in-ai-to-counter-greed/">Open Source Movement Emerging in AI To Counter Greed</a></p><p><a href="https://thenewstack.io/how-ai-can-learn-from-open-source-struggles/">How AI Can Learn from the Struggles of Open Source</a></p>
]]></content:encoded>
      <enclosure length="33908654" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/10ed2d4a-4ba5-441a-af17-89eec18297c9/audio/866c6354-675a-414b-9332-723311219cce/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Open Source AI and The Llama 2 Kerfuffle</itunes:title>
      <itunes:author>The New Stack, Steven Vaughan-Nichols, Erica Brescia, Amanda Brock, Alex Williams</itunes:author>
      <itunes:duration>00:35:19</itunes:duration>
      <itunes:summary>The definition of open source is being challenged in the age of AI. Three experts join the conversation to discuss what needs to evolve.</itunes:summary>
      <itunes:subtitle>The definition of open source is being challenged in the age of AI. Three experts join the conversation to discuss what needs to evolve.</itunes:subtitle>
      <itunes:keywords>tech news, software developer, software engineering, ai, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, artificial intelligence, the new stack makers, software engineer, open source, cloud computing</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1415</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">767c1d19-6ea3-4b4d-b75d-5f80b1278a24</guid>
      <title>PromptOps: How Generative AI Can Help DevOps</title>
      <description><![CDATA[<p>Discover how large language models and generative AI are revolutionizing DevOps with PromptOps. The company, initially known as CtrlStack, introduces its unique process engine that comprehends human requests, reads knowledge bases, and generates code on the fly to accomplish tasks. In this podcast episode with The New Stack, Dev Nag, the CEO, explains how PromptOps saves users time and money by automating routine operations.</p><p>Dev Nag is joined by GK Brar, PromptOps' founding engineer, and our host Joab Jackson as they delve into the concept of generative AI and its potential benefits for DevOps. Traditionally, DevOps tasks often involve repetitive troubleshooting and reporting, making automation essential. PromptOps specializes in intent matching, understanding nuanced requests and providing the right solutions.</p><p>Notably, PromptOps employs generative AI offline to prepare for automating common actions and enhancing the user experience. Unlike others, PromptOps aims beyond simple enhancements. It aspires to transform the entire DevOps landscape by leveraging this groundbreaking technology.</p><p>Tune in to the podcast to gain deeper insights into this transformative approach that PromptOps brings to DevOps thanks to the power and possibilities of generative AI.</p><p>Learn more from The New Stack about DevOps and PromptOps:</p><p><a href="https://thenewstack.io/devops/">DevOps News, Trends, Analysis and Resources</a></p><p><a href="https://thenewstack.io/how-to-use-chatgpt-for-it-security-audit/">How to Use ChatGPT for IT Security Audit</a></p><p><a href="https://thenewstack.io/what-we-learned-from-building-a-chatbot/">What We Learned from Building a Chatbot</a></p>
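<p>As a rough illustration of the intent matching mentioned above (a hypothetical toy, not PromptOps' actual engine), the idea can be sketched in Python: each known intent carries trigger keywords, and a request is routed to the intent with the greatest overlap.</p>

```python
# Hypothetical toy intent matcher (not PromptOps' actual engine): each
# known intent lists trigger keywords, and a request is routed to the
# intent whose keywords overlap it most.
INTENTS = {
    "restart_service": {"restart", "bounce", "reboot", "service"},
    "fetch_logs": {"logs", "log", "tail", "errors"},
    "usage_report": {"report", "usage", "cost", "summary"},
}

def match_intent(request):
    """Return the best-matching intent name, or None if nothing matches."""
    words = set(request.lower().split())
    scores = {name: len(words & keywords) for name, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

<p>A production system would use an LLM or embeddings for this matching step; the keyword overlap here only shows the routing shape.</p>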
]]></description>
      <pubDate>Fri, 11 Aug 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, PromptOps, CtrlStack, Joab Jackson, Dev Nag, GK Brar)</author>
      <link>https://thenewstack.simplecast.com/episodes/promptops-how-generative-ai-can-help-devops-sKi48Uz2</link>
      <content:encoded><![CDATA[<p>Discover how large language models and generative AI are revolutionizing DevOps with PromptOps. The company, initially known as CtrlStack, introduces its unique process engine that comprehends human requests, reads knowledge bases, and generates code on the fly to accomplish tasks. In this podcast episode with The New Stack, Dev Nag, the CEO, explains how PromptOps saves users time and money by automating routine operations.</p><p>Dev Nag is joined by GK Brar, PromptOps' founding engineer, and our host Joab Jackson as they delve into the concept of generative AI and its potential benefits for DevOps. Traditionally, DevOps tasks often involve repetitive troubleshooting and reporting, making automation essential. PromptOps specializes in intent matching, understanding nuanced requests and providing the right solutions.</p><p>Notably, PromptOps employs generative AI offline to prepare for automating common actions and enhancing the user experience. Unlike others, PromptOps aims beyond simple enhancements. It aspires to transform the entire DevOps landscape by leveraging this groundbreaking technology.</p><p>Tune in to the podcast to gain deeper insights into this transformative approach that PromptOps brings to DevOps thanks to the power and possibilities of generative AI.</p><p>Learn more from The New Stack about DevOps and PromptOps:</p><p><a href="https://thenewstack.io/devops/">DevOps News, Trends, Analysis and Resources</a></p><p><a href="https://thenewstack.io/how-to-use-chatgpt-for-it-security-audit/">How to Use ChatGPT for IT Security Audit</a></p><p><a href="https://thenewstack.io/what-we-learned-from-building-a-chatbot/">What We Learned from Building a Chatbot</a></p>
]]></content:encoded>
      <enclosure length="12438509" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/77664024-892c-4aba-bf12-f0613e03e824/audio/bace0a0c-d738-404e-af85-c0f5cf70a75b/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>PromptOps: How Generative AI Can Help DevOps</itunes:title>
      <itunes:author>The New Stack, PromptOps, CtrlStack, Joab Jackson, Dev Nag, GK Brar</itunes:author>
      <itunes:duration>00:12:57</itunes:duration>
      <itunes:summary>Dev Nag and GK Brar of PromptOps join TNS host Joab Jackson to discuss how large language models and generative AI can enhance the world of DevOps by saving time and money through the process of automating routine operations.</itunes:summary>
      <itunes:subtitle>Dev Nag and GK Brar of PromptOps join TNS host Joab Jackson to discuss how large language models and generative AI can enhance the world of DevOps by saving time and money through the process of automating routine operations.</itunes:subtitle>
      <itunes:keywords>generative ai, software developer, software engineering, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, software development, large language models, software engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1414</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c4498c2a-f056-4190-8914-d725cc629590</guid>
      <title>Where Does WebAssembly Fit in the Cloud Native World?</title>
      <description><![CDATA[<p>In this episode, Matt Butcher, CEO of Fermyon Technologies, discusses the potential impact of the component model on WebAssembly (Wasm) and its integration into the cloud-native landscape. WebAssembly is a binary instruction format enabling code to run anywhere, written in developers' preferred languages. The component model aims to provide a common way for WebAssembly libraries to express their needs and connect with other modules, reducing the barriers to reusing and maintaining existing libraries. Butcher believes this model could be a game changer, allowing new languages to compile to WebAssembly and utilize existing libraries seamlessly.</p><p>WebAssembly also shows promise in delivering on the long-awaited potential of serverless computing. Unlike traditional virtual machines and containers, WebAssembly boasts a rapid startup time and addresses various developer challenges. Butcher states that developers have been eagerly waiting for a platform with these characteristics, hinting at a potential resurgence of serverless. He clarifies that WebAssembly is not a "Kubernetes killer" but can coexist with container technologies, evident from the Kubernetes ecosystem's interest in supporting WebAssembly.</p><p>The episode explores further developments in WebAssembly and its potential to play a central role in the cloud-native ecosystem.</p><p>Learn more from The New Stack about WebAssembly and Fermyon Technologies:</p><p><a href="https://thenewstack.io/webassembly/">WebAssembly Overview, News, and Trends</a></p><p><a href="https://thenewstack.io/webassembly/yes-webassembly-can-replace-kubernetes/">WebAssembly vs. Kubernetes</a></p><p><a href="https://thenewstack.io/fermyon-cloud-save-your-webassembly-serverless-data-locally/">Fermyon Cloud: Save Your WebAssembly Serverless Data Locally</a></p>
]]></description>
      <pubDate>Thu, 3 Aug 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Fermyon Technologies, Matt Butcher, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/where-does-webassembly-fit-in-the-cloud-native-world-W_RLg1hG</link>
      <content:encoded><![CDATA[<p>In this episode, Matt Butcher, CEO of Fermyon Technologies, discusses the potential impact of the component model on WebAssembly (Wasm) and its integration into the cloud-native landscape. WebAssembly is a binary instruction format that enables code, written in developers' preferred languages, to run anywhere. The component model aims to provide a common way for WebAssembly libraries to express their needs and connect with other modules, reducing the barriers to entry and the maintenance burden for existing libraries. Butcher believes this model could be a game changer, allowing new languages to compile to WebAssembly and utilize existing libraries seamlessly.</p><p>WebAssembly also shows promise in delivering on the long-awaited potential of serverless computing. Unlike traditional virtual machines and containers, WebAssembly boasts a rapid startup time and addresses various developer challenges. Butcher states that developers have been eagerly waiting for a platform with these characteristics, hinting at a potential resurgence of serverless. He clarifies that WebAssembly is not a "Kubernetes killer" but can coexist with container technologies, as is evident from the Kubernetes ecosystem's interest in supporting WebAssembly.</p><p>The episode explores further developments in WebAssembly and its potential to play a central role in the cloud-native ecosystem.</p><p>Learn more from The New Stack about WebAssembly and Fermyon Technologies:</p><p><a href="https://thenewstack.io/webassembly/">WebAssembly Overview, News, and Trends</a></p><p><a href="https://thenewstack.io/webassembly/yes-webassembly-can-replace-kubernetes/">WebAssembly vs. Kubernetes</a></p><p><a href="https://thenewstack.io/fermyon-cloud-save-your-webassembly-serverless-data-locally/">Fermyon Cloud: Save Your WebAssembly Serverless Data Locally</a></p>
]]></content:encoded>
      <enclosure length="26318515" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/dd577dbe-34cc-49f3-b68b-962c73647508/audio/e42b7125-a2b3-4de8-9b3b-121729a60d4b/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Where Does WebAssembly Fit in the Cloud Native World?</itunes:title>
      <itunes:author>The New Stack, Fermyon Technologies, Matt Butcher, Heather Joslyn</itunes:author>
      <itunes:duration>00:27:24</itunes:duration>
      <itunes:summary>The CEO of Fermyon Technologies, Matt Butcher, joins TNS Host Heather Joslyn to talk about how the component model is likely to help WebAssembly more quickly integrate into the cloud native landscape.</itunes:summary>
      <itunes:subtitle>The CEO of Fermyon Technologies, Matt Butcher, joins TNS Host Heather Joslyn to talk about how the component model is likely to help WebAssembly more quickly integrate into the cloud native landscape.</itunes:subtitle>
      <itunes:keywords>software developer, software engineering, wasm, tech podcast, the new stack, devops, cloud native, programming, devops podcast, tech, fermyon, developer podcast, developers, webassembly, kubernetes, the new stack makers, software engineer, cloud computing, serverless</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1413</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c4ec32dc-00d6-4876-8a6e-1e6729be888e</guid>
      <title>The Cloud Is Under Attack. How Do You Secure It?</title>
      <description><![CDATA[<p>Building and deploying applications in the cloud offers significant advantages, primarily driven by the scalability it provides. Developers appreciate the speed and ease with which cloud-based infrastructure can be set up, allowing them to scale rapidly as long as they have the necessary resources. However, the very scale that makes cloud computing attractive also poses serious risks.</p><p>The risk lies in the potential for developers to make mistakes in application building, which can lead to widespread consequences when deployed at scale. Cloud-focused attacks have seen a significant increase, tripling from 2021 to 2022, as reported in the Cloud Risk Report by CrowdStrike.</p><p>The challenges in securing the cloud are exacerbated by its relative novelty, with organizations still learning about its intricacies. The newer generation of adversaries is adept at exploiting cloud weaknesses and finding ways to attack multiple systems simultaneously. Cultural issues within organizations, such as the tension between security professionals and developers, can further complicate cloud protection.</p><p>To safeguard cloud infrastructure, best practices include adopting the principle of least privilege, regularly evaluating access rights, and avoiding hard-coded credentials. Ongoing hygiene and assessments are crucial to ensuring that access levels remain appropriate and minimizing the risk of cloud-focused attacks.</p><p>Overall, understanding and addressing the risks associated with cloud deployments are vital as cloud-native adversaries grow increasingly sophisticated. Implementing proper security measures, staying up to date on runtime security, and avoiding misconfigurations are essential to safeguarding cloud-based applications and data.</p><p>Elia Zaitsev of CrowdStrike joined TNS host Heather Joslyn for this conversation on the heels of the release of the company's Cloud Risk Report.</p><p><i>Learn more from The New Stack about cloud security and CrowdStrike:</i></p><p><a href="https://thenewstack.io/cloud-focused-attacks-growing-more-frequent-more-brazen/">Cloud-Focused Attacks Growing More Frequent, More Brazen</a></p><p><a href="https://thenewstack.io/5-best-practices-for-devsecops-teams-to-ensure-compliance/">5 Best Practices for DevSecOps Teams to Ensure Compliance</a></p><p><a href="https://thenewstack.io/what-is-devsecops/">What Is DevSecOps?</a></p>
]]></description>
      <pubDate>Fri, 28 Jul 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, CrowdStrike, Elia Zaitsev, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-cloud-is-under-attack-how-do-you-secure-it-n7TT9_pc</link>
      <content:encoded><![CDATA[<p>Building and deploying applications in the cloud offers significant advantages, primarily driven by the scalability it provides. Developers appreciate the speed and ease with which cloud-based infrastructure can be set up, allowing them to scale rapidly as long as they have the necessary resources. However, the very scale that makes cloud computing attractive also poses serious risks.</p><p>The risk lies in the potential for developers to make mistakes in application building, which can lead to widespread consequences when deployed at scale. Cloud-focused attacks have seen a significant increase, tripling from 2021 to 2022, as reported in the Cloud Risk Report by CrowdStrike.</p><p>The challenges in securing the cloud are exacerbated by its relative novelty, with organizations still learning about its intricacies. The newer generation of adversaries is adept at exploiting cloud weaknesses and finding ways to attack multiple systems simultaneously. Cultural issues within organizations, such as the tension between security professionals and developers, can further complicate cloud protection.</p><p>To safeguard cloud infrastructure, best practices include adopting the principle of least privilege, regularly evaluating access rights, and avoiding hard-coded credentials. Ongoing hygiene and assessments are crucial to ensuring that access levels remain appropriate and minimizing the risk of cloud-focused attacks.</p><p>Overall, understanding and addressing the risks associated with cloud deployments are vital as cloud-native adversaries grow increasingly sophisticated. Implementing proper security measures, staying up to date on runtime security, and avoiding misconfigurations are essential to safeguarding cloud-based applications and data.</p><p>Elia Zaitsev of CrowdStrike joined TNS host Heather Joslyn for this conversation on the heels of the release of the company's Cloud Risk Report.</p><p><i>Learn more from The New Stack about cloud security and CrowdStrike:</i></p><p><a href="https://thenewstack.io/cloud-focused-attacks-growing-more-frequent-more-brazen/">Cloud-Focused Attacks Growing More Frequent, More Brazen</a></p><p><a href="https://thenewstack.io/5-best-practices-for-devsecops-teams-to-ensure-compliance/">5 Best Practices for DevSecOps Teams to Ensure Compliance</a></p><p><a href="https://thenewstack.io/what-is-devsecops/">What Is DevSecOps?</a></p>
]]></content:encoded>
      <enclosure length="24434773" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/7e2c297f-97ec-4723-8c94-bff06ce72824/audio/d4acc3ac-b51d-4369-8f6d-6e572efdc9be/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The Cloud Is Under Attack. How Do You Secure It?</itunes:title>
      <itunes:author>The New Stack, CrowdStrike, Elia Zaitsev, Heather Joslyn</itunes:author>
      <itunes:duration>00:25:27</itunes:duration>
      <itunes:summary>In this episode, Elia Zaitsev, Global CTO of CrowdStrike, spoke with us about the growing problem of cloud-focused attacks, the challenges involved in protecting against those attacks and some best practices that can help.</itunes:summary>
      <itunes:subtitle>In this episode, Elia Zaitsev, Global CTO of CrowdStrike, spoke with us about the growing problem of cloud-focused attacks, the challenges involved in protecting against those attacks and some best practices that can help.</itunes:subtitle>
      <itunes:keywords>software developer, cybersecurity, tech podcast, devops, cloud native, devops podcast, tech, developer podcast, the new stack makers, software engineer, devsecops, cloud security</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1412</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">412990e0-6a39-4bf2-950c-1f47569e30d8</guid>
      <title>Platform Engineering Not Working Out? You&apos;re Doing It Wrong.</title>
      <description><![CDATA[<p>In this episode of The New Stack Makers, Purnima Padmanabhan, a senior vice president at VMware, discusses three common mistakes organizations make when trying to move faster in meeting customer needs. The first mistake is equating application modernization with simply moving to the cloud, which often results in a mere lift and shift of applications without reaping the full benefits. The second mistake is a lack of automation, particularly in operations, which slows the development process. The third mistake is adding unnecessary complexity by adopting new technologies or procedures, which slows down developers.</p><p>As a solution, Padmanabhan introduces the concept of platform engineering, which not only accelerates development but also reduces toil for operations engineers and architects. However, many organizations struggle to implement it effectively, as they often approach platform engineering in fragmented ways, investing in separate components without fully connecting them.</p><p>To succeed in adopting platform engineering, Padmanabhan emphasizes the need for a mindset shift. The platform team must treat platform engineering as a continuously evolving product rather than a one-time delivery, ensuring that service-level agreements are continuously met and regularly updating and improving features and velocity. The episode discusses the benefits of a well-implemented "golden path" for entire organizations and provides insights on how to start a platform engineering team.</p><p><i>Learn more from The New Stack about Platform Engineering and VMware:</i></p><p><a href="https://thenewstack.io/platform-engineering/">Platform Engineering Overview, News and Trends</a></p><p><a href="https://thenewstack.io/platform-engineers-developers-are-your-customers/">Platform Engineers: Developers Are Your Customers</a></p><p><a href="https://thenewstack.io/open-source-platform-engineering-a-decade-of-cloud-foundry/">Open Source Platform Engineering: A Decade of Cloud Foundry</a></p>
]]></description>
      <pubDate>Thu, 27 Jul 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Purnima Padmanabhan, VMware, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/platform-engineering-not-working-out-youre-doing-it-wrong-WSUJTZdm</link>
      <content:encoded><![CDATA[<p>In this episode of The New Stack Makers, Purnima Padmanabhan, a senior vice president at VMware, discusses three common mistakes organizations make when trying to move faster in meeting customer needs. The first mistake is equating application modernization with simply moving to the cloud, which often results in a mere lift and shift of applications without reaping the full benefits. The second mistake is a lack of automation, particularly in operations, which slows the development process. The third mistake is adding unnecessary complexity by adopting new technologies or procedures, which slows down developers.</p><p>As a solution, Padmanabhan introduces the concept of platform engineering, which not only accelerates development but also reduces toil for operations engineers and architects. However, many organizations struggle to implement it effectively, as they often approach platform engineering in fragmented ways, investing in separate components without fully connecting them.</p><p>To succeed in adopting platform engineering, Padmanabhan emphasizes the need for a mindset shift. The platform team must treat platform engineering as a continuously evolving product rather than a one-time delivery, ensuring that service-level agreements are continuously met and regularly updating and improving features and velocity. The episode discusses the benefits of a well-implemented "golden path" for entire organizations and provides insights on how to start a platform engineering team.</p><p><i>Learn more from The New Stack about Platform Engineering and VMware:</i></p><p><a href="https://thenewstack.io/platform-engineering/">Platform Engineering Overview, News and Trends</a></p><p><a href="https://thenewstack.io/platform-engineers-developers-are-your-customers/">Platform Engineers: Developers Are Your Customers</a></p><p><a href="https://thenewstack.io/open-source-platform-engineering-a-decade-of-cloud-foundry/">Open Source Platform Engineering: A Decade of Cloud Foundry</a></p>
]]></content:encoded>
      <enclosure length="24493288" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/55d62966-f3b2-4f2b-b239-69d7d92461d6/audio/43aee904-941c-4612-9e94-fd939be0bd21/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Platform Engineering Not Working Out? You&apos;re Doing It Wrong.</itunes:title>
      <itunes:author>The New Stack, Purnima Padmanabhan, VMware, Heather Joslyn</itunes:author>
      <itunes:duration>00:25:30</itunes:duration>
      <itunes:summary>Purnima Padmanabhan of VMware discusses platform engineering and how it helps create a “golden path” that speeds up not just development but overall agility for businesses.</itunes:summary>
      <itunes:subtitle>Purnima Padmanabhan of VMware discusses platform engineering and how it helps create a “golden path” that speeds up not just development but overall agility for businesses.</itunes:subtitle>
      <itunes:keywords>vmware, software developer, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, developers, the new stack makers, purnima padmanabhan, software engineer, platform engineering, cloud computing</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1411</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">94bfbe5f-4f15-4c92-a0fe-81f7c220a1cf</guid>
      <title>What Developers Need to Know About Business Logic Attacks</title>
      <description><![CDATA[<p>In this episode of The New Stack Makers, Peter Klimek, director of technology in the Office of the CTO at Imperva, discusses the vulnerability of business logic in a distributed, cloud-native environment. Business logic refers to the rules and processes that govern how applications function and how users interact with them and with other systems. Klimek highlights the increasing attacks on APIs that exploit business logic vulnerabilities: in 2022, 17% of attacks on APIs came from malicious bots abusing business logic.</p><p>Attacks on business logic take various forms, including credential stuffing, carding (testing stolen credit cards), and newer forms like influence fraud, in which algorithms are manipulated to deceive platforms and users. Klimek emphasizes that protecting business logic requires a cross-functional approach involving developers, operations engineers, security, and fraud teams.</p><p>To enhance business logic security, Klimek recommends conducting a threat modeling exercise within the organization, which helps identify potential risk vectors. Additionally, he suggests referring to the Open Web Application Security Project (OWASP) website's list of automated threats as a checklist during the exercise.</p><p>Ultimately, safeguarding business logic is crucial to securing cloud-native environments, and collaboration among teams is essential to effectively mitigate potential threats and attacks.</p><p>More from The New Stack, Imperva, and Peter Klimek:</p><p><a href="https://thenewstack.io/why-your-apis-arent-safe-and-what-to-do-about-it/" target="_blank">Why Your APIs Aren’t Safe — and What to Do about It</a></p><p><a href="https://thenewstack.io/zero-day-vulnerabilities-can-teach-us-about-supply-chain-security/" target="_blank">Zero-Day Vulnerabilities Can Teach Us About Supply-Chain Security</a></p><p><a href="https://thenewstack.io/graphql-apis-greater-flexibility-breeds-new-security-woes/">GraphQL APIs: Greater Flexibility Breeds New Security Woes</a></p>
]]></description>
      <pubDate>Wed, 26 Jul 2023 11:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Imperva, The New Stack, Peter Klimek, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/what-developers-need-to-know-about-business-logic-attacks-KatdsidG</link>
      <content:encoded><![CDATA[<p>In this episode of The New Stack Makers, Peter Klimek, director of technology in the Office of the CTO at Imperva, discusses the vulnerability of business logic in a distributed, cloud-native environment. Business logic refers to the rules and processes that govern how applications function and how users interact with them and with other systems. Klimek highlights the increasing attacks on APIs that exploit business logic vulnerabilities: in 2022, 17% of attacks on APIs came from malicious bots abusing business logic.</p><p>Attacks on business logic take various forms, including credential stuffing, carding (testing stolen credit cards), and newer forms like influence fraud, in which algorithms are manipulated to deceive platforms and users. Klimek emphasizes that protecting business logic requires a cross-functional approach involving developers, operations engineers, security, and fraud teams.</p><p>To enhance business logic security, Klimek recommends conducting a threat modeling exercise within the organization, which helps identify potential risk vectors. Additionally, he suggests referring to the Open Web Application Security Project (OWASP) website's list of automated threats as a checklist during the exercise.</p><p>Ultimately, safeguarding business logic is crucial to securing cloud-native environments, and collaboration among teams is essential to effectively mitigate potential threats and attacks.</p><p>More from The New Stack, Imperva, and Peter Klimek:</p><p><a href="https://thenewstack.io/why-your-apis-arent-safe-and-what-to-do-about-it/" target="_blank">Why Your APIs Aren’t Safe — and What to Do about It</a></p><p><a href="https://thenewstack.io/zero-day-vulnerabilities-can-teach-us-about-supply-chain-security/" target="_blank">Zero-Day Vulnerabilities Can Teach Us About Supply-Chain Security</a></p><p><a href="https://thenewstack.io/graphql-apis-greater-flexibility-breeds-new-security-woes/">GraphQL APIs: Greater Flexibility Breeds New Security Woes</a></p>
]]></content:encoded>
      <enclosure length="19783724" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/7b5a9d75-4606-4803-a15e-dbe8978af51f/audio/6ceb4062-eddc-46eb-8def-e4428a6bf21f/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What Developers Need to Know About Business Logic Attacks</itunes:title>
      <itunes:author>Imperva, The New Stack, Peter Klimek, Heather Joslyn</itunes:author>
      <itunes:duration>00:20:36</itunes:duration>
      <itunes:summary>Peter Klimek of Imperva discusses the vulnerability of business logic in a distributed, cloud-native environment with Heather Joslyn of The New Stack.</itunes:summary>
      <itunes:subtitle>Peter Klimek of Imperva discusses the vulnerability of business logic in a distributed, cloud-native environment with Heather Joslyn of The New Stack.</itunes:subtitle>
      <itunes:keywords>software developer, cybersecurity, it security, software engineering, tech podcast, devops, cloud native, devops podcast, tech, developer podcast, software development, the new stack makers, software engineer, api</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1410</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">875a03fd-643c-447b-ab7e-93df229e175b</guid>
      <title>Why Developers Need Vector Search</title>
      <description><![CDATA[<p>In this episode of The New Stack Makers podcast, the focus is on the challenges of handling unstructured data in today's data-rich world and the potential solutions offered by vector databases and vector search. Relational databases are of limited use when dealing with text, images, and voice data, making it difficult to uncover meaningful relationships between data points.</p><p>Vector databases, which facilitate vector searches, have become increasingly popular for addressing this issue. They allow organizations to store, search, and index data that would be challenging to manage in traditional databases. Semantic search and large language models have sparked interest in vector databases, providing developers with new possibilities.</p><p>Beyond standard applications like information search and recommendation bots, vector searches have also proven useful in combating copyright infringement. Social media companies like Facebook have pioneered this approach by using vectors to check copyrighted media uploads.</p><p>Vector databases excel at finding similarities between data objects: they operate in vector spaces and perform approximate nearest neighbor searches, sacrificing a bit of accuracy for increased efficiency. However, developers need to understand their specific use cases and the scale of their applications to make the most of vector databases and search.</p><p>Frank Liu, the director of operations at Zilliz, advised listeners to educate themselves about vector databases, vector search, and machine learning to leverage the existing ecosystem of tools effectively. One notable indexing strategy for vectors is Hierarchical Navigable Small World (HNSW), a graph-based algorithm created by Yury Malkov, a distinguished software engineer at VerSE Innovation, who also joined us along with Nils Reimers of Cohere.</p><p>It's crucial to view vector databases and search as additional tools in the developer's toolbox rather than replacements for existing database management systems or document databases. The ultimate goal is to build applications focused on user satisfaction, not just optimizing clicks. To delve deeper into the topic and explore the gaps in current tooling, check out the full episode.</p><p><a href="https://podurama.com/" target="_blank">Listen on Podurama</a></p><p>Learn more about vector databases at thenewstack.io:</p><p><a href="https://thenewstack.io/vector-databases-what-devs-need-to-know-about-how-they-work/">Vector Databases: What Devs Need to Know about How They Work</a></p><p><a href="https://thenewstack.io/vector-primer-understand-the-lingua-franca-of-generative-ai/">Vector Primer: Understand the Lingua Franca of Generative AI</a></p><p><a href="https://thenewstack.io/how-large-language-models-fuel-the-rise-of-vector-databases/">How Large Language Models Fuel the Rise of Vector Databases</a></p>
]]></description>
      <pubDate>Tue, 18 Jul 2023 20:05:32 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Zilliz, Yury Malkov, Frank Liu, Nils Reimers, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/why-developers-need-vector-search-G7JDv097</link>
      <content:encoded><![CDATA[<p>In this episode of The New Stack Makers podcast, the focus is on the challenges of handling unstructured data in today's data-rich world and the potential solutions offered by vector databases and vector search. Relational databases are of limited use when dealing with text, images, and voice data, making it difficult to uncover meaningful relationships between data points.</p><p>Vector databases, which facilitate vector searches, have become increasingly popular for addressing this issue. They allow organizations to store, search, and index data that would be challenging to manage in traditional databases. Semantic search and large language models have sparked interest in vector databases, providing developers with new possibilities.</p><p>Beyond standard applications like information search and recommendation bots, vector searches have also proven useful in combating copyright infringement. Social media companies like Facebook have pioneered this approach by using vectors to check copyrighted media uploads.</p><p>Vector databases excel at finding similarities between data objects: they operate in vector spaces and perform approximate nearest neighbor searches, sacrificing a bit of accuracy for increased efficiency. However, developers need to understand their specific use cases and the scale of their applications to make the most of vector databases and search.</p><p>Frank Liu, the director of operations at Zilliz, advised listeners to educate themselves about vector databases, vector search, and machine learning to leverage the existing ecosystem of tools effectively. One notable indexing strategy for vectors is Hierarchical Navigable Small World (HNSW), a graph-based algorithm created by Yury Malkov, a distinguished software engineer at VerSE Innovation, who also joined us along with Nils Reimers of Cohere.</p><p>It's crucial to view vector databases and search as additional tools in the developer's toolbox rather than replacements for existing database management systems or document databases. The ultimate goal is to build applications focused on user satisfaction, not just optimizing clicks. To delve deeper into the topic and explore the gaps in current tooling, check out the full episode.</p><p><a href="https://podurama.com/" target="_blank">Listen on Podurama</a></p><p>Learn more about vector databases at thenewstack.io:</p><p><a href="https://thenewstack.io/vector-databases-what-devs-need-to-know-about-how-they-work/">Vector Databases: What Devs Need to Know about How They Work</a></p><p><a href="https://thenewstack.io/vector-primer-understand-the-lingua-franca-of-generative-ai/">Vector Primer: Understand the Lingua Franca of Generative AI</a></p><p><a href="https://thenewstack.io/how-large-language-models-fuel-the-rise-of-vector-databases/">How Large Language Models Fuel the Rise of Vector Databases</a></p>
]]></content:encoded>
      <enclosure length="26464383" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/80088d83-8a0d-4e08-a8f7-fed8e24178cd/audio/11a5c201-f4cb-4c2e-aaa5-b90835d5737e/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Why Developers Need Vector Search</itunes:title>
      <itunes:author>The New Stack, Zilliz, Yury Malkov, Frank Liu, Nils Reimers, Heather Joslyn</itunes:author>
      <itunes:duration>00:27:34</itunes:duration>
      <itunes:summary>We talked to a trio of technologists to learn how vector databases and vector searches can help find connections buried in data.</itunes:summary>
      <itunes:subtitle>We talked to a trio of technologists to learn how vector databases and vector searches can help find connections buried in data.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, databases, devops, devops podcast, tech, developer podcast, software development, vector database, the new stack makers, software engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1409</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">439ba041-e2b1-43a2-8684-fb74bc305d51</guid>
      <title>How Byteboard’s CEO Decided to Fix the Broken Tech Interview</title>
      <description><![CDATA[<p>Sargun Kaur, co-founder of Byteboard, aims to revolutionize the tech interview process, which she believes is flawed and ineffective. In an interview with The New Stack for our Tech Founder Odyssey podcast series, Kaur compared assessing technical skills during interviews to evaluating the abilities of basketball star Steph Curry by asking him to draw plays on a whiteboard instead of watching him perform on the court. Kaur, a former employee of Symantec and Google, became motivated to change the interview process after a talented engineer she had coached failed a Google interview due to its impractical format.</p><p>Kaur believes that traditional tech interviews overly emphasize theoretical questions that do not reflect real-world software engineering tasks. This not only limits the talent pool but also leads to mis-hires, where approximately one in four new employees is unsuitable for their roles or teams. To address these issues, Kaur co-founded Byteboard in 2018 with Nicole Hardson-Hurley, another former Google employee. Byteboard offers project-based technical interviews, adopted by companies like Dropbox, Lyft, and Robinhood, to enhance the efficiency and fairness of their hiring processes. In recognition of their work, Kaur and Hardson-Hurley received Forbes magazine's "30 Under 30" award for enterprise technology.</p><p>Kaur's journey into the tech industry was unexpected, considering her initial disinterest in her father's software engineering career. However, exposure to programming and shadowing a female engineer at Microsoft sparked her curiosity, leading her to study computer science at the University of California, Berkeley. Overcoming initial challenges as a minority in the field, Kaur eventually joined Google as an engineer, content with the work environment and mentorship she received. However, her dissatisfaction with the interview process prompted her to apply to Google's Area 120 project incubator, leading to the creation of Byteboard. Kaur's experience with Byteboard's development and growth taught her valuable lessons about entrepreneurship, the power of founders in fundraising meetings, and the potential impact of AI on tech hiring processes.</p><p>Check out more episodes in The Tech Founder Odyssey series:</p><p><a href="https://thenewstack.io/a-lifelong-maker-tackles-a-developer-onboarding-problem/" target="_blank">A Lifelong ‘Maker’ Tackles a Developer Onboarding Problem</a></p><p><a href="https://thenewstack.io/how-teleports-leader-transitioned-from-engineer-to-ceo/" target="_blank">How Teleport’s Leader Transitioned from Engineer to CEO</a></p><p><a href="https://thenewstack.io/how-2-founders-sold-their-startup-to-aqua-security-in-a-year/">How 2 Founders Sold Their Startup to Aqua Security in a Year</a></p>
]]></description>
      <pubDate>Thu, 13 Jul 2023 22:26:49 +0000</pubDate>
      <author>podcasts@thenewstack.io (Byteboard, The New Stack, Sargun Kaur, Colleen Coll, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/tech-founder-odyssey-sargun-kaur-byteboard-s8AVWecX</link>
      <content:encoded><![CDATA[<p>Sargun Kaur, co-founder of Byteboard, aims to revolutionize the tech interview process, which she believes is flawed and ineffective. In an interview with The New Stack for our Tech Founder Odyssey podcast series, Kaur compared assessing technical skills during interviews to evaluating the abilities of basketball star Steph Curry by asking him to draw plays on a whiteboard instead of watching him perform on the court. Kaur, a former employee of Symantec and Google, became motivated to change the interview process after a talented engineer she had coached failed a Google interview due to its impractical format.</p><p>Kaur believes that traditional tech interviews overly emphasize theoretical questions that do not reflect real-world software engineering tasks. This not only limits the talent pool but also leads to mis-hires, where approximately one in four new employees is unsuitable for their roles or teams. To address these issues, Kaur co-founded Byteboard in 2018 with Nicole Hardson-Hurley, another former Google employee. Byteboard offers project-based technical interviews, adopted by companies like Dropbox, Lyft, and Robinhood, to enhance the efficiency and fairness of their hiring processes. In recognition of their work, Kaur and Hardson-Hurley received Forbes magazine's "30 Under 30" award for enterprise technology.</p><p>Kaur's journey into the tech industry was unexpected, considering her initial disinterest in her father's software engineering career. However, exposure to programming and shadowing a female engineer at Microsoft sparked her curiosity, leading her to study computer science at the University of California, Berkeley. Overcoming initial challenges as a minority in the field, Kaur eventually joined Google as an engineer, content with the work environment and mentorship she received. 
However, her dissatisfaction with the interview process prompted her to apply to Google's Area 120 project incubator, leading to the creation of Byteboard. Kaur's experience with Byteboard's development and growth taught her valuable lessons about entrepreneurship, the power of founders in fundraising meetings, and the potential impact of AI on tech hiring processes.</p><p>Check out more episodes in The Tech Founder Odyssey series:</p><p><a href="https://thenewstack.io/a-lifelong-maker-tackles-a-developer-onboarding-problem/" target="_blank">A Lifelong ‘Maker’ Tackles a Developer Onboarding Problem</a></p><p><a href="https://thenewstack.io/how-teleports-leader-transitioned-from-engineer-to-ceo/" target="_blank">How Teleport’s Leader Transitioned from Engineer to CEO</a></p><p><a href="https://thenewstack.io/how-2-founders-sold-their-startup-to-aqua-security-in-a-year/">How 2 Founders Sold Their Startup to Aqua Security in a Year</a></p>
]]></content:encoded>
      <enclosure length="35754561" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/372cf429-797c-440e-ac63-72dd0478beaf/audio/5b06c1e0-b6fd-41e5-9e81-bb3a6113d919/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Byteboard’s CEO Decided to Fix the Broken Tech Interview</itunes:title>
      <itunes:author>Byteboard, The New Stack, Sargun Kaur, Colleen Coll, Heather Joslyn</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/69bb1d68-868e-4b91-9ff1-ec4cd4ec37a8/3000x3000/the-tech-odyssey-logo-white-bg.jpg?aid=rss_feed"/>
      <itunes:duration>00:37:14</itunes:duration>
      <itunes:summary>Tech interviews are often terrible. Through the company she leads and co-founded, Byteboard, Sargun Kaur is working to change all of that. She opens up about her strategy, journey, and more in this edition of The Tech Founder Odyssey.</itunes:summary>
      <itunes:subtitle>Tech interviews are often terrible. Through the company she leads and co-founded, Byteboard, Sargun Kaur is working to change all of that. She opens up about her strategy, journey, and more in this edition of The Tech Founder Odyssey.</itunes:subtitle>
      <itunes:keywords>software developer, software engineering, tech podcast, entrepreneur, devops, devops podcast, entrepreneurship, leadership, founders, tech, developer podcast, the new stack makers, software engineer, silicon valley</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1408</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">836211ad-7510-4114-b990-5ed3a3f80ae6</guid>
      <title>A Lifelong ‘Maker’ Tackles a Developer Onboarding Problem</title>
      <description><![CDATA[<p>Shanea Leven, co-founder and CEO of CodeSee, shared her journey as a tech founder in an episode of the Tech Founder Odyssey podcast series. Despite coming to programming later than many of her peers, Leven always had a creative spark and a passion for making things. She initially pursued fashion design but taught herself programming in college and co-founded a company building custom websites for book authors. This experience eventually led her to a job at Google, where she worked in product development.</p><p>While at Google, Leven realized the challenge of deciphering legacy code and onboarding developers to it. Inspired by a presentation by Bret Victor, she came up with the idea for CodeSee—a developer platform that helps teams understand and review code bases more effectively. She started working on CodeSee in 2019 as a side project, but it soon received venture capital funding, allowing her to quit her job and focus on the startup full-time.</p><p>Leven candidly discussed the challenges of juggling a day job and a startup, particularly after receiving funding. She also shared advice on raising money from venture capitalists and building a company culture.</p><p>Listen to the full episode and check out more installments from The Tech Founder Odyssey.</p><p><a href="https://thenewstack.io/how-teleports-leader-transitioned-from-engineer-to-ceo/" target="_blank">How Teleport’s Leader Transitioned from Engineer to CEO</a></p><p><a href="https://thenewstack.io/how-2-founders-sold-their-startup-to-aqua-security-in-a-year/" target="_blank">How 2 Founders Sold Their Startup to Aqua Security in a Year</a></p><p><a href="https://thenewstack.io/how-solvos-co-founder-got-the-guts-to-be-an-entrepreneur/" target="_blank">How Solvo’s Co-Founder Got the ‘Guts’ to Be an Entrepreneur</a></p>
]]></description>
      <pubDate>Fri, 07 Jul 2023 18:17:55 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, CodeSee, Shanea Leven, Colleen Coll, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/lifelong-maker-tackles-developer-onboarding-problem-sJ8Uz9hO</link>
      <content:encoded><![CDATA[<p>Shanea Leven, co-founder and CEO of CodeSee, shared her journey as a tech founder in an episode of the Tech Founder Odyssey podcast series. Despite coming to programming later than many of her peers, Leven always had a creative spark and a passion for making things. She initially pursued fashion design but taught herself programming in college and co-founded a company building custom websites for book authors. This experience eventually led her to a job at Google, where she worked in product development.</p><p>While at Google, Leven realized the challenge of deciphering legacy code and onboarding developers to it. Inspired by a presentation by Bret Victor, she came up with the idea for CodeSee—a developer platform that helps teams understand and review code bases more effectively. She started working on CodeSee in 2019 as a side project, but it soon received venture capital funding, allowing her to quit her job and focus on the startup full-time.</p><p>Leven candidly discussed the challenges of juggling a day job and a startup, particularly after receiving funding. She also shared advice on raising money from venture capitalists and building a company culture.</p><p>Listen to the full episode and check out more installments from The Tech Founder Odyssey.</p><p><a href="https://thenewstack.io/how-teleports-leader-transitioned-from-engineer-to-ceo/" target="_blank">How Teleport’s Leader Transitioned from Engineer to CEO</a></p><p><a href="https://thenewstack.io/how-2-founders-sold-their-startup-to-aqua-security-in-a-year/" target="_blank">How 2 Founders Sold Their Startup to Aqua Security in a Year</a></p><p><a href="https://thenewstack.io/how-solvos-co-founder-got-the-guts-to-be-an-entrepreneur/" target="_blank">How Solvo’s Co-Founder Got the ‘Guts’ to Be an Entrepreneur</a></p>
]]></content:encoded>
      <enclosure length="28254701" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/860e59a9-740c-46f3-ac10-02fdd7a44efb/audio/82a1d994-39c0-480c-a262-a2b191eeace0/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>A Lifelong ‘Maker’ Tackles a Developer Onboarding Problem</itunes:title>
      <itunes:author>The New Stack, CodeSee, Shanea Leven, Colleen Coll, Heather Joslyn</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/b6f15061-5edc-47ce-8604-aca074128576/3000x3000/the-tech-odyssey-logo-white-bg.jpg?aid=rss_feed"/>
      <itunes:duration>00:29:25</itunes:duration>
      <itunes:summary>Shanea Leven shares her journey to becoming the co-founder and CEO of CodeSee, and how her creative streak motivated her to solve a common legacy code problem in this edition of The Tech Founder Odyssey.</itunes:summary>
      <itunes:subtitle>Shanea Leven shares her journey to becoming the co-founder and CEO of CodeSee, and how her creative streak motivated her to solve a common legacy code problem in this edition of The Tech Founder Odyssey.</itunes:subtitle>
      <itunes:keywords>software developer, software engineering, tech podcast, entrepreneur, the new stack, devops, devops podcast, entrepreneurship, startup, tech, developer podcast, the new stack makers, software engineer, the tech founder odyssey, tech founder odyssey</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1407</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5430cc72-1506-4af8-90ba-652b2e23aa07</guid>
      <title>5 Steps to Deploy Efficient Cloud Native Foundation AI Models</title>
      <description><![CDATA[<p>Huamin Chen, an R&D professional at Red Hat's Office of the CTO, outlines five key steps for deploying sustainable, cloud-native foundation AI models. The first two steps involve using containers and Kubernetes to manage workloads and deploy them across a distributed infrastructure. Chen suggests employing PyTorch for programming and Jupyter Notebooks for debugging and evaluation, with Docker community files proving effective for containerizing workloads.</p><p>The third step focuses on measurement and highlights the use of Prometheus, an open-source tool for event monitoring and alerting. Prometheus enables developers to gather metrics and analyze the correlation between foundation models and runtime environments.</p><p>Analytics, the fourth step, involves leveraging existing analytics while establishing guidelines and benchmarks to assess energy usage and performance metrics. Chen emphasizes the need to challenge assumptions regarding energy consumption and model performance.</p><p>Finally, the fifth step entails taking action based on the insights gained from analytics. 
By optimizing energy profiles for foundation models, the goal is to achieve greater energy efficiency, benefitting the community, society, and the environment.</p><p>Chen underscores the significance of this optimization for a more sustainable future.</p><p><a href="https://thenewstack.io/" target="_blank">Learn more at thenewstack.io</a></p><p><a href="https://thenewstack.io/pytorch-takes-ai-ml-back-to-its-research-open-source-roots/">PyTorch Takes AI/ML Back to Its Research, Open Source Roots</a></p><p><a href="https://thenewstack.io/pytorch-lightning-and-the-future-of-open-source-ai/">PyTorch Lightning and the Future of Open Source AI</a></p><p><a href="https://thenewstack.io/jupyter-notebooks-the-web-based-dev-tool-youve-been-seeking/">Jupyter Notebooks: The Web-Based Dev Tool You've Been Seeking</a></p><p><a href="https://thenewstack.io/know-the-hidden-costs-of-diy-prometheus/">Know the Hidden Costs of DIY Prometheus</a></p>
]]></description>
      <pubDate>Thu, 29 Jun 2023 00:00:40 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Huamin Chen, Red Hat, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/5-steps-to-deploy-efficient-cloud-native-foundation-ai-models-tQrd9NBk</link>
      <content:encoded><![CDATA[<p>Huamin Chen, an R&D professional at Red Hat's Office of the CTO, outlines five key steps for deploying sustainable, cloud-native foundation AI models. The first two steps involve using containers and Kubernetes to manage workloads and deploy them across a distributed infrastructure. Chen suggests employing PyTorch for programming and Jupyter Notebooks for debugging and evaluation, with Docker community files proving effective for containerizing workloads.</p><p>The third step focuses on measurement and highlights the use of Prometheus, an open-source tool for event monitoring and alerting. Prometheus enables developers to gather metrics and analyze the correlation between foundation models and runtime environments.</p><p>Analytics, the fourth step, involves leveraging existing analytics while establishing guidelines and benchmarks to assess energy usage and performance metrics. Chen emphasizes the need to challenge assumptions regarding energy consumption and model performance.</p><p>Finally, the fifth step entails taking action based on the insights gained from analytics. 
By optimizing energy profiles for foundation models, the goal is to achieve greater energy efficiency, benefitting the community, society, and the environment.</p><p>Chen underscores the significance of this optimization for a more sustainable future.</p><p><a href="https://thenewstack.io/" target="_blank">Learn more at thenewstack.io</a></p><p><a href="https://thenewstack.io/pytorch-takes-ai-ml-back-to-its-research-open-source-roots/">PyTorch Takes AI/ML Back to Its Research, Open Source Roots</a></p><p><a href="https://thenewstack.io/pytorch-lightning-and-the-future-of-open-source-ai/">PyTorch Lightning and the Future of Open Source AI</a></p><p><a href="https://thenewstack.io/jupyter-notebooks-the-web-based-dev-tool-youve-been-seeking/">Jupyter Notebooks: The Web-Based Dev Tool You've Been Seeking</a></p><p><a href="https://thenewstack.io/know-the-hidden-costs-of-diy-prometheus/">Know the Hidden Costs of DIY Prometheus</a></p>
]]></content:encoded>
      <enclosure length="15806006" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/dd5e4e93-6b11-4ed7-b6d2-7899779ebf18/audio/1e614533-044a-4079-9df1-0f238e6df42b/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>5 Steps to Deploy Efficient Cloud Native Foundation AI Models</itunes:title>
      <itunes:author>The New Stack, Huamin Chen, Red Hat, Alex Williams</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/8dc1ddaa-4481-4f64-a592-dbbf355d3b5c/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:16:27</itunes:duration>
      <itunes:summary>We discuss how to tackle resource allocation and gain efficiencies with Kubernetes in deployment, with guest Huamin Chen of Red Hat.</itunes:summary>
      <itunes:subtitle>We discuss how to tackle resource allocation and gain efficiencies with Kubernetes in deployment, with guest Huamin Chen of Red Hat.</itunes:subtitle>
      <itunes:keywords>software developer, ai, tech podcast, the new stack, devops, cloud native, programming, devops podcast, tech, developer podcast, artificial intelligence, kubernetes, the new stack makers, software engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1406</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d2a4954f-08d0-4c23-aeb8-a23df7e84e06</guid>
      <title>A Good SBOM is Hard to Find</title>
      <description><![CDATA[<p>The concept of a software bill of materials (SBOM) aims to provide consumers with information about the components inside a piece of software, enabling better assessment of potential security issues. Justin Hutchings, Senior Director of Product Management at GitHub, emphasizes the importance of SBOMs and their potential to facilitate patching without relying solely on the vendor. He spoke with Alex Williams in this episode of The New Stack Makers.</p><p>Creating a comprehensive SBOM poses challenges. Each software package is unique, such as an Android application that combines the developer's code with numerous open-source dependencies obtained through Maven packages. The SBOM should ideally serve as a machine-readable inventory of all these dependencies, enabling developers to evaluate their security.</p><p>Hutchings notes that many SBOMs fall short in being fully machine-readable, and the vulnerability landscape is even more problematic. To achieve the standards Hutchings envisions, several actions are necessary. For instance, certain programming languages make it difficult to inspect build contents, while the lack of a centralized distribution point for dependencies in languages like C and C++ complicates the enumeration and standardization of machine-readable names and versions. 
Addressing these issues across the entire software supply chain is imperative.</p><p>SBOMs hold potential for enhancing software security, but the current state of implementation and machine-readability needs improvement, particularly concerning diverse programming languages and dependency management.</p><p>Learn more at <a href="https://thenewstack.io/" target="_blank">thenewstack.io</a></p><p><a href="https://thenewstack.io/creating-a-minimum-elements-sbom-document-in-5-minutes/" target="_blank">Creating a 'Minimum Elements' SBOM Document in 5 Minutes</a></p><p><a href="https://thenewstack.io/enhance-your-sbom-success-with-slsa/" target="_blank">Enhance Your SBOM Success with SLSA</a></p><p><a href="https://thenewstack.io/how-to-create-a-software-bill-of-materials/" target="_blank">How to Create a Software Bill of Materials</a></p>
]]></description>
      <pubDate>Thu, 22 Jun 2023 23:29:47 +0000</pubDate>
      <author>podcasts@thenewstack.io (Justin Hutchings, GitHub, The New Stack, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/a-good-sbom-is-hard-to-find-5AfN_2wr</link>
      <content:encoded><![CDATA[<p>The concept of a software bill of materials (SBOM) aims to provide consumers with information about the components inside a piece of software, enabling better assessment of potential security issues. Justin Hutchings, Senior Director of Product Management at GitHub, emphasizes the importance of SBOMs and their potential to facilitate patching without relying solely on the vendor. He spoke with Alex Williams in this episode of The New Stack Makers.</p><p>Creating a comprehensive SBOM poses challenges. Each software package is unique, such as an Android application that combines the developer's code with numerous open-source dependencies obtained through Maven packages. The SBOM should ideally serve as a machine-readable inventory of all these dependencies, enabling developers to evaluate their security.</p><p>Hutchings notes that many SBOMs fall short in being fully machine-readable, and the vulnerability landscape is even more problematic. To achieve the standards Hutchings envisions, several actions are necessary. For instance, certain programming languages make it difficult to inspect build contents, while the lack of a centralized distribution point for dependencies in languages like C and C++ complicates the enumeration and standardization of machine-readable names and versions. 
Addressing these issues across the entire software supply chain is imperative.</p><p>SBOMs hold potential for enhancing software security, but the current state of implementation and machine-readability needs improvement, particularly concerning diverse programming languages and dependency management.</p><p>Learn more at <a href="https://thenewstack.io/" target="_blank">thenewstack.io</a></p><p><a href="https://thenewstack.io/creating-a-minimum-elements-sbom-document-in-5-minutes/" target="_blank">Creating a 'Minimum Elements' SBOM Document in 5 Minutes</a></p><p><a href="https://thenewstack.io/enhance-your-sbom-success-with-slsa/" target="_blank">Enhance Your SBOM Success with SLSA</a></p><p><a href="https://thenewstack.io/how-to-create-a-software-bill-of-materials/" target="_blank">How to Create a Software Bill of Materials</a></p>
]]></content:encoded>
      <enclosure length="24656292" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/4d25ae15-4b27-4887-aafc-333fd39c871a/audio/4e8f52dc-0ab0-4fc0-b759-2cbb19ecb4cc/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>A Good SBOM is Hard to Find</itunes:title>
      <itunes:author>Justin Hutchings, GitHub, The New Stack, Alex Williams</itunes:author>
      <itunes:duration>00:25:40</itunes:duration>
      <itunes:summary>Justin Hutchings of GitHub spoke with us about SBOMs and how developers can use the software bill of materials to assess a software product's security.</itunes:summary>
      <itunes:subtitle>Justin Hutchings of GitHub spoke with us about SBOMs and how developers can use the software bill of materials to assess a software product's security.</itunes:subtitle>
      <itunes:keywords>software developers, software engineering, software, the new stack, technology podcast, devops, programming, devops podcast, tech, software development, software engineers, programmers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1405</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1920fcb5-c293-495c-972f-0d77bec9aad5</guid>
      <title>The Developer&apos;s Career Path: Discover&apos;s Approach</title>
      <description><![CDATA[<p>Angel Diaz, Vice President of Technology, Capabilities, and Innovation at Discover Financial Services, spoke with TNS Host Alex Williams at the Open Source Summit in Vancouver, BC. Diaz emphasizes the importance of learning and collaboration among software engineers. He leads The Discover Technology Academy, a community of 15,000 engineers, which he describes as a place where craftsmen come together rather than an ivory tower institution.</p><p>Developers and engineers at Discover define and develop processes for software development. They start their journey by contributing atomic elements of knowledge, such as articles, blogs, videos, and tutorials, and then democratize that knowledge. Open source principles, communities, guilds, and established practices play a vital role in their work and discovery process.</p><p>Discover's developer experience revolves around the concept of the golden path, which goes beyond consuming content and includes aspects like code, automation, and setting up development environments. Pair programming and a cultural approach to learning are also incorporated into Discover's talent system.</p><p>Diaz highlights that Discover's work extends beyond their financial services company, as they share their knowledge and open source work with the external community through platforms like technology.discovered.com. 
This enables engineers to gain merit badges, such as maintainers or contributors, and showcase their expertise on professional platforms like LinkedIn.</p><p>Learn more at <a href="https://thenewstack.io/" target="_blank">thenewstack.io</a></p><p><a href="https://thenewstack.io/the-future-of-developer-careers/" target="_blank">The Future of Developer Careers</a></p><p><a href="https://thenewstack.io/platform-engineering/platform-engineer-vs-software-engineer/" target="_blank">Platform Engineer vs Software Engineer</a></p><p><a href="https://thenewstack.io/how-donating-open-source-code-can-advance-your-career/" target="_blank">How Donating Open Source Code Can Advance Your Career</a></p>
]]></description>
      <pubDate>Wed, 21 Jun 2023 20:34:11 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Angel Diaz, Discover Financial Services, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-developers-career-path-MsqH0DqN</link>
      <content:encoded><![CDATA[<p>Angel Diaz, Vice President of Technology, Capabilities, and Innovation at Discover Financial Services, spoke with TNS Host Alex Williams at the Open Source Summit in Vancouver, BC. Diaz emphasizes the importance of learning and collaboration among software engineers. He leads The Discover Technology Academy, a community of 15,000 engineers, which he describes as a place where craftsmen come together rather than an ivory tower institution.</p><p>Developers and engineers at Discover define and develop processes for software development. They start their journey by contributing atomic elements of knowledge, such as articles, blogs, videos, and tutorials, and then democratize that knowledge. Open source principles, communities, guilds, and established practices play a vital role in their work and discovery process.</p><p>Discover's developer experience revolves around the concept of the golden path, which goes beyond consuming content and includes aspects like code, automation, and setting up development environments. Pair programming and a cultural approach to learning are also incorporated into Discover's talent system.</p><p>Diaz highlights that Discover's work extends beyond their financial services company, as they share their knowledge and open source work with the external community through platforms like technology.discovered.com. 
This enables engineers to gain merit badges, such as maintainers or contributors, and showcase their expertise on professional platforms like LinkedIn.</p><p>Learn more at <a href="https://thenewstack.io/" target="_blank">thenewstack.io</a></p><p><a href="https://thenewstack.io/the-future-of-developer-careers/" target="_blank">The Future of Developer Careers</a></p><p><a href="https://thenewstack.io/platform-engineering/platform-engineer-vs-software-engineer/" target="_blank">Platform Engineer vs Software Engineer</a></p><p><a href="https://thenewstack.io/how-donating-open-source-code-can-advance-your-career/" target="_blank">How Donating Open Source Code Can Advance Your Career</a></p>
]]></content:encoded>
      <enclosure length="13867929" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/ea76fe60-ed7a-4ae8-a6ae-3a30910feb47/audio/551c53b2-f240-4f6b-aa2e-72ec3012bf06/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The Developer&apos;s Career Path: Discover&apos;s Approach</itunes:title>
      <itunes:author>The New Stack, Angel Diaz, Discover Financial Services, Alex Williams</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/67c9d7b6-1e07-45de-b651-0ca71dab2af1/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:14:26</itunes:duration>
      <itunes:summary>Angel Diaz discusses how Discover Financial Services built an academy of 15,000 software engineers to learn together as craftspeople.</itunes:summary>
      <itunes:subtitle>Angel Diaz discusses how Discover Financial Services built an academy of 15,000 software engineers to learn together as craftspeople.</itunes:subtitle>
      <itunes:keywords>software developer, software engineering, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, software engineer, open source, open source summit north america, open source summit</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1404</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">07a978c1-780e-49d7-a1bc-c87267f1d8d5</guid>
      <title>The Risks of Decomposing Software Components</title>
      <description><![CDATA[<p>The Linux Foundation's Open Source Security Foundation (OSSF) is addressing the challenge of timely software component updates to prevent security vulnerabilities like Log4J. In an interview with Alex Williams of The New Stack at the Open Source Summit in Vancouver, Omkhar Arasaratnam, the new general manager of OSSF, and Brian Behlendorf, CTO of OSSF, discuss the importance of making software secure from the start and the need for rapid response when vulnerabilities occur. </p><p>In this conversation, they highlight the significance of Software Bill of Materials (SBOMs), which provide a complete list of software components and supply chain relationships. SBOMs offer data that can aid decision-making and enable reputation tracking of repositories. The interview also touches on the issues with package managers and the quantification of software vulnerability risks. Overall, the goal is to improve the efficiency and effectiveness of software component updates and leverage data to enhance security in enterprise and production environments.</p><p>Learn more from The New Stack:</p><p><a href="https://thenewstack.io/creating-a-minimum-elements-sbom-document-in-5-minutes/" target="_blank">Creating a 'Minimum Elements' SBOM Document in 5 Minutes</a></p><p><a href="https://thenewstack.io/enhance-your-sbom-success-with-slsa/" target="_blank">Enhance Your SBOM Success with SLSA</a></p>
]]></description>
      <pubDate>Wed, 14 Jun 2023 19:26:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack, Brian Behlendorf, Omkhar Arasaratnam, The Linux Foundation, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-risks-of-decomposing-software-components-0LG3IXRz</link>
      <content:encoded><![CDATA[<p>The Linux Foundation's Open Source Security Foundation (OSSF) is addressing the challenge of timely software component updates to prevent security vulnerabilities like Log4J. In an interview with Alex Williams of The New Stack at the Open Source Summit in Vancouver, Omkhar Arasaratnam, the new general manager of OSSF, and Brian Behlendorf, CTO of OSSF, discuss the importance of making software secure from the start and the need for rapid response when vulnerabilities occur. </p><p>In this conversation, they highlight the significance of Software Bill of Materials (SBOMs), which provide a complete list of software components and supply chain relationships. SBOMs offer data that can aid decision-making and enable reputation tracking of repositories. The interview also touches on the issues with package managers and the quantification of software vulnerability risks. Overall, the goal is to improve the efficiency and effectiveness of software component updates and leverage data to enhance security in enterprise and production environments.</p><p>Learn more from The New Stack:</p><p><a href="https://thenewstack.io/creating-a-minimum-elements-sbom-document-in-5-minutes/" target="_blank">Creating a 'Minimum Elements' SBOM Document in 5 Minutes</a></p><p><a href="https://thenewstack.io/enhance-your-sbom-success-with-slsa/" target="_blank">Enhance Your SBOM Success with SLSA</a></p>
]]></content:encoded>
      <enclosure length="18562865" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/ffd8c45e-b398-4a44-b4c4-8f071e1dfaf6/audio/d058a40d-5923-45eb-ad51-cfcfc7cd1dcd/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The Risks of Decomposing Software Components</itunes:title>
      <itunes:author>The New Stack, Brian Behlendorf, Omkhar Arasaratnam, The Linux Foundation, Alex Williams</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/d377effa-f01a-49bf-8732-717be06e4acf/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:19:20</itunes:duration>
      <itunes:summary>Omkhar Arasaratnam, the new general manager of OpenSSF, and Brian Behlendorf, CTO of OpenSSF, discuss the importance of making software secure from the start and the need for rapid response when vulnerabilities occur. Hosted by Alex Williams of The New Stack.</itunes:summary>
      <itunes:subtitle>Omkhar Arasaratnam, the new general manager of OpenSSF, and Brian Behlendorf, CTO of OpenSSF, discuss the importance of making software secure from the start and the need for rapid response when vulnerabilities occur. Hosted by Alex Williams of The New Stack.</itunes:subtitle>
      <itunes:keywords>software developer, software engineering, software, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, software development, the new stack makers, software engineer, open source, linux foundation</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1403</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e2d6351f-15af-41b1-8165-a3937001e06d</guid>
      <title>How Apache Airflow Better Manages ML Pipelines</title>
      <description><![CDATA[<p>Apache Airflow is an open-source platform for building machine learning pipelines. It allows users to author, schedule, and monitor workflows, making it well-suited for tasks such as data management, model training, and deployment. In a discussion on The New Stack Makers, three technologists from Amazon Web Services (AWS) highlighted the improvements and ease of use in Apache Airflow.</p><p>Dennis Ferruzzi, a software developer at AWS, is working on updating Airflow's logging and metrics backend to the OpenTelemetry standard. This update will provide more granular metrics and better visibility into Airflow environments. Niko Oliveira, a senior software development engineer at AWS, focuses on reviewing and merging pull requests as a committer/maintainer for Apache Airflow. He has worked on making Airflow a more pluggable architecture through the implementation of AIP-51.</p><p>Raphaël Vandon, also a senior software engineer at AWS, is contributing to performance improvements and leveraging async capabilities in AWS Operators, which enable seamless interactions with AWS. The simplicity of Airflow is attributed to its Python base and the operator ecosystem contributed by companies like AWS, Google, and Databricks. Operators are like building blocks, each designed for a specific task, and can be chained together to create workflows across different cloud providers.</p><p>The latest version, Airflow 2.6, introduces sensors that wait for specific events and notifiers that act based on workflow success or failure. These additions aim to simplify the user experience. 
Overall, the growing community of contributors continues to enhance Apache Airflow, making it a popular choice for building machine learning pipelines.</p><p>Check out the full article on The New Stack:</p><p><a href="https://thenewstack.io/how-apache-airflow-better-manages-machine-learning-pipelines/" target="_blank">How Apache Airflow Better Manages Machine Learning Pipelines</a></p>
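The "building blocks chained together" idea can be illustrated with a toy sketch that does not require Airflow itself (the task names are made up, and this mimics rather than uses Airflow's real operator classes; in real Airflow, the same >> syntax wires operators into a DAG):

```python
# Toy illustration of Airflow-style operator chaining (not Airflow's actual
# implementation): each "operator" records its downstream tasks via >>.
class Operator:
    def __init__(self, task_id):
        self.task_id = task_id
        self.downstream = []

    def __rshift__(self, other):
        # a >> b means "b runs after a"; returning `other` allows a >> b >> c.
        self.downstream.append(other)
        return other

extract = Operator("extract")
train = Operator("train_model")
deploy = Operator("deploy")

# Chain the hypothetical tasks: extract -> train_model -> deploy
extract >> train >> deploy

print([t.task_id for t in extract.downstream])  # -> ['train_model']
```

In real Airflow, the blocks would be provider-contributed operators (for example AWS or Google operators) chained the same way inside a DAG definition.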
]]></description>
      <pubDate>Thu, 8 Jun 2023 21:48:08 +0000</pubDate>
      <author>podcasts@thenewstack.io (aws, The New Stack, dennis ferruzzi, niko oliveira, Raphaël Vandon, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-apache-airflow-better-manages-ml-pipelines-xkNY5xQd</link>
      <content:encoded><![CDATA[<p>Apache Airflow is an open-source platform for building machine learning pipelines. It allows users to author, schedule, and monitor workflows, making it well-suited for tasks such as data management, model training, and deployment. In a discussion on The New Stack Makers, three technologists from Amazon Web Services (AWS) highlighted the improvements and ease of use in Apache Airflow.</p><p>Dennis Ferruzzi, a software developer at AWS, is working on updating Airflow's logging and metrics backend to the OpenTelemetry standard. This update will provide more granular metrics and better visibility into Airflow environments. Niko Oliveira, a senior software development engineer at AWS, focuses on reviewing and merging pull requests as a committer/maintainer for Apache Airflow. He has worked on making Airflow a more pluggable architecture through the implementation of AIP-51.</p><p>Raphaël Vandon, also a senior software engineer at AWS, is contributing to performance improvements and leveraging async capabilities in AWS Operators, which enable seamless interactions with AWS. The simplicity of Airflow is attributed to its Python base and the operator ecosystem contributed by companies like AWS, Google, and Databricks. Operators are like building blocks, each designed for a specific task, and can be chained together to create workflows across different cloud providers.</p><p>The latest version, Airflow 2.6, introduces sensors that wait for specific events and notifiers that act based on workflow success or failure. These additions aim to simplify the user experience. 
Overall, the growing community of contributors continues to enhance Apache Airflow, making it a popular choice for building machine learning pipelines.</p><p>Check out the full article on The New Stack:</p><p><a href="https://thenewstack.io/how-apache-airflow-better-manages-machine-learning-pipelines/" target="_blank">How Apache Airflow Better Manages Machine Learning Pipelines</a></p>
]]></content:encoded>
      <enclosure length="16369415" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/eff3ce70-9f6f-41ac-b847-388d5c7adb64/audio/c8be04a3-5938-4191-ac8b-f62409426c92/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Apache Airflow Better Manages ML Pipelines</itunes:title>
      <itunes:author>aws, The New Stack, dennis ferruzzi, niko oliveira, Raphaël Vandon, Alex Williams</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/c3ce54c9-2198-4587-bf1e-f99f6ad60bb4/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:17:03</itunes:duration>
      <itunes:summary>In this episode of The New Stack Makers, a trio of technologists, who all work with Amazon Web Services Managed Service for Airflow team, talked about improving the Apache Airflow user experience.</itunes:summary>
      <itunes:subtitle>In this episode of The New Stack Makers, a trio of technologists, who all work with Amazon Web Services Managed Service for Airflow team, talked about improving the Apache Airflow user experience.</itunes:subtitle>
      <itunes:keywords>apache airflow, software developer, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, the new stack makers, software engineer, aws</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1402</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8e61354a-9d06-4a88-88be-f7457901e85d</guid>
      <title>Generative AI: What&apos;s Ahead for Enterprises?</title>
      <description><![CDATA[<p>In this episode, Nima Negahban, CEO of <a href="https://www.kinetica.com/" target="_blank">Kinetica</a>, discusses the potential impact of generative AI tools like ChatGPT on businesses and organizations. Negahban highlights the transformative potential of generative AI when combined with data analytics. One use case he mentions is an "Alexa for all your data," where real-time queries can be made about store performance or product underperformance in specific weather conditions. This could provide organizations with a new level of visibility into their operations.</p><p>Negahban identifies two major challenges in the generative AI space. The first is security, especially when using internal data to train AI models. The second challenge is ensuring accuracy in AI outputs to avoid misleading information. However, he emphasizes that generative AI tools, such as GitHub Copilot, can bring a new expectation of efficiency and innovation for developers.</p><p>The future of generative AI in the enterprise involves discovering how to orchestrate these models effectively and leverage them with organizational data. Negahban mentions the growing interest in vector search and vector database capabilities to generate embeddings and perform embedding search. 
Kinetica's processing engine, coupled with OpenAI technology, aims to enable ad hoc querying against natural language without extensive data preparation, indexing, or engineering.</p><p>Check out the episode to hear more about how the integration of generative AI and data analytics presents exciting opportunities for businesses and organizations, providing them with powerful insights and potential for creativity and innovation.</p><p><strong>Read more about Generative AI on The New Stack</strong></p><p><a href="https://thenewstack.io/is-generative-ai-augmenting-our-jobs-or-about-to-take-them/" target="_blank">Is Generative AI Augmenting Our Jobs, or About to Take Them?</a></p><p><a href="https://thenewstack.io/generative-ai-how-to-choose-the-optimal-database/" target="_blank">Generative AI: How to Choose the Optimal Database</a></p><p><a href="https://thenewstack.io/how-will-generative-ai-change-the-tech-job-market/" target="_blank">How Will Generative AI Change the Tech Job Market?</a></p><p><a href="https://thenewstack.io/generative-ai-how-companies-are-using-and-scaling-ai-models/" target="_blank">Generative AI: How Companies Are Using and Scaling AI Models</a></p>
]]></description>
      <pubDate>Wed, 7 Jun 2023 21:20:42 +0000</pubDate>
      <author>podcasts@thenewstack.io (kinetica, the new stack, Nima Negahban, Heather Joslyn)</author>
      <link>https://thenewstack.simplecast.com/episodes/generative-ai-whats-ahead-for-enterprises-S_4Kw_Q5</link>
      <content:encoded><![CDATA[<p>In this episode, Nima Negahban, CEO of <a href="https://www.kinetica.com/" target="_blank">Kinetica</a>, discusses the potential impact of generative AI tools like ChatGPT on businesses and organizations. Negahban highlights the transformative potential of generative AI when combined with data analytics. One use case he mentions is an "Alexa for all your data," where real-time queries can be made about store performance or product underperformance in specific weather conditions. This could provide organizations with a new level of visibility into their operations.</p><p>Negahban identifies two major challenges in the generative AI space. The first is security, especially when using internal data to train AI models. The second challenge is ensuring accuracy in AI outputs to avoid misleading information. However, he emphasizes that generative AI tools, such as GitHub Copilot, can bring a new expectation of efficiency and innovation for developers.</p><p>The future of generative AI in the enterprise involves discovering how to orchestrate these models effectively and leverage them with organizational data. Negahban mentions the growing interest in vector search and vector database capabilities to generate embeddings and perform embedding search. 
Kinetica's processing engine, coupled with OpenAI technology, aims to enable ad hoc querying against natural language without extensive data preparation, indexing, or engineering.</p><p>Check out the episode to hear more about how the integration of generative AI and data analytics presents exciting opportunities for businesses and organizations, providing them with powerful insights and potential for creativity and innovation.</p><p><strong>Read more about Generative AI on The New Stack</strong></p><p><a href="https://thenewstack.io/is-generative-ai-augmenting-our-jobs-or-about-to-take-them/" target="_blank">Is Generative AI Augmenting Our Jobs, or About to Take Them?</a></p><p><a href="https://thenewstack.io/generative-ai-how-to-choose-the-optimal-database/" target="_blank">Generative AI: How to Choose the Optimal Database</a></p><p><a href="https://thenewstack.io/how-will-generative-ai-change-the-tech-job-market/" target="_blank">How Will Generative AI Change the Tech Job Market?</a></p><p><a href="https://thenewstack.io/generative-ai-how-companies-are-using-and-scaling-ai-models/" target="_blank">Generative AI: How Companies Are Using and Scaling AI Models</a></p>
]]></content:encoded>
      <enclosure length="18681147" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/6f326b8d-c208-4128-95b0-98ba521b831a/audio/a38c833c-c535-4491-93b3-49f8ca825cf3/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Generative AI: What&apos;s Ahead for Enterprises?</itunes:title>
      <itunes:author>kinetica, the new stack, Nima Negahban, Heather Joslyn</itunes:author>
      <itunes:duration>00:19:27</itunes:duration>
      <itunes:summary>Nima Negahban, the CEO of Kinetica, speaks with Heather Joslyn about what could come next for companies, especially when new AI technology is paired with data analytics. The result could be “a whole new level of visibility into how your enterprise is running.”</itunes:summary>
      <itunes:subtitle>Nima Negahban, the CEO of Kinetica, speaks with Heather Joslyn about what could come next for companies, especially when new AI technology is paired with data analytics. The result could be “a whole new level of visibility into how your enterprise is running.”</itunes:subtitle>
      <itunes:keywords>generative ai, ai tools, software developer, tech podcast, the new stack, databases, devops, devops podcast, tech, developer podcast, the new stack makers, software engineer, kinetica</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1401</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">bfea6230-df56-46d6-917a-28fb1d00f665</guid>
      <title>Don&apos;t Force Containers and Disrupt Workflows</title>
      <description><![CDATA[<p>In this episode of The New Stack Makers from KubeCon EU 2023, Rob Barnes, a senior developer advocate at HashiCorp, discusses how their networking service, Consul, allows users to incorporate containers or virtual machines into their workflows without imposing container usage. Consul, an early implementation of service mesh technology, offers a full-featured control plane with service discovery, configuration, and segmentation functionalities. It supports various environments, including traditional applications, VMs, containers, and orchestration engines like Nomad and Kubernetes.</p><p>Barnes explains that Consul can dictate which services can communicate with each other based on rules. By leveraging these capabilities, HashiCorp aims to make users' lives easier and software more secure.</p><p>Barnes emphasizes that there are misconceptions about service mesh, with some assuming it is exclusively tied to container usage. He clarifies that service mesh adoption should be flexible and meet users wherever they are in their technology stack. The future of service mesh lies in educating people about its role within the broader context and addressing any knowledge gaps.</p><p>Join Rob Barnes and our host, Alex Williams, in exploring the evolving landscape of service mesh and understanding how it can enhance workflows.</p><p>Find out more about HashiCorp or the biggest news from KubeCon on The New Stack:</p><p><a href="https://thenewstack.io/hashicorp-vault-operator-manages-kubernetes-secrets/" target="_blank">HashiCorp Vault Operator Manages Kubernetes Secrets</a></p><p><a href="https://thenewstack.io/how-hashicorp-does-site-reliability-engineering/" target="_blank">How HashiCorp Does Site Reliability Engineering</a></p><p><a href="https://thenewstack.io/a-boring-kubernetes-release/" target="_blank">A Boring Kubernetes Release</a></p>
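The rule-based communication model Barnes describes can be sketched in miniature (this is a toy illustration, not Consul's actual implementation or API; the service names are hypothetical): service-to-service rules decide whether one service may call another, with everything else denied by default.

```python
# Toy sketch of service-mesh "intentions": explicit allow rules between
# (source, destination) service pairs, default deny for anything unlisted.
intentions = {("web", "api"): "allow"}

def allowed(source, destination):
    """Return True only if an explicit allow rule exists for this pair."""
    return intentions.get((source, destination), "deny") == "allow"

print(allowed("web", "api"))  # -> True
print(allowed("web", "db"))   # -> False (no rule, so default deny)
```

The point of the sketch is that the rules live in the mesh's control plane, independent of whether the services run in containers or VMs.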
]]></description>
      <pubDate>Thu, 25 May 2023 21:12:01 +0000</pubDate>
      <author>podcasts@thenewstack.io (HashiCorp, Rob Barnes, Alex Williams, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/dont-force-containers-and-disrupt-workflows-2gzI92HZ</link>
      <content:encoded><![CDATA[<p>In this episode of The New Stack Makers from KubeCon EU 2023, Rob Barnes, a senior developer advocate at HashiCorp, discusses how their networking service, Consul, allows users to incorporate containers or virtual machines into their workflows without imposing container usage. Consul, an early implementation of service mesh technology, offers a full-featured control plane with service discovery, configuration, and segmentation functionalities. It supports various environments, including traditional applications, VMs, containers, and orchestration engines like Nomad and Kubernetes.</p><p>Barnes explains that Consul can dictate which services can communicate with each other based on rules. By leveraging these capabilities, HashiCorp aims to make users' lives easier and software more secure.</p><p>Barnes emphasizes that there are misconceptions about service mesh, with some assuming it is exclusively tied to container usage. He clarifies that service mesh adoption should be flexible and meet users wherever they are in their technology stack. The future of service mesh lies in educating people about its role within the broader context and addressing any knowledge gaps.</p><p>Join Rob Barnes and our host, Alex Williams, in exploring the evolving landscape of service mesh and understanding how it can enhance workflows.</p><p>Find out more about HashiCorp or the biggest news from KubeCon on The New Stack:</p><p><a href="https://thenewstack.io/hashicorp-vault-operator-manages-kubernetes-secrets/" target="_blank">HashiCorp Vault Operator Manages Kubernetes Secrets</a></p><p><a href="https://thenewstack.io/how-hashicorp-does-site-reliability-engineering/" target="_blank">How HashiCorp Does Site Reliability Engineering</a></p><p><a href="https://thenewstack.io/a-boring-kubernetes-release/" target="_blank">A Boring Kubernetes Release</a></p>
]]></content:encoded>
      <enclosure length="12119188" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/632f963f-2eab-42dc-b17e-eb0eb2c22af1/audio/e689665a-44b7-4005-a7a7-9287472db033/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Don&apos;t Force Containers and Disrupt Workflows</itunes:title>
      <itunes:author>HashiCorp, Rob Barnes, Alex Williams, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/a5b080d8-75e8-44f2-a491-4e3f7a8c43cc/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:12:37</itunes:duration>
      <itunes:summary>In this episode of The New Stack Makers from KubeCon EU 2023 in Amsterdam, Rob Barnes, a senior developer advocate at HashiCorp, discusses how their networking service, Consul, allows users to incorporate containers or virtual machines into their workflows without imposing container usage.</itunes:summary>
      <itunes:subtitle>In this episode of The New Stack Makers from KubeCon EU 2023 in Amsterdam, Rob Barnes, a senior developer advocate at HashiCorp, discusses how their networking service, Consul, allows users to incorporate containers or virtual machines into their workflows without imposing container usage.</itunes:subtitle>
      <itunes:keywords>software developer, software engineering, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, software development, software engineer, service mesh, hashicorp</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1400</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">783ec09c-d6fb-4194-a84a-757eb60fc814</guid>
      <title>AI Talk at KubeCon</title>
      <description><![CDATA[<p>What did software engineers at KubeCon say about how AI is coming up in their work? That's a question we posed to Taylor Dolezal, head of ecosystem for the Cloud Native Computing Foundation, at KubeCon in Amsterdam. </p><p>Dolezal said AI did come up in conversation.</p><p>"I think that when it's come to this, typically with KubeCons, and other CNCF and LF events, there's always been one or two topics that have bubbled to the top," Dolezal said.</p><p>At its core, AI surfaces a data issue for users that correlates to data sharing issues, said Dolezal in this latest episode of The New Stack Makers.</p><p>Read more about AI and Kubernetes on The New Stack:</p><p><a href="https://thenewstack.io/3-important-ai-ml-tools-you-can-deploy-on-kubernetes/" target="_blank">3 Important AI/ML Tools You Can Deploy on Kubernetes</a></p><p><a href="https://thenewstack.io/flyte-an-open-source-orchestrator-for-ml-ai-workflows/" target="_blank">Flyte: An Open Source Orchestrator for ML/AI Workflows</a></p><p><a href="https://thenewstack.io/overcoming-the-kubernetes-skills-gap-with-chatgpt-assistance/" target="_blank">Overcoming the Kubernetes Skills Gap with ChatGPT Assistance</a></p>
]]></description>
      <pubDate>Wed, 24 May 2023 20:12:34 +0000</pubDate>
      <author>podcasts@thenewstack.io (cncf, The New Stack, Taylor Dolezal, Alex Williams)</author>
      <link>https://thenewstack.simplecast.com/episodes/cncf-taylor-dolezal-Sop_84XG</link>
      <content:encoded><![CDATA[<p>What did software engineers at KubeCon say about how AI is coming up in their work? That's a question we posed to Taylor Dolezal, head of ecosystem for the Cloud Native Computing Foundation, at KubeCon in Amsterdam. </p><p>Dolezal said AI did come up in conversation.</p><p>"I think that when it's come to this, typically with KubeCons, and other CNCF and LF events, there's always been one or two topics that have bubbled to the top," Dolezal said.</p><p>At its core, AI surfaces a data issue for users that correlates to data sharing issues, said Dolezal in this latest episode of The New Stack Makers.</p><p>Read more about AI and Kubernetes on The New Stack:</p><p><a href="https://thenewstack.io/3-important-ai-ml-tools-you-can-deploy-on-kubernetes/" target="_blank">3 Important AI/ML Tools You Can Deploy on Kubernetes</a></p><p><a href="https://thenewstack.io/flyte-an-open-source-orchestrator-for-ml-ai-workflows/" target="_blank">Flyte: An Open Source Orchestrator for ML/AI Workflows</a></p><p><a href="https://thenewstack.io/overcoming-the-kubernetes-skills-gap-with-chatgpt-assistance/" target="_blank">Overcoming the Kubernetes Skills Gap with ChatGPT Assistance</a></p>
]]></content:encoded>
      <enclosure length="16158346" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/88d6ec27-1b28-4c59-bc15-6ed8f6e05b03/audio/878c038e-016e-4cc3-9ef7-6bc5ce3bb6d7/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>AI Talk at KubeCon</itunes:title>
      <itunes:author>cncf, The New Stack, Taylor Dolezal, Alex Williams</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/05c9a379-9aa1-4e46-8699-e7b60832a28e/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:16:49</itunes:duration>
      <itunes:summary>What did software engineers at KubeCon say about how AI is coming up in their work? That&apos;s a question we posed to Taylor Dolezal, head of ecosystem for the Cloud Native Computing Foundation, at KubeCon in Amsterdam.</itunes:summary>
      <itunes:subtitle>What did software engineers at KubeCon say about how AI is coming up in their work? That&apos;s a question we posed to Taylor Dolezal, head of ecosystem for the Cloud Native Computing Foundation, at KubeCon in Amsterdam.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, devops, kubecon eu, devops podcast, tech, developer podcast, artificial intelligence, kubernetes, the new stack makers, software engineer, cncf, kubecon</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1399</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">76cf2f63-8799-488a-9f5e-be956511630d</guid>
      <title>A Boring Kubernetes Release</title>
      <description><![CDATA[<p><a href="https://thenewstack.io/kubernetes-1-27-arrives/">Kubernetes release 1.27</a> is boring, says <a href="https://github.com/salaxander">Xander Grzywinski</a>, a senior product manager at Microsoft.</p><p>It's a stable release, Grzywinski said on this episode of The New Stack Makers from KubeCon Europe in Amsterdam.</p><p>"It's reached a level of stability at this point," said Grzywinski. "The core feature set has become more fleshed out and fully realized."</p><p>The release has 60 total features, Grzywinski said. The features in 1.27 are solid refinements of features that have been around for a while. It's helping Kubernetes be as stable as it can be.</p><p>Examples?</p><p>It has a better developer experience, Grzywinski said. Storage primitives and APIs are more stable.</p>
]]></description>
      <pubDate>Mon, 22 May 2023 20:39:38 +0000</pubDate>
      <author>podcasts@thenewstack.io (Cloud Native Computing Foundation, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/kubernetes-127-NgFsDuDc</link>
      <content:encoded><![CDATA[<p><a href="https://thenewstack.io/kubernetes-1-27-arrives/">Kubernetes release 1.27</a> is boring, says <a href="https://github.com/salaxander">Xander Grzywinski</a>, a senior product manager at Microsoft.</p><p>It's a stable release, Grzywinski said on this episode of The New Stack Makers from KubeCon Europe in Amsterdam.</p><p>"It's reached a level of stability at this point," said Grzywinski. "The core feature set has become more fleshed out and fully realized."</p><p>The release has 60 total features, Grzywinski said. The features in 1.27 are solid refinements of features that have been around for a while. It's helping Kubernetes be as stable as it can be.</p><p>Examples?</p><p>It has a better developer experience, Grzywinski said. Storage primitives and APIs are more stable.</p>
]]></content:encoded>
      <enclosure length="14456834" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/ea02f6c6-ae13-4450-bf61-50e33e93b14d/audio/b29e6ab3-2645-4d74-8853-419afc501554/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>A Boring Kubernetes Release</itunes:title>
      <itunes:author>Cloud Native Computing Foundation, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/0e17a809-7f85-4425-9497-a6946e662940/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:15:03</itunes:duration>
      <itunes:summary>Kubernetes release 1.27 is boring, says Xander Grzywinski, a senior product manager at Microsoft.

It&apos;s a stable release, Grzywinski said on this episode of The New Stack Makers from KubeCon Europe in Amsterdam.

&quot;It&apos;s reached a level of stability at this point,&quot; said Grzywinski. &quot;The core feature set has become more fleshed out and fully realized.&quot;

The release has 60 total features, Grzywinski said. The features in 1.27 are solid refinements of features that have been around for a while. It&apos;s helping Kubernetes be as stable as it can be.

Examples?

It has a better developer experience, Grzywinski said. Storage primitives and APIs are more stable.</itunes:summary>
      <itunes:subtitle>Kubernetes release 1.27 is boring, says Xander Grzywinski, a senior product manager at Microsoft.

It&apos;s a stable release, Grzywinski said on this episode of The New Stack Makers from KubeCon Europe in Amsterdam.

&quot;It&apos;s reached a level of stability at this point,&quot; said Grzywinski. &quot;The core feature set has become more fleshed out and fully realized.&quot;

The release has 60 total features, Grzywinski said. The features in 1.27 are solid refinements of features that have been around for a while. It&apos;s helping Kubernetes be as stable as it can be.

Examples?

It has a better developer experience, Grzywinski said. Storage primitives and APIs are more stable.</itunes:subtitle>
      <itunes:keywords>cloud native computing foundation, software developer, tech podcast, alex williams, the new stack, devops, devops podcast, tech, developer podcast, the new stack makers, software engineer, xander grzywinski, cncf</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1398</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3c62ca52-22cb-43b8-aa4f-4effeb94a046</guid>
      <title>How Teleport’s Leader Transitioned from Engineer to CEO</title>
      <description><![CDATA[<p>The mystery and miracle of flight sparked <a href="https://www.linkedin.com/in/kontsevoy/">Ev Kontsevoy’s</a> interest in engineering as a child growing up in the Soviet Union.</p><p>“When I was a kid, when I saw like airplane flying over, I was having a really hard time not stopping and staring at it until it's gone,” said Kontsevoy, co-founder and CEO of Teleport, in this episode of the Tech Founders Odyssey podcast series. “I really wanted to figure out how to make it fly.”</p><p>Inevitably, he said, the engineering path led him to computers, where he was thrilled by the power he could wield through programming. “You're a teenager, no one really listens to you yet, but you tell a computer to go print number 10 ... and then you say, do it a million times. And the stupid computer just prints 10 million. You feel like a magician that just bends like machines to your will.”</p><p>In this episode of the series, part of The New Stack Makers podcast, Kontsevoy discussed his journey to co-founding Teleport, an infrastructure access platform, with TNS co-hosts <a href="https://thenewstack.io/author/colleen/">Colleen Coll</a> and <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a>.</p>
]]></description>
      <pubDate>Thu, 4 May 2023 03:00:33 +0000</pubDate>
      <author>podcasts@thenewstack.io (teleport, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/tfo-ev-kontsevoy-teleport-HPFDCdKE</link>
      <content:encoded><![CDATA[<p>The mystery and miracle of flight sparked <a href="https://www.linkedin.com/in/kontsevoy/">Ev Kontsevoy’s</a> interest in engineering as a child growing up in the Soviet Union.</p><p>“When I was a kid, when I saw like airplane flying over, I was having a really hard time not stopping and staring at it until it's gone,” said Kontsevoy, co-founder and CEO of Teleport, in this episode of the Tech Founders Odyssey podcast series. “I really wanted to figure out how to make it fly.”</p><p>Inevitably, he said, the engineering path led him to computers, where he was thrilled by the power he could wield through programming. “You're a teenager, no one really listens to you yet, but you tell a computer to go print number 10 ... and then you say, do it a million times. And the stupid computer just prints 10 million. You feel like a magician that just bends like machines to your will.”</p><p>In this episode of the series, part of The New Stack Makers podcast, Kontsevoy discussed his journey to co-founding Teleport, an infrastructure access platform, with TNS co-hosts <a href="https://thenewstack.io/author/colleen/">Colleen Coll</a> and <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a>.</p>
]]></content:encoded>
      <enclosure length="32257749" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/acef70e2-051c-4815-8841-c0f86f897336/audio/cfa8a458-64db-4fe7-8fcd-80b113f1eb57/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Teleport’s Leader Transitioned from Engineer to CEO</itunes:title>
      <itunes:author>teleport, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/b4ff54a7-28bb-4c73-a436-454cf514b06d/3000x3000/the-tech-odyssey-logo-white-bg.jpg?aid=rss_feed"/>
      <itunes:duration>00:33:35</itunes:duration>
      <itunes:summary>The mystery and miracle of flight sparked Ev Kontsevoy’s interest in engineering as a child growing up in the Soviet Union.

“When I was a kid, when I saw like airplane flying over, I was having a really hard time not stopping and staring at it until it&apos;s gone,” said Kontsevoy, co-founder and CEO of Teleport, in this episode of the Tech Founders Odyssey podcast series. “I really wanted to figure out how to make it fly.”

Inevitably, he said, the engineering path led him to computers, where he was thrilled by the power he could wield through programming. “You&apos;re a teenager, no one really listens to you yet, but you tell a computer to go print number 10 ... and then you say, do it a million times. And the stupid computer just prints 10 million. You feel like a magician that just bends like machines to your will.”

In this episode of the series, part of The New Stack Makers podcast, Kontsevoy discussed his journey to co-founding Teleport, an infrastructure access platform, with TNS co-hosts Colleen Coll and Heather Joslyn.</itunes:summary>
      <itunes:subtitle>The mystery and miracle of flight sparked Ev Kontsevoy’s interest in engineering as a child growing up in the Soviet Union.

“When I was a kid, when I saw like airplane flying over, I was having a really hard time not stopping and staring at it until it&apos;s gone,” said Kontsevoy, co-founder and CEO of Teleport, in this episode of the Tech Founders Odyssey podcast series. “I really wanted to figure out how to make it fly.”

Inevitably, he said, the engineering path led him to computers, where he was thrilled by the power he could wield through programming. “You&apos;re a teenager, no one really listens to you yet, but you tell a computer to go print number 10 ... and then you say, do it a million times. And the stupid computer just prints 10 million. You feel like a magician that just bends like machines to your will.”

In this episode of the series, part of The New Stack Makers podcast, Kontsevoy discussed his journey to co-founding Teleport, an infrastructure access platform, with TNS co-hosts Colleen Coll and Heather Joslyn.</itunes:subtitle>
      <itunes:keywords>software developer, teleport, tech podcast, the new stack, heather joslyn, devops, devops podcast, tech, developer podcast, the new stack makers, ev kontsevoy, software engineer, colleen coll</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1397</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">62d25b6e-e2b2-4baf-83fa-6548749fe611</guid>
      <title>Developer Tool Integrations with AI -- The AWS Approach</title>
      <description><![CDATA[<p>Developer tool integration and AI differentiate workflows, helping developers achieve the "fluid" state they strive for in their work.</p><p><a href="https://docs.aws.amazon.com/codecatalyst/latest/userguide/welcome.html">Amazon CodeCatalyst</a> and <a href="https://docs.aws.amazon.com/codewhisperer/latest/userguide/what-is-cwspr.html">Amazon CodeWhisperer</a> exemplify how developer workflows are accelerating and helping to create these fluid states. That's a big part of the story we hear from Harry Mower, director of AWS DevOps Services, and Doug Seven, director of Software Development for AWS CodeWhisperer, from our recording in Seattle earlier in April for this week's <a href="https://pages.awscloud.com/GLOBAL-event-LS-aws-developer-innovation-day-2023-reg-event.html">AWS Developer Innovation Day</a>.</p><p>CodeCatalyst serves as an end-to-end integrated DevOps toolchain that provides developers with everything they need to go from planning through to deployment, Mower said. CodeWhisperer is an AI coding companion that generates whole-line and full-function code recommendations in an integrated development environment (IDE).</p><p>CodeWhisperer is part of the IDE, Seven said. The acceleration is two-fold: CodeCatalyst speeds the end-to-end integration process, and CodeWhisperer accelerates writing code through generative AI.</p>
]]></description>
      <pubDate>Thu, 27 Apr 2023 07:08:14 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/developer-tool-integrations-with-ai-the-aws-approach-pGZkxeZz</link>
      <content:encoded><![CDATA[<p>Developer tool integration and AI differentiate workflows, helping developers achieve the "fluid" state they strive for in their work.</p><p><a href="https://docs.aws.amazon.com/codecatalyst/latest/userguide/welcome.html">Amazon CodeCatalyst</a> and <a href="https://docs.aws.amazon.com/codewhisperer/latest/userguide/what-is-cwspr.html">Amazon CodeWhisperer</a> exemplify how developer workflows are accelerating and helping to create these fluid states. That's a big part of the story we hear from Harry Mower, director of AWS DevOps Services, and Doug Seven, director of Software Development for AWS CodeWhisperer, from our recording in Seattle earlier in April for this week's <a href="https://pages.awscloud.com/GLOBAL-event-LS-aws-developer-innovation-day-2023-reg-event.html">AWS Developer Innovation Day</a>.</p><p>CodeCatalyst serves as an end-to-end integrated DevOps toolchain that provides developers with everything they need to go from planning through to deployment, Mower said. CodeWhisperer is an AI coding companion that generates whole-line and full-function code recommendations in an integrated development environment (IDE).</p><p>CodeWhisperer is part of the IDE, Seven said. The acceleration is two-fold: CodeCatalyst speeds the end-to-end integration process, and CodeWhisperer accelerates writing code through generative AI.</p>
]]></content:encoded>
      <enclosure length="20489239" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/4be2c0a3-795d-490e-bc64-9b6d29d9a301/audio/af92e6e1-aef3-469c-9e4b-459713c914d8/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Developer Tool Integrations with AI -- The AWS Approach</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/337fe1f0-6d2c-45d9-9ac4-26a144bc3caa/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:21:20</itunes:duration>
      <itunes:summary>Developer tool integration and AI differentiate workflows, helping developers achieve the &quot;fluid&quot; state they strive for in their work.

Amazon CodeCatalyst and Amazon CodeWhisperer exemplify how developer workflows are accelerating and helping to create these fluid states. That&apos;s a big part of the story we hear from Harry Mower, director of AWS DevOps Services, and Doug Seven, director of Software Development for AWS CodeWhisperer, from our recording in Seattle earlier in April for this week&apos;s AWS Developer Innovation Day.

CodeCatalyst serves as an end-to-end integrated DevOps toolchain that provides developers with everything they need to go from planning through to deployment, Mower said. CodeWhisperer is an AI coding companion that generates whole-line and full-function code recommendations in an integrated development environment (IDE).

CodeWhisperer is part of the IDE, Seven said. The acceleration is two-fold: CodeCatalyst speeds the end-to-end integration process, and CodeWhisperer accelerates writing code through generative AI.</itunes:summary>
      <itunes:subtitle>Developer tool integration and AI differentiate workflows, helping developers achieve the &quot;fluid&quot; state they strive for in their work.

Amazon CodeCatalyst and Amazon CodeWhisperer exemplify how developer workflows are accelerating and helping to create these fluid states. That&apos;s a big part of the story we hear from Harry Mower, director of AWS DevOps Services, and Doug Seven, director of Software Development for AWS CodeWhisperer, from our recording in Seattle earlier in April for this week&apos;s AWS Developer Innovation Day.

CodeCatalyst serves as an end-to-end integrated DevOps toolchain that provides developers with everything they need to go from planning through to deployment, Mower said. CodeWhisperer is an AI coding companion that generates whole-line and full-function code recommendations in an integrated development environment (IDE).

CodeWhisperer is part of the IDE, Seven said. The acceleration is two-fold: CodeCatalyst speeds the end-to-end integration process, and CodeWhisperer accelerates writing code through generative AI.</itunes:subtitle>
      <itunes:keywords>generative ai, harry mower, code catalyst, software developers, ai, tech podcast, alex williams, the new stack, devops, developer podcast, software development, aws developer innovation day, the new stack makers, code whisperer, doug seven, aws</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1396</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">eb470260-11ed-420a-be9e-804a064c847b</guid>
      <title>CircleCI CTO on How to Quickly Recover From a Malicious Hack</title>
      <description><![CDATA[<p>Just as everyone was heading out to the New Year's holidays last year, CTO <a href="https://www.linkedin.com/in/robzuber/">Rob Zuber</a> got a surprise of a most unwelcome sort. A customer alerted CircleCI to suspicious GitHub OAuth activity. Although the scope of the attack appeared limited, there was still no telling if other customers of the DevOps-friendly continuous integration and continuous delivery platform were impacted.</p><p>This notification kicked off a deeper review by CircleCI’s security team with GitHub, and they rotated all GitHub OAuth tokens on behalf of their customers. On January 4, the company also made the difficult but necessary decision to alert customers of this “security incident,” asking them to <a href="https://circleci.com/blog/january-4-2023-security-alert/">immediately rotate any and all stored secrets</a> and review internal logs for any unauthorized access.</p><p>In this latest episode of The New Stack Makers podcast, we discuss with Zuber the attack and how CircleCI responded. We also talk about what other companies should do to avoid the same situation, and what to do should it happen again.</p>
]]></description>
      <pubDate>Thu, 20 Apr 2023 09:46:23 +0000</pubDate>
      <author>podcasts@thenewstack.io (circleci, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/circleci-update-rob-zuber-_vKSCyFM</link>
      <content:encoded><![CDATA[<p>Just as everyone was heading out to the New Year's holidays last year, CTO <a href="https://www.linkedin.com/in/robzuber/">Rob Zuber</a> got a surprise of a most unwelcome sort. A customer alerted CircleCI to suspicious GitHub OAuth activity. Although the scope of the attack appeared limited, there was still no telling if other customers of the DevOps-friendly continuous integration and continuous delivery platform were impacted.</p><p>This notification kicked off a deeper review by CircleCI’s security team with GitHub, and they rotated all GitHub OAuth tokens on behalf of their customers. On January 4, the company also made the difficult but necessary decision to alert customers of this “security incident,” asking them to <a href="https://circleci.com/blog/january-4-2023-security-alert/">immediately rotate any and all stored secrets</a> and review internal logs for any unauthorized access.</p><p>In this latest episode of The New Stack Makers podcast, we discuss with Zuber the attack and how CircleCI responded. We also talk about what other companies should do to avoid the same situation, and what to do should it happen again.</p>
]]></content:encoded>
      <enclosure length="22772132" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/f2aa2bbf-a31b-467f-90b9-1086f3a1c007/audio/f0d9c6b9-5e5b-4861-b18e-99380b67b33c/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>CircleCI CTO on How to Quickly Recover From a Malicious Hack</itunes:title>
      <itunes:author>circleci, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/797e8e28-3f9e-476b-98e4-4f7a62b9040a/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:23:43</itunes:duration>
      <itunes:summary>Just as everyone was heading out to the New Year&apos;s holidays last year, CTO Rob Zuber got a surprise of a most unwelcome sort. A customer alerted CircleCI to suspicious GitHub OAuth activity. Although the scope of the attack appeared limited, there was still no telling if other customers of the DevOps-friendly continuous integration and continuous delivery platform were impacted.

This notification kicked off a deeper review by CircleCI’s security team with GitHub, and they rotated all GitHub OAuth tokens on behalf of their customers. On January 4, the company also made the difficult but necessary decision to alert customers of this “security incident,” asking them to immediately rotate any and all stored secrets and review internal logs for any unauthorized access.

In this latest episode of The New Stack Makers podcast, we discuss with Zuber the attack and how CircleCI responded. We also talk about what other companies should do to avoid the same situation, and what to do should it happen again.</itunes:summary>
      <itunes:subtitle>Just as everyone was heading out to the New Year&apos;s holidays last year, CTO Rob Zuber got a surprise of a most unwelcome sort. A customer alerted CircleCI to suspicious GitHub OAuth activity. Although the scope of the attack appeared limited, there was still no telling if other customers of the DevOps-friendly continuous integration and continuous delivery platform were impacted.

This notification kicked off a deeper review by CircleCI’s security team with GitHub, and they rotated all GitHub OAuth tokens on behalf of their customers. On January 4, the company also made the difficult but necessary decision to alert customers of this “security incident,” asking them to immediately rotate any and all stored secrets and review internal logs for any unauthorized access.

In this latest episode of The New Stack Makers podcast, we discuss with Zuber the attack and how CircleCI responded. We also talk about what other companies should do to avoid the same situation, and what to do should it happen again.</itunes:subtitle>
      <itunes:keywords>software developer, joab jackson, tech podcast, the new stack, circleci, devops, devops podcast, tech, developer podcast, the new stack makers, software engineer, rob zuber</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1395</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">fb6d3721-47cd-43b2-b07e-f7342e4eb4aa</guid>
      <title>What Are the Next Steps for Feature Flags?</title>
      <description><![CDATA[<p><a href="https://thenewstack.io/moving-to-the-cloud-presents-new-use-cases-for-feature-flags/">Feature flags,</a> the toggles in software development that allow you to turn certain features on or off for certain customers or audiences, offer release management at scale, according to <a href="https://www.linkedin.com/in/karishmairani">Karishma Irani,</a> head of product at LaunchDarkly.</p><p>But they also help unleash innovation, as she told host <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a> of The New Stack in this episode of The New Stack Makers podcast. And that points the way to a future where the potential for easy testing can inspire new features and products, Irani said.</p><p>“We've observed that when the risk of releasing something is lowered, when the risk of introducing bugs in production or breaking something is reduced, our customers feel organically motivated to be more innovative and think about new ideas and take risks,” she said.</p>
]]></description>
      <pubDate>Wed, 12 Apr 2023 19:09:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Launch Darkly, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/launch-darkly-feature-flags-3-3-Imp9GhrK</link>
      <content:encoded><![CDATA[<p><a href="https://thenewstack.io/moving-to-the-cloud-presents-new-use-cases-for-feature-flags/">Feature flags,</a> the toggles in software development that allow you to turn certain features on or off for certain customers or audiences, offer release management at scale, according to <a href="https://www.linkedin.com/in/karishmairani">Karishma Irani,</a> head of product at LaunchDarkly.</p><p>But they also help unleash innovation, as she told host <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a> of The New Stack in this episode of The New Stack Makers podcast. And that points the way to a future where the potential for easy testing can inspire new features and products, Irani said.</p><p>“We've observed that when the risk of releasing something is lowered, when the risk of introducing bugs in production or breaking something is reduced, our customers feel organically motivated to be more innovative and think about new ideas and take risks,” she said.</p>
]]></content:encoded>
      <enclosure length="26649121" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/96b405c6-efdc-4f5b-8304-1e6544428684/audio/6fbaa1db-809d-462e-a6bf-83305ebec251/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What Are the Next Steps for Feature Flags?</itunes:title>
      <itunes:author>Launch Darkly, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/e74b5773-89ee-4589-9193-88d6a13b11a8/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:27:45</itunes:duration>
      <itunes:summary>Feature flags, the toggles in software development that allow you to turn certain features on or off for certain customers or audiences, offer release management at scale, according to Karishma Irani, head of product at LaunchDarkly.

But they also help unleash innovation, as she told host Heather Joslyn of The New Stack in this episode of The New Stack Makers podcast. And that points the way to a future where the potential for easy testing can inspire new features and products, Irani said.

“We&apos;ve observed that when the risk of releasing something is lowered, when the risk of introducing bugs in production or breaking something is reduced, our customers feel organically motivated to be more innovative and think about new ideas and take risks,” she said.</itunes:summary>
      <itunes:subtitle>Feature flags, the toggles in software development that allow you to turn certain features on or off for certain customers or audiences, offer release management at scale, according to Karishma Irani, head of product at LaunchDarkly.

But they also help unleash innovation, as she told host Heather Joslyn of The New Stack in this episode of The New Stack Makers podcast. And that points the way to a future where the potential for easy testing can inspire new features and products, Irani said.

“We&apos;ve observed that when the risk of releasing something is lowered, when the risk of introducing bugs in production or breaking something is reduced, our customers feel organically motivated to be more innovative and think about new ideas and take risks,” she said.</itunes:subtitle>
      <itunes:keywords>karishma irani, software developer, tech podcast, feature flags, the new stack, heather joslyn, devops, devops podcast, tech, developer podcast, the new stack makers, software engineer, launch darkly</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1394</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9926ed19-71c8-4446-bb7a-a068b889d86c</guid>
      <title>KubeCon + CloudNativeCon EU 2023: Hello Amsterdam</title>
      <description><![CDATA[<p>Hoi Europe and beyond!</p><p>Once again it is time for cloud native enthusiasts and professionals to converge and discuss cloud native computing in all its <a href="https://thenewstack.io/how-to-optimize-database-efficiency-in-kubernetes/">efficiency</a> and <a href="https://thenewstack.io/developer-portals-can-abstract-away-kubernetes-complexity/">complexity</a>. The Cloud Native Computing Foundation's KubeCon+CloudNativeCon 2023 is being held later this month in Amsterdam, April 18 - 21, at the Rai Convention Centre.</p><p>In this latest edition of <a href="https://thenewstack.io/podcasts/">The New Stack podcast</a>, we spoke with two of the event's co-chairs who helped define this year's themes for the show, which is expected to draw over 9,000 attendees: <a href="https://www.linkedin.com/in/subramanianaparna/">Aparna Subramanian</a>, Shopify's Director of Production Engineering for Infrastructure; and Cloud Native Infra and Security Enterprise Architect <a href="https://www.linkedin.com/in/fkautz">Frederick Kautz</a>. </p>
]]></description>
      <pubDate>Wed, 5 Apr 2023 19:38:58 +0000</pubDate>
      <author>podcasts@thenewstack.io (cncf, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/kccnc-eu-2023-pre-event-_CMh2Cjr</link>
      <content:encoded><![CDATA[<p>Hoi Europe and beyond!</p><p>Once again it is time for cloud native enthusiasts and professionals to converge and discuss cloud native computing in all its <a href="https://thenewstack.io/how-to-optimize-database-efficiency-in-kubernetes/">efficiency</a> and <a href="https://thenewstack.io/developer-portals-can-abstract-away-kubernetes-complexity/">complexity</a>. The Cloud Native Computing Foundation's KubeCon+CloudNativeCon 2023 is being held later this month in Amsterdam, April 18 - 21, at the Rai Convention Centre.</p><p>In this latest edition of <a href="https://thenewstack.io/podcasts/">The New Stack podcast</a>, we spoke with two of the event's co-chairs who helped define this year's themes for the show, which is expected to draw over 9,000 attendees: <a href="https://www.linkedin.com/in/subramanianaparna/">Aparna Subramanian</a>, Shopify's Director of Production Engineering for Infrastructure; and Cloud Native Infra and Security Enterprise Architect <a href="https://www.linkedin.com/in/fkautz">Frederick Kautz</a>. </p>
]]></content:encoded>
      <enclosure length="24148053" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/7a1493d0-649a-4dd8-be95-0dbde3419652/audio/eb5df56d-32c1-422d-b63c-8fc8e7ef4644/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>KubeCon + CloudNativeCon EU 2023: Hello Amsterdam</itunes:title>
      <itunes:author>cncf, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/23d3f5dd-bb57-499e-a27f-956d51165936/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:25:09</itunes:duration>
      <itunes:summary>Hoi Europe and beyond!

Once again it is time for cloud native enthusiasts and professionals to converge and discuss cloud native computing in all its efficiency and complexity. The Cloud Native Computing Foundation&apos;s KubeCon+CloudNativeCon 2023 is being held later this month in Amsterdam, April 18 - 21, at the Rai Convention Centre.

In this latest edition of The New Stack podcast, we spoke with two of the event&apos;s co-chairs who helped define this year&apos;s themes for the show, which is expected to draw over 9,000 attendees: Aparna Subramanian, Shopify&apos;s Director of Production Engineering for Infrastructure; and Cloud Native Infra and Security Enterprise Architect Frederick Kautz. </itunes:summary>
      <itunes:subtitle>Hoi Europe and beyond!

Once again it is time for cloud native enthusiasts and professionals to converge and discuss cloud native computing in all its efficiency and complexity. The Cloud Native Computing Foundation&apos;s KubeCon+CloudNativeCon 2023 is being held later this month in Amsterdam, April 18 - 21, at the Rai Convention Centre.

In this latest edition of The New Stack podcast, we spoke with two of the event&apos;s co-chairs who helped define this year&apos;s themes for the show, which is expected to draw over 9,000 attendees: Aparna Subramanian, Shopify&apos;s Director of Production Engineering for Infrastructure; and Cloud Native Infra and Security Enterprise Architect Frederick Kautz. </itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, devops, kubecon eu, devops podcast, tech, developer podcast, the new stack makers, software engineer, cncf, kubecon</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1393</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f4d9cede-02ed-4287-8bb1-dca6d9b1c458</guid>
      <title>The End of Programming is Nigh</title>
      <description><![CDATA[<p>Is the end of programming nigh?</p><p>If you ask Matt Welsh, he'd say yes. As Richard MacManus <a href="https://thenewstack.io/coding-sucks-anyway-matt-welsh-on-the-end-of-programming/" target="_blank">wrote on The New Stack</a>, Welsh is a former professor of computer science at Harvard who spoke at a virtual meetup of the <a href="http://www.chicagoacm.org/" target="_blank">Chicago Association for Computing Machinery</a> (ACM), explaining his thesis that ChatGPT and GitHub Copilot represent the beginning of the end of programming.</p><p>Welsh joined us on The New Stack Makers to discuss his perspectives about the end of programming and answer questions about the future of computer science, distributed computing, and more.</p><p>Welsh is now the founder of <a href="https://www.fixie.ai/">fixie.ai</a>, where he is building a platform that lets companies develop applications on top of large language models and extend them with different capabilities.</p><p>For 40 to 50 years, programming language design has had one goal: make it easier to write programs, Welsh said in the interview.</p><p>Still, programming languages are complex, Welsh said, and no amount of work is going to make them simple.</p>
]]></description>
      <pubDate>Wed, 29 Mar 2023 17:42:06 +0000</pubDate>
      <author>podcasts@thenewstack.io (fixie.ai, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-end-of-programming-TiFJotI3</link>
      <content:encoded><![CDATA[<p>Is the end of programming nigh?</p><p>If you ask Matt Welsh, he'd say yes. As Richard MacManus <a href="https://thenewstack.io/coding-sucks-anyway-matt-welsh-on-the-end-of-programming/" target="_blank">wrote on The New Stack</a>, Welsh is a former professor of computer science at Harvard who spoke at a virtual meetup of the <a href="http://www.chicagoacm.org/" target="_blank">Chicago Association for Computing Machinery</a> (ACM), explaining his thesis that ChatGPT and GitHub Copilot represent the beginning of the end of programming.</p><p>Welsh joined us on The New Stack Makers to discuss his perspectives about the end of programming and answer questions about the future of computer science, distributed computing, and more.</p><p>Welsh is now the founder of <a href="https://www.fixie.ai/">fixie.ai</a>, where he is building a platform that lets companies develop applications on top of large language models and extend them with different capabilities.</p><p>For 40 to 50 years, programming language design has had one goal: make it easier to write programs, Welsh said in the interview.</p><p>Still, programming languages are complex, Welsh said, and no amount of work is going to make them simple.</p>
]]></content:encoded>
      <enclosure length="30442937" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/ada58ccb-7019-4dd8-8831-d6daa98ec99b/audio/ab06a7ff-e093-44ee-a6bb-286fe39b06da/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The End of Programming is Nigh</itunes:title>
      <itunes:author>fixie.ai, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/4e9f6c75-0b01-49c2-807c-2195bd220a76/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:31:42</itunes:duration>
      <itunes:summary>Is the end of programming nigh?

If you ask Matt Welsh, he&apos;d say yes. As Richard MacManus wrote on The New Stack, Welsh is a former professor of computer science at Harvard who spoke at a virtual meetup of the Chicago Association for Computing Machinery (ACM), explaining his thesis that ChatGPT and GitHub Copilot represent the beginning of the end of programming.

Welsh joined us on The New Stack Makers to discuss his perspectives about the end of programming and answer questions about the future of computer science, distributed computing, and more.

Welsh is now the founder of fixie.ai, where he is building a platform that lets companies develop applications on top of large language models and extend them with different capabilities.

For 40 to 50 years, programming language design has had one goal: make it easier to write programs, Welsh said in the interview.

Still, programming languages are complex, Welsh said, and no amount of work is going to make them simple.</itunes:summary>
      <itunes:subtitle>Is the end of programming nigh?

If you ask Matt Welsh, he&apos;d say yes. As Richard MacManus wrote on The New Stack, Welsh is a former professor of computer science at Harvard who spoke at a virtual meetup of the Chicago Association for Computing Machinery (ACM), explaining his thesis that ChatGPT and GitHub Copilot represent the beginning of the end of programming.

Welsh joined us on The New Stack Makers to discuss his perspectives about the end of programming and answer questions about the future of computer science, distributed computing, and more.

Welsh is now the founder of fixie.ai, where he is building a platform that lets companies develop applications on top of large language models and extend them with different capabilities.

For 40 to 50 years, programming language design has had one goal: make it easier to write programs, Welsh said in the interview.

Still, programming languages are complex, Welsh said, and no amount of work is going to make them simple.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, alex williams, the new stack, devops, devops podcast, tech, developer podcast, the new stack makers, software engineer, fixie.ai, matt welsh</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1392</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">cfdf5a23-5669-4af5-b247-2666244d7df1</guid>
      <title>How 2 Founders Sold Their Startup to Aqua Security in a Year</title>
      <description><![CDATA[<p>Speed is a recurring theme in this episode of The Tech Founder Odyssey. Also, timing.</p><p><a href="https://www.linkedin.com/in/eilon-elhadad/">Eilon Elhadad</a> and <a href="https://www.linkedin.com/in/eylamm/">Eylam Milner,</a> who met while serving in the Israeli military, discovered that source code leak was a hazardous side effect of businesses’ need to move fast and break things in order to stay competitive.</p><p>“Every new business challenge leads to a new technological solution,” said Elhadad in this episode of The New Stack's podcast series. “The business challenge was to deliver product faster to the business; the solution was to build off the supply chain. And then it leads to a new security attack surface.”</p><p>Discovering this problem, and finding a solution to it, put Milner and Elhadad in the right place at the right time — just as the tech industry was <a href="https://thenewstack.io/inside-a-150-million-plan-for-open-source-software-security/">beginning to rally itself to deal with this issue</a> and give it a name: <a href="https://thenewstack.io/new-ebook-a-blueprint-for-supply-chain-security/">software supply chain security.</a></p><p>It led them to co-found Argon Security, which was acquired by Aqua Security in late 2021, Elhadad told The New Stack, a year after Argon started.</p>
]]></description>
      <pubDate>Wed, 22 Mar 2023 19:47:31 +0000</pubDate>
      <author>podcasts@thenewstack.io (Aqua Security, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/tfo-aqua-security-KoqvbzZV</link>
      <content:encoded><![CDATA[<p>Speed is a recurring theme in this episode of The Tech Founder Odyssey. Also, timing.</p><p><a href="https://www.linkedin.com/in/eilon-elhadad/">Eilon Elhadad</a> and <a href="https://www.linkedin.com/in/eylamm/">Eylam Milner,</a> who met while serving in the Israeli military, discovered that source code leak was a hazardous side effect of businesses’ need to move fast and break things in order to stay competitive.</p><p>“Every new business challenge leads to a new technological solution,” said Elhadad in this episode of The New Stack's podcast series. “The business challenge was to deliver product faster to the business; the solution was to build off the supply chain. And then it leads to a new security attack surface.”</p><p>Discovering this problem, and finding a solution to it, put Milner and Elhadad in the right place at the right time — just as the tech industry was <a href="https://thenewstack.io/inside-a-150-million-plan-for-open-source-software-security/">beginning to rally itself to deal with this issue</a> and give it a name: <a href="https://thenewstack.io/new-ebook-a-blueprint-for-supply-chain-security/">software supply chain security.</a></p><p>It led them to co-found Argon Security, which was acquired by Aqua Security in late 2021, Elhadad told The New Stack, a year after Argon started.</p>
]]></content:encoded>
      <enclosure length="22298584" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/5173c72d-aef1-4302-b6b3-e50ab03e654e/audio/61eaf81b-1d0c-4eb4-95e5-17c76f2582bf/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How 2 Founders Sold Their Startup to Aqua Security in a Year</itunes:title>
      <itunes:author>Aqua Security, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/c5f4ca4e-e34f-4411-a253-9c87f5341d79/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:23:13</itunes:duration>
      <itunes:summary>Speed is a recurring theme in this episode of The Tech Founder Odyssey. Also, timing.

Eilon Elhadad and Eylam Milner, who met while serving in the Israeli military, discovered that source code leaks were a hazardous side effect of businesses’ need to move fast and break things in order to stay competitive.

“Every new business challenge leads to a new technological solution,” said Elhadad in this episode of The New Stack&apos;s podcast series. “The business challenge was to deliver product faster to the business; the solution was to build off the supply chain. And then it leads to a new security attack surface.”

Discovering this problem, and finding a solution to it, put Milner and Elhadad in the right place at the right time — just as the tech industry was beginning to rally itself to deal with this issue and give it a name: software supply chain security.

It led them to co-found Argon Security, which was acquired by Aqua Security in late 2021, Elhadad told The New Stack, a year after Argon started.</itunes:summary>
      <itunes:subtitle>Speed is a recurring theme in this episode of The Tech Founder Odyssey. Also, timing.

Eilon Elhadad and Eylam Milner, who met while serving in the Israeli military, discovered that source code leaks were a hazardous side effect of businesses’ need to move fast and break things in order to stay competitive.

“Every new business challenge leads to a new technological solution,” said Elhadad in this episode of The New Stack&apos;s podcast series. “The business challenge was to deliver product faster to the business; the solution was to build off the supply chain. And then it leads to a new security attack surface.”

Discovering this problem, and finding a solution to it, put Milner and Elhadad in the right place at the right time — just as the tech industry was beginning to rally itself to deal with this issue and give it a name: software supply chain security.

It led them to co-found Argon Security, which was acquired by Aqua Security in late 2021, Elhadad told The New Stack, a year after Argon started.</itunes:subtitle>
      <itunes:keywords>aqua security, software developer, tech podcast, the new stack, heather joslyn, devops, devops podcast, tech, developer podcast, eylam milner, the new stack makers, software engineer, eilon elhadad, colleen coll</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1391</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a7e3be64-cc02-4c66-b5d8-8c227664eb18</guid>
      <title>Why Your APIs Aren’t Safe — and What to Do About It</title>
      <description><![CDATA[<p>Given the vulnerability of so many systems, it’s not surprising that <a href="https://www.imperva.com/resources/resource-library/reports/ddos-threat-landscape-report-2023/">cyberattacks on applications and APIs increased 82% in 2022</a> compared to the previous year, according to a report released this year by <a href="https://www.imperva.com/?utm_content=inline-mention" target="_blank">Imperva’s</a> global threat researchers.</p><p>What might rattle even the most experienced technologists is the sheer scale of those attacks. Digging into the data, Imperva, an application and data security company, found that the largest layer seven, distributed denial of service (DDoS) attack it mitigated during 2022 involved — you might want to sit down for this — more than 3.9 million API requests per second.</p><p>“Most developers, when they think about their <a href="https://thenewstack.io/what-devs-must-know-about-apis-before-designing-and-using-them/">APIs,</a> they’re usually dealing with traffic that’s maybe 1,000 requests per second, not too much more than that. Twenty thousand, for a larger API,” said <a href="https://www.linkedin.com/in/peter-klimek-37588962">Peter Klimek, </a>director of technology at Imperva, in this episode of The New Stack Makers podcast. “So, to get to 3.9 million, it’s really staggering.”</p><p>Klimek spoke to <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a> of TNS about the special challenges of APIs and <a href="https://thenewstack.io/security/">cybersecurity</a> and <a href="https://www.imperva.com/resources/resource-library/white-papers/improve-api-performance-with-a-sound-api-security-strategy/">steps organizations can take to keep their APIs safe.</a></p><p>The episode was sponsored by Imperva.</p>
]]></description>
      <pubDate>Tue, 21 Mar 2023 01:06:43 +0000</pubDate>
      <author>podcasts@thenewstack.io (Imperva, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/imperva-shadow-apis-iy5bdN8Q</link>
      <content:encoded><![CDATA[<p>Given the vulnerability of so many systems, it’s not surprising that <a href="https://www.imperva.com/resources/resource-library/reports/ddos-threat-landscape-report-2023/">cyberattacks on applications and APIs increased 82% in 2022</a> compared to the previous year, according to a report released this year by <a href="https://www.imperva.com/?utm_content=inline-mention" target="_blank">Imperva’s</a> global threat researchers.</p><p>What might rattle even the most experienced technologists is the sheer scale of those attacks. Digging into the data, Imperva, an application and data security company, found that the largest layer seven, distributed denial of service (DDoS) attack it mitigated during 2022 involved — you might want to sit down for this — more than 3.9 million API requests per second.</p><p>“Most developers, when they think about their <a href="https://thenewstack.io/what-devs-must-know-about-apis-before-designing-and-using-them/">APIs,</a> they’re usually dealing with traffic that’s maybe 1,000 requests per second, not too much more than that. Twenty thousand, for a larger API,” said <a href="https://www.linkedin.com/in/peter-klimek-37588962">Peter Klimek, </a>director of technology at Imperva, in this episode of The New Stack Makers podcast. “So, to get to 3.9 million, it’s really staggering.”</p><p>Klimek spoke to <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a> of TNS about the special challenges of APIs and <a href="https://thenewstack.io/security/">cybersecurity</a> and <a href="https://www.imperva.com/resources/resource-library/white-papers/improve-api-performance-with-a-sound-api-security-strategy/">steps organizations can take to keep their APIs safe.</a></p><p>The episode was sponsored by Imperva.</p>
]]></content:encoded>
      <enclosure length="23579211" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/80711ccb-e104-4225-a334-73ea0eb06075/audio/42e7075b-7b0e-4fc0-b81b-700b93e3d8d2/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Why Your APIs Aren’t Safe — and What to Do About It</itunes:title>
      <itunes:author>Imperva, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/ec6fca42-ba71-4eea-8b50-7b9b145df540/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:24:33</itunes:duration>
      <itunes:summary>Given the vulnerability of so many systems, it’s not surprising that cyberattacks on applications and APIs increased 82% in 2022 compared to the previous year, according to a report released this year by Imperva’s global threat researchers.

What might rattle even the most experienced technologists is the sheer scale of those attacks. Digging into the data, Imperva, an application and data security company, found that the largest layer seven, distributed denial of service (DDoS) attack it mitigated during 2022 involved — you might want to sit down for this — more than 3.9 million API requests per second.

“Most developers, when they think about their APIs, they&apos;re usually dealing with traffic that&apos;s maybe 1,000 requests per second, not too much more than that. Twenty thousand, for a larger API,” said Peter Klimek, director of technology at Imperva, in this episode of The New Stack Makers podcast. “So, to get to 3.9 million, it&apos;s really staggering.”

Klimek spoke to Heather Joslyn of TNS about the special challenges of APIs and cybersecurity and steps organizations can take to keep their APIs safe.

The episode was sponsored by Imperva.
</itunes:summary>
      <itunes:subtitle>Given the vulnerability of so many systems, it’s not surprising that cyberattacks on applications and APIs increased 82% in 2022 compared to the previous year, according to a report released this year by Imperva’s global threat researchers.

What might rattle even the most experienced technologists is the sheer scale of those attacks. Digging into the data, Imperva, an application and data security company, found that the largest layer seven, distributed denial of service (DDoS) attack it mitigated during 2022 involved — you might want to sit down for this — more than 3.9 million API requests per second.

“Most developers, when they think about their APIs, they&apos;re usually dealing with traffic that&apos;s maybe 1,000 requests per second, not too much more than that. Twenty thousand, for a larger API,” said Peter Klimek, director of technology at Imperva, in this episode of The New Stack Makers podcast. “So, to get to 3.9 million, it&apos;s really staggering.”

Klimek spoke to Heather Joslyn of TNS about the special challenges of APIs and cybersecurity and steps organizations can take to keep their APIs safe.

The episode was sponsored by Imperva.
</itunes:subtitle>
      <itunes:keywords>software developer, data security, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, #imperva, the new stack makers, software engineer, apis</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1390</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">fe05e2df-d19d-4dcb-b621-2a334a37361d</guid>
      <title>Unix Creator Ken Thompson to Keynote Scale Conference</title>
      <description><![CDATA[<p><a class="editor-rtfLink" href="https://www.socallinuxexpo.org/scale/20x" target="_blank" rel="noopener"><span data-preserver-spaces="true">The 20th Annual Southern California Linux Expo</span></a><span data-preserver-spaces="true"> (SCALE) runs Thursday through Sunday</span><strong><span data-preserver-spaces="true"> </span></strong><span data-preserver-spaces="true">at the Pasadena Convention Center in Pasadena, Calif., featuring keynotes from notables such as <a href="https://en.wikipedia.org/wiki/Ken_Thompson">Ken Thompson</a>, the creator of Unix, said <a href="https://www.linkedin.com/in/irabinovitch/">Ilan Rabinovitch</a>, one of the conference's co-founders and its chair, on this week's edition of The New Stack Makers.</span></p><p> </p><p><iframe width="560" height="315" src="https://www.youtube.com/embed/htHY0bhb7FI" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></p><p> </p><p><span data-preserver-spaces="true">"Honestly, most of the speakers we've had, you know, we got at SCALE in the early days, we just, we, we emailed them and said: 'Would you come to speak at the event?' We ran a call for proposals, and some of them came in as submissions, but a lot of it was just cold outreach. I don't know if that succeeded, because that's the state of where the community was at the time and there wasn't as much demand or just because or out of sheer dumb luck. I assure you, it wasn't skill or any sort of network that we like, we just, you know, we just we managed to, we managed to do that. And that's continued through today. When we do our call for papers, we get hundreds and hundreds of submissions, and that makes it really hard to choose from."
</span></p><p> </p><p><p class="attribution"><iframe width="100%" height="200px" frameborder="no" scrolling="no" seamless="" src="https://player.simplecast.com/9a45a180-1277-4083-8b55-cafab9a21e18?dark=true"><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span></iframe></p></p><p><span class="media-direct-link"><a href="https://thenewstack.simplecast.com/episodes/rethinking-web-application-firewalls">Rethinking Web Application Firewalls </a></span></p><p> </p><p><span data-preserver-spaces="true">Thompson, who turned 80 on February 4 (Happy Birthday, Mr. Thompson), created Unix at Bell Labs. He worked with people like </span><a class="editor-rtfLink" href="https://en.wikipedia.org/wiki/Robert_Griesemer" target="_blank" rel="noopener"><span data-preserver-spaces="true">Robert Griesemer</span></a><span data-preserver-spaces="true"> and </span><a class="editor-rtfLink" href="https://en.wikipedia.org/wiki/Rob_Pike" target="_blank" rel="noopener"><span data-preserver-spaces="true">Rob Pike</span></a><span data-preserver-spaces="true"> on developing the Go programming language and other projects over the years, including Plan 9, UTF-8, and more.</span></p><p> </p><p><span data-preserver-spaces="true">Rabinovitch is pretty humble about the keynote speakers that the conference attracts. He and the conference organizers scoured the Internet and found Thompson's email address; Thompson said he'd love to join them. That's how they attracted Lawrence Lessig, the creator of the Creative Commons license, who spoke at SCALE12x in 2014 about the legal sides of open source, content sharing, and free software.</span></p><p> </p><p><span data-preserver-spaces="true">"I wish I could say, we have this very deep network of connections," Rabinovitch said. 
"It's just, these folks are surprisingly approachable, despite, you know, even after years and years of doing amazing work."</span></p><p> </p><p><span data-preserver-spaces="true">SCALE is the largest community-run open-source and free software conference in North America, with roots befitting an event that started with a group of college students wanting to share their learnings about Linux.</span></p><p> </p><p><span data-preserver-spaces="true">Rabinovitch was one of those college students attending UCSB, the University of California, Santa Barbara.</span></p><p> </p><p><span data-preserver-spaces="true">"A lot of the history of SCALE comes from the LA area back when open source was still relatively new and Linux was still fairly hard to get up and running," Rabinovitch said. "There were LUGS (Linux User Groups) on every corner. I think we had like 25 LUGS in the LA area at one point. And so there was a vibrant open source community."</span></p><p> </p><p><span data-preserver-spaces="true">Los Angeles's freeways and traffic made it difficult to get the open source community together. So they started LUGFest. They held the day-long event at a Nortel building until the telco went belly up.</span></p><p> </p><p><span data-preserver-spaces="true">So, as open source people tend to do, they decided to scale, so to speak, the community gatherings. And so SCALE came to be – led by students like Rabinovitch. The conference started with a healthy community of 200 to 250 people. By the pandemic, 3,500 people were attending.</span></p><p> </p><p><span data-preserver-spaces="true">For more about SCALE, listen to the full episode of The New Stack Makers wherever you get your podcasts.</span></p>
]]></description>
      <pubDate>Wed, 8 Mar 2023 22:18:13 +0000</pubDate>
      <author>podcasts@thenewstack.io (scale, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/unix-creator-ken-thompson-to-keynote-scale-conference-hverxHG9</link>
      <content:encoded><![CDATA[<p><a class="editor-rtfLink" href="https://www.socallinuxexpo.org/scale/20x" target="_blank" rel="noopener"><span data-preserver-spaces="true">The 20th Annual Southern California Linux Expo</span></a><span data-preserver-spaces="true"> (SCALE) runs Thursday through Sunday</span><strong><span data-preserver-spaces="true"> </span></strong><span data-preserver-spaces="true">at the Pasadena Convention Center in Pasadena, Calif., featuring keynotes from notables such as <a href="https://en.wikipedia.org/wiki/Ken_Thompson">Ken Thompson</a>, the creator of Unix, said <a href="https://www.linkedin.com/in/irabinovitch/">Ilan Rabinovitch</a>, one of the conference's co-founders and its chair, on this week's edition of The New Stack Makers.</span></p><p> </p><p><iframe width="560" height="315" src="https://www.youtube.com/embed/htHY0bhb7FI" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></p><p> </p><p><span data-preserver-spaces="true">"Honestly, most of the speakers we've had, you know, we got at SCALE in the early days, we just, we, we emailed them and said: 'Would you come to speak at the event?' We ran a call for proposals, and some of them came in as submissions, but a lot of it was just cold outreach. I don't know if that succeeded, because that's the state of where the community was at the time and there wasn't as much demand or just because or out of sheer dumb luck. I assure you, it wasn't skill or any sort of network that we like, we just, you know, we just we managed to, we managed to do that. And that's continued through today. When we do our call for papers, we get hundreds and hundreds of submissions, and that makes it really hard to choose from."
</span></p><p> </p><p><p class="attribution"><iframe width="100%" height="200px" frameborder="no" scrolling="no" seamless="" src="https://player.simplecast.com/9a45a180-1277-4083-8b55-cafab9a21e18?dark=true"><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span></iframe></p></p><p><span class="media-direct-link"><a href="https://thenewstack.simplecast.com/episodes/rethinking-web-application-firewalls">Rethinking Web Application Firewalls </a></span></p><p> </p><p><span data-preserver-spaces="true">Thompson, who turned 80 on February 4 (Happy Birthday, Mr. Thompson), created Unix at Bell Labs. He worked with people like </span><a class="editor-rtfLink" href="https://en.wikipedia.org/wiki/Robert_Griesemer" target="_blank" rel="noopener"><span data-preserver-spaces="true">Robert Griesemer</span></a><span data-preserver-spaces="true"> and </span><a class="editor-rtfLink" href="https://en.wikipedia.org/wiki/Rob_Pike" target="_blank" rel="noopener"><span data-preserver-spaces="true">Rob Pike</span></a><span data-preserver-spaces="true"> on developing the Go programming language and other projects over the years, including Plan 9, UTF-8, and more.</span></p><p> </p><p><span data-preserver-spaces="true">Rabinovitch is pretty humble about the keynote speakers that the conference attracts. He and the conference organizers scoured the Internet and found Thompson's email address; Thompson said he'd love to join them. That's how they attracted Lawrence Lessig, the creator of the Creative Commons license, who spoke at SCALE12x in 2014 about the legal sides of open source, content sharing, and free software.</span></p><p> </p><p><span data-preserver-spaces="true">"I wish I could say, we have this very deep network of connections," Rabinovitch said. 
"It's just, these folks are surprisingly approachable, despite, you know, even after years and years of doing amazing work."</span></p><p> </p><p><span data-preserver-spaces="true">SCALE is the largest community-run open-source and free software conference in North America, with roots befitting an event that started with a group of college students wanting to share their learnings about Linux.</span></p><p> </p><p><span data-preserver-spaces="true">Rabinovitch was one of those college students attending UCSB, the University of California, Santa Barbara.</span></p><p> </p><p><span data-preserver-spaces="true">"A lot of the history of SCALE comes from the LA area back when open source was still relatively new and Linux was still fairly hard to get up and running," Rabinovitch said. "There were LUGS (Linux User Groups) on every corner. I think we had like 25 LUGS in the LA area at one point. And so there was a vibrant open source community."</span></p><p> </p><p><span data-preserver-spaces="true">Los Angeles's freeways and traffic made it difficult to get the open source community together. So they started LUGFest. They held the day-long event at a Nortel building until the telco went belly up.</span></p><p> </p><p><span data-preserver-spaces="true">So, as open source people tend to do, they decided to scale, so to speak, the community gatherings. And so SCALE came to be – led by students like Rabinovitch. The conference started with a healthy community of 200 to 250 people. By the pandemic, 3,500 people were attending.</span></p><p> </p><p><span data-preserver-spaces="true">For more about SCALE, listen to the full episode of The New Stack Makers wherever you get your podcasts.</span></p>
]]></content:encoded>
      <enclosure length="18794832" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/a223b5eb-b205-4d81-9bf5-4e82b914677a/audio/24d4402d-ed60-444d-97d8-f0dead813176/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Unix Creator Ken Thompson to Keynote Scale Conference</itunes:title>
      <itunes:author>scale, The New Stack</itunes:author>
      <itunes:duration>00:19:34</itunes:duration>
      <itunes:summary>The 20th Annual Southern California Linux Expo (SCALE) runs Thursday through Sunday at the Pasadena Convention Center in Pasadena, Calif., featuring keynotes from notables such as Ken Thompson, the creator of Unix, said Ilan Rabinovitch, one of the conference&apos;s co-founders and its chair, on this week&apos;s edition of The New Stack Makers.

&quot;Honestly, most of the speakers we&apos;ve had, you know, we got at SCALE in the early days, we just, we, we emailed them and said: &apos;Would you come to speak at the event?&apos; We ran a call for proposals, and some of them came in as submissions, but a lot of it was just cold outreach. I don&apos;t know if that succeeded, because that&apos;s the state of where the community was at the time and there wasn&apos;t as much demand or just because or out of sheer dumb luck. I assure you, it wasn&apos;t skill or any sort of network that we like, we just, you know, we just we managed to, we managed to do that. And that&apos;s continued through today. When we do our call for papers, we get hundreds and hundreds of submissions, and that makes it really hard to choose from.&quot; 

Ilan Rabinovitch - https://www.linkedin.com/in/irabinovitch/
Alex Williams - @alexwilliams 
The New Stack - @thenewstack</itunes:summary>
      <itunes:subtitle>The 20th Annual Southern California Linux Expo (SCALE) runs Thursday through Sunday at the Pasadena Convention Center in Pasadena, Calif., featuring keynotes from notables such as Ken Thompson, the creator of Unix, said Ilan Rabinovitch, one of the conference&apos;s co-founders and its chair, on this week&apos;s edition of The New Stack Makers.

&quot;Honestly, most of the speakers we&apos;ve had, you know, we got at SCALE in the early days, we just, we, we emailed them and said: &apos;Would you come to speak at the event?&apos; We ran a call for proposals, and some of them came in as submissions, but a lot of it was just cold outreach. I don&apos;t know if that succeeded, because that&apos;s the state of where the community was at the time and there wasn&apos;t as much demand or just because or out of sheer dumb luck. I assure you, it wasn&apos;t skill or any sort of network that we like, we just, you know, we just we managed to, we managed to do that. And that&apos;s continued through today. When we do our call for papers, we get hundreds and hundreds of submissions, and that makes it really hard to choose from.&quot; 

Ilan Rabinovitch - https://www.linkedin.com/in/irabinovitch/
Alex Williams - @alexwilliams 
The New Stack - @thenewstack</itunes:subtitle>
      <itunes:keywords>ilan rabinovitch, software developer, tech podcast, alex williams, the new stack, devops, devops podcast, tech, developer podcast, the new stack makers, software engineer, scale</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1389</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">69996794-cdef-4194-aeee-ebb1458730d6</guid>
      <title>How Solvo’s Co-Founder Got the ‘Guts’ to Be an Entrepreneur</title>
      <description><![CDATA[<p>When she was a student in her native Israel, <a href="https://www.linkedin.com/in/shira-shamban">Shira Shamban</a> was a self-proclaimed “geek.”</p><p> </p><p>But, unusually for a tech company founder and CEO, not a computer geek.</p><p> </p><p>Shamban was a science nerd, with her sights set on becoming a doctor. But first, she had to do her state-mandated military service. And that’s where her path diverged.</p><p> </p><p>In the military, she was not only immersed in computers but spent years working in intelligence; she stayed in the service for more than a decade, eventually rising to become head of an intelligence sector for the Israeli Defense Forces. At home, she began building her own projects to experiment with ideas that could help her team.</p><p> </p><p>“So that kind of helped me not to be intimidated by technology, to learn that I can learn anything I want by myself,” said Shamban, co-founder of Solvo, a company focused on data and cloud infrastructure security. “And the most important thing is to just try out things that you learn.”</p><p> </p><p>To date, Solvo has raised about $11 million through investors like Surround Ventures, Magenta Venture Partners, TLV Partners and others. In this episode of The New Stack Makers podcast series The Tech Founder Odyssey, Shamban talked to <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a> and <a href="https://thenewstack.io/author/colleen/">Colleen Coll</a> of TNS about her journey.</p><p><h2>In-Person Teamwork</h2></p><p>Shamban opted to stay in the technology world, nurturing a desire to eventually start her own company. It was during a stint at Dome9, a cloud security company, that she met her future Solvo co-founder, <a href="https://www.linkedin.com/in/david-hendri">David Hendri</a> — and built a foundation for entrepreneurship.</p><p> </p><p>“After that episode, I got the guts,” she said. 
“Or I got stupid enough.”</p><p> </p><p>Hendri, now Solvo’s chief technology officer, struck Shamban as having the right sensibility to be a partner in a startup. At Dome9, she said, “very often, I used to stay up late in the office, and I would see him as well. So we'd grab something to eat.”</p><p> </p><p>Their casual conversations quickly revealed that Hendri was often staying late to troubleshoot issues that were not his or his team’s responsibility, but simply things that someone needed to fix. That sense of ownership, she realized, “is exactly the kind of approach one would need to bring to the table in a startup.”</p><p> </p><p>The mealtime chats that started Solvo have carried over into its current organizational culture. The company employs 20 people; workers based in Tel Aviv are expected to come to the office four days a week.</p><p> </p><p>Hendri and Shamban started their company in the auspicious month of March 2020, just as the Covid-19 pandemic started. While <a href="https://thenewstack.io/how-will-working-in-tech-change-in-2023/">many companies have moved to all-remote work,</a> Solvo never did.</p><p> </p><p>“We knew we wanted to sit together in the same room, because the conversations you have over a cup of coffee are not the same ones that you have on a chat, and on Slack,” the CEO said. “So that was our decision. And for a long time, it was an unpopular decision.”</p><p> </p><p>As the company scales, finding employees who align with its culture can make recruiting tricky, Shamban said.</p><p> </p><p>“It's not only about your technical expertise, it's also about what kind of person you are,” she said. “Sometimes we found very professional people that we didn't think would make a good fit to the culture that we want to build. So we did not hire them. And in the boom times, when it was really hard to hire engineers.</p><p> </p><p>“These were tough decisions. 
But we had to make them because we knew that building a culture is easier in a way than fixing a culture.”</p><p> </p><p>Listen to the full episode to hear more about Shamban's journey.</p>
]]></description>
      <pubDate>Wed, 01 Mar 2023 19:57:28 +0000</pubDate>
      <author>podcasts@thenewstack.io (solvo, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-solvos-co-founder-got-the-guts-to-be-an-entrepreneur-dapUAiq2</link>
      <content:encoded><![CDATA[<p>When she was a student in her native Israel, <a href="https://www.linkedin.com/in/shira-shamban">Shira Shamban</a> was a self-proclaimed “geek.”</p><p> </p><p>But, unusually for a tech company founder and CEO, not a computer geek.</p><p> </p><p>Shamban was a science nerd, with her sights set on becoming a doctor. But first, she had to do her state-mandated military service. And that’s where her path diverged.</p><p> </p><p>In the military, she was not only immersed in computers but spent years working in intelligence; she stayed in the service for more than a decade, eventually rising to become head of an intelligence sector for the Israel Defense Forces. At home, she began building her own projects to experiment with ideas that could help her team.</p><p> </p><p>“So that kind of helped me not to be intimidated by technology, to learn that I can learn anything I want by myself,” said Shamban, co-founder of Solvo, a company focused on data and cloud infrastructure security. “And the most important thing is to just try out things that you learn.”</p><p> </p><p>To date, Solvo has raised about $11 million through investors like Surround Ventures, Magenta Venture Partners, TLV Partners and others. In this episode of The New Stack Makers podcast series The Tech Founder Odyssey, Shamban talked to <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a> and <a href="https://thenewstack.io/author/colleen/">Colleen Coll</a> of TNS about her journey.</p><h2>In-Person Teamwork</h2><p>Shamban opted to stay in the technology world, nurturing a desire to eventually start her own company. It was during a stint at Dome9, a cloud security company, that she met her future Solvo co-founder, <a href="https://www.linkedin.com/in/david-hendri">David Hendri</a> — and built a foundation for entrepreneurship.</p><p> </p><p>“After that episode, I got the guts,” she said. 
“Or I got stupid enough.”</p><p> </p><p>Hendri, now Solvo’s chief technology officer, struck Shamban as having the right sensibility to be a partner in a startup. At Dome9, she said, “very often, I used to stay up late in the office, and I would see him as well. So we'd grab something to eat.”</p><p> </p><p>Their casual conversations quickly revealed that Hendri was often staying late to troubleshoot issues that were not his or his team’s responsibility, but simply things that someone needed to fix. That sense of ownership, she realized, “is exactly the kind of approach one would need to bring to the table in a startup.”</p><p> </p><p>The mealtime chats that started Solvo have carried over into its current organizational culture. The company employs 20 people; workers based in Tel Aviv are expected to come to the office four days a week.</p><p> </p><p>Hendri and Shamban started their company in the inauspicious month of March 2020, just as the Covid-19 pandemic started. While <a href="https://thenewstack.io/how-will-working-in-tech-change-in-2023/">many companies have moved to all-remote work</a>, Solvo never did.</p><p> </p><p>“We knew we wanted to sit together in the same room, because the conversations you have over a cup of coffee are not the same ones that you have on a chat, and on Slack,” the CEO said. “So that was our decision. And for a long time, it was an unpopular decision.”</p><p> </p><p>As the company scales, finding employees who align with its culture can make recruiting tricky, Shamban said.</p><p> </p><p>“It's not only about your technical expertise, it's also about what kind of person you are,” she said. “Sometimes we found very professional people that we didn't think would make a good fit to the culture that we want to build. So we did not hire them, and this was in the boom times, when it was really hard to hire engineers.</p><p> </p><p>“These were tough decisions. 
But we had to make them because we knew that building a culture is easier in a way than fixing a culture.”</p><p> </p><p>Listen to the full episode to hear more about Shamban's journey.</p>
]]></content:encoded>
      <enclosure length="27213366" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/bfa6f603-82e5-4d1c-bfc1-37ab284cedc6/audio/0012ba52-73ec-4387-880f-3496f141df32/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Solvo’s Co-Founder Got the ‘Guts’ to Be an Entrepreneur</itunes:title>
      <itunes:author>solvo, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/38e3e8e0-2b6f-4fd2-b6db-a176774ee470/3000x3000/the-tech-odyssey-logo-white-bg.jpg?aid=rss_feed"/>
      <itunes:duration>00:28:20</itunes:duration>
      <itunes:summary>When she was a student in her native Israel, Shira Shamban was a self-proclaimed “geek.”

But, unusually for a tech company founder and CEO, not a computer geek.

Shamban was a science nerd, with her sights set on becoming a doctor. But first, she had to do her state-mandated military service. And that’s where her path diverged.

In the military, she was not only immersed in computers but spent years working in intelligence; she stayed in the service for more than a decade, eventually rising to become head of an intelligence sector for the Israel Defense Forces. At home, she began building her own projects to experiment with ideas that could help her team.

“So that kind of helped me not to be intimidated by technology, to learn that I can learn anything I want by myself,” said Shamban, co-founder of Solvo, a company focused on data and cloud infrastructure security. “And the most important thing is to just try out things that you learn.”

To date, Solvo has raised about $11 million through investors like Surround Ventures, Magenta Venture Partners, TLV Partners and others. In this episode of The New Stack Makers podcast series The Tech Founder Odyssey, Shamban talked to Heather Joslyn and Colleen Coll of TNS about her journey.

Shira Shamban - @ShambanIT  
Heather Joslyn - @ha_joslyn
The New Stack - @thenewstack</itunes:summary>
      <itunes:subtitle>When she was a student in her native Israel, Shira Shamban was a self-proclaimed “geek.”

But, unusually for a tech company founder and CEO, not a computer geek.

Shamban was a science nerd, with her sights set on becoming a doctor. But first, she had to do her state-mandated military service. And that’s where her path diverged.

In the military, she was not only immersed in computers but spent years working in intelligence; she stayed in the service for more than a decade, eventually rising to become head of an intelligence sector for the Israel Defense Forces. At home, she began building her own projects to experiment with ideas that could help her team.

“So that kind of helped me not to be intimidated by technology, to learn that I can learn anything I want by myself,” said Shamban, co-founder of Solvo, a company focused on data and cloud infrastructure security. “And the most important thing is to just try out things that you learn.”

To date, Solvo has raised about $11 million through investors like Surround Ventures, Magenta Venture Partners, TLV Partners and others. In this episode of The New Stack Makers podcast series The Tech Founder Odyssey, Shamban talked to Heather Joslyn and Colleen Coll of TNS about her journey.

Shira Shamban - @ShambanIT  
Heather Joslyn - @ha_joslyn
The New Stack - @thenewstack</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, the new stack makers, software engineer, shira shamban, the tech founder odyssey, tech founder odyssey, solvo</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1388</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2c5e0001-6e32-475a-bbb9-a71218e0d45d</guid>
      <title>Ambient Mesh: No Sidecar Required</title>
      <description><![CDATA[<p>At <a href="https://www.cncf.io/blog/2023/01/04/cloudnativesecuritycon-north-america-2023-5-sessions-you-dont-want-to-miss/">Cloud Native Security Con</a>, we sat down with <a href="https://www.solo.io/">Solo.io's</a> Marino Wijay and Jim Barton, who discussed how <a href="https://thenewstack.io/key-concepts/service-mesh/">service mesh technologies</a> have matured, especially now with the removal of sidecars in <a href="https://istio.io/latest/blog/2022/introducing-ambient-mesh/">Ambient Mesh</a>, which Solo.io developed with Google.</p><p> </p><p>Ambient Mesh is, according to the Solo.io site, "a new proxy architecture" that "moves the proxy to the node level" for <a href="https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/">mTLS</a> and identity. It also adds a policy enforcement point to manage Layer 7 security filters and policies.</p><p> </p><p>A sidecar is a mini-proxy, a mini-firewall, like an all-in-one router, said Wijay, who does developer relations and advocacy for Solo. A sidecar receives instructions from an upstream control plane.</p><p> </p><p>"Now, one of the things that we started to realize with different workloads and different patterns of communication is that not all these workloads need a sidecar or can take advantage of the sidecar," Wijay said. "Some better operate without the sidecar."</p><p> </p><p>Ambient Mesh reflects the maturity of service mesh and the difference between day one and day two operations, said Barton, a field engineer with Solo.</p><p> </p><p>"Day one operations are a lot about understanding concepts, enabling developers, initial configurations, that sort of thing," Barton said. "The community is really much more focused, and Ambient Mesh is a good example of this, on day two concerns. How do I scale this? How do I make it perform in large environments? How can I expand this across clusters, clusters in multiple zones in multiple regions, that sort of thing? 
Those are the kinds of initiatives that we're really seeing come to the forefront at this point."</p><p> </p><p>With the maturity of service mesh comes the users. In the context of security, that means the developer security operations person, Barton said. It's not the developer's job to connect services. Their job is to build out the services.</p><p> </p><p>"It's up to the platform operator, or DevSecOps engineers, to create that fundamental plane or foundation for where you can deploy your services, and then provide the security on top of it," Barton said.</p><p> </p><p>The engineers then have to configure it and think it through. "How do I know who's doing what and who's talking to who, so that I can start forming my zero trust posture?" Barton said.</p>
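<p>For readers who want to see what the sidecar-less model looks like in practice, here is a minimal sketch based on Istio's ambient mode. The namespace name is a made-up placeholder; the label is the documented opt-in for ambient's node-level data plane, and this assumes a cluster where Istio has already been installed with the ambient profile:</p>

```yaml
# Hypothetical namespace manifest. The label below opts every workload in
# this namespace into Istio's ambient (sidecar-less) data plane, where a
# node-level proxy handles mTLS and identity instead of per-pod sidecars.
apiVersion: v1
kind: Namespace
metadata:
  name: demo   # placeholder name for illustration
  labels:
    istio.io/dataplane-mode: ambient
```

<p>Compared with sidecar injection, nothing is added to the pod spec itself; removing the label returns the namespace's workloads to plain, unmeshed networking.</p>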
]]></description>
      <pubDate>Wed, 22 Feb 2023 21:09:15 +0000</pubDate>
      <author>podcasts@thenewstack.io (solo.io, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/ambient-mesh-no-sidecar-required-QY0aS_Vv</link>
      <content:encoded><![CDATA[<p>At <a href="https://www.cncf.io/blog/2023/01/04/cloudnativesecuritycon-north-america-2023-5-sessions-you-dont-want-to-miss/">Cloud Native Security Con</a>, we sat down with <a href="https://www.solo.io/">Solo.io's</a> Marino Wijay and Jim Barton, who discussed how <a href="https://thenewstack.io/key-concepts/service-mesh/">service mesh technologies</a> have matured, especially now with the removal of sidecars in <a href="https://istio.io/latest/blog/2022/introducing-ambient-mesh/">Ambient Mesh</a>, which Solo.io developed with Google.</p><p> </p><p>Ambient Mesh is, according to the Solo.io site, "a new proxy architecture" that "moves the proxy to the node level" for <a href="https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/">mTLS</a> and identity. It also adds a policy enforcement point to manage Layer 7 security filters and policies.</p><p> </p><p>A sidecar is a mini-proxy, a mini-firewall, like an all-in-one router, said Wijay, who does developer relations and advocacy for Solo. A sidecar receives instructions from an upstream control plane.</p><p> </p><p>"Now, one of the things that we started to realize with different workloads and different patterns of communication is that not all these workloads need a sidecar or can take advantage of the sidecar," Wijay said. "Some better operate without the sidecar."</p><p> </p><p>Ambient Mesh reflects the maturity of service mesh and the difference between day one and day two operations, said Barton, a field engineer with Solo.</p><p> </p><p>"Day one operations are a lot about understanding concepts, enabling developers, initial configurations, that sort of thing," Barton said. "The community is really much more focused, and Ambient Mesh is a good example of this, on day two concerns. How do I scale this? How do I make it perform in large environments? How can I expand this across clusters, clusters in multiple zones in multiple regions, that sort of thing? 
Those are the kinds of initiatives that we're really seeing come to the forefront at this point."</p><p> </p><p>With the maturity of service mesh comes the users. In the context of security, that means the developer security operations person, Barton said. It's not the developer's job to connect services. Their job is to build out the services.</p><p> </p><p>"It's up to the platform operator, or DevSecOps engineers, to create that fundamental plane or foundation for where you can deploy your services, and then provide the security on top of it," Barton said.</p><p> </p><p>The engineers then have to configure it and think it through. "How do I know who's doing what and who's talking to who, so that I can start forming my zero trust posture?" Barton said.</p>
]]></content:encoded>
      <enclosure length="13794368" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/f7d0e668-8f2c-49a6-a2cf-09aa16042544/audio/56a0a270-aa80-49e8-a570-aacb2d255f01/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Ambient Mesh: No Sidecar Required</itunes:title>
      <itunes:author>solo.io, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/81d94120-8856-4954-b21c-51f35d678b89/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:14:22</itunes:duration>
      <itunes:summary>At Cloud Native Security Con, we sat down with Solo.io&apos;s Marino Wijay and Jim Barton, who discussed how service mesh technologies have matured, especially now with the removal of sidecars in Ambient Mesh that it developed with Google.

Ambient Mesh is, according to the Solo.io site, &quot;a new proxy architecture&quot; that &quot;moves the proxy to the node level&quot; for mTLS and identity. It also adds a policy enforcement point to manage Layer 7 security filters and policies.

A sidecar is a mini-proxy, a mini-firewall, like an all-in-one router, said Wijay, who does developer relations and advocacy for Solo. A sidecar receives instructions from an upstream control plane.

&quot;Now, one of the things that we started to realize with different workloads and different patterns of communication is that not all these workloads need a sidecar or can take advantage of the sidecar,&quot; Wijay said. &quot;Some better operate without the sidecar.&quot;

Marino Wijay - @virtualized6ix
Jim Barton - @jameshbarton  
Alex Williams - @alexwilliams 
The New Stack - @thenewstack</itunes:summary>
      <itunes:subtitle>At Cloud Native Security Con, we sat down with Solo.io&apos;s Marino Wijay and Jim Barton, who discussed how service mesh technologies have matured, especially now with the removal of sidecars in Ambient Mesh that it developed with Google.

Ambient Mesh is, according to the Solo.io site, &quot;a new proxy architecture&quot; that &quot;moves the proxy to the node level&quot; for mTLS and identity. It also adds a policy enforcement point to manage Layer 7 security filters and policies.

A sidecar is a mini-proxy, a mini-firewall, like an all-in-one router, said Wijay, who does developer relations and advocacy for Solo. A sidecar receives instructions from an upstream control plane.

&quot;Now, one of the things that we started to realize with different workloads and different patterns of communication is that not all these workloads need a sidecar or can take advantage of the sidecar,&quot; Wijay said. &quot;Some better operate without the sidecar.&quot;

Marino Wijay - @virtualized6ix
Jim Barton - @jameshbarton  
Alex Williams - @alexwilliams 
The New Stack - @thenewstack</itunes:subtitle>
      <itunes:keywords>cloud native security con, cloud native security con 2023, software developer, tech podcast, alex williams, the new stack, solo.io, devops, cloud native security conference, devops podcast, tech, developer podcast, the new stack makers, software engineer, jim barton, marino wijay</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1387</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7a155cb2-ea4e-42d3-ad64-bd530bb428b2</guid>
      <title>2023 Hotness: Cloud IDEs, Web Assembly, and SBOMs</title>
      <description><![CDATA[<p>Here's a breakdown of what we cover with Cloud Native Computing Foundation CTO Chris Aniszczyk:</p><ul><li>Cloud IDEs will mature as <a href="https://thenewstack.io/this-week-in-programming-github-codespaces-portable-dev-environment/">GitHub's Codespaces</a> platform gains acceptance through its integration into the GitHub service. Other factors include new startups in the space, such as <a href="https://www.gitpod.io/" target="_blank" rel="noopener">GitPod</a>, which offers a secure, cloud-based IDE, and <a href="https://www.uptycs.com/" target="_blank" rel="noopener">Uptycs</a>, which uses telemetry data to lock down developer environments. "So I think you're just gonna see more people exposed to it, and they're gonna be like, 'holy crap, this makes my life a lot easier.'"</li><li>FinOps reflects the more stringent views on managing costs, focusing on the efficiency of resources that a company provides for developers. The focus also translates to the GreenOps movement, with its emphasis on efficiency.</li><li>Software bills of materials (SBOMs) will continue to mature, with Sigstore as the project with the fastest expected adoption. Witness, from <a href="https://www.testifysec.com/blog/" target="_blank" rel="noopener">TestifySec</a>, is another project to watch. The SPDX community has been at the center of the movement for over a decade, since before people cared about it.</li><li><a href="https://thenewstack.io/i-need-to-talk-to-you-about-kubernetes-gitops/">GitOps</a> and <a href="https://thenewstack.io/opentelemetry-properly-explained-and-demoed/">OpenTelemetry</a>: This year, KubeCon submissions on GitOps topics were way up. OpenTelemetry is the second most popular project in the CNCF, behind Kubernetes.</li><li>Platform engineering is hot. Aniszczyk cites <a href="https://thenewstack.io/spotifys-backstage-a-strategic-guide/">Backstage</a>, a CNCF project, as one he is watching. It has a healthy plugin extension ecosystem and a correspondingly large community. People make fun of Jenkins, but Jenkins is likely going to be around as long as Linux because of its plugin community. Backstage is going along that same route.</li><li><a href="https://thenewstack.io/webassembly-to-let-developers-combine-languages/">WebAssembly</a>: "You will probably see an uptick in edge cases, like smaller deployments as opposed to full-blown cloud-based workloads." WebAssembly will mix with containers and VMs. "It's just the way that software works."</li><li>Kubernetes is part of today's distributed fabric. Linux is now everywhere, and Kubernetes is going through the same evolution: into airplanes, cars, and fast-food restaurants. "People are going to focus on the layers up top, not necessarily, like, the core Kubernetes project itself. It's going to be all the cool stuff built on top."</li></ul>
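<p>To make the SBOM item above concrete, here is a rough sketch of the SPDX tag-value format the SPDX community maintains. The field names are the mandatory document-creation fields from the SPDX spec; the document name, namespace, and tool name are hypothetical placeholders:</p>

```python
# Illustrative sketch: build the header of a minimal SPDX 2.3 tag-value
# document. Field names follow the SPDX spec; the name/namespace/tool
# values are made-up placeholders, not from any real project.
from datetime import datetime, timezone

def minimal_spdx(name: str, namespace: str) -> str:
    """Return the smallest useful SPDX 2.3 tag-value document header."""
    created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    lines = [
        "SPDXVersion: SPDX-2.3",
        "DataLicense: CC0-1.0",
        "SPDXID: SPDXRef-DOCUMENT",
        f"DocumentName: {name}",
        f"DocumentNamespace: {namespace}",
        "Creator: Tool: example-sbom-sketch",   # hypothetical tool name
        f"Created: {created}",
    ]
    return "\n".join(lines)

doc = minimal_spdx("demo-app", "https://example.com/spdx/demo-app")
print(doc)
```

<p>Real SBOM tooling adds per-package and per-file sections below this header; the point is only that the format itself is plain, diffable text, which is part of why it predates the current wave of interest.</p>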
]]></description>
      <pubDate>Thu, 16 Feb 2023 16:41:18 +0000</pubDate>
      <author>podcasts@thenewstack.io (Cloud Native Computing Foundation, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/2023-hotness-cloud-ides-web-assembly-and-sboms-M4VK3fIu</link>
      <content:encoded><![CDATA[<p>Here's a breakdown of what we cover with Cloud Native Computing Foundation CTO Chris Aniszczyk:</p><ul><li>Cloud IDEs will mature as <a href="https://thenewstack.io/this-week-in-programming-github-codespaces-portable-dev-environment/">GitHub's Codespaces</a> platform gains acceptance through its integration into the GitHub service. Other factors include new startups in the space, such as <a href="https://www.gitpod.io/" target="_blank" rel="noopener">GitPod</a>, which offers a secure, cloud-based IDE, and <a href="https://www.uptycs.com/" target="_blank" rel="noopener">Uptycs</a>, which uses telemetry data to lock down developer environments. "So I think you're just gonna see more people exposed to it, and they're gonna be like, 'holy crap, this makes my life a lot easier.'"</li><li>FinOps reflects the more stringent views on managing costs, focusing on the efficiency of resources that a company provides for developers. The focus also translates to the GreenOps movement, with its emphasis on efficiency.</li><li>Software bills of materials (SBOMs) will continue to mature, with Sigstore as the project with the fastest expected adoption. Witness, from <a href="https://www.testifysec.com/blog/" target="_blank" rel="noopener">TestifySec</a>, is another project to watch. The SPDX community has been at the center of the movement for over a decade, since before people cared about it.</li><li><a href="https://thenewstack.io/i-need-to-talk-to-you-about-kubernetes-gitops/">GitOps</a> and <a href="https://thenewstack.io/opentelemetry-properly-explained-and-demoed/">OpenTelemetry</a>: This year, KubeCon submissions on GitOps topics were way up. OpenTelemetry is the second most popular project in the CNCF, behind Kubernetes.</li><li>Platform engineering is hot. Aniszczyk cites <a href="https://thenewstack.io/spotifys-backstage-a-strategic-guide/">Backstage</a>, a CNCF project, as one he is watching. It has a healthy plugin extension ecosystem and a correspondingly large community. People make fun of Jenkins, but Jenkins is likely going to be around as long as Linux because of its plugin community. Backstage is going along that same route.</li><li><a href="https://thenewstack.io/webassembly-to-let-developers-combine-languages/">WebAssembly</a>: "You will probably see an uptick in edge cases, like smaller deployments as opposed to full-blown cloud-based workloads." WebAssembly will mix with containers and VMs. "It's just the way that software works."</li><li>Kubernetes is part of today's distributed fabric. Linux is now everywhere, and Kubernetes is going through the same evolution: into airplanes, cars, and fast-food restaurants. "People are going to focus on the layers up top, not necessarily, like, the core Kubernetes project itself. It's going to be all the cool stuff built on top."</li></ul>
]]></content:encoded>
      <enclosure length="18304566" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/d88ad19b-22bf-4579-83c9-aa914ef3d952/audio/d7b688e5-f5fb-4718-911a-521653f66936/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>2023 Hotness: Cloud IDEs, Web Assembly, and SBOMs</itunes:title>
      <itunes:author>Cloud Native Computing Foundation, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/87b32a74-d84c-4cb6-b2b0-5f5317d41bc8/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:19:04</itunes:duration>
      <itunes:summary>Cloud IDEs are hot, but several other trends are taking shape in 2023 that Cloud Native Computing Foundation CTO Chris Aniszczyk highlights in the latest episode of The New Stack Makers.</itunes:summary>
      <itunes:subtitle>Cloud IDEs are hot, but several other trends are taking shape in 2023 that Cloud Native Computing Foundation CTO Chris Aniszczyk highlights in the latest episode of The New Stack Makers.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, alex williams, the new stack, devops, cloud native security conference, devops podcast, tech, developer podcast, the new stack makers, software engineer, cncf, chris aniszczyk</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1386</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8d7525a7-b578-4aec-9599-d9775d9b57d5</guid>
      <title>Generative AI: Don&apos;t Fire Your Copywriters Just Yet</title>
      <description><![CDATA[<p>Everyone in the community was surprised by <a href="https://thenewstack.io/just-out-of-the-box-chatgpt-causing-waves-of-talk-concern/">ChatGPT last year</a>, a web service that responded to any and all user questions <a href="https://archive.ph/2023.01.25-131857/https://fortune.com/longform/chatgpt-openai-sam-altman-microsoft/">with a surprising fluidity</a>.</p><p> </p><p>ChatGPT is a variant of the powerful <a href="https://thenewstack.io/openais-gpt-3-makes-big-leap-forward-for-natural-language-processing/">GPT-3 large language model</a> created by OpenAI, a company heavily backed by Microsoft. It is still a demo, though it is pretty clear that this type of generative AI will be rapidly commercialized. Indeed, Microsoft is <a href="https://www.nytimes.com/2023/02/07/technology/microsoft-ai-chatgpt-bing.html">embedding the generative AI in its Bing Search service</a>, and Google <a href="https://www.cnn.com/2023/02/06/tech/google-bard-chatgpt-rival/index.html">is building a rival offering</a>.</p><p> </p><p>So what are smaller businesses to do to ensure their messages are heard by these machine learning giants?</p><p> </p><p>For this latest podcast from The New Stack, we discussed these issues with <a href="https://www.linkedin.com/in/ryanejohnston/">Ryan Johnston</a>, chief marketing officer for <a href="https://writer.com/">Writer</a>. Writer has enjoyed early success in generative AI technologies. The company's service is dedicated to a single mission: making sure its customers' content adheres to the guidelines they have set in place.</p><p> </p><p>This can include features such as ensuring the language in the copy matches the company's own designated terminology, or making sure that a piece of content covers all the required topic points, or even that a press release has quotes that are not out of scope with the project mission itself.</p><p> </p><p>In short, the service promises "consistently on-brand content at scale," Johnston said. 
"It's not taking away my creativity. But it is doing a great job of figuring out how to create content for me at a faster pace, [content] that actually sounds like what I want it to sound like."</p><p> </p><p>For our conversation, we first delved into how the company was started, its value proposition ("what is it used for?") and what role AI plays in the company's offering. We also delved a bit into the technology stack Writer deploys to offer these services, as well as what material Writer may require from its customers to make the service work.</p><p> </p><p>For the second part of our conversation, we turn our attention to how other companies (that are not search giants) can get their message across in the land of large language models, and maybe even find a few new sources of AI-generated value along the way. And, for those public-facing businesses dealing with Google and Bing, we chat about how they should refine their own <a href="https://thenewstack.io/does-jamstack-or-wordpress-handle-seo-requirements-better/">search engine optimization</a> (SEO) strategies to be best represented in these large models.</p><p> </p><p>One point to consider: While AI can generate a lot of pretty convincing text, you still need a human in the loop to oversee the results, Johnston advised.</p><p> </p><p>"We are augmenting content teams' copywriters to do what they do best, just even better. So we're scaling the mundane parts of the process that you may not love. We are helping you get a first draft on paper when you've got writer's block," Johnston said. "But at the end of the day, our belief is there needs to be a great writer in the driver's seat. [You] should never just be fully reliant on AI to produce things that you're going to immediately take to market."</p>
]]></description>
      <pubDate>Thu, 09 Feb 2023 19:30:18 +0000</pubDate>
      <author>podcasts@thenewstack.io (writer, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/generative-ai-dont-fire-your-copywriters-just-yet-5FqVw6Nb</link>
      <content:encoded><![CDATA[<p>Everyone in the community was surprised by <a href="https://thenewstack.io/just-out-of-the-box-chatgpt-causing-waves-of-talk-concern/">ChatGPT last year</a>, a web service that responded to any and all user questions <a href="https://archive.ph/2023.01.25-131857/https://fortune.com/longform/chatgpt-openai-sam-altman-microsoft/">with a surprising fluidity</a>.</p><p> </p><p>ChatGPT is a variant of the powerful <a href="https://thenewstack.io/openais-gpt-3-makes-big-leap-forward-for-natural-language-processing/">GPT-3 large language model</a> created by OpenAI, a company heavily backed by Microsoft. It is still a demo, though it is pretty clear that this type of generative AI will be rapidly commercialized. Indeed, Microsoft is <a href="https://www.nytimes.com/2023/02/07/technology/microsoft-ai-chatgpt-bing.html">embedding the generative AI in its Bing Search service</a>, and Google <a href="https://www.cnn.com/2023/02/06/tech/google-bard-chatgpt-rival/index.html">is building a rival offering</a>.</p><p> </p><p>So what are smaller businesses to do to ensure their messages are heard by these machine learning giants?</p><p> </p><p>For this latest podcast from The New Stack, we discussed these issues with <a href="https://www.linkedin.com/in/ryanejohnston/">Ryan Johnston</a>, chief marketing officer for <a href="https://writer.com/">Writer</a>. Writer has enjoyed early success in generative AI technologies. The company's service is dedicated to a single mission: making sure its customers' content adheres to the guidelines they have set in place.</p><p> </p><p>This can include features such as ensuring the language in the copy matches the company's own designated terminology, or making sure that a piece of content covers all the required topic points, or even that a press release has quotes that are not out of scope with the project mission itself.</p><p> </p><p>In short, the service promises "consistently on-brand content at scale," Johnston said. 
"It's not taking away my creativity. But it is doing a great job of figuring out how to create content for me at a faster pace, [content] that actually sounds like what I want it to sound like."</p><p> </p><p>For our conversation, we first delved into how the company was started, its value proposition ("what is it used for?") and what role that AI plays in the company's offering. We also delve a bit into the technology stack Writer deploys to offer these services, as well as what material the Writer may require from their customers themselves to make the service work.</p><p> </p><p>For the second part of our conversation, we turn our attention to how other companies (that are not search giants) can get their message across in the land of large language models, and maybe even find a few new sources of AI-generated value along the way. And, for those public-facing businesses dealing with Google and Bing, we chat about how they should they refine their own <a href="https://thenewstack.io/does-jamstack-or-wordpress-handle-seo-requirements-better/">search engine optimization</a> (SEO) strategies to be best represented in these large models?</p><p> </p><p>One point to consider: While AI can generate a lot of pretty convincing text, you still need a human in the loop to oversee the results, Johnston advised.</p><p> </p><p>"We are augmenting content teams copywriters to do what they do best, just even better. So we're scaling the mundane parts of the process that you may not love. We are helping you get a first draft on paper when you've got writer's block," Johnston said. "But at the end of the day, our belief is there needs to be a great writer in the driver's seat. [You] should never just be fully reliant on AI to produce things that you're going to immediately take to market."</p>
]]></content:encoded>
      <enclosure length="22549359" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/fca9c388-79ea-47c3-96e5-9fbea1037b97/audio/0281d25d-467c-4062-bae4-413a56f05d63/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Generative AI: Don&apos;t Fire Your Copywriters Just Yet</itunes:title>
      <itunes:author>writer, The New Stack</itunes:author>
      <itunes:duration>00:23:29</itunes:duration>
      <itunes:summary>Everyone in the community was surprised by ChatGPT last year, a web service that responded to any and all user questions with surprising fluidity.

ChatGPT is a variant of the powerful GPT-3 large language model created by OpenAI, a company backed by Microsoft. It is still a demo, though it is pretty clear that this type of generative AI will be rapidly commercialized. Indeed, Microsoft is embedding the generative AI in its Bing search service, and Google is building a rival offering.

So what are smaller businesses to do to ensure their messages are heard by these machine learning giants?

For this latest podcast from The New Stack, we discussed these issues with Ryan Johnston, chief marketing officer for Writer. Writer has enjoyed early success in generative AI technologies. The company&apos;s service is dedicated to a single mission: making sure its customers&apos; content adheres to the guidelines they have set in place.</itunes:summary>
      <itunes:subtitle>Everyone in the community was surprised by ChatGPT last year, a web service that responded to any and all user questions with surprising fluidity.

ChatGPT is a variant of the powerful GPT-3 large language model created by OpenAI, a company backed by Microsoft. It is still a demo, though it is pretty clear that this type of generative AI will be rapidly commercialized. Indeed, Microsoft is embedding the generative AI in its Bing search service, and Google is building a rival offering.

So what are smaller businesses to do to ensure their messages are heard by these machine learning giants?

For this latest podcast from The New Stack, we discussed these issues with Ryan Johnston, chief marketing officer for Writer. Writer has enjoyed early success in generative AI technologies. The company&apos;s service is dedicated to a single mission: making sure its customers&apos; content adheres to the guidelines they have set in place.</itunes:subtitle>
      <itunes:keywords>software developer, joab jackson, tech podcast, the new stack, devops, devops podcast, ryan johnston, tech, chatgpt, developer podcast, the new stack makers, software engineer, writer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1385</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">24df9469-630e-4d1e-839e-7b7189578551</guid>
      <title>Feature Flags are not Just for Devs</title>
      <description><![CDATA[<p>The story goes something like this:</p><p> </p><p>There's this marketing manager who is trying to time a launch. She asks the developer team when the service will be ready. The dev team says maybe a few months. Let's say three months from now in April. The marketing manager begins prepping for the release.</p><p> </p><p>The dev team releases the service the following week.</p><p> </p><p>It's not an uncommon occurrence.</p><p> </p><p><a href="https://www.linkedin.com/in/edithharbaugh/">Edith Harbaugh</a> is the co-founder and CEO of <a href="https://launchdarkly.com/">LaunchDarkly</a>, a company she launched in 2014 with <a href="https://www.linkedin.com/in/jkodumal/">John Kodumal</a> to solve these problems with <a href="https://thenewstack.io/feature-flags-making-software-delivery-faster/">software releases</a> that affect organizations worldwide. Today, LaunchDarkly has 4,000 customers and annual recurring revenue of $100 million.</p><p> </p><p>We interviewed Harbaugh for our Tech Founder Odyssey series on The New Stack Makers about her journey and LaunchDarkly's work. The interview starts with this question about the timing of dev releases and the relationship between developers and other constituencies, particularly the marketing organization.</p><p> </p><p>LaunchDarkly is the number one feature management company, Harbaugh said. Its mission is to provide services to launch software in a measured, controlled fashion. Harbaugh and Kodumal, the company's CTO, founded the company on the premise that developing and releasing software is arduous.</p><p> </p><p>"You wonder whether you're building the right thing," said Harbaugh, who has worked as both an engineer and a product manager. "Once you get it out to the market, it often is not quite right. And then you just run this huge risk of how do you fix things on the fly."</p><p> </p><p><a href="https://martinfowler.com/articles/feature-toggles.html">Feature flagging</a> was a technique that a lot of software companies used. Harbaugh worked at <a href="https://www.tripit.com/trips">Tripit</a>, a travel service, where they used feature flags, as did companies such as Atlassian, where Kodumal had developed software.</p><p> </p><p>"So the kernel of LaunchDarkly, when we started in 2014, was to make this technique of feature flagging into a movement called feature management, to allow everybody to build better software faster, in a safer way."</p><p> </p><p>LaunchDarkly lets companies release features at whatever granularity an organization wants, allowing a developer to push a release into production in different pieces at different times, Harbaugh said. So a marketing organization can send a release out even after the developer team has released it into production.</p><p> </p><p>"So, for example, if we were running a release, and we wanted somebody from The New Stack to see it first, the marketing person could turn it on just for you."</p><p> </p><p>Harbaugh describes herself as a huge geek. But she also gets it in a rare way for geeks and non-geeks alike. She and Kodumal took a concept used effectively by developers and transformed it into a service that provides feature management for a broader customer base, like a marketer who, from the company's San Francisco office, pre-programs feature flags the day before a granular East Coast launch.</p><p> </p><p>The idea is novel, and Harbaugh's journey, like that of many intelligent technical founders, explains her place in the industry today. She's a leader in the space, and a fun person to talk to, so we hope you enjoy this latest episode in our tech founder series from The New Stack Makers.</p>
]]></description>
      <pubDate>Thu, 2 Feb 2023 21:58:22 +0000</pubDate>
      <author>podcasts@thenewstack.io (Launch Darkly, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/feature-flags-are-not-just-for-devs-sLJB3VQz</link>
      <content:encoded><![CDATA[<p>The story goes something like this:</p><p> </p><p>There's this marketing manager who is trying to time a launch. She asks the developer team when the service will be ready. The dev team says maybe a few months. Let's say three months from now in April. The marketing manager begins prepping for the release.</p><p> </p><p>The dev team releases the service the following week.</p><p> </p><p>It's not an uncommon occurrence.</p><p> </p><p><a href="https://www.linkedin.com/in/edithharbaugh/">Edith Harbaugh</a> is the co-founder and CEO of <a href="https://launchdarkly.com/">LaunchDarkly</a>, a company she launched in 2014 with <a href="https://www.linkedin.com/in/jkodumal/">John Kodumal</a> to solve these problems with <a href="https://thenewstack.io/feature-flags-making-software-delivery-faster/">software releases</a> that affect organizations worldwide. Today, LaunchDarkly has 4,000 customers and annual recurring revenue of $100 million.</p><p> </p><p>We interviewed Harbaugh for our Tech Founder Odyssey series on The New Stack Makers about her journey and LaunchDarkly's work. The interview starts with this question about the timing of dev releases and the relationship between developers and other constituencies, particularly the marketing organization.</p><p> </p><p>LaunchDarkly is the number one feature management company, Harbaugh said. Its mission is to provide services to launch software in a measured, controlled fashion. Harbaugh and Kodumal, the company's CTO, founded the company on the premise that developing and releasing software is arduous.</p><p> </p><p>"You wonder whether you're building the right thing," said Harbaugh, who has worked as both an engineer and a product manager. "Once you get it out to the market, it often is not quite right. And then you just run this huge risk of how do you fix things on the fly."</p><p> </p><p><a href="https://martinfowler.com/articles/feature-toggles.html">Feature flagging</a> was a technique that a lot of software companies used. Harbaugh worked at <a href="https://www.tripit.com/trips">Tripit</a>, a travel service, where they used feature flags, as did companies such as Atlassian, where Kodumal had developed software.</p><p> </p><p>"So the kernel of LaunchDarkly, when we started in 2014, was to make this technique of feature flagging into a movement called feature management, to allow everybody to build better software faster, in a safer way."</p><p> </p><p>LaunchDarkly lets companies release features at whatever granularity an organization wants, allowing a developer to push a release into production in different pieces at different times, Harbaugh said. So a marketing organization can send a release out even after the developer team has released it into production.</p><p> </p><p>"So, for example, if we were running a release, and we wanted somebody from The New Stack to see it first, the marketing person could turn it on just for you."</p><p> </p><p>Harbaugh describes herself as a huge geek. But she also gets it in a rare way for geeks and non-geeks alike. She and Kodumal took a concept used effectively by developers and transformed it into a service that provides feature management for a broader customer base, like a marketer who, from the company's San Francisco office, pre-programs feature flags the day before a granular East Coast launch.</p><p> </p><p>The idea is novel, and Harbaugh's journey, like that of many intelligent technical founders, explains her place in the industry today. She's a leader in the space, and a fun person to talk to, so we hope you enjoy this latest episode in our tech founder series from The New Stack Makers.</p>
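<p>At its core, the feature flag technique discussed in this episode is simple. The sketch below is a hypothetical illustration, not LaunchDarkly's SDK: a flag can be switched on for named users (the "just for you" case) or rolled out to a percentage of traffic, independent of when the code shipped.</p>

```python
import hashlib

# Hypothetical flag store; a real feature management service keeps this
# server-side so flags can change without redeploying the application.
FLAGS = {
    "new-homepage": {
        "enabled": True,
        "allow_users": {"reviewer@thenewstack.io"},  # targeted early access
        "rollout_pct": 10,  # then 10% of everyone else
    },
}

def flag_on(name: str, user_id: str) -> bool:
    flag = FLAGS.get(name)
    if not flag or not flag["enabled"]:
        return False
    if user_id in flag["allow_users"]:
        return True  # e.g. marketing turns the feature on for one reviewer
    # Deterministic bucketing: the same user always lands in the same bucket,
    # so a partial rollout stays stable across requests.
    digest = hashlib.sha256(f"{name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < flag["rollout_pct"]
```

<p>Code guarded by <code>flag_on</code> can ship to production dark; turning the feature on later is a data change, not a new release.</p>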
]]></content:encoded>
      <enclosure length="25694502" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/01c6e861-aad5-445d-a2da-40d970d63b01/audio/15106e4f-48d3-4238-a04f-4024494cd6df/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Feature Flags are not Just for Devs</itunes:title>
      <itunes:author>Launch Darkly, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/b2b037ef-9774-4844-a99c-1f97705ff6ef/3000x3000/the-tech-odyssey-logo-white-bg.jpg?aid=rss_feed"/>
      <itunes:duration>00:26:45</itunes:duration>
      <itunes:summary>The story goes something like this:

There&apos;s this marketing manager who is trying to time a launch. She asks the developer team when the service will be ready. The dev team says maybe a few months. Let&apos;s say three months from now in April. The marketing manager begins prepping for the release.

The dev team releases the service the following week.

It&apos;s not an uncommon occurrence.

Edith Harbaugh is the co-founder and CEO of LaunchDarkly, a company she launched in 2014 with John Kodumal to solve these problems with software releases that affect organizations worldwide. Today, LaunchDarkly has 4,000 customers and annual recurring revenue of $100 million.

We interviewed Harbaugh for our Tech Founder Odyssey series on The New Stack Makers about her journey and LaunchDarkly&apos;s work. The interview starts with this question about the timing of dev releases and the relationship between developers and other constituencies, particularly the marketing organization.

Edith Harbaugh - @edith_h
Alex Williams - @alexwilliams 
The New Stack - @thenewstack</itunes:summary>
      <itunes:subtitle>The story goes something like this:

There&apos;s this marketing manager who is trying to time a launch. She asks the developer team when the service will be ready. The dev team says maybe a few months. Let&apos;s say three months from now in April. The marketing manager begins prepping for the release.

The dev team releases the service the following week.

It&apos;s not an uncommon occurrence.

Edith Harbaugh is the co-founder and CEO of LaunchDarkly, a company she launched in 2014 with John Kodumal to solve these problems with software releases that affect organizations worldwide. Today, LaunchDarkly has 4,000 customers and annual recurring revenue of $100 million.

We interviewed Harbaugh for our Tech Founder Odyssey series on The New Stack Makers about her journey and LaunchDarkly&apos;s work. The interview starts with this question about the timing of dev releases and the relationship between developers and other constituencies, particularly the marketing organization.

Edith Harbaugh - @edith_h
Alex Williams - @alexwilliams 
The New Stack - @thenewstack</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, alex williams, the new stack, devops, devops podcast, tech, developer podcast, the new stack makers, software engineer, edith harbaugh, the tech founder odyssey, launch darkly</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1384</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">16f7da18-a439-413b-8564-ecc3b9b489bc</guid>
      <title>Port: Platform Engineering Needs a Holistic Approach</title>
      <description><![CDATA[<p>By now, almost everyone agrees <a href="https://thenewstack.io/whats-platform-engineering-and-how-does-it-support-devops/">platform engineering</a> is <a href="https://thenewstack.io/cyberark-decreases-cognitive-load-with-platform-engineering/">probably a good idea</a>: an organization builds an <a href="https://thenewstack.io/internal-developer-portal-what-it-is-and-why-you-need-one/">internal development platform</a> to empower coders and speed application releases. So, for this latest edition of <a href="https://thenewstack.io/podcasts/">The New Stack podcast</a>, we spoke with one of the pioneers in this space, <a href="https://www.linkedin.com/in/zohar-einy-b6219612a/?originalSubdomain=il">Zohar Einy</a>, CEO of Port, to see how platform engineering would work in your organization. TNS Editor Joab Jackson hosted this conversation.</p><p> </p><p>Port offers what it claims is the world's first <a href="https://www.getport.io/company">low-code platform</a> for developers.</p><p> </p><p><a href="https://thenewstack.simplecast.com/episodes/rethinking-web-application-firewalls">Rethinking Web Application Firewalls</a></p><p> </p><p>With Port, an organization can build a software catalogue of approved tools, import its own data model, and set up workflows. Developers can consume all the resources they need through a self-service catalogue, without needing to know how to set up a complex system like Kubernetes. The DevOps and platform teams themselves maintain the platform.</p><p> </p><p>Application owners aren't the only potential users of a self-service catalogue, Einy points out in our conversation. DevOps and system administration teams can also use the platform. A DevOps team can set up automations "to make sure that [developers are] using the platform with the right mindset that fits with their organizational standards in terms of compliance, security, and performance aspects."</p><p> </p><p>Even machines themselves could benefit from a self-service platform, for those who are looking to automate deployments as much as possible.</p><p> </p><p>Einy offered an example: a CI/CD process could create a build process on its own. If it needs to check the maturity level of some tool, it can do so through an API call. If a tool is not adequately certified, the developer is notified; but if all the tools are sufficiently mature, then the automated process can finish the build without further developer intervention.</p><p> </p><p>Another process that could be automated is the termination of permissions when their deadline has passed. Think of an early-warning system for expired digital certificates. "So it's a big driver both for cost reduction and security best practices," Einy said.</p><p> </p><h2>Too Many Choices, Not Enough Code</h2><p> </p><p>But what about developer choice? Won't developers feel frustrated when barred from using the tools they are most fond of?</p><p> </p><p>That freedom to use any tool available is what led us to the current state of overcomplexity in full-stack development, Einy responded. This is why the role of "full-stack developer" seems like an impossible one, given all the possible permutations at each layer of the stack.</p><p> </p><p>Like the artist who finds inspiration in a limited palette, the developer should be able to find everything they need in a well-curated platform.</p><p> </p><p>"In the past, when we talked about 'you-build-it-you-own-it', we thought that the developer needs to know everything about anything, and they have the full ownership to choose anything that they want. And they got sick of it, right, because they needed to know too much," Einy said. "So I think we are getting into a transition where developers are OK with getting what they need with a click of a button because they have so much work on their own."</p><p> </p><p>In this conversation, we also discussed measuring success, the role of access control in DevOps, and the open source <a href="https://thenewstack.io/spotifys-backstage-a-strategic-guide/">Backstage platform</a> and its recent inclusion of paid plug-ins. Give it a listen!</p>
]]></description>
      <pubDate>Wed, 25 Jan 2023 17:05:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (port, the new stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/port-platform-engineering-can-be-the-first-step-in-system-automation-4i2b5LVU</link>
      <content:encoded><![CDATA[<p>By now, almost everyone agrees <a href="https://thenewstack.io/whats-platform-engineering-and-how-does-it-support-devops/">platform engineering</a> is <a href="https://thenewstack.io/cyberark-decreases-cognitive-load-with-platform-engineering/">probably a good idea</a>: an organization builds an <a href="https://thenewstack.io/internal-developer-portal-what-it-is-and-why-you-need-one/">internal development platform</a> to empower coders and speed application releases. So, for this latest edition of <a href="https://thenewstack.io/podcasts/">The New Stack podcast</a>, we spoke with one of the pioneers in this space, <a href="https://www.linkedin.com/in/zohar-einy-b6219612a/?originalSubdomain=il">Zohar Einy</a>, CEO of Port, to see how platform engineering would work in your organization. TNS Editor Joab Jackson hosted this conversation.</p><p> </p><p>Port offers what it claims is the world's first <a href="https://www.getport.io/company">low-code platform</a> for developers.</p><p> </p><p><a href="https://thenewstack.simplecast.com/episodes/rethinking-web-application-firewalls">Rethinking Web Application Firewalls</a></p><p> </p><p>With Port, an organization can build a software catalogue of approved tools, import its own data model, and set up workflows. Developers can consume all the resources they need through a self-service catalogue, without needing to know how to set up a complex system like Kubernetes. The DevOps and platform teams themselves maintain the platform.</p><p> </p><p>Application owners aren't the only potential users of a self-service catalogue, Einy points out in our conversation. DevOps and system administration teams can also use the platform. A DevOps team can set up automations "to make sure that [developers are] using the platform with the right mindset that fits with their organizational standards in terms of compliance, security, and performance aspects."</p><p> </p><p>Even machines themselves could benefit from a self-service platform, for those who are looking to automate deployments as much as possible.</p><p> </p><p>Einy offered an example: a CI/CD process could create a build process on its own. If it needs to check the maturity level of some tool, it can do so through an API call. If a tool is not adequately certified, the developer is notified; but if all the tools are sufficiently mature, then the automated process can finish the build without further developer intervention.</p><p> </p><p>Another process that could be automated is the termination of permissions when their deadline has passed. Think of an early-warning system for expired digital certificates. "So it's a big driver both for cost reduction and security best practices," Einy said.</p><p> </p><h2>Too Many Choices, Not Enough Code</h2><p> </p><p>But what about developer choice? Won't developers feel frustrated when barred from using the tools they are most fond of?</p><p> </p><p>That freedom to use any tool available is what led us to the current state of overcomplexity in full-stack development, Einy responded. This is why the role of "full-stack developer" seems like an impossible one, given all the possible permutations at each layer of the stack.</p><p> </p><p>Like the artist who finds inspiration in a limited palette, the developer should be able to find everything they need in a well-curated platform.</p><p> </p><p>"In the past, when we talked about 'you-build-it-you-own-it', we thought that the developer needs to know everything about anything, and they have the full ownership to choose anything that they want. And they got sick of it, right, because they needed to know too much," Einy said. "So I think we are getting into a transition where developers are OK with getting what they need with a click of a button because they have so much work on their own."</p><p> </p><p>In this conversation, we also discussed measuring success, the role of access control in DevOps, and the open source <a href="https://thenewstack.io/spotifys-backstage-a-strategic-guide/">Backstage platform</a> and its recent inclusion of paid plug-ins. Give it a listen!</p>
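<p>The automated maturity gate Einy describes can be sketched in a few lines. This is a hypothetical illustration, not Port's actual API: the <code>maturity_of</code> lookup stands in for the real API call a CI/CD process would make to the platform's catalogue.</p>

```python
# Hypothetical CI gate: every tool used in the build must meet the required
# maturity level; otherwise the build stops and the failing tools are
# reported back to the developer.
def gate_build(maturity_of, tools, required_level="certified"):
    """Return (ok, failures): ok is True only when every tool meets the bar."""
    failures = [t for t in tools if maturity_of(t) != required_level]
    return (not failures, failures)
```

<p>If <code>failures</code> is empty, the automated process finishes the build with no developer intervention; if not, those tools are flagged for review, as in the example above.</p>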
]]></content:encoded>
      <enclosure length="20602088" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/c7762e76-e504-4950-9e0f-8a2754066583/audio/8c4d474e-55f6-4a70-8b02-d7e150e8028f/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Port: Platform Engineering Needs a Holistic Approach</itunes:title>
      <itunes:author>port, the new stack</itunes:author>
      <itunes:duration>00:21:27</itunes:duration>
      <itunes:summary>By now, almost everyone agrees platform engineering is probably a good idea: an organization builds an internal development platform to empower coders and speed application releases. So, for this latest edition of The New Stack podcast, we spoke with one of the pioneers in this space, Zohar Einy, CEO of Port, to see how platform engineering would work in your organization. TNS Editor Joab Jackson hosted this conversation.

Port offers what it claims is the world&apos;s first low-code platform for developers.

With Port, an organization can build a software catalogue of approved tools, import its own data model, and set up workflows. Developers can consume all the resources they need through a self-service catalogue, without needing to know how to set up a complex system like Kubernetes. The DevOps and platform teams themselves maintain the platform.

Zohar Einy - @ZoharEiny 
Joab Jackson - @Joab_Jackson  
The New Stack - @thenewstack</itunes:summary>
      <itunes:subtitle>By now, almost everyone agrees platform engineering is probably a good idea: an organization builds an internal development platform to empower coders and speed application releases. So, for this latest edition of The New Stack podcast, we spoke with one of the pioneers in this space, Zohar Einy, CEO of Port, to see how platform engineering would work in your organization. TNS Editor Joab Jackson hosted this conversation.

Port offers what it claims is the world&apos;s first low-code platform for developers.

With Port, an organization can build a software catalogue of approved tools, import its own data model, and set up workflows. Developers can consume all the resources they need through a self-service catalogue, without needing to know how to set up a complex system like Kubernetes. The DevOps and platform teams themselves maintain the platform.

Zohar Einy - @ZoharEiny 
Joab Jackson - @Joab_Jackson  
The New Stack - @thenewstack</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1383</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6963302d-fff0-4a0f-91fa-0e82a463ef30</guid>
      <title>Platform Engineering Benefits Developers, and Companies Too</title>
      <description><![CDATA[<p>In this latest episode of The New Stack Makers podcast, we delve more deeply into the emerging practice of platform engineering. The guests for this show are <a href="https://www.linkedin.com/in/aeris-stewart-%F0%9F%8C%88-083487187/">Aeris Stewart</a>, community manager at platform orchestration provider Humanitec, and <a href="https://www.linkedin.com/in/thegalloway/">Michael Galloway</a>, an engineering leader at infrastructure software provider HashiCorp. TNS Features Editor <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a> hosted this conversation.</p><p> </p><p>Although the term has been around for <a href="https://thenewstack.io/platform-engineering-infrastructure-meets-dev-experience/">several</a> <a href="https://thenewstack.io/platform-engineering-challenges-and-solutions/">years</a>, platform engineering caught the industry's attention in a big way last September, when Humanitec <a href="https://thenewstack.io/platform-engineering-is-devops-evolved-new-report-shows/">published a report</a> that identified how widespread the practice was quickly becoming, citing its use by Nike, Starbucks, GitHub and others.</p><p> </p><p>Right after the report was released, Stewart <a href="https://thenewstack.io/devops-is-dead-embrace-platform-engineering/">provided an analysis</a> for TNS arguing that platform engineering solved the many issues that another practice, DevOps, was struggling with. "Developers don’t want to do operations anymore, and that’s a bad sign for DevOps," Stewart wrote. The post stirred a great deal of conversation around the success of DevOps.</p><p> </p><p>Platform engineering is "a discipline of designing and building tool chains and workflows that enable developer self service," Stewart explained. The purpose is to give the developers in your organization a set of standard tools that will allow them to do their job — write and fix apps — as quickly as possible. The platform provides the tools and services "that free up engineering time by reducing manual toil [and] cognitive load," Galloway added.</p><p> </p><p>But platform engineering also has an advantage for the business itself, Galloway elaborated. With an internal developer platform in place, a business can scale up with "reliability, cost efficiency and security," Galloway said.</p><p> </p><p>Before HashiCorp, Galloway was an engineer at Netflix, and there he saw the benefits of platform engineering for both developers and the business itself. "All teams were enabled to own the entire lifecycle from design to operation. This is really central to how Netflix was able to scale," Galloway said. A platform engineering team created a set of services that made it possible for Netflix engineers to deliver code "without needing to be continuous delivery experts."</p><p> </p><p>The conversation also touched on the challenges of implementing platform engineering, and what metrics you should use to quantify its success.</p><p> </p><p>And because platform engineering is a new discipline, we also discussed education and community. Humanitec's debut PlatformCon drew over 6,000 attendees last June (and <a href="https://platformcon.com/">PlatformCon 2023</a> has just been scheduled for June). There is also a platform engineering <a href="https://platformengineering.org/slack-rd">Slack channel</a>, which has drawn over 8,000 participants thus far.</p><p> </p><p>"I think the community is playing a really big role right now, especially as a lot of organizations' awareness of platform engineering is just starting," Stewart said. "There's a lot of knowledge that can be gained by building a platform that you don't necessarily want to learn the hard way."</p>
]]></description>
      <pubDate>Wed, 18 Jan 2023 20:29:33 +0000</pubDate>
      <author>podcasts@thenewstack.io (Hashicorp, The New Stack, humanitec)</author>
      <link>https://thenewstack.simplecast.com/episodes/platform-engineering-benefits-developers-and-companies-too-it1C_Y41</link>
      <content:encoded><![CDATA[<p>In this latest episode of The New Stack Makers podcast, we delve more deeply into the emerging practice of platform engineering. The guests for this show are <a href="https://www.linkedin.com/in/aeris-stewart-%F0%9F%8C%88-083487187/">Aeris Stewart</a>, community manager at platform orchestration provider Humanitec and  <a href="https://www.linkedin.com/in/thegalloway/">Michael Galloway</a>, an engineering leader for infrastructure software provider HashiCorp. TNS Features Editor <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a> hosted this conversation.</p><p> </p><p>Although the term has been around for <a href="https://thenewstack.io/platform-engineering-infrastructure-meets-dev-experience/">several</a> <a href="https://thenewstack.io/platform-engineering-challenges-and-solutions/">years</a>, platform engineering caught the industry's attention in a big way last September, when Humanitec <a href="https://thenewstack.io/platform-engineering-is-devops-evolved-new-report-shows/">published a report</a> that identified how widespread the practice was quickly becoming, citing its use by Nike, Starbucks, GitHub and others.</p><p> </p><p>Right after the report was released, Stewart <a href="https://thenewstack.io/devops-is-dead-embrace-platform-engineering/">provided an analysis</a> for TNS arguing that platform engineering solved the many issues that another practice, DevOps, was struggling with. "Developers don’t want to do operations anymore, and that’s a bad sign for DevOps," Stewart wrote. The post stirred a great deal of conversation around the success of DevOps.</p><p> </p><p>Platform engineering is "a discipline of designing and building tool chains and workflows that enable developer self service," Stewart explained. The purpose is to give the developers in your organization a set of standard tools that will allow them to do their job — write and fix apps — as quickly as possible. 
The platform provides the tools and services "that free up engineering time by reducing manual toil and cognitive load," Galloway added.</p><p> </p><p>But platform engineering also has an advantage for the business itself, Galloway elaborated. With an internal developer platform in place, a business can scale up with "reliability, cost efficiency and security," Galloway said.</p><p> </p><p>Before HashiCorp, Galloway was an engineer at Netflix, and there he saw the benefits of platform engineering for both the dev and the business itself. "All teams were enabled to own the entire lifecycle from design to operation. This is really central to how Netflix was able to scale," Galloway said. A platform engineering team created a set of services that made it possible for Netflix engineers to deliver code "without needing to be continuous delivery experts."</p><p> </p><p>The conversation also touched on the challenges of implementing platform engineering, and what metrics you should use to quantify its success.</p><p> </p><p>And because platform engineering is a new discipline, we also discussed education and community. Humanitec's debut PlatformCon drew over 6,000 attendees last June (and <a href="https://platformcon.com/">PlatformCon 2023</a> has just been scheduled for June). There is also a platform engineering <a href="https://platformengineering.org/slack-rd">Slack channel</a>, which has drawn over 8,000 participants thus far.</p><p> </p><p>"I think the community is playing a really big role right now, especially as a lot of organizations' awareness of platform engineering is just starting," Stewart said. "There's a lot of knowledge that can be gained by building a platform that you don't necessarily want to learn the hard way."</p>
]]></content:encoded>
      <enclosure length="23541177" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/fe7013ea-b24e-420b-aff7-04770fbb1e7c/audio/088de6c7-aa56-4780-9125-2c7c61c6b1d2/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Platform Engineering Benefits Developers, and Companies Too</itunes:title>
      <itunes:author>HashiCorp, The New Stack, Humanitec</itunes:author>
      <itunes:duration>00:24:31</itunes:duration>
      <itunes:summary>In this latest episode of The New Stack Makers podcast, we delve more deeply into the emerging practice of platform engineering. The guests for this show are Aeris Stewart, community manager at platform orchestration provider Humanitec, and Michael Galloway, an engineering leader for infrastructure software provider HashiCorp. TNS Features Editor Heather Joslyn hosted this conversation.</itunes:summary>
      <itunes:subtitle>In this latest episode of The New Stack Makers podcast, we delve more deeply into the emerging practice of platform engineering. The guests for this show are Aeris Stewart, community manager at platform orchestration provider Humanitec, and Michael Galloway, an engineering leader for infrastructure software provider HashiCorp. TNS Features Editor Heather Joslyn hosted this conversation.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, aeris stewart, heather joslyn, michael galloway, devops, devops podcast, tech, developer podcast, humanitec, the new stack makers, software engineer, hashicorp</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1382</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9d8f1ce8-600b-46b0-aeab-4d4b30b849dd</guid>
      <title>What’s Platform Engineering? And How Does It Support DevOps?</title>
      <description><![CDATA[<p><a href="https://thenewstack.io/devops-is-dead-embrace-platform-engineering/">Platform engineering</a> “is the art of designing and binding all of the different tech and tools that you have inside of an organization into a golden path that enables self service for developers and reduces cognitive load,” said <a href="https://humanitec.com/author/kaspar-von-grunberg">Kaspar Von Grünberg,</a> founder and CEO of Humanitec, in this episode of The New Stack Makers podcast.</p><p> </p><p><iframe width="560" height="315" src="https://www.youtube.com/embed/6sCTIVpdC08" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></p><p> </p><p>This structure is important for individual contributors, Grünberg said, as well as backend engineers: “if you look at the operation teams, it reduces their burden to do repetitive things. And so platform engineers build and design internal developer platforms, and help and serve users.”</p><p> </p><p>This conversation, hosted by Heather Joslyn, TNS features editor, dove into platform engineering: what it is, how it works, the problems it is intended to solve, and how to get started in building a platform engineering operation in your organization. It also debunks some key fallacies around the concept.</p><p> </p><p>This episode was sponsored by Humanitec.</p><p><h2>The Limits of ‘You Build It, You Run It’</h2></p><p>The notion of <a href="https://queue.acm.org/detail.cfm?id=1142065">“you build it, you run it”</a> — first coined by <a href="https://www.linkedin.com/in/wernervogels/">Werner Vogels,</a> chief technology officer of Amazon, in a 2006 interview — established that developers should “own” their applications throughout their entire lifecycle. 
But, Grünberg said, that may not be realistic in an age of rapidly proliferating microservices and multiple, distributed deployment environments.</p><p> </p><p>“The scale that we're operating today is just totally different,” he said. “The applications are much more complex.” End-to-end ownership, he added, is “a noble dream, but unfair towards the individual contributor. We're asking developers to do so much at once. And then we're always complaining that the output isn't there or not delivering fast enough. But we're not making it easy for them to deliver.”</p><p> </p><p>Creating a “golden path” — through the creation by platform teams of <a href="https://thenewstack.io/platform-engineering-is-devops-evolved-new-report-shows/">internal developer platforms (IDPs)</a> — can not only free developers from unnecessary cognitive load, Grünberg said, but also help make their code more secure and standardized.</p><p> </p><p>For Ops engineers, he said, the adoption of platform engineering can also help free them from doing the same tasks over and over.</p><p> </p><p>“If you want to know whether it's a good idea to look at platform engineering, I recommend go to your service desk and look at the tickets that you're receiving,” Grünberg said. “And if you have things like, ‘Hey, can you debug that deployment?’ and ‘Can you spin up in a moment all these repetitive requests?’ that's probably a good time to take a step back and ask yourself, ‘Should the operations people actually spend time doing these manual things?’”</p><p><h2>The Biggest Fallacies about Platform Engineering</h2></p><p>For organizations that are interested in adopting platform engineering, the Humanitec CEO attacked some of the biggest misconceptions about the practice. 
Chief among them: failing to treat their platform as a product, in the same way a company would begin creating any product, by starting with research into customer needs.</p><p> </p><p>“If you think about how we would develop a software feature, we wouldn't be sitting in a room and taking some assumptions and then building something,” he said. “We would go out to the user, and then actually interview them and say, ‘Hey, what's your problem? What's the most pressing problem?’”</p><p> </p><p>Other fallacies embraced by platform engineering newbies, he said, are “visualization” — the belief that all devs need is another snazzy new dashboard or portal to look at — and believing the platform team has to go all-in right from the start, scaling up a big effort immediately. Such an effort, he said, is “doomed to fail.”</p><p> </p><p>Instead, Grünberg said, “I'm always advocating for starting really small, come up with what's the lowest common tech denominator. Is that containerization with EKS? Perfect, then focus on that."</p><p> </p><p>And don’t forget to give special attention to those early adopters, so they can become evangelists for the product. “Make them fans, prioritize the right way, and then show that to other teams as a, ‘Hey, you want to join in? OK, what's the next cool thing we could build?’”</p><p> </p><p>Check out the entire episode for much more detail about platform engineering and how to get started with it.</p>
]]></description>
      <pubDate>Wed, 11 Jan 2023 19:23:33 +0000</pubDate>
      <author>podcasts@thenewstack.io (humanitec, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/whats-platform-engineering-and-how-does-it-support-devops-4R90VxIS</link>
      <content:encoded><![CDATA[<p><a href="https://thenewstack.io/devops-is-dead-embrace-platform-engineering/">Platform engineering</a> “is the art of designing and binding all of the different tech and tools that you have inside of an organization into a golden path that enables self service for developers and reduces cognitive load,” said <a href="https://humanitec.com/author/kaspar-von-grunberg">Kaspar Von Grünberg,</a> founder and CEO of Humanitec, in this episode of The New Stack Makers podcast.</p><p> </p><p><iframe width="560" height="315" src="https://www.youtube.com/embed/6sCTIVpdC08" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></p><p> </p><p>This structure is important for individual contributors, Grünberg said, as well as backend engineers: “if you look at the operation teams, it reduces their burden to do repetitive things. And so platform engineers build and design internal developer platforms, and help and serve users.”</p><p> </p><p>This conversation, hosted by Heather Joslyn, TNS features editor, dove into platform engineering: what it is, how it works, the problems it is intended to solve, and how to get started in building a platform engineering operation in your organization. It also debunks some key fallacies around the concept.</p><p> </p><p>This episode was sponsored by Humanitec.</p><p><h2>The Limits of ‘You Build It, You Run It’</h2></p><p>The notion of <a href="https://queue.acm.org/detail.cfm?id=1142065">“you build it, you run it”</a> — first coined by <a href="https://www.linkedin.com/in/wernervogels/">Werner Vogels,</a> chief technology officer of Amazon, in a 2006 interview — established that developers should “own” their applications throughout their entire lifecycle. 
But, Grünberg said, that may not be realistic in an age of rapidly proliferating microservices and multiple, distributed deployment environments.</p><p> </p><p>“The scale that we're operating today is just totally different,” he said. “The applications are much more complex.” End-to-end ownership, he added, is “a noble dream, but unfair towards the individual contributor. We're asking developers to do so much at once. And then we're always complaining that the output isn't there or not delivering fast enough. But we're not making it easy for them to deliver.”</p><p> </p><p>Creating a “golden path” — through the creation by platform teams of <a href="https://thenewstack.io/platform-engineering-is-devops-evolved-new-report-shows/">internal developer platforms (IDPs)</a> — can not only free developers from unnecessary cognitive load, Grünberg said, but also help make their code more secure and standardized.</p><p> </p><p>For Ops engineers, he said, the adoption of platform engineering can also help free them from doing the same tasks over and over.</p><p> </p><p>“If you want to know whether it's a good idea to look at platform engineering, I recommend go to your service desk and look at the tickets that you're receiving,” Grünberg said. “And if you have things like, ‘Hey, can you debug that deployment?’ and ‘Can you spin up in a moment all these repetitive requests?’ that's probably a good time to take a step back and ask yourself, ‘Should the operations people actually spend time doing these manual things?’”</p><p><h2>The Biggest Fallacies about Platform Engineering</h2></p><p>For organizations that are interested in adopting platform engineering, the Humanitec CEO attacked some of the biggest misconceptions about the practice. 
Chief among them: failing to treat their platform as a product, in the same way a company would begin creating any product, by starting with research into customer needs.</p><p> </p><p>“If you think about how we would develop a software feature, we wouldn't be sitting in a room and taking some assumptions and then building something,” he said. “We would go out to the user, and then actually interview them and say, ‘Hey, what's your problem? What's the most pressing problem?’”</p><p> </p><p>Other fallacies embraced by platform engineering newbies, he said, are “visualization” — the belief that all devs need is another snazzy new dashboard or portal to look at — and believing the platform team has to go all-in right from the start, scaling up a big effort immediately. Such an effort, he said, is “doomed to fail.”</p><p> </p><p>Instead, Grünberg said, “I'm always advocating for starting really small, come up with what's the lowest common tech denominator. Is that containerization with EKS? Perfect, then focus on that."</p><p> </p><p>And don’t forget to give special attention to those early adopters, so they can become evangelists for the product. “Make them fans, prioritize the right way, and then show that to other teams as a, ‘Hey, you want to join in? OK, what's the next cool thing we could build?’”</p><p> </p><p>Check out the entire episode for much more detail about platform engineering and how to get started with it.</p>
]]></content:encoded>
      <enclosure length="22475799" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/9b3b5e7e-8efb-4935-a2d4-12c7e804886d/audio/a88c9234-1e75-411c-be67-78769fb58663/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What’s Platform Engineering? And How Does It Support DevOps?</itunes:title>
      <itunes:author>humanitec, The New Stack</itunes:author>
      <itunes:duration>00:23:24</itunes:duration>
      <itunes:summary>Platform engineering “is the art of designing and binding all of the different tech and tools that you have inside of an organization into a golden path that enables self service for developers and reduces cognitive load,” said Kaspar Von Grünberg, founder and CEO of Humanitec, in this episode of The New Stack Makers podcast.

This structure is important for individual contributors, Grünberg said, as well as backend engineers: “if you look at the operation teams, it reduces their burden to do repetitive things. And so platform engineers build and design internal developer platforms, and help and serve users.”

This conversation, hosted by Heather Joslyn, TNS features editor, dove into platform engineering: what it is, how it works, the problems it is intended to solve, and how to get started in building a platform engineering operation in your organization. It also debunks some key fallacies around the concept.

This episode was sponsored by Humanitec.

Kaspar Von Grünberg - https://www.linkedin.com/in/kvgruenberg/
Heather Joslyn - @ha_joslyn  
The New Stack - @thenewstack</itunes:summary>
      <itunes:subtitle>Platform engineering “is the art of designing and binding all of the different tech and tools that you have inside of an organization into a golden path that enables self service for developers and reduces cognitive load,” said Kaspar Von Grünberg, founder and CEO of Humanitec, in this episode of The New Stack Makers podcast.

This structure is important for individual contributors, Grünberg said, as well as backend engineers: “if you look at the operation teams, it reduces their burden to do repetitive things. And so platform engineers build and design internal developer platforms, and help and serve users.”

This conversation, hosted by Heather Joslyn, TNS features editor, dove into platform engineering: what it is, how it works, the problems it is intended to solve, and how to get started in building a platform engineering operation in your organization. It also debunks some key fallacies around the concept.

This episode was sponsored by Humanitec.

Kaspar Von Grünberg - https://www.linkedin.com/in/kvgruenberg/
Heather Joslyn - @ha_joslyn  
The New Stack - @thenewstack</itunes:subtitle>
      <itunes:keywords>kaspar von grünberg, software developer, tech podcast, the new stack, heather joslyn, devops, devops podcast, tech, developer podcast, humanitec, the new stack makers, software engineer, platform engineering</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1381</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">af96374a-7e8e-4c92-8686-6199e72c1c37</guid>
      <title>What LaunchDarkly Learned from &apos;Eating Its Own Dog Food&apos;</title>
      <description><![CDATA[<p>Feature flags — the on/off toggles, written in conditional statements, that allow organizations greater control over the user experience once code has been deployed —  are proliferating and growing more complex, and demand robust feature management, said <a href="https://www.linkedin.com/in/karishmairani/">Karishma Irani,</a> head of product at LaunchDarkly, in this episode of The New Stack Makers.</p><p> </p><p>In a November survey by LaunchDarkly, which queried more than 1,000 DevOps professionals,  <a href="https://thenewstack.io/launchdarkly-feature-management-is-a-must-have/">69% of participants said that feature flags are “must-have, mission-critical and/or high priority”</a> for their organizations.</p><p> </p><p>“<a href="https://thenewstack.io/whats-the-future-of-feature-management-feature-flags/">Feature management,</a> we believe, is a modern practice that's becoming more and more common with companies that want to deploy more frequently, innovate faster, and just keep a healthy engineering team,” Irani said.</p><p> </p><p>The idea of feature management, Irani said, is to “maximize value while minimizing risk.”</p><p> </p><p>LaunchDarkly uses its own software, she said, and eating its own dog food, as the saying goes, has paid off in gaining insights into user needs.</p><p> </p><p>As part of LaunchDarkly’s <a href="https://launchdarkly.com/trajectory/2022/">virtual conference Trajectory</a> in November, Irani joined <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> features editor of The New Stack, for a wide-ranging conversation about the latest developments in feature management.</p><p> </p><p>This episode of Makers was sponsored by LaunchDarkly.</p><p><h2>Automating Approvals</h2></p><p>As an example of the benefits of having first-hand knowledge of how their company's products are used, Irani pointed to an internal project in mid-2022.</p><p> </p><p>When the company migrated from 
MongoDB to CockroachDB, it used new capabilities in its Feature Workflows product, which allow users to define a workflow that can schedule the gradual release of a feature flag for a future date and time, and automate approval requests.</p><p> </p><p>“All of these async processes around approvals schedules, they're critical to releasing software, but they do slow you down and add more potential for manual error or human error,” Irani said. “And so our goal with Feature Workflows was to essentially automate the entire process of a feature release.”</p><p><h2>Overhauling Experimentation</h2></p><p>This past June, the company also revised its Experimentation offering, she said. Led by <a href="https://www.linkedin.com/in/jamescfrost/">James Frost,</a> LaunchDarkly’s head of experimentation, the team did “a complete overhaul of our stats engine, they enhanced the integration path of our customers’ existing data sets and metrics,” Irani said. “They redesigned our UX and codified model and experimentation best practices into the product itself.”</p><p> </p><p>For instance, a new metric import API helps prevent the problem of multiple teams or users within a company using different tools for A/B and other experiments. It “significantly cuts down on manual duplicate work when importing metrics for experimentation,” said Irani. “So you can get set up faster.”</p><p> </p><p>Another addition to the Experimentation product is a sample ratio mismatch test, she said, so “you can be confident that all of your experiments are correctly allocating traffic to each variant.”</p><p> </p><p>These innovations, along with new capabilities to the company’s Core Flagging Platform, are in general availability. 
On the horizon — and now available through <a href="https://launchdarkly.com/EAP">LaunchDarkly’s early access program</a> — is Accelerate, which lets users track and visualize key engineering metrics, such as deployment frequency, release frequency, lead time for code changes, and flag coverage.</p><p> </p><p>“I'm sure you've caught on already,” Irani said, “but a few of these are <a href="https://thenewstack.io/googles-formula-for-elite-devops-performance/">DORA metrics,</a> which obviously are extremely critical to our users.”</p><p> </p><p>Check out the entire episode for more details on what’s new from LaunchDarkly and the problems that innovators in the feature management space still need to solve.</p>
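The "on/off toggle written in a conditional statement" idea above can be sketched in a few lines. This is a toy illustration under assumed names (the flag store, `is_enabled` and `checkout_flow` are hypothetical), not LaunchDarkly's SDK:

```python
# A feature flag is a named toggle consulted by a conditional at
# runtime, so behavior can change after code has been deployed.
# Hypothetical names; not LaunchDarkly's actual API.

FLAGS = {"new-checkout": True, "beta-search": False}

def is_enabled(name: str, default: bool = False) -> bool:
    """Look up a flag, falling back to a safe default if it is absent."""
    return FLAGS.get(name, default)

def checkout_flow() -> str:
    # Flipping "new-checkout" in the flag store switches the path
    # users take, without a redeploy.
    if is_enabled("new-checkout"):
        return "new checkout flow"
    return "legacy checkout flow"
```

A feature-management service layers targeting, gradual rollouts, approvals and audit trails on top of this lookup, but the conditional at the call site looks much the same.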
]]></description>
      <pubDate>Wed, 4 Jan 2023 21:07:21 +0000</pubDate>
      <author>podcasts@thenewstack.io (Launch Darkly, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/what-launchdarkly-learned-from-eating-its-own-dog-food-9WdNhBZH</link>
      <content:encoded><![CDATA[<p>Feature flags — the on/off toggles, written in conditional statements, that allow organizations greater control over the user experience once code has been deployed —  are proliferating and growing more complex, and demand robust feature management, said <a href="https://www.linkedin.com/in/karishmairani/">Karishma Irani,</a> head of product at LaunchDarkly, in this episode of The New Stack Makers.</p><p> </p><p>In a November survey by LaunchDarkly, which queried more than 1,000 DevOps professionals,  <a href="https://thenewstack.io/launchdarkly-feature-management-is-a-must-have/">69% of participants said that feature flags are “must-have, mission-critical and/or high priority”</a> for their organizations.</p><p> </p><p>“<a href="https://thenewstack.io/whats-the-future-of-feature-management-feature-flags/">Feature management,</a> we believe, is a modern practice that's becoming more and more common with companies that want to deploy more frequently, innovate faster, and just keep a healthy engineering team,” Irani said.</p><p> </p><p>The idea of feature management, Irani said, is to “maximize value while minimizing risk.”</p><p> </p><p>LaunchDarkly uses its own software, she said, and eating its own dog food, as the saying goes, has paid off in gaining insights into user needs.</p><p> </p><p>As part of LaunchDarkly’s <a href="https://launchdarkly.com/trajectory/2022/">virtual conference Trajectory</a> in November, Irani joined <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> features editor of The New Stack, for a wide-ranging conversation about the latest developments in feature management.</p><p> </p><p>This episode of Makers was sponsored by LaunchDarkly.</p><p><h2>Automating Approvals</h2></p><p>As an example of the benefits of having first-hand knowledge of how their company's products are used, Irani pointed to an internal project in mid-2022.</p><p> </p><p>When the company migrated from 
MongoDB to CockroachDB, it used new capabilities in its Feature Workflows product, which allow users to define a workflow that can schedule the gradual release of a feature flag for a future date and time, and automate approval requests.</p><p> </p><p>“All of these async processes around approvals schedules, they're critical to releasing software, but they do slow you down and add more potential for manual error or human error,” Irani said. “And so our goal with Feature Workflows was to essentially automate the entire process of a feature release.”</p><p><h2>Overhauling Experimentation</h2></p><p>This past June, the company also revised its Experimentation offering, she said. Led by <a href="https://www.linkedin.com/in/jamescfrost/">James Frost,</a> LaunchDarkly’s head of experimentation, the team did “a complete overhaul of our stats engine, they enhanced the integration path of our customers’ existing data sets and metrics,” Irani said. “They redesigned our UX and codified model and experimentation best practices into the product itself.”</p><p> </p><p>For instance, a new metric import API helps prevent the problem of multiple teams or users within a company using different tools for A/B and other experiments. It “significantly cuts down on manual duplicate work when importing metrics for experimentation,” said Irani. “So you can get set up faster.”</p><p> </p><p>Another addition to the Experimentation product is a sample ratio mismatch test, she said, so “you can be confident that all of your experiments are correctly allocating traffic to each variant.”</p><p> </p><p>These innovations, along with new capabilities to the company’s Core Flagging Platform, are in general availability. 
On the horizon — and now available through <a href="https://launchdarkly.com/EAP">LaunchDarkly’s early access program</a> — is Accelerate, which lets users track and visualize key engineering metrics, such as deployment frequency, release frequency, lead time for code changes, and flag coverage.</p><p> </p><p>“I'm sure you've caught on already,” Irani said, “but a few of these are <a href="https://thenewstack.io/googles-formula-for-elite-devops-performance/">DORA metrics,</a> which obviously are extremely critical to our users.”</p><p> </p><p>Check out the entire episode for more details on what’s new from LaunchDarkly and the problems that innovators in the feature management space still need to solve.</p>
]]></content:encoded>
      <enclosure length="27474645" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/344f47f0-8287-41c5-bfa8-502438db7bfd/audio/601dc18c-8923-4d7e-bf8a-e4435f760dd1/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What LaunchDarkly Learned from &apos;Eating Its Own Dog Food&apos;</itunes:title>
      <itunes:author>Launch Darkly, The New Stack</itunes:author>
      <itunes:duration>00:28:37</itunes:duration>
      <itunes:summary>Feature flags — the on/off toggles, written in conditional statements, that allow organizations greater control over the user experience once code has been deployed —  are proliferating and growing more complex, and demand robust feature management, said Karishma Irani, head of product at LaunchDarkly, in this episode of The New Stack Makers.

In a November survey by LaunchDarkly, which queried more than 1,000 DevOps professionals,  69% of participants said that feature flags are “must-have, mission-critical and/or high priority” for their organizations.

“Feature management, we believe, is a modern practice that&apos;s becoming more and more common with companies that want to deploy more frequently, innovate faster, and just keep a healthy engineering team,” Irani said.

The idea of feature management, Irani said, is to “maximize value while minimizing risk.”

LaunchDarkly uses its own software, she said, and eating its own dog food, as the saying goes, has paid off in gaining insights into user needs.

As part of LaunchDarkly’s virtual conference Trajectory in November, Irani joined Heather Joslyn, features editor of The New Stack, for a wide-ranging conversation about the latest developments in feature management.

This episode of Makers was sponsored by LaunchDarkly.

Karishma Irani - @karishma_irani  
Heather Joslyn - @ha_joslyn  
The New Stack - @thenewstack</itunes:summary>
      <itunes:subtitle>Feature flags — the on/off toggles, written in conditional statements, that allow organizations greater control over the user experience once code has been deployed —  are proliferating and growing more complex, and demand robust feature management, said Karishma Irani, head of product at LaunchDarkly, in this episode of The New Stack Makers.

In a November survey by LaunchDarkly, which queried more than 1,000 DevOps professionals,  69% of participants said that feature flags are “must-have, mission-critical and/or high priority” for their organizations.

“Feature management, we believe, is a modern practice that&apos;s becoming more and more common with companies that want to deploy more frequently, innovate faster, and just keep a healthy engineering team,” Irani said.

The idea of feature management, Irani said, is to “maximize value while minimizing risk.”

LaunchDarkly uses its own software, she said, and eating its own dog food, as the saying goes, has paid off in gaining insights into user needs.

As part of LaunchDarkly’s virtual conference Trajectory in November, Irani joined Heather Joslyn, features editor of The New Stack, for a wide-ranging conversation about the latest developments in feature management.

This episode of Makers was sponsored by LaunchDarkly.

Karishma Irani - @karishma_irani  
Heather Joslyn - @ha_joslyn  
The New Stack - @thenewstack</itunes:subtitle>
      <itunes:keywords>karishma irani, software developer, tech podcast, the new stack, heather joslyn, devops, devops podcast, tech, developer podcast, the new stack makers, software engineer, launch darkly</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1380</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e29765e6-1f0c-43b6-89c2-c9145549fae5</guid>
      <title>Hazelcast and the Benefits of Real Time Data</title>
      <description><![CDATA[<p>In this latest podcast from The New Stack, we interview <a href="https://www.linkedin.com/in/manishdevgan/">Manish Devgan</a>, chief product officer for Hazelcast, which offers a real time stream processing engine. This interview was recorded at <a href="https://thenewstack.io/kubeconcloudnativecon-2022-rolls-into-detroit/">KubeCon+CloudNativeCon</a>, held last October in Detroit.</p><p> </p><p>"'Real time' means different things to different people, but it's really a business term," Devgan explained. In the business world, time is money, and the more quickly you can make a decision, using the right data, the more quickly you can take action.</p><p> </p><p>Although we have many "batch-processing" systems, the data itself rarely comes in batches, Devgan said. "A lot of times I hear from customers that are using a batch system, because those are the things which are available at that time.</p><p> </p><p>But data is created in real time — sensors, your machines, espionage data, or even customer data — right when customers are transacting with you."</p><p> </p><h2>What is a Real Time Data Processing Engine?</h2><p> </p><p>A real time data processing engine can analyze data as it is coming in from the source. This is different from traditional approaches that store the data first, then analyze it later. Bank loans are one example of this approach.</p><p> </p><p>With a real time data processing engine in place, a bank can offer a loan to a customer using an automated teller machine (ATM) in real time, Devgan suggested. "As the data comes in, you can actually take action based on context of the data," he argued.</p><p> </p><p>Such a loan app may combine real-time data from the customer alongside historical data stored in a traditional database. 
Hazelcast can combine historical data with real time data to make workloads like this possible.</p><p> </p><p>In this interview, we also debated the <a href="https://thenewstack.io/where-elasticity-meets-open-source-and-how-to-adapt/">merits of Kafka</a>, the benefits of using a managed service rather than running an application in house, Hazelcast's users, and features in the latest release of the Hazelcast platform.</p><p> </p><p> </p><p> </p><p> </p><p> </p><p> </p><p> </p><p> </p>
]]></description>
      <pubDate>Wed, 28 Dec 2022 18:21:14 +0000</pubDate>
      <author>podcasts@thenewstack.io (hazelcast, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/hazelcast-and-the-benefits-of-real-time-data-RftCYsJr</link>
      <content:encoded><![CDATA[<p>In this latest podcast from The New Stack, we interview <a href="https://www.linkedin.com/in/manishdevgan/">Manish Devgan</a>, chief product officer for Hazelcast, which offers a real time stream processing engine. This interview was recorded at <a href="https://thenewstack.io/kubeconcloudnativecon-2022-rolls-into-detroit/">KubeCon+CloudNativeCon</a>, held last October in Detroit.</p><p> </p><p>"'Real time' means different things to different people, but it's really a business term," Devgan explained. In the business world, time is money, and the more quickly you can make a decision, using the right data, the more quickly you can take action.</p><p> </p><p>Although we have many "batch-processing" systems, the data itself rarely comes in batches, Devgan said. "A lot of times I hear from customers that are using a batch system, because those are the things which are available at that time.</p><p> </p><p>But data is created in real time sensors, your machines, espionage data, or even customer data — right when customers are transacting with you."</p><p> </p><h2>What is a Real Time Data Processing Engine?</h2><p> </p><p>A real time data processing engine can analyze data as it is coming in from the source. This is different from traditional approaches that store the data first, then analyze it later. Bank loans are an example of this traditional approach.</p><p> </p><p>With a real time data processing engine in place, a bank can offer a loan to a customer using an automated teller machine (ATM) in real time, Devgan suggested. "As the data comes in, you can actually take action based on context of the data," he argued.</p><p> </p><p>Such a loan app may combine real-time data from the customer with historical data stored in a traditional database. Hazelcast can combine historical data with real time data to make workloads like this possible.</p><p> </p><p>In this interview, we also debated the <a href="https://thenewstack.io/where-elasticity-meets-open-source-and-how-to-adapt/">merits of Kafka</a>, the benefits of using a managed service rather than running an application in-house, Hazelcast's users, and features in the latest release of the Hazelcast platform.</p>
]]></content:encoded>
      <enclosure length="14072408" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/e8013ae3-b9be-4f02-9f8f-db9266d7dfd9/audio/80c6f6b3-dd22-4add-ba54-8b25523c65e5/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Hazelcast and the Benefits of Real Time Data</itunes:title>
      <itunes:author>hazelcast, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/55755b7f-0493-4e7d-93ae-4ad00877fc40/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:14:31</itunes:duration>
      <itunes:summary>In this latest podcast from The New Stack, we interview Manish Devgan, chief product officer for Hazelcast, which offers a real time stream processing engine. This interview was recorded at KubeCon+CloudNativeCon, held last October in Detroit.

&quot;&apos;Real time&apos; means different things to different people, but it&apos;s really a business term,&quot; Devgan explained. In the business world, time is money, and the more quickly you can make a decision, using the right data, the more quickly one can take action.

Although we have many &quot;batch-processing&quot; systems, the data itself rarely comes in batches, Devgan said. &quot;A lot of times I hear from customers that are using a batch system, because those are the things which are available at that time.

But data is created in real time sensors, your machines, espionage data, or even customer data — right when customers are transacting with you.&quot;</itunes:summary>
      <itunes:subtitle>In this latest podcast from The New Stack, we interview Manish Devgan, chief product officer for Hazelcast, which offers a real time stream processing engine. This interview was recorded at KubeCon+CloudNativeCon, held last October in Detroit.

&quot;&apos;Real time&apos; means different things to different people, but it&apos;s really a business term,&quot; Devgan explained. In the business world, time is money, and the more quickly you can make a decision, using the right data, the more quickly one can take action.

Although we have many &quot;batch-processing&quot; systems, the data itself rarely comes in batches, Devgan said. &quot;A lot of times I hear from customers that are using a batch system, because those are the things which are available at that time.

But data is created in real time sensors, your machines, espionage data, or even customer data — right when customers are transacting with you.&quot;</itunes:subtitle>
      <itunes:keywords>kubecon 2022, software developer, joab jackson, tech podcast, the new stack, manish devgan, devops, devops podcast, tech, hazelcast, developer podcast, kubecon detroit, the new stack makers, software engineer, kubecon</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1379</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">02093445-cb8d-4cac-abe6-af35265e87ba</guid>
      <title>Hachyderm.io, from Side Project to 38,000+ Users and Counting</title>
      <description><![CDATA[<p>Back in April, <a href="https://www.linkedin.com/in/kris-nova/">Kris Nóva,</a> now principal engineer at GitHub, started <a href="https://thenewstack.io/build-your-own-decentralized-twitter-part-3-hello-mastodon/">creating a server on Mastodon</a> as a side project in her basement lab.</p><p> </p><p>Then in late October, Elon Musk bought Twitter for an eye-watering $44 billion, and began cutting thousands of jobs at the social media giant and making changes that alienated longtime users.</p><p> </p><p>And over the next few weeks, usage of Nóva’s hobby site, Hachyderm.io, exploded.</p><p> </p><p>“The server started very small,” she said on this episode of The New Stack Makers podcast. “And I think like, one of my friends turned into two of my friends turned into 10 of my friends turned into 20 colleagues, and it just so happens, a lot of them were big names in the tech industry. And now all of a sudden, I have 30,000 people I have to babysit.”</p><p> </p><p>Though the rate at which new users are joining Hachyderm has slowed down in recent days, Nóva said, it stood at <a href="https://grafana.hachyderm.io/public">more than 38,000 users as of Dec. 20.</a></p><p> </p><p>Hachyderm.io is still run by a handful of volunteers, who also handle content moderation. Nóva is now seeking nonprofit status for it with the U.S. 
Internal Revenue Service, with intentions of building a new organization around Hachyderm.</p><p> </p><p>This episode of Makers, hosted by <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> TNS features editor, recounts Hachyderm’s origins and the challenges involved in scaling it as Twitter users from the tech community gravitated to it.</p><p> </p><p>Nóva and Joslyn were joined by <a href="https://www.linkedin.com/in/gabemonroy/">Gabe Monroy,</a> chief product officer at DigitalOcean, which has helped Hachyderm cope with the technical demands of its growth spurt.</p><p><h2>HugOps and Solving Storage Issues</h2></p><p>Suddenly having a social media network to “babysit” brings numerous challenges, including the technical issues involved in a rapid scale up. Monroy and Nóva worked on Kubernetes projects when both were employed at Microsoft, “so we’re all about that horizontal distribution life.” But the Mastodon application’s structure proved confounding.</p><p> </p><p>“Here I am operating a Ruby on Rails monolith that's designed to be vertically scaled on a single piece of hardware,” Nóva said. “And we're trying to break that apart and run that horizontally across the rack behind me. So we got into a lot of trouble very early on by just taking the service itself and starting to decompose it into microservices.”</p><p> </p><p>Storage also rapidly became an issue. “We had some non-enterprise but consumer-grade SSDs. And we were doing on the order of millions of reads and writes per day, just keeping the Postgres database online. And that was causing cascading failures and cascading outages across our distributed footprint, just because our Postgres service couldn't keep up.”</p><p> </p><p><a href="https://www.digitalocean.com/blog/digitalocean-spaces-mastodon-hachyderm">DigitalOcean helped with the storage issues;</a> the site now uses a data center in Germany, whose servers DigitalOcean manages. 
(Previously, its servers had been <a href="https://community.hachyderm.io/blog/2022/12/03/leaving-the-basement/">living in Nóva’s basement lab.</a>)</p><p> </p><p>Monroy, a longtime friend of Nóva’s, was an early Hachyderm user and reached out when he noticed problems on the site, such as when he had difficulty posting videos and saw other people complaining about similar problems.</p><p> </p><p>“This is a ‘success failure’ in the making here, the scale of this is sort of overwhelming,” Monroy said. “So I just texted Nóva, ‘Hey, what's going on? Anything I could do to help?’</p><p> </p><p>“In the community, we like to talk about the concept of HugOps, right? When people are having issues on this stuff, you reach out, try and help. You give a hug. And so, that was all I did. Nóva is very crisp and clear: This is what I got going on. These are the issues. These are the areas where you could help.”</p><p><h2>Sustaining ‘the NPR of Social Media’</h2></p><p>One challenge in particular has nudged Nóva to seek nonprofit status: operating costs.</p><p> </p><p>“Right now, I'm able to just kind of like eat the cost myself,” she said. “I operate a Twitch stream, and we're taking the proceeds of that and putting it towards operating service.” But that, she acknowledges, won’t be sustainable as Hachyderm grows.</p><p> </p><p>“The whole goal of it, as far as I'm concerned, is to keep it as sustainable as possible,” Nóva said. “So that we're not having to offset the operating costs with ads or marketing or product marketing. We can just try to keep it as neutral and, frankly, boring as possible — the NPR of social media, if you could imagine such a thing.”</p><p> </p><p>Check out the full episode for more details on how Hachyderm is scaling and plans for its future, and Nóva and Monroy’s thoughts about the status of Twitter.</p><p> </p><p><hr /></p><p> </p><p><em>Feedback? Find me at <a href="https://hachyderm.io/@hajoslyn">@hajoslyn</a> on Hachyderm.io.</em></p>
]]></description>
      <pubDate>Thu, 22 Dec 2022 21:46:08 +0000</pubDate>
      <author>podcasts@thenewstack.io (hachyderm, digital ocean, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/hachyderm-o3E1i0LN</link>
      <content:encoded><![CDATA[<p>Back in April, <a href="https://www.linkedin.com/in/kris-nova/">Kris Nóva,</a> now principal engineer at GitHub, started <a href="https://thenewstack.io/build-your-own-decentralized-twitter-part-3-hello-mastodon/">creating a server on Mastodon</a> as a side project in her basement lab.</p><p> </p><p>Then in late October, Elon Musk bought Twitter for an eye-watering $44 billion, and began cutting thousands of jobs at the social media giant and making changes that alienated longtime users.</p><p> </p><p>And over the next few weeks, usage of Nóva’s hobby site, Hachyderm.io, exploded.</p><p> </p><p>“The server started very small,” she said on this episode of The New Stack Makers podcast. “And I think like, one of my friends turned into two of my friends turned into 10 of my friends turned into 20 colleagues, and it just so happens, a lot of them were big names in the tech industry. And now all of a sudden, I have 30,000 people I have to babysit.”</p><p> </p><p>Though the rate at which new users are joining Hachyderm has slowed down in recent days, Nóva said, it stood at <a href="https://grafana.hachyderm.io/public">more than 38,000 users as of Dec. 20.</a></p><p> </p><p>Hachyderm.io is still run by a handful of volunteers, who also handle content moderation. Nóva is now seeking nonprofit status for it with the U.S. 
Internal Revenue Service, with intentions of building a new organization around Hachyderm.</p><p> </p><p>This episode of Makers, hosted by <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> TNS features editor, recounts Hachyderm’s origins and the challenges involved in scaling it as Twitter users from the tech community gravitated to it.</p><p> </p><p>Nóva and Joslyn were joined by <a href="https://www.linkedin.com/in/gabemonroy/">Gabe Monroy,</a> chief product officer at DigitalOcean, which has helped Hachyderm cope with the technical demands of its growth spurt.</p><p><h2>HugOps and Solving Storage Issues</h2></p><p>Suddenly having a social media network to “babysit” brings numerous challenges, including the technical issues involved in a rapid scale up. Monroy and Nóva worked on Kubernetes projects when both were employed at Microsoft, “so we’re all about that horizontal distribution life.” But the Mastodon application’s structure proved confounding.</p><p> </p><p>“Here I am operating a Ruby on Rails monolith that's designed to be vertically scaled on a single piece of hardware,” Nóva said. “And we're trying to break that apart and run that horizontally across the rack behind me. So we got into a lot of trouble very early on by just taking the service itself and starting to decompose it into microservices.”</p><p> </p><p>Storage also rapidly became an issue. “We had some non-enterprise but consumer-grade SSDs. And we were doing on the order of millions of reads and writes per day, just keeping the Postgres database online. And that was causing cascading failures and cascading outages across our distributed footprint, just because our Postgres service couldn't keep up.”</p><p> </p><p><a href="https://www.digitalocean.com/blog/digitalocean-spaces-mastodon-hachyderm">DigitalOcean helped with the storage issues;</a> the site now uses a data center in Germany, whose servers DigitalOcean manages. 
(Previously, its servers had been <a href="https://community.hachyderm.io/blog/2022/12/03/leaving-the-basement/">living in Nóva’s basement lab.</a>)</p><p> </p><p>Monroy, a longtime friend of Nóva’s, was an early Hachyderm user and reached out when he noticed problems on the site, such as when he had difficulty posting videos and saw other people complaining about similar problems.</p><p> </p><p>“This is a ‘success failure’ in the making here, the scale of this is sort of overwhelming,” Monroy said. “So I just texted Nóva, ‘Hey, what's going on? Anything I could do to help?’</p><p> </p><p>“In the community, we like to talk about the concept of HugOps, right? When people are having issues on this stuff, you reach out, try and help. You give a hug. And so, that was all I did. Nóva is very crisp and clear: This is what I got going on. These are the issues. These are the areas where you could help.”</p><p><h2>Sustaining ‘the NPR of Social Media’</h2></p><p>One challenge in particular has nudged Nóva to seek nonprofit status: operating costs.</p><p> </p><p>“Right now, I'm able to just kind of like eat the cost myself,” she said. “I operate a Twitch stream, and we're taking the proceeds of that and putting it towards operating service.” But that, she acknowledges, won’t be sustainable as Hachyderm grows.</p><p> </p><p>“The whole goal of it, as far as I'm concerned, is to keep it as sustainable as possible,” Nóva said. “So that we're not having to offset the operating costs with ads or marketing or product marketing. We can just try to keep it as neutral and, frankly, boring as possible — the NPR of social media, if you could imagine such a thing.”</p><p> </p><p>Check out the full episode for more details on how Hachyderm is scaling and plans for its future, and Nóva and Monroy’s thoughts about the status of Twitter.</p><p> </p><p><hr /></p><p> </p><p><em>Feedback? Find me at <a href="https://hachyderm.io/@hajoslyn">@hajoslyn</a> on Hachyderm.io.</em></p>
]]></content:encoded>
      <enclosure length="25487248" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/86b321bd-a171-4b8e-a50f-8246af1460e1/audio/3585da5d-ebfa-4408-bd07-5fa923b03a0a/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Hachyderm.io, from Side Project to 38,000+ Users and Counting</itunes:title>
      <itunes:author>hachyderm, digital ocean, The New Stack</itunes:author>
      <itunes:duration>00:26:32</itunes:duration>
      <itunes:summary>Back in April, Kris Nóva, now principal engineer at GitHub, started creating a server on Mastodon as a side project in her basement lab.

Then in late October, Elon Musk bought Twitter for an eye-watering $44 billion, and began cutting thousands of jobs at the social media giant and making changes that alienated longtime users.

And over the next few weeks, usage of Nóva’s hobby site, Hachyderm.io, exploded.

“The server started very small,” she said on this episode of The New Stack Makers podcast. “And I think like, one of my friends turned into two of my friends turned into 10 of my friends turned into 20 colleagues, and it just so happens, a lot of them were big names in the tech industry. And now all of a sudden, I have 30,000 people I have to babysit.”

Though the rate at which new users are joining Hachyderm has slowed down in recent days, Nóva said, it stood at more than 38,000 users as of Dec. 20.

Hachyderm.io is still run by a handful of volunteers, who also handle content moderation. Nóva is now seeking nonprofit status for it with the U.S. Internal Revenue Service, with intentions of building a new organization around Hachyderm.

This episode of Makers, hosted by Heather Joslyn, TNS features editor, recounts Hachyderm’s origins and the challenges involved in scaling it as Twitter users from the tech community gravitated to it.

Nóva and Joslyn were joined by Gabe Monroy, chief product officer at DigitalOcean, which has helped Hachyderm cope with the technical demands of its growth spurt.</itunes:summary>
      <itunes:subtitle>Back in April, Kris Nóva, now principal engineer at GitHub, started creating a server on Mastodon as a side project in her basement lab.

Then in late October, Elon Musk bought Twitter for an eye-watering $44 billion, and began cutting thousands of jobs at the social media giant and making changes that alienated longtime users.

And over the next few weeks, usage of Nóva’s hobby site, Hachyderm.io, exploded.

“The server started very small,” she said on this episode of The New Stack Makers podcast. “And I think like, one of my friends turned into two of my friends turned into 10 of my friends turned into 20 colleagues, and it just so happens, a lot of them were big names in the tech industry. And now all of a sudden, I have 30,000 people I have to babysit.”

Though the rate at which new users are joining Hachyderm has slowed down in recent days, Nóva said, it stood at more than 38,000 users as of Dec. 20.

Hachyderm.io is still run by a handful of volunteers, who also handle content moderation. Nóva is now seeking nonprofit status for it with the U.S. Internal Revenue Service, with intentions of building a new organization around Hachyderm.

This episode of Makers, hosted by Heather Joslyn, TNS features editor, recounts Hachyderm’s origins and the challenges involved in scaling it as Twitter users from the tech community gravitated to it.

Nóva and Joslyn were joined by Gabe Monroy, chief product officer at DigitalOcean, which has helped Hachyderm cope with the technical demands of its growth spurt.</itunes:subtitle>
      <itunes:keywords>hachyderm, gabe monroy, digital ocean, software developer, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, kris nova, the new stack makers, software engineer, mastodon</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1378</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b0101268-95ea-4b9e-b937-c6670f1a294e</guid>
      <title>Automation for Cloud Optimization</title>
      <description><![CDATA[<p>During the pandemic, <a href="https://thenewstack.io/digital-transformation-during-the-pandemic/">many organizations sped up their move to the cloud</a> — without fully understanding the costs, both human and financial, they would pay for the convenience and scalability of a digital transformation.</p><p> </p><p>“They really didn’t have a baseline,” said <a href="https://www.linkedin.com/in/mekkacodes/">Mekka Williams,</a> principal engineer at Spot by NetApp, in this episode of The New Stack Makers podcast. “And so those first <a href="https://thenewstack.io/want-to-save-the-world-start-by-cutting-your-cloud-costs/">cloud bills</a>, I'm sure were shocking, because you don't get a cloud bill, when you run on your on-premises environment, or even your private cloud, where you've already paid the cost for the infrastructure that you're using.”</p><p> </p><p>What’s especially worrisome is that many of those costs are simply wasted, Williams said. “Most of the containerized applications running in Kubernetes clusters are running underutilized,” she said. “And anything that's underutilized in the cloud equates to waste. And if we want to be really lean and clean and use resources in a very efficient manner, we have to have really good cloud strategy in order to do that.”</p><p> </p><p>This episode of The New Stack Makers, hosted by Heather Joslyn, TNS features editor, focused on CloudOps, which in this case stands for “cloud operations.” (It can also stand for “cloud optimization,” but more about that later.)</p><p> </p><p>The conversation was sponsored by Spot by NetApp.</p><p> </p><h2>Automation for Cloud Optimization</h2><p> </p><p>Many organizations that moved quickly to the cloud during the dog days of the pandemic have begun to revisit the decisions they made and update their strategies, Williams said.</p><p> </p><p>“We see some organizations that are trying to modernize their applications further, to make better use of the services that are available in the cloud,” she said. “The cloud is getting more complex as they grow and mature in their journey.</p><p> </p><p>“And so they're looking for ways to simplify their operations. And as always keep their costs down. Keep things simple for their DevOps and SRE, to not incur additional technical debt, but still make the best use out of their cloud, wherever they are.”</p><p> </p><p>Automation holds the key to CloudOps — both definitions — according to Williams. For starters, it makes teams more efficient.</p><p> </p><p>“The less tasks that your workforce have to perform manually, the more time they have to spend focused on business logic and being innovative,” Williams said. “Automation also helps you with repeatability. And it's less error-prone, and it helps you standardize. Really good automation simplifies your environment greatly.”</p><p> </p><p>Automating repetitive tasks can also help prevent your <a href="https://thenewstack.io/experts-weigh-in-on-the-state-of-site-reliability-engineering/">site reliability engineers (SREs)</a> from <a href="https://thenewstack.io/how-to-recognize-recover-from-and-prevent-burnout/">burnout,</a> she said.</p><p> </p><p>Practicing “good data hygiene,” Williams said, also helps contain costs and reduce toil: “Making sure you're using the right tier of data, making sure you're not over-provisioned. And the type of storage you need, you don't need to pay top dollar for high-performing storage, if it's just backup data that doesn't get accessed that often.”</p><p> </p><p>Such practices are “good to know on-premises, but these are imperative to know when you're in the cloud,” she said, in order to reduce waste.</p><p> </p><p>During this episode, Williams pointed to solutions in the Spot by NetApp portfolio that use automation to help make the most of cloud infrastructure, such as its flagship product, Elastigroup, which takes advantage of excess capacity to scale workloads.</p><p> </p><p>In June, Spot by NetApp acquired Instaclustr, a solution for managing open source database and streaming technologies. The company recognizes the growing importance of open source for enterprises. “We're paying attention to trends for cloud applications,” Williams said, “and we're growing the portfolio to address the needs that are top of mind for those customers.”</p><p> </p><p>Check out the entire episode to learn more about CloudOps.</p>
]]></description>
      <pubDate>Tue, 20 Dec 2022 21:05:07 +0000</pubDate>
      <author>podcasts@thenewstack.io (Spot By NetApp, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/automation-for-cloud-optimization-sl9b5VAF</link>
      <content:encoded><![CDATA[<p>During the pandemic, <a href="https://thenewstack.io/digital-transformation-during-the-pandemic/">many organizations sped up their move to the cloud</a> — without fully understanding the costs, both human and financial, they would pay for the convenience and scalability of a digital transformation.</p><p> </p><p>“They really didn’t have a baseline,” said <a href="https://www.linkedin.com/in/mekkacodes/">Mekka Williams,</a> principal engineer at Spot by NetApp, in this episode of The New Stack Makers podcast. “And so those first <a href="https://thenewstack.io/want-to-save-the-world-start-by-cutting-your-cloud-costs/">cloud bills</a>, I'm sure were shocking, because you don't get a cloud bill, when you run on your on-premises environment, or even your private cloud, where you've already paid the cost for the infrastructure that you're using.”</p><p> </p><p>What’s especially worrisome is that many of those costs are simply wasted, Williams said. “Most of the containerized applications running in Kubernetes clusters are running underutilized,” she said. “And anything that's underutilized in the cloud equates to waste. And if we want to be really lean and clean and use resources in a very efficient manner, we have to have really good cloud strategy in order to do that.”</p><p> </p><p>This episode of The New Stack Makers, hosted by Heather Joslyn, TNS features editor, focused on CloudOps, which in this case stands for “cloud operations.” (It can also stand for “cloud optimization,” but more about that later.)</p><p> </p><p>The conversation was sponsored by Spot by NetApp.</p><p> </p><h2>Automation for Cloud Optimization</h2><p> </p><p>Many organizations that moved quickly to the cloud during the dog days of the pandemic have begun to revisit the decisions they made and update their strategies, Williams said.</p><p> </p><p>“We see some organizations that are trying to modernize their applications further, to make better use of the services that are available in the cloud,” she said. “The cloud is getting more complex as they grow and mature in their journey.</p><p> </p><p>“And so they're looking for ways to simplify their operations. And as always keep their costs down. Keep things simple for their DevOps and SRE, to not incur additional technical debt, but still make the best use out of their cloud, wherever they are.”</p><p> </p><p>Automation holds the key to CloudOps — both definitions — according to Williams. For starters, it makes teams more efficient.</p><p> </p><p>“The less tasks that your workforce have to perform manually, the more time they have to spend focused on business logic and being innovative,” Williams said. “Automation also helps you with repeatability. And it's less error-prone, and it helps you standardize. Really good automation simplifies your environment greatly.”</p><p> </p><p>Automating repetitive tasks can also help prevent your <a href="https://thenewstack.io/experts-weigh-in-on-the-state-of-site-reliability-engineering/">site reliability engineers (SREs)</a> from <a href="https://thenewstack.io/how-to-recognize-recover-from-and-prevent-burnout/">burnout,</a> she said.</p><p> </p><p>Practicing “good data hygiene,” Williams said, also helps contain costs and reduce toil: “Making sure you're using the right tier of data, making sure you're not over-provisioned. And the type of storage you need, you don't need to pay top dollar for high-performing storage, if it's just backup data that doesn't get accessed that often.”</p><p> </p><p>Such practices are “good to know on-premises, but these are imperative to know when you're in the cloud,” she said, in order to reduce waste.</p><p> </p><p>During this episode, Williams pointed to solutions in the Spot by NetApp portfolio that use automation to help make the most of cloud infrastructure, such as its flagship product, Elastigroup, which takes advantage of excess capacity to scale workloads.</p><p> </p><p>In June, Spot by NetApp acquired Instaclustr, a solution for managing open source database and streaming technologies. The company recognizes the growing importance of open source for enterprises. “We're paying attention to trends for cloud applications,” Williams said, “and we're growing the portfolio to address the needs that are top of mind for those customers.”</p><p> </p><p>Check out the entire episode to learn more about CloudOps.</p>
]]></content:encoded>
      <enclosure length="21879789" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/37054a6c-c1ec-455c-b01a-d7606d0da4db/audio/ec00fbbf-7cf9-41e2-889b-c565a1c8b3d7/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Automation for Cloud Optimization</itunes:title>
      <itunes:author>Spot By NetApp, The New Stack</itunes:author>
      <itunes:duration>00:22:47</itunes:duration>
      <itunes:summary>During the pandemic, many organizations sped up their move to the cloud — without fully understanding the costs, both human and financial, they would pay for the convenience and scalability of a digital transformation.

“They really didn’t have a baseline,” said Mekka Williams, principal engineer at Spot by NetApp, in this episode of The New Stack Makers podcast. “And so those first cloud bills, I&apos;m sure were shocking, because you don&apos;t get a cloud bill, when you run on your on-premises environment, or even your private cloud, where you&apos;ve already paid the cost for the infrastructure that you&apos;re using.”

What’s especially worrisome is that many of those costs are simply wasted, Williams said. “Most of the containerized applications running in Kubernetes clusters are running underutilized,” she said. “And anything that&apos;s underutilized in the cloud equates to waste. And if we want to be really lean and clean and use resources in a very efficient manner, we have to have really good cloud strategy in order to do that.”

This episode of The New Stack Makers, hosted by Heather Joslyn, TNS features editor, focused on CloudOps, which in this case stands for “cloud operations.” (It can also stand for “cloud optimization,” but more about that later.)

The conversation was sponsored by Spot by NetApp.</itunes:summary>
      <itunes:subtitle>During the pandemic, many organizations sped up their move to the cloud — without fully understanding the costs, both human and financial, they would pay for the convenience and scalability of a digital transformation.

“They really didn’t have a baseline,” said Mekka Williams, principal engineer at Spot by NetApp, in this episode of The New Stack Makers podcast. “And so those first cloud bills, I&apos;m sure, were shocking, because you don&apos;t get a cloud bill when you run on your on-premises environment, or even your private cloud, where you&apos;ve already paid the cost for the infrastructure that you&apos;re using.”

What’s especially worrisome is that many of those costs are simply wasted, Williams said. “Most of the containerized applications running in Kubernetes clusters are running underutilized,” she said. “And anything that&apos;s underutilized in the cloud equates to waste. And if we want to be really lean and clean and use resources in a very efficient manner, we have to have really good cloud strategy in order to do that.”

This episode of The New Stack Makers, hosted by Heather Joslyn, TNS features editor, focused on CloudOps, which in this case stands for “cloud operations.” (It can also stand for “cloud optimization,” but more about that later.)

The conversation was sponsored by Spot by NetApp.</itunes:subtitle>
      <itunes:keywords>software developer, netapp, tech podcast, the new stack, heather joslyn, devops, devops podcast, tech, developer podcast, spot by netapp, the new stack makers, software engineer, instaclustr, mekka williams</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1377</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">0037c6bb-3a62-44f5-b524-34357c0854b1</guid>
      <title>Redis Looks Beyond Cache Toward Everything Data</title>
      <description><![CDATA[<p><a href="https://redis.io/">Redis</a>, best known as a data cache or real-time data platform, is <a href="https://thenewstack.io/redis-is-not-just-a-cache/">evolving into much more</a>, <a href="https://www.linkedin.com/in/timehall">Tim Hall</a>, chief of product at the company told The New Stack in a recent TNS Makers podcast.</p><p> </p><p>Redis is an in-memory database or memory-first database, which means the data lands there and people are using us for both caching and persistence. However, these days, the company has a number of flexible data models, but one of the brand promises of Redis is developers can store the data as they're working with it. So as opposed to a SQL database where you might have to turn your data structures into columns and tables, you can actually store the data structures that you're working with directly into Redis, Hall said.</p><p> </p><p> </p><h2>Primary Database?</h2><p> </p><p>“About 40% of our customers today are using us as a primary database technology,” he said. “That may surprise some people if you're sort of a classic Redis user and you knew us from in-memory caching, you probably didn't realize we added a variety of mechanisms for persistence over the years.”</p><p> </p><p>Meanwhile, to store the data, Redis does store it on disk, sort of behind the scenes while keeping a copy in memory. So if there's any sort of failure, Redis can recover the data off of disk and replay it into memory and get you back up and running. That's a mechanism that has been around about half a decade now.</p><p> </p><p>Yet, Redis is playing what Hall called the ‘long game', particularly in terms of continuing to reach out to developers and showing them what the latest capabilities are.</p><p> </p><p>“If you look at the top 10 databases on the planet, they've all moved into the multimodal category. And Redis is no different from that perspective” Hall said. 
“So if you look at Oracle, it was traditionally a relational database, <a href="https://thenewstack.io/mongodb-6-0-brings-encrypted-queries-time-series-data-collection/">Mongo</a> is traditionally a JSON document store only, and obviously Redis is a key-value store. We've all moved down the field now. Now, why would we do that? We're all looking to simplify the developer’s world, right?”</p><p> </p><p>Yet each vendor is really trying to leverage its core differentiation and expand out from there. And the good news for Redis is that speed is its core differentiation.</p><p> </p><p>“Why would you want a slow data platform? You don't,” Hall said. “So the more that we can offer those extended capabilities for working with things like JSON, or we just launched a data structure called <a href="https://github.com/usmanm/redis-tdigest">t-digest</a> that people can use, and we've had support for <a href="https://redis.com/blog/bloom-filter/">Bloom filter</a>, which is a probabilistic data structure. With all of these things, we kind of expand our footprint; we're saying if you need speed, and reducing latency, and having high interactivity is your goal, Redis should be your starting point. If you want some esoteric edge case functionality where you need to manipulate JSON in some very strange way, you probably should go with Mongo. I probably won't support that for a long time. But if you're just working with the basic data structures, you need to be able to query, you need to be able to update your JSON document. 
Those straightforward use cases we support very, very well, and we support them at speed and scale.”</p><p> </p><h2>Customer View</h2><p> </p><p>As a Redis customer, <a href="https://www.linkedin.com/in/alainrussell/?originalSubdomain=nz">Alain Russell</a>, CEO at <a href="https://www.blackpepper.co.nz/">Blackpepper</a>, a digital e-commerce agency in Auckland, New Zealand, said his firm has undergone the same transition.</p><p> </p><p>“We started off with Redis as a cache, that helped us speed up traditional data that was slower than we wanted it,” he said. “And then we went down a cloud path a couple of years ago. Part of that migration included us becoming, you know, what's deemed as ‘cloud native.’ And we started using all of these different data stores and data structures and dealing with all of them is actually complicated. You know, and from a developer perspective, it can be a bit painful.”</p><p> </p><p>So Blackpepper started looking for how to make things simpler while also keeping its platform very fast, and it looked at the Redis Stack. “And honestly, it filled all of our needs in one platform. And we're kind of in this path at the moment, we were using the basics of it. And we're very early on in our journey, right? We're still learning how things work and how to use it properly. But we also have a big list of things that we're using other data stores for traditional data, and working out, okay, this will be something that we will migrate to, you know, because we use persistence heavily now, in Redis.”</p><p> </p><p>Twenty-year-old Blackpepper works predominantly with traditional retailers and helps them in their omni-channel journey.</p><p> </p><h2>Commercial vs. 
Open Source</h2><p> </p><p>Hall said there are three modes of access to the Redis technology: the Redis open source project; the Redis Stack, which the company recommends developers start with today; and Redis Enterprise Edition, which is available as software or in the cloud.</p><p> </p><p>“It's the most popular NoSQL database on the planet six years running,” Hall said. “And people love it because of its simplicity.”</p><p> </p><p>Meanwhile, it takes effort to maintain both the commercial product and the open source effort. Hall, who has worked at Hortonworks and InfluxData, said, “Not every open source company is the same in terms of how you make decisions about what lands in your commercial offering and what lands in open source and where the contributions come from and who's involved.”</p><p> </p><p>For instance, “if there was something that somebody wanted to contribute that was going to go against our commercial interest, we probably would not merge that,” Hall said.</p><p> </p><p>Redis was run by project founder <a href="https://github.com/antirez">Salvatore Sanfilippo</a> for many, many years, and he was the sole arbiter of what landed and what did not land in Redis itself. Then, over the last couple of years, Redis created a core steering committee. It's made up of one individual from AWS, one individual from Alibaba, and three Redis employees who look after the contributions coming in from the Redis open source community members who want to contribute.</p><p> </p><p>“And then we reconcile what we want from a commercial interest perspective, either upstream, or things that, frankly, may have been commoditized and that we want to push downstream into the open source offering,” Hall said. “And so the thing that you're asking about is sort of my core existential challenge all the time, that is figuring out where we're going from a commercial perspective. What do we want to land there first? 
And how can we create a conveyor belt of commercial opportunity that keeps us in business as a software company, creating differentiation as potential competitors show up? And then over time, making sure that those things that do become commoditized, or maybe are not as differentiating anymore, I want to release those to the open source community. But this upstream/downstream kind of challenge is something that we're constantly working through.”</p><p> </p><p>Blackpepper was an open source Redis user initially; the company started with <a href="https://thenewstack.io/how-pinterest-tuned-memcached-for-big-performance-gains/">Memcached</a> to speed up data, then migrated to Redis when it moved to the AWS cloud, Russell said.</p><p> </p><h2>Listen to the Podcast</h2><p> </p><p>The Redis TNS Makers podcast goes on to look at the use of AI/ML in the platform, the acquisition of RESP.app, the importance of <a href="https://thenewstack.io/building-large-scale-real-time-json-applications/">JSON</a> and RediSearch, and where Redis is headed in the future.</p>
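<p>For readers unfamiliar with the Bloom filter Hall mentions, the core idea fits in a few lines of Python. This is a hand-rolled sketch for illustration only, not Redis’s implementation; the class name, bit-array size and hashing scheme are invented for the example. The trade-off it demonstrates is the defining one: a Bloom filter never reports a false negative, but may occasionally report a false positive.</p>

```python
import hashlib

class BloomFilter:
    """Illustrative Bloom filter: set membership in a fixed-size bit
    array. No false negatives; occasional false positives."""
    def __init__(self, size: int = 1024, num_hashes: int = 3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item: str):
        # Derive several bit positions by salting a hash of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item: str) -> bool:
        # True means "possibly present"; False means "definitely absent".
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("user:42")
print(bf.might_contain("user:42"))  # True: added items are never missed
```

<p>A production implementation such as Redis’s adds sizing and error-rate controls; the sketch above only captures the membership-test idea.</p>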
]]></description>
      <pubDate>Wed, 14 Dec 2022 20:40:42 +0000</pubDate>
      <author>podcasts@thenewstack.io (Redis, BlackPepper, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/redis-looks-beyond-cache-toward-everything-data-npEsCxjA</link>
      <content:encoded><![CDATA[<p><a href="https://redis.io/">Redis</a>, best known as a data cache or real-time data platform, is <a href="https://thenewstack.io/redis-is-not-just-a-cache/">evolving into much more</a>, <a href="https://www.linkedin.com/in/timehall">Tim Hall</a>, chief of product at the company, told The New Stack in a recent TNS Makers podcast.</p><p> </p><p>Redis is an in-memory database or memory-first database, which means the data lands there and people are using it for both caching and persistence. These days the company offers a number of flexible data models, but one of the brand promises of Redis is that developers can store the data as they're working with it. So as opposed to a SQL database, where you might have to turn your data structures into columns and tables, you can store the data structures that you're working with directly in Redis, Hall said.</p><p> </p><h2>Primary Database?</h2><p> </p><p>“About 40% of our customers today are using us as a primary database technology,” he said. “That may surprise some people if you're sort of a classic Redis user and you knew us from in-memory caching, you probably didn't realize we added a variety of mechanisms for persistence over the years.”</p><p> </p><p>Meanwhile, Redis does store the data on disk, behind the scenes, while keeping a copy in memory. So if there's any sort of failure, Redis can recover the data off of disk, replay it into memory and get you back up and running. That's a mechanism that has been around for about half a decade now.</p><p> </p><p>Yet Redis is playing what Hall called the ‘long game,’ particularly in terms of continuing to reach out to developers and showing them what the latest capabilities are.</p><p> </p><p>“If you look at the top 10 databases on the planet, they've all moved into the multi-model category. And Redis is no different from that perspective,” Hall said. 
“So if you look at Oracle, it was traditionally a relational database, <a href="https://thenewstack.io/mongodb-6-0-brings-encrypted-queries-time-series-data-collection/">Mongo</a> is traditionally a JSON document store only, and obviously Redis is a key-value store. We've all moved down the field now. Now, why would we do that? We're all looking to simplify the developer’s world, right?”</p><p> </p><p>Yet each vendor is really trying to leverage its core differentiation and expand out from there. And the good news for Redis is that speed is its core differentiation.</p><p> </p><p>“Why would you want a slow data platform? You don't,” Hall said. “So the more that we can offer those extended capabilities for working with things like JSON, or we just launched a data structure called <a href="https://github.com/usmanm/redis-tdigest">t-digest</a> that people can use, and we've had support for <a href="https://redis.com/blog/bloom-filter/">Bloom filter</a>, which is a probabilistic data structure. With all of these things, we kind of expand our footprint; we're saying if you need speed, and reducing latency, and having high interactivity is your goal, Redis should be your starting point. If you want some esoteric edge case functionality where you need to manipulate JSON in some very strange way, you probably should go with Mongo. I probably won't support that for a long time. But if you're just working with the basic data structures, you need to be able to query, you need to be able to update your JSON document. 
Those straightforward use cases we support very, very well, and we support them at speed and scale.”</p><p> </p><h2>Customer View</h2><p> </p><p>As a Redis customer, <a href="https://www.linkedin.com/in/alainrussell/?originalSubdomain=nz">Alain Russell</a>, CEO at <a href="https://www.blackpepper.co.nz/">Blackpepper</a>, a digital e-commerce agency in Auckland, New Zealand, said his firm has undergone the same transition.</p><p> </p><p>“We started off with Redis as a cache, that helped us speed up traditional data that was slower than we wanted it,” he said. “And then we went down a cloud path a couple of years ago. Part of that migration included us becoming, you know, what's deemed as ‘cloud native.’ And we started using all of these different data stores and data structures and dealing with all of them is actually complicated. You know, and from a developer perspective, it can be a bit painful.”</p><p> </p><p>So Blackpepper started looking for how to make things simpler while also keeping its platform very fast, and it looked at the Redis Stack. “And honestly, it filled all of our needs in one platform. And we're kind of in this path at the moment, we were using the basics of it. And we're very early on in our journey, right? We're still learning how things work and how to use it properly. But we also have a big list of things that we're using other data stores for traditional data, and working out, okay, this will be something that we will migrate to, you know, because we use persistence heavily now, in Redis.”</p><p> </p><p>Twenty-year-old Blackpepper works predominantly with traditional retailers and helps them in their omni-channel journey.</p><p> </p><h2>Commercial vs. 
Open Source</h2><p> </p><p>Hall said there are three modes of access to the Redis technology: the Redis open source project; the Redis Stack, which the company recommends developers start with today; and Redis Enterprise Edition, which is available as software or in the cloud.</p><p> </p><p>“It's the most popular NoSQL database on the planet six years running,” Hall said. “And people love it because of its simplicity.”</p><p> </p><p>Meanwhile, it takes effort to maintain both the commercial product and the open source effort. Hall, who has worked at Hortonworks and InfluxData, said, “Not every open source company is the same in terms of how you make decisions about what lands in your commercial offering and what lands in open source and where the contributions come from and who's involved.”</p><p> </p><p>For instance, “if there was something that somebody wanted to contribute that was going to go against our commercial interest, we probably would not merge that,” Hall said.</p><p> </p><p>Redis was run by project founder <a href="https://github.com/antirez">Salvatore Sanfilippo</a> for many, many years, and he was the sole arbiter of what landed and what did not land in Redis itself. Then, over the last couple of years, Redis created a core steering committee. It's made up of one individual from AWS, one individual from Alibaba, and three Redis employees who look after the contributions coming in from the Redis open source community members who want to contribute.</p><p> </p><p>“And then we reconcile what we want from a commercial interest perspective, either upstream, or things that, frankly, may have been commoditized and that we want to push downstream into the open source offering,” Hall said. “And so the thing that you're asking about is sort of my core existential challenge all the time, that is figuring out where we're going from a commercial perspective. What do we want to land there first? 
And how can we create a conveyor belt of commercial opportunity that keeps us in business as a software company, creating differentiation as potential competitors show up? And then over time, making sure that those things that do become commoditized, or maybe are not as differentiating anymore, I want to release those to the open source community. But this upstream/downstream kind of challenge is something that we're constantly working through.”</p><p> </p><p>Blackpepper was an open source Redis user initially; the company started with <a href="https://thenewstack.io/how-pinterest-tuned-memcached-for-big-performance-gains/">Memcached</a> to speed up data, then migrated to Redis when it moved to the AWS cloud, Russell said.</p><p> </p><h2>Listen to the Podcast</h2><p> </p><p>The Redis TNS Makers podcast goes on to look at the use of AI/ML in the platform, the acquisition of RESP.app, the importance of <a href="https://thenewstack.io/building-large-scale-real-time-json-applications/">JSON</a> and RediSearch, and where Redis is headed in the future.</p>
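<p>For readers unfamiliar with the Bloom filter Hall mentions, the core idea fits in a few lines of Python. This is a hand-rolled sketch for illustration only, not Redis’s implementation; the class name, bit-array size and hashing scheme are invented for the example. The trade-off it demonstrates is the defining one: a Bloom filter never reports a false negative, but may occasionally report a false positive.</p>

```python
import hashlib

class BloomFilter:
    """Illustrative Bloom filter: set membership in a fixed-size bit
    array. No false negatives; occasional false positives."""
    def __init__(self, size: int = 1024, num_hashes: int = 3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item: str):
        # Derive several bit positions by salting a hash of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item: str) -> bool:
        # True means "possibly present"; False means "definitely absent".
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("user:42")
print(bf.might_contain("user:42"))  # True: added items are never missed
```

<p>A production implementation such as Redis’s adds sizing and error-rate controls; the sketch above only captures the membership-test idea.</p>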
]]></content:encoded>
      <enclosure length="39054986" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/9f05b8ac-1acd-4eb5-afd9-e1f2ece317c4/audio/cdf69bb6-37e1-4429-b3d7-d4e6493b6548/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Redis Looks Beyond Cache Toward Everything Data</itunes:title>
      <itunes:author>Redis, BlackPepper, The New Stack</itunes:author>
      <itunes:duration>00:40:40</itunes:duration>
      <itunes:summary>Redis, best known as a data cache or real-time data platform, is evolving into much more, Tim Hall, chief of product at the company, told The New Stack in a recent TNS Makers podcast.

Redis is an in-memory database or memory-first database, which means the data lands there and people are using it for both caching and persistence. These days the company offers a number of flexible data models, but one of the brand promises of Redis is that developers can store the data as they&apos;re working with it. So as opposed to a SQL database, where you might have to turn your data structures into columns and tables, you can store the data structures that you&apos;re working with directly in Redis, Hall said.</itunes:summary>
      <itunes:subtitle>Redis, best known as a data cache or real-time data platform, is evolving into much more, Tim Hall, chief of product at the company, told The New Stack in a recent TNS Makers podcast.

Redis is an in-memory database or memory-first database, which means the data lands there and people are using it for both caching and persistence. These days the company offers a number of flexible data models, but one of the brand promises of Redis is that developers can store the data as they&apos;re working with it. So as opposed to a SQL database, where you might have to turn your data structures into columns and tables, you can store the data structures that you&apos;re working with directly in Redis, Hall said.</itunes:subtitle>
      <itunes:keywords>alain russell, software developer, tech podcast, the new stack, tim hall, darryl taft, devops, devops podcast, tech, developer podcast, the new stack makers, software engineer, redis, blackpepper</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1376</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">64911af2-0c2d-4800-8134-7f1e9e3f6715</guid>
      <title>Couchbase’s Managed Database Services: Computing at the Edge</title>
      <description><![CDATA[<p>Let’s say you’re a passenger on a cruise ship. Floating in the middle of the ocean, far from reliable Wi-Fi, you wear a device that lets you into your room, discreetly tracks your movements from the bar to the dinner table to the pool, and delivers your drink order wherever you are. You can buy sunscreen or toothpaste or souvenirs in the ship’s stores without touching anything.</p><p> </p><p>If you’re a Carnival Cruise Lines passenger, this is reality right now, in part because of the company’s partnership with Couchbase, according to <a href="https://www.linkedin.com/in/magamble/">Mark Gamble</a>, product and solutions marketing director, Couchbase.</p><p> </p><p>Couchbase provides a cloud native NoSQL database technology that's used to power applications for customers including Carnival but also <a href="https://thenewstack.io/how-mongodbs-atlas-helped-amadeus-reengineer-a-crucial-app/">Amadeus,</a> <a href="https://thenewstack.io/how-comcast-was-no-longer-blinded-by-the-light/">Comcast,</a> <a href="https://thenewstack.io/how-linkedin-redesigned-its-17-year-old-monolithic-messaging-platform/">LinkedIn,</a> and Tesco.</p><p> </p><p>In Carnival’s case, Gamble said, “they run an edge data center on their ships to power their Ocean Medallion application, which they are super proud of. They use it a lot in their ads, because it provides a personalized service, which is a differentiator for them to their customers.”</p><p> </p><p>In this episode of The New Stack Makers, Gamble spoke to <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> features editor of TNS, about edge computing, 5G, and Couchbase Capella, its Database as a Service (DBaaS) offering for enterprises.</p><p> </p><p>This episode of Makers was sponsored by Couchbase.</p><h2>5G and Offline-First Apps</h2><p>The goal of edge computing, Gamble told our podcast audience, is to bring data and compute closer to the applications that consume it. 
This speeds up data processing, he said, “because data doesn't have to travel all the way to the cloud and back.” But it also has other benefits.</p><p> </p><p>“This serves to make applications more reliable, because local data processing sort of removes internet slowness and outages from the equation,” he said.</p><p> </p><p>The innovation of 5G networks has also had a big impact on reducing latency and increasing uptime, Gamble said.</p><p> </p><p>“To compare with 4G, things like the average round trip data travel time between the device and the cell tower is like 15 milliseconds. And with 5G, that latency drops to like two milliseconds. And 5G can support, they say, a million devices within a third of a mile radius, way more than what's possible with 4G.”</p><p> </p><p>But 5G, Gamble said, “really requires edge computing to realize its full potential.” Increasingly, he said, Couchbase hears interest from its customers in building “offline-first” applications, which can run even in Wi-Fi dead zones.</p><p> </p><p>The use cases, he said, are everywhere: “When I pass a fast food restaurant, it's starting to become more common, where you'll see that, instead of just a box you're talking to, there's a person holding a tablet, and they walk down the line, and they're taking orders. And as they come closer to the restaurant, it syncs up with the kitchen. They find that just a better, more efficient way to serve customers. 
And so it becomes a competitive differentiator for them.”</p><p> </p><p>As part of Couchbase’s Capella product, the company recently announced Capella App Services, a new capability for mobile developers: a fully managed backend designed for mobile, Internet of Things (IoT) and edge applications.</p><p> </p><p>“Developers use it to access and sync data between the Database as a Service and their edge devices, as well as it handles authenticating and managing mobile and edge app users,” he said.</p><p> </p><p>Used in conjunction with Couchbase Lite, a lightweight, embedded NoSQL database used with mobile and IoT devices, Capella App Services synchronizes the data between backend and edge devices.</p><p> </p><p>Even for workers in remote areas, “eventually, you have to make sure that data updates are shared with the rest of the ecosystem,” Gamble said. “And that's what App Services is meant to do, as connectivity allows — so during network disruptions in areas with no internet, apps will still continue to operate.”</p><p> </p><p>Check out the rest of the conversation to learn more about edge computing and the challenges Gamble thinks still need to be addressed in that space.</p>
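<p>The “offline-first” behavior Gamble describes can be illustrated generically: writes always land in a local store so the app keeps working, and a queue of pending changes is flushed to the backend once connectivity returns. The class and method names below are hypothetical, invented for this sketch; they are not Couchbase Lite’s or Capella App Services’ actual API.</p>

```python
# Generic offline-first sketch: local-first writes plus deferred sync.
# All names here are hypothetical; this is not Couchbase's API.
class OfflineFirstStore:
    def __init__(self, backend: dict):
        self.local = {}       # always-available local copy (the edge device)
        self.pending = []     # writes not yet pushed to the backend
        self.backend = backend

    def write(self, key, value):
        self.local[key] = value           # app works with no connectivity
        self.pending.append((key, value))

    def sync(self):
        # Called when the network is back: flush queued writes in order.
        for key, value in self.pending:
            self.backend[key] = value
        self.pending.clear()

cloud = {}
store = OfflineFirstStore(cloud)
store.write("order:1", "mojito")  # accepted immediately, even offline
store.sync()                      # connectivity restored; backend catches up
print(cloud)                      # {'order:1': 'mojito'}
```

<p>Real sync engines also handle conflicts, deletes and authentication; the sketch only shows why local writes keep an app usable during outages.</p>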
]]></description>
      <pubDate>Wed, 7 Dec 2022 19:35:25 +0000</pubDate>
      <author>podcasts@thenewstack.io (Couchbase, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/couchbases-managed-database-services-computing-at-the-edge-TOXBRNyJ</link>
      <content:encoded><![CDATA[<p>Let’s say you’re a passenger on a cruise ship. Floating in the middle of the ocean, far from reliable Wi-Fi, you wear a device that lets you into your room, discreetly tracks your movements from the bar to the dinner table to the pool, and delivers your drink order wherever you are. You can buy sunscreen or toothpaste or souvenirs in the ship’s stores without touching anything.</p><p> </p><p>If you’re a Carnival Cruise Lines passenger, this is reality right now, in part because of the company’s partnership with Couchbase, according to <a href="https://www.linkedin.com/in/magamble/">Mark Gamble</a>, product and solutions marketing director, Couchbase.</p><p> </p><p>Couchbase provides a cloud native NoSQL database technology that's used to power applications for customers including Carnival but also <a href="https://thenewstack.io/how-mongodbs-atlas-helped-amadeus-reengineer-a-crucial-app/">Amadeus,</a> <a href="https://thenewstack.io/how-comcast-was-no-longer-blinded-by-the-light/">Comcast,</a> <a href="https://thenewstack.io/how-linkedin-redesigned-its-17-year-old-monolithic-messaging-platform/">LinkedIn,</a> and Tesco.</p><p> </p><p>In Carnival’s case, Gamble said, “they run an edge data center on their ships to power their Ocean Medallion application, which they are super proud of. They use it a lot in their ads, because it provides a personalized service, which is a differentiator for them to their customers.”</p><p> </p><p>In this episode of The New Stack Makers, Gamble spoke to <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> features editor of TNS, about edge computing, 5G, and Couchbase Capella, its Database as a Service (DBaaS) offering for enterprises.</p><p> </p><p>This episode of Makers was sponsored by Couchbase.</p><h2>5G and Offline-First Apps</h2><p>The goal of edge computing, Gamble told our podcast audience, is to bring data and compute closer to the applications that consume it. 
This speeds up data processing, he said, “because data doesn't have to travel all the way to the cloud and back.” But it also has other benefits.</p><p> </p><p>“This serves to make applications more reliable, because local data processing sort of removes internet slowness and outages from the equation,” he said.</p><p> </p><p>The innovation of 5G networks has also had a big impact on reducing latency and increasing uptime, Gamble said.</p><p> </p><p>“To compare with 4G, things like the average round trip data travel time between the device and the cell tower is like 15 milliseconds. And with 5G, that latency drops to like two milliseconds. And 5G can support, they say, a million devices within a third of a mile radius, way more than what's possible with 4G.”</p><p> </p><p>But 5G, Gamble said, “really requires edge computing to realize its full potential.” Increasingly, he said, Couchbase hears interest from its customers in building “offline-first” applications, which can run even in Wi-Fi dead zones.</p><p> </p><p>The use cases, he said, are everywhere: “When I pass a fast food restaurant, it's starting to become more common, where you'll see that, instead of just a box you're talking to, there's a person holding a tablet, and they walk down the line, and they're taking orders. And as they come closer to the restaurant, it syncs up with the kitchen. They find that just a better, more efficient way to serve customers. 
And so it becomes a competitive differentiator for them.”</p><p> </p><p>As part of Couchbase’s Capella product, the company recently announced Capella App Services, a new capability for mobile developers: a fully managed backend designed for mobile, Internet of Things (IoT) and edge applications.</p><p> </p><p>“Developers use it to access and sync data between the Database as a Service and their edge devices, as well as it handles authenticating and managing mobile and edge app users,” he said.</p><p> </p><p>Used in conjunction with Couchbase Lite, a lightweight, embedded NoSQL database used with mobile and IoT devices, Capella App Services synchronizes the data between backend and edge devices.</p><p> </p><p>Even for workers in remote areas, “eventually, you have to make sure that data updates are shared with the rest of the ecosystem,” Gamble said. “And that's what App Services is meant to do, as connectivity allows — so during network disruptions in areas with no internet, apps will still continue to operate.”</p><p> </p><p>Check out the rest of the conversation to learn more about edge computing and the challenges Gamble thinks still need to be addressed in that space.</p>
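<p>The “offline-first” behavior Gamble describes can be illustrated generically: writes always land in a local store so the app keeps working, and a queue of pending changes is flushed to the backend once connectivity returns. The class and method names below are hypothetical, invented for this sketch; they are not Couchbase Lite’s or Capella App Services’ actual API.</p>

```python
# Generic offline-first sketch: local-first writes plus deferred sync.
# All names here are hypothetical; this is not Couchbase's API.
class OfflineFirstStore:
    def __init__(self, backend: dict):
        self.local = {}       # always-available local copy (the edge device)
        self.pending = []     # writes not yet pushed to the backend
        self.backend = backend

    def write(self, key, value):
        self.local[key] = value           # app works with no connectivity
        self.pending.append((key, value))

    def sync(self):
        # Called when the network is back: flush queued writes in order.
        for key, value in self.pending:
            self.backend[key] = value
        self.pending.clear()

cloud = {}
store = OfflineFirstStore(cloud)
store.write("order:1", "mojito")  # accepted immediately, even offline
store.sync()                      # connectivity restored; backend catches up
print(cloud)                      # {'order:1': 'mojito'}
```

<p>Real sync engines also handle conflicts, deletes and authentication; the sketch only shows why local writes keep an app usable during outages.</p>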
]]></content:encoded>
      <enclosure length="24739048" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/eb13c06c-323a-49a1-8ad2-e2618390eb5d/audio/f9993a47-7f6e-4fc5-9272-15b126ac5c04/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Couchbase’s Managed Database Services: Computing at the Edge</itunes:title>
      <itunes:author>Couchbase, The New Stack</itunes:author>
      <itunes:duration>00:25:46</itunes:duration>
      <itunes:summary>Let’s say you’re a passenger on a cruise ship. Floating in the middle of the ocean, far from reliable Wi-Fi, you wear a device that lets you into your room, discreetly tracks your movements from the bar to the dinner table to the pool, and delivers your drink order wherever you are. You can buy sunscreen or toothpaste or souvenirs in the ship’s stores without touching anything.

If you’re a Carnival Cruise Lines passenger, this is reality right now, in part because of the company’s partnership with Couchbase, according to Mark Gamble, product and solutions marketing director, Couchbase.

Couchbase provides cloud native NoSQL database technology that&apos;s used to power applications for customers including Carnival, as well as Amadeus, Comcast, LinkedIn and Tesco.

In Carnival’s case, Gamble said, “they run an edge data center on their ships to power their Ocean Medallion application, which they are super proud of. They use it a lot in their ads, because it provides a personalized service, which is a differentiator for them to their customers.”

In this episode of The New Stack Makers, Gamble spoke to Heather Joslyn, features editor of TNS, about edge computing, 5G, and Couchbase Capella, its Database as a Service (DBaaS) offering for enterprises.

This episode of Makers was sponsored by Couchbase.</itunes:summary>
      <itunes:subtitle>Let’s say you’re a passenger on a cruise ship. Floating in the middle of the ocean, far from reliable Wi-Fi, you wear a device that lets you into your room, that discreetly tracks your move from the bar to the dinner table to the pool and delivers your drink order wherever you are. You can buy sunscreen or toothpaste or souvenirs in the ship’s stores without touching anything.

If you’re a Carnival Cruise Lines passenger, this is reality right now, in part because of the company’s partnership with Couchbase, according to Mark Gamble, product and solutions marketing director, Couchbase.

Couchbase provides cloud native NoSQL database technology that&apos;s used to power applications for customers including Carnival, as well as Amadeus, Comcast, LinkedIn and Tesco.

In Carnival’s case, Gamble said, “they run an edge data center on their ships to power their Ocean Medallion application, which they are super proud of. They use it a lot in their ads, because it provides a personalized service, which is a differentiator for them to their customers.”

In this episode of The New Stack Makers, Gamble spoke to Heather Joslyn, features editor of TNS, about edge computing, 5G, and Couchbase Capella, its Database as a Service (DBaaS) offering for enterprises.

This episode of Makers was sponsored by Couchbase.</itunes:subtitle>
      <itunes:keywords>database services, mark gamble, software developer, tech podcast, the new stack, heather joslyn, devops, devops podcast, edge iot, tech, developer podcast, the new stack makers, software engineer, iot, couchbase</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1375</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">71b01e53-df1f-4e2f-b4db-74deb79529f3</guid>
      <title>Open Source Underpins A Home Furnishings Provider’s Global Ambitions</title>
<description><![CDATA[<p>Wayfair describes itself as “the destination for all things home: helping everyone, anywhere create their feeling of home.” It provides an online platform to acquire home furniture, outdoor decor and other furnishings. It also supports its suppliers so they can use the platform to sell their home goods, explained <a href="https://de.linkedin.com/in/natalivlatko">Natali Vlatko,</a> global lead, open source program office (OSPO) and senior software engineering manager at Wayfair, the featured guest in Detroit during <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/">KubeCon + CloudNativeCon North America 2022</a>.</p><p> </p><p>“It takes a lot of technical work behind the scenes to kind of get that going,” Vlatko said. This is especially true as Wayfair scales its operations worldwide. The infrastructure must be highly distributed, relying on containerization, microservices, Kubernetes and, especially, open source to get the job done.</p><p> </p><p>“We have technologists throughout the world, in North America and throughout Europe as well,” Vlatko said. “And we want to make sure that we are utilizing cloud native and open source, not just as technologies that fuel our business, but also as the ways that are great for us to work in now.”</p><p> </p><p>Open source has served as a “great avenue” for creating and offering technical services, and to accomplish that, Vlatko amassed the requisite talent, she said: a small team of engineers focused on platform work, advocacy, community management and, internally, on license compliance.</p><p> </p><p>About five years ago, when Vlatko joined Wayfair, the company had yet to go “full tilt into going all cloud native,” Vlatko said. Wayfair had a hybrid mix of on-premises and cloud infrastructure. 
After decoupling from a monolith into a microservices architecture, “that journey really began where we understood the really great benefits of microservices and got to a point where we thought, ‘okay, this hybrid model for us actually would benefit our microservices being fully in the cloud,’” Vlatko said. In late 2020, Wayfair made the decision to “get out of the data centers” and shift operations to the cloud, a move completed in October, Vlatko said. </p><p> </p><p>The company culture is such that engineers have room to experiment without major fear of failure by doing a lot of development work in a sandbox environment. “We've been able to create environments that are close to our production environments so that experimentation in sandboxes can occur. Folks can learn as they go without actually fearing failure or fearing a mistake,” Vlatko said. “So, I think experimentation is a really important aspect of our own learning and growth for cloud native. Also, coming to great events like KubeCon + CloudNativeCon and other events [has been helpful]. We're hearing from other companies who've done the same journey and process and are learning from the use cases.”</p>
]]></description>
      <pubDate>Thu, 1 Dec 2022 21:06:39 +0000</pubDate>
      <author>podcasts@thenewstack.io (Kubecon, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/open-source-underspins-a-home-furnishings-providers-global-ambitions-xs720voq</link>
<content:encoded><![CDATA[<p>Wayfair describes itself as “the destination for all things home: helping everyone, anywhere create their feeling of home.” It provides an online platform to acquire home furniture, outdoor decor and other furnishings. It also supports its suppliers so they can use the platform to sell their home goods, explained <a href="https://de.linkedin.com/in/natalivlatko">Natali Vlatko,</a> global lead, open source program office (OSPO) and senior software engineering manager at Wayfair, the featured guest in Detroit during <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/">KubeCon + CloudNativeCon North America 2022</a>.</p><p> </p><p>“It takes a lot of technical work behind the scenes to kind of get that going,” Vlatko said. This is especially true as Wayfair scales its operations worldwide. The infrastructure must be highly distributed, relying on containerization, microservices, Kubernetes and, especially, open source to get the job done.</p><p> </p><p>“We have technologists throughout the world, in North America and throughout Europe as well,” Vlatko said. “And we want to make sure that we are utilizing cloud native and open source, not just as technologies that fuel our business, but also as the ways that are great for us to work in now.”</p><p> </p><p>Open source has served as a “great avenue” for creating and offering technical services, and to accomplish that, Vlatko amassed the requisite talent, she said: a small team of engineers focused on platform work, advocacy, community management and, internally, on license compliance.</p><p> </p><p>About five years ago, when Vlatko joined Wayfair, the company had yet to go “full tilt into going all cloud native,” Vlatko said. Wayfair had a hybrid mix of on-premises and cloud infrastructure. 
After decoupling from a monolith into a microservices architecture, “that journey really began where we understood the really great benefits of microservices and got to a point where we thought, ‘okay, this hybrid model for us actually would benefit our microservices being fully in the cloud,’” Vlatko said. In late 2020, Wayfair made the decision to “get out of the data centers” and shift operations to the cloud, a move completed in October, Vlatko said. </p><p> </p><p>The company culture is such that engineers have room to experiment without major fear of failure by doing a lot of development work in a sandbox environment. “We've been able to create environments that are close to our production environments so that experimentation in sandboxes can occur. Folks can learn as they go without actually fearing failure or fearing a mistake,” Vlatko said. “So, I think experimentation is a really important aspect of our own learning and growth for cloud native. Also, coming to great events like KubeCon + CloudNativeCon and other events [has been helpful]. We're hearing from other companies who've done the same journey and process and are learning from the use cases.”</p>
]]></content:encoded>
      <enclosure length="15416050" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/e1c2bc5a-ae1f-4e06-8c81-2fab297dbdc8/audio/408d2ccc-5ccd-44e5-892e-239a24572270/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Open Source Underpins A Home Furnishings Provider’s Global Ambitions</itunes:title>
      <itunes:author>Kubecon, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/957e7858-0826-4133-8bd3-86505458dffb/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:16:03</itunes:duration>
<itunes:summary>Wayfair describes itself as “the destination for all things home: helping everyone, anywhere create their feeling of home.” It provides an online platform to acquire home furniture, outdoor decor and other furnishings. It also supports its suppliers so they can use the platform to sell their home goods, explained Natali Vlatko, global lead, open source program office (OSPO) and senior software engineering manager at Wayfair, the featured guest in Detroit during KubeCon + CloudNativeCon North America 2022.</itunes:summary>
<itunes:subtitle>Wayfair describes itself as “the destination for all things home: helping everyone, anywhere create their feeling of home.” It provides an online platform to acquire home furniture, outdoor decor and other furnishings. It also supports its suppliers so they can use the platform to sell their home goods, explained Natali Vlatko, global lead, open source program office (OSPO) and senior software engineering manager at Wayfair, the featured guest in Detroit during KubeCon + CloudNativeCon North America 2022.</itunes:subtitle>
      <itunes:keywords>cloud native computing foundation, kubecon 2022, software developer, tech podcast, the new stack, devops, bruce gain, natali vlatko, devops podcast, tech, developer podcast, the new stack makers, software engineer, kubecon</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1374</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">74f98de1-587c-4062-ae38-54974a4ef97e</guid>
      <title>ML Can Prevent Getting Burned For Kubernetes Provisioning</title>
<description><![CDATA[<p><span style="font-weight: 400;">In the rush to create, provision and manage Kubernetes, proper resource provisioning is often left out. According to StormForge, a company spending, for example, $1 million a month on cloud computing resources is likely wasting $6 million a year on Kubernetes resources that sit unused. The reasons are manifold: DevOps teams tend to estimate too conservatively or too aggressively, or simply overspend on resource provisioning. In this podcast with StormForge’s </span><a href="https://www.linkedin.com/in/yasminrajabi"><span style="font-weight: 400;">Yasmin Rajabi,</span></a><span style="font-weight: 400;"> vice president of product management, and </span><a href="https://www.linkedin.com/in/bergstrompatrick"><span style="font-weight: 400;">Patrick Bergstrom,</span></a><span style="font-weight: 400;"> CTO, we look at how to properly provision Kubernetes resources and the associated challenges. The podcast was recorded live in Detroit during</span><a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/"><span style="font-weight: 400;"> KubeCon + CloudNativeCon North America 2022.</span></a></p><p> </p><p><span style="font-weight: 400;">Almost ironically, the most commonly used Kubernetes resources can complicate the ability to optimize resources for applications. The processes typically involve Kubernetes resource requests and limits, and predicting how the resources might impact quality of service for pods. Developers deploying an application on Kubernetes often need to set CPU requests, memory requests and other resource limits. “They are usually like, ‘I don't know — whatever was there before or whatever the default is,’” Rajabi said. “They are in the dark.”</span></p><p> </p><p><iframe width="560" height="315" src="https://www.youtube.com/embed/k9cUk8kBXKM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></p><p> </p><p><span style="font-weight: 400;">Sometimes, developers might use their favorite observability tool and say, “we look where the max is, and then take a guess,” Rajabi said. 
“The challenge is, if you start from there when you start to scale that out — especially for organizations that are using horizontal scaling with Kubernetes — is that then you're taking that problem and you're just amplifying it everywhere,” Rajabi said. “And so, when you've hit that complexity at scale, taking a second to look back and say, ‘how do we fix this?’ you don't want to just arbitrarily go reduce resources, because you have to look at the trade-off of how that impacts your reliability.”</span></p><p> </p><p><span style="font-weight: 400;">The process then becomes very hit or miss. “That's where it becomes really complex, when there are so many settings across all those environments, all those namespaces,” Rajabi said. “It's almost a problem that can only be solved by machine learning, which makes it very interesting.”</span></p><p> </p><p><span style="font-weight: 400;">But organizations that don't automate the optimization of Kubernetes deployments and management often learn the hard way, as many resources — and costs — go to waste. “It's one of those things that becomes a bigger and bigger challenge, the more you grow as an organization,” Bergstrom said. Many StormForge customers are deploying into thousands of namespaces and thousands of workloads. “You are suddenly trying to manage each workload individually to make sure it has the resources and the memory that it needs,” Bergstrom said. “It becomes a bigger and bigger challenge.”</span></p><p> </p><p><span style="font-weight: 400;">The process should actually be pain-free when ML is properly implemented. Through StormForge’s partnership with </span><a href="https://www.datadoghq.com/"><span style="font-weight: 400;">Datadog</span></a><span style="font-weight: 400;">, it is possible to apply ML to collected historical data, Bergstrom explained. 
“Then, within just hours of us deploying our algorithm into your environment, we have machine learning that's used two to three weeks’ worth of data to train that can then automatically set the correct resources for your application. This is because we know what the application is actually using,” Bergstrom said. “We can predict the patterns and we know what it needs in order to be successful.”</span></p>
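<p>As a rough illustration of the rightsizing idea Bergstrom describes (deriving resource requests from weeks of observed usage), one naive, hypothetical approach picks a high percentile of historical consumption and adds headroom. StormForge's actual machine learning goes well beyond this sketch:</p>

```python
# Simplified illustration of usage-based rightsizing: derive a resource
# request from observed utilization percentiles plus headroom. This is a
# hypothetical sketch of the general idea, not StormForge's algorithm.

def recommend_request(usage_samples, percentile=90, headroom=1.15):
    """Recommend a resource request from historical usage samples.

    usage_samples: observed CPU (millicores) or memory (MiB) readings.
    percentile: cover this fraction of observed demand.
    headroom: safety multiplier to absorb spikes.
    """
    if not usage_samples:
        raise ValueError("need at least one usage sample")
    ordered = sorted(usage_samples)
    # nearest-rank percentile: one outlier spike no longer sets the request
    rank = max(0, int(len(ordered) * percentile / 100) - 1)
    return round(ordered[rank] * headroom)

# Fictional CPU readings in millicores, including one 480m spike:
cpu_usage = [120, 135, 150, 140, 480, 160, 155, 130, 145, 150]
print(recommend_request(cpu_usage))  # → 184
```

<p>Sizing to a percentile rather than the maximum is what avoids the "look where the max is, and then take a guess" overprovisioning Rajabi describes.</p>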
]]></description>
      <pubDate>Wed, 30 Nov 2022 20:50:11 +0000</pubDate>
      <author>podcasts@thenewstack.io (stormforge, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/ml-canprevent-getting-burned-for-kubernetes-provisioning-QJFb_3nu</link>
<content:encoded><![CDATA[<p><span style="font-weight: 400;">In the rush to create, provision and manage Kubernetes, proper resource provisioning is often left out. According to StormForge, a company spending, for example, $1 million a month on cloud computing resources is likely wasting $6 million a year on Kubernetes resources that sit unused. The reasons are manifold: DevOps teams tend to estimate too conservatively or too aggressively, or simply overspend on resource provisioning. In this podcast with StormForge’s </span><a href="https://www.linkedin.com/in/yasminrajabi"><span style="font-weight: 400;">Yasmin Rajabi,</span></a><span style="font-weight: 400;"> vice president of product management, and </span><a href="https://www.linkedin.com/in/bergstrompatrick"><span style="font-weight: 400;">Patrick Bergstrom,</span></a><span style="font-weight: 400;"> CTO, we look at how to properly provision Kubernetes resources and the associated challenges. The podcast was recorded live in Detroit during</span><a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/"><span style="font-weight: 400;"> KubeCon + CloudNativeCon North America 2022.</span></a></p><p> </p><p><span style="font-weight: 400;">Almost ironically, the most commonly used Kubernetes resources can complicate the ability to optimize resources for applications. The processes typically involve Kubernetes resource requests and limits, and predicting how the resources might impact quality of service for pods. Developers deploying an application on Kubernetes often need to set CPU requests, memory requests and other resource limits. “They are usually like, ‘I don't know — whatever was there before or whatever the default is,’” Rajabi said. “They are in the dark.”</span></p><p> </p><p><iframe width="560" height="315" src="https://www.youtube.com/embed/k9cUk8kBXKM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></p><p> </p><p><span style="font-weight: 400;">Sometimes, developers might use their favorite observability tool and say, “we look where the max is, and then take a guess,” Rajabi said. 
“The challenge is, if you start from there when you start to scale that out — especially for organizations that are using horizontal scaling with Kubernetes — is that then you're taking that problem and you're just amplifying it everywhere,” Rajabi said. “And so, when you've hit that complexity at scale, taking a second to look back and say, ‘how do we fix this?’ you don't want to just arbitrarily go reduce resources, because you have to look at the trade-off of how that impacts your reliability.”</span></p><p> </p><p><span style="font-weight: 400;">The process then becomes very hit or miss. “That's where it becomes really complex, when there are so many settings across all those environments, all those namespaces,” Rajabi said. “It's almost a problem that can only be solved by machine learning, which makes it very interesting.”</span></p><p> </p><p><span style="font-weight: 400;">But organizations that don't automate the optimization of Kubernetes deployments and management often learn the hard way, as many resources — and costs — go to waste. “It's one of those things that becomes a bigger and bigger challenge, the more you grow as an organization,” Bergstrom said. Many StormForge customers are deploying into thousands of namespaces and thousands of workloads. “You are suddenly trying to manage each workload individually to make sure it has the resources and the memory that it needs,” Bergstrom said. “It becomes a bigger and bigger challenge.”</span></p><p> </p><p><span style="font-weight: 400;">The process should actually be pain-free when ML is properly implemented. Through StormForge’s partnership with </span><a href="https://www.datadoghq.com/"><span style="font-weight: 400;">Datadog</span></a><span style="font-weight: 400;">, it is possible to apply ML to collected historical data, Bergstrom explained. 
“Then, within just hours of us deploying our algorithm into your environment, we have machine learning that's used two to three weeks’ worth of data to train that can then automatically set the correct resources for your application. This is because we know what the application is actually using,” Bergstrom said. “We can predict the patterns and we know what it needs in order to be successful.”</span></p>
]]></content:encoded>
      <enclosure length="15191188" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/975973fb-81c3-4aab-acb1-591dfc30e38b/audio/065aab4a-c5de-4c0f-bbaa-7a40c9364522/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>ML Can Prevent Getting Burned For Kubernetes Provisioning</itunes:title>
      <itunes:author>stormforge, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/d7c479b2-c046-4975-8eb1-57abe931ee0c/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:15:49</itunes:duration>
<itunes:summary>In the rush to create, provision and manage Kubernetes, proper resource provisioning is often left out. According to StormForge, a company spending, for example, $1 million a month on cloud computing resources is likely wasting $6 million a year on Kubernetes resources that sit unused. The reasons are manifold: DevOps teams tend to estimate too conservatively or too aggressively, or simply overspend on resource provisioning. In this podcast with StormForge’s Yasmin Rajabi, vice president of product management, and Patrick Bergstrom, CTO, we look at how to properly provision Kubernetes resources and the associated challenges. The podcast was recorded live in Detroit during KubeCon + CloudNativeCon North America 2022.

Patrick Bergstrom - https://www.linkedin.com/in/bergstrompatrick/
Yasmin Rajabi - https://www.linkedin.com/in/yasminrajabi/
Bruce Gain - @bcamerongain 
The New Stack - @thenewstack</itunes:summary>
<itunes:subtitle>In the rush to create, provision and manage Kubernetes, proper resource provisioning is often left out. According to StormForge, a company spending, for example, $1 million a month on cloud computing resources is likely wasting $6 million a year on Kubernetes resources that sit unused. The reasons are manifold: DevOps teams tend to estimate too conservatively or too aggressively, or simply overspend on resource provisioning. In this podcast with StormForge’s Yasmin Rajabi, vice president of product management, and Patrick Bergstrom, CTO, we look at how to properly provision Kubernetes resources and the associated challenges. The podcast was recorded live in Detroit during KubeCon + CloudNativeCon North America 2022.

Patrick Bergstrom - https://www.linkedin.com/in/bergstrompatrick/
Yasmin Rajabi - https://www.linkedin.com/in/yasminrajabi/
Bruce Gain - @bcamerongain 
The New Stack - @thenewstack</itunes:subtitle>
      <itunes:keywords>stormforge, machine learning, kubecon 2022, yasmin rajabi, software developer, patrick bergstrom, tech podcast, the new stack, devops, bruce gain, devops podcast, tech, developer podcast, kubernetes, the new stack makers, software engineer</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1373</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e1795d9c-e8a3-4853-b9b3-e06022021efc</guid>
      <title>What’s the Future of Feature Management?</title>
<description><![CDATA[<p><a href="https://thenewstack.io/why-feature-management-is-the-future-of-software-delivery/">Feature management</a> isn’t a new idea, but lately the trend has picked up speed. Analysts like Forrester and Gartner have cited adoption of the practice as being, respectively, <a href="https://www.forrester.com/blogs/feature-management-is-hot-feature-experimentation-is-just-warming-up/">“hot”</a> and <a href="https://launchdarkly.com/blog/feature-management-gartner-hype-cycle-for-agile-and-devops-2022/">“the dominant approach to experimentation in software engineering.”</a></p><p> </p><p>A study released in November found that <a href="https://launchdarkly.com/blog/state-of-feature-management-2022/">60% of 1,000 software and IT professionals surveyed started using feature flags only in the past year</a>, according to the report, sponsored by LaunchDarkly, the feature management platform, and conducted by Wakefield Research.</p><p> </p><p>At the heart of feature management are <a href="https://thenewstack.io/moving-to-the-cloud-presents-new-use-cases-for-feature-flags/">feature flags</a>, which give organizations the ability to turn features on and off, without having to re-deploy an entire app. 
Feature flags allow organizations to test new features and control things like access to premium versions of a customer-facing service.</p><p> </p><p>An overall feature management practice that includes feature flags allows organizations “to release progressively any new feature to any segment of users, any environment, any cohort of customers in a controlled manner that really reduces the risk of each release,” said <a href="https://www.linkedin.com/in/rthar/">Ravi Tharisayi,</a> senior director of product marketing at LaunchDarkly, in this episode of The New Stack Makers podcast.</p><p> </p><p>Tharisayi talked to The New Stack’s features editor, <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> about the future of feature management, on the eve of the company’s latest Trajectory user conference. This episode of Makers was sponsored by LaunchDarkly.</p><p><h2>Streamlining Management, Saving Money</h2></p><p>The participants in the new survey worked at companies of at least 200 employees, and nearly all of them that use feature flags — 98% — said they believe they save their organizations money and demonstrate a return on investment.</p><p> </p><p>Furthermore, 70% said that their company views feature management as either a mission-critical or a high-priority investment.</p><p> </p><p>Fielding the annual survey, Tharisayi said, has offered a window into how organizations are using feature flags. 
Fifty-five percent of customers in the 2022 survey said they use feature flags as long-term operational controls — for API rate limiting, for instance, to prioritize certain API calls in high-traffic situations.</p><p> </p><p>The second most common use, the survey found — cited by 47% of users — was for entitlements, “managing access to different types of plans, premium plans versus other plans, for example,” Tharisayi said.</p><p> </p><p>“This is really a powerful capability because of this ability to allow product managers or other personas to manage who has access to certain features to certain plans, without having to have developers be involved,” he said. “Previously, that required a lot of developer involvement.”</p><p><h2>Experimentation, Metrics, Cultural Shifts</h2></p><p>LaunchDarkly, Tharisayi said, has been investing in and improving its platform’s experimentation and measurement capabilities: “At the core of that is this notion that experimentation can be a lot more successful when it's tightly integrated to the developer workflow.”</p><p> </p><p>As an example, he pointed to CCP Games, makers of the gaming platform EVE Online, which serves millions of players.</p><p> </p><p>“They were recently thinking through how to evolve their recommendation engine, because they wanted this engine to recommend actions for their gamers that will hopefully increase their ultimate North Star metric,” its tracking of how much time gamers spend with their games.</p><p> </p><p>By using LaunchDarkly’s platform, CCP was able to run A/B tests and increase gamers’ session lengths and engagement. 
“So that's the kind of capability that we think is going to be an increasing priority,” Tharisayi said.</p><p> </p><p>As feature management matures and standardizes, Tharisayi pointed to <a href="https://thenewstack.io/target-embraces-cross-organizational-devops-culture/">the adoption of DevOps</a> as a model and a cautionary tale.</p><p> </p><p>“When it comes to cultural shifts, like DevOps or feature management that require teams to work in a different way, oftentimes there can be early success with a small team,” Tharisayi said. “But then there can be some cultural and process barriers as you're trying to standardize to the team level and multi-team level, before figuring out the kinks in deploying it at an organization-wide level.”</p><p> </p><p>He added, “That's one of the trends that we observed a little bit in this survey, is that there are some cultural elements to getting success at scale, with something like feature management and the opportunity as an industry to support organizations as they're making that quest to standardize a practice like this, like any other cultural practice.”</p><p> </p><p>Check out the full episode for more on the survey and on what’s next for feature management.</p>
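<p>A progressive rollout of the kind Tharisayi describes is commonly implemented by hashing each user into a stable bucket, so the same user keeps the same experience as the rollout percentage grows. The sketch below is a generic, hypothetical illustration in Python, not LaunchDarkly's SDK:</p>

```python
# Hypothetical sketch of a percentage-rollout feature flag (not the
# LaunchDarkly SDK): each user hashes to a stable bucket 0-99, so the
# same user always gets the same answer as the rollout widens.
import hashlib

def flag_enabled(flag_key, user_id, rollout_percent):
    """Return True if this user falls inside the rollout percentage."""
    digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # deterministic per (flag, user)
    return bucket < rollout_percent

# Gate a new feature for 25% of users; no redeploy needed to widen it:
print(flag_enabled("new-recommendations", "user-42", 25))
```

<p>Because the bucket depends only on the flag key and user ID, raising the percentage from 25 to 50 enables the feature for new users without ever turning it off for anyone already in the rollout.</p>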
]]></description>
      <pubDate>Tue, 29 Nov 2022 16:58:17 +0000</pubDate>
      <author>podcasts@thenewstack.io (Launch Darkly, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/whats-the-future-of-feature-management-E_xTJiPF</link>
<content:encoded><![CDATA[<p><a href="https://thenewstack.io/why-feature-management-is-the-future-of-software-delivery/">Feature management</a> isn’t a new idea, but lately the trend has picked up speed. Analysts like Forrester and Gartner have cited adoption of the practice as being, respectively, <a href="https://www.forrester.com/blogs/feature-management-is-hot-feature-experimentation-is-just-warming-up/">“hot”</a> and <a href="https://launchdarkly.com/blog/feature-management-gartner-hype-cycle-for-agile-and-devops-2022/">“the dominant approach to experimentation in software engineering.”</a></p><p> </p><p>A study released in November found that <a href="https://launchdarkly.com/blog/state-of-feature-management-2022/">60% of 1,000 software and IT professionals surveyed started using feature flags only in the past year</a>, according to the report, sponsored by LaunchDarkly, the feature management platform, and conducted by Wakefield Research.</p><p> </p><p>At the heart of feature management are <a href="https://thenewstack.io/moving-to-the-cloud-presents-new-use-cases-for-feature-flags/">feature flags</a>, which give organizations the ability to turn features on and off, without having to re-deploy an entire app. 
Feature flags allow organizations to test new features and control things like access to premium versions of a customer-facing service.</p><p> </p><p>An overall feature management practice that includes feature flags allows organizations “to release progressively any new feature to any segment of users, any environment, any cohort of customers in a controlled manner that really reduces the risk of each release,” said <a href="https://www.linkedin.com/in/rthar/">Ravi Tharisayi,</a> senior director of product marketing at LaunchDarkly, in this episode of The New Stack Makers podcast.</p><p> </p><p>Tharisayi talked to The New Stack’s features editor, <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> about the future of feature management, on the eve of the company’s latest Trajectory user conference. This episode of Makers was sponsored by LaunchDarkly.</p><p><h2>Streamlining Management, Saving Money</h2></p><p>The participants in the new survey worked at companies of at least 200 employees, and nearly all of them that use feature flags — 98% — said they believe they save their organizations money and demonstrate a return on investment.</p><p> </p><p>Furthermore, 70% said that their company views feature management as either a mission-critical or a high-priority investment.</p><p> </p><p>Fielding the annual survey, Tharisayi said, has offered a window into how organizations are using feature flags. 
Fifty-five percent of customers in the 2022 survey said they use feature flags as long-term operational controls — for API rate limiting, for instance, to prioritize certain API calls in high-traffic situations.</p><p> </p><p>The second most common use, the survey found — cited by 47% of users — was for entitlements, “managing access to different types of plans, premium plans versus other plans, for example,” Tharisayi said.</p><p> </p><p>“This is really a powerful capability because of this ability to allow product managers or other personas to manage who has access to certain features to certain plans, without having to have developers be involved,” he said. “Previously, that required a lot of developer involvement.”</p><p><h2>Experimentation, Metrics, Cultural Shifts</h2></p><p>LaunchDarkly, Tharisayi said, has been investing in and improving its platform’s experimentation and measurement capabilities: “At the core of that is this notion that experimentation can be a lot more successful when it's tightly integrated to the developer workflow.”</p><p> </p><p>As an example, he pointed to CCP Games, makers of the gaming platform EVE Online, which serves millions of players.</p><p> </p><p>“They were recently thinking through how to evolve their recommendation engine, because they wanted this engine to recommend actions for their gamers that will hopefully increase their ultimate North Star metric,” its tracking of how much time gamers spend with their games.</p><p> </p><p>By using LaunchDarkly’s platform, CCP was able to run A/B tests and increase gamers’ session lengths and engagement. 
”So that's the kind of capability that we think is going to be an increasing priority,” Tharisayi said.</p><p> </p><p>As feature management matures and standardizes, Tharisayi pointed to <a href="https://thenewstack.io/target-embraces-cross-organizational-devops-culture/">the adoption of DevOps</a> as both a model and a cautionary tale.</p><p> </p><p>“When it comes to cultural shifts, like DevOps or feature management, that require teams to work in a different way, oftentimes there can be early success with a small team,” Tharisayi said. “But then there can be some cultural and process barriers as you're trying to standardize to the team level and multi-team level, before figuring out the kinks in deploying it at an organization-wide level.”</p><p> </p><p>He added, “That's one of the trends that we observed a little bit in this survey, is that there are some cultural elements to getting success at scale, with something like feature management, and the opportunity as an industry to support organizations as they're making that quest to standardize a practice like this, like any other cultural practice.”</p><p> </p><p>Check out the full episode for more on the survey and on what’s next for feature management.</p>
]]></content:encoded>
      <enclosure length="26662511" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/72edb1a6-c9fc-44ac-80f4-61c72bb8e1ae/audio/a77be508-8df6-4e28-901d-66a666c7edea/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What’s the Future of Feature Management?</itunes:title>
      <itunes:author>LaunchDarkly, The New Stack</itunes:author>
      <itunes:duration>00:27:27</itunes:duration>
      <itunes:summary>Feature management isn’t a new idea but lately it’s a trend that’s picked up speed. Analysts like Forrester and Gartner have cited adoption of the practice as being, respectively, “hot” and “the dominant approach to experimentation in software engineering.”

A study released in November found that 60% of 1,000 software and IT professionals surveyed started using feature flags only in the past year, according to the report sponsored by LaunchDarkly, the feature management platform, and conducted by Wakefield Research.

At the heart of feature management are feature flags, which give organizations the ability to turn features on and off, without having to re-deploy an entire app. Feature flags allow organizations to test new features and control things like access to premium versions of a customer-facing service.

An overall feature management practice that includes feature flags allows organizations “to release progressively any new feature to any segment of users, any environment, any cohort of customers in a controlled manner that really reduces the risk of each release,” said Ravi Tharisayi, senior director of product marketing at LaunchDarkly, in this episode of The New Stack Makers podcast.

Tharisayi talked to The New Stack’s features editor, Heather Joslyn, about the future of feature management, on the eve of the company’s latest Trajectory user conference. This episode of Makers was sponsored by LaunchDarkly.</itunes:summary>
      <itunes:subtitle>Feature management isn’t a new idea but lately it’s a trend that’s picked up speed. Analysts like Forrester and Gartner have cited adoption of the practice as being, respectively, “hot” and “the dominant approach to experimentation in software engineering.”

A study released in November found that 60% of 1,000 software and IT professionals surveyed started using feature flags only in the past year, according to the report sponsored by LaunchDarkly, the feature management platform, and conducted by Wakefield Research.

At the heart of feature management are feature flags, which give organizations the ability to turn features on and off, without having to re-deploy an entire app. Feature flags allow organizations to test new features and control things like access to premium versions of a customer-facing service.

An overall feature management practice that includes feature flags allows organizations “to release progressively any new feature to any segment of users, any environment, any cohort of customers in a controlled manner that really reduces the risk of each release,” said Ravi Tharisayi, senior director of product marketing at LaunchDarkly, in this episode of The New Stack Makers podcast.

Tharisayi talked to The New Stack’s features editor, Heather Joslyn, about the future of feature management, on the eve of the company’s latest Trajectory user conference. This episode of Makers was sponsored by LaunchDarkly.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, feature flags, the new stack, heather joslyn, devops, devops podcast, tech, developer podcast, ravi tharisayi, the new stack makers, software engineer, launch darkly</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1372</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ca7d3e96-dea0-4398-816c-c0aae8c10142</guid>
      <title>Chronosphere Nudges Observability Standards Toward Maturity</title>
      <description><![CDATA[<p>DETROIT — <a href="https://www.linkedin.com/in/robskillington/">Rob Skillington’s</a> grandfather was a civil engineer, working in an industry that, in over a century, developed processes and know-how that enabled the creation of buildings, bridges and roads.</p><p> </p><p>“A lot of those processes matured to a point where they could reliably build these things,” said Skillington, co-founder and chief technology officer at Chronosphere, an observability platform. “And I think about observability as that same maturity of engineering practice. When it comes to building software that actually is useful in the world, it is this process that helps you actually achieve the deployment and operation of these large scale systems that we use every day.”</p><p> </p><p>Skillington spoke about the evolution of observability, and his company’s recent donation of an open source project to <a href="https://prometheus.io/">Prometheus,</a> in this episode of The New Stack Makers podcast. <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> features editor of TNS, hosted the conversation.</p><p> </p><p>This On the Road edition of The New Stack Makers was recorded at KubeCon + CloudNativeCon North America, in the Motor City. The episode was sponsored by Chronosphere.</p><p><h2>A Donation to the Prometheus Project</h2></p><p>Helping observability practices grow as mature and reliable as civil engineering rules that help build sturdy skyscrapers is a tough task, Skillington suggested.</p><p> </p><p>In the cloud era, he said, “you have to really prepare the software for a whole set of runtime environments. 
And so the challenges around that is really about making it consistent, well understood and robust.”</p><p> </p><p>At KubeCon in late October, Chronosphere and PromLabs (founded by <a href="https://www.linkedin.com/in/julius-volz">Julius Volz,</a> creator of Prometheus) announced that they had donated their open source project <a href="https://promlens.com/">PromLens</a> to the Prometheus project, the open source monitoring and alerting project.</p><p> </p><p>The donation is a way of placing a bet on a tool that integrates well with Kubernetes. “There's this real yearning for essentially a standard that can be built upon by everyone in the industry, when it comes to these core primitives, essentially,” Skillington said. “And <a href="https://thenewstack.io/cncf-prometheus-agent-could-be-a-game-changer-for-edge/">Prometheus</a> is one of those primitives. We want to continue to solidify that as a primitive that stands the test of time.”</p><p> </p><p>“We can't build a self-driving car if we're always building a different car,” he added.</p><p> </p><p>PromLens <a href="https://thenewstack.io/query-optimization-in-the-prometheus-world/">builds Prometheus queries</a> in a sort of integrated development environment (IDE), Skillington said. It also makes it easier for more people in an organization to create queries and understand the meaning and seriousness of alerts.</p><p> </p><p>The PromLens tool breaks queries into a visual format, and allows users to edit them through a UI. “Basically, it's kind of like a What You See Is What You Get editor, or WYSIWYG editor, for Prometheus queries,” Skillington said.</p><p> </p><p>“Some of our customers have tens of thousands of these alerts defined in PromQL, which is the query language for Prometheus,” he noted. 
“Having a tool like an integrated development environment — where you can really understand these complex queries and iterate faster on, setting these up and getting back to your day job — is incredibly important.”</p><p> </p><p>Check out the full episode for more on PromLens and the current state of observability.</p>
]]></description>
      <pubDate>Wed, 23 Nov 2022 21:34:23 +0000</pubDate>
      <author>podcasts@thenewstack.io (Chronosphere, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/chronosphere-nudges-observability-standards-toward-maturity-i74nWX3b</link>
      <content:encoded><![CDATA[<p>DETROIT — <a href="https://www.linkedin.com/in/robskillington/">Rob Skillington’s</a> grandfather was a civil engineer, working in an industry that, in over a century, developed processes and know-how that enabled the creation of buildings, bridges and roads.</p><p> </p><p>“A lot of those processes matured to a point where they could reliably build these things,” said Skillington, co-founder and chief technology officer at Chronosphere, an observability platform. “And I think about observability as that same maturity of engineering practice. When it comes to building software that actually is useful in the world, it is this process that helps you actually achieve the deployment and operation of these large scale systems that we use every day.”</p><p> </p><p>Skillington spoke about the evolution of observability, and his company’s recent donation of an open source project to <a href="https://prometheus.io/">Prometheus,</a> in this episode of The New Stack Makers podcast. <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> features editor of TNS, hosted the conversation.</p><p> </p><p>This On the Road edition of The New Stack Makers was recorded at KubeCon + CloudNativeCon North America, in the Motor City. The episode was sponsored by Chronosphere.</p><p><h2>A Donation to the Prometheus Project</h2></p><p>Helping observability practices grow as mature and reliable as civil engineering rules that help build sturdy skyscrapers is a tough task, Skillington suggested.</p><p> </p><p>In the cloud era, he said, “you have to really prepare the software for a whole set of runtime environments. 
And so the challenges around that is really about making it consistent, well understood and robust.”</p><p> </p><p>At KubeCon in late October, Chronosphere and PromLabs (founded by <a href="https://www.linkedin.com/in/julius-volz">Julius Volz,</a> creator of Prometheus) announced that they had donated their open source project <a href="https://promlens.com/">PromLens</a> to the Prometheus project, the open source monitoring and alerting project.</p><p> </p><p>The donation is a way of placing a bet on a tool that integrates well with Kubernetes. “There's this real yearning for essentially a standard that can be built upon by everyone in the industry, when it comes to these core primitives, essentially,” Skillington said. “And <a href="https://thenewstack.io/cncf-prometheus-agent-could-be-a-game-changer-for-edge/">Prometheus</a> is one of those primitives. We want to continue to solidify that as a primitive that stands the test of time.”</p><p> </p><p>“We can't build a self-driving car if we're always building a different car,” he added.</p><p> </p><p>PromLens <a href="https://thenewstack.io/query-optimization-in-the-prometheus-world/">builds Prometheus queries</a> in a sort of integrated development environment (IDE), Skillington said. It also makes it easier for more people in an organization to create queries and understand the meaning and seriousness of alerts.</p><p> </p><p>The PromLens tool breaks queries into a visual format, and allows users to edit them through a UI. “Basically, it's kind of like a What You See Is What You Get editor, or WYSIWYG editor, for Prometheus queries,” Skillington said.</p><p> </p><p>“Some of our customers have tens of thousands of these alerts defined in PromQL, which is the query language for Prometheus,” he noted. 
“Having a tool like an integrated development environment — where you can really understand these complex queries and iterate faster on, setting these up and getting back to your day job — is incredibly important.”</p><p> </p><p>Check out the full episode for more on PromLens and the current state of observability.</p>
]]></content:encoded>
      <enclosure length="15580834" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/45b9beef-5a0f-4143-966c-0365130003ae/audio/3ec4de49-3ef2-4858-b009-be5b773aa860/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Chronosphere Nudges Observability Standards Toward Maturity</itunes:title>
      <itunes:author>Chronosphere, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/ba0b10d7-e3c1-462e-9334-2ad4456ffea9/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:15:31</itunes:duration>
      <itunes:summary>DETROIT — Rob Skillington’s grandfather was a civil engineer, working in an industry that, in over a century, developed processes and know-how that enabled the creation of buildings, bridges and roads.

“A lot of those processes matured to a point where they could reliably build these things,” said Skillington, co-founder and chief technology officer at Chronosphere, an observability platform. “And I think about observability as that same maturity of engineering practice. When it comes to building software that actually is useful in the world, it is this process that helps you actually achieve the deployment and operation of these large scale systems that we use every day.”

Skillington spoke about the evolution of observability, and his company’s recent donation of an open source project to Prometheus, in this episode of The New Stack Makers podcast. Heather Joslyn, features editor of TNS, hosted the conversation.

This On the Road edition of The New Stack Makers was recorded at KubeCon + CloudNativeCon North America in the Motor City. The episode was sponsored by Chronosphere.</itunes:summary>
      <itunes:subtitle>DETROIT — Rob Skillington’s grandfather was a civil engineer, working in an industry that, in over a century, developed processes and know-how that enabled the creation of buildings, bridges and roads.

“A lot of those processes matured to a point where they could reliably build these things,” said Skillington, co-founder and chief technology officer at Chronosphere, an observability platform. “And I think about observability as that same maturity of engineering practice. When it comes to building software that actually is useful in the world, it is this process that helps you actually achieve the deployment and operation of these large scale systems that we use every day.”

Skillington spoke about the evolution of observability, and his company’s recent donation of an open source project to Prometheus, in this episode of The New Stack Makers podcast. Heather Joslyn, features editor of TNS, hosted the conversation.

This On the Road edition of The New Stack Makers was recorded at KubeCon + CloudNativeCon North America in the Motor City. The episode was sponsored by Chronosphere.</itunes:subtitle>
      <itunes:keywords>kubecon 2022, software developer, tech podcast, the new stack, heather joslyn, devops, devops podcast, tech, developer podcast, kubernetes, rob skillington, the new stack makers, software engineer, observability, chronosphere</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1371</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3eaceac5-cf90-4534-8b7c-3649f964eb8b</guid>
      <title>How Boeing Uses Cloud Native</title>
      <description><![CDATA[<p>In this latest podcast from The New Stack, we spoke with Ricardo Torres, who is the chief engineer of open source and cloud native for aerospace giant Boeing. Torres also joined the Cloud Native Computing Foundation in May to serve as a board member. In this interview, recorded at KubeCon+CloudNativeCon last month, Torres speaks about Boeing's use of open source software, as well as its adoption of cloud native technologies.</p><p> </p><p>While we may think of Boeing as an airplane manufacturer, it would be more accurate to think of the company as a large-scale system integrator, one that uses a lot of software. So, like other large-scale companies, Boeing sees a distinct advantage in maintaining good relations with the open source community.</p><p> </p><p>"Being able to leverage the best technologists out there in the rest of the world is of great value to us strategically," Torres said. This strategy allows Boeing to "differentiate on what we do as our core business rather than having to reinvent the wheel all the time on all of the technology."</p><p> </p><p>Like many other large companies, Boeing has created <a href="https://thenewstack.io/how-an-ospo-can-help-your-engineers-give-back-to-open-source/">an open source office</a> to better work with the open source community. Although Boeing is primarily a consumer of open source software, it still wants to work with the community. "We want to make sure that we have a strategy around how we contribute back to the open source community, and then leverage those learnings for inner sourcing," he said.</p><p> </p><p>Boeing also manages how it uses open source internally, keeping <a href="https://thenewstack.io/sboms-are-great-for-supply-chain-security-but-buyers-beware/">tight controls on the supply chain</a> of open source software it uses. 
"As part of the software engineering organization, we partner with our internal IT organization, to look at our internet traffic and assure nobody's going out and downloading directly from an untrusted repository or registry. And then we host instead, we have approved sources internally."</p><p> </p><p>It's not surprising that Boeing, which deals with a lot of government agencies, embraces the practice of <a href="https://thenewstack.io/how-to-prepare-your-apps-for-regulated-markets/">using software bills of materials</a> (SBOMs), which provide a full listing of what components are being used in a software system. In fact, the company has been working to extend the comprehensiveness of SBOMs, according to Torres.</p><p> </p><p>"I think one of the interesting things now is the automation," he said of SBOMs. "And so we're always looking to beef up the heuristics because a lot of the tools are relatively naïve, in that they trust that the dependencies that are specified are actually representative of everything that's delivered. And that's not good enough for a company like Boeing. We have to be absolutely certain that what's there is exactly what we expected to be there."</p><p><h2>Cloud Native Computing</h2></p><p>While Boeing builds many systems that reside in private data centers, the company is also increasingly relying on the cloud. Earlier this year, Boeing signed agreements with the three largest cloud service providers (CSPs): Amazon Web Services, Microsoft Azure and the Google Cloud Platform.</p><p> </p><p>"A lot of our cloud presence is about our development environments. And so, you know, we have cloud-based software factories that are using a number of CNCF and CNCF-adjacent technologies to enable our developers to move fast," Torres said.</p>
]]></description>
      <pubDate>Wed, 23 Nov 2022 18:01:08 +0000</pubDate>
      <author>podcasts@thenewstack.io (Cloud Native Computing Foundation, Boeing, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-boeing-uses-cloud-native-ZOgHGWAl</link>
      <content:encoded><![CDATA[<p>In this latest podcast from The New Stack, we spoke with Ricardo Torres, who is the chief engineer of open source and cloud native for aerospace giant Boeing. Torres also joined the Cloud Native Computing Foundation in May to serve as a board member. In this interview, recorded at KubeCon+CloudNativeCon last month, Torres speaks about Boeing's use of open source software, as well as its adoption of cloud native technologies.</p><p> </p><p>While we may think of Boeing as an airplane manufacturer, it would be more accurate to think of the company as a large-scale system integrator, one that uses a lot of software. So, like other large-scale companies, Boeing sees a distinct advantage in maintaining good relations with the open source community.</p><p> </p><p>"Being able to leverage the best technologists out there in the rest of the world is of great value to us strategically," Torres said. This strategy allows Boeing to "differentiate on what we do as our core business rather than having to reinvent the wheel all the time on all of the technology."</p><p> </p><p>Like many other large companies, Boeing has created <a href="https://thenewstack.io/how-an-ospo-can-help-your-engineers-give-back-to-open-source/">an open source office</a> to better work with the open source community. Although Boeing is primarily a consumer of open source software, it still wants to work with the community. "We want to make sure that we have a strategy around how we contribute back to the open source community, and then leverage those learnings for inner sourcing," he said.</p><p> </p><p>Boeing also manages how it uses open source internally, keeping <a href="https://thenewstack.io/sboms-are-great-for-supply-chain-security-but-buyers-beware/">tight controls on the supply chain</a> of open source software it uses. 
"As part of the software engineering organization, we partner with our internal IT organization, to look at our internet traffic and assure nobody's going out and downloading directly from an untrusted repository or registry. And then we host instead, we have approved sources internally."</p><p> </p><p>It's not surprising that Boeing, which deals with a lot of government agencies, embraces the practice of <a href="https://thenewstack.io/how-to-prepare-your-apps-for-regulated-markets/">using software bills of materials</a> (SBOMs), which provide a full listing of what components are being used in a software system. In fact, the company has been working to extend the comprehensiveness of SBOMs, according to Torres.</p><p> </p><p>"I think one of the interesting things now is the automation," he said of SBOMs. "And so we're always looking to beef up the heuristics because a lot of the tools are relatively naïve, in that they trust that the dependencies that are specified are actually representative of everything that's delivered. And that's not good enough for a company like Boeing. We have to be absolutely certain that what's there is exactly what we expected to be there."</p><p><h2>Cloud Native Computing</h2></p><p>While Boeing builds many systems that reside in private data centers, the company is also increasingly relying on the cloud. Earlier this year, Boeing signed agreements with the three largest cloud service providers (CSPs): Amazon Web Services, Microsoft Azure and the Google Cloud Platform.</p><p> </p><p>"A lot of our cloud presence is about our development environments. And so, you know, we have cloud-based software factories that are using a number of CNCF and CNCF-adjacent technologies to enable our developers to move fast," Torres said.</p>
]]></content:encoded>
      <enclosure length="11584665" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/6818cff1-d043-45c2-af2a-0c539e94c380/audio/0097bb3d-9cbf-44a1-9c6f-243d42b43a25/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Boeing Uses Cloud Native</itunes:title>
      <itunes:author>Cloud Native Computing Foundation, Boeing, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/211aff0b-d863-45d0-a0dd-647f725550ba/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:12:04</itunes:duration>
      <itunes:summary>In this latest podcast from The New Stack, we spoke with Ricardo Torres, who is the chief engineer of open source and cloud native for aerospace giant Boeing. Torres also joined the Cloud Native Computing Foundation in May to serve as a board member. In this interview, recorded at KubeCon+CloudNativeCon last month, Torres speaks about Boeing&apos;s use of open source software, as well as its adoption of cloud native technologies.</itunes:summary>
      <itunes:subtitle>In this latest podcast from The New Stack, we spoke with Ricardo Torres, who is the chief engineer of open source and cloud native for aerospace giant Boeing. Torres also joined the Cloud Native Computing Foundation in May to serve as a board member. In this interview, recorded at KubeCon+CloudNativeCon last month, Torres speaks about Boeing&apos;s use of open source software, as well as its adoption of cloud native technologies.</itunes:subtitle>
      <itunes:keywords>ricardo torres, kubecon 2022, software developer, joab jackson, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, kubernetes, the new stack makers, software engineer, boeing, kubecon</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1370</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">606cda19-688d-4a9f-81ae-4b349a175cbc</guid>
      <title>Case Study: How Dell Technologies Is Building a DevRel Team</title>
      <description><![CDATA[<p>DETROIT — <a href="https://thenewstack.io/4-forecasts-for-the-future-of-developer-relations/">Developer relations,</a> or DevRel to its friends, is not only <a href="https://thenewstack.io/devrel-and-the-increasing-popularity-of-the-developer-advocate/">a coveted career path</a> but also essential to helping developers learn and adopt new technologies.</p><p> </p><p>That guidance is a matter of survival for many organizations. The cloud native era demands new skills and new ways of thinking about developers and engineers’ day-to-day jobs. At Dell Technologies, it meant responding to the challenges faced by its existing customer base, which is “very Ops centric — server admins, system admins,” according to <a href="https://www.linkedin.com/in/bradmaltz">Brad Maltz,</a> of Dell.</p><p> </p><p>With the rise of the DevOps movement, “what we realized is our end users have been trying to figure out how to become infrastructure developers,” said Maltz, the company’s senior director of DevOps portfolio and DevRel. “They've been trying to figure out how to use infrastructure as code, Kubernetes, cloud, all those things.”</p><p> </p><p>“And what that means is we need to be able to speak to them where they want to go, when they want to become those developers. That’s led us to build out a developer relations program ... 
and in doing that, we need to grow out the community, and really help our end users get to where they want to.”</p><p> </p><p>In this episode of The New Stack’s Makers podcast, Maltz spoke to <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> TNS features editor, about how Dell has, since August, been busy <a href="https://thenewstack.io/devrel-for-beginners-how-to-get-started/">creating a DevRel team</a> to aid its enterprise customers seeking to adopt DevOps as a way of doing business.</p><p> </p><p>This On the Road edition of Makers, recorded at KubeCon + CloudNativeCon North America in the Motor City, was sponsored by Dell Technologies.</p><p> </p><h2>Recruiting Influencers</h2><p> </p><p>Maltz, an eight-year veteran of Dell, has moved quickly in assembling his team, with three hires made by late October and a fourth planned before year’s end. That’s lightning fast, especially for a large, established company like Dell, which was founded in 1984.</p><p> </p><p>“There's <a href="https://thenewstack.io/youre-doing-it-wrong-recruiting-a-devrel/">two ways of building a DevOps team,</a>” he said. “One way is to actually kind of go and try to homegrow people on the inside and get them more presence in the community. That's the slower road.</p><p> </p><p>“But we decided we have to go and find industry influencers that believe in our cause, that believe in the problem space that we live in. 
And that's really how we started this: we went out to find some very, very strong top talent in the industry and bring them on board.”</p><p> </p><p>In addition to spreading the DevOps solutions gospel at conferences like KubeCon, Maltz’s vision for the team is currently focused on social media and building out a website, <a href="https://developer.dell.com/">developer.dell.com,</a> which will serve as the landing page for the company’s DevRel knowledge, including links to community, training, how-to videos and an API marketplace.</p><p> </p><p>In building the team, the company made an unorthodox choice. “We decided to put DevRel into product management on the product side, not marketing,” Maltz said. “The reason we did that was we want the DevRel folks to really focus on community contributions, education, all that stuff.</p><p> </p><p>“But while they're doing that, their job is to bring the data back from those discussions they're having in the field back to product management, to enable our tooling to be able to satisfy some of those problems that they're bringing back so we can start going full circle.”</p><p> </p><h2>Facing the Limits of ‘Shift Left’</h2><p> </p><p>The roles that Dell’s DevRel team is focusing on in the DevOps culture are site reliability engineers (SREs) and platform engineers. These not only align with its traditional audience of Ops engineers, but reflect a reality Dell is seeing in the wider tech world.</p><p> </p><p>“The reality is, application developers don't want to shift left, they don't want to operate. They want somebody else to take it, and they want to keep developing,” Maltz said. 
“Where DevOps has transitioned for us is, how do we help those people that are kind of that operator turning into infrastructure developer fit into that DevOps culture?”</p><p> </p><p>The rise of platform engineering, he suggested, is a reaction to the endless choices of tools available to developers these days.</p><p> </p><p>“The notion is developers in the wild are able to use any tool on any cloud with any language, and they can do whatever they want. That's hard to support,” he said.</p><p> </p><p>“That's where DevOps got introduced, and was to basically say, Hey, we're gonna put you into a little bit of a box, just enough of a box that we can start to gain control and get ahead of the game. The platform engineering team, in this case, they're the ones in charge of that box.”</p><p> </p><p>But all of that, Maltz said, doesn’t mean that “shift left” — giving devs greater responsibility for their applications — is dead. It simply means most organizations aren’t ready for it yet: “That will take a few more years of maturity within these DevOps operating models, and other things that are coming down the road.”</p><p> </p><p>Check out the full episode for more from Maltz, including new solutions from Dell aimed at platform engineers and SREs and collaborations with Red Hat OpenShift.</p>
]]></description>
      <pubDate>Tue, 22 Nov 2022 13:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (Dell Technologies, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/case-study-how-dell-technologies-is-building-a-devrel-team-_rUm3Wcr</link>
      <content:encoded><![CDATA[<p>DETROIT — <a href="https://thenewstack.io/4-forecasts-for-the-future-of-developer-relations/">Developer relations,</a> or DevRel to its friends, is not only <a href="https://thenewstack.io/devrel-and-the-increasing-popularity-of-the-developer-advocate/">a coveted career path</a> but also essential to helping developers learn and adopt new technologies.</p><p> </p><p>That guidance is a matter of survival for many organizations. The cloud native era demands new skills and new ways of thinking about developers and engineers’ day-to-day jobs. At Dell Technologies, it meant responding to the challenges faced by its existing customer base, which is “very Ops centric — server admins, system admins,” according to <a href="https://www.linkedin.com/in/bradmaltz">Brad Maltz,</a> of Dell.</p><p> </p><p>With the rise of the DevOps movement, “what we realized is our end users have been trying to figure out how to become infrastructure developers,” said Maltz, the company’s senior director of DevOps portfolio and DevRel. “They've been trying to figure out how to use infrastructure as code, Kubernetes, cloud, all those things.”</p><p> </p><p>“And what that means is we need to be able to speak to them where they want to go, when they want to become those developers. That’s led us to build out a developer relations program ... 
and in doing that, we need to grow out the community, and really help our end users get to where they want to.”</p><p> </p><p>In this episode of The New Stack’s Makers podcast, Maltz spoke to <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> TNS features editor, about how Dell has, since August, been busy <a href="https://thenewstack.io/devrel-for-beginners-how-to-get-started/">creating a DevRel team</a> to aid its enterprise customers seeking to adopt DevOps as a way of doing business.</p><p> </p><p>This On the Road edition of Makers, recorded at KubeCon + CloudNativeCon North America in the Motor City, was sponsored by Dell Technologies.</p><p> </p><h2>Recruiting Influencers</h2><p> </p><p>Maltz, an eight-year veteran of Dell, has moved quickly in assembling his team, with three hires made by late October and a fourth planned before year’s end. That’s lightning fast, especially for a large, established company like Dell, which was founded in 1984.</p><p> </p><p>“There's <a href="https://thenewstack.io/youre-doing-it-wrong-recruiting-a-devrel/">two ways of building a DevOps team,</a>” he said. “One way is to actually kind of go and try to homegrow people on the inside and get them more presence in the community. That's the slower road.</p><p> </p><p>“But we decided we have to go and find industry influencers that believe in our cause, that believe in the problem space that we live in. 
And that's really how we started this: we went out to find some very, very strong top talent in the industry and bring them on board.”</p><p> </p><p>In addition to spreading the DevOps solutions gospel at conferences like KubeCon, Maltz’s vision for the team is currently focused on social media and building out a website, <a href="https://developer.dell.com/">developer.dell.com,</a> which will serve as the landing page for the company’s DevRel knowledge, including links to community, training, how-to videos and an API marketplace.</p><p> </p><p>In building the team, the company made an unorthodox choice. “We decided to put DevRel into product management on the product side, not marketing,” Maltz said. “The reason we did that was we want the DevRel folks to really focus on community contributions, education, all that stuff.</p><p> </p><p>“But while they're doing that, their job is to bring the data back from those discussions they're having in the field back to product management, to enable our tooling to be able to satisfy some of those problems that they're bringing back so we can start going full circle.”</p><p> </p><h2>Facing the Limits of ‘Shift Left’</h2><p> </p><p>The roles that Dell’s DevRel team is focusing on in the DevOps culture are site reliability engineers (SREs) and platform engineers. These not only align with its traditional audience of Ops engineers, but reflect a reality Dell is seeing in the wider tech world.</p><p> </p><p>“The reality is, application developers don't want to shift left, they don't want to operate. They want somebody else to take it, and they want to keep developing,” Maltz said. 
“Where DevOps has transitioned for us is, how do we help those people that are kind of that operator turning into infrastructure developer fit into that DevOps culture?”</p><p> </p><p>The rise of platform engineering, he suggested, is a reaction to the endless choices of tools available to developers these days.</p><p> </p><p>“The notion is developers in the wild are able to use any tool on any cloud with any language, and they can do whatever they want. That's hard to support,” he said.</p><p> </p><p>“That's where DevOps got introduced, and was to basically say, Hey, we're gonna put you into a little bit of a box, just enough of a box that we can start to gain control and get ahead of the game. The platform engineering team, in this case, they're the ones in charge of that box.”</p><p> </p><p>But all of that, Maltz said, doesn’t mean that “shift left” — giving devs greater responsibility for their applications — is dead. It simply means most organizations aren’t ready for it yet: “That will take a few more years of maturity within these DevOps operating models, and other things that are coming down the road.”</p><p> </p><p>Check out the full episode for more from Maltz, including new solutions from Dell aimed at platform engineers and SREs and collaborations with Red Hat OpenShift.</p>
]]></content:encoded>
      <enclosure length="13006097" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/4a42d89a-87d8-4f62-8fbf-785fd9c19f54/audio/3b3a5902-bb5e-4cdd-b2a2-fb6ec6f8502e/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Case Study: How Dell Technologies Is Building a DevRel Team</itunes:title>
      <itunes:author>Dell Technologies, The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/0050c485-f2bc-47cc-a5f4-9388a6040d27/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:13:32</itunes:duration>
      <itunes:summary>DETROIT — Developer relations, or DevRel to its friends, is not only a coveted career path but also essential to helping developers learn and adopt new technologies.

That guidance is a matter of survival for many organizations. The cloud native era demands new skills and new ways of thinking about developers and engineers’ day-to-day jobs. At Dell Technologies, it meant responding to the challenges faced by its existing customer base, which is “very Ops centric — server admins, system admins,” according to Brad Maltz, of Dell.

With the rise of the DevOps movement, “what we realized is our end users have been trying to figure out how to become infrastructure developers,” said Maltz, the company’s senior director of DevOps portfolio and DevRel. “They&apos;ve been trying to figure out how to use infrastructure as code, Kubernetes, cloud, all those things.”

“And what that means is we need to be able to speak to them where they want to go, when they want to become those developers. That’s led us to build out a developer relations program ... and in doing that, we need to grow out the community, and really help our end users get to where they want to.”

In this episode of The New Stack’s Makers podcast, Maltz spoke to Heather Joslyn, TNS features editor, about how Dell has, since August, been busy creating a DevRel team to aid its enterprise customers seeking to adopt DevOps as a way of doing business.

This On the Road edition of Makers, recorded at KubeCon + CloudNativeCon North America in the Motor City, was sponsored by Dell Technologies.</itunes:summary>
      <itunes:subtitle>DETROIT — Developer relations, or DevRel to its friends, is not only a coveted career path but also essential to helping developers learn and adopt new technologies.

That guidance is a matter of survival for many organizations. The cloud native era demands new skills and new ways of thinking about developers and engineers’ day-to-day jobs. At Dell Technologies, it meant responding to the challenges faced by its existing customer base, which is “very Ops centric — server admins, system admins,” according to Brad Maltz, of Dell.

With the rise of the DevOps movement, “what we realized is our end users have been trying to figure out how to become infrastructure developers,” said Maltz, the company’s senior director of DevOps portfolio and DevRel. “They&apos;ve been trying to figure out how to use infrastructure as code, Kubernetes, cloud, all those things.”

“And what that means is we need to be able to speak to them where they want to go, when they want to become those developers. That’s led us to build out a developer relations program ... and in doing that, we need to grow out the community, and really help our end users get to where they want to.”

In this episode of The New Stack’s Makers podcast, Maltz spoke to Heather Joslyn, TNS features editor, about how Dell has, since August, been busy creating a DevRel team to aid its enterprise customers seeking to adopt DevOps as a way of doing business.

This On the Road edition of Makers, recorded at KubeCon + CloudNativeCon North America in the Motor City, was sponsored by Dell Technologies.</itunes:subtitle>
      <itunes:keywords>kubecon 2022, software developer, tech podcast, the new stack, heather joslyn, brad maltz, devops, devops podcast, tech, developer podcast, kubernetes, the new stack makers, software engineer, dell technologies, kubecon</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1369</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b5df193e-e9ec-403e-aa66-3c05dadacf04</guid>
      <title>Kubernetes and Amazon Web Services</title>
      <description><![CDATA[<p>Cloud giant Amazon Web Services manages the largest number of Kubernetes clusters in the world, according to the company. In this podcast recording, AWS Senior Engineer <a href="https://www.linkedin.com/in/jaypipes/">Jay Pipes</a> discusses AWS' use of Kubernetes, as well as the company's contribution to the Kubernetes code base. The interview was recorded at KubeCon North America last month.</p><h2>The Difference Between Kubernetes and AWS</h2><p>Kubernetes is an open source container orchestration platform. AWS is one of the largest providers of cloud services. In 2021, the company generated $61.1 billion in revenue, worldwide. AWS provides a commercial Kubernetes service, called the <a href="https://aws.amazon.com/eks/">Amazon Elastic Kubernetes Service</a> (EKS). It simplifies the Kubernetes experience by adding a control plane and worker nodes.</p><p> </p><p>In addition to providing a commercial Kubernetes service, AWS supports the development of Kubernetes by dedicating engineers to work on the open source project.</p><p> </p><p>"It's a responsibility of all of the engineers in the service team to be aware of what's going on in the upstream community, to be contributing to that upstream community, and making it succeed," Pipes said. "If the upstream open source projects upon which we depend are suffering or not doing well, then our service is not going to do well. And by the same token, if we can help that upstream project or projects to be successful, that means our service is going to be more successful."</p><h2>What is Kubernetes in AWS?</h2><p>In addition to EKS, AWS also has a number of other tools to help Kubernetes users. One is <a href="https://karpenter.sh/">Karpenter</a>, an open-source, flexible, high-performance Kubernetes cluster autoscaler built with AWS. Karpenter provides more fine-grained scaling capabilities, compared to Kubernetes' built-in Cluster Autoscaler, Pipes said. 
Instead of using Cluster Autoscaler, Karpenter deploys AWS' own Fleet API, which offers superior scheduling capabilities.</p><p> </p><p>Another tool for Kubernetes users is <a href="https://cdk8s.io/">cdk8s</a>, an open-source software development framework for defining Kubernetes applications and reusable abstractions using familiar programming languages and rich object-oriented APIs. It is similar to the <a href="https://aws.amazon.com/cdk/">AWS Cloud Development Kit</a> (<em>CDK</em>), which helps users deploy applications using <a href="https://aws.amazon.com/cloudformation/">AWS CloudFormation</a>, but instead of the output being a CloudFormation template, the output is a YAML manifest that can be understood by Kubernetes.</p><h2>AWS and Kubernetes</h2><p>In addition to providing open source development help to Kubernetes, AWS has offered to help defray the considerable expenses of hosting the Kubernetes development and deployment process. Currently, the Kubernetes upstream build process is hosted on the Google Cloud Platform, and its artifact registry is hosted in Google's <a href="https://cloud.google.com/container-registry/">container registry</a>, totaling about 1.5TB worth of storage. AWS alone was paying $90,000 to $100,000 a month in egress costs just to have the Kubernetes code pulled onto AWS-hosted infrastructure, Pipes said.</p><p> </p><p>AWS has been working on a mirror of the Kubernetes assets that would reside on the company's own cloud servers, thereby eliminating the Google egress costs typically borne by the Cloud Native Computing Foundation.</p><p> </p><p>"By doing that we completely eliminate the egress costs out of Google data centers and into AWS data centers," Pipes said.</p>
]]></description>
      <pubDate>Thu, 17 Nov 2022 23:54:29 +0000</pubDate>
      <author>podcasts@thenewstack.io (Amazon Web Services, The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/kubernetes-and-amazon-web-services-p2XhydM0</link>
      <content:encoded><![CDATA[<p>Cloud giant Amazon Web Services manages the largest number of Kubernetes clusters in the world, according to the company. In this podcast recording, AWS Senior Engineer <a href="https://www.linkedin.com/in/jaypipes/">Jay Pipes</a> discusses AWS' use of Kubernetes, as well as the company's contribution to the Kubernetes code base. The interview was recorded at KubeCon North America last month.</p><h2>The Difference Between Kubernetes and AWS</h2><p>Kubernetes is an open source container orchestration platform. AWS is one of the largest providers of cloud services. In 2021, the company generated $61.1 billion in revenue, worldwide. AWS provides a commercial Kubernetes service, called the <a href="https://aws.amazon.com/eks/">Amazon Elastic Kubernetes Service</a> (EKS). It simplifies the Kubernetes experience by adding a control plane and worker nodes.</p><p> </p><p>In addition to providing a commercial Kubernetes service, AWS supports the development of Kubernetes by dedicating engineers to work on the open source project.</p><p> </p><p>"It's a responsibility of all of the engineers in the service team to be aware of what's going on in the upstream community, to be contributing to that upstream community, and making it succeed," Pipes said. "If the upstream open source projects upon which we depend are suffering or not doing well, then our service is not going to do well. And by the same token, if we can help that upstream project or projects to be successful, that means our service is going to be more successful."</p><h2>What is Kubernetes in AWS?</h2><p>In addition to EKS, AWS also has a number of other tools to help Kubernetes users. One is <a href="https://karpenter.sh/">Karpenter</a>, an open-source, flexible, high-performance Kubernetes cluster autoscaler built with AWS. Karpenter provides more fine-grained scaling capabilities, compared to Kubernetes' built-in Cluster Autoscaler, Pipes said. 
Instead of using Cluster Autoscaler, Karpenter deploys AWS' own Fleet API, which offers superior scheduling capabilities.</p><p> </p><p>Another tool for Kubernetes users is <a href="https://cdk8s.io/">cdk8s</a>, an open-source software development framework for defining Kubernetes applications and reusable abstractions using familiar programming languages and rich object-oriented APIs. It is similar to the <a href="https://aws.amazon.com/cdk/">AWS Cloud Development Kit</a> (<em>CDK</em>), which helps users deploy applications using <a href="https://aws.amazon.com/cloudformation/">AWS CloudFormation</a>, but instead of the output being a CloudFormation template, the output is a YAML manifest that can be understood by Kubernetes.</p><h2>AWS and Kubernetes</h2><p>In addition to providing open source development help to Kubernetes, AWS has offered to help defray the considerable expenses of hosting the Kubernetes development and deployment process. Currently, the Kubernetes upstream build process is hosted on the Google Cloud Platform, and its artifact registry is hosted in Google's <a href="https://cloud.google.com/container-registry/">container registry</a>, totaling about 1.5TB worth of storage. AWS alone was paying $90,000 to $100,000 a month in egress costs just to have the Kubernetes code pulled onto AWS-hosted infrastructure, Pipes said.</p><p> </p><p>AWS has been working on a mirror of the Kubernetes assets that would reside on the company's own cloud servers, thereby eliminating the Google egress costs typically borne by the Cloud Native Computing Foundation.</p><p> </p><p>"By doing that we completely eliminate the egress costs out of Google data centers and into AWS data centers," Pipes said.</p>
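<p>The cdk8s workflow described above, declaring resources in a familiar language and then synthesizing a manifest Kubernetes can consume, can be illustrated with a stdlib-only sketch. This is a conceptual illustration, not the real cdk8s API; the <code>Deployment</code> helper and <code>synth</code> function here are hypothetical stand-ins (cdk8s emits YAML, while this sketch emits the equivalent JSON, which the Kubernetes API also accepts):</p>

```python
import json

class Deployment:
    """Hypothetical helper standing in for a cdk8s construct (NOT the real API)."""

    def __init__(self, name: str, image: str, replicas: int = 1):
        self.name = name
        self.image = image
        self.replicas = replicas

    def to_manifest(self) -> dict:
        # Expand a few high-level parameters into a full Kubernetes object,
        # the way a cdk8s construct expands its props at synthesis time.
        labels = {"app": self.name}
        return {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": self.name},
            "spec": {
                "replicas": self.replicas,
                "selector": {"matchLabels": labels},
                "template": {
                    "metadata": {"labels": labels},
                    "spec": {
                        "containers": [{"name": self.name, "image": self.image}]
                    },
                },
            },
        }


def synth(*resources) -> str:
    """Mimic `cdk8s synth`: serialize every declared resource into one manifest."""
    return json.dumps([r.to_manifest() for r in resources], indent=2)


manifest = synth(Deployment("web", "nginx:1.25", replicas=3))
print(manifest)
```

<p>The point of the pattern is that loops, functions and type checks in the host language replace hand-edited YAML; the real cdk8s additionally supports TypeScript, Java and Go and generates typed classes from the Kubernetes API schema.</p>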
]]></content:encoded>
      <enclosure length="29485810" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/78546e1d-fb15-4048-b510-30863d562c76/audio/9d06c99f-10df-43c2-8068-879ad8cb5272/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Kubernetes and Amazon Web Services</itunes:title>
      <itunes:author>Amazon Web Services, The New Stack</itunes:author>
      <itunes:duration>00:30:42</itunes:duration>
      <itunes:summary>Cloud giant Amazon Web Services manages the largest number of Kubernetes clusters in the world, according to the company.  In this podcast recording, AWS Senior Engineer Jay Pipes discusses AWS&apos; use of Kubernetes, as well as the company&apos;s contribution to the Kubernetes code base. The interview was recorded at KubeCon North America last month.

Jay Pipes - @jaypipes  
Joab Jackson - @Joab_Jackson
The New Stack - @thenewstack</itunes:summary>
      <itunes:subtitle>Cloud giant Amazon Web Services manages the largest number of Kubernetes clusters in the world, according to the company.  In this podcast recording, AWS Senior Engineer Jay Pipes discusses AWS&apos; use of Kubernetes, as well as the company&apos;s contribution to the Kubernetes code base. The interview was recorded at KubeCon North America last month.

Jay Pipes - @jaypipes  
Joab Jackson - @Joab_Jackson
The New Stack - @thenewstack</itunes:subtitle>
      <itunes:keywords>software developer, jay pipes, joab jackson, tech podcast, the new stack, devops, devops podcast, amazon web services, tech, developer podcast, kubernetes, the new stack makers, software engineer, cncf, aws</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1368</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ec70012f-5dd9-46d3-b903-6db240502541</guid>
      <title>Case Study: How SeatGeek Adopted HashiCorp’s Nomad</title>
      <description><![CDATA[<p>LOS ANGELES — Kubernetes, the open source container orchestrator, may have a big footprint in the cloud native world, but some organizations are doing just fine without it. Take, for example, SeatGeek, which runs a mobile application that serves as a primary and secondary market for event tickets.</p><p> </p><p>For cloud infrastructure, the 12-year-old company’s workloads — which include non-containerized applications — have largely run on Amazon Web Services. A few years ago, it turned to HashiCorp’s Nomad, a scheduler built for running apps whether they’re containerized or not.</p><p> </p><p>“In the beginning, we had a platform that an engineer would deploy something to, but it was very constrained. We could only give them a certain number of options that they could use, a very static experience,” said <a href="https://www.linkedin.com/in/josediazgonzalez">Jose Diaz-Gonzalez,</a> a staff engineer at SeatGeek, in this episode of The New Stack Makers podcast.</p><p> </p><p>“If they want to scale an application, it required manual toil on the platform team side, and then they can do some work. And so for us, we wanted to expose more of the platform to engineers and allow them to have more control over what it is that they were shipping, how that runtime environment was executed, and how they scale their applications.”</p><p> </p><p>This On the Road episode of Makers, recorded here during HashiConf, HashiCorp’s annual user conference, featured a case study of SeatGeek’s <a href="https://thenewstack.io/conductor-why-we-migrated-from-kubernetes-to-nomad/">adoption of Nomad</a> and the HashiCorp Cloud Platform. The conversation was hosted by <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> features editor of TNS.</p><p> </p><p>This episode was sponsored by HashiCorp.</p><p> </p><h2>Nomad vs. 
Kubernetes: Trade-Offs</h2><p> </p><p>SeatGeek essentially runs the back office for ticket sales for its partners, including Broadway productions and NFL teams like the Dallas Cowboys, providing them with “something like a software as a service,” said Diaz-Gonzalez.</p><p> </p><p>“All of those installations, they're single tenant, but they run roughly the same way for every single customer. And then on the consumer side we run a ton of different services and microservices and that sort of thing.”</p><p> </p><p>Though the workloads run in different languages or on different frameworks, he said, they are essentially homogeneous in their deployment patterns; SeatGeek deploys to Windows and Linux containers on the enterprise side, and to Linux on the consumer side, and deploys to both the U.S. and European Union regions.</p><p> </p><p>It began using Nomad to give developers more control over their applications; previously, the deployment experience had been very constrained, Diaz-Gonzalez said, resulting in what he called “a very static experience.”</p><p> </p><p>“To scale an application required manual toil on the platform team side, and then they can do some work,” he said. “And so for us, we wanted to expose more of the platform to engineers and allow them to have more control over what it is that they were shipping, how that runtime environment was executed and how they scale their applications.”</p><p> </p><p>Now, he said, SeatGeek uses Nomad “to provide basically the entire orchestration layer for our deployments.”</p><p> </p><p>Forgoing Kubernetes (K8s) does have its drawbacks. The <a href="https://landscape.cncf.io/">cloud native ecosystem</a> is largely built around products meant to run with K8s, rather than Nomad.</p><p> </p><p>The ecosystem built around HashiCorp’s product is “a much smaller community. If we need support, we lean heavily on HashiCorp Enterprise. And we're willing, on the support team, to answer questions. 
But if we need support on making some particular change, or using some certain feature, we might be one of the few people starting to use that feature.”</p><p> </p><p>“That said, it's much easier for us to manage and support Nomad and its integration with the rest of our platform, because it's so simple to run.”</p><p> </p><p>To learn more about SeatGeek’s cloud journey and the challenges it faced — such as dealing with security and policy — check out the full episode.</p>
]]></description>
      <pubDate>Wed, 16 Nov 2022 13:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/case-study-how-seatgeek-adopted-hashicorps-nomad-Slgq_KDp</link>
      <content:encoded><![CDATA[<p>LOS ANGELES — Kubernetes, the open source container orchestrator, may have a big footprint in the cloud native world, but some organizations are doing just fine without it. Take, for example, SeatGeek, which runs a mobile application that serves as a primary and secondary market for event tickets.</p><p> </p><p>For cloud infrastructure, the 12-year-old company’s workloads — which include non-containerized applications — have largely run on Amazon Web Services. A few years ago, it turned to HashiCorp’s Nomad, a scheduler built for running apps whether they’re containerized or not.</p><p> </p><p>“In the beginning, we had a platform that an engineer would deploy something to, but it was very constrained. We could only give them a certain number of options that they could use, a very static experience,” said <a href="https://www.linkedin.com/in/josediazgonzalez">Jose Diaz-Gonzalez,</a> a staff engineer at SeatGeek, in this episode of The New Stack Makers podcast.</p><p> </p><p>“If they want to scale an application, it required manual toil on the platform team side, and then they can do some work. And so for us, we wanted to expose more of the platform to engineers and allow them to have more control over what it is that they were shipping, how that runtime environment was executed, and how they scale their applications.”</p><p> </p><p>This On the Road episode of Makers, recorded here during HashiConf, HashiCorp’s annual user conference, featured a case study of SeatGeek’s <a href="https://thenewstack.io/conductor-why-we-migrated-from-kubernetes-to-nomad/">adoption of Nomad</a> and the HashiCorp Cloud Platform. The conversation was hosted by <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> features editor of TNS.</p><p> </p><p>This episode was sponsored by HashiCorp.</p><p> </p><h2>Nomad vs. 
Kubernetes: Trade-Offs</h2><p> </p><p>SeatGeek essentially runs the back office for ticket sales for its partners, including Broadway productions and NFL teams like the Dallas Cowboys, providing them with “something like a software as a service,” said Diaz-Gonzalez.</p><p> </p><p>“All of those installations, they're single tenant, but they run roughly the same way for every single customer. And then on the consumer side we run a ton of different services and microservices and that sort of thing.”</p><p> </p><p>Though the workloads run in different languages or on different frameworks, he said, they are essentially homogeneous in their deployment patterns; SeatGeek deploys to Windows and Linux containers on the enterprise side, and to Linux on the consumer side, and deploys to both the U.S. and European Union regions.</p><p> </p><p>It began using Nomad to give developers more control over their applications; previously, the deployment experience had been very constrained, Diaz-Gonzalez said, resulting in what he called “a very static experience.”</p><p> </p><p>“To scale an application required manual toil on the platform team side, and then they can do some work,” he said. “And so for us, we wanted to expose more of the platform to engineers and allow them to have more control over what it is that they were shipping, how that runtime environment was executed and how they scale their applications.”</p><p> </p><p>Now, he said, SeatGeek uses Nomad “to provide basically the entire orchestration layer for our deployments.”</p><p> </p><p>Forgoing Kubernetes (K8s) does have its drawbacks. The <a href="https://landscape.cncf.io/">cloud native ecosystem</a> is largely built around products meant to run with K8s, rather than Nomad.</p><p> </p><p>The ecosystem built around HashiCorp’s product is “a much smaller community. If we need support, we lean heavily on HashiCorp Enterprise. And we're willing, on the support team, to answer questions. 
But if we need support on making some particular change, or using some certain feature, we might be one of the few people starting to use that feature.”</p><p> </p><p>“That said, it's much easier for us to manage and support Nomad and its integration with the rest of our platform, because it's so simple to run.”</p><p> </p><p>To learn more about SeatGeek’s cloud journey and the challenges it faced — such as dealing with security and policy — check out the full episode.</p>
]]></content:encoded>
      <enclosure length="12795864" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/5611fe24-8302-4798-8c09-fbae445b0dbf/audio/e6bada20-338d-41c6-a1f4-d590f1289f5b/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Case Study: How SeatGeek Adopted HashiCorp’s Nomad</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/2aced310-581d-4651-a5f6-9682ad637d2c/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:13:19</itunes:duration>
      <itunes:summary>LOS ANGELES — Kubernetes, the open source container orchestrator, may have a big footprint in the cloud native world, but some organizations are doing just fine without it. Take, for example, SeatGeek, which runs a mobile application that serves as a primary and secondary market for event tickets.

For cloud infrastructure, the 12-year-old company’s workloads — which include non-containerized applications — have largely run on Amazon Web Services. A few years ago, it turned to HashiCorp’s Nomad, a scheduler built for running apps, whether they’re containerized or not.

“In the beginning, we had a platform that an engineer would deploy something to, but it was very constrained. We could only give them a certain number of options that they could use, as a very static experience,” said Jose Diaz-Gonzalez, a staff engineer at SeatGeek, in this episode of The New Stack Makers podcast.</itunes:summary>
      <itunes:subtitle>LOS ANGELES — Kubernetes, the open source container orchestrator, may have a big footprint in the cloud native world, but some organizations are doing just fine without it. Take, for example, SeatGeek, which runs a mobile application that serves as a primary and secondary market for event tickets.

For cloud infrastructure, the 12-year-old company’s workloads — which include non-containerized applications — have largely run on Amazon Web Services. A few years ago, it turned to HashiCorp’s Nomad, a scheduler built for running apps, whether they’re containerized or not.

“In the beginning, we had a platform that an engineer would deploy something to, but it was very constrained. We could only give them a certain number of options that they could use, as a very static experience,” said Jose Diaz-Gonzalez, a staff engineer at SeatGeek, in this episode of The New Stack Makers podcast.</itunes:subtitle>
      <itunes:keywords>nomad, software developer, seat geek, tech podcast, the new stack, heather joslyn, hashiconf global 2022, devops, devops podcast, jose diaz-gonzales, tech, developer podcast, hashiconf, the new stack makers, software engineer, hashicorp</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1367</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">02dbc166-52f0-4b52-8b30-3f57a736e907</guid>
      <title>OpenTelemetry Properly Explained and Demoed</title>
<description><![CDATA[<p>The <a href="https://opentelemetry.io/">OpenTelemetry</a> project offers vendor-neutral integration points that help organizations obtain the raw materials — the "telemetry" — that fuel modern observability tools, with minimal effort at integration time. But what does OpenTelemetry mean for those who use their favorite observability tools but don’t exactly understand how it can help them? How might OpenTelemetry be relevant to the folks who are new to Kubernetes (the majority of KubeCon attendees during the past years) and those who are just getting started with observability? </p><p> </p><p><a href="https://www.linkedin.com/in/austinlparker">Austin Parker</a>, head of developer relations at Lightstep, and <a href="https://ca.linkedin.com/in/morganmclean">Morgan McLean</a>, director of product management at Splunk, discuss during this podcast at <a href="https://events.linuxfoundation.org/gitopscon-north-america/">KubeCon + CloudNativeCon 2022</a> how the OpenTelemetry project has created demo services to help cloud native community members better understand cloud native development practices and test out OpenTelemetry, as well as Kubernetes, observability software and more. </p><p> </p><p>At this juncture in DevOps history, there has been considerable hype around observability for developers and operations teams. More recently, much attention has been given to combining the different observability solutions in use through a single interface; to that end, OpenTelemetry has emerged as a key standard. </p><p> </p><p>DevOps teams today need OpenTelemetry since they typically work with a lot of different data sources for observability processes, Parker said. “If you want observability, you need to transform and send that data out to any number of open source or commercial solutions and you need a lingua franca to be consistent. 
Every time I have a host, or an IP address, or any kind of metadata, consistency is key and that's what OpenTelemetry provides.”</p><p> </p><p>Additionally, as a developer or an operator, OpenTelemetry serves to instrument your system for observability, McLean said. “OpenTelemetry does that through the power of the community working together to define those standards and to provide the components needed to extract that data among hundreds of thousands of different combinations of software and hardware and infrastructure that people are using.”</p><p> </p><p>Observability and OpenTelemetry, while conceptually straightforward, do involve a learning curve. To that end, the OpenTelemetry project has released a demo to help: it is intended to help users better understand cloud native development practices and test out OpenTelemetry, as well as Kubernetes, observability software and more, the project’s creators say.</p><p> </p><p>The <a href="https://github.com/open-telemetry/opentelemetry-demo/tree/v1.0.0">OpenTelemetry Demo v1.0</a> general release is available on GitHub and on the <a href="https://opentelemetry.io/blog/2022/announcing-opentelemetry-demo-release/">OpenTelemetry site.</a> The demo teaches how to add instrumentation to an application to gather metrics, logs and traces for observability. It offers detailed instruction on open source projects like Prometheus for Kubernetes and <a href="https://www.jaegertracing.io/">Jaeger</a> for distributed tracing, and shows how to get acquainted with tools such as Grafana for creating dashboards. The demo also extends to scenarios in which failures are created and OpenTelemetry data is used for troubleshooting and remediation. It was designed for beginner- and intermediate-level users, and can be set up to run on Docker or Kubernetes in about five minutes. </p><p> </p><p>“The demo is a great way for people to get started,” Parker said. 
“We've also seen a lot of great uptake from our commercial partners as well who have said ‘we'll use this to demo our platform.’”</p>
]]></description>
      <pubDate>Tue, 15 Nov 2022 13:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/opentelemetry-properly-explained-and-demoed-_VQuFcgR</link>
<content:encoded><![CDATA[<p>The <a href="https://opentelemetry.io/">OpenTelemetry</a> project offers vendor-neutral integration points that help organizations obtain the raw materials — the "telemetry" — that fuel modern observability tools, with minimal effort at integration time. But what does OpenTelemetry mean for those who use their favorite observability tools but don’t exactly understand how it can help them? How might OpenTelemetry be relevant to the folks who are new to Kubernetes (the majority of KubeCon attendees during the past years) and those who are just getting started with observability? </p><p> </p><p><a href="https://www.linkedin.com/in/austinlparker">Austin Parker</a>, head of developer relations at Lightstep, and <a href="https://ca.linkedin.com/in/morganmclean">Morgan McLean</a>, director of product management at Splunk, discuss during this podcast at <a href="https://events.linuxfoundation.org/gitopscon-north-america/">KubeCon + CloudNativeCon 2022</a> how the OpenTelemetry project has created demo services to help cloud native community members better understand cloud native development practices and test out OpenTelemetry, as well as Kubernetes, observability software and more. </p><p> </p><p>At this juncture in DevOps history, there has been considerable hype around observability for developers and operations teams. More recently, much attention has been given to combining the different observability solutions in use through a single interface; to that end, OpenTelemetry has emerged as a key standard. </p><p> </p><p>DevOps teams today need OpenTelemetry since they typically work with a lot of different data sources for observability processes, Parker said. “If you want observability, you need to transform and send that data out to any number of open source or commercial solutions and you need a lingua franca to be consistent. 
Every time I have a host, or an IP address, or any kind of metadata, consistency is key and that's what OpenTelemetry provides.”</p><p> </p><p>Additionally, as a developer or an operator, OpenTelemetry serves to instrument your system for observability, McLean said. “OpenTelemetry does that through the power of the community working together to define those standards and to provide the components needed to extract that data among hundreds of thousands of different combinations of software and hardware and infrastructure that people are using.”</p><p> </p><p>Observability and OpenTelemetry, while conceptually straightforward, do involve a learning curve. To that end, the OpenTelemetry project has released a demo to help: it is intended to help users better understand cloud native development practices and test out OpenTelemetry, as well as Kubernetes, observability software and more, the project’s creators say.</p><p> </p><p>The <a href="https://github.com/open-telemetry/opentelemetry-demo/tree/v1.0.0">OpenTelemetry Demo v1.0</a> general release is available on GitHub and on the <a href="https://opentelemetry.io/blog/2022/announcing-opentelemetry-demo-release/">OpenTelemetry site.</a> The demo teaches how to add instrumentation to an application to gather metrics, logs and traces for observability. It offers detailed instruction on open source projects like Prometheus for Kubernetes and <a href="https://www.jaegertracing.io/">Jaeger</a> for distributed tracing, and shows how to get acquainted with tools such as Grafana for creating dashboards. The demo also extends to scenarios in which failures are created and OpenTelemetry data is used for troubleshooting and remediation. It was designed for beginner- and intermediate-level users, and can be set up to run on Docker or Kubernetes in about five minutes. </p><p> </p><p>“The demo is a great way for people to get started,” Parker said. 
“We've also seen a lot of great uptake from our commercial partners as well who have said ‘we'll use this to demo our platform.’”</p>
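<p>For those who want to try the demo themselves, its README (at the time of the v1.0 release) documented a Docker-based quick start along these lines; exact commands may have changed since:</p>

```shell
# Clone the OpenTelemetry demo and start it with Docker Compose
git clone https://github.com/open-telemetry/opentelemetry-demo.git
cd opentelemetry-demo
docker compose up --no-build
# Once the containers are up, the bundled Jaeger and Grafana UIs
# expose the traces, metrics and dashboards discussed above.
```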
]]></content:encoded>
      <enclosure length="17537682" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/c5aa4094-2286-4d29-94ae-e09e07cd41cb/audio/0e20aa5c-08ca-4e05-8cd7-74d25fa559a4/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>OpenTelemetry Properly Explained and Demoed</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/b7f55a77-6187-41bc-a1f1-86cca7289199/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:18:16</itunes:duration>
<itunes:summary>The OpenTelemetry project offers vendor-neutral integration points that help organizations obtain the raw materials — the &quot;telemetry&quot; — that fuel modern observability tools, with minimal effort at integration time. But what does OpenTelemetry mean for those who use their favorite observability tools but don’t exactly understand how it can help them? How might OpenTelemetry be relevant to the folks who are new to Kubernetes (the majority of KubeCon attendees during the past years) and those who are just getting started with observability? 

Austin Parker, head of developer relations at Lightstep, and Morgan McLean, director of product management at Splunk, discuss during this podcast at KubeCon + CloudNativeCon 2022 how the OpenTelemetry project has created demo services to help cloud native community members better understand cloud native development practices and test out OpenTelemetry, as well as Kubernetes, observability software and more. </itunes:summary>
<itunes:subtitle>The OpenTelemetry project offers vendor-neutral integration points that help organizations obtain the raw materials — the &quot;telemetry&quot; — that fuel modern observability tools, with minimal effort at integration time. But what does OpenTelemetry mean for those who use their favorite observability tools but don’t exactly understand how it can help them? How might OpenTelemetry be relevant to the folks who are new to Kubernetes (the majority of KubeCon attendees during the past years) and those who are just getting started with observability? 

Austin Parker, head of developer relations at Lightstep, and Morgan McLean, director of product management at Splunk, discuss during this podcast at KubeCon + CloudNativeCon 2022 how the OpenTelemetry project has created demo services to help cloud native community members better understand cloud native development practices and test out OpenTelemetry, as well as Kubernetes, observability software and more. </itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, devops, bruce gain, devops podcast, lightstep, tech, developer podcast, kubernetes, open telemetry, the new stack makers, software engineer, austin parker, morgan mcclean, kubecon, splunk</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1366</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">12466016-2d27-4ba6-9a6d-44e62e1452a9</guid>
      <title>The Latest Milestones on WebAssembly&apos;s Road to Maturity</title>
<description><![CDATA[<p>DETROIT — Even in the midst of hand-wringing at KubeCon + CloudNativeCon North America about how the global economy will make it tough for startups to gain support in the near future, the news about a couple of young WebAssembly-centric companies was bright.</p><p> </p><p>Cosmonic announced that it had raised $8.5 million in a seed round led by Vertex Ventures. And Fermyon Technologies unveiled both funding and product news: a $20 million Series A led by Insight Partners (which also owns The New Stack) and the launch of <a href="https://www.fermyon.com/cloud">Fermyon Cloud</a>, a hosted platform for running <a href="https://webassembly.org/">WebAssembly (Wasm)</a> microservices. Both <a href="https://thenewstack.io/what-makes-wasm-different/">Cosmonic</a> and <a href="https://thenewstack.io/whats-next-in-webassembly/">Fermyon</a> were founded in 2021.</p><p> </p><p>“A lot of people think that Wasm is this maybe up and coming thing, or it's just totally new thing that's out there in the future,” noted <a href="https://www.linkedin.com/in/baileyhayes">Bailey Hayes,</a> a director at Cosmonic, in this episode of The New Stack Makers podcast.</p><p> </p><p>But the future is already here, she said: “It's one of technology's best kept secrets, because you're using it today, all over. And many of the applications that we use day-to-day — Zoom, Google Meet, Prime Video, I mean, it really is everywhere. The thing that's going to change for developers is that this will be their compilation target in their build file.”</p><p> </p><p>In this On the Road episode of Makers, recorded at KubeCon here in the Motor City, Hayes and <a href="https://www.linkedin.com/in/kate-goldenring-33aa45126">Kate Goldenring,</a> a software engineer at Fermyon, spoke to <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> TNS’ features editor, about the state of WebAssembly. 
This episode was sponsored by the Cloud Native Computing Foundation (CNCF).</p><p> </p><h2>Wasm and Docker, Java, Python</h2><p> </p><p><a href="https://thenewstack.io/what-is-webassembly/">WebAssembly</a>, the roughly five-year-old binary instruction format for a stack-based virtual machine, is designed to execute binary code on the web and lets developers bring the performance of languages like <a href="https://thenewstack.io/how-to-compile-c-code-into-webassembly-with-emscripten/">C, C++</a> and <a href="https://thenewstack.io/using-web-assembly-written-in-rust-on-the-server-side/">Rust</a> to web development.</p><p> </p><p>At Wasm Day, a co-located event that preceded KubeCon, support for a number of other languages — including Java, .Net, Python and PHP — was announced. At the same event, Docker also revealed that <a href="https://docs.docker.com/desktop/wasm/">it has added Wasm as a runtime</a> that developers can target; that feature is now in beta.</p><p> </p><p>Such steps move WebAssembly closer to fulfilling its promise to devs that they can “build once, run anywhere.”</p><p> </p><p>“With Wasm, developers shouldn't need to know necessarily that it's their compilation target,” said Hayes. But, she added, “what you do know is that you're now able to move that Wasm module anywhere in any cloud. The same one that you built on your desktop that might be on Windows can go and run on an ARM Linux server.”</p><p> </p><p>Goldenring pointed to the findings of the <a href="https://www.cncf.io/blog/2022/10/24/cncf-wasm-microsurvey-a-transformative-technology-yes-but-time-to-get-serious/">CNCF’s “mini survey” of WebAssembly users,</a> released at Wasm Day, as evidence that the technology’s use cases are proliferating quickly.</p><p> </p><p>“Even though WebAssembly was made for the web, the number one response — it was around a little over 60% — said serverless,” she noted. 
“And then it said, the edge and then it said web development, and then it said IoT, and the use cases just keep going. And that's because it is this incredibly powerful, portable target that you can put in all these different use cases. It's secure, it has instant startup time.”</p><p> </p><h2>Worlds and Warg Craft</h2><p> </p><p>The podcast guests talked about recent efforts to make it easier to use Wasm, share code and reuse it, including <a href="https://thenewstack.io/whats-stopping-webassembly-from-widespread-adoption/">the development of the component model, which proponents hope will simplify how WebAssembly works outside the browser.</a> Goldenring and Hayes discussed efforts now under construction, including “worlds” files and Warg, a package registry for WebAssembly. (Hayes <a href="https://youtu.be/lihQEVhOR58">co-presented at Wasm Day</a> on the work being done on WebAssembly package management, including Warg.)</p><p> </p><p>A world file, Hayes said, is a way of defining your environment. “One way to think of it is like .profile, but for Wasm, for a component. And so it tells me what types of capabilities I need for my web module to run successfully in the runtime and can read that and give me the right stuff.”</p><p> </p><p>And as for Warg, Hayes said: “It's really a protocol and a set of APIs, so that we can slot it into existing ecosystems. A lot of people think of it as us trying to pave over existing technologies. And that's really not the case. The purpose of Warg is to be able to slot right in, so that you continue working in your current developer environment and experience and using the packages that you're used to. But get all of the advantages of the component model, which is this new specification we've been working on” at the <a href="https://www.w3.org/wasm/">W3C's WebAssembly Working Group.</a></p><p> </p><p>Goldenring added another finding from the CNCF survey: “Around 30% of people wanted better code reuse. 
That's a sign of a more mature ecosystem. So having something like Warg is going to help everyone who's involved in the server side of the WebAssembly space.”</p><p> </p><p>Listen to the full conversation to learn more about WebAssembly and how these two companies are tackling its challenges for developers.</p>
]]></description>
      <pubDate>Thu, 10 Nov 2022 21:16:52 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-latest-milestones-on-webassemblys-road-to-maturity-mNvnG_Dc</link>
<content:encoded><![CDATA[<p>DETROIT — Even in the midst of hand-wringing at KubeCon + CloudNativeCon North America about how the global economy will make it tough for startups to gain support in the near future, the news about a couple of young WebAssembly-centric companies was bright.</p><p> </p><p>Cosmonic announced that it had raised $8.5 million in a seed round led by Vertex Ventures. And Fermyon Technologies unveiled both funding and product news: a $20 million Series A led by Insight Partners (which also owns The New Stack) and the launch of <a href="https://www.fermyon.com/cloud">Fermyon Cloud</a>, a hosted platform for running <a href="https://webassembly.org/">WebAssembly (Wasm)</a> microservices. Both <a href="https://thenewstack.io/what-makes-wasm-different/">Cosmonic</a> and <a href="https://thenewstack.io/whats-next-in-webassembly/">Fermyon</a> were founded in 2021.</p><p> </p><p>“A lot of people think that Wasm is this maybe up and coming thing, or it's just totally new thing that's out there in the future,” noted <a href="https://www.linkedin.com/in/baileyhayes">Bailey Hayes,</a> a director at Cosmonic, in this episode of The New Stack Makers podcast.</p><p> </p><p>But the future is already here, she said: “It's one of technology's best kept secrets, because you're using it today, all over. And many of the applications that we use day-to-day — Zoom, Google Meet, Prime Video, I mean, it really is everywhere. The thing that's going to change for developers is that this will be their compilation target in their build file.”</p><p> </p><p>In this On the Road episode of Makers, recorded at KubeCon here in the Motor City, Hayes and <a href="https://www.linkedin.com/in/kate-goldenring-33aa45126">Kate Goldenring,</a> a software engineer at Fermyon, spoke to <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> TNS’ features editor, about the state of WebAssembly. 
This episode was sponsored by the Cloud Native Computing Foundation (CNCF).</p><p> </p><h2>Wasm and Docker, Java, Python</h2><p> </p><p><a href="https://thenewstack.io/what-is-webassembly/">WebAssembly</a>, the roughly five-year-old binary instruction format for a stack-based virtual machine, is designed to execute binary code on the web and lets developers bring the performance of languages like <a href="https://thenewstack.io/how-to-compile-c-code-into-webassembly-with-emscripten/">C, C++</a> and <a href="https://thenewstack.io/using-web-assembly-written-in-rust-on-the-server-side/">Rust</a> to web development.</p><p> </p><p>At Wasm Day, a co-located event that preceded KubeCon, support for a number of other languages — including Java, .Net, Python and PHP — was announced. At the same event, Docker also revealed that <a href="https://docs.docker.com/desktop/wasm/">it has added Wasm as a runtime</a> that developers can target; that feature is now in beta.</p><p> </p><p>Such steps move WebAssembly closer to fulfilling its promise to devs that they can “build once, run anywhere.”</p><p> </p><p>“With Wasm, developers shouldn't need to know necessarily that it's their compilation target,” said Hayes. But, she added, “what you do know is that you're now able to move that Wasm module anywhere in any cloud. The same one that you built on your desktop that might be on Windows can go and run on an ARM Linux server.”</p><p> </p><p>Goldenring pointed to the findings of the <a href="https://www.cncf.io/blog/2022/10/24/cncf-wasm-microsurvey-a-transformative-technology-yes-but-time-to-get-serious/">CNCF’s “mini survey” of WebAssembly users,</a> released at Wasm Day, as evidence that the technology’s use cases are proliferating quickly.</p><p> </p><p>“Even though WebAssembly was made for the web, the number one response — it was around a little over 60% — said serverless,” she noted. 
“And then it said, the edge and then it said web development, and then it said IoT, and the use cases just keep going. And that's because it is this incredibly powerful, portable target that you can put in all these different use cases. It's secure, it has instant startup time.”</p><p> </p><h2>Worlds and Warg Craft</h2><p> </p><p>The podcast guests talked about recent efforts to make it easier to use Wasm, share code and reuse it, including <a href="https://thenewstack.io/whats-stopping-webassembly-from-widespread-adoption/">the development of the component model, which proponents hope will simplify how WebAssembly works outside the browser.</a> Goldenring and Hayes discussed efforts now under construction, including “worlds” files and Warg, a package registry for WebAssembly. (Hayes <a href="https://youtu.be/lihQEVhOR58">co-presented at Wasm Day</a> on the work being done on WebAssembly package management, including Warg.)</p><p> </p><p>A world file, Hayes said, is a way of defining your environment. “One way to think of it is like .profile, but for Wasm, for a component. And so it tells me what types of capabilities I need for my web module to run successfully in the runtime and can read that and give me the right stuff.”</p><p> </p><p>And as for Warg, Hayes said: “It's really a protocol and a set of APIs, so that we can slot it into existing ecosystems. A lot of people think of it as us trying to pave over existing technologies. And that's really not the case. The purpose of Warg is to be able to slot right in, so that you continue working in your current developer environment and experience and using the packages that you're used to. But get all of the advantages of the component model, which is this new specification we've been working on” at the <a href="https://www.w3.org/wasm/">W3C's WebAssembly Working Group.</a></p><p> </p><p>Goldenring added another finding from the CNCF survey: “Around 30% of people wanted better code reuse. 
That's a sign of a more mature ecosystem. So having something like Warg is going to help everyone who's involved in the server side of the WebAssembly space.”</p><p> </p><p>Listen to the full conversation to learn more about WebAssembly and how these two companies are tackling its challenges for developers.</p>
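<p>To make the “compilation target” idea concrete, here is a generic sketch (not from the episode) of compiling a Rust program to a portable Wasm module, using the WASI target name in use at the time; the output filename is a placeholder for the crate’s actual name:</p>

```shell
# Add the WASI compilation target to the Rust toolchain
rustup target add wasm32-wasi
# Build the same codebase into a portable .wasm module
cargo build --release --target wasm32-wasi
# The module then runs unchanged on any WASI-capable runtime, e.g. wasmtime
wasmtime run target/wasm32-wasi/release/app.wasm
```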
]]></content:encoded>
      <enclosure length="15517267" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/7a057784-8921-45eb-9087-a6ff5a53c971/audio/a3d9a853-e060-4869-b19a-e8f8361ecd1a/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The Latest Milestones on WebAssembly&apos;s Road to Maturity</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/d0354719-e148-41f7-821a-3004f4d74d47/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:16:09</itunes:duration>
      <itunes:summary>DETROIT — Even in the midst of hand-wringing at KubeCon + CloudNativeCon North America about how the global economy will make it tough for startups to gain support in the near future, the news about a couple of young WebAssembly-centric companies was bright.

Cosmonic announced that it had raised $8.5 million in a seed round led by Vertex Ventures. And Fermyon Technologies unveiled both funding and product news: a $20 million Series A led by Insight Partners (which also owns The New Stack) and the launch of Fermyon Cloud, a hosted platform for running WebAssembly (Wasm) microservices. Both Cosmonic and Fermyon were founded in 2021.

“A lot of people think that Wasm is this maybe up and coming thing, or it&apos;s just totally new thing that&apos;s out there in the future,” noted Bailey Hayes, a director at Cosmonic, in this episode of The New Stack Makers podcast.

But the future is already here, she said: “It&apos;s one of technology&apos;s best kept secrets, because you&apos;re using it today, all over. And many of the applications that we use day-to-day —  Zoom, Google Meet, Prime Video, I mean, it really is everywhere. The thing that&apos;s going to change for developers is that this will be their compilation target in their build file.”

In this On the Road episode of Makers, recorded at KubeCon here in the Motor City, Hayes and Kate Goldenring, a software engineer at Fermyon, spoke to Heather Joslyn, TNS’ features editor, about the state of WebAssembly. This episode was sponsored by the Cloud Native Computing Foundation (CNCF).</itunes:summary>
      <itunes:subtitle>DETROIT — Even in the midst of hand-wringing at KubeCon + CloudNativeCon North America about how the global economy will make it tough for startups to gain support in the near future, the news about a couple of young WebAssembly-centric companies was bright.

Cosmonic announced that it had raised $8.5 million in a seed round led by Vertex Ventures. And Fermyon Technologies unveiled both funding and product news: a $20 million Series A led by Insight Partners (which also owns The New Stack) and the launch of Fermyon Cloud, a hosted platform for running WebAssembly (Wasm) microservices. Both Cosmonic and Fermyon were founded in 2021.

“A lot of people think that Wasm is this maybe up and coming thing, or it&apos;s just totally new thing that&apos;s out there in the future,” noted Bailey Hayes, a director at Cosmonic, in this episode of The New Stack Makers podcast.

But the future is already here, she said: “It&apos;s one of technology&apos;s best kept secrets, because you&apos;re using it today, all over. And many of the applications that we use day-to-day —  Zoom, Google Meet, Prime Video, I mean, it really is everywhere. The thing that&apos;s going to change for developers is that this will be their compilation target in their build file.”

In this On the Road episode of Makers, recorded at KubeCon here in the Motor City, Hayes and Kate Goldenring, a software engineer at Fermyon, spoke to Heather Joslyn, TNS’ features editor, about the state of WebAssembly. This episode was sponsored by the Cloud Native Computing Foundation (CNCF).</itunes:subtitle>
      <itunes:keywords>software developer, wasm, cosmonic, tech podcast, the new stack, devops, devops podcast, tech, fermyon, developer podcast, webassembly, kubernetes, the new stack makers, bailey hayes, kubecon detroit 2022, software engineer, kubecon, kate goldenring</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1365</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">bac78bb8-9373-4abc-9145-a64901e4f529</guid>
      <title>Zero Trust Security and the HashiCorp Cloud Platform</title>
<description><![CDATA[<p>Organizations are now, almost by default, becoming multi-cloud operations. No cloud service offers the full breadth of what an enterprise may need, and enterprises find themselves using more than one service, often inadvertently.</p><p> </p><p>HashiCorp is one company preparing enterprises for the challenges of managing more than a single cloud, through the use of a coherent set of software tools. To learn more, we spoke with <a class="ext-link" href="https://www.linkedin.com/in/megan-laflamme-b6362315/" rel="external ">Megan Laflamme</a>, <a class="ext-link" href="https://www.hashicorp.com/?utm_content=inline-mention" target="_blank" rel="external noopener">HashiCorp</a> director of product marketing, at the HashiConf user conference, for this latest episode of The New Stack Makers podcast. We talked about zero trust computing, the importance of identity, and the general availability of the HashiCorp Boundary single sign-on tool.</p><p> </p><p>"In the cloud operating model, the [security] perimeter is no longer static, and you move to a much more dynamic infrastructure environment," she explained.</p><p><h2><strong>What is the HashiCorp Cloud Platform?</strong></h2></p><p>The <a class="ext-link" href="https://cloud.hashicorp.com/products/boundary" rel="external ">HashiCorp Cloud Platform</a> (HCP) is a fully managed platform offering HashiCorp software including Consul, Vault, and other services, all connected through HashiCorp Virtual Networks (HVN). 
Through a web portal or via Terraform, HCP can manage logins, access control, and billing across multiple cloud assets.</p><p> </p><p>The HashiCorp Cloud Platform <a href="https://thenewstack.io/hashicorp-cloud-can-now-spin-up-a-single-sign-on-zero-trust-network/">now offers</a> single sign-on, reducing much of the headache of signing into multiple applications and services.</p><p><h2>What is HashiCorp Boundary?</h2></p><p>Boundary is the client that enables this “secure remote access,” and it is now generally available to users of the platform. It manages fine-grained authorizations through trusted identities and <a href="https://thenewstack.io/hashicorp-cloud-can-now-spin-up-a-single-sign-on-zero-trust-network/">provides</a> session connection and establishment, along with credential issuance and revocation.</p><p> </p><p>"With Boundary, we enable a much more streamlined workflow for permitting access to critical infrastructure where we have integrations with cloud providers or service registries," Laflamme said.</p><p> </p><p><a class="ext-link" href="https://cloud.hashicorp.com/products/boundary" rel="external ">HCP Boundary</a> is a fully managed version of <a class="ext-link" href="https://www.boundaryproject.io/" rel="external ">HashiCorp Boundary</a> that runs on the HashiCorp Cloud. With Boundary, the user <a class="ext-link" href="https://www.hashicorp.com/solutions/zero-trust-security" rel="external ">signs on once</a>, and everything else is handled beneath the floorboards, so to speak. Identities for applications, networks, and people are handled through HashiCorp Vault and HashiCorp Consul. Every action is authorized and documented.</p><p> </p><p>Boundary authenticates and authorizes users by drawing on existing identity providers (IdPs) such as Okta, Azure Active Directory, and GitHub. Consul authenticates and authorizes access between applications and services. 
This way, networks aren’t exposed, and there is no need to issue and distribute credentials. <a class="ext-link" href="https://developer.hashicorp.com/boundary/tutorials/hcp-administration/hcp-ssh-cred-injection?in=boundary%2Fhcp-administration" rel="external ">Dynamic credential injection for user sessions</a> is done with HashiCorp Vault, which injects single-use credentials for passwordless authentication to the remote host.</p><p><h2>What is Zero Trust Security?</h2></p><p>With <a class="local-link" href="https://thenewstack.io/beyondcorp-google-ditched-virtual-private-networking-internal-applications/">zero trust security</a>, users are authenticated at the service level rather than through a centralized firewall, which becomes increasingly infeasible in multi-cloud designs.</p><p> </p><p>In the industry, there is a shift “from high trust IP based authorization in the more static data centers and infrastructure, to the cloud, to a low trust model where everything is predicated on <a class="local-link" href="https://thenewstack.io/what-do-authentication-and-authorization-mean-in-zero-trust/">identity</a>,” Laflamme explained.</p><p> </p><p>This approach does require users to sign on to each individual service, in some form, which can be a headache for those (i.e., developers and system engineers) who sign on to many apps in their daily routine.</p><p> </p><p> </p>
]]></description>
      <pubDate>Wed, 9 Nov 2022 16:40:23 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/zero-trust-security-and-the-hashicorp-cloud-platform-2jW_FFDb</link>
      <content:encoded><![CDATA[<p>Organizations are now, almost by default, becoming multi-cloud operations. No single cloud service offers the full breadth of what an enterprise may need, and enterprises find themselves using more than one service, often inadvertently.</p><p> </p><p>HashiCorp is one company preparing enterprises for the challenges of managing more than a single cloud, through a coherent set of software tools. To learn more, we spoke with <a class="ext-link" href="https://www.linkedin.com/in/megan-laflamme-b6362315/" rel="external ">Megan Laflamme</a>, <a class="ext-link" href="https://www.hashicorp.com/?utm_content=inline-mention" target="_blank" rel="external noopener">HashiCorp</a> director of product marketing, at the HashiConf user conference for this latest episode of The New Stack Makers podcast. We talked about zero trust computing, the importance of identity, and the general availability of the HashiCorp Boundary single sign-on tool.</p><p> </p><p>"In the cloud operating model, the [security] perimeter is no longer static, and you move to a much more dynamic infrastructure environment," she explained.</p><p><h2><strong>What is the HashiCorp Cloud Platform?</strong></h2></p><p>The <a class="ext-link" href="https://cloud.hashicorp.com/products/boundary" rel="external ">HashiCorp Cloud Platform</a> (HCP) is a fully managed platform offering HashiCorp software, including Consul, Vault, and other services, all connected through HashiCorp Virtual Networks (HVN). 
Through a web portal or via Terraform, HCP can manage logins, access control, and billing across multiple cloud assets.</p><p> </p><p>The HashiCorp Cloud Platform <a href="https://thenewstack.io/hashicorp-cloud-can-now-spin-up-a-single-sign-on-zero-trust-network/">now offers</a> single sign-on, reducing much of the headache of signing into multiple applications and services.</p><p><h2>What is HashiCorp Boundary?</h2></p><p>Boundary is the client that enables this “secure remote access,” and it is now generally available to users of the platform. It manages fine-grained authorizations through trusted identities and <a href="https://thenewstack.io/hashicorp-cloud-can-now-spin-up-a-single-sign-on-zero-trust-network/">provides</a> session connection and establishment, along with credential issuance and revocation.</p><p> </p><p>"With Boundary, we enable a much more streamlined workflow for permitting access to critical infrastructure where we have integrations with cloud providers or service registries," Laflamme said.</p><p> </p><p><a class="ext-link" href="https://cloud.hashicorp.com/products/boundary" rel="external ">HCP Boundary</a> is a fully managed version of <a class="ext-link" href="https://www.boundaryproject.io/" rel="external ">HashiCorp Boundary</a> that runs on the HashiCorp Cloud. With Boundary, the user <a class="ext-link" href="https://www.hashicorp.com/solutions/zero-trust-security" rel="external ">signs on once</a>, and everything else is handled beneath the floorboards, so to speak. Identities for applications, networks, and people are handled through HashiCorp Vault and HashiCorp Consul. Every action is authorized and documented.</p><p> </p><p>Boundary authenticates and authorizes users by drawing on existing identity providers (IdPs) such as Okta, Azure Active Directory, and GitHub. Consul authenticates and authorizes access between applications and services. 
This way, networks aren’t exposed, and there is no need to issue and distribute credentials. <a class="ext-link" href="https://developer.hashicorp.com/boundary/tutorials/hcp-administration/hcp-ssh-cred-injection?in=boundary%2Fhcp-administration" rel="external ">Dynamic credential injection for user sessions</a> is done with HashiCorp Vault, which injects single-use credentials for passwordless authentication to the remote host.</p><p><h2>What is Zero Trust Security?</h2></p><p>With <a class="local-link" href="https://thenewstack.io/beyondcorp-google-ditched-virtual-private-networking-internal-applications/">zero trust security</a>, users are authenticated at the service level rather than through a centralized firewall, which becomes increasingly infeasible in multi-cloud designs.</p><p> </p><p>In the industry, there is a shift “from high trust IP based authorization in the more static data centers and infrastructure, to the cloud, to a low trust model where everything is predicated on <a class="local-link" href="https://thenewstack.io/what-do-authentication-and-authorization-mean-in-zero-trust/">identity</a>,” Laflamme explained.</p><p> </p><p>This approach does require users to sign on to each individual service, in some form, which can be a headache for those (i.e., developers and system engineers) who sign on to many apps in their daily routine.</p><p> </p><p> </p>
]]></content:encoded>
      <enclosure length="13365542" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/3214fb72-04b3-4367-805c-f9e561b08d03/audio/5fd44d9a-4558-4caf-ac1b-5a14b1378613/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Zero Trust Security and the HashiCorp Cloud Platform</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/f010af3d-c6b3-4626-b2e6-26eb3ed08796/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:13:55</itunes:duration>
      <itunes:summary>Organizations are now, almost by default, becoming multi-cloud operations. No single cloud service offers the full breadth of what an enterprise may need, and enterprises find themselves using more than one service, often inadvertently.

HashiCorp is one company preparing enterprises for the challenges of managing more than a single cloud, through a coherent set of software tools. To learn more, we spoke with Megan Laflamme, HashiCorp director of product marketing, at the HashiConf user conference for this latest episode of The New Stack Makers podcast. We talked about zero trust computing, the importance of identity, and the general availability of the HashiCorp Boundary single sign-on tool.</itunes:summary>
      <itunes:subtitle>Organizations are now, almost by default, becoming multi-cloud operations. No single cloud service offers the full breadth of what an enterprise may need, and enterprises find themselves using more than one service, often inadvertently.

HashiCorp is one company preparing enterprises for the challenges of managing more than a single cloud, through a coherent set of software tools. To learn more, we spoke with Megan Laflamme, HashiCorp director of product marketing, at the HashiConf user conference for this latest episode of The New Stack Makers podcast. We talked about zero trust computing, the importance of identity, and the general availability of the HashiCorp Boundary single sign-on tool.</itunes:subtitle>
      <itunes:keywords>software developer, joab jackson, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, hashiconf, the new stack makers, software engineer, megan laflamme, hashicorp</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1364</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5164aeab-982e-4a81-a4f7-70e8028e6832</guid>
      <title>How Do We Protect the Software Supply Chain?</title>
      <description><![CDATA[<p>DETROIT — Modern software projects’ emphasis on agility and building community has caused a lot of security best practices, developed in the early days of the Linux kernel, to fall by the wayside, according to <a href="https://www.linkedin.com/in/aevaonline">Aeva Black, </a>an open source veteran of 25 years.</p><p> </p><p>“And now we're playing catch up,” said Black, an open source hacker in Microsoft Azure’s Office of the CTO. “A lot of less than ideal practices have taken root in the past five years. We're trying to help educate everybody now.”</p><p> </p><p><a href="https://www.linkedin.com/in/thechrisshort">Chris Short,</a> senior developer advocate with Amazon Web Services (AWS), challenged the notion of “shifting left” and giving developers greater responsibility for security. “If security is everybody's job, it's nobody's job,” said Short, founder of the DevOps-ish newsletter.</p><p> </p><p>“We've gone through this evolution: just develop secure code, and you'll be fine,” he said. “There's no such thing as secure code. There are errors in the underlying languages sometimes …. There's no such thing as secure software. So you have to mitigate and then be ready to defend against coming vulnerabilities.”</p><p> </p><p>Black and Short talked about the state of the software supply chain’s security in an On the Road episode of The New Stack Makers podcast.</p><p> </p><p>Their conversation with Heather Joslyn, features editor of TNS, was recorded at KubeCon + CloudNativeCon North America here in the Motor City.</p><p> </p><p>This podcast episode was sponsored by AWS.</p><p><h2>‘Trust, but Verify’</h2></p><p>For our podcast guests, “trust, but verify” is a slogan more organizations need to live by.</p><p> </p><p>A lot of the security problems that plague the software supply chain, Black said, are companies — especially smaller organizations — “just pulling software directly from upstream. 
They trust a build someone's published, they don't verify, they don't check the hash, they don't check a signature, they just download a Docker image or binary from somewhere and run it in production.”</p><p> </p><p>That practice, Black said, “exposes them to anything that's changed upstream. If upstream has a bug or a network error in that repository, then they can't update as well.” Organizations, they said, should maintain an internal staging environment where they can verify code retrieved from upstream before pushing it to production — or rebuild it, in case a vulnerability is found, and push it back upstream.</p><p> </p><p>That build environment should also be firewalled, Short added: “Create those safeguards of, ‘Oh, you want to pull a package from not an approved source or not a trusted source? Sorry, not gonna happen.’”</p><p> </p><p>Being able to rebuild code that has vulnerabilities to make it more secure — or even being able to identify what’s wrong, and quickly — are skills that not enough developers have, the podcast guests noted.</p><p> </p><p>More automation is part of the solution, Short said. But, he added, by itself it's not enough. “Continuous learning is what we do here as a job," he said. "If you're kind of like, this is my skill set, this is my toolbox and I'm not willing to grow past that, you’re setting yourself up for failure, right? So you have to be able to say, almost at a moment's notice, ‘I need to change something across my entire environment. How do I do that?’”</p><p><h2>GitBOM and the ‘Signal-to-Noise Ratio’</h2></p><p>As both Black and Short said during our conversation, there’s no such thing as perfectly secure code. 
And even such highly touted tools as <a href="https://thenewstack.io/how-to-create-a-software-bill-of-materials/">software bills of materials, or SBOMs, </a>fall short of giving teams all the information they need to determine code’s safety.</p><p> </p><p>“Many projects have dependencies 10, 20, 30 layers deep,” Black said. “And so if your SBOM only goes one or two layers, you just don't have enough information to know if there's a vulnerability five or 10 layers down.”</p><p> </p><p>Short brought up another issue with SBOMs: “There's nothing you can act on. The biggest thing for Ops teams or security teams is actionable information.”</p><p> </p><p>While Short applauded recent efforts to improve user education, he said he’s pessimistic about the state of cybersecurity: “There’s not a lot right now that's getting people actionable data. It's a lot of noise still, and we need to refine these systems well enough to know that, like, just because I have Bash doesn't necessarily mean I have every vulnerability in Bash.”</p><p> </p><p>One project aimed at addressing the situation is <a href="https://gitbom.dev/">GitBOM,</a> a new open source initiative. “Fundamentally, I think it’s the best bet we have to provide really high fidelity signal to defense teams,” said Black, who has worked on the project and <a href="https://gitbom.dev/resources/whitepaper/">produced a white paper on it this past January.</a></p><p> </p><p>GitBOM — the name will likely be changed, Black said — takes the underlying technology that Git relies on, using a hash table to track changes in a project's code over time, and reapplies it to track the supply chain of software. The technology is used to build a hash table connecting all of the dependencies in a project, building what GitBOM’s creators call an artifact dependency graph.</p><p> </p><p>“We have a team working on a couple of proofs of concept right now,” Black said. 
“And the main effect I'm hoping to achieve from this is a small change in every language and compiler … then we can get traceability across the whole supply chain.”</p><p> </p><p>In the meantime, Short said, there’s plenty of room for broader adoption of the best practices that currently exist. “Security vendors, I feel, need to do a better job of moving teams in the right direction as far as action,” he said.</p><p> </p><p>At DevOps Chicago this fall, Short said, he ran an open space session in which he asked participants for their pain points related to working with containers.</p><p> </p><p>“And the whole room admitted to not using least privilege, not using policy engines that are available in the Kubernetes space,” he said. “So there's a lot of complexity that we’ve got to help people understand the need for it, and how to implement it.”</p><p> </p><p>Listen to the whole podcast to learn more about the state of software supply chain security.</p>
]]></description>
      <pubDate>Tue, 8 Nov 2022 13:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-do-we-protect-the-software-supply-chain-zgvH9Sg7</link>
      <content:encoded><![CDATA[<p>DETROIT — Modern software projects’ emphasis on agility and building community has caused a lot of security best practices, developed in the early days of the Linux kernel, to fall by the wayside, according to <a href="https://www.linkedin.com/in/aevaonline">Aeva Black, </a>an open source veteran of 25 years.</p><p> </p><p>“And now we're playing catch up,” said Black, an open source hacker in Microsoft Azure’s Office of the CTO. “A lot of less than ideal practices have taken root in the past five years. We're trying to help educate everybody now.”</p><p> </p><p><a href="https://www.linkedin.com/in/thechrisshort">Chris Short,</a> senior developer advocate with Amazon Web Services (AWS), challenged the notion of “shifting left” and giving developers greater responsibility for security. “If security is everybody's job, it's nobody's job,” said Short, founder of the DevOps-ish newsletter.</p><p> </p><p>“We've gone through this evolution: just develop secure code, and you'll be fine,” he said. “There's no such thing as secure code. There are errors in the underlying languages sometimes …. There's no such thing as secure software. So you have to mitigate and then be ready to defend against coming vulnerabilities.”</p><p> </p><p>Black and Short talked about the state of the software supply chain’s security in an On the Road episode of The New Stack Makers podcast.</p><p> </p><p>Their conversation with Heather Joslyn, features editor of TNS, was recorded at KubeCon + CloudNativeCon North America here in the Motor City.</p><p> </p><p>This podcast episode was sponsored by AWS.</p><p><h2>‘Trust, but Verify’</h2></p><p>For our podcast guests, “trust, but verify” is a slogan more organizations need to live by.</p><p> </p><p>A lot of the security problems that plague the software supply chain, Black said, are companies — especially smaller organizations — “just pulling software directly from upstream. 
They trust a build someone's published, they don't verify, they don't check the hash, they don't check a signature, they just download a Docker image or binary from somewhere and run it in production.”</p><p> </p><p>That practice, Black said, “exposes them to anything that's changed upstream. If upstream has a bug or a network error in that repository, then they can't update as well.” Organizations, they said, should maintain an internal staging environment where they can verify code retrieved from upstream before pushing it to production — or rebuild it, in case a vulnerability is found, and push it back upstream.</p><p> </p><p>That build environment should also be firewalled, Short added: “Create those safeguards of, ‘Oh, you want to pull a package from not an approved source or not a trusted source? Sorry, not gonna happen.’”</p><p> </p><p>Being able to rebuild code that has vulnerabilities to make it more secure — or even being able to identify what’s wrong, and quickly — are skills that not enough developers have, the podcast guests noted.</p><p> </p><p>More automation is part of the solution, Short said. But, he added, by itself it's not enough. “Continuous learning is what we do here as a job," he said. "If you're kind of like, this is my skill set, this is my toolbox and I'm not willing to grow past that, you’re setting yourself up for failure, right? So you have to be able to say, almost at a moment's notice, ‘I need to change something across my entire environment. How do I do that?’”</p><p><h2>GitBOM and the ‘Signal-to-Noise Ratio’</h2></p><p>As both Black and Short said during our conversation, there’s no such thing as perfectly secure code. 
And even such highly touted tools as <a href="https://thenewstack.io/how-to-create-a-software-bill-of-materials/">software bills of materials, or SBOMs, </a>fall short of giving teams all the information they need to determine code’s safety.</p><p> </p><p>“Many projects have dependencies 10, 20, 30 layers deep,” Black said. “And so if your SBOM only goes one or two layers, you just don't have enough information to know if there's a vulnerability five or 10 layers down.”</p><p> </p><p>Short brought up another issue with SBOMs: “There's nothing you can act on. The biggest thing for Ops teams or security teams is actionable information.”</p><p> </p><p>While Short applauded recent efforts to improve user education, he said he’s pessimistic about the state of cybersecurity: “There’s not a lot right now that's getting people actionable data. It's a lot of noise still, and we need to refine these systems well enough to know that, like, just because I have Bash doesn't necessarily mean I have every vulnerability in Bash.”</p><p> </p><p>One project aimed at addressing the situation is <a href="https://gitbom.dev/">GitBOM,</a> a new open source initiative. “Fundamentally, I think it’s the best bet we have to provide really high fidelity signal to defense teams,” said Black, who has worked on the project and <a href="https://gitbom.dev/resources/whitepaper/">produced a white paper on it this past January.</a></p><p> </p><p>GitBOM — the name will likely be changed, Black said — takes the underlying technology that Git relies on, using a hash table to track changes in a project's code over time, and reapplies it to track the supply chain of software. The technology is used to build a hash table connecting all of the dependencies in a project, building what GitBOM’s creators call an artifact dependency graph.</p><p> </p><p>“We have a team working on a couple of proofs of concept right now,” Black said. 
“And the main effect I'm hoping to achieve from this is a small change in every language and compiler … then we can get traceability across the whole supply chain.”</p><p> </p><p>In the meantime, Short said, there’s plenty of room for broader adoption of the best practices that currently exist. “Security vendors, I feel, need to do a better job of moving teams in the right direction as far as action,” he said.</p><p> </p><p>At DevOps Chicago this fall, Short said, he ran an open space session in which he asked participants for their pain points related to working with containers.</p><p> </p><p>“And the whole room admitted to not using least privilege, not using policy engines that are available in the Kubernetes space,” he said. “So there's a lot of complexity that we’ve got to help people understand the need for it, and how to implement it.”</p><p> </p><p>Listen to the whole podcast to learn more about the state of software supply chain security.</p>
]]></content:encoded>
      <enclosure length="20399377" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/19b2f68c-b1e7-4c60-88ea-8ccb75e352a9/audio/1a011a89-207e-4f00-a743-a2c8c1a81841/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Do We Protect the Software Supply Chain?</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/82c9b66b-dfaa-4141-86dc-2e2f9e5de2d5/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:21:14</itunes:duration>
      <itunes:summary>DETROIT — Modern software projects’ emphasis on agility and building community has caused a lot of security best practices, developed in the early days of the Linux kernel, to fall by the wayside, according to Aeva Black, an open source veteran of 25 years.

“And now we&apos;re playing catch up,” said Black, an open source hacker in Microsoft Azure’s Office of the CTO. “A lot of less than ideal practices have taken root in the past five years. We&apos;re trying to help educate everybody now.”

Chris Short, senior developer advocate with Amazon Web Services (AWS), challenged the notion of “shifting left” and giving developers greater responsibility for security. “If security is everybody&apos;s job, it&apos;s nobody&apos;s job,” said Short, founder of the DevOps-ish newsletter.

“We&apos;ve gone through this evolution: just develop secure code, and you&apos;ll be fine,” he said. “There&apos;s no such thing as secure code. There are errors in the underlying languages sometimes …. There&apos;s no such thing as secure software. So you have to mitigate and then be ready to defend against coming vulnerabilities.”

Black and Short talked about the state of the software supply chain’s security in an On the Road episode of The New Stack Makers podcast.

Their conversation with Heather Joslyn, features editor of TNS, was recorded at KubeCon + CloudNativeCon North America here in the Motor City.

This podcast episode was sponsored by AWS.</itunes:summary>
      <itunes:subtitle>DETROIT — Modern software projects’ emphasis on agility and building community has caused a lot of security best practices, developed in the early days of the Linux kernel, to fall by the wayside, according to Aeva Black, an open source veteran of 25 years.

“And now we&apos;re playing catch up,” said Black, an open source hacker in Microsoft Azure’s Office of the CTO. “A lot of less than ideal practices have taken root in the past five years. We&apos;re trying to help educate everybody now.”

Chris Short, senior developer advocate with Amazon Web Services (AWS), challenged the notion of “shifting left” and giving developers greater responsibility for security. “If security is everybody&apos;s job, it&apos;s nobody&apos;s job,” said Short, founder of the DevOps-ish newsletter.

“We&apos;ve gone through this evolution: just develop secure code, and you&apos;ll be fine,” he said. “There&apos;s no such thing as secure code. There are errors in the underlying languages sometimes …. There&apos;s no such thing as secure software. So you have to mitigate and then be ready to defend against coming vulnerabilities.”

Black and Short talked about the state of the software supply chain’s security in an On the Road episode of The New Stack Makers podcast.

Their conversation with Heather Joslyn, features editor of TNS, was recorded at KubeCon + CloudNativeCon North America here in the Motor City.

This podcast episode was sponsored by AWS.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, devops, chris short, devops podcast, amazon web services, tech, developer podcast, kubecon 2022 detroit, kubernetes, the new stack makers, software engineer, aeva black, kubecon, aws</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1363</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">981cccb6-58fc-4054-908a-94c48713f67b</guid>
      <title>Ukraine Has a Bright Future</title>
      <description><![CDATA[<p><span data-preserver-spaces="true">Ukraine has a bright future. It will soon be time to rebuild. But rebuilding requires more than the resources needed to construct a hydroelectric plant or a hospital. It involves software and an understanding of how to use it.</span></p><p> </p><p><span data-preserver-spaces="true">Ihor Dvoretskyi, developer advocate at the Cloud Native Computing Foundation (CNCF), and Dima Zakhalyavko, board member at <a href="https://www.razomforukraine.org/">Razom for Ukraine</a>, came to KubeCon in Detroit to discuss the push to provide training materials for Ukraine as the country rebuilds from the destruction caused by Russia's invasion.</span></p><p> </p><p><span data-preserver-spaces="true">Razom</span><span data-preserver-spaces="true">, a nonprofit, amplifies the voices of Ukrainians in the United States and helps with humanitarian efforts and IT training. Razom formed before Russia's 2014 invasion of the Crimean peninsula of Ukraine, Zakhalyavko said. Since the full-scale invasion earlier this year, Razom has had an understandable increase in donations and volunteers helping in its efforts.</span></p><p> </p><p><span data-preserver-spaces="true">Razom provides individual first aid kits for soldiers, tourniquets, and medical supplies, but also IT training: materials to train the next generation of IT workers, translated into Ukrainian.</span></p><p> </p><p><span data-preserver-spaces="true">The Linux Foundation and the CNCF are partnering with Razom for Ukraine on its </span><a class="editor-rtfLink" href="https://www.razomforukraine.org/projects/veteranius/" target="_blank" rel="noopener"><span data-preserver-spaces="true">Project Veteranius</span></a><span data-preserver-spaces="true"> to provide access to technology education for Ukrainian veterans, their families, and Ukrainians in need. 
</span></p><p> </p><p><span data-preserver-spaces="true">"We've realized that basically, we can benefit from the Linux Foundation training portfolio, including the most popular courses like the intro to Linux, or intro to Kubernetes, that can be pretty much easily translated to Ukrainian," Dvoretskyi said. "And in this way, we'll be able to offer the educational materials in their native language."</span></p><p> </p><p><span data-preserver-spaces="true">Ukraine has a bright future. </span></p><p> </p><p><span data-preserver-spaces="true">"We just need to get through these difficult times," Dvoretskyi said. "But in the future, it's clear the tech industry in Ukraine is growing. And people are needed for that."</span></p><p> </p><p><span data-preserver-spaces="true">Every effort matters, Dvoretskyi said.</span></p><p> </p><p><span data-preserver-spaces="true">"A strong, democratic Ukraine – that's essentially the vision – a European country, a truly European country, that is whole in terms of territorial integrity," Zakhalyavko said. "The future is in technology. And if we can help enable that – in any case, I think that's a win for Ukraine and the world. Technology can make the world a better place."</span></p>
]]></description>
      <pubDate>Fri, 4 Nov 2022 20:22:48 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/ukraine-has-a-bright-future-yoH_uh2_</link>
      <content:encoded><![CDATA[<p><span data-preserver-spaces="true">Ukraine has a bright future. It will soon be time to rebuild. But rebuilding requires more than the resources needed to construct a hydroelectric plant or a hospital. It involves software and an understanding of how to use it.</span></p><p> </p><p><span data-preserver-spaces="true">Ihor Dvoretskyi, developer advocate at the Cloud Native Computing Foundation (CNCF), and Dima Zakhalyavko, board member at <a href="https://www.razomforukraine.org/">Razom for Ukraine</a>, came to KubeCon in Detroit to discuss the push to provide training materials for Ukraine as it rebuilds from the destruction caused by Russia's invasion.</span></p><p> </p><p><span data-preserver-spaces="true">Razom</span><span data-preserver-spaces="true">, a nonprofit, amplifies the voices of Ukrainians in the United States and helps with humanitarian efforts and IT training. Razom formed before Russia's 2014 invasion of the Crimean peninsula of Ukraine, Zakhalyavko said. Since the full-scale invasion earlier this year, Razom has seen an understandable increase in donations and in volunteers helping with its efforts.</span></p><p> </p><p><span data-preserver-spaces="true">Razom provides individual first aid kits for soldiers, tourniquets, and medic supplies, but it also provides IT training: materials to train the next generation of IT workers, translated into Ukrainian.</span></p><p> </p><p><span data-preserver-spaces="true">The Linux Foundation and the CNCF are partnering with Razom for Ukraine on its </span><a class="editor-rtfLink" href="https://www.razomforukraine.org/projects/veteranius/" target="_blank" rel="noopener"><span data-preserver-spaces="true">Project Veteranius</span></a><span data-preserver-spaces="true"> to provide access to technology education for Ukrainian veterans, their families, and Ukrainians in need. 
</span></p><p> </p><p><span data-preserver-spaces="true">"We've realized that basically, we can benefit from the Linux Foundation training portfolio, including the most popular courses like the intro to Linux, or intro to Kubernetes, that can be pretty much easily translated to Ukrainian," Dvoretskyi said. "And in this way, we'll be able to offer the educational materials in their native language."</span></p><p> </p><p><span data-preserver-spaces="true">Ukraine has a pretty bright future. </span></p><p> </p><p><span data-preserver-spaces="true">"We just need to get through these difficult times," Dvoretskyi said. "But in the future, it's clear the tech industry in Ukraine is growing. Yeah. And people are needed for that."</span></p><p> </p><p><span data-preserver-spaces="true">Every effort matters, Dvoretskyi said.</span></p><p> </p><p><span data-preserver-spaces="true">"A strong, democratic Ukraine – that's essentially the vision – a European country, a truly European country, that is whole in terms of territorial integrity," Zakhalyavko said. "The future is in technology. And if we can help enable that – in any case, I think that's a win for Ukraine and the world. Technology can make the world a better place."</span></p>
]]></content:encoded>
      <enclosure length="15052079" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/8d07bdf0-f905-444b-8548-f3db00b8beb0/audio/2601c0ad-6989-4b48-ba20-1be3fefaf339/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Ukraine Has a Bright Future</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/48b26859-f777-48f7-99f8-474706919806/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:15:40</itunes:duration>
      <itunes:summary>Ukraine has a bright future. It will soon be time to rebuild. But rebuilding requires more than the resources needed to construct a hydroelectric plant or a hospital. It involves software and an understanding of how to use it.

Ihor Dvoretskyi, developer advocate at the Cloud Native Computing Foundation (CNCF), and Dima Zakhalyavko, board member at Razom for Ukraine, came to KubeCon in Detroit to discuss the push to provide training materials for Ukraine as it rebuilds from the destruction caused by Russia&apos;s invasion.</itunes:summary>
      <itunes:subtitle>Ukraine has a bright future. It will soon be time to rebuild. But rebuilding requires more than the resources needed to construct a hydroelectric plant or a hospital. It involves software and an understanding of how to use it.

Ihor Dvoretskyi, developer advocate at the Cloud Native Computing Foundation (CNCF), and Dima Zakhalyavko, board member at Razom for Ukraine, came to KubeCon in Detroit to discuss the push to provide training materials for Ukraine as it rebuilds from the destruction caused by Russia&apos;s invasion.</itunes:subtitle>
      <itunes:keywords>software developer, ukraine, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, kubernetes, the new stack makers, software engineer, cncf, kubecon</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1362</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c5c20723-215c-40e5-8806-8eba647837b7</guid>
      <title>Redis is not just a Cache</title>
      <description><![CDATA[<p><a href="https://redis.io/">Redis</a> is not just a cache. It is used in the broader cloud native ecosystem, fits into many service-oriented architectures, and simplifies the deployment and development of modern applications, according to <a href="https://github.com/madolson">Madelyn Olson</a>, a principal engineer at AWS, during an interview on the <a href="https://thenewstack.io/the-aws-open-source-strategy/">New Stack Makers</a> at <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/">KubeCon North America in Detroit</a>.</p><p> </p><p>Olson said that people have a primary backend database or some other workflow that takes a long time to run. They store the intermediate results in Redis, which provides lower latency and higher throughput.</p><p> </p><p>"But there are plenty of other ways you can use Redis," Olson said. "One common way is what I like to call it a data projection API. So you basically take a bunch of different sources of data, maybe a Postgres database, or some other type of Cassandra database, and you project that data into Redis. And then you just pull from the Redis instance. This is a really great, great use case for low latency applications."</p><p> </p><p>Redis creator <a href="http://invece.org/">Salvatore Sanfilippo's</a> approach provides a lesson in how to contribute to open source, which Olson recounted in our interview.</p><p> </p><p>Olson said Sanfilippo was the only maintainer with write permissions on the project. That meant contributors had to engage quite a bit to get a response from him. So Olson did what open source contributors do when they want to get noticed. She "chopped wood and carried water," a phrase that in open source means taking care of the unglamorous tasks that need attention. That helped Sanfilippo scale himself a bit and helped Olson get involved in the project.</p><p> </p><p>It is daunting to get into open source development work, Olson said. 
A new contributor will face people with far more experience and may be afraid to open issues. But if a contributor has a use case and helps with documentation or a bug, then most open source maintainers are willing to help.</p><p> </p><p>"One big problem throughout open source is, they're usually resource constrained, right?" Olson said. "Open source is oftentimes a lot of volunteers. So they're usually very willing to get more people to help with the project."</p><p> </p><p>What's it like now working at AWS on open source projects?</p><p> </p><p>Things have changed a lot since she joined AWS in 2015, Olson said. APIs were proprietary back in those days. Today, it's almost the opposite of how it used to be.</p><p> </p><p>Keeping something internal now requires approval, Olson said; internal differentiation is not the goal. Open source Redis comes first, with AWS's managed service layered on top.</p>
]]></description>
      <pubDate>Thu, 3 Nov 2022 19:52:06 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/redis-is-not-just-a-cache-SFtCMyFy</link>
      <content:encoded><![CDATA[<p><a href="https://redis.io/">Redis</a> is not just a cache. It is used in the broader cloud native ecosystem, fits into many service-oriented architectures, and simplifies the deployment and development of modern applications, according to <a href="https://github.com/madolson">Madelyn Olson</a>, a principal engineer at AWS, during an interview on the <a href="https://thenewstack.io/the-aws-open-source-strategy/">New Stack Makers</a> at <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/">KubeCon North America in Detroit</a>.</p><p> </p><p>Olson said that people have a primary backend database or some other workflow that takes a long time to run. They store the intermediate results in Redis, which provides lower latency and higher throughput.</p><p> </p><p>"But there are plenty of other ways you can use Redis," Olson said. "One common way is what I like to call it a data projection API. So you basically take a bunch of different sources of data, maybe a Postgres database, or some other type of Cassandra database, and you project that data into Redis. And then you just pull from the Redis instance. This is a really great, great use case for low latency applications."</p><p> </p><p>Redis creator <a href="http://invece.org/">Salvatore Sanfilippo's</a> approach provides a lesson in how to contribute to open source, which Olson recounted in our interview.</p><p> </p><p>Olson said Sanfilippo was the only maintainer with write permissions on the project. That meant contributors had to engage quite a bit to get a response from him. So Olson did what open source contributors do when they want to get noticed. She "chopped wood and carried water," a phrase that in open source means taking care of the unglamorous tasks that need attention. That helped Sanfilippo scale himself a bit and helped Olson get involved in the project.</p><p> </p><p>It is daunting to get into open source development work, Olson said. 
A new contributor will face people with far more experience and may be afraid to open issues. But if a contributor has a use case and helps with documentation or a bug, then most open source maintainers are willing to help.</p><p> </p><p>"One big problem throughout open source is, they're usually resource constrained, right?" Olson said. "Open source is oftentimes a lot of volunteers. So they're usually very willing to get more people to help with the project."</p><p> </p><p>What's it like now working at AWS on open source projects?</p><p> </p><p>Things have changed a lot since she joined AWS in 2015, Olson said. APIs were proprietary back in those days. Today, it's almost the opposite of how it used to be.</p><p> </p><p>Keeping something internal now requires approval, Olson said; internal differentiation is not the goal. Open source Redis comes first, with AWS's managed service layered on top.</p>
]]></content:encoded>
      <enclosure length="15000181" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/78244317-813d-4ae5-a31f-a600863acc00/audio/9ddc70a6-a1d1-4ffb-b424-febb118c1b1a/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Redis is not just a Cache</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/1b6edf12-b5f2-4ade-ae8e-db41e3cab73b/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:15:37</itunes:duration>
      <itunes:summary>Redis is not just a cache. It is used in the broader cloud native ecosystem, fits into many service-oriented architectures, and simplifies the deployment and development of modern applications, according to Madelyn Olson, a principal engineer at AWS, during an interview on the New Stack Makers at KubeCon North America in Detroit.

Olson said that people have a primary backend database or some other workflow that takes a long time to run. They store the intermediate results in Redis, which provides lower latency and higher throughput.</itunes:summary>
      <itunes:subtitle>Redis is not just a cache. It is used in the broader cloud native ecosystem, fits into many service-oriented architectures, and simplifies the deployment and development of modern applications, according to Madelyn Olson, a principal engineer at AWS, during an interview on the New Stack Makers at KubeCon North America in Detroit.

Olson said that people have a primary backend database or some other workflow that takes a long time to run. They store the intermediate results in Redis, which provides lower latency and higher throughput.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, aws open source, kubernetes, the new stack makers, software engineer, madelyn olson, redis, kubecon, aws</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1361</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">4a44ac9a-ec0b-4393-ac08-8069a4b24b67</guid>
      <title>Case Study: How BOK Financial Managed Its Cloud Migration</title>
      <description><![CDATA[<p>LOS ANGELES — When you’re deploying a business-critical application to the cloud, it’s nice to not need the “war room” you’ve assembled to troubleshoot Day 1 problems.</p><p> </p><p>When BOK Financial, a financial services company that’s been moving apps to the cloud over the last three years, was launching its largest application on the cloud, its engineers supported it with a “war room type situation, monitoring everything,” according to BOK’s <a href="https://www.linkedin.com/in/andrewrau/">Andrew Rau</a>.</p><p> </p><p>“After the first day, the system just scaled like it was supposed to … and they're like, ‘OK, I guess we don't need this anymore.’”</p><p> </p><p>In this On the Road episode of The New Stack’s Makers podcast, Rau, BOK’s vice president and manager, cloud services, offered a case study about his organization’s cloud journey over the past four years, and the role HashiCorp’s Vault and Cloud Platform played in it.</p><p> </p><p>Rau spoke to <a href="https://thenewstack.io/author/hjoslyn">Heather Joslyn</a>, features editor of The New Stack, about the challenges of moving a very traditional organization in a highly regulated industry to the cloud while maintaining tight security and resilience.</p><p> </p><p>This episode of Makers, recorded in October at HashiConf in Los Angeles, was sponsored by HashiCorp.</p><h2>Upskilling for ‘Everything as Code’</h2><p>In late 2019, Rau said, BOK Financial deployed one small application to the cloud, an initial step on its digital transformation journey. It’s been building out its cloud infrastructure ever since, and soon ran into the limits of each cloud provider’s native tooling.</p><p> </p><p>“Where we struggled was we didn't want to deploy and manage our clouds in different ways,” he said. “We didn't want our cloud engineers to know just one cloud provider, and their technology and their tech stack. So that's when we really started looking at how else can we do this. 
And that's when Terraform was a great option for us.”</p><p> </p><p>In 2020, BOK Financial began using HashiCorp’s open source Terraform to automate the creation of cloud infrastructure. “We made a conscious effort to really focus on automation,” Rau said. “We didn't want to do things manually, which is really that traditional data center, how we've done things for decades.”</p><p> </p><p>In tandem with adopting Terraform, BOK Financial’s teams began using GitOps processes for CI/CD. But doing “everything as code,” as Rau put it, “required a lot of upskilling for some of our staff, because they've never done version control or automation capabilities. So in addition to learning Terraform, and these other cloud concepts, they had to learn all of that.”</p><p> </p><p>The challenge, though, has been worth it: “It's really empowered us to move a lot faster, and give our application teams the ability to deploy at their pace, versus waiting on other teams.”</p><h2>Seeking Automated Security</h2><p>It took about a year, Rau said, to get BOK Financial’s developers comfortable using Terraform, largely because many were new to version control procedures and strategies.</p><p> </p><p>Because the company works in a highly regulated industry, handling customers’ financial data, security is of utmost importance.</p><p> </p><p>“We had users credentials for our clouds, and we had them separated out based on the type of deployment that [developers] were doing,” said Rau.</p><p> </p><p>“But it wasn't easy for us to rotate those credentials on a frequent basis. And so we really felt the need that we want to make these short, limited tokens, no more than an hour for that deployment. And so that's where we looked at Vault.”</p><p> </p><p>HashiCorp’s secrets storage and management tool proved an easy add-on with Terraform. “That's really given us the ability to have effectively no credentials — long-lived credentials — out there,” Rau said. 
“And secure our environment even more.” And because BOK’s teams don’t want to manage Vault and its complexities themselves, it has opted for HashiCorp Cloud Platform to manage it.</p><p> </p><p>For other organizations on a cloud native journey, Rau recommended taking time to do things right. “We went back to rework some things periodically, because we learned something too late,” he said.</p><p> </p><p>Also, he advised, keep stakeholders in the loop:  “You need to stay in front of the communication with business partners, IT leaders, that it's going to take longer to set this up. But once you do, it's incredible.”</p><p> </p><p>Check out the podcast to learn more about BOK Financial's cloud native transformation.</p>
]]></description>
      <pubDate>Wed, 2 Nov 2022 16:57:12 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/case-study-how-bok-financial-managed-its-cloud-migration-gpge_5Nw</link>
      <content:encoded><![CDATA[<p>LOS ANGELES — When you’re deploying a business-critical application to the cloud, it’s nice to not need the “war room” you’ve assembled to troubleshoot Day 1 problems.</p><p> </p><p>When BOK Financial, a financial services company that’s been moving apps to the cloud over the last three years, was launching its largest application on the cloud, its engineers supported it with a “war room type situation, monitoring everything,” according to BOK’s <a href="https://www.linkedin.com/in/andrewrau/">Andrew Rau</a>.</p><p> </p><p>“After the first day, the system just scaled like it was supposed to … and they're like, ‘OK, I guess we don't need this anymore.’”</p><p> </p><p>In this On the Road episode of The New Stack’s Makers podcast, Rau, BOK’s vice president and manager, cloud services, offered a case study about his organization’s cloud journey over the past four years, and the role HashiCorp’s Vault and Cloud Platform played in it.</p><p> </p><p>Rau spoke to <a href="https://thenewstack.io/author/hjoslyn">Heather Joslyn</a>, features editor of The New Stack, about the challenges of moving a very traditional organization in a highly regulated industry to the cloud while maintaining tight security and resilience.</p><p> </p><p>This episode of Makers, recorded in October at HashiConf in Los Angeles, was sponsored by HashiCorp.</p><h2>Upskilling for ‘Everything as Code’</h2><p>In late 2019, Rau said, BOK Financial deployed one small application to the cloud, an initial step on its digital transformation journey. It’s been building out its cloud infrastructure ever since, and soon ran into the limits of each cloud provider’s native tooling.</p><p> </p><p>“Where we struggled was we didn't want to deploy and manage our clouds in different ways,” he said. “We didn't want our cloud engineers to know just one cloud provider, and their technology and their tech stack. 
So that's when we really started looking at how else can we do this. And that's when Terraform was a great option for us.”</p><p> </p><p>In 2020, BOK Financial began using HashiCorp’s open source Terraform to automate the creation of cloud infrastructure. “We made a conscious effort to really focus on automation,” Rau said. “We didn't want to do things manually, which is really that traditional data center, how we've done things for decades.”</p><p> </p><p>In tandem with adopting Terraform, BOK Financial’s teams began using GitOps processes for CI/CD. But doing “everything as code,” as Rau put it, “required a lot of upskilling for some of our staff, because they've never done version control or automation capabilities. So in addition to learning Terraform, and these other cloud concepts, they had to learn all of that.”</p><p> </p><p>The challenge, though, has been worth it: “It's really empowered us to move a lot faster, and give our application teams the ability to deploy at their pace, versus waiting on other teams.”</p><h2>Seeking Automated Security</h2><p>It took about a year, Rau said, to get BOK Financial’s developers comfortable using Terraform, largely because many were new to version control procedures and strategies.</p><p> </p><p>Because the company works in a highly regulated industry, handling customers’ financial data, security is of utmost importance.</p><p> </p><p>“We had users credentials for our clouds, and we had them separated out based on the type of deployment that [developers] were doing,” said Rau.</p><p> </p><p>“But it wasn't easy for us to rotate those credentials on a frequent basis. And so we really felt the need that we want to make these short, limited tokens, no more than an hour for that deployment. And so that's where we looked at Vault.”</p><p> </p><p>HashiCorp’s secrets storage and management tool proved an easy add-on with Terraform. 
“That's really given us the ability to have effectively no credentials — long-lived credentials — out there,” Rau said. “And secure our environment even more.” And because BOK’s teams don’t want to manage Vault and its complexities themselves, it has opted for HashiCorp Cloud Platform to manage it.</p><p> </p><p>For other organizations on a cloud native journey, Rau recommended taking time to do things right. “We went back to rework some things periodically, because we learned something too late,” he said.</p><p> </p><p>Also, he advised, keep stakeholders in the loop:  “You need to stay in front of the communication with business partners, IT leaders, that it's going to take longer to set this up. But once you do, it's incredible.”</p><p> </p><p>Check out the podcast to learn more about BOK Financial's cloud native transformation.</p>
]]></content:encoded>
      <enclosure length="13027413" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/6f0b53a8-4b41-4548-ab87-aecdd7ba05bb/audio/6a189566-a51d-4807-b3bf-17a8287b7698/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Case Study: How BOK Financial Managed Its Cloud Migration</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/3bbe7f8a-312a-4e8b-b410-d55694e137fe/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:13:34</itunes:duration>
      <itunes:summary>LOS ANGELES — When you’re deploying a business-critical application to the cloud, it’s nice to not need the “war room” you’ve assembled to troubleshoot Day 1 problems.

When BOK Financial, a financial services company that’s been moving apps to the cloud over the last three years, was launching its largest application on the cloud, its engineers supported it with a “war room type situation, monitoring everything” according to BOK’s Andrew Rau.

“After the first day, the system just scaled like it was supposed to … and they&apos;re like, ‘OK, I guess we don&apos;t need this anymore.’”

In this On the Road episode of The New Stack’s Makers podcast, Rau, BOK’s vice president and manager, cloud services, offered a case study about his organization’s cloud journey over the past four years, and the role HashiCorp’s Vault and Cloud Platform played in it.

Rau spoke to Heather Joslyn, features editor of The New Stack, about the challenges of moving a very traditional organization in a highly regulated industry to the cloud while maintaining tight security and resilience.

This episode of Makers, recorded in October at HashiConf in Los Angeles, was sponsored by HashiCorp.</itunes:summary>
      <itunes:subtitle>LOS ANGELES — When you’re deploying a business-critical application to the cloud, it’s nice to not need the “war room” you’ve assembled to troubleshoot Day 1 problems.

When BOK Financial, a financial services company that’s been moving apps to the cloud over the last three years, was launching its largest application on the cloud, its engineers supported it with a “war room type situation, monitoring everything” according to BOK’s Andrew Rau.

“After the first day, the system just scaled like it was supposed to … and they&apos;re like, ‘OK, I guess we don&apos;t need this anymore.’”

In this On the Road episode of The New Stack’s Makers podcast, Rau, BOK’s vice president and manager, cloud services, offered a case study about his organization’s cloud journey over the past four years, and the role HashiCorp’s Vault and Cloud Platform played in it.

Rau spoke to Heather Joslyn, features editor of The New Stack, about the challenges of moving a very traditional organization in a highly regulated industry to the cloud while maintaining tight security and resilience.

This episode of Makers, recorded in October at HashiConf in Los Angeles, was sponsored by HashiCorp.</itunes:subtitle>
      <itunes:keywords>bok financial, software developer, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, hashiconf, the new stack makers, software engineer, hashicorp</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1360</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e9777554-3297-42d6-aff9-92d7dafee920</guid>
      <title>Devs and Ops: Can This Marriage Be Saved?</title>
      <description><![CDATA[<p>DETROIT — Are we still shifting left? Is it realistic to expect developers to take on the burdens of security and infrastructure provisioning, as well as writing their applications? Is platform engineering the answer to saving the DevOps dream?</p><p> </p><p>Bottom line: Do Devs and Ops really talk to each other — or just passive-aggressively swap Jira tickets?</p><p> </p><p>These are some of the topics explored by a panel, “Devs and Ops People: It’s Time for Some Kubernetes Couples Therapy,” convened by The New Stack at KubeCon + CloudNativeCon North America, here in the Motor City, on Thursday.</p><p> </p><p>Panelists included <a href="https://www.linkedin.com/in/saad-a-malik">Saad Malik,</a> chief technology officer and co-founder of Spectro Cloud; <a href="https://www.linkedin.com/in/viktorfarcic/">Viktor Farcic,</a> developer advocate at Upbound; <a href="https://www.linkedin.com/in/lizrice/">Liz Rice,</a> chief open source officer at Isovalent; and <a href="https://www.linkedin.com/in/aeris-stewart-%F0%9F%8C%88-083487187/">Aeris Stewart,</a> community manager at Humanitec.</p><p> </p><p>The latest TNS pancake breakfast was hosted by <a href="https://thenewstack.io/author/alex/">Alex Williams,</a> The New Stack’s founder and publisher, with <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> TNS features editor, fielding questions from the audience. 
The event was sponsored by Spectro Cloud.</p><p> </p><h2>Alleviating Cognitive Load for Devs</h2><p> </p><p>A big pain point in the DevOps structure — the marriage of frontend and backend in cross-functional teams — is that all devs aren’t necessarily willing or able to take on all the additional responsibilities demanded of them.</p><p> </p><p>A lot of organizations have “copy-pasted this one size fits all approach to DevOps,” said Stewart.</p><p> </p><p>“If you look at the tooling landscape, it is rapidly growing not just in terms of the volume of tools, but also the complexity of the tools themselves,” they said. “And developers are in parallel expected to take over an increasing amount of the software delivery process. And all of this, together, is too much cognitive load for them.”</p><p> </p><p>This situation also has an impact on operations engineers, who must help alleviate developers’ burdens. “It’s causing a lot of inefficiencies of these organizations,” they added, “and a lot of the same inefficiencies that DevOps was supposed to get rid of.”</p><p> </p><p><a href="https://thenewstack.io/devops-is-dead-embrace-platform-engineering/">Platform engineering</a> — in which operations engineers provide devs with an internal developer platform that abstracts away some of the complexity — is “a sign of hope,” Stewart said, for organizations for whom DevOps is proving tough to implement.</p><p> </p><p>The concept behind DevOps is “about making teams self-sufficient, so they have full control of their application, right from the idea until it is running in production,” said Farcic.</p><p> </p><p>But, he added, “you cannot expect them to have 17 years of experience in Kubernetes, and AWS and whatnot. And that's where platforms come in. 
That's how other teams, who have certain expertise, provide services so that those  … developers and operators can actually do the work that they're supposed to do, just as operators today are using services from AWS to do their work. So what AWS for Ops is to Ops, to me, that's what internal developer platforms are to application developers.”</p><p> </p><h2>Consistency vs. Innovation</h2><p> </p><p>Platform engineering has been a hot topic in DevOps circles (and at KubeCon) but the definition remains a bit fuzzy, the panelists acknowledged. (“In a lot of organizations, ‘platform engineering’ is just a fancy new way of saying ‘Ops,’” said Rice.)</p><p> </p><p>The audience served up questions to the panel about the limits of the DevOps model and how platform engineering fits into that discussion. One audience member asked about balancing the need to provide a consistent platform to an organization’s developers while also allowing devs to customize and innovate.</p><p> </p><p>Malik said that both consistency and innovation are possible in a platform engineering structure.   “An organization will decide where they want to be able to provide that abstraction,” he said, adding, “When they think about where they want to be as a whole, they could think about, Hey, when we provide our platform, we're going to be providing everything from security to CI/CD from GitHub, from repository management, this is what you will get if you use our IDP or platform itself.</p><p> </p><p>But “there are going to be unique use cases,” Malik added, such as developers who are building a <a href="https://thenewstack.io/open-source-blockchain-development-strong-despite-funding-cuts/">new blockchain technology</a> or running <a href="https://thenewstack.io/what-is-webassembly/">WebAssembly.</a></p><p> </p><p>“I think it's okay to give those development teams the ability to run their own platform, as long as you tell them, these are the areas that you have to be responsible for,” he said. 
“You're responsible for your own security, your own backup, your own retention capabilities.”</p><p> </p><p>One audience member mentioned <a href="https://teamtopologies.com/book">“Team Topologies,”</a> a 2019 engineering management book by <a href="https://www.linkedin.com/in/manuelpais">Manuel Pais</a> and <a href="https://www.linkedin.com/in/matthewskelton">Matthew Skelton</a>, and asked the panel if platform engineering is related to DevOps in that it’s more of an approach to engineering management than a destination.</p><p> </p><p>“Platform engineering is in the budding stage of its evolution,” said Stewart. “And right now, it's really focused on addressing the problems that organizations ran into when they were implementing DevOps.”</p><p> </p><p>They added, “I think as we see the community come together more and get more best practices about how to develop [a] platform, you will see it become more than just a different approach to DevOps and become something more distinct. But I don't think it's there quite yet.”</p><p> </p><p>Check out the full panel discussion to hear more from our DevOps “counseling session.”</p>
]]></description>
      <pubDate>Tue, 1 Nov 2022 19:52:29 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/devs-and-ops-can-this-marriage-be-saved-6WOQHKkX</link>
      <content:encoded><![CDATA[<p>DETROIT — Are we still shifting left? Is it realistic to expect developers to take on the burdens of security and infrastructure provisioning, as well as writing their applications? Is platform engineering the answer to saving the DevOps dream?</p><p> </p><p>Bottom line: Do Devs and Ops really talk to each other — or just passive-aggressively swap Jira tickets?</p><p> </p><p>These are some of the topics explored by a panel, “Devs and Ops People: It’s Time for Some Kubernetes Couples Therapy,” convened by The New Stack at KubeCon + CloudNativeCon North America, here in the Motor City, on Thursday.</p><p> </p><p>Panelists included <a href="https://www.linkedin.com/in/saad-a-malik">Saad Malik,</a> chief technology officer and co-founder of Spectro Cloud; <a href="https://www.linkedin.com/in/viktorfarcic/">Viktor Farcic,</a> developer advocate at Upbound; <a href="https://www.linkedin.com/in/lizrice/">Liz Rice,</a> chief open source officer at Isovalent; and <a href="https://www.linkedin.com/in/aeris-stewart-%F0%9F%8C%88-083487187/">Aeris Stewart,</a> community manager at Humanitec.</p><p> </p><p>The latest TNS pancake breakfast was hosted by <a href="https://thenewstack.io/author/alex/">Alex Williams,</a> The New Stack’s founder and publisher, with <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> TNS features editor, fielding questions from the audience.
The event was sponsored by Spectro Cloud.</p><p> </p><h2>Alleviating Cognitive Load for Devs</h2><p> </p><p>A big pain point in the DevOps structure — the marriage of development and operations in cross-functional teams — is that all devs aren’t necessarily willing or able to take on all the additional responsibilities demanded of them.</p><p> </p><p>A lot of organizations have “copy-pasted this one-size-fits-all approach to DevOps,” said Stewart.</p><p> </p><p>“If you look at the tooling landscape, it is rapidly growing not just in terms of the volume of tools, but also the complexity of the tools themselves,” they said. “And developers are in parallel expected to take over an increasing amount of the software delivery process. And all of this, together, is too much cognitive load for them.”</p><p> </p><p>This situation also has an impact on operations engineers, who must help alleviate developers’ burdens. “It’s causing a lot of inefficiencies of these organizations,” they added, “and a lot of the same inefficiencies that DevOps was supposed to get rid of.”</p><p> </p><p><a href="https://thenewstack.io/devops-is-dead-embrace-platform-engineering/">Platform engineering</a> — in which operations engineers provide devs with an internal developer platform that abstracts away some of the complexity — is “a sign of hope,” Stewart said, for organizations for which DevOps is proving tough to implement.</p><p> </p><p>The concept behind DevOps is “about making teams self-sufficient, so they have full control of their application, right from the idea until it is running in production,” said Farcic.</p><p> </p><p>But, he added, “you cannot expect them to have 17 years of experience in Kubernetes, and AWS and whatnot. And that's where platforms come in.
That's how other teams, who have certain expertise, provide services so that those … developers and operators can actually do the work that they're supposed to do, just as operators today are using services from AWS to do their work. So what AWS for Ops is to Ops, to me, that's what internal developer platforms are to application developers.”</p><p> </p><h2>Consistency vs. Innovation</h2><p> </p><p>Platform engineering has been a hot topic in DevOps circles (and at KubeCon), but the definition remains a bit fuzzy, the panelists acknowledged. (“In a lot of organizations, ‘platform engineering’ is just a fancy new way of saying ‘Ops,’” said Rice.)</p><p> </p><p>The audience served up questions to the panel about the limits of the DevOps model and how platform engineering fits into that discussion. One audience member asked about balancing the need to provide a consistent platform to an organization’s developers while also allowing devs to customize and innovate.</p><p> </p><p>Malik said that both consistency and innovation are possible in a platform engineering structure. “An organization will decide where they want to be able to provide that abstraction,” he said, adding, “When they think about where they want to be as a whole, they could think about, Hey, when we provide our platform, we're going to be providing everything from security to CI/CD from GitHub, from repository management, this is what you will get if you use our IDP or platform itself.”</p><p> </p><p>But “there are going to be unique use cases,” Malik added, such as developers who are building a <a href="https://thenewstack.io/open-source-blockchain-development-strong-despite-funding-cuts/">new blockchain technology</a> or running <a href="https://thenewstack.io/what-is-webassembly/">WebAssembly.</a></p><p> </p><p>“I think it's okay to give those development teams the ability to run their own platform, as long as you tell them, these are the areas that you have to be responsible for,” he said.
“You're responsible for your own security, your own backup, your own retention capabilities.”</p><p> </p><p>One audience member mentioned <a href="https://teamtopologies.com/book">“Team Topologies,”</a> a 2019 engineering management book by <a href="https://www.linkedin.com/in/manuelpais">Manuel Pais</a> and <a href="https://www.linkedin.com/in/matthewskelton">Matthew Skelton</a>, and asked the panel if platform engineering is related to DevOps in that it’s more of an approach to engineering management than a destination.</p><p> </p><p>“Platform engineering is in the budding stage of its evolution,” said Stewart. “And right now, it's really focused on addressing the problems that organizations ran into when they were implementing DevOps.”</p><p> </p><p>They added, “I think as we see the community come together more and get more best practices about how to develop [a] platform, you will see it become more than just a different approach to DevOps and become something more distinct. But I don't think it's there quite yet.”</p><p> </p><p>Check out the full panel discussion to hear more from our DevOps “counseling session.”</p>
]]></content:encoded>
      <enclosure length="41069424" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/c5ca980e-f5a2-4478-95be-84d53004bc5c/audio/34aeefd1-ad2f-4612-a5da-57b1ba39e3ee/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Devs and Ops: Can This Marriage Be Saved?</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/b92e4c55-984c-4566-aff6-c5aec4506028/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:42:09</itunes:duration>
      <itunes:summary>DETROIT — Are we still shifting left? Is it realistic to expect developers to take on the burdens of security and infrastructure provisioning, as well as writing their applications? Is platform engineering the answer to saving the DevOps dream?

Bottom line: Do Devs and Ops really talk to each other — or just passive-aggressively swap Jira tickets?

These are some of the topics explored by a panel, “Devs and Ops People: It’s Time for Some Kubernetes Couples Therapy,” convened by The New Stack at KubeCon + CloudNativeCon North America, here in the Motor City, on Thursday.

Panelists included Saad Malik, chief technology officer and co-founder of Spectro Cloud; Viktor Farcic, developer advocate at Upbound; Liz Rice, chief open source officer at Isovalent; and Aeris Stewart, community manager at Humanitec.

The latest TNS pancake breakfast was hosted by Alex Williams, The New Stack’s founder and publisher, with Heather Joslyn, TNS features editor, fielding questions from the audience. The event was sponsored by Spectro Cloud.</itunes:summary>
      <itunes:subtitle>DETROIT — Are we still shifting left? Is it realistic to expect developers to take on the burdens of security and infrastructure provisioning, as well as writing their applications? Is platform engineering the answer to saving the DevOps dream?

Bottom line: Do Devs and Ops really talk to each other — or just passive-aggressively swap Jira tickets?

These are some of the topics explored by a panel, “Devs and Ops People: It’s Time for Some Kubernetes Couples Therapy,” convened by The New Stack at KubeCon + CloudNativeCon North America, here in the Motor City, on Thursday.

Panelists included Saad Malik, chief technology officer and co-founder of Spectro Cloud; Viktor Farcic, developer advocate at Upbound; Liz Rice, chief open source officer at Isovalent; and Aeris Stewart, community manager at Humanitec.

The latest TNS pancake breakfast was hosted by Alex Williams, The New Stack’s founder and publisher, with Heather Joslyn, TNS features editor, fielding questions from the audience. The event was sponsored by Spectro Cloud.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, devops, devops podcast, pancake breakfast, tech, developer podcast, kubecon na, humanitec, spectrocloud, the new stack makers, software engineer, kubecon, upbound</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1359</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">24c4eecb-368f-4f68-a048-e59dc40b1b6e</guid>
      <title>Latest Enhancements to HashiCorp Terraform and Terraform Cloud</title>
      <description><![CDATA[<h2><strong>What is Terraform?</strong></h2><p>Terraform is HashiCorp’s flagship software. The open source tool provides a way to define IT resources — such as monitoring software or cloud services — in human-readable configuration files. These files, which serve as blueprints, can then be used to automatically provision the systems themselves. Kubernetes deployments, for instance, can be streamlined through Terraform.</p><p> </p><p>"Terraform basically translates what your configuration was codified in by your configuration, and provisions it to that desired end state," explained <a href="https://www.linkedin.com/in/meghanliese/">Meghan Liese</a>, [sponsor_inline_mention slug="hashicorp" ]HashiCorp[/sponsor_inline_mention] vice president of product and partner marketing, in this podcast and video recording, recorded at the company's user conference, HashiConf 2022, held this month in Los Angeles.</p><p> </p><p>For this interview, Liese discusses the latest enhancements to Terraform and Terraform Cloud, a managed service offering that is part of the HashiCorp Cloud Platform.</p><p> </p><p>[Embed Podcast]</p><h2>Why Should Developers be Interested in Terraform?</h2><p>Typically, DevOps teams or system administrators use Terraform to provision infrastructure, but there is also growing interest in allowing developers to do it themselves, in a self-service fashion, Liese explained.
Multicloud skills are in short supply, concluded the <a class="ext-link" href="https://www.hashicorp.com/state-of-the-cloud#skills-shortages-ranked-as-top-multi-cloud-barrier" rel="external ">2022 HashiCorp State of Cloud Strategy Survey</a>, so making the provisioning process easier could help more developers, the company reckons.</p><p> </p><p>A Terraform self-service model, which was introduced earlier this year, could “cut down on the training an organization would need to do to get developers up to speed on using the infrastructure-as-code software,” Liese said.</p><p> </p><p>In this “no code” setup, developers can pick from a catalog of no-code-ready modules, which can be deployed directly to workspaces. No need to <a class="ext-link" href="https://learn.hashicorp.com/collections/terraform/configuration-language" rel="external ">learn</a> the <a class="ext-link" href="https://github.com/hashicorp/hcl" rel="external ">HCL configuration language</a>. And the administrators will no longer have to answer the same “how-do-I-do-this-in-HCL?” queries.</p><p> </p><p>The new console interface aims to greatly expand the use of Terraform. The company has been offering self-service options for a while, by way of an architecture that allows for modules to be reused through the private registry for Terraform Cloud and Terraform Enterprise.</p><h2>What is the Moved Block and Why is it Important?</h2><p>The <a class="ext-link" href="https://www.hashicorp.com/blog/terraform-1-3-improves-extensibility-and-maintainability-of-terraform-modules" rel="external ">recent release</a> of <a class="ext-link" href="https://www.terraform.io/downloads" rel="external ">Terraform 1.3</a> came with the promise to greatly reduce the amount of code HCL jockeys must manage, through improvements to the <code>moved</code> block.</p><p> </p><p>Actually, <code>moved</code> has been available since Terraform 1.1, but some kinks were worked out for this latest release.
What <code>moved</code> does is provide the ability to refactor resources within a Terraform configuration file, moving large code blocks off as separate modules, where they can be discovered through a public or private registry.</p><h2>What is Continuous Validation?</h2><p>With the known state of a system captured in Terraform, it is a short step to check that the actual running system is identical to the desired state captured in HCL. Many times “drift” can occur, as administrators, or even the apps themselves, make changes to the system. Especially in regulated environments, such as hospitals, it is essential that a system is in a correct state.</p><p> </p><p>Earlier this year, HashiCorp added <a class="ext-link" href="https://www.hashicorp.com/campaign/drift-detection-for-terraform-cloud" rel="external ">Drift Detection</a> to Terraform Cloud to continuously check infrastructure state to detect changes and provide alerts and offer remediation if that option is chosen. Now, another update, <a class="ext-link" href="https://www.terraform.io/cloud-docs/workspaces/health" rel="external ">Continuous validation</a>, expands these checks to include user assertions, or post-conditions, as well.</p><p> </p><p>One post-condition may be something like ensuring that certificates haven’t expired. If they do, the software can offer an alert to the admin to update the certs. Another condition might be to check for new container images, which may have been updated in response to a security patch.</p>
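<p> </p><p>As a rough sketch of the two features discussed above (the resource and module names here are hypothetical, not taken from the interview), a <code>moved</code> block that records a refactoring, and a post-condition of the sort continuous validation evaluates, might look like this in HCL:</p><pre><code># Hypothetical refactoring: tell Terraform the resource now lives in a
# module, so existing state is moved rather than destroyed and recreated
moved {
  from = aws_instance.web
  to   = module.web_server.aws_instance.web
}

# Hypothetical assertion: evaluated after apply, and re-checked by
# Terraform Cloud health assessments
resource "aws_instance" "app" {
  ami           = "ami-0example"
  instance_type = "t3.micro"

  lifecycle {
    postcondition {
      condition     = self.instance_state == "running"
      error_message = "The instance must be running."
    }
  }
}</code></pre>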
]]></description>
      <pubDate>Wed, 26 Oct 2022 18:47:13 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/latest-enhancements-to-hashicorp-terraform-and-terraform-cloud-V_DZnBYS</link>
      <content:encoded><![CDATA[<h2><strong>What is Terraform?</strong></h2><p>Terraform is HashiCorp’s flagship software. The open source tool provides a way to define IT resources — such as monitoring software or cloud services — in human-readable configuration files. These files, which serve as blueprints, can then be used to automatically provision the systems themselves. Kubernetes deployments, for instance, can be streamlined through Terraform.</p><p> </p><p>"Terraform basically translates what your configuration was codified in by your configuration, and provisions it to that desired end state," explained <a href="https://www.linkedin.com/in/meghanliese/">Meghan Liese</a>, [sponsor_inline_mention slug="hashicorp" ]HashiCorp[/sponsor_inline_mention] vice president of product and partner marketing, in this podcast and video recording, recorded at the company's user conference, HashiConf 2022, held this month in Los Angeles.</p><p> </p><p>For this interview, Liese discusses the latest enhancements to Terraform and Terraform Cloud, a managed service offering that is part of the HashiCorp Cloud Platform.</p><p> </p><p>[Embed Podcast]</p><h2>Why Should Developers be Interested in Terraform?</h2><p>Typically, DevOps teams or system administrators use Terraform to provision infrastructure, but there is also growing interest in allowing developers to do it themselves, in a self-service fashion, Liese explained.
Multicloud skills are in short supply, concluded the <a class="ext-link" href="https://www.hashicorp.com/state-of-the-cloud#skills-shortages-ranked-as-top-multi-cloud-barrier" rel="external ">2022 HashiCorp State of Cloud Strategy Survey</a>, so making the provisioning process easier could help more developers, the company reckons.</p><p> </p><p>A Terraform self-service model, which was introduced earlier this year, could “cut down on the training an organization would need to do to get developers up to speed on using the infrastructure-as-code software,” Liese said.</p><p> </p><p>In this “no code” setup, developers can pick from a catalog of no-code-ready modules, which can be deployed directly to workspaces. No need to <a class="ext-link" href="https://learn.hashicorp.com/collections/terraform/configuration-language" rel="external ">learn</a> the <a class="ext-link" href="https://github.com/hashicorp/hcl" rel="external ">HCL configuration language</a>. And the administrators will no longer have to answer the same “how-do-I-do-this-in-HCL?” queries.</p><p> </p><p>The new console interface aims to greatly expand the use of Terraform. The company has been offering self-service options for a while, by way of an architecture that allows for modules to be reused through the private registry for Terraform Cloud and Terraform Enterprise.</p><h2>What is the Moved Block and Why is it Important?</h2><p>The <a class="ext-link" href="https://www.hashicorp.com/blog/terraform-1-3-improves-extensibility-and-maintainability-of-terraform-modules" rel="external ">recent release</a> of <a class="ext-link" href="https://www.terraform.io/downloads" rel="external ">Terraform 1.3</a> came with the promise to greatly reduce the amount of code HCL jockeys must manage, through improvements to the <code>moved</code> block.</p><p> </p><p>Actually, <code>moved</code> has been available since Terraform 1.1, but some kinks were worked out for this latest release.
What <code>moved</code> does is provide the ability to refactor resources within a Terraform configuration file, moving large code blocks off as separate modules, where they can be discovered through a public or private registry.</p><h2>What is Continuous Validation?</h2><p>With the known state of a system captured in Terraform, it is a short step to check that the actual running system is identical to the desired state captured in HCL. Many times “drift” can occur, as administrators, or even the apps themselves, make changes to the system. Especially in regulated environments, such as hospitals, it is essential that a system is in a correct state.</p><p> </p><p>Earlier this year, HashiCorp added <a class="ext-link" href="https://www.hashicorp.com/campaign/drift-detection-for-terraform-cloud" rel="external ">Drift Detection</a> to Terraform Cloud to continuously check infrastructure state to detect changes and provide alerts and offer remediation if that option is chosen. Now, another update, <a class="ext-link" href="https://www.terraform.io/cloud-docs/workspaces/health" rel="external ">Continuous validation</a>, expands these checks to include user assertions, or post-conditions, as well.</p><p> </p><p>One post-condition may be something like ensuring that certificates haven’t expired. If they do, the software can offer an alert to the admin to update the certs. Another condition might be to check for new container images, which may have been updated in response to a security patch.</p>
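<p> </p><p>As a rough sketch of the two features discussed above (the resource and module names here are hypothetical, not taken from the interview), a <code>moved</code> block that records a refactoring, and a post-condition of the sort continuous validation evaluates, might look like this in HCL:</p><pre><code># Hypothetical refactoring: tell Terraform the resource now lives in a
# module, so existing state is moved rather than destroyed and recreated
moved {
  from = aws_instance.web
  to   = module.web_server.aws_instance.web
}

# Hypothetical assertion: evaluated after apply, and re-checked by
# Terraform Cloud health assessments
resource "aws_instance" "app" {
  ami           = "ami-0example"
  instance_type = "t3.micro"

  lifecycle {
    postcondition {
      condition     = self.instance_state == "running"
      error_message = "The instance must be running."
    }
  }
}</code></pre>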
]]></content:encoded>
      <enclosure length="17163537" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/227890b6-3db1-4ab4-8715-8f94e635a649/audio/cec8bd32-9020-49ad-827e-1ef43c7a2402/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Latest Enhancements to HashiCorp Terraform and Terraform Cloud</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/ec31498c-4fbb-4240-b25c-0347f42e52fe/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:17:52</itunes:duration>
      <itunes:summary>Terraform is HashiCorp’s flagship software. The open source tool provides a way to define IT resources — such as monitoring software or cloud services — in human-readable configuration files. These files, which serve as blueprints, can then be used to automatically provision the systems themselves. Kubernetes deployments, for instance, can be streamlined through Terraform.

&quot;Terraform basically translates what your configuration was codified in by your configuration, and provisions it to that desired end state,&quot; explained Meghan Liese, HashiCorp vice president of product and partner marketing in this podcast and video recording, recorded at the company&apos;s user conference, HashiConf 2022, held this month in Los Angeles.

For this interview, Liese discusses the latest enhancements to Terraform, and Terraform Cloud, a managed service offering that is part of the HashiCorp Cloud Platform.</itunes:summary>
      <itunes:subtitle>Terraform is HashiCorp’s flagship software. The open source tool provides a way to define IT resources — such as monitoring software or cloud services — in human-readable configuration files. These files, which serve as blueprints, can then be used to automatically provision the systems themselves. Kubernetes deployments, for instance, can be streamlined through Terraform.

&quot;Terraform basically translates what your configuration was codified in by your configuration, and provisions it to that desired end state,&quot; explained Meghan Liese, HashiCorp vice president of product and partner marketing in this podcast and video recording, recorded at the company&apos;s user conference, HashiConf 2022, held this month in Los Angeles.

For this interview, Liese discusses the latest enhancements to Terraform, and Terraform Cloud, a managed service offering that is part of the HashiCorp Cloud Platform.</itunes:subtitle>
      <itunes:keywords>software developer, joab jackson, tech podcast, the new stack, devops, devops podcast, meghan liese, tech, developer podcast, hashiconf, the new stack makers, software engineer, hashicorp</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1358</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b72fad84-034d-491f-a7e0-62fcb1d12d3f</guid>
      <title>How ScyllaDB Helped an AdTech Company Focus on Core Business</title>
      <description><![CDATA[<p>GumGum is a company whose platform serves up online ads related to the context in which potential customers are already shopping or searching. (For instance: it will send ads for Zurich restaurants to someone who’s booked travel to Switzerland.) To handle that granular targeting, it relies on its proprietary machine learning platform, Verity.</p><p> </p><p>“For all of our publishers, we send a list of URLs to Verity,” according to <a href="https://www.linkedin.com/in/ksader/">Keith Sader,</a> GumGum’s director of engineering. “Verity goes in and basically categorizes those URLs as different [IAB] categories. So the IAB has tons of taxonomies, based on autos, based upon clothing, based upon entertainment. And then that's how we do our targeting.”</p><p> </p><p>Verity’s targeting data is stored in DynamoDB, but the rest of GumGum’s data is stored in managed MySQL, and its daily tracking data is stored in ScyllaDB, a database designed for data-intensive applications. Scylla, Sader said, helps his company avoid serving audiences the same ads over and over again, by keeping track of which ads customers have already seen.</p><p> </p><p>“That’s where Scylla comes into the picture for us,” he said. “Scylla is our rate limiter on ad serving.”</p><p> </p><p>In this episode of The New Stack’s Makers podcast, Sader and <a href="https://www.linkedin.com/in/dor-laor/">Dor Laor,</a> CEO and co-founder of Scylla, told how GumGum has used ScyllaDB to shift more IT resources to its core business and keep it from repeating ads to audiences that have already seen them, no matter where they travel.</p><p> </p><p>This case study episode of Makers, hosted by <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> TNS features editor, was sponsored by ScyllaDB.</p><p> </p><h2>‘Where Do We Spend Our Limited Funds?’</h2><p> </p><p>Before adding ScyllaDB to its stack, Sader said, “We had a Cassandra-based system that some very smart people put in.
But Cassandra relies upon you to have an engineering staff to support it.</p><p> </p><p>“That’s great. But like many types of systems, managing Cassandra databases is not really what our business makes money at.”</p><p> </p><p>GumGum was hosting its Cassandra database, installed on Amazon Web Services, by itself — and the drain on resources brought the company’s teams to a crossroads, Sader said. “Where do we spend our limited funds? Do we spend it on Cassandra maintenance? Or do we hire someone to do it for us? And that’s really what determined the switch away from a sort of self-installed, self-managed Cassandra to another provider.”</p><p> </p><p>A core issue for GumGum, Sader said, was making sure that it wasn’t over-serving consumers, even as they moved around the globe. “If you see an ad in one place, we need to make sure, if you fly across the country, you don’t see it again,” he said.</p><p> </p><p>That’s an issue Cassandra solved for his company, he said. Because ScyllaDB is a drop-in replacement for Apache Cassandra, it also helped prevent over-serving in all regions of the globe — thus preventing GumGum from losing money.</p><p> </p><p>In addition to managing its database for GumGum and other customers, Laor said that an advantage ScyllaDB brings is an “always on” guarantee.</p><p> </p><p>“We have a big legacy of infrastructure that's supposed to be resilient,” he said. “For example, every implementation of ours has consistent configurable consistency, so you can have multiple replicas.”</p><p> </p><p>Laor added, “Many, many times organizations have multiple data centers. Sometimes it's for disaster recovery, sometimes it's also to shorten the latency and be closer to the client.” Replica databases located in data centers that are geographically distributed, he said, protect against failure in any one data center.</p><p> </p><h2>Seeing Results</h2><p> </p><p>Bringing ScyllaDB to GumGum was not without challenges, both Sader and Laor said.
When ScyllaDB is added to an organization’s stack, Laor said, it likes to start with as small a deployment as possible.</p><p> </p><p>“But in the GumGum case, all of these clients were new processes,” Laor said. “So hundreds or thousands of processes, all trying to connect to the database, it's really a connection storm.”</p><p> </p><p>Scylla’s team created a private version of its database to work on the problem and eventually solved it: “We had to massage the algorithm and make sure that all of the [open source] code committers upstream are summing it up.”</p><p> </p><p>It ultimately designed an admission control mechanism that measures the number of parallel requests the distributed database is handling, and slows down requests that arrive for the first time from a new process. “We tried to have the complexity on our end,” Laor said.</p><p> </p><p>GumGum has seen the results of handing off that complexity and toil to a managed database. “We have pretty much reduced our entire operations effort with Scylla, to almost nothing,” Sader said.</p><p> </p><p>He added, “We're coming into our busy point of the year, ads really get picked up in Q4. So we reach out so we go, ‘Hey, we need more nodes in these regions, can you make that happen for us?’ They go, ‘Yep.’ Give us the things, we pay the money. And it happens.”</p><p> </p><p>In 2021, Sader said, “we increased our volume by probably 75% plus 50%, over our standard. The toughest thing to do in this industry is make things look easy. And Scylla helped us make ad serving look easy.”</p><p> </p><p>Check out the podcast to get more detail about GumGum’s move to a managed database.</p>
]]></description>
      <pubDate>Thu, 20 Oct 2022 20:10:32 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-scylladb-helped-an-adtech-company-focus-on-core-business-surkBK0J</link>
      <content:encoded><![CDATA[<p>GumGum is a company whose platform serves up online ads related to the context in which potential customers are already shopping or searching. (For instance: it will send ads for Zurich restaurants to someone who’s booked travel to Switzerland.) To handle that granular targeting, it relies on its proprietary machine learning platform, Verity.</p><p> </p><p>“For all of our publishers, we send a list of URLs to Verity,” according to <a href="https://www.linkedin.com/in/ksader/">Keith Sader,</a> GumGum’s director of engineering. “Verity goes in and basically categorizes those URLs as different [IAB] categories. So the IAB has tons of taxonomies, based on autos, based upon clothing, based upon entertainment. And then that's how we do our targeting.”</p><p> </p><p>Verity’s targeting data is stored in DynamoDB, but the rest of GumGum’s data is stored in managed MySQL, and its daily tracking data is stored in ScyllaDB, a database designed for data-intensive applications. Scylla, Sader said, helps his company avoid serving audiences the same ads over and over again, by keeping track of which ads customers have already seen.</p><p> </p><p>“That’s where Scylla comes into the picture for us,” he said.
“Scylla is our rate limiter on ad serving.”</p><p> </p><p>In this episode of The New Stack’s Makers podcast, Sader and <a href="https://www.linkedin.com/in/dor-laor/">Dor Laor,</a> CEO and co-founder of Scylla, told how GumGum has used ScyllaDB to shift more IT resources to its core business and keep it from repeating ads to audiences that have already seen them, no matter where they travel.</p><p> </p><p>This case study episode of Makers, hosted by <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> TNS features editor, was sponsored by ScyllaDB.</p><p> </p><h2>‘Where Do We Spend Our Limited Funds?’</h2><p> </p><p>Before adding ScyllaDB to its stack, Sader said, “We had a Cassandra-based system that some very smart people put in. But Cassandra relies upon you to have an engineering staff to support it.</p><p> </p><p>“That’s great. But like many types of systems, managing Cassandra databases is not really what our business makes money at.”</p><p> </p><p>GumGum was hosting its Cassandra database, installed on Amazon Web Services, by itself — and the drain on resources brought the company’s teams to a crossroads, Sader said. “Where do we spend our limited funds? Do we spend it on Cassandra maintenance? Or do we hire someone to do it for us? And that’s really what determined the switch away from a sort of self-installed, self-managed Cassandra to another provider.”</p><p> </p><p>A core issue for GumGum, Sader said, was making sure that it wasn’t over-serving consumers, even as they moved around the globe. “If you see an ad in one place, we need to make sure, if you fly across the country, you don’t see it again,” he said.</p><p> </p><p>That’s an issue Cassandra solved for his company, he said.
Because ScyllaDB is a drop-in replacement for Apache Cassandra, it also helped prevent over-serving in all regions of the globe — thus preventing GumGum from losing money.</p><p> </p><p>In addition to managing its database for GumGum and other customers, Laor said that an advantage ScyllaDB brings is an “always on” guarantee.</p><p> </p><p>“We have a big legacy of infrastructure that's supposed to be resilient,” he said. “For example, every implementation of ours has consistent configurable consistency, so you can have multiple replicas.”</p><p> </p><p>Laor added, “Many many times organizations have multiple data centers. Sometimes it's for disaster recovery, sometimes it's also to shorten the latency and be closer to the client.” Replica databases located in data centers that are geographically distributed, he said, protect against failure in any one data center.</p><p> </p><h2>Seeing Results</h2><p> </p><p>Bringing ScyllaDB to GumGum was not without challenges, both Sader and Laor said. When ScyllaDB is added to an organization’s stack, Laor said, his team likes to start with as small a deployment as possible.</p><p> </p><p>“But in the GumGum case, all of these clients were new processes,” Laor said. “So hundreds or thousands of processes, all trying to connect to the database, it's really a connection storm.”</p><p> </p><p>Scylla’s team created a private version of its database to work on the problem and eventually solved it: “We had to massage the algorithm and make sure that all of the [open source] code committers upstream are summing it up.”</p><p> </p><p>It ultimately designed an admission control mechanism that measures the number of parallel requests the distributed database is handling and slows down requests that arrive for the first time from a new process. “We tried to have the complexity on our end,” Laor said.</p><p> </p><p>GumGum has seen the results of handing off that complexity and toil to a managed database. 
“We have pretty much reduced our entire operations effort with Scylla, to almost nothing,” Sader said.</p><p> </p><p>He added, “We're coming into our busy point of the year, ads really get picked up in Q4. So we reach out so we go, ‘Hey, we need more nodes in these regions, can you make that happen for us?’ They go, ‘Yep.’ Give us the things, we pay the money. And it happens.”</p><p> </p><p>In 2021, Sader said, “we increased our volume by probably 75% plus 50%, over our standard. The toughest thing to do in this industry is make things look easy. And Scylla helped us make ad serving look easy.”</p><p> </p><p>Check out the podcast to get more detail about GumGum’s move to a managed database.</p>
]]></content:encoded>
      <enclosure length="25784838" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/315d3f5f-d798-42df-a6ad-f7884610d9e6/audio/8332ce04-f162-4992-9c6d-de5d7aaa3f76/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How ScyllaDB Helped an AdTech Company Focus on Core Business</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:26:51</itunes:duration>
      <itunes:summary>GumGum is a company whose platform serves up online ads related to the context in which potential customers are already shopping or searching. (For instance: it will send ads for Zurich restaurants to someone who’s booked travel to Switzerland.) To handle that granular targeting, it relies on its proprietary machine learning platform, Verity.

“For all of our publishers, we send a list of URLs to Verity,” according to Keith Sader, GumGum’s director of engineering. “Verity goes in and basically categorizes those URLs as different [internal bus] categories. So the IB has tons of taxonomies, based on autos, based upon clothing based upon entertainment. And then that&apos;s how we do our targeting.”

In this episode of The New Stack’s Makers podcast, Sader and Dor Laor, CEO and co-founder of Scylla, told how GumGum has used ScyllaDB to shift more IT resources to its core business and keep it from repeating ads to audiences that have already seen them, no matter where they travel.

This case study episode of Makers, hosted by Heather Joslyn, TNS features editor, was sponsored by ScyllaDB.</itunes:summary>
      <itunes:subtitle>GumGum is a company whose platform serves up online ads related to the context in which potential customers are already shopping or searching. (For instance: it will send ads for Zurich restaurants to someone who’s booked travel to Switzerland.) To handle that granular targeting, it relies on its proprietary machine learning platform, Verity.

“For all of our publishers, we send a list of URLs to Verity,” according to Keith Sader, GumGum’s director of engineering. “Verity goes in and basically categorizes those URLs as different [internal bus] categories. So the IB has tons of taxonomies, based on autos, based upon clothing based upon entertainment. And then that&apos;s how we do our targeting.”

In this episode of The New Stack’s Makers podcast, Sader and Dor Laor, CEO and co-founder of Scylla, told how GumGum has used ScyllaDB to shift more IT resources to its core business and keep it from repeating ads to audiences that have already seen them, no matter where they travel.

This case study episode of Makers, hosted by Heather Joslyn, TNS features editor, was sponsored by ScyllaDB.</itunes:subtitle>
      <itunes:keywords>software developer, gumgum, tech podcast, the new stack, dor laor, heather joslyn, scylladb, devops, devops podcast, tech, developer podcast, the new stack makers, software engineer, keith sader</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1357</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2aaaab99-49f0-456e-ab7e-c2d963e26651</guid>
      <title>Terraform&apos;s Best Practices and Pitfalls</title>
<description><![CDATA[<p><a href="https://www.wix.com/">Wix</a> is a cloud-based development site for making <a href="https://thenewstack.io/html-5-1-replaces-html-5-w3c-standard/">HTML 5</a> websites and mobile sites with drag and drop tools. It is suited for the beginning user or the advanced developer, said <a href="https://www.linkedin.com/in/hila-fish/?originalSubdomain=il">Hila Fish</a>, senior DevOps engineer for Wix, in an interview for <a href="https://thenewstack.io/podcasts/">The New Stack Makers</a> at <a href="https://www.hashicorp.com/">HashiCorp’s</a> HashiConf Global conference in Los Angeles earlier this month.</p><p> </p><p>Our questions for Fish focused on <a href="https://thenewstack.io/terraform-cloud-now-offers-less-code-and-no-code-options/">Terraform</a>, the open source infrastructure-as-code software tool:</p><p> </p><ul><li>How has Terraform evolved in use since Fish started using it in 2018?</li><li>How does Wix make the most of Terraform to scale its infrastructure?</li><li>What are some best practices Wix has used with Terraform?</li><li>What are some pitfalls to avoid with Terraform?</li><li>What is the approach to scaling across teams and avoiding refactoring to keep the integrations elegant and working?</li></ul><p> </p><p>Fish started using Terraform in an ad-hoc manner back in 2018. Over time she has learned how to use it for scaling operations.</p><p> </p><p>“If you want to scale your infrastructure, you need to use Terraform in a way that will allow you to do that,” Fish said. 
</p><p> </p><p>Terraform can be used ad-hoc to create a machine as a resource, but scale comes with enabling infrastructure that allows the engineers to develop templates that get reused across many servers.</p><p> </p><p>“You need to use it in a way that will allow you to scale up as much as you can,” Fish said.</p><p> </p><p>Fish said best practices come from how to structure the Terraform code base.</p><p> </p><p>Much of it comes down to the teams and how Terraform gets implemented. Engineers each have their way of working. Standard practices can help. In onboarding new teams, a structured code base can be beneficial. New teams onboard and use modules already in the code base.</p><p> </p><p>And what are some of the pitfalls of using Terraform?</p><p> </p><p>We get to that in the recording, along with more about integrations, why Wix is still on version 0.13, and some new capabilities for developers to use Terraform.</p><p> </p><p>Users have historically needed to learn HashiCorp Configuration Language (HCL) to use Terraform. At Wix, Fish said, the company is implementing Terraform on the backend with a UI that developers can use without needing to learn HCL.</p>
]]></description>
      <pubDate>Wed, 19 Oct 2022 17:12:01 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/terraforms-best-practices-and-pitfalls-vFjeQIlb</link>
<content:encoded><![CDATA[<p><a href="https://www.wix.com/">Wix</a> is a cloud-based development site for making <a href="https://thenewstack.io/html-5-1-replaces-html-5-w3c-standard/">HTML 5</a> websites and mobile sites with drag and drop tools. It is suited for the beginning user or the advanced developer, said <a href="https://www.linkedin.com/in/hila-fish/?originalSubdomain=il">Hila Fish</a>, senior DevOps engineer for Wix, in an interview for <a href="https://thenewstack.io/podcasts/">The New Stack Makers</a> at <a href="https://www.hashicorp.com/">HashiCorp’s</a> HashiConf Global conference in Los Angeles earlier this month.</p><p> </p><p>Our questions for Fish focused on <a href="https://thenewstack.io/terraform-cloud-now-offers-less-code-and-no-code-options/">Terraform</a>, the open source infrastructure-as-code software tool:</p><p> </p><ul><li>How has Terraform evolved in use since Fish started using it in 2018?</li><li>How does Wix make the most of Terraform to scale its infrastructure?</li><li>What are some best practices Wix has used with Terraform?</li><li>What are some pitfalls to avoid with Terraform?</li><li>What is the approach to scaling across teams and avoiding refactoring to keep the integrations elegant and working?</li></ul><p> </p><p>Fish started using Terraform in an ad-hoc manner back in 2018. Over time she has learned how to use it for scaling operations.</p><p> </p><p>“If you want to scale your infrastructure, you need to use Terraform in a way that will allow you to do that,” Fish said. 
</p><p> </p><p>Terraform can be used ad-hoc to create a machine as a resource, but scale comes with enabling infrastructure that allows the engineers to develop templates that get reused across many servers.</p><p> </p><p>“You need to use it in a way that will allow you to scale up as much as you can,” Fish said.</p><p> </p><p>Fish said best practices come from how to structure the Terraform code base.</p><p> </p><p>Much of it comes down to the teams and how Terraform gets implemented. Engineers each have their way of working. Standard practices can help. In onboarding new teams, a structured code base can be beneficial. New teams onboard and use modules already in the code base.</p><p> </p><p>And what are some of the pitfalls of using Terraform?</p><p> </p><p>We get to that in the recording, along with more about integrations, why Wix is still on version 0.13, and some new capabilities for developers to use Terraform.</p><p> </p><p>Users have historically needed to learn HashiCorp Configuration Language (HCL) to use Terraform. At Wix, Fish said, the company is implementing Terraform on the backend with a UI that developers can use without needing to learn HCL.</p>
]]></content:encoded>
      <enclosure length="13674053" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/63ed8a97-4def-4c42-b23a-d6db4b8ed432/audio/8b1e82cb-11e6-4b53-8151-28c2c931fe16/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Terraform&apos;s Best Practices and Pitfalls</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/fbbaa388-fe69-46b0-a37f-8fd49e3e2f66/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:14:14</itunes:duration>
      <itunes:summary>Wix is a cloud-based development site for making HTML 5 websites and mobile sites with drag and drop tools. It is suited for the beginning user or the advanced developer, said Hila Fish, senior DevOps engineer for Wix, in an interview for The New Stack Makers at HashiCorp’s HashiConf Global conference in Los Angeles earlier this month.</itunes:summary>
      <itunes:subtitle>Wix is a cloud-based development site for making HTML 5 websites and mobile sites with drag and drop tools. It is suited for the beginning user or the advanced developer, said Hila Fish, senior DevOps engineer for Wix, in an interview for The New Stack Makers at HashiCorp’s HashiConf Global conference in Los Angeles earlier this month.</itunes:subtitle>
      <itunes:keywords>wix, software developer, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, hashiconf, hila fish, the new stack makers, software engineer, terraform</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1356</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">691d9895-cf19-4eb5-8f5f-a8954c903839</guid>
      <title>How Can Open Source Help Fight Climate Change?</title>
<description><![CDATA[<p>DUBLIN — The mission of <a href="https://www.lfenergy.org/">Linux Foundation Energy</a> — a collaborative, international effort by power companies to help move the world away from fossil fuels — has never seemed more urgent.</p><p> </p><p>In addition to the increased frequency and ferocity of extreme weather events like hurricanes and heat waves, the war between Russia and Ukraine has oil-dependent countries looking ahead to a winter of likely energy shortages.</p><p> </p><p>“I think we need to go faster,” said <a href="https://www.linkedin.com/in/benoitjeanson/">Benoît Jeanson,</a> an enterprise architect at RTE, the French electricity transmission system operator. He added, “What we are doing with the Linux Foundation Energy is really something that will help for the future, and we need to go faster and faster.”</p><p> </p><p>For this On the Road episode of The New Stack’s Makers podcast, recorded at Open Source Summit Europe here, we were joined by two guests who work in the power industry and whose organizations are part of LF Energy.</p><p> </p><p>In addition to Jeanson, this episode featured <a href="https://www.linkedin.com/in/jonasvandenbogaard/">Jonas van den Bogaard</a>, a solution architect and open source ambassador at Alliander, an energy network company that provides energy transport and distribution to a large part of the Netherlands. Van den Bogaard also serves on the technical advisory council of LF Energy.</p><p> </p><p><a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> features editor of TNS, hosted this conversation.</p><p><h2>18 Open Source Projects</h2></p><p>LF Energy, started in 2018, now includes <a href="https://landscape.lfenergy.org/members">59 member organizations,</a> including cloud providers Google and Microsoft, enterprises like General Electric, and research institutions like Stanford University. 
It currently hosts <a href="https://www.lfenergy.org/projects/">18 open source projects</a>; the podcast guests encouraged listeners to check them out and contribute to them.</p><p> </p><p>Among them: <a href="https://www.lfenergy.org/projects/openstef/">OpenSTEF,</a> automated machine learning pipelines that deliver accurate forecasts of the load on the energy grid 48 hours ahead of time. “It gives us the opportunity to take action in time to prevent the maximum grid capacity [from being] reached,” said van den Bogaard.</p><p> </p><p>“That’s going to prevent blackouts and that sort of thing. And also, another side: it makes us able to add renewable energies to the grid.”</p><p> </p><p>Jeanson said that the open source projects aim to cover “every level of the stack. We also have tools that we want to develop at the substation level, in the field.” Among them: <a href="https://www.lfenergy.org/projects/operatorfabric/">OperatorFabric.</a> Written in Java and based on the Spring framework, it is a modular, extensible platform for systems operators, with several features aimed at helping utility operators.</p><p> </p><p>It helps operators coordinate the many tasks and alerts they need to keep track of by aggregating notifications from several applications into a single screen.</p><p> </p><p>“Energy is of importance for everyone,” said van den Bogaard. “And especially moving to more cleaner and renewable energy is key for us all. We have great minds all around the world. And I really believe that we can achieve that. The best way to do that is to combine the efforts of all those great minds. 
Open source can be a great enabler of that.”</p><p><h2>Cultural Education Needed</h2></p><p>But persuading decision-makers in the power industry to participate in building the next generation of open source solutions can be a challenge, van den Bogaard acknowledged.</p><p> </p><p>“You see, that the energy domain has been there for a long time, and has been quite stable, up to like 10 years ago,” he said. In such a tradition-bound culture, change is hard. In the cloud era, he added, a lot of organizations “need to digitalize and focus more on it and those capabilities are new. And also, open source, for in that matter is also a very new concept.”</p><p> </p><p>One obstacle in the energy industry taking more advantage of open source tools, Jeanson noted, is security: “Some organizations still see open source to be a potential risk.” Getting them on board, he said, requires education and training.</p><p> </p><p>He added, “Vendors need to understand that open source is an opportunity that they should not be afraid of. That we want to do business with them based on open source. We just need to accelerate the momentum.”</p><p> </p><p>Check out the whole episode to learn more about LF Energy’s work.</p>
]]></description>
      <pubDate>Tue, 18 Oct 2022 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-can-open-source-help-fight-climate-change-R4ibUer_</link>
<content:encoded><![CDATA[<p>DUBLIN — The mission of <a href="https://www.lfenergy.org/">Linux Foundation Energy</a> — a collaborative, international effort by power companies to help move the world away from fossil fuels — has never seemed more urgent.</p><p> </p><p>In addition to the increased frequency and ferocity of extreme weather events like hurricanes and heat waves, the war between Russia and Ukraine has oil-dependent countries looking ahead to a winter of likely energy shortages.</p><p> </p><p>“I think we need to go faster,” said <a href="https://www.linkedin.com/in/benoitjeanson/">Benoît Jeanson,</a> an enterprise architect at RTE, the French electricity transmission system operator. He added, “What we are doing with the Linux Foundation Energy is really something that will help for the future, and we need to go faster and faster.”</p><p> </p><p>For this On the Road episode of The New Stack’s Makers podcast, recorded at Open Source Summit Europe here, we were joined by two guests who work in the power industry and whose organizations are part of LF Energy.</p><p> </p><p>In addition to Jeanson, this episode featured <a href="https://www.linkedin.com/in/jonasvandenbogaard/">Jonas van den Bogaard</a>, a solution architect and open source ambassador at Alliander, an energy network company that provides energy transport and distribution to a large part of the Netherlands. Van den Bogaard also serves on the technical advisory council of LF Energy.</p><p> </p><p><a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn,</a> features editor of TNS, hosted this conversation.</p><p><h2>18 Open Source Projects</h2></p><p>LF Energy, started in 2018, now includes <a href="https://landscape.lfenergy.org/members">59 member organizations,</a> including cloud providers Google and Microsoft, enterprises like General Electric, and research institutions like Stanford University. 
It currently hosts <a href="https://www.lfenergy.org/projects/">18 open source projects</a>; the podcast guests encouraged listeners to check them out and contribute to them.</p><p> </p><p>Among them: <a href="https://www.lfenergy.org/projects/openstef/">OpenSTEF,</a> automated machine learning pipelines that deliver accurate forecasts of the load on the energy grid 48 hours ahead of time. “It gives us the opportunity to take action in time to prevent the maximum grid capacity [from being] reached,” said van den Bogaard.</p><p> </p><p>“That’s going to prevent blackouts and that sort of thing. And also, another side: it makes us able to add renewable energies to the grid.”</p><p> </p><p>Jeanson said that the open source projects aim to cover “every level of the stack. We also have tools that we want to develop at the substation level, in the field.” Among them: <a href="https://www.lfenergy.org/projects/operatorfabric/">OperatorFabric.</a> Written in Java and based on the Spring framework, it is a modular, extensible platform for systems operators, with several features aimed at helping utility operators.</p><p> </p><p>It helps operators coordinate the many tasks and alerts they need to keep track of by aggregating notifications from several applications into a single screen.</p><p> </p><p>“Energy is of importance for everyone,” said van den Bogaard. “And especially moving to more cleaner and renewable energy is key for us all. We have great minds all around the world. And I really believe that we can achieve that. The best way to do that is to combine the efforts of all those great minds. 
Open source can be a great enabler of that.”</p><p><h2>Cultural Education Needed</h2></p><p>But persuading decision-makers in the power industry to participate in building the next generation of open source solutions can be a challenge, van den Bogaard acknowledged.</p><p> </p><p>“You see, that the energy domain has been there for a long time, and has been quite stable, up to like 10 years ago,” he said. In such a tradition-bound culture, change is hard. In the cloud era, he added, a lot of organizations “need to digitalize and focus more on it and those capabilities are new. And also, open source, for in that matter is also a very new concept.”</p><p> </p><p>One obstacle in the energy industry taking more advantage of open source tools, Jeanson noted, is security: “Some organizations still see open source to be a potential risk.” Getting them on board, he said, requires education and training.</p><p> </p><p>He added, “Vendors need to understand that open source is an opportunity that they should not be afraid of. That we want to do business with them based on open source. We just need to accelerate the momentum.”</p><p> </p><p>Check out the whole episode to learn more about LF Energy’s work.</p>
]]></content:encoded>
      <enclosure length="12305251" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/bd3d80e3-2e9c-4cc3-95a0-a89395e3ab0b/audio/7bf604dc-a11e-41a6-ae3d-dae84a4b8284/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Can Open Source Help Fight Climate Change?</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/fe632b17-e30a-4bb0-8a96-7ff9acea91c5/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:12:49</itunes:duration>
      <itunes:summary>DUBLIN — The mission of Linux Foundation Energy —  a collaborative, international effort by power companies to help move the world away from fossil fuels — has never seemed more urgent.

In addition to the increased frequency and ferocity of extreme weather events like hurricanes and heat waves, the war between Russia and Ukraine has oil-dependent countries looking ahead to a winter of likely energy shortages.

“I think we need to go faster,” said Benoît Jeanson, an enterprise architect at RTE, the French electricity transmission system operator. He added, “What we are doing with the Linux Foundation Energy is really something that will help for the future, and we need to go faster and faster.”

For this On the Road episode of The New Stack’s Makers podcast, recorded at Open Source Summit Europe here, we were joined by two guests who work in the power industry and whose organizations are part of LF Energy.</itunes:summary>
      <itunes:subtitle>DUBLIN — The mission of Linux Foundation Energy —  a collaborative, international effort by power companies to help move the world away from fossil fuels — has never seemed more urgent.

In addition to the increased frequency and ferocity of extreme weather events like hurricanes and heat waves, the war between Russia and Ukraine has oil-dependent countries looking ahead to a winter of likely energy shortages.

“I think we need to go faster,” said Benoît Jeanson, an enterprise architect at RTE, the French electricity transmission system operator. He added, “What we are doing with the Linux Foundation Energy is really something that will help for the future, and we need to go faster and faster.”

For this On the Road episode of The New Stack’s Makers podcast, recorded at Open Source Summit Europe here, we were joined by two guests who work in the power industry and whose organizations are part of LF Energy.</itunes:subtitle>
      <itunes:keywords>the linux foundation, software developer, tech podcast, benoît jeanson, the new stack, devops, devops podcast, tech, developer podcast, the new stack makers, software engineer, jonas van den bogaard</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1355</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f1a44f34-2ff9-48e3-84d9-1118b04eab88</guid>
      <title>KubeCon+CloudNativeCon 2022 Rolls into Detroit</title>
<description><![CDATA[<p>It's that time of the year again, when cloud native enthusiasts and professionals assemble to discuss all things Kubernetes. KubeCon+CloudNativeCon 2022 is being held later this month in Detroit, October 24-28.</p><p> </p><p>In this latest edition of The New Stack Makers podcast, we spoke with <a href="https://www.linkedin.com/in/pritianka/">Priyanka Sharma,</a> general manager of the Cloud Native Computing Foundation — which organizes KubeCon — and CERN computer engineer and KubeCon co-chair <a href="https://www.linkedin.com/in/ricardo-rocha-739aa718/?originalSubdomain=ch">Ricardo Rocha</a>. For this show, we discussed what we can expect from the upcoming event.</p><p> </p><p>This year, there will be a focus on Kubernetes in the enterprise, Sharma said. "We are reaching a point where Kubernetes is becoming the de facto standard when it comes to container orchestration. And there's a reason for it. It's not just about Kubernetes. Kubernetes spawned the cloud native ecosystem and the heart of the cloud native movement is building fast, resiliently observable software that meets customer needs. So ultimately, it's making you a better provider to your customers, no matter what kind of business you are."</p><p> </p><p>Of this year's topics, security will be a big theme, Rocha said. Technologies such as Falco and Cilium will be discussed. Linux kernel add-on eBPF is popping up in a lot of topics, especially around networking. Observability and hybrid deployments also weigh heavily on the agenda. "The number of solutions [around Hybrid] are quite large, so it's interesting to see what people come up with," he said.</p><p> </p><p>In addition to KubeCon itself, this year there are a number of co-located events, held during or before the main conference. Some of them are hosted by CNCF, while others are hosted by other companies such as Canonical. 
They include the Network Application Day, BackstageCon, CloudNative eBPF Day, CloudNativeSecurityCon, CloudNative WASM Day, Data-on-Kubernetes Day, EnvoyCon, gRPCConf, KNativeCon, Spinnaker Summit, Open Observability Day, Cloud Native Telco Day, Operator Day, and the Continuous Delivery Summit, among others.</p><p> </p><p>What's amazing is not only the number of co-located events, but the high quality of talks being held there.</p><p> </p><p>"Co-located events are a great way to know what's exciting to folks in the ecosystem right now," Sharma said. "Cloud native has really become the scaffolding of future progress. People want to build on cloud native, but have their own focus areas."</p><p> </p><p>WebAssembly (WASM) is a great example of this. "In the beginning, you wouldn't have thought of WebAssembly as part of the cloud native narrative, but here we are," Sharma said. "The same thinking from professionals who conceptualized cloud native in the beginning are now taking it a step further."</p><p> </p><p>"There's a lot of value in co-located events, because you get a group of people for a longer period in the same room, focusing on one topic," Rocha said.</p><p> </p><p>Other topics discussed in the podcast include the choice of Detroit as a conference hub, the fun activities that CNCF has planned in between the technical sessions, surprises at the keynotes, and so much more! Give it a listen.</p>
]]></description>
      <pubDate>Thu, 13 Oct 2022 19:06:31 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/kubeconcloudnativecon-2022-rolls-into-detroit-ejmM2ycr</link>
<content:encoded><![CDATA[<p>It's that time of the year again, when cloud native enthusiasts and professionals assemble to discuss all things Kubernetes. KubeCon+CloudNativeCon 2022 is being held later this month in Detroit, October 24-28.</p><p> </p><p>In this latest edition of The New Stack Makers podcast, we spoke with <a href="https://www.linkedin.com/in/pritianka/">Priyanka Sharma,</a> general manager of the Cloud Native Computing Foundation — which organizes KubeCon — and CERN computer engineer and KubeCon co-chair <a href="https://www.linkedin.com/in/ricardo-rocha-739aa718/?originalSubdomain=ch">Ricardo Rocha</a>. For this show, we discussed what we can expect from the upcoming event.</p><p> </p><p>This year, there will be a focus on Kubernetes in the enterprise, Sharma said. "We are reaching a point where Kubernetes is becoming the de facto standard when it comes to container orchestration. And there's a reason for it. It's not just about Kubernetes. Kubernetes spawned the cloud native ecosystem and the heart of the cloud native movement is building fast, resiliently observable software that meets customer needs. So ultimately, it's making you a better provider to your customers, no matter what kind of business you are."</p><p> </p><p>Of this year's topics, security will be a big theme, Rocha said. Technologies such as Falco and Cilium will be discussed. Linux kernel add-on eBPF is popping up in a lot of topics, especially around networking. Observability and hybrid deployments also weigh heavily on the agenda. "The number of solutions [around Hybrid] are quite large, so it's interesting to see what people come up with," he said.</p><p> </p><p>In addition to KubeCon itself, this year there are a number of co-located events, held during or before the main conference. Some of them are hosted by CNCF, while others are hosted by other companies such as Canonical. 
They include the Network Application Day, BackstageCon, CloudNative eBPF Day, CloudNativeSecurityCon, CloudNative WASM Day, Data-on-Kubernetes Day, EnvoyCon, gRPCConf, KNativeCon, Spinnaker Summit, Open Observability Day, Cloud Native Telco Day, Operator Day, The Continuous Delivery Summit, among others.</p><p> </p><p>What's amazing is not only the number of co-located events, but also the high quality of the talks held there.</p><p> </p><p>"Co-located events are a great way to know what's exciting to folks in the ecosystem right now," Sharma said. "Cloud native has really become the scaffolding of future progress. People want to build on cloud native, but have their own focus areas."</p><p> </p><p>WebAssembly (WASM) is a great example of this. "In the beginning, you wouldn't have thought of WebAssembly as part of the cloud native narrative, but here we are," Sharma said. "The same thinking from professionals who conceptualized cloud native in the beginning are now taking it a step further."</p><p> </p><p>"There's a lot of value in co-located events, because you get a group of people for a longer period in the same room, focusing on one topic," Rocha said.</p><p> </p><p>Other topics discussed in the podcast include the choice of Detroit as a conference hub, the fun activities that the CNCF has planned in between the technical sessions, surprises at the keynotes, and so much more! Give it a listen.</p>
]]></content:encoded>
      <enclosure length="25943661" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/b639e814-f64e-4ee6-bdb2-1f408c87b58d/audio/6f3d73dc-f7e7-4bf1-8dbe-fad62e514d34/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>KubeCon+CloudNativeCon 2022 Rolls into Detroit</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:27:01</itunes:duration>
      <itunes:summary>It&apos;s that time of the year again, when cloud native enthusiasts and professionals assemble to discuss all things Kubernetes. KubeCon+CloudNativeCon 2022 is being held later this month in Detroit, October 24-28.

In this latest edition of The New Stack Makers podcast, we spoke with Priyanka Sharma, general manager of the Cloud Native Computing Foundation — which organizes KubeCon — and CERN computer engineer and KubeCon co-chair Ricardo Rocha. For this show, we discussed what we can expect from the upcoming event.</itunes:summary>
      <itunes:subtitle>It&apos;s that time of the year again, when cloud native enthusiasts and professionals assemble to discuss all things Kubernetes. KubeCon+CloudNativeCon 2022 is being held later this month in Detroit, October 24-28.

In this latest edition of The New Stack Makers podcast, we spoke with Priyanka Sharma, general manager of the Cloud Native Computing Foundation — which organizes KubeCon — and CERN computer engineer and KubeCon co-chair Ricardo Rocha. For this show, we discussed what we can expect from the upcoming event.</itunes:subtitle>
      <itunes:keywords>cloud native computing foundation, software developer, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, ricardo rocha, cloud native con, priyanka sharma, the new stack makers, software engineer, kccnc, kubecon</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1354</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b8fc997f-aa0e-4f3e-91aa-1d8b3d867de1</guid>
      <title>Armon Dadgar on HashiCorp&apos;s Practitioner Approach</title>
      <description><![CDATA[<p><a href="https://www.linkedin.com/in/armon-dadgar/">Armon Dadgar</a> and <a href="https://www.linkedin.com/in/mitchellh/">Mitchell Hashimoto</a> are long-time open source practitioners. It's that practitioner focus they established as core to their approach when they started <a href="https://www.hashicorp.com/">HashiCorp</a> about ten years ago. Today, HashiCorp is a publicly traded company.</p><p> </p><p>Before they started HashiCorp, Dadgar and Hashimoto were students at the University of Washington. Through college and afterward, they cut their teeth on open source, learning how to build software in the open.</p><p> </p><p>HashiCorp's business is an outgrowth of their work as practitioners in open source communities, said Dadgar, co-founder and CTO of HashiCorp, in an interview at the HashiConf conference in Los Angeles earlier this month.</p><p> </p><p>Both of them wanted to recreate the asynchronous collaboration that they loved so much about the open source projects they worked on as practitioners, Dadgar said. They knew that they did not want bureaucracy or a hard-to-follow roadmap.</p><p> </p><p>Dadgar cited Terraform as an example of their approach. <a href="https://thenewstack.io/terraform-cloud-now-offers-less-code-and-no-code-options/">Terraform</a> is HashiCorp's open source infrastructure-as-code tool, and it reflects the company's model of controlling its core while providing a good user experience. That experience goes beyond community development and into the application architecture itself.</p><p> </p><p>"If you're a weekend warrior, and you want to contribute something, you're not gonna go read this massively complicated codebase to understand how it works, just to do an integration," Dadgar said. "So instead, we built a very specific integration surface area for Terraform."</p><p> </p><p>The integration is about 200 lines of code, Dadgar said. 
They call this their core-plus-plugin model, with a prescriptive scaffold, examples of how to integrate, and the SDK. Their "golden path" to integration is how the company has developed a program that today has about 2,500 providers.</p><p> </p><p>The HashiCorp open source model relies on that core-plus-plugin design. On Twitter, one person asked why HashiCorp isn't a proprietary company. Dadgar referred to HashiCorp's open source approach when asked that question in our interview.</p><p> </p><p>"Oh, that's an interesting question," Dadgar said. "You know, I think it'd be a much harder company to scale. And what I mean by that is, if you take a look at like a Terraform community or Vault – there's thousands of contributors. And that's what solves the integration problem. Right? And so if you said, we were proprietary, hey, how many engineers would it take to build 2,000 Terraform integrations? It'd be a whole lot more people than we have today. And so I think fundamentally, what open source helps you solve is the fact that, you know, modern infrastructure has this really wide surface area of integration. And I don't think you can solve that as a proprietary business."</p><p> </p><p>"I don't think we'd be able to have nearly the breadth of integration. We could maybe cover the core cloud providers. But you'd have 50 Terraform providers, not 2,500 Terraform providers."</p><p> </p><p> </p>
]]></description>
      <pubDate>Wed, 12 Oct 2022 20:19:02 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/armon-dadgar-on-hashicorps-practitioner-approach-6PkUgsPP</link>
      <content:encoded><![CDATA[<p><a href="https://www.linkedin.com/in/armon-dadgar/">Armon Dadgar</a> and <a href="https://www.linkedin.com/in/mitchellh/">Mitchell Hashimoto</a> are long-time open source practitioners. It's that practitioner focus they established as core to their approach when they started <a href="https://www.hashicorp.com/">HashiCorp</a> about ten years ago. Today, HashiCorp is a publicly traded company.</p><p> </p><p>Before they started HashiCorp, Dadgar and Hashimoto were students at the University of Washington. Through college and afterward, they cut their teeth on open source, learning how to build software in the open.</p><p> </p><p>HashiCorp's business is an outgrowth of their work as practitioners in open source communities, said Dadgar, co-founder and CTO of HashiCorp, in an interview at the HashiConf conference in Los Angeles earlier this month.</p><p> </p><p>Both of them wanted to recreate the asynchronous collaboration that they loved so much about the open source projects they worked on as practitioners, Dadgar said. They knew that they did not want bureaucracy or a hard-to-follow roadmap.</p><p> </p><p>Dadgar cited Terraform as an example of their approach. <a href="https://thenewstack.io/terraform-cloud-now-offers-less-code-and-no-code-options/">Terraform</a> is HashiCorp's open source infrastructure-as-code tool, and it reflects the company's model of controlling its core while providing a good user experience. That experience goes beyond community development and into the application architecture itself.</p><p> </p><p>"If you're a weekend warrior, and you want to contribute something, you're not gonna go read this massively complicated codebase to understand how it works, just to do an integration," Dadgar said. "So instead, we built a very specific integration surface area for Terraform."</p><p> </p><p>The integration is about 200 lines of code, Dadgar said. 
They call this their core-plus-plugin model, with a prescriptive scaffold, examples of how to integrate, and the SDK. Their "golden path" to integration is how the company has developed a program that today has about 2,500 providers.</p><p> </p><p>The HashiCorp open source model relies on that core-plus-plugin design. On Twitter, one person asked why HashiCorp isn't a proprietary company. Dadgar referred to HashiCorp's open source approach when asked that question in our interview.</p><p> </p><p>"Oh, that's an interesting question," Dadgar said. "You know, I think it'd be a much harder company to scale. And what I mean by that is, if you take a look at like a Terraform community or Vault – there's thousands of contributors. And that's what solves the integration problem. Right? And so if you said, we were proprietary, hey, how many engineers would it take to build 2,000 Terraform integrations? It'd be a whole lot more people than we have today. And so I think fundamentally, what open source helps you solve is the fact that, you know, modern infrastructure has this really wide surface area of integration. And I don't think you can solve that as a proprietary business."</p><p> </p><p>"I don't think we'd be able to have nearly the breadth of integration. We could maybe cover the core cloud providers. But you'd have 50 Terraform providers, not 2,500 Terraform providers."</p><p> </p><p> </p>
]]></content:encoded>
      <enclosure length="16454750" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/453a04d5-e145-46f8-bb41-c3cce37617dd/audio/a8957db7-94fe-45e1-a302-07a03bb357aa/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Armon Dadgar on HashiCorp&apos;s Practitioner Approach</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/c8017fe8-ffb2-467b-b81a-bc4f111ff005/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:17:08</itunes:duration>
      <itunes:summary>Armon Dadgar and Mitchell Hashimoto are long-time open source practitioners. It&apos;s that practitioner focus they established as core to their approach when they started HashiCorp about ten years ago. Today, HashiCorp is a publicly traded company.

Before they started HashiCorp, Dadgar and Hashimoto were students at the University of Washington. Through college and afterward, they cut their teeth on open source and learning how to build software in open source.

HashiCorp&apos;s business is an outgrowth of the two as practitioners in open source communities, said Dadgar, co-founder and CTO of HashiCorp, in an interview at the HashiConf Global conference in Los Angeles earlier this month.</itunes:summary>
      <itunes:subtitle>Armon Dadgar and Mitchell Hashimoto are long-time open source practitioners. It&apos;s that practitioner focus they established as core to their approach when they started HashiCorp about ten years ago. Today, HashiCorp is a publicly traded company.

Before they started HashiCorp, Dadgar and Hashimoto were students at the University of Washington. Through college and afterward, they cut their teeth on open source and learning how to build software in open source.

HashiCorp&apos;s business is an outgrowth of the two as practitioners in open source communities, said Dadgar, co-founder and CTO of HashiCorp, in an interview at the HashiConf Global conference in Los Angeles earlier this month.</itunes:subtitle>
      <itunes:keywords>armon dadgar, software developer, tech podcast, alex williams, the new stack, devops, devops podcast, hashiconf global, tech, developer podcast, the new stack makers, software engineer, hashicorp</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1353</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">887ddd71-ca0e-49de-ae29-c9fe060f97e6</guid>
      <title>Making Europe’s ‘Romantic’ Open Source World More Practical</title>
      <description><![CDATA[<p>DUBLIN — <a href="https://thenewstack.io/open-source-summit-introducing-linux-foundation-europe/">Europe's open source contributors</a>, according to The Linux Foundation's first-ever survey of them, released in September, are driven more by idealism than their American counterparts. The data showed that social reasons for contributing to open source projects were more often cited by Europeans than by Americans, who were more likely to say they participate in open source for professional advancement.</p><p> </p><p>A big part of <a href="https://www.linkedin.com/in/columbro">Gabriele (Gab) Columbro's</a> mission as the general manager of the new Linux Foundation Europe will be to marry Europe's "romantic" view of open source to greater commercial opportunities, Columbro told The New Stack's Makers podcast.</p><p> </p><p>The On the Road episode of Makers, recorded in Dublin at Open Source Summit Europe, was hosted by Heather Joslyn, TNS's features editor.</p><p> </p><p>Columbro, a native of Italy who also heads <a href="https://www.finos.org/" rel="external">FINOS</a>, the fintech open source foundation, recalled his own roots as an individual contributor to the Apache project, and cited what he called "a very grassroots, passion, romantic aspect of open source" in Europe.</p><p> </p><p>By contrast, he noted, "there is definitely a much stronger commercial ecosystem in the United States. 
But the reality is that those two, you know, natures of open source are not alternatives."</p><p> </p><p>Columbro said he sees advantages in both the idealistic and the practical aspects of open source, along with the notion in the European Union and other countries in the region that the Internet and the software that supports it have value as shared resources.</p><p> </p><p>"I'm really all about marrying sort of these three natures of open source: the individual-slash-romantic nature, the commercial dynamics, and the public sector sort of collective value," he said.</p><h2>A 'Springboard' for Regional Projects</h2><p>Europe sits thousands of miles away from the headquarters of the FAANG tech behemoths — Facebook, Apple, Amazon, Netflix and Google. (Columbro, in fact, is still based in Silicon Valley, though he says he plans to return to Europe at some point.)</p><p> </p><p>For individual developers, he said, Linux Foundation Europe will help give regional projects increased visibility and greater access to potential contributors. Contributing a project to Linux Foundation Europe, he said, is "a powerful way to potentially supercharge your project."</p><p> </p><p>He added, "I think any developer should consider this as a potential springboard platform for the technology, not just to be visible in Europe, but then hopefully, beyond."</p><p> </p><p>The European organization's first major project, the OpenWallet Foundation, will aim to help create a template for developers to build digital wallets. 
"I find it very aligned with not only the vision of the Linux Foundation that is about not only creating successful open source projects but defining new markets and new commercial ecosystems around these open source projects."</p><p> </p><p>It's also, Columbro added, "very much aligned with the sort of vision of Europe of creating a digital commons, based on open source whereby they can achieve a sort of digital independence."</p><h2>Europe's Turmoil Could Spark Innovation</h2><p>As geopolitical and economic turmoil roils several nations in Europe, Columbro suggested that open source could see a boom if the region's companies start cutting costs.</p><p> </p><p>He places his hopes on open source collaboration to help reconcile some differences. "Certainly I do believe that open source has the potential to bring parties together," Columbro said.</p><p> </p><p>Also, he noted, "generally we see open source and investment in open source to be counter-cyclical with the trends of investments in proprietary software. ... in other words, when there is more pressure to reduce costs, or to, you know, reduce the workforce.</p><p> </p><p>"That’s when people are forced to look more seriously about ways to actually collaborate while still maintaining throughput and efficiency. And I think open source is the prime way to do so."</p><p> </p><p>Listen to this On the Road episode of Makers to learn more about Linux Foundation Europe.</p>
]]></description>
      <pubDate>Tue, 11 Oct 2022 19:12:49 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/making-europes-romantic-open-source-world-more-practical-Sh8qQMiR</link>
      <content:encoded><![CDATA[<p>DUBLIN — <a href="https://thenewstack.io/open-source-summit-introducing-linux-foundation-europe/">Europe's open source contributors</a>, according to The Linux Foundation's first-ever survey of them, released in September, are driven more by idealism than their American counterparts. The data showed that social reasons for contributing to open source projects were more often cited by Europeans than by Americans, who were more likely to say they participate in open source for professional advancement.</p><p> </p><p>A big part of <a href="https://www.linkedin.com/in/columbro">Gabriele (Gab) Columbro's</a> mission as the general manager of the new Linux Foundation Europe will be to marry Europe's "romantic" view of open source to greater commercial opportunities, Columbro told The New Stack's Makers podcast.</p><p> </p><p>The On the Road episode of Makers, recorded in Dublin at Open Source Summit Europe, was hosted by Heather Joslyn, TNS's features editor.</p><p> </p><p>Columbro, a native of Italy who also heads <a href="https://www.finos.org/" rel="external">FINOS</a>, the fintech open source foundation, recalled his own roots as an individual contributor to the Apache project, and cited what he called "a very grassroots, passion, romantic aspect of open source" in Europe.</p><p> </p><p>By contrast, he noted, "there is definitely a much stronger commercial ecosystem in the United States. 
But the reality is that those two, you know, natures of open source are not alternatives."</p><p> </p><p>Columbro said he sees advantages in both the idealistic and the practical aspects of open source, along with the notion in the European Union and other countries in the region that the Internet and the software that supports it have value as shared resources.</p><p> </p><p>"I'm really all about marrying sort of these three natures of open source: the individual-slash-romantic nature, the commercial dynamics, and the public sector sort of collective value," he said.</p><h2>A 'Springboard' for Regional Projects</h2><p>Europe sits thousands of miles away from the headquarters of the FAANG tech behemoths — Facebook, Apple, Amazon, Netflix and Google. (Columbro, in fact, is still based in Silicon Valley, though he says he plans to return to Europe at some point.)</p><p> </p><p>For individual developers, he said, Linux Foundation Europe will help give regional projects increased visibility and greater access to potential contributors. Contributing a project to Linux Foundation Europe, he said, is "a powerful way to potentially supercharge your project."</p><p> </p><p>He added, "I think any developer should consider this as a potential springboard platform for the technology, not just to be visible in Europe, but then hopefully, beyond."</p><p> </p><p>The European organization's first major project, the OpenWallet Foundation, will aim to help create a template for developers to build digital wallets. 
"I find it very aligned with not only the vision of the Linux Foundation that is about not only creating successful open source projects but defining new markets and new commercial ecosystems around these open source projects."</p><p> </p><p>It's also, Columbro added, "very much aligned with the sort of vision of Europe of creating a digital commons, based on open source whereby they can achieve a sort of digital independence."</p><h2>Europe's Turmoil Could Spark Innovation</h2><p>As geopolitical and economic turmoil roils several nations in Europe, Columbro suggested that open source could see a boom if the region's companies start cutting costs.</p><p> </p><p>He places his hopes on open source collaboration to help reconcile some differences. "Certainly I do believe that open source has the potential to bring parties together," Columbro said.</p><p> </p><p>Also, he noted, "generally we see open source and investment in open source to be counter-cyclical with the trends of investments in proprietary software. ... in other words, when there is more pressure to reduce costs, or to, you know, reduce the workforce.</p><p> </p><p>"That’s when people are forced to look more seriously about ways to actually collaborate while still maintaining throughput and efficiency. And I think open source is the prime way to do so."</p><p> </p><p>Listen to this On the Road episode of Makers to learn more about Linux Foundation Europe.</p>
]]></content:encoded>
      <enclosure length="16614400" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/d63d4e13-6aa3-4e90-b91c-eee1966ec72c/audio/4ac9ac42-3167-47df-89ba-cae3a1f33758/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Making Europe’s ‘Romantic’ Open Source World More Practical</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/7e0c6d09-3c05-498b-ad82-f61362789451/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:17:18</itunes:duration>
      <itunes:summary>DUBLIN — Europe&apos;s open source contributors, according to The Linux Foundation&apos;s first-ever survey of them released in September, are driven more by idealism than their American counterparts. The data showed that social reasons for contributing to open source projects were more often cited by Europeans than by Americans, who were more likely to say they participate in open source for professional advancement.

A big part of Gabriele (Gab) Columbro&apos;s mission as the general manager of the new Linux Foundation Europe, will be to marry Europe&apos;s &quot;romantic&quot; view of open source to greater commercial opportunities, Columbro told The New Stack&apos;s Makers podcast.

The On the Road episode of Makers, recorded in Dublin at Open Source Summit Europe, was hosted by Heather Joslyn, TNS&apos;s features editor.</itunes:summary>
      <itunes:subtitle>DUBLIN — Europe&apos;s open source contributors, according to The Linux Foundation&apos;s first-ever survey of them released in September, are driven more by idealism than their American counterparts. The data showed that social reasons for contributing to open source projects were more often cited by Europeans than by Americans, who were more likely to say they participate in open source for professional advancement.

A big part of Gabriele (Gab) Columbro&apos;s mission as the general manager of the new Linux Foundation Europe, will be to marry Europe&apos;s &quot;romantic&quot; view of open source to greater commercial opportunities, Columbro told The New Stack&apos;s Makers podcast.

The On the Road episode of Makers, recorded in Dublin at Open Source Summit Europe, was hosted by Heather Joslyn, TNS&apos;s features editor.</itunes:subtitle>
      <itunes:keywords>the linux foundation, software developer, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, the new stack makers, software engineer, gab columbro, open source summit</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1352</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">439a09cf-e421-4155-8e12-9031959e350a</guid>
      <title>After GitHub, Brian Douglas Builds a ‘Saucy’ Startup</title>
      <description><![CDATA[<p><a href="https://www.linkedin.com/in/brianldouglas">Brian Douglas</a> was “the Beyoncé of GitHub.” He jokingly crowned himself with that title during his years at that company, where he advocated for open source and a more inclusive community supporting it. His work there eventually led to his new startup, <a href="https://opensauced.pizza/">Open Sauced</a>.</p><p> </p><p>Like the Queen Bey, Douglas’ mission is to empower a community. In his case, he’s seeking to support the open source community. With his former employer, GitHub, serving 4 million developers worldwide, the potential size of that audience is huge.</p><p> </p><p>In this episode of <a href="https://thenewstack.io/how-idit-levines-athletic-past-fueled-solo-ios-startup/">The Tech Founder Odyssey</a> podcast, he shared why empowerment and breaking down barriers to make anyone “awesome” in open source was the motivation behind his startup journey.</p><p> </p><p>Beyoncé “has a superfan group, the Beyhive, that will go to bat for her,” Douglas pointed out. “So if Beyoncé makes a country song, the Beyhive is there supporting her country song. If she starts doing the house music, which is her latest album, [they] are there to the point where, like, you cannot say bad stuff about her. So what I’m focused on is having a strong community and having strong ties.”</p><p> </p><p>Open Sauced, which launched in June, seeks to build an open source intelligence platform to help companies stay competitive. 
Its aim is to help give more potential open source contributors the information they need to get started with projects, and to help maintain them over time.</p><p> </p><p>The conversation was co-hosted by <a href="https://www.linkedin.com/in/colleen-coll-b971505/">Colleen Coll</a> and <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a> of The New Stack.</p><h2>Web 2.0 ‘Opened the World’</h2><p>Douglas was introduced to tech as a kid, “cutting his teeth” on a Packard Bell and a shared computer at the community center inside the apartment complex where he grew up, outside of Tampa, Florida.</p><p> </p><p>“I don't know what computer was in there, but it ran DOS,” he said. “And I got to play, like, Wolfenstein and eventually Duke Nukem and stuff like that. So that was my first sort of like, touch of a computer and I actually knew what I was doing.”</p><p> </p><p>Despite his MBA in finance, the 2008 recession left only sales jobs available. But Douglas always knew he wanted to “build stuff.”</p><p> </p><p>“I've always been like a copy and paste [person] and loved playing DOS games,” he told The New Stack. “I eventually [created] a pretty nice MySpace profile. Then someone told me, ‘Hey, you know, you could actually build apps now.’</p><p> </p><p>“And post-Web 2.0, people have frameworks and Rails and Django. You just have to run a couple scripts, and you've got a web page live and put that in Heroku, or another server, and you're good. And that opened the world.”</p><p> </p><p>Open Sauced began as a side project when he was director of developer advocacy at GitHub; he started working on the project full time in June, after about two years of tinkering with it.</p><p> </p><p>Douglas didn’t grow up with money, he said, so moving from life as an employee to the risky life of a CEO seeking funding prompted him to create his own comprehensive strategy. 
This included content creation (including a podcast, <a href="https://www.youtube.com/playlist?list=PLHyZ0Wz_A44VR4BXl_JOWSecQeWcZ-kS3">The Secret Sauce</a>), other marketing, and shipping frontend code.</p><p> </p><p>GitHub was very supportive of him spinning off Open Sauced as an independent startup, with colleagues assisting in refining his pitches to venture capital investors to raise funds.</p><p> </p><p>“At GitHub, they have inside of their employment contract a moonlight clause,” Douglas said. That means, he noted, that because the company is powered by open source, “basically, whatever you work on, as long as you're not competing directly against GitHub, rebuilding it from the ground up, feel free to do whatever you need to do to moonlight.”</p><h2>Support for Blacks in Tech</h2><p>Open Sauced will also continue Douglas’ efforts to increase representation of Blacks in tech and open pathways to level up their skills, similar to his work at GitHub with the Employee Resource Group (ERG), the <a href="https://github.com/about/diversity/communities-of-belonging/blacktocats">Blacktocats</a>.</p><p> </p><p>“The focus there was to make sure that people had a home, like a community of belonging,” he said. “If you're a Black employee at GitHub, you have a space, and it was very helpful with things like 2020, during George Floyd. It was the community [in which] we all supported each other during that situation.”</p><p> </p><p>Douglas’ mission to dispel the effects of imposter syndrome and champion anyone interested in open source makes him sound more like an open source “whisperer” than a Beyoncé. 
Whatever the title, his iconic pizza brand — the company’s web address is <a href="https://opensauced.pizza/">“opensauced.pizza”</a> — was his version, he said, of creating album cover art before forming the band.</p><p> </p><p>His podcast’s tagline urges listeners to “stay saucy.” His plan for doing that at Open Sauced is to encourage new open source contributors.</p><p> </p><p>“It's nice to know that projects can now opt in … but as a first-time contributor, where do I start? We can show you, ‘Hey, this project had five contributions, they're doing a great job. Why don't you start here?’”</p>
]]></description>
      <pubDate>Fri, 7 Oct 2022 18:31:41 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/after-github-brian-douglas-builds-a-saucy-startup-8_DMoQO_</link>
      <content:encoded><![CDATA[<p><a href="https://www.linkedin.com/in/brianldouglas">Brian Douglas</a> was “the Beyoncé of GitHub.” He jokingly crowned himself with that title during his years at that company, where he advocated for open source and a more inclusive community supporting it. His work there eventually led to his new startup, <a href="https://opensauced.pizza/">Open Sauced</a>.</p><p> </p><p>Like the Queen Bey, Douglas’ mission is to empower a community. In his case, he’s seeking to support the open source community. With his former employer, GitHub, serving 4 million developers worldwide, the potential size of that audience is huge.</p><p> </p><p>In this episode of <a href="https://thenewstack.io/how-idit-levines-athletic-past-fueled-solo-ios-startup/">The Tech Founder Odyssey</a> podcast, he shared why empowerment and breaking down barriers to make anyone “awesome” in open source was the motivation behind his startup journey.</p><p> </p><p>Beyoncé “has a superfan group, the Beyhive, that will go to bat for her,” Douglas pointed out. “So if Beyoncé makes a country song, the Beyhive is there supporting her country song. If she starts doing the house music, which is her latest album, [they] are there to the point where, like, you cannot say bad stuff about her. So what I’m focused on is having a strong community and having strong ties.”</p><p> </p><p>Open Sauced, which launched in June, seeks to build an open source intelligence platform to help companies stay competitive. 
Its aim is to help give more potential open source contributors the information they need to get started with projects, and help maintain them over time.</p><p> </p><p>The conversation was co-hosted by <a href="https://www.linkedin.com/in/colleen-coll-b971505/">Colleen Coll</a> and <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a> of The New Stack.</p><p><h2>Web 2.0 ‘Opened the World’</h2></p><p>Douglas’ introduction to tech started as a kid “cutting his teeth” on a Packard Bell and a shared computer at the community center inside his apartment complex, where he grew up outside of Tampa, Florida.</p><p> </p><p>“I don't know what computer was in there, but it ran DOS,” he said. “And I got to play, like, Wolfenstein and eventually Duke Nukem and stuff like that. So that was my first sort of like, touch of a computer and I actually knew what I was doing.”</p><p> </p><p>With his MBA in finance, Douglas found that the 2008 recession left only sales jobs available. But he always knew he wanted to “build stuff.”</p><p> </p><p>“I've always been like a copy and paste [person] and loved playing DOS games,” he told The New Stack. “I eventually [created] a pretty nice MySpace profile. Then someone told me ‘Hey, you know, you could actually build apps now.’</p><p> </p><p>“And post-Web 2.0, people have frameworks like Rails and Django. You just have to run a couple scripts, and you've got a web page live and put that in Heroku, or another server, and you're good. And that opened the world.”</p><p> </p><p>Open Sauced began as a side project when he was director of developer advocacy at GitHub; he started working on the project full time in June, after about two years of tinkering with it.</p><p> </p><p>Douglas didn’t grow up with money, he said, so moving from being an employee to the risky life of a CEO seeking funding prompted him to create his own comprehensive strategy. 
This included content creation (including a podcast, <a href="https://www.youtube.com/playlist?list=PLHyZ0Wz_A44VR4BXl_JOWSecQeWcZ-kS3">The Secret Sauce</a>), other marketing, and shipping frontend code.</p><p> </p><p>GitHub was very supportive of him spinning off Open Sauced as an independent startup, with colleagues assisting in refining his pitches to venture capital investors to raise funds.</p><p> </p><p>“At GitHub, they have inside of their employment contract a moonlight clause,” Douglas said. That means, he noted, that because the company is powered by open source, “basically, whatever you work on, as long as you're not competing directly against GitHub, rebuilding it from the ground up, feel free to do whatever you need to do to moonlight.”</p><p><h2>Support for Blacks in Tech</h2></p><p>Open Sauced will also continue Douglas’ efforts to increase representation of Blacks in tech and open pathways to level up their skills, similar to his work at GitHub with the Employee Resource Group (ERG) the <a href="https://github.com/about/diversity/communities-of-belonging/blacktocats">Blacktocats</a>.</p><p> </p><p>“The focus there was to make sure that people had a home, like a community of belonging,” he said. “If you're a black employee at GitHub, you have a space and it was very helpful with things like 2020, during George Floyd. It was the community [in which] we all supported each other during that situation.”</p><p> </p><p>Douglas’ mission to banish imposter syndrome and champion anyone interested in open source makes him sound more like an open source “whisperer” than a Beyoncé. 
Whatever the title, his iconic pizza brand — the company’s web address is <a href="https://opensauced.pizza/">“opensauced.pizza”</a> — was his version, he said, of creating album cover art before forming the band.</p><p> </p><p>His podcast’s tagline urges listeners to “stay saucy.” His plan for doing that at Open Sauced is to encourage new open source contributors.</p><p> </p><p>“It's nice to know that projects can now opt in … but as a first-time contributor, where do I start? We can show you, ‘Hey, this project had five contributions, they're doing a great job. Why don't you start here?’”</p>
]]></content:encoded>
      <enclosure length="32470457" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/a7995fa8-1162-4e89-8c53-d3159b9758ac/audio/ab170cc1-9cd1-490f-93ec-a6daf80771a5/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>After GitHub, Brian Douglas Builds a ‘Saucy’ Startup</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/995bdf45-34a5-4523-990c-ce66c8fb60f4/3000x3000/the-tech-odyssey-logo-white-bg.jpg?aid=rss_feed"/>
      <itunes:duration>00:33:49</itunes:duration>
      <itunes:summary>Brian Douglas was “the Beyoncé of GitHub.” He jokingly crowned himself with that title during his years at that company, where he advocated for open source and a more inclusive community supporting it. His work there eventually led to his new startup, Open Sauced.

Like the Queen Bey, Douglas’ mission is to empower a community. In his case, he’s seeking to support the open source community. With his former employer, GitHub, serving 4 million developers worldwide, the potential size of that audience is huge.

In this episode of The Tech Founder Odyssey podcast, he shared why empowerment and breaking down barriers to make anyone “awesome” in open source was the motivation behind his startup journey.</itunes:summary>
      <itunes:subtitle>Brian Douglas was “the Beyoncé of GitHub.” He jokingly crowned himself with that title during his years at that company, where he advocated for open source and a more inclusive community supporting it. His work there eventually led to his new startup, Open Sauced.

Like the Queen Bey, Douglas’ mission is to empower a community. In his case, he’s seeking to support the open source community. With his former employer, GitHub, serving 4 million developers worldwide, the potential size of that audience is huge.

In this episode of The Tech Founder Odyssey podcast, he shared why empowerment and breaking down barriers to make anyone “awesome” in open source was the motivation behind his startup journey.</itunes:subtitle>
      <itunes:keywords>brian douglas, software developer, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, open sauced, the new stack makers, software engineer, open source</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1351</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7e248473-f01e-48aa-8b72-82dda9d01995</guid>
      <title>The AWS Open Source Strategy</title>
<description><![CDATA[<p><a href="https://aws.amazon.com/">Amazon Web Services</a> would not be what it is today without open source.</p><p> </p><p>"I think it starts with sustainability," said <a href="https://www.linkedin.com/in/davidnalley/">David Nalley</a>, head of open source strategy and marketing at AWS, in an interview at the <a href="https://thenewstack.io/open-source-summit-introducing-linux-foundation-europe/">Open Source Summit</a> in Dublin for The New Stack Makers. "And this really goes back to the origin of Amazon Web Services. AWS would not be what it is today without open source."</p><p> </p><p>Long-term support for open source is one of three pillars of the organization's open source strategy. AWS builds and innovates on top of open source and will maintain that approach for its innovation, customers, and the larger digital economy.</p><p> </p><p>"And that means that there's a long history of us benefiting from open source and investing in open source," Nalley said. "But ultimately, we're here for the long haul. We're going to continue making investments. We're going to increase our investments in open source."</p><p> </p><p>Customers' interest in open source is the second pillar of the AWS open source strategy.</p><p> </p><p>"We feel like we have to make investments on behalf of our customers," Nalley said. "But the reality is our customers are choosing open source to run their workloads on."</p><p> </p><p>[sponsor_note slug="amazon-web-services-aws" ][/sponsor_note]</p><p> </p><p>The third pillar focuses on advocating for open source in the larger digital economy.</p><p> </p><p>Notable is how much AWS's presence in the market played a part in <a href="https://en.wikipedia.org/wiki/Paul_Vixie">Paul Vixie's</a> decision to join the company. 
Vixie, an Internet pioneer, is now vice president of security and an AWS distinguished engineer, and was also interviewed for the <a href="https://thenewstack.io/paul-vixie-story-of-an-internet-hero/">New Stack Makers podcast at the Open Source Summit</a>.</p><p> </p><p>Nalley is himself a recognizable figure in the community: he is president of the <a href="https://www.apache.org/">Apache Software Foundation</a>, one of the world's most important open source foundations.</p><p> </p><p>The importance of the three-pillar strategy shows in many of the projects that AWS supports. AWS recently donated $10 million to the <a href="https://openssf.org/">Open Source Security Foundation</a> (OpenSSF), part of the Linux Foundation.</p><p> </p><p>AWS is a significant supporter of the <a href="https://foundation.rust-lang.org/">Rust Foundation</a>, which supports the <a href="https://thenewstack.io/rust-whats-next-for-the-fast-growing-programming-language/">Rust programming language</a> and ecosystem, with a particular focus on the maintainers who govern the project.</p><p> </p><p>Last month, Meta unveiled the <a href="https://pytorch.org/foundation">PyTorch Foundation</a>, which the Linux Foundation will manage. AWS is on the governing board.</p>
]]></description>
      <pubDate>Wed, 5 Oct 2022 19:42:57 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-aws-open-source-strategy-ylj365V8</link>
<content:encoded><![CDATA[<p><a href="https://aws.amazon.com/">Amazon Web Services</a> would not be what it is today without open source.</p><p> </p><p>"I think it starts with sustainability," said <a href="https://www.linkedin.com/in/davidnalley/">David Nalley</a>, head of open source strategy and marketing at AWS, in an interview at the <a href="https://thenewstack.io/open-source-summit-introducing-linux-foundation-europe/">Open Source Summit</a> in Dublin for The New Stack Makers. "And this really goes back to the origin of Amazon Web Services. AWS would not be what it is today without open source."</p><p> </p><p>Long-term support for open source is one of three pillars of the organization's open source strategy. AWS builds and innovates on top of open source and will maintain that approach for its innovation, customers, and the larger digital economy.</p><p> </p><p>"And that means that there's a long history of us benefiting from open source and investing in open source," Nalley said. "But ultimately, we're here for the long haul. We're going to continue making investments. We're going to increase our investments in open source."</p><p> </p><p>Customers' interest in open source is the second pillar of the AWS open source strategy.</p><p> </p><p>"We feel like we have to make investments on behalf of our customers," Nalley said. "But the reality is our customers are choosing open source to run their workloads on."</p><p> </p><p>[sponsor_note slug="amazon-web-services-aws" ][/sponsor_note]</p><p> </p><p>The third pillar focuses on advocating for open source in the larger digital economy.</p><p> </p><p>Notable is how much AWS's presence in the market played a part in <a href="https://en.wikipedia.org/wiki/Paul_Vixie">Paul Vixie's</a> decision to join the company. 
Vixie, an Internet pioneer, is now vice president of security and an AWS distinguished engineer, and was also interviewed for the <a href="https://thenewstack.io/paul-vixie-story-of-an-internet-hero/">New Stack Makers podcast at the Open Source Summit</a>.</p><p> </p><p>Nalley is himself a recognizable figure in the community: he is president of the <a href="https://www.apache.org/">Apache Software Foundation</a>, one of the world's most important open source foundations.</p><p> </p><p>The importance of the three-pillar strategy shows in many of the projects that AWS supports. AWS recently donated $10 million to the <a href="https://openssf.org/">Open Source Security Foundation</a> (OpenSSF), part of the Linux Foundation.</p><p> </p><p>AWS is a significant supporter of the <a href="https://foundation.rust-lang.org/">Rust Foundation</a>, which supports the <a href="https://thenewstack.io/rust-whats-next-for-the-fast-growing-programming-language/">Rust programming language</a> and ecosystem, with a particular focus on the maintainers who govern the project.</p><p> </p><p>Last month, Meta unveiled the <a href="https://pytorch.org/foundation">PyTorch Foundation</a>, which the Linux Foundation will manage. AWS is on the governing board.</p>
]]></content:encoded>
      <enclosure length="13837836" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/3788045b-552e-4e58-95ed-71671fbc3150/audio/6a76db92-e7b1-4974-b1f9-f7b7d77a80cc/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The AWS Open Source Strategy</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/1fdbf78c-538c-47c7-a326-a5680685e64c/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:14:24</itunes:duration>
      <itunes:summary>Amazon Web Services would not be what it is today without open source.

&quot;I think it starts with sustainability,&quot; said David Nalley, head of open source strategy and marketing at AWS in an interview at the Open Source Summit in Dublin for The New Stack Makers. &quot;And this really goes back to the origin of Amazon Web Services. AWS would not be what it is today without open source.&quot;

Long-term support for open source is one of three pillars of the organization&apos;s open source strategy. AWS builds and innovates on top of open source and will maintain that approach for its innovation, customers, and the larger digital economy.</itunes:summary>
      <itunes:subtitle>Amazon Web Services would not be what it is today without open source.

&quot;I think it starts with sustainability,&quot; said David Nalley, head of open source strategy and marketing at AWS in an interview at the Open Source Summit in Dublin for The New Stack Makers. &quot;And this really goes back to the origin of Amazon Web Services. AWS would not be what it is today without open source.&quot;

Long-term support for open source is one of three pillars of the organization&apos;s open source strategy. AWS builds and innovates on top of open source and will maintain that approach for its innovation, customers, and the larger digital economy.</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, the new stack, devops, devops podcast, tech, developer podcast, the new stack makers, software engineer, david nalley, aws, open source summit</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1350</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">fd0f201e-0be9-4722-9667-927eb9b6d80c</guid>
      <title>Paul Vixie: Story of an Internet Hero</title>
<description><![CDATA[<p><a href="https://en.wikipedia.org/wiki/Paul_Vixie">Paul Vixie</a> grew up in San Francisco. He dropped out of high school in 1980. He worked on the first Internet gateways at <a href="https://en.wikipedia.org/wiki/Digital_Equipment_Corporation">DEC</a> and, from there, started the <a href="https://www.isc.org/">Internet Software Consortium</a> (ISC), establishing Internet protocols, particularly the <a href="https://thenewstack.io/researcher-hijacked-several-io-tld-nameservers/">Domain Name System</a> (DNS).</p><p> </p><p>Today, Vixie is one of the few dozen in the technology world with the title "distinguished engineer," working at Amazon Web Services as vice president of security, where he believes he can make the Internet a safer place: as safe as it was before the Internet emerged.</p><p> </p><p>"I am worried about how much less safe we all are in the Internet era than we were before," Vixie said in an interview at the <a href="https://thenewstack.io/how-can-open-source-sustain-itself-without-creating-burnout/">Open Source Summit in Dublin</a> earlier this month for The New Stack Makers podcast. "And everything is connected, and very little is understood. And so, my mission for the last 20 years has been to restore human safety to pre-internet levels. And doing that at scale is quite the challenge. It'll take me a lifetime."</p><p> </p><p>So why join AWS? He spent decades establishing the ISC. He started a company called <a href="https://www.farsightsecurity.com/">Farsight</a>, which came out of ISC. He sold Farsight in November of last year when conversations began with AWS.</p><p> </p><p>Vixie thought about his mission to restore human safety to pre-internet levels when AWS asked a question that changed the conversation and led him to his new role.</p><p> </p><p>"They asked me what is now, in retrospect, an obvious question: 'AWS hosts probably the largest share of the digital economy that you're trying to protect,'" Vixie said. 
"'Don't you think you can complete your mission by working to help secure AWS?' The answer is yes. In fact, I feel like I'm going to get more traction now that I can focus on strategy and technology and not also operate a company on the side. And so it was a very good win for me, and I hope for them."</p><p> </p><p>Interviewing Vixie was such an honor. It's people like Paul who made so much possible for anyone who uses the Internet. Just think of that for a minute -- anyone who uses the Internet has people like Paul to thank.</p><p> </p><p>Thanks, Paul -- you are a hero to many. Here's to your next run at AWS.</p>
]]></description>
      <pubDate>Wed, 28 Sep 2022 20:28:14 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/paul-vixie-story-of-an-internet-hero-M5cLWAij</link>
<content:encoded><![CDATA[<p><a href="https://en.wikipedia.org/wiki/Paul_Vixie">Paul Vixie</a> grew up in San Francisco. He dropped out of high school in 1980. He worked on the first Internet gateways at <a href="https://en.wikipedia.org/wiki/Digital_Equipment_Corporation">DEC</a> and, from there, started the <a href="https://www.isc.org/">Internet Software Consortium</a> (ISC), establishing Internet protocols, particularly the <a href="https://thenewstack.io/researcher-hijacked-several-io-tld-nameservers/">Domain Name System</a> (DNS).</p><p> </p><p>Today, Vixie is one of the few dozen in the technology world with the title "distinguished engineer," working at Amazon Web Services as vice president of security, where he believes he can make the Internet a safer place: as safe as it was before the Internet emerged.</p><p> </p><p>"I am worried about how much less safe we all are in the Internet era than we were before," Vixie said in an interview at the <a href="https://thenewstack.io/how-can-open-source-sustain-itself-without-creating-burnout/">Open Source Summit in Dublin</a> earlier this month for The New Stack Makers podcast. "And everything is connected, and very little is understood. And so, my mission for the last 20 years has been to restore human safety to pre-internet levels. And doing that at scale is quite the challenge. It'll take me a lifetime."</p><p> </p><p>So why join AWS? He spent decades establishing the ISC. He started a company called <a href="https://www.farsightsecurity.com/">Farsight</a>, which came out of ISC. He sold Farsight in November of last year when conversations began with AWS.</p><p> </p><p>Vixie thought about his mission to restore human safety to pre-internet levels when AWS asked a question that changed the conversation and led him to his new role.</p><p> </p><p>"They asked me what is now, in retrospect, an obvious question: 'AWS hosts probably the largest share of the digital economy that you're trying to protect,'" Vixie said. 
"'Don't you think you can complete your mission by working to help secure AWS?' The answer is yes. In fact, I feel like I'm going to get more traction now that I can focus on strategy and technology and not also operate a company on the side. And so it was a very good win for me, and I hope for them."</p><p> </p><p>Interviewing Vixie was such an honor. It's people like Paul who made so much possible for anyone who uses the Internet. Just think of that for a minute -- anyone who uses the Internet has people like Paul to thank.</p><p> </p><p>Thanks, Paul -- you are a hero to many. Here's to your next run at AWS.</p>
]]></content:encoded>
      <enclosure length="27505102" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/bec946e2-8c17-43e1-bf03-2b006c9a7988/audio/04becd67-1e23-4fd4-b292-581da38df56b/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Paul Vixie: Story of an Internet Hero</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/4d04ba5b-9912-4d1c-a171-bc4462316d22/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:28:39</itunes:duration>
      <itunes:summary>Paul Vixie grew up in San Francisco. He dropped out of high school in 1980. He worked on the first Internet gateways at DEC and, from there, started the Internet Software Consortium (ISC), establishing Internet protocols, particularly the Domain Name System (DNS).

Today, Vixie is one of the few dozen in the technology world with the title &quot;distinguished engineer,&quot; working at Amazon Web Services as vice president of security, where he believes he can make the Internet a more safe place. As safe as before the Internet emerged.

&quot;I am worried about how much less safe we all are in the Internet era than we were before,&quot; Vixie said in an interview at the Open Source Summit in Dublin earlier this month for The New Stack Makers podcast. &quot;And everything is connected, and very little is understood. And so, my mission for the last 20 years has been to restore human safety to pre-internet levels. And doing that at scale is quite the challenge. It&apos;ll take me a lifetime.&quot;</itunes:summary>
      <itunes:subtitle>Paul Vixie grew up in San Francisco. He dropped out of high school in 1980. He worked on the first Internet gateways at DEC and, from there, started the Internet Software Consortium (ISC), establishing Internet protocols, particularly the Domain Name System (DNS).

Today, Vixie is one of the few dozen in the technology world with the title &quot;distinguished engineer,&quot; working at Amazon Web Services as vice president of security, where he believes he can make the Internet a more safe place. As safe as before the Internet emerged.

&quot;I am worried about how much less safe we all are in the Internet era than we were before,&quot; Vixie said in an interview at the Open Source Summit in Dublin earlier this month for The New Stack Makers podcast. &quot;And everything is connected, and very little is understood. And so, my mission for the last 20 years has been to restore human safety to pre-internet levels. And doing that at scale is quite the challenge. It&apos;ll take me a lifetime.&quot;</itunes:subtitle>
      <itunes:keywords>software developer, tech podcast, alex williams, the new stack, devops, devops podcast, tech, developer podcast, paul vixie, the new stack makers, software engineer, aws, open source summit</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1349</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b03dc2bc-13ad-4e15-ada1-de3e71a79910</guid>
      <title>Deno&apos;s Ryan Dahl is an Asynchronous Guy</title>
<description><![CDATA[<p><a href="https://tinyclouds.org/">Ryan Dahl</a> is the co-founder and creator of <a href="https://deno.land/">Deno</a>, a runtime for JavaScript, TypeScript, and WebAssembly based on the V8 JavaScript engine and the Rust programming language. He is also the creator of <a href="https://nodejs.org/en/">Node.js</a>.</p><p> </p><p>We interviewed Dahl for The New Stack Technical Founder Odyssey series.</p><p> </p><p>"Yeah, so we have a <a href="https://dev.to/snickdx/what-is-the-javascript-runtime-4n09">JavaScript runtime</a>," Dahl said. "It's pretty similar, in essence, to Node. It executes some JavaScript, but it's much more modern."</p><p> </p><p>The Deno project started four years ago, Dahl said. He recounted how writing code helped him rethink how he developed Node. Dahl wrote a demo of a modern, server-side JavaScript runtime. He didn't think it would go anywhere, but sure enough, it did. People got pretty interested in it.</p><p> </p><p>Deno has "many, many" components, which serve as its foundation. It's written in Rust and C++ with a different type of event loop library. Deno has non-blocking IO, as does Node.</p><p> </p><p>Dahl has built his work on the use of <a href="https://thenewstack.io/async-officially-coming-javascript-year/">asynchronous technologies</a>. That belief system carries over into how he manages the company. Dahl is an asynchronous guy and runs his company in such a fashion.</p><p> </p><p>As an engineer, Dahl learned that he does not like to be interrupted by meetings. The work should be as asynchronous as possible to avoid interruptions.</p><p> </p><p><a href="https://thenewstack.io/with-additional-funding-deno-sets-out-to-challenge-node-js/">Deno, the company</a>, started during the pandemic, Dahl said. Everyone is remote. They pair program a lot and focus on short, productive conversations. 
That's an excellent way to socialize and look deeper into problems.</p><p> </p><p>How is it for Dahl to go <a href="https://thenewstack.io/a-technical-founders-story-jake-warner-on-cycle-io-2/">from programming to CEO</a>?</p><p> </p><p>"I'd say it's relatively challenging," Dahl said. "I like programming a lot. Ideally, I would spend most of my time in an editor solving programming problems. That's not really what the job of being a CEO is."</p><p> </p><p>Dahl said there's a lot more communication as the CEO operates on a larger scale. Engineering teams need management to ensure they work together effectively, deliver features and solve problems for developers.</p><p> </p><p>Overall, Dahl takes it one day at a time. He has no fundamental theory of management. He's just trying to solve problems as they come.</p><p> </p><p>"I mean, my claim to fame is like bringing asynchronous sockets to the mainstream with nonblocking IO and stuff. So, you know, asynchronous is deeply embedded in what I'm thinking about. When it comes to company organization, asynchronous means that we have rotating meeting schedules to adapt to people in different time zones. We do a lot of meeting recordings. So if you can't make it for whatever reason, you're not in the right time zone, you're, you know, picking up your kids, whatever. You can go back and watch the recording. So we basically record every meeting, and we try to keep meetings short. I think that's important because nobody wants to watch hours and hours of videos. And we use chats a lot. And chat and email are forms of asynchronous communication where you don't need to kind of meet with people one on one. And I guess the other aspect of that is just keeping meetings to a minimum. Like there's a few situations where you really need to get everybody in the room. I mean, there are certainly times when you need to do that. 
But I try to avoid that as much as possible, because I think that really disrupts the flow of a lot of people working."</p>
]]></description>
      <pubDate>Tue, 27 Sep 2022 18:31:06 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/denos-ryan-dahl-is-an-asynchronous-guy-dkfVHx9K</link>
<content:encoded><![CDATA[<p><a href="https://tinyclouds.org/">Ryan Dahl</a> is the co-founder and creator of <a href="https://deno.land/">Deno</a>, a runtime for JavaScript, TypeScript, and WebAssembly based on the V8 JavaScript engine and the Rust programming language. He is also the creator of <a href="https://nodejs.org/en/">Node.js</a>.</p><p> </p><p>We interviewed Dahl for The New Stack Technical Founder Odyssey series.</p><p> </p><p>"Yeah, so we have a <a href="https://dev.to/snickdx/what-is-the-javascript-runtime-4n09">JavaScript runtime</a>," Dahl said. "It's pretty similar, in essence, to Node. It executes some JavaScript, but it's much more modern."</p><p> </p><p>The Deno project started four years ago, Dahl said. He recounted how writing code helped him rethink how he developed Node. Dahl wrote a demo of a modern, server-side JavaScript runtime. He didn't think it would go anywhere, but sure enough, it did. People got pretty interested in it.</p><p> </p><p>Deno has "many, many" components, which serve as its foundation. It's written in Rust and C++ with a different type of event loop library. Deno has non-blocking IO, as does Node.</p><p> </p><p>Dahl has built his work on the use of <a href="https://thenewstack.io/async-officially-coming-javascript-year/">asynchronous technologies</a>. That belief system carries over into how he manages the company. Dahl is an asynchronous guy and runs his company in such a fashion.</p><p> </p><p>As an engineer, Dahl learned that he does not like to be interrupted by meetings. The work should be as asynchronous as possible to avoid interruptions.</p><p> </p><p><a href="https://thenewstack.io/with-additional-funding-deno-sets-out-to-challenge-node-js/">Deno, the company</a>, started during the pandemic, Dahl said. Everyone is remote. They pair program a lot and focus on short, productive conversations. 
That's an excellent way to socialize and look deeper into problems.</p><p> </p><p>How is it for Dahl to go <a href="https://thenewstack.io/a-technical-founders-story-jake-warner-on-cycle-io-2/">from programming to CEO</a>?</p><p> </p><p>"I'd say it's relatively challenging," Dahl said. "I like programming a lot. Ideally, I would spend most of my time in an editor solving programming problems. That's not really what the job of being a CEO is."</p><p> </p><p>Dahl said there's a lot more communication as the CEO operates on a larger scale. Engineering teams need management to ensure they work together effectively, deliver features and solve problems for developers.</p><p> </p><p>Overall, Dahl takes it one day at a time. He has no fundamental theory of management. He's just trying to solve problems as they come.</p><p> </p><p>"I mean, my claim to fame is like bringing asynchronous sockets to the mainstream with nonblocking IO and stuff. So, you know, asynchronous is deeply embedded in what I'm thinking about. When it comes to company organization, asynchronous means that we have rotating meeting schedules to adapt to people in different time zones. We do a lot of meeting recordings. So if you can't make it for whatever reason, you're not in the right time zone, you're, you know, picking up your kids, whatever. You can go back and watch the recording. So we basically record every meeting, and we try to keep meetings short. I think that's important because nobody wants to watch hours and hours of videos. And we use chats a lot. And chat and email are forms of asynchronous communication where you don't need to kind of meet with people one on one. And I guess the other aspect of that is just keeping meetings to a minimum. Like there's a few situations where you really need to get everybody in the room. I mean, there are certainly times when you need to do that. 
But I tried to avoid that as much as possible, because I think that really disrupts the flow of a lot of people working."</p>
]]></content:encoded>
      <enclosure length="19798835" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/b93863ef-e472-45fa-aff8-780a30b77750/audio/1a83ee16-ce85-4b3b-b8e6-b7422de33c5c/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Deno&apos;s Ryan Dahl is an Asynchronous Guy</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/cb617e2b-2ad1-4dd1-800d-96224d2d26e2/3000x3000/the-tech-odyssey-logo-white-bg.jpg?aid=rss_feed"/>
      <itunes:duration>00:20:37</itunes:duration>
      <itunes:summary>Ryan Dahl is the co-founder and creator of Deno, a runtime for JavaScript, TypeScript, and WebAssembly based on the V8 JavaScript engine and the Rust programming language. He is also the creator of Node.js.

We interviewed Dahl for The New Stack Technical Founder Odyssey series.</itunes:summary>
      <itunes:subtitle>Ryan Dahl is the co-founder and creator of Deno, a runtime for JavaScript, TypeScript, and WebAssembly based on the V8 JavaScript engine and the Rust programming language. He is also the creator of Node.js.

We interviewed Dahl for The New Stack Technical Founder Odyssey series.</itunes:subtitle>
      <itunes:keywords>deno, software developer, ryan dahl, alex williams, the new stack, devops, devops podcast, the new stack makers, software engineer, the tech founder odyssey, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1348</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1f4394e0-c95a-44bb-b320-73bb7e213dad</guid>
      <title>How Can Open Source Sustain Itself Without Creating Burnout?</title>
      <description><![CDATA[<p>The whole world uses open source, but as we’ve learned from <a href="https://thenewstack.io/log4shell-we-are-in-so-much-trouble/">the Log4j debacle,</a> “free” software isn’t really free. Organizations and their customers pay for it when projects aren’t frequently updated and maintained.</p><p> </p><p>How can we support open source project maintainers — and how can we decide which projects are worth the time and effort to maintain?</p><p> </p><p>“A lot of people pick up open source projects, and use them in their products and in their companies without really thinking about whether or not that project is likely to be successful over the long term,” <a href="https://www.linkedin.com/in/dawnfoster">Dawn Foster</a>, director of open source community strategy at VMware’s <a href="https://thenewstack.io/how-an-ospo-can-help-your-engineers-give-back-to-open-source/">open source program office (OSPO),</a> told The New Stack’s audience during this On the Road edition of The New Stack’s Makers podcast.</p><p> </p><p>In this conversation recorded at Open Source Summit Europe in Dublin, Ireland, Foster elaborated on the human cost of keeping open source software maintained, improved and secure —  and how such projects can be sustained over the long term.</p><p> </p><p>The conversation, sponsored by Amazon Web Services, was hosted by Heather Joslyn, features editor at The New Stack.</p><p> </p><h2>Assessing Project Health: the ‘Lottery Factor’</h2><p> </p><p>One of the first ways to evaluate the health of an open source project, Foster said, is the “lottery factor”: “It's basically if one of your key maintainers for a project won the lottery, retired on a beach tomorrow, could the project continue to be successful?”</p><p> </p><p>“And if you have enough maintainers and you have the work spread out over enough people, then yes. 
But if you're a single maintainer project and that maintainer retires, there might not be anybody left to pick it up.”</p><p> </p><p>Foster is on the governing board for a project called Community Health Analytics Open Source Software — <a href="https://chaoss.community/">CHAOSS,</a> to its friends — that aims to provide some reliable metrics to judge the health of an open source initiative.</p><p> </p><p>The metrics CHAOSS is developing, she said, “help you understand where your project is healthy and where it isn't, so that you can decide what changes you need to make within your project to make it better.”</p><p> </p><p>CHAOSS uses tooling like <a href="https://chaoss.community/software/#user-content-augur">Augur</a> and <a href="https://chaoss.community/software/#user-content-grimoirelab">GrimoireLab</a> to help get notifications and analytics on project health. And it’s friendly to newcomers, Foster said.</p><p> </p><p>“We spend...a lot of time just defining metrics, which means working in a Google Doc and thinking about all of the different ways you might possibly measure something — something like, are you getting a diverse set of contributors into your project from different organizations, for example.”</p><p> </p><h2>Paying Maintainers, Onboarding Newbies</h2><p> </p><p>It’s important to pay open source maintainers in order to help sustain projects, she said. “The people that are being paid to do it are going to have a lot more time to devote to these open source projects. 
So they're going to tend to be a little bit more reliable just because they're going to have a certain amount of time that's devoted to contributing to these projects.”</p><p> </p><p>Not only does paying people help keep vital projects going, but it also helps increase <a href="https://thenewstack.io/open-source-communities-need-more-safe-spaces-and-codes-of-conducts-now/">the diversity of contributors</a>, “because by paying people salaries to do this work in open source, you get people who wouldn't naturally have time to do that.</p><p> </p><p>“So in a lot of cases, this is women who have extra childcare responsibilities. This is people from underrepresented backgrounds who have other commitments outside of work,” Foster said. “But by allowing them to do that within their work time, you not only get healthier, longer sustaining open source projects, you get more diverse contributions.”</p><p> </p><p>The community can also help bring in new contributors by providing solid documentation and easy onboarding for newcomers, she said. “If people don't know how to build your software, or how to get a development environment up and running, they're not going to be able to contribute to the project.”</p><p> </p><p>And showing people how to contribute properly can help alleviate the issue of burnout for project maintainers, Foster said: “Any random person can file issues and bug maintainers all day, in ways that are not productive. And, you know, we end up with maintainer burnout...because we just don't have enough maintainers.”</p><p> </p><p>“Getting new people into these projects and participating in ways that are eventually reducing the load on these horribly overworked maintainers is a good thing.”</p><p> </p><p>Listen or watch this episode to learn more about maintaining open source sustainability.</p>
]]></description>
      <pubDate>Thu, 22 Sep 2022 18:58:06 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-can-open-source-sustain-itselfwithout-creating-burnout-Tptl1Pla</link>
      <content:encoded><![CDATA[<p>The whole world uses open source, but as we’ve learned from <a href="https://thenewstack.io/log4shell-we-are-in-so-much-trouble/">the Log4j debacle,</a> “free” software isn’t really free. Organizations and their customers pay for it when projects aren’t frequently updated and maintained.</p><p> </p><p>How can we support open source project maintainers — and how can we decide which projects are worth the time and effort to maintain?</p><p> </p><p>“A lot of people pick up open source projects, and use them in their products and in their companies without really thinking about whether or not that project is likely to be successful over the long term,” <a href="https://www.linkedin.com/in/dawnfoster">Dawn Foster</a>, director of open source community strategy at VMware’s <a href="https://thenewstack.io/how-an-ospo-can-help-your-engineers-give-back-to-open-source/">open source program office (OSPO),</a> told The New Stack’s audience during this On the Road edition of The New Stack’s Makers podcast.</p><p> </p><p>In this conversation recorded at Open Source Summit Europe in Dublin, Ireland, Foster elaborated on the human cost of keeping open source software maintained, improved and secure —  and how such projects can be sustained over the long term.</p><p> </p><p>The conversation, sponsored by Amazon Web Services, was hosted by Heather Joslyn, features editor at The New Stack.</p><p> </p><h2>Assessing Project Health: the ‘Lottery Factor’</h2><p> </p><p>One of the first ways to evaluate the health of an open source project, Foster said, is the “lottery factor”: “It's basically if one of your key maintainers for a project won the lottery, retired on a beach tomorrow, could the project continue to be successful?”</p><p> </p><p>“And if you have enough maintainers and you have the work spread out over enough people, then yes. 
But if you're a single maintainer project and that maintainer retires, there might not be anybody left to pick it up.”</p><p> </p><p>Foster is on the governing board for a project called Community Health Analytics Open Source Software — <a href="https://chaoss.community/">CHAOSS,</a> to its friends — that aims to provide some reliable metrics to judge the health of an open source initiative.</p><p> </p><p>The metrics CHAOSS is developing, she said, “help you understand where your project is healthy and where it isn't, so that you can decide what changes you need to make within your project to make it better.”</p><p> </p><p>CHAOSS uses tooling like <a href="https://chaoss.community/software/#user-content-augur">Augur</a> and <a href="https://chaoss.community/software/#user-content-grimoirelab">GrimoireLab</a> to help get notifications and analytics on project health. And it’s friendly to newcomers, Foster said.</p><p> </p><p>“We spend...a lot of time just defining metrics, which means working in a Google Doc and thinking about all of the different ways you might possibly measure something — something like, are you getting a diverse set of contributors into your project from different organizations, for example.”</p><p> </p><h2>Paying Maintainers, Onboarding Newbies</h2><p> </p><p>It’s important to pay open source maintainers in order to help sustain projects, she said. “The people that are being paid to do it are going to have a lot more time to devote to these open source projects. 
So they're going to tend to be a little bit more reliable just because they're going to have a certain amount of time that's devoted to contributing to these projects.”</p><p> </p><p>Not only does paying people help keep vital projects going, but it also helps increase <a href="https://thenewstack.io/open-source-communities-need-more-safe-spaces-and-codes-of-conducts-now/">the diversity of contributors</a>, “because by paying people salaries to do this work in open source, you get people who wouldn't naturally have time to do that.</p><p> </p><p>“So in a lot of cases, this is women who have extra childcare responsibilities. This is people from underrepresented backgrounds who have other commitments outside of work,” Foster said. “But by allowing them to do that within their work time, you not only get healthier, longer sustaining open source projects, you get more diverse contributions.”</p><p> </p><p>The community can also help bring in new contributors by providing solid documentation and easy onboarding for newcomers, she said. “If people don't know how to build your software, or how to get a development environment up and running, they're not going to be able to contribute to the project.”</p><p> </p><p>And showing people how to contribute properly can help alleviate the issue of burnout for project maintainers, Foster said: “Any random person can file issues and bug maintainers all day, in ways that are not productive. And, you know, we end up with maintainer burnout...because we just don't have enough maintainers.”</p><p> </p><p>“Getting new people into these projects and participating in ways that are eventually reducing the load on these horribly overworked maintainers is a good thing.”</p><p> </p><p>Listen or watch this episode to learn more about maintaining open source sustainability.</p>
]]></content:encoded>
      <enclosure length="16906977" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/495c3531-4a1f-47db-8487-12a0c1fa35ba/audio/54f5c40f-e715-44b8-a8ef-1592e5a354d7/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Can Open Source Sustain Itself Without Creating Burnout?</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/7a4a1be0-87b3-4185-a6ef-28e4023e5bb0/3000x3000/otr-bug.jpg?aid=rss_feed"/>
      <itunes:duration>00:17:36</itunes:duration>
      <itunes:summary>The whole world uses open source, but as we’ve learned from the Log4j debacle, “free” software isn’t really free. Organizations and their customers pay for it when projects aren’t frequently updated and maintained.

How can we support open source project maintainers — and how can we decide which projects are worth the time and effort to maintain?

“A lot of people pick up open source projects, and use them in their products and in their companies without really thinking about whether or not that project is likely to be successful over the long term,” Dawn Foster, director of open source community strategy at VMware’s open source program office (OSPO), told The New Stack’s audience during this On the Road edition of The New Stack’s Makers podcast.

In this conversation recorded at Open Source Summit Europe in Dublin, Ireland, Foster elaborated on the human cost of keeping open source software maintained, improved and secure —  and how such projects can be sustained over the long term.

The conversation, sponsored by Amazon Web Services, was hosted by Heather Joslyn, features editor at The New Stack.</itunes:summary>
      <itunes:subtitle>The whole world uses open source, but as we’ve learned from the Log4j debacle, “free” software isn’t really free. Organizations and their customers pay for it when projects aren’t frequently updated and maintained.

How can we support open source project maintainers — and how can we decide which projects are worth the time and effort to maintain?

“A lot of people pick up open source projects, and use them in their products and in their companies without really thinking about whether or not that project is likely to be successful over the long term,” Dawn Foster, director of open source community strategy at VMware’s open source program office (OSPO), told The New Stack’s audience during this On the Road edition of The New Stack’s Makers podcast.

In this conversation recorded at Open Source Summit Europe in Dublin, Ireland, Foster elaborated on the human cost of keeping open source software maintained, improved and secure —  and how such projects can be sustained over the long term.

The conversation, sponsored by Amazon Web Services, was hosted by Heather Joslyn, features editor at The New Stack.</itunes:subtitle>
      <itunes:keywords>vmware, software developer, the new stack, devops, devops podcast, aws open source, dawn foster, software engineer, makers, open source summit</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1347</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d5dab4ea-b1df-4b16-9010-d0c972607213</guid>
      <title>Charity Majors: Taking an Outsider&apos;s Approach to a Startup</title>
      <description><![CDATA[<p>In the early 2000s, <a href="https://www.linkedin.com/in/charity-majors/">Charity Majors</a> was a homeschooled kid who’d gotten a scholarship to study classical piano performance at the University of Idaho.</p><p> </p><p>“I realized, over the course of that first year, that music majors tended to still be hanging around the music department in their 30s and 40s,” she said. “And nobody really had very much money, and they were all doing it for the love of the game. And I was just like, I don't want to be poor for the rest of my life.”</p><p> </p><p>Fortunately, she said, it was pretty easy at that time to jump into the much more lucrative tech world. “It was buzzing, they were willing to take anyone who knew what Unix was,” she said of her first tech job, running computer systems for the university.</p><p> </p><p>Eventually, she dropped out of college, she said, “made my way to Silicon Valley, and I’ve been here ever since.”</p><p> </p><p>Majors, co-founder and chief technology officer of the six-year-old Honeycomb.io, an observability platform company, told her story for The New Stack’s podcast series, The Tech Founder Odyssey, which spotlights the personal journeys of some of the most interesting technical startup creators in the cloud native industry.</p><p> </p><p>It’s been a busy year for her and the company she co-founded with <a href="https://www.linkedin.com/in/christineyen/">Christine Yen,</a> a colleague from Parse, a mobile application development company that was bought by Facebook. In May, O’Reilly published “Observability Engineering,” which Majors co-wrote with <a href="https://www.linkedin.com/in/gmiranda23/">George Miranda</a> and <a href="https://www.linkedin.com/in/efong">Liz Fong-Jones</a>. 
In June, Gartner named Honeycomb.io as a Leader in the Magic Quadrant for Application Performance Monitoring and Observability.</p><p> </p><p>Thus far Honeycomb.io, now employing about 200 people, has raised just under $97 million, including a $50 million Series C funding round it closed in October, led by Insight Partners (which owns The New Stack).</p><p> </p><p>This Tech Founder Odyssey conversation was co-hosted by <a href="https://www.linkedin.com/in/colleen-coll-b971505/">Colleen Coll</a> and <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a> of TNS.</p><p> </p><h2>‘Rage-Driven Development’</h2><p> </p><p>Honeycomb.io grew from efforts at Parse to solve a stubborn observability problem: systems crashed frequently, and rarely for the same reasons each time. “We invested a lot in the last generation of monitoring technology, we had all these dashboards, we have all these graphs,” Majors said. “But in order to figure out what's going on, you kind of had to know in advance what was going to break.”</p><p> </p><p>Once Parse was acquired by Facebook, Majors, Yen and their teams began piping data into a Facebook tool called Scuba, which “was aggressively hostile to users,” she recalled.</p><p> </p><p>But, “it did one thing really well, which is let you slice and dice in real time on dimensions that have very high cardinality,” meaning those that contain lots of unique terms. This set it apart from the then-current monitoring technologies, which were built around assessing low cardinality dimensions.</p><p> </p><p>Scuba allowed Majors’ organization to gain more control over its reliability problem. And it got her and Yen thinking about how a platform tool could analyze high cardinality data about system health in real time. “Everything is a high cardinality dimension now,” Majors said. “And [with] the old generation of tools, you hit a wall really fast and really hard.”</p><p> </p><p>And so, Honeycomb.io was created to build that platform. 
“My entire career has been rage-driven development,” she said. “Like: sounds cool, I'm gonna go play with that. This isn't working — I'm gonna go fix it from anger.”</p><p> </p><h2>A Reluctant CEO</h2><p> </p><p>Yen now holds the CEO role at Honeycomb.io, but Majors wound up with the job for roughly the first half of the company’s life.</p><p> </p><p>Did Majors like being the boss? “Hated it,” she said. “Constitutionally what you want in a CEO is someone who is reliable, predictable, dependable, someone who doesn't mind showing up every Tuesday at 10:30 to talk to the same people.</p><p> </p><p>“I am not structured. I really chafe against that stuff.”</p><p> </p><p>However, she acknowledged, she may have been the right leader in the startup’s beginning: “It was a state of chaos, like we didn't think we were going to survive. And that's where I thrive.”</p><p> </p><p>Fortunately, in Honeycomb.io’s early days, raising money wasn’t a huge challenge, due to its founders’ background at Facebook. “There were people who were coming to us, like, do you want $2 million for a seed thing? Which is good, because I've seen the slides that we put together, and they are laughable. If I had seen those slides as an investor, I would have run the other way.”</p><p> </p><p>The “pedigree” conferred on her by investors due to her association with Facebook didn’t sit comfortably with her. “I really hated it,” she said. “Because I did not learn to be a better engineer at Facebook. And part of me kind of wanted to just reject it. But I also felt this like responsibility on behalf of all dropouts, and queer women everywhere, to take the money and do something with it. 
So that worked out.”</p><p> </p><p>Majors, a frequent speaker at tech conferences, has established herself as a thought leader in not only <a href="https://thenewstack.io/observability-a-3-year-retrospective/">observability</a> but also <a href="https://thenewstack.io/charity-majors-recipe-for-high-performing-teams/">engineering management.</a> For other women, people of color, or people in the tech field with an unconventional story, she advised “investing a little bit in your public speaking skills, and making yourself a bit of a profile. Being externally known for what you do is really helpful because it counterbalances the default assumptions that you're not technical or that you're not as good.”</p><p> </p><p>She added, “if someone can Google your name plus a technology, and something comes up, you're assumed to be an expert. And I think that that really works to people's advantage.”</p><p> </p><p>Majors had a lot more to say about how her outsider perspective has shaped the way she approaches hiring, leadership and scaling up her organization. Check out this latest episode of the Tech Founder Odyssey.</p>
]]></description>
      <pubDate>Wed, 21 Sep 2022 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/charity-majors-taking-an-outsiders-approach-to-a-startup-DK_1aVoI</link>
      <content:encoded><![CDATA[<p>In the early 2000s, <a href="https://www.linkedin.com/in/charity-majors/">Charity Majors</a> was a homeschooled kid who’d gotten a scholarship to study classical piano performance at the University of Idaho.</p><p> </p><p>“I realized, over the course of that first year, that music majors tended to still be hanging around the music department in their 30s and 40s,” she said. “And nobody really had very much money, and they were all doing it for the love of the game. And I was just like, I don't want to be poor for the rest of my life.”</p><p> </p><p>Fortunately, she said, it was pretty easy at that time to jump into the much more lucrative tech world. “It was buzzing, they were willing to take anyone who knew what Unix was,” she said of her first tech job, running computer systems for the university.</p><p> </p><p>Eventually, she dropped out of college, she said, “made my way to Silicon Valley, and I’ve been here ever since.”</p><p> </p><p>Majors, co-founder and chief technology officer of the six-year-old Honeycomb.io, an observability platform company, told her story for The New Stack’s podcast series, The Tech Founder Odyssey, which spotlights the personal journeys of some of the most interesting technical startup creators in the cloud native industry.</p><p> </p><p>It’s been a busy year for her and the company she co-founded with <a href="https://www.linkedin.com/in/christineyen/">Christine Yen,</a> a colleague from Parse, a mobile application development company that was bought by Facebook. In May, O’Reilly published “Observability Engineering,” which Majors co-wrote with <a href="https://www.linkedin.com/in/gmiranda23/">George Miranda</a> and <a href="https://www.linkedin.com/in/efong">Liz Fong-Jones</a>. 
In June, Gartner named Honeycomb.io as a Leader in the Magic Quadrant for Application Performance Monitoring and Observability.</p><p> </p><p>Thus far Honeycomb.io, now employing about 200 people, has raised just under $97 million, including a $50 million Series C funding round it closed in October, led by Insight Partners (which owns The New Stack).</p><p> </p><p>This Tech Founder Odyssey conversation was co-hosted by <a href="https://www.linkedin.com/in/colleen-coll-b971505/">Colleen Coll</a> and <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a> of TNS.</p><p> </p><h2>‘Rage-Driven Development’</h2><p> </p><p>Honeycomb.io grew from efforts at Parse to solve a stubborn observability problem: systems crashed frequently, and rarely for the same reasons each time. “We invested a lot in the last generation of monitoring technology, we had all these dashboards, we have all these graphs,” Majors said. “But in order to figure out what's going on, you kind of had to know in advance what was going to break.”</p><p> </p><p>Once Parse was acquired by Facebook, Majors, Yen and their teams began piping data into a Facebook tool called Scuba, which “was aggressively hostile to users,” she recalled.</p><p> </p><p>But, “it did one thing really well, which is let you slice and dice in real time on dimensions that have very high cardinality,” meaning those that contain lots of unique terms. This set it apart from the then-current monitoring technologies, which were built around assessing low cardinality dimensions.</p><p> </p><p>Scuba allowed Majors’ organization to gain more control over its reliability problem. And it got her and Yen thinking about how a platform tool could analyze high cardinality data about system health in real time. “Everything is a high cardinality dimension now,” Majors said. “And [with] the old generation of tools, you hit a wall really fast and really hard.”</p><p> </p><p>And so, Honeycomb.io was created to build that platform. 
“My entire career has been rage-driven development,” she said. “Like: sounds cool, I'm gonna go play with that. This isn't working — I'm gonna go fix it from anger.”</p><p> </p><h2>A Reluctant CEO</h2><p> </p><p>Yen now holds the CEO role at Honeycomb.io, but Majors wound up with the job for roughly the first half of the company’s life.</p><p> </p><p>Did Majors like being the boss? “Hated it,” she said. “Constitutionally what you want in a CEO is someone who is reliable, predictable, dependable, someone who doesn't mind showing up every Tuesday at 10:30 to talk to the same people.</p><p> </p><p>“I am not structured. I really chafe against that stuff.”</p><p> </p><p>However, she acknowledged, she may have been the right leader in the startup’s beginning: “It was a state of chaos, like we didn't think we were going to survive. And that's where I thrive.”</p><p> </p><p>Fortunately, in Honeycomb.io’s early days, raising money wasn’t a huge challenge, due to its founders’ background at Facebook. “There were people who were coming to us, like, do you want $2 million for a seed thing? Which is good, because I've seen the slides that we put together, and they are laughable. If I had seen those slides as an investor, I would have run the other way.”</p><p> </p><p>The “pedigree” conferred on her by investors due to her association with Facebook didn’t sit comfortably with her. “I really hated it,” she said. “Because I did not learn to be a better engineer at Facebook. And part of me kind of wanted to just reject it. But I also felt this like responsibility on behalf of all dropouts, and queer women everywhere, to take the money and do something with it. 
So that worked out.”</p><p> </p><p>Majors, a frequent speaker at tech conferences, has established herself as a thought leader in not only <a href="https://thenewstack.io/observability-a-3-year-retrospective/">observability</a> but also <a href="https://thenewstack.io/charity-majors-recipe-for-high-performing-teams/">engineering management.</a> For other women, people of color, or people in the tech field with an unconventional story, she advised “investing a little bit in your public speaking skills, and making yourself a bit of a profile. Being externally known for what you do is really helpful because it counterbalances the default assumptions that you're not technical or that you're not as good.”</p><p> </p><p>She added, “if someone can Google your name plus a technology, and something comes up, you're assumed to be an expert. And I think that that really works to people's advantage.”</p><p> </p><p>Majors had a lot more to say about how her outsider perspective has shaped the way she approaches hiring, leadership and scaling up her organization. Check out this latest episode of the Tech Founder Odyssey.</p>
]]></content:encoded>
      <enclosure length="32919763" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/c80cb540-ca2d-4f98-9ebf-0516329f705f/audio/8d7748fe-dc2c-4e9b-8aca-2883c188e9d9/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Charity Majors: Taking an Outsider&apos;s Approach to a Startup</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/09362b99-e8e5-4521-8e41-4f00cab7411b/3000x3000/the-tech-odyssey-logo-white-bg.jpg?aid=rss_feed"/>
      <itunes:duration>00:34:17</itunes:duration>
      <itunes:summary>In the early 2000s, Charity Majors was a homeschooled kid who’d gotten a scholarship to study classical piano performance at the University of Idaho.

“I realized, over the course of that first year, that music majors tended to still be hanging around the music department in their 30s and 40s,” she said. “And nobody really had very much money, and they were all doing it for the love of the game. And I was just like, I don&apos;t want to be poor for the rest of my life.”

Fortunately, she said, it was pretty easy at that time to jump into the much more lucrative tech world. “It was buzzing, they were willing to take anyone who knew what Unix was,” she said of her first tech job, running computer systems for the university.

Eventually, she dropped out of college, she said, “made my way to Silicon Valley, and I’ve been here ever since.”

Majors, co-founder and chief technology officer of the six-year-old Honeycomb.io, an observability platform company, told her story for The New Stack’s podcast series, The Tech Founder Odyssey, which spotlights the personal journeys of some of the most interesting technical startup creators in the cloud native industry.</itunes:summary>
      <itunes:subtitle>In the early 2000s, Charity Majors was a homeschooled kid who’d gotten a scholarship to study classical piano performance at the University of Idaho.

“I realized, over the course of that first year, that music majors tended to still be hanging around the music department in their 30s and 40s,” she said. “And nobody really had very much money, and they were all doing it for the love of the game. And I was just like, I don&apos;t want to be poor for the rest of my life.”

Fortunately, she said, it was pretty easy at that time to jump into the much more lucrative tech world. “It was buzzing, they were willing to take anyone who knew what Unix was,” she said of her first tech job, running computer systems for the university.

Eventually, she dropped out of college, she said, “made my way to Silicon Valley, and I’ve been here ever since.”

Majors, co-founder and chief technology officer of the six-year-old [sponsor_inline_mention slug=&quot;honeycomb&quot; ]Honeycomb.io,[/sponsor_inline_mention] an observability platform company, told her story for The New Stack’s podcast series, The Tech Founder Odyssey, which spotlights the personal journeys of some of the most interesting technical startup creators in the cloud native industry.</itunes:subtitle>
      <itunes:keywords>software developer, the new stack, devops, devops podcast, charity majors, software engineer, honeycomb.io, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1346</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">024988f0-bbf3-42be-bb4d-45baee9904be</guid>
      <title>How Idit Levine’s Athletic Past Fueled Solo.io‘s Startup</title>
      <description><![CDATA[<p><a href="https://www.linkedin.com/in/iditlevine/">Idit Levine’s</a> tech journey originated in an unexpected place: a basketball court. As a seventh grader in Israel, playing in hoops tournaments definitely sparked her competitive side.</p><p> </p><p>“I was basically going to compete with all my international friends for two minutes without parents, without anything,” Levine said. “I think it made me who I am today. It’s really giving you a lot of confidence to teach you how to handle situations … stay calm and still focus.”</p><p> </p><p>Developing that calm and focus proved an asset during Levine’s subsequent career in professional basketball in Israel, and when she later started her own company. In this episode of <a href="https://thenewstack.io/the-stone-ages-of-open-source-security/">The Tech Founder Odyssey</a> podcast series, Levine, founder and CEO of <a href="https://www.solo.io/">Solo.io</a>, an <a href="https://thenewstack.io/solo-io-intros-gloo-mesh-enterprise-2-0/">application networking company</a> with a $1 billion valuation, shared her startup story.</p><p> </p><p>The conversation was co-hosted by <a href="https://www.linkedin.com/in/colleen-coll-b971505/">Colleen Coll</a> and <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a> of The New Stack.</p><p> </p><p>After finishing school and service in the Israeli Army, Levine was still unsure of what she wanted to do. She noticed her brother and sister’s fascination with computers. Soon enough, she recalled, “I picked up a book to teach myself how to program.”</p><p> </p><p>It was only a matter of time before she found her true love: the cloud native ecosystem. “It's so dynamic, there's always something new coming. So it's not boring, right? 
You can assess it, and it's very innovative.”</p><p> </p><p>Moving from one startup company to the next, then on to bigger companies including Dell EMC, where she was chief technology officer of the cloud management division, Levine was happy seeking experiences that challenged her technically. “And at one point, I said to myself, maybe I should stop looking and create one.”</p><h2>Learning How to Pitch</h2><p>Winning support for Solo.io demanded that the former hoops player acquire an unfamiliar skill: how to pitch. Levine’s company started in her current home of Boston, and she found raising money in that environment more of a challenge than it would be in, say, Silicon Valley.</p><p> </p><p>It was difficult to get an introduction without a connection, she said: “I didn't understand what pitches even were but I learned how … to tell the story. That helped out a lot.”</p><p> </p><p>Founding Solo.io was not about coming up with an idea to solve a problem at first. “The main thing at Solo.io, and I think this is the biggest point, is that it's a place for amazing technologists, to deal with technology, and, beyond the top of innovation, figure out how to change the world, honestly,” said Levine.</p><p> </p><p>Even when the focus is software, she believes it’s eventually always about people. “You need to understand what's driving them and make sure that they're there, they are happy. And this is true in your own company. But this is also [true] in the ecosystem in general.”</p><p> </p><p>Levine credits the company’s success to its ability to establish amazing relationships with customers – Solo.io has a renewal rate of 98.9% – using a very different customer engagement model, one similar to working with users in the open source community. 
“We’re working together to build the product.”</p><p> </p><p>Throughout her journey, she has carried the idea of a team: in her early beginnings in basketball, in how she established a “no politics” office culture, and even in the way she involves her family with Solo.io.</p><p> </p><p>As for the ever-elusive work/life balance, Levine called herself a workaholic, but suggested that her journey has prepared her for it:  “I trained really well. Chaos is a part of my personal life.”</p><p> </p><p>She elaborated, “I think that one way to do this is to basically bring the company to [my] personal life.  My family was really involved from the beginning and my daughter chose the logos. They’re all very knowledgeable and part of it.”</p>
]]></description>
      <pubDate>Fri, 16 Sep 2022 15:14:34 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/how-idit-levines-athletic-past-fueled-soloios-startup-4uckSg3k</link>
      <content:encoded><![CDATA[<p><a href="https://www.linkedin.com/in/iditlevine/">Idit Levine’s</a> tech journey originated in an unexpected place: a basketball court. As a seventh grader in Israel, playing in hoops tournaments definitely sparked her competitive side.</p><p> </p><p>“I was basically going to compete with all my international friends for two minutes without parents, without anything,” Levine said. “I think it made me who I am today. It’s really giving you a lot of confidence to teach you how to handle situations … stay calm and still focus.”</p><p> </p><p>Developing that calm and focus proved an asset during Levine’s subsequent career in professional basketball in Israel, and when she later started her own company. In this episode of <a href="https://thenewstack.io/the-stone-ages-of-open-source-security/">The Tech Founder Odyssey</a> podcast series, Levine, founder and CEO of <a href="https://www.solo.io/">Solo.io</a>, an <a href="https://thenewstack.io/solo-io-intros-gloo-mesh-enterprise-2-0/">application networking company</a> with a $1 billion valuation, shared her startup story.</p><p> </p><p>The conversation was co-hosted by <a href="https://www.linkedin.com/in/colleen-coll-b971505/">Colleen Coll</a> and <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a> of The New Stack.</p><p> </p><p>After finishing school and service in the Israeli Army, Levine was still unsure of what she wanted to do. She noticed her brother and sister’s fascination with computers. Soon enough, she recalled, “I picked up a book to teach myself how to program.”</p><p> </p><p>It was only a matter of time before she found her true love: the cloud native ecosystem. “It's so dynamic, there's always something new coming. So it's not boring, right? 
You can assess it, and it's very innovative.”</p><p> </p><p>Moving from one startup company to the next, then on to bigger companies including Dell EMC, where she was chief technology officer of the cloud management division, Levine was happy seeking experiences that challenged her technically. “And at one point, I said to myself, maybe I should stop looking and create one.”</p><h2>Learning How to Pitch</h2><p>Winning support for Solo.io demanded that the former hoops player acquire an unfamiliar skill: how to pitch. Levine’s company started in her current home of Boston, and she found raising money in that environment more of a challenge than it would be in, say, Silicon Valley.</p><p> </p><p>It was difficult to get an introduction without a connection, she said: “I didn't understand what pitches even were but I learned how … to tell the story. That helped out a lot.”</p><p> </p><p>Founding Solo.io was not about coming up with an idea to solve a problem at first. “The main thing at Solo.io, and I think this is the biggest point, is that it's a place for amazing technologists, to deal with technology, and, beyond the top of innovation, figure out how to change the world, honestly,” said Levine.</p><p> </p><p>Even when the focus is software, she believes it’s eventually always about people. “You need to understand what's driving them and make sure that they're there, they are happy. And this is true in your own company. But this is also [true] in the ecosystem in general.”</p><p> </p><p>Levine credits the company’s success to its ability to establish amazing relationships with customers – Solo.io has a renewal rate of 98.9% – using a very different customer engagement model, one similar to working with users in the open source community. 
“We’re working together to build the product.”</p><p> </p><p>Throughout her journey, she has carried the idea of a team: in her early beginnings in basketball, in how she established a “no politics” office culture, and even in the way she involves her family with Solo.io.</p><p> </p><p>As for the ever-elusive work/life balance, Levine called herself a workaholic, but suggested that her journey has prepared her for it:  “I trained really well. Chaos is a part of my personal life.”</p><p> </p><p>She elaborated, “I think that one way to do this is to basically bring the company to [my] personal life.  My family was really involved from the beginning and my daughter chose the logos. They’re all very knowledgeable and part of it.”</p>
]]></content:encoded>
      <enclosure length="33007187" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/e760adcb-1ae5-484e-85e7-bbe9d6bca96b/audio/cd79ee15-ddf1-42f0-a4c4-1d5b2a56a2a3/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>How Idit Levine’s Athletic Past Fueled Solo.io‘s Startup</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/cda65951-42be-4c78-9f4e-04b711ccd782/3000x3000/the-tech-odyssey-logo-white-bg.jpg?aid=rss_feed"/>
      <itunes:duration>00:34:22</itunes:duration>
      <itunes:summary>Idit Levine’s tech journey originated in an unexpected place: a basketball court. As a seventh grader in Israel, playing in hoops  tournaments definitely sparked her competitive side.

“I was basically going to compete with all my international friends for two minutes without parents, without anything,” Levine said. “I think it made me who I am today. It’s really giving you a lot of confidence to teach you how to handle situations … stay calm and still focus.”

Developing that calm and focus proved an asset during Levine’s subsequent career in professional basketball in Israel, and when she later started her own company. In this episode of The Tech Founder Odyssey podcast series, Levine, founder and CEO of Solo.io, an application networking company with a $1 billion valuation, shared her startup story.

The conversation was co-hosted by Colleen Coll and Heather Joslyn of The New Stack.</itunes:summary>
      <itunes:subtitle>Idit Levine’s tech journey originated in an unexpected place: a basketball court. As a seventh grader in Israel, playing in hoops  tournaments definitely sparked her competitive side.

“I was basically going to compete with all my international friends for two minutes without parents, without anything,” Levine said. “I think it made me who I am today. It’s really giving you a lot of confidence to teach you how to handle situations … stay calm and still focus.”

Developing that calm and focus proved an asset during Levine’s subsequent career in professional basketball in Israel, and when she later started her own company. In this episode of The Tech Founder Odyssey podcast series, Levine, founder and CEO of Solo.io, an application networking company with a $1 billion valuation, shared her startup story.

The conversation was co-hosted by Colleen Coll and Heather Joslyn of The New Stack.</itunes:subtitle>
      <itunes:keywords>software developer, the new stack, solo.io, devops, devops podcast, developer, the new stack makers, software engineer, makers, idit levine</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1345</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7a8e8bbe-0344-4cfe-b92b-7898df07d572</guid>
      <title>From DB2 to Real-Time with Aerospike Founder Srini Srinivasan</title>
      <description><![CDATA[<p><a href="https://aerospike.com/">Aerospike</a> Founder <a href="https://www.linkedin.com/in/drvsrini/">Srini Srinivasan</a> had just finished his Ph.D. at the University of Wisconsin when he joined IBM and worked under <a href="https://en.wikipedia.org/wiki/Donald_Haderle">Don Haderle</a>, the creator of DB2, the first commercial relational database management system.</p><p> </p><p>Haderle became a major influence on Srinivasan when he started Aerospike, a real-time data platform. To this day, Haderle is an advisor to Aerospike.</p><p> </p><p>"He was the first one I went back to for advice as to how to succeed," Srinivasan said in the most recent episode of The New Stack Makers series, "The Tech Founder Odyssey."</p><p> </p><p>A young, ambitious engineer, Srinivasan left IBM to join a startup. Impatient with the pace he considered slow, Srinivasan met with Haderle, who told him to go, challenge himself, and try new things that might be uncomfortable.</p><p> </p><p>Today, Srinivasan seeks a balance between research and product development, similar to the approach he learned at IBM -- the balance between what is very hard and what's impossible.</p><p> </p><p>Technical startup founders find themselves with complex technical problems all the time. Srinivasan talked about inspiration to solve those problems, but what does inspiration even mean?</p><p> </p><p>Inspiration is a complex topic to parse. It can be thought of as almost trivial or superficial to discuss. Srinivasan said inspiration becomes relevant when it is part of the work and how one honestly faces that work. Inspiration is honesty.</p><p> </p><p>"Because once one is honest, you're able to get the trust of the people you're working with," Srinivasan said. "So honesty leads to trust. Once you have trust, I think there can be a collaboration because now people don't have to worry about watching their back. 
You can make mistakes, and then you know that it's a trusted group of people. And they will, you know, watch your back. And then, with a team like that, you can now set goals that seem impossible. But with the combination of honesty and trust and collaboration, you can lead the team to essentially solve those hard problems. And in some cases, you have to be honest enough to realize that you don't have all the skills required to solve the problem, and you should be willing to go out and get somebody new to help you with that."</p><p> </p><p>Srinivasan uses the principles of honesty in Aerospike's software development. How does that manifest in the work Aerospike does? It leads to all kinds of insights about Unix, Linux, systems technologies, and everything built on top of the infrastructure. And that's the work Srinivasan enjoys so much – building foundational technology that may take years to build but over time, establishes the work that's important, scalable, and has great performance.</p>
]]></description>
      <pubDate>Thu, 8 Sep 2022 16:26:20 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/from-db2-to-real-time-with-aerospike-founder-srini-srinivasan-9BiDP9FH</link>
      <content:encoded><![CDATA[<p><a href="https://aerospike.com/">Aerospike</a> Founder <a href="https://www.linkedin.com/in/drvsrini/">Srini Srinivasan</a> had just finished his Ph.D. at the University of Wisconsin when he joined IBM and worked under <a href="https://en.wikipedia.org/wiki/Donald_Haderle">Don Haderle</a>, the creator of DB2, the first commercial relational database management system.</p><p> </p><p>Haderle became a major influence on Srinivasan when he started Aerospike, a real-time data platform. To this day, Haderle is an advisor to Aerospike.</p><p> </p><p>"He was the first one I went back to for advice as to how to succeed," Srinivasan said in the most recent episode of The New Stack Makers series, "The Tech Founder Odyssey."</p><p> </p><p>A young, ambitious engineer, Srinivasan left IBM to join a startup. Impatient with the pace he considered slow, Srinivasan met with Haderle, who told him to go, challenge himself, and try new things that might be uncomfortable.</p><p> </p><p>Today, Srinivasan seeks a balance between research and product development, similar to the approach he learned at IBM -- the balance between what is very hard and what's impossible.</p><p> </p><p>Technical startup founders find themselves with complex technical problems all the time. Srinivasan talked about inspiration to solve those problems, but what does inspiration even mean?</p><p> </p><p>Inspiration is a complex topic to parse. It can be thought of as almost trivial or superficial to discuss. Srinivasan said inspiration becomes relevant when it is part of the work and how one honestly faces that work. Inspiration is honesty.</p><p> </p><p>"Because once one is honest, you're able to get the trust of the people you're working with," Srinivasan said. "So honesty leads to trust. Once you have trust, I think there can be a collaboration because now people don't have to worry about watching their back. 
You can make mistakes, and then you know that it's a trusted group of people. And they will, you know, watch your back. And then, with a team like that, you can now set goals that seem impossible. But with the combination of honesty and trust and collaboration, you can lead the team to essentially solve those hard problems. And in some cases, you have to be honest enough to realize that you don't have all the skills required to solve the problem, and you should be willing to go out and get somebody new to help you with that."</p><p> </p><p>Srinivasan uses the principles of honesty in Aerospike's software development. How does that manifest in the work Aerospike does? It leads to all kinds of insights about Unix, Linux, systems technologies, and everything built on top of the infrastructure. And that's the work Srinivasan enjoys so much – building foundational technology that may take years to build but over time, establishes the work that's important, scalable, and has great performance.</p>
]]></content:encoded>
      <enclosure length="27282818" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/3c2be8a2-bb6d-4578-8141-c5c0e114b235/audio/13ed2f7f-7971-403a-8452-fcbe70ee24bf/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>From DB2 to Real-Time with Aerospike Founder Srini Srinivasan</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/37d09b29-6e26-4bb7-a010-cee8a3ff8842/3000x3000/the-tech-odyssey-logo-white-bg.jpg?aid=rss_feed"/>
      <itunes:duration>00:28:25</itunes:duration>
      <itunes:summary>Aerospike Founder Srini Srinivasan had just finished his Ph.D. at the University of Wisconsin when he joined IBM and worked under Don Haderle, the creator of DB2, the first commercial relational database management system.

Haderle became a major influence on Srinivasan when he started Aerospike, a real-time data platform. To this day, Haderle is an advisor to Aerospike.

&quot;He was the first one I went back to for advice as to how to succeed,&quot; Srinivasan said in the most recent episode of The New Stack Makers series, &quot;The Tech Founder Odyssey.&quot;</itunes:summary>
      <itunes:subtitle>Aerospike Founder Srini Srinivasan had just finished his Ph.D. at the University of Wisconsin when he joined IBM and worked under Don Haderle, the creator of DB2, the first commercial relational database management system.

Haderle became a major influence on Srinivasan when he started Aerospike, a real-time data platform. To this day, Haderle is an advisor to Aerospike.

&quot;He was the first one I went back to for advice as to how to succeed,&quot; Srinivasan said in the most recent episode of The New Stack Makers series, &quot;The Tech Founder Odyssey.&quot;</itunes:subtitle>
      <itunes:keywords>software developer, alex williams, the new stack, devops, aerospike, developer podcast, srini srinivasan, the new stack makers, the tech founder odyssey, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1344</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">645d0ee9-8bb8-4e83-b67a-0b7fe401c092</guid>
      <title>The Stone Ages of Open Source Security</title>
      <description><![CDATA[<p>Ask a developer about how they got into programming, and you learn so much about them.</p><p> </p><p>In this week's episode of The New Stack Makers, Chainguard founder Dan Lorenc said he got into programming halfway through college while studying mechanical engineering.</p><p> </p><p>"I got into programming because we had to do simulations and stuff in MATLAB," Lorenc said. "And then I switched over to Python because it was similar. And we didn't need those licenses or whatever that we needed. And then I was like, Oh, this is much faster than you know, ordering parts and going to the machine shop and reserving time, so I got into it that way."</p><p> </p><p>It was three or four years ago that Lorenc got into the field of open source security.</p><p> </p><p>"Open source security and supply chain security weren't buzzwords back then," Lorenc said. "Nobody was talking about it. And I kind of got paranoid about it."</p><p> </p><p>Lorenc worked on the Minikube open source project at Google, where he first saw how insecure it could be to work on open source projects. In the interview, he talks about the threats he saw in that work.</p><p> </p><p>It was so odd for Lorenc. The state of the art for open source security was not state of the art at all. It was the stone age.</p><p> </p><p>Lorenc said it felt weird for him to build the first release in Minikube that did not raise questions about security.</p><p> </p><p>"But I mean, this is like a 200 megabyte Go binary that people were just running as root on their laptops across the Kubernetes community," Lorenc said. "And nobody had any idea what I put in there if it matched the source on GitHub or anything. So that was pretty terrifying. 
And that got me paranoid about the space and kind of went down this long rabbit hole that eventually resulted in starting Chainguard."</p><p> </p><p>Today, the world is burning down, and that's good for a security startup like Chainguard.</p><p> </p><p>"Yeah, we've got a mess of an industry to tackle here," Lorenc said. "If you've been following the news at all, it might seem like the software industry is burning on fire or falling down or anything because of all of these security problems. It's bad news for a lot of folks, but it's good news if you're in the security space."</p><p> </p><p>Good news, yes, but how does it fit into a larger story?</p><p> </p><p>"Right now, one of our big focuses is figuring out how do we explain where we fit into the bigger landscape," Lorenc said. "Because the security market is massive and confusing and full of vendors, putting buzzwords on their websites, like zero trust and stuff like that. And it's pretty easy to get lost in that mess. And so figuring out how we position ourselves, how we handle the branding, the marketing, and making it clear to prospective customers and community members, everything exactly what it is we do and what threats our products mitigate, to make sure we're being accurate there. And conveying that to our customers. That's my big focus right now."</p>
]]></description>
      <pubDate>Tue, 30 Aug 2022 19:45:14 +0000</pubDate>
      <author>podcasts@thenewstack.io (dan lorenc, the new stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-stone-ages-of-open-source-security-75U8Yw6c</link>
      <content:encoded><![CDATA[<p>Ask a developer about how they got into programming, and you learn so much about them.</p><p> </p><p>In this week's episode of The New Stack Makers, Chainguard founder Dan Lorenc said he got into programming halfway through college while studying mechanical engineering.</p><p> </p><p>"I got into programming because we had to do simulations and stuff in MATLAB," Lorenc said. "And then I switched over to Python because it was similar. And we didn't need those licenses or whatever that we needed. And then I was like, Oh, this is much faster than you know, ordering parts and going to the machine shop and reserving time, so I got into it that way."</p><p> </p><p>It was three or four years ago that Lorenc got into the field of open source security.</p><p> </p><p>"Open source security and supply chain security weren't buzzwords back then," Lorenc said. "Nobody was talking about it. And I kind of got paranoid about it."</p><p> </p><p>Lorenc worked on the Minikube open source project at Google, where he first saw how insecure it could be to work on open source projects. In the interview, he talks about the threats he saw in that work.</p><p> </p><p>It was so odd for Lorenc. The state of the art for open source security was not state of the art at all. It was the stone age.</p><p> </p><p>Lorenc said it felt weird for him to build the first release in Minikube that did not raise questions about security.</p><p> </p><p>"But I mean, this is like a 200 megabyte Go binary that people were just running as root on their laptops across the Kubernetes community," Lorenc said. "And nobody had any idea what I put in there if it matched the source on GitHub or anything. So that was pretty terrifying. 
And that got me paranoid about the space and kind of went down this long rabbit hole that eventually resulted in starting Chainguard."</p><p> </p><p>Today, the world is burning down, and that's good for a security startup like Chainguard.</p><p> </p><p>"Yeah, we've got a mess of an industry to tackle here," Lorenc said. "If you've been following the news at all, it might seem like the software industry is burning on fire or falling down or anything because of all of these security problems. It's bad news for a lot of folks, but it's good news if you're in the security space."</p><p> </p><p>Good news, yes, but how does it fit into a larger story?</p><p> </p><p>"Right now, one of our big focuses is figuring out how do we explain where we fit into the bigger landscape," Lorenc said. "Because the security market is massive and confusing and full of vendors, putting buzzwords on their websites, like zero trust and stuff like that. And it's pretty easy to get lost in that mess. And so figuring out how we position ourselves, how we handle the branding, the marketing, and making it clear to prospective customers and community members, everything exactly what it is we do and what threats our products mitigate, to make sure we're being accurate there. And conveying that to our customers. That's my big focus right now."</p>
]]></content:encoded>
      <enclosure length="25336382" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/585c3857-710c-4d13-b0b5-fa96f15fa0eb/audio/8b64508a-d499-4612-bd4e-b333215bf0cb/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The Stone Ages of Open Source Security</itunes:title>
      <itunes:author>dan lorenc, the new stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/a1ad5c2a-bd70-4ec3-b72b-158c67cac531/3000x3000/the-tech-odyssey-logo-white-bg.jpg?aid=rss_feed"/>
      <itunes:duration>00:26:23</itunes:duration>
      <itunes:summary>Ask a developer about how they got into programming, and you learn so much about them.

In this week&apos;s episode of The New Stack Makers, Chainguard founder Dan Lorenc said he got into programming halfway through college while studying mechanical engineering.

&quot;I got into programming because we had to do simulations and stuff in MATLAB,&quot; Lorenc said. &quot;And then I switched over to Python because it was similar. And we didn&apos;t need those licenses or whatever that we needed. And then I was like, Oh, this is much faster than you know, ordering parts and going to the machine shop and reserving time, so I got into it that way.&quot;</itunes:summary>
      <itunes:subtitle>Ask a developer about how they got into programming, and you learn so much about them.

In this week&apos;s episode of The New Stack Makers, Chainguard founder Dan Lorenc said he got into programming halfway through college while studying mechanical engineering.

&quot;I got into programming because we had to do simulations and stuff in MATLAB,&quot; Lorenc said. &quot;And then I switched over to Python because it was similar. And we didn&apos;t need those licenses or whatever that we needed. And then I was like, Oh, this is much faster than you know, ordering parts and going to the machine shop and reserving time, so I got into it that way.&quot;</itunes:subtitle>
      <itunes:keywords>chainguard, the new stack, devops, devops podcast, developer, dan lorenc, software engineer, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1343</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">90962f46-c2c6-45c2-a342-f9c61bb24341</guid>
      <title>Curating for the SRE Through Lessons Learned at Google News</title>
      <description><![CDATA[<p>In the early 1990s, many kids got into programming video games. Tina Huang enjoyed developing her GeoCities site but not making games. Huang loved automating her website.</p><p> </p><p>"It is not a lie to say that what got me excited about coding was automation," said Huang, co-founder of <a href="https://www.transposit.com/">Transposit,</a> in this week's episode of The New Stack Makers as part of our Tech Founder Series. "Now, you're probably going to think to yourself: 'What middle school kid likes automation?'"</p><p> </p><p>Huang loved the idea of automating mundane tasks with a bit of code, so she did not have to hand type – just like the Jetsons and Rosie the Robot -- the robot people want: there to fold your laundry, but not take the joy away from what people like to do.</p><p> </p><p>Huang is like many of the founders we interview. Her job can be what she wants it to be. But Huang also has to take care of everything that needs to get done. All the work comes down to what the Transposit site says on the home page: Bring calm to the chaos. Through connected workflows, give TechOps and SREs visibility, context, and actionability across people, processes, and APIs.</p><p> </p><p>The statements reflect her own experience using automation to provide high-quality information.</p><p> </p><p>"I've always been swimming upstream against the tide when I worked at companies like Google and Twitter, where, you know, the tagline for Google News back then was 'News by Robots,'" Huang said. "The ideal in their mind was how do you get robots to do all the news reporting. And that is funny because now I think we have a different opinion. But at the time, it was popular to think news by robots would be more factual, more democratic."</p><p> </p><p>Huang worked on a project at Google exploring how to use algorithms to handle the first pass of curation for human editors to go in and then add that human touch to the news. 
The work reflected her love for long-form journalism and that human touch to information.</p><p> </p><p>Transport offers a similar next level of integration. Any RSS fans out there? Huang has a love/hate relationship with RSS. She loves it for what it can feed, but if the feed is not filtered, then it becomes overwhelming. Getting inundated with information happens when multiple integrations start to layer from Slack, for example, and other sources.</p><p> </p><p>"And suddenly, you're inundated with information because it was information designed for the consumption by machines, not at the human scale," Huang said. "You need that next layer of curation on top of it. Like how do you allow people to annotate that information? "</p><p> </p><p>Providing a choice in subscriptions can help. But at what level? And that's one of the areas that Huang hopes to tackle with Transposit."</p>
]]></description>
      <pubDate>Wed, 24 Aug 2022 15:58:29 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/curating-for-the-sre-through-lessons-learned-at-google-news-NBl6gobg</link>
      <content:encoded><![CDATA[<p>In the early 1990s, many kids got into programming video games. Tina Huang enjoyed developing her GeoCities site, but not making games. Huang loved automating her website.</p><p> </p><p>"It is not a lie to say that what got me excited about coding was automation," said Huang, co-founder of <a href="https://www.transposit.com/">Transposit</a>, in this week's episode of The New Stack Makers, part of our Tech Founder Series. "Now, you're probably going to think to yourself: 'What middle school kid likes automation?'"</p><p> </p><p>Huang loved the idea of automating mundane tasks with a bit of code so she did not have to hand-type – just like Rosie the Robot on The Jetsons, the robot people want: there to fold your laundry, but not to take the joy away from what people like to do.</p><p> </p><p>Huang is like many of the founders we interview. Her job can be what she wants it to be. But Huang also has to take care of everything that needs to get done. All the work comes down to what the Transposit site says on its home page: Bring calm to the chaos. Through connected workflows, give TechOps and SREs visibility, context, and actionability across people, processes, and APIs.</p><p> </p><p>The statements reflect her own experience in using automation to provide high-quality information.</p><p> </p><p>"I've always been swimming upstream against the tide when I worked at companies like Google and Twitter, where, you know, the tagline for Google News back then was 'News by Robots,'" Huang said. "The ideal in their mind was how do you get robots to do all the news reporting. And that is funny because now I think we have a different opinion. But at the time, it was popular to think news by robots would be more factual, more democratic."</p><p> </p><p>Huang worked on a project at Google exploring how to use algorithms to produce a first pass of curation, which human editors could then refine by adding that human touch to the news. The work reflected her love for long-form journalism and for the human touch it brings to information.</p><p> </p><p>Transposit offers a similar next level of integration. Any RSS fans out there? Huang has a love/hate relationship with RSS. She loves it for what it can feed, but an unfiltered feed becomes overwhelming. Getting inundated with information happens when multiple integrations start to layer on – from Slack, for example, and other sources.</p><p> </p><p>"And suddenly, you're inundated with information because it was information designed for consumption by machines, not at the human scale," Huang said. "You need that next layer of curation on top of it. Like, how do you allow people to annotate that information?"</p><p> </p><p>Providing a choice in subscriptions can help. But at what level? That's one of the areas Huang hopes to tackle with Transposit.</p>
]]></content:encoded>
      <enclosure length="29237195" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/6afafab2-3e88-4e6a-81d0-0b75e2fad8d4/audio/d2cc13e8-6be1-4529-976a-48579be27ab4/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Curating for the SRE Through Lessons Learned at Google News</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/04f5bb3c-7741-491e-befd-1c72220e5937/3000x3000/the-tech-odyssey-logo-white-bg.jpg?aid=rss_feed"/>
      <itunes:duration>00:30:27</itunes:duration>
      <itunes:summary>In the early 1990s, many kids got into programming video games. Tina Huang enjoyed developing her GeoCities site but not making games. Huang loved automating her website.

Huang is like many of the founders we interview. Her job can be what she wants it to be. But Huang also has to take care of everything that needs to get done. All the work comes down to what the Transposit site says on the home page: Bring calm to the chaos. Through connected workflows, give TechOps and SREs visibility, context, and actionability across people, processes, and APIs.</itunes:summary>
      <itunes:subtitle>In the early 1990s, many kids got into programming video games. Tina Huang enjoyed developing her GeoCities site but not making games. Huang loved automating her website.

Huang is like many of the founders we interview. Her job can be what she wants it to be. But Huang also has to take care of everything that needs to get done. All the work comes down to what the Transposit site says on the home page: Bring calm to the chaos. Through connected workflows, give TechOps and SREs visibility, context, and actionability across people, processes, and APIs.</itunes:subtitle>
      <itunes:keywords>software engineering, the new stack, devops, devops podcast, transposit, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1342</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7ebc2b97-d61d-4b3c-9791-ffbd5ac0da53</guid>
      <title>A Technical Founder&apos;s Story: Jake Warner on Cycle.io</title>
      <description><![CDATA[<p>Welcome to the first in our series on The New Stack Makers about technical founders, those engineers who have moved from engineering jobs to running companies of their own. What we want to know is what that's like for the founder. How is it to be an engineer turned entrepreneur?</p><p> </p><p>We like to ask technologists about their first computer or when they started programming. We always find a connection to what the engineer does today. It's these kinds of questions you will hear us ask in the series to get more insight into everything that happens when the engineer is responsible for the entire organization. We've listened to feedback about what people want from this series. Here are a few of the replies we received to my tweet asking for feedback about the new series.</p><blockquote class="twitter-tweet"><p dir="ltr" lang="en">If they have kids, how much work is taken on by their SO? Lots of technical founders are only able to do what they do because their partner is lifting a lot in the background — they hardly ever get the credits tho</p>— Anaïs Urlichs ☀️ (@urlichsanais) <a href="https://twitter.com/urlichsanais/status/1555173443791978499?ref_src=twsrc%5Etfw">August 4, 2022</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script><p> </p><p>I host the first four interviews. The New Stack's Colleen Coll and Heather Joslyn will co-host the shows that follow in the series.</p><p> </p><p>We interviewed Cycle.io founder Jake Warner for the first episode in the series, about how he went from downloading a virus on an inherited Windows 95 machine as a 10-year-old to leading a startup.</p><p> </p><p>"You know, I had to apologize to my Dad for needing to do a full reinstall on the family computer," Warner said. "But it was the fact that someone, through just the use of a file, could cause that much damage that started making me wonder, wow, there's a lot more to this than I thought."</p><p> </p><p>Warner was never much of a gamer. He preferred the chat rooms and conversation to playing <a href="https://starcraft2.com/">Starcraft</a>, a game he liked to talk about more than play. In those chat rooms, Warner met people who also preferred talking about the game to playing it. He became friends with a group that liked playing games over the network Starcraft hosted – the kinds of games kids play all the time. They were learning about firewalls, for example, to attack each other virtually between chat rooms.</p><p> </p><p>"And because of that, that got me interested in all kinds of firewalls and security things, which led to getting into programming," Warner said. "And so, I guess, to get back to your question: it started with a game, but very quickly became a lot more than that."</p><p> </p><p>And now Warner is leading Cycle, which he and his colleagues have built from the ground up. For a long time, they marketed Cycle as a container orchestrator. Now they call Cycle a platform for building platforms – ironically similar to the story of a kid playing a game within a game.</p><p> </p><p>Warner led a company he described as a container orchestrator for some time. There is one orchestrator that enterprise engineers know well, and that's Kubernetes. Warner and his team realized that Cycle is different from a container orchestrator. So how to change the message?</p><p> </p><p>Knowing what to do is the challenge of any founder. And that's a big aspect of what we will explore in our series on technical founders. We hope you enjoy the interviews. Please provide feedback and your questions. They are always invaluable and serve as a way to draw thoughtful perspectives from the founders we interview.</p>
]]></description>
      <pubDate>Wed, 17 Aug 2022 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/a-technical-founders-story-jake-warner-on-cycleio-_JTNfMbI</link>
      <content:encoded><![CDATA[<p>Welcome to the first in our series on The New Stack Makers about technical founders, those engineers who have moved from engineering jobs to running companies of their own. What we want to know is what that's like for the founder. How is it to be an engineer turned entrepreneur?</p><p> </p><p>We like to ask technologists about their first computer or when they started programming. We always find a connection to what the engineer does today. It's these kinds of questions you will hear us ask in the series to get more insight into everything that happens when the engineer is responsible for the entire organization. We've listened to feedback about what people want from this series. Here are a few of the replies we received to my tweet asking for feedback about the new series.</p><blockquote class="twitter-tweet"><p dir="ltr" lang="en">If they have kids, how much work is taken on by their SO? Lots of technical founders are only able to do what they do because their partner is lifting a lot in the background — they hardly ever get the credits tho</p>— Anaïs Urlichs ☀️ (@urlichsanais) <a href="https://twitter.com/urlichsanais/status/1555173443791978499?ref_src=twsrc%5Etfw">August 4, 2022</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script><p> </p><p>I host the first four interviews. The New Stack's Colleen Coll and Heather Joslyn will co-host the shows that follow in the series.</p><p> </p><p>We interviewed Cycle.io founder Jake Warner for the first episode in the series, about how he went from downloading a virus on an inherited Windows 95 machine as a 10-year-old to leading a startup.</p><p> </p><p>"You know, I had to apologize to my Dad for needing to do a full reinstall on the family computer," Warner said. "But it was the fact that someone, through just the use of a file, could cause that much damage that started making me wonder, wow, there's a lot more to this than I thought."</p><p> </p><p>Warner was never much of a gamer. He preferred the chat rooms and conversation to playing <a href="https://starcraft2.com/">Starcraft</a>, a game he liked to talk about more than play. In those chat rooms, Warner met people who also preferred talking about the game to playing it. He became friends with a group that liked playing games over the network Starcraft hosted – the kinds of games kids play all the time. They were learning about firewalls, for example, to attack each other virtually between chat rooms.</p><p> </p><p>"And because of that, that got me interested in all kinds of firewalls and security things, which led to getting into programming," Warner said. "And so, I guess, to get back to your question: it started with a game, but very quickly became a lot more than that."</p><p> </p><p>And now Warner is leading Cycle, which he and his colleagues have built from the ground up. For a long time, they marketed Cycle as a container orchestrator. Now they call Cycle a platform for building platforms – ironically similar to the story of a kid playing a game within a game.</p><p> </p><p>Warner led a company he described as a container orchestrator for some time. There is one orchestrator that enterprise engineers know well, and that's Kubernetes. Warner and his team realized that Cycle is different from a container orchestrator. So how to change the message?</p><p> </p><p>Knowing what to do is the challenge of any founder. And that's a big aspect of what we will explore in our series on technical founders. We hope you enjoy the interviews. Please provide feedback and your questions. They are always invaluable and serve as a way to draw thoughtful perspectives from the founders we interview.</p>
]]></content:encoded>
      <enclosure length="25915674" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/b35b2c9b-cc08-4555-8390-fc71f9c3f5a3/audio/592c4775-5e1c-4393-bafd-adc5acee40b4/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>A Technical Founder&apos;s Story: Jake Warner on Cycle.io</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/8cf875ff-9c1d-4f7a-a22d-e5592be9b439/3000x3000/the-tech-odyssey-logo-white-bg.jpg?aid=rss_feed"/>
      <itunes:duration>00:26:59</itunes:duration>
      <itunes:summary>Welcome to the first in our series on The New Stack Makers about technical founders, those engineers who have moved from engineering jobs to running a company of their own. What we want to know is what that&apos;s like for the founder. How is it to be an engineer turned entrepreneur?

We interviewed Cycle.io Founder Jake Warner for the first episode in the series about how he went from downloading a virus on an inherited Windows 95 machine as a 10-year-old to leading a startup.</itunes:summary>
      <itunes:subtitle>Welcome to the first in our series on The New Stack Makers about technical founders, those engineers who have moved from engineering jobs to running a company of their own. What we want to know is what that&apos;s like for the founder. How is it to be an engineer turned entrepreneur?

We interviewed Cycle.io Founder Jake Warner for the first episode in the series about how he went from downloading a virus on an inherited Windows 95 machine as a 10-year-old to leading a startup.</itunes:subtitle>
      <itunes:keywords>the new stack, devops, devops podcast, software engineer, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1341</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">91aa7142-3b3d-43a7-8eaa-6ff0b8c7b715</guid>
      <title>Rethinking Web Application Firewalls</title>
      <description><![CDATA[<p>Web Application Firewalls (WAFs) first emerged in the late 1990s as Web server attacks became more common. Today, in the context of cloud native technologies, there’s an ongoing rethinking of how a WAF should be applied.</p><p> </p><p>No longer is it solely static applications sitting behind a WAF, said Ratan Tipirneni, president and CEO of Tigera, in this episode of The New Stack Makers.</p><p> </p><p>“With cloud native applications and a microservices distributed architecture, you have to assume that something inside your cluster has been compromised,” Tipirneni said. “So just sitting behind a WAF doesn't give you adequate protection; you have to assume that every single microservice container is almost open to the Internet, metaphorically speaking.”</p><p> </p><p>So then the question is: how do you apply WAF controls?</p><p> </p><p>Today’s WAF has to be workload-centric, Tipirneni said. In his view, every workload has to have its own WAF. When a container launches, the WAF control is automatically spun up.</p><p> </p><p>That way, even if something inside a cluster is compromised or exposes some of the services to the Internet, it doesn't matter, because the workload is protected, Tipirneni said.</p><p> </p><p>So how do you apply this level of security? You have to think in terms of a workload-centric WAF.</p><p>The Scenario</p><p> </p><p>The vulnerabilities are so numerous now, and cloud native applications have larger attack surfaces, with no way to mitigate vulnerabilities using traditional means, Tipirneni said.</p><p> </p><p>“It's no longer sufficient to throw out a report that tells you about all the vulnerabilities in your system,” Tipirneni said. “Because that report is not actionable. People operating the services are discovering that the amount of time and effort it takes to remediate all these vulnerabilities is incredible, right? So they're looking for some level of prioritization in terms of where to start.”</p><p> </p><p>And the onus is on the user to mitigate the problem, Tipirneni said. Those customers have to think about the blast radius of the vulnerability and its context in the system. The second part: how to manage the attack surface.</p><p> </p><p>In this world of cloud native applications, customers are discovering very quickly that trying to protect every single thing, when everything has access to everything else, is an almost impossible task, Tipirneni said.</p><p> </p><p>What’s needed is a way for users to control how microservices talk to each other, with permissions set for intercommunication. In some cases, specific microservices should not be talking to each other at all.</p><p> </p><p>“So that is a highly leveraged activity and security control that can stop many of these attacks,” Tipirneni said.</p><p> </p><p>Even after all of that, the user still has to assume that attacks will happen, mainly because there's always the threat of an insider attack.</p><p> </p><p>And in that situation, the search is for patterns of anomalous behavior at the process level, the file system level or the system call level, to determine the baseline for standard behavior that can then tell the user how to identify deviations, Tipirneni said. Then it’s a matter of trying to tease out some signals, which are indicators of either an attack or a compromise.</p><p> </p><p>“Maybe a simpler use case of that is to constantly be able to monitor at run time for hashes or files or binaries that are known to be bad,” Tipirneni said.</p><p> </p><p>The real challenge for companies is setting up the architecture to make microservices secure. There are a number of vectors the market may take. In the recording, Tipirneni talks about the evolution of the WAF, the importance of observability, and better ways to establish context with the services a company has deployed and the overall systems that companies have architected.</p><p> </p><p>“There is no single silver bullet,” Tipirneni said. “You have to be able to do multiple things to keep your application safe inside cloud native architectures.”</p>
]]></description>
      <pubDate>Tue, 9 Aug 2022 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/rethinking-web-application-firewalls-coBXjN_G</link>
      <content:encoded><![CDATA[<p>Web Application Firewalls (WAFs) first emerged in the late 1990s as Web server attacks became more common. Today, in the context of cloud native technologies, there’s an ongoing rethinking of how a WAF should be applied.</p><p> </p><p>No longer is it solely static applications sitting behind a WAF, said Ratan Tipirneni, president and CEO of Tigera, in this episode of The New Stack Makers.</p><p> </p><p>“With cloud native applications and a microservices distributed architecture, you have to assume that something inside your cluster has been compromised,” Tipirneni said. “So just sitting behind a WAF doesn't give you adequate protection; you have to assume that every single microservice container is almost open to the Internet, metaphorically speaking.”</p><p> </p><p>So then the question is: how do you apply WAF controls?</p><p> </p><p>Today’s WAF has to be workload-centric, Tipirneni said. In his view, every workload has to have its own WAF. When a container launches, the WAF control is automatically spun up.</p><p> </p><p>That way, even if something inside a cluster is compromised or exposes some of the services to the Internet, it doesn't matter, because the workload is protected, Tipirneni said.</p><p> </p><p>So how do you apply this level of security? You have to think in terms of a workload-centric WAF.</p><p>The Scenario</p><p> </p><p>The vulnerabilities are so numerous now, and cloud native applications have larger attack surfaces, with no way to mitigate vulnerabilities using traditional means, Tipirneni said.</p><p> </p><p>“It's no longer sufficient to throw out a report that tells you about all the vulnerabilities in your system,” Tipirneni said. “Because that report is not actionable. People operating the services are discovering that the amount of time and effort it takes to remediate all these vulnerabilities is incredible, right? So they're looking for some level of prioritization in terms of where to start.”</p><p> </p><p>And the onus is on the user to mitigate the problem, Tipirneni said. Those customers have to think about the blast radius of the vulnerability and its context in the system. The second part: how to manage the attack surface.</p><p> </p><p>In this world of cloud native applications, customers are discovering very quickly that trying to protect every single thing, when everything has access to everything else, is an almost impossible task, Tipirneni said.</p><p> </p><p>What’s needed is a way for users to control how microservices talk to each other, with permissions set for intercommunication. In some cases, specific microservices should not be talking to each other at all.</p><p> </p><p>“So that is a highly leveraged activity and security control that can stop many of these attacks,” Tipirneni said.</p><p> </p><p>Even after all of that, the user still has to assume that attacks will happen, mainly because there's always the threat of an insider attack.</p><p> </p><p>And in that situation, the search is for patterns of anomalous behavior at the process level, the file system level or the system call level, to determine the baseline for standard behavior that can then tell the user how to identify deviations, Tipirneni said. Then it’s a matter of trying to tease out some signals, which are indicators of either an attack or a compromise.</p><p> </p><p>“Maybe a simpler use case of that is to constantly be able to monitor at run time for hashes or files or binaries that are known to be bad,” Tipirneni said.</p><p> </p><p>The real challenge for companies is setting up the architecture to make microservices secure. There are a number of vectors the market may take. In the recording, Tipirneni talks about the evolution of the WAF, the importance of observability, and better ways to establish context with the services a company has deployed and the overall systems that companies have architected.</p><p> </p><p>“There is no single silver bullet,” Tipirneni said. “You have to be able to do multiple things to keep your application safe inside cloud native architectures.”</p>
]]></content:encoded>
      <enclosure length="26233737" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/9a45a180-1277-4083-8b55-cafab9a21e18/audio/2f5cdb64-68d0-42e7-858f-2b8d7fc447b6/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Rethinking Web Application Firewalls</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:27:19</itunes:duration>
      <itunes:summary>Web Application Firewalls (WAFs) first emerged in the late 1990s as Web server attacks became more common. Today, in the context of cloud native technologies, there’s an ongoing rethinking of how a WAF should be applied.

No longer is it solely static applications sitting behind a WAF, said Ratan Tipirneni, president &amp; CEO of Tigera, in this episode of The New Stack Makers.</itunes:summary>
      <itunes:subtitle>Web Application Firewalls (WAFs) first emerged in the late 1990s as Web server attacks became more common. Today, in the context of cloud native technologies, there’s an ongoing rethinking of how a WAF should be applied.

No longer is it solely static applications sitting behind a WAF, said Ratan Tipirneni, president &amp; CEO of Tigera, in this episode of The New Stack Makers.</itunes:subtitle>
      <itunes:keywords>thenewstack, devops, tigera, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1340</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e97abc0f-cc03-4c8a-9959-bd3293cb0187</guid>
      <title>Passage: A Passwordless Service with Biometrics</title>
      <description><![CDATA[<p><a href="https://passage.id/">Passage</a> adds device-native biometric authentication to websites, allowing passwordless security on devices with or without Touch ID.</p><p> </p><p>In this episode of The New Stack Makers, Passage co-founders Cole Hecht and Anna Pobletts talk about how the service works and how developers can offer users its biometric service.</p><p> </p><p>Hecht and Pobletts have worked in product security for many years, and the recurring problem is always password-based security. But there really is no great solution, Pobletts said. Multi-factor authentication adds security, but the user experience is lacking. <a href="https://thenewstack.io/stytchs-api-first-approach-to-passwordless-authentication/">Magic links</a>, <a href="https://thenewstack.io/how-to-optimize-customer-identity-and-access-management/">adaptive MFA</a> and other techniques add a bit of improvement but do not strike a great balance between user experience and security.</p><p> </p><p>“Whereas biometrics is the only option we've ever seen that gives you both great security and great user experience right out of the box,” Pobletts said.</p><p> </p><p>The goal for Hecht and Pobletts is to offer developers what is challenging to implement themselves: a passwordless service with a high level of security and a great user experience.</p><p> </p><p>Passage is built on WebAuthn, a Web protocol that allows a developer to connect websites with browsers and various devices through the authenticators on those devices, Pobletts said.</p><p> </p><p>“So that could be anything right now,” Pobletts said. “It's things like fingerprint readers and face identification. But in the future, it could be voice identification, or it could be, you know, your presence and things like that – it could be all sorts of stuff in the future. But ultimately, your device is generating a cryptographic key pair and storing the private key in the TPM of your device. The cool thing about this protocol is that your biometric data never leaves your device; it's a huge win for privacy. Not Passage, not your browser – no one ever actually sees your fingerprint data in any way.”</p><p> </p><p>It’s cryptographically secure under the hood, with Passage as the platform on top, Pobletts said.</p><p> </p><p><a href="https://webauthn.guide/">WebAuthn</a> is designed for single devices, Pobletts said. A developer authenticates one fingerprint, for example, to one device. But that does not work well on the Internet, where a user may have a phone, a tablet and a computer. Passage coordinates and orchestrates between different devices to give an easy experience.</p><p> </p><p>“So in my case, I have an iPhone, I do Face ID,” said Hecht, demonstrating the service. “And then I'm going to be signed in on both devices automatically. So that's a great way to kind of give every user access to the site no matter what device they're on.”</p><p> </p><p>With Passage, the biometric is added to any device a user adds, Hecht said. Passage handles the multidevice orchestration.</p><p> </p><p>Use cases?</p><p> </p><p>“FinTech people like the security properties of it; they kind of like that cool, shiny user experience that they want to deliver to their end users,” Hecht said. Then there is any website or business that cares about conversions: people who want signups, who measure success by the number of people registering and creating accounts. “Passage has a really nice story for that because we cut out so much friction around those conversion points.”</p>
]]></description>
      <pubDate>Tue, 2 Aug 2022 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack Podcast)</author>
      <link>https://thenewstack.simplecast.com/episodes/passage-a-passwordless-service-with-biometrics-QiRLfwkT</link>
      <content:encoded><![CDATA[<p><a href="https://passage.id/">Passage</a> adds device native biometric authorization to web sites to allow passwordless security on devices with or without Touch ID.</p><p> </p><p>In this episode of The New Stack Makers, Passage Co-Founders Cole Hecht and Anna Pobletts talk about how the service works for developers to offer users its biometric service.</p><p> </p><p>Hecht and Pobletts have worked in product security for many years and the recurring problem is always password-based security. But there really is no great solution, Pobletts said. Multi-factor authentication adds security but the user experience is lacking. <a href="https://thenewstack.io/stytchs-api-first-approach-to-passwordless-authentication/">Magic links</a>, <a href="https://thenewstack.io/how-to-optimize-customer-identity-and-access-management/">adaptive MFA</a>, and other techniques add a bit of improvement but are not a great balance of user experience and security.</p><p> </p><p>“Whereas biometrics is the only option we've ever seen that gives you both great security and great user experience right out of the box,” Pobletts.</p><p> </p><p>The goal for Hecht and Pobletts: offer developers what is challenging to implement themselves: a passwordless service with a high security level and a great user experience.</p><p> </p><p>Passage is built on WebAuthn, a Web protocol that allows a developer to connect Web sites with browsers and various devices through the authenticators on those devices, Pobletts said.</p><p> </p><p>“So that could be anything right now,” Pobletts said. “It's things like fingerprint readers and face identification. But in the future, it could be voice identification, or it could be, you know, your presence and things like that like it could be all sorts of stuff in the future. But ultimately, your device is generating a cryptographic key pair and storing the private key in the TPM of your device. 
The cool thing about this protocol is that your biometric data never leaves your device, which is a huge win for privacy. Neither Passage nor your browser ever actually sees your fingerprint data in any way.”</p><p> </p><p>It’s cryptographically secure under the hood, with Passage as the platform on top, Pobletts said.</p><p> </p><p><a href="https://webauthn.guide/">WebAuthn</a> is designed for single devices, Pobletts said. A developer authenticates one fingerprint, for example, to one device. But that does not work well on the internet, where a user may have a phone, a tablet, and a computer. Passage coordinates and orchestrates between different devices to give an easy experience.</p><p> </p><p>“So in my case, I have an iPhone, I do face ID,” said Hecht, demonstrating the service. “And then I'm going to be signed in on both devices automatically. So that's a great way to kind of give every user access to the site no matter what device they're on.”</p><p> </p><p>With Passage, the biometric is added to any device a user adds, Hecht said. Passage handles the multidevice orchestration.</p><p> </p><p>Use cases?</p><p> </p><p>“FinTech people like the security properties of it, they kind of like that cool, shiny user experience that they want to deliver to their end users,” Hecht said. And then there's any website or business that cares about conversions: people who want signups, who measure success by the number of users registering and creating accounts. “Passage has a really nice story for that because we cut out so much friction around those conversion points.”</p>
]]></content:encoded>
      <enclosure length="10904237" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/6ef5429c-ddd6-4e76-8ad2-4407f43e490d/audio/6955b920-5192-4b8d-9f04-0f3e6d49c11d/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Passage: A Passwordless Service with Biometrics</itunes:title>
      <itunes:author>The New Stack Podcast</itunes:author>
      <itunes:duration>00:11:21</itunes:duration>
      <itunes:summary>Passage adds device-native biometric authorization to websites, allowing passwordless security on devices with or without Touch ID.

In this episode of The New Stack Makers, Passage Co-Founders Cole Hecht and Anna Pobletts talk about how the service works for developers to offer users its biometric service.</itunes:summary>
      <itunes:subtitle>Passage adds device-native biometric authorization to websites, allowing passwordless security on devices with or without Touch ID.

In this episode of The New Stack Makers, Passage Co-Founders Cole Hecht and Anna Pobletts talk about how the service works for developers to offer users its biometric service.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1339</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9d8463d9-bed0-442d-87f5-0418c338bbf6</guid>
      <title>What Does Kubernetes Cost You?</title>
      <description><![CDATA[<p>In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, <a href="https://www.linkedin.com/in/webbbrown">Webb Brown</a>, CEO and co-founder of <a href="https://www.kubecost.com">KubeCost</a>, talked with The New Stack about opening up the black box on how much Kubernetes is really costing. <br /><br />Whether we’re talking about cloud costs in general or the costs specifically associated with Kubernetes, the problem teams complain about is lack of visibility. This is a cliché complaint about AWS, but it gets even more complicated once Kubernetes enters the picture. “Now everything’s distributed, everything’s shared,” Brown said. “It becomes much harder to understand and break down these costs. And things just tend to be way more dynamic.” The ability of pods to spin up and down is a key advantage of Kubernetes and brings resilience, but it also makes it harder to understand how much it costs to run a specific feature. <br /><br />And costs aren’t just about money, either. Even with unlimited money, cost information can provide important insight into performance issues, reliability or availability. “Our founding team was at Google working on infrastructure monitoring, we view costs as a really important part of this equation, but only one part of the equation, which is you’re really looking at the relationship between performance and cost,” Brown said. “Even with unlimited budget, you would still care about resourcing and configuration, because it can really impact reliability and availability of your services.”</p>
]]></description>
      <pubDate>Wed, 27 Jul 2022 15:10:40 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/what-does-kubernetes-cost-you-m4waOyWp</link>
      <content:encoded><![CDATA[<p>In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, <a href="https://www.linkedin.com/in/webbbrown">Webb Brown</a>, CEO and co-founder of <a href="https://www.kubecost.com">KubeCost</a>, talked with The New Stack about opening up the black box on how much Kubernetes is really costing. <br /><br />Whether we’re talking about cloud costs in general or the costs specifically associated with Kubernetes, the problem teams complain about is lack of visibility. This is a cliché complaint about AWS, but it gets even more complicated once Kubernetes enters the picture. “Now everything’s distributed, everything’s shared,” Brown said. “It becomes much harder to understand and break down these costs. And things just tend to be way more dynamic.” The ability of pods to spin up and down is a key advantage of Kubernetes and brings resilience, but it also makes it harder to understand how much it costs to run a specific feature. <br /><br />And costs aren’t just about money, either. Even with unlimited money, cost information can provide important insight into performance issues, reliability or availability. “Our founding team was at Google working on infrastructure monitoring, we view costs as a really important part of this equation, but only one part of the equation, which is you’re really looking at the relationship between performance and cost,” Brown said. “Even with unlimited budget, you would still care about resourcing and configuration, because it can really impact reliability and availability of your services.”</p>
]]></content:encoded>
      <enclosure length="11966282" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/fab615fd-5a7f-46fa-952c-6fdfdfda206a/audio/4b697ef0-9b7b-4375-88eb-ad7422fd4f2c/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What Does Kubernetes Cost You?</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:12:27</itunes:duration>
      <itunes:summary>In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, Webb Brown, CEO and co-founder of KubeCost, talked with The New Stack about opening up the black box on how much Kubernetes is really costing.</itunes:summary>
      <itunes:subtitle>In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, Webb Brown, CEO and co-founder of KubeCost, talked with The New Stack about opening up the black box on how much Kubernetes is really costing.</itunes:subtitle>
      <itunes:keywords>thenewstack, software engineer podcast, devops, devops podcast, software engineer, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1338</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">cc140da5-003f-439a-a257-2ced75c65223</guid>
      <title>Open Technology, Financial Sustainability and the Importance of Community</title>
      <description><![CDATA[<p>In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, <a href="https://www.linkedin.com/in/amandabrocktech/">Amanda Brock</a>, CEO and founder of <a href="https://openuk.uk">OpenUK</a>, talked with The New Stack about revenue models for open source and how those fit into building a sustainable project.</p><p>Funding an open source project has to be part of the sustainability question — open source requires humans to contribute, and those humans have bills to pay and risk burnout if the open source project is a side gig after their full-time job. Those aren’t the only expenses a project might accrue, either — there might be cloud costs, for example. Brock says there are essentially eight categories of funding models for open source, of which really only two or three have proven successful: support, subscription and open core.</p><p>So how do we define open core, exactly? “You get different kinds of open core businesses, one that is driven very much by the needs of the company, and one that is driven by the needs of the open source project and community,” Brock said. In other words, sometimes the project exists to drive revenue, and sometimes the revenue exists to support the project — a subtle distinction, but it’s easy to see how one orientation or the other could change a company’s relationship with open source.</p><p>Are both types really open source? For Brock, it all comes down to community. “It’s the companies that have proper community that are really open source to me,” she said. “That’s where you’ve got a proper project with a real community, the community is not entirely based off of your employees.”</p>
]]></description>
      <pubDate>Tue, 19 Jul 2022 16:13:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/open-technology-financial-sustainability-and-the-importance-of-community-3pqNaygn</link>
      <content:encoded><![CDATA[<p>In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, <a href="https://www.linkedin.com/in/amandabrocktech/">Amanda Brock</a>, CEO and founder of <a href="https://openuk.uk">OpenUK</a>, talked with The New Stack about revenue models for open source and how those fit into building a sustainable project.</p><p>Funding an open source project has to be part of the sustainability question — open source requires humans to contribute, and those humans have bills to pay and risk burnout if the open source project is a side gig after their full-time job. Those aren’t the only expenses a project might accrue, either — there might be cloud costs, for example. Brock says there are essentially eight categories of funding models for open source, of which really only two or three have proven successful: support, subscription and open core.</p><p>So how do we define open core, exactly? “You get different kinds of open core businesses, one that is driven very much by the needs of the company, and one that is driven by the needs of the open source project and community,” Brock said. In other words, sometimes the project exists to drive revenue, and sometimes the revenue exists to support the project — a subtle distinction, but it’s easy to see how one orientation or the other could change a company’s relationship with open source.</p><p>Are both types really open source? For Brock, it all comes down to community. “It’s the companies that have proper community that are really open source to me,” she said. “That’s where you’ve got a proper project with a real community, the community is not entirely based off of your employees.”</p>
]]></content:encoded>
      <enclosure length="12056966" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/76a8b028-663f-4807-808a-afa2be146d35/audio/77386e13-2ff7-4e41-b5f4-faf9e06da8b4/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Open Technology, Financial Sustainability and the Importance of Community</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:12:33</itunes:duration>
      <itunes:summary>In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, Amanda Brock, CEO and founder of OpenUK, talked with The New Stack about revenue models for open source and how those fit into building a sustainable project. </itunes:summary>
      <itunes:subtitle>In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, Amanda Brock, CEO and founder of OpenUK, talked with The New Stack about revenue models for open source and how those fit into building a sustainable project. </itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1337</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">bc47f62e-d786-49f1-b09c-9b805d6817bd</guid>
      <title>What Can the Tech Community Do to Protect Its Trans Members?</title>
      <description><![CDATA[<p>AUSTIN, TEX. — In one of the most compelling keynote addresses at The Linux Foundation’s Open Source Summit North America, held here in June, <a href="https://www.linkedin.com/in/aevaonline/">Aeva Black</a>, a veteran of the open source community, said that a friend of theirs recently commented that, “I feel like all the trans women I know on Twitter are software developers.”</p><p> </p><p>There’s a reason for that, Black said. It’s called “survivor bias”: The transgender software developers the friend knows on Twitter are only a small sample of the trans kids who survived into adulthood, or didn’t get pushed out of mainstream society.</p><p> </p><p>“It's a pretty common trope, at least on the internet: transwomen are all software developers, we all have high-paying jobs, we're on TikTok or Twitter. And that's really a sampling bias, the transgender people who have the privilege to be loud,” said Black, in this On the Road episode of The New Stack Makers podcast.</p><p> </p><p>Black, whose keynote alerted the conference attendees about how the rights of transgender individuals are under attack around the United States, and the role tech can play, currently works in Microsoft Azure's Office of the Chief Technology Officer and holds seats on the boards of the Open Source Initiative and on the OpenSSF's Technical Advisory Council. 
In this episode of Makers, they unpacked the keynote’s themes with <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a>, TNS features editor.</p><p> </p><p>Pew Research Center data released in June reports that <a href="https://www.pewresearch.org/fact-tank/2022/06/07/about-5-of-young-adults-in-the-u-s-say-their-gender-is-different-from-their-sex-assigned-at-birth/">5% of Americans under 30 identify as transgender or nonbinary</a> — <a href="https://www.worldatlas.com/articles/what-percentage-of-the-world-population-have-red-hair.html">roughly the same percentage that have red hair</a>.</p><p> </p><p>The Pew study and the latest <a href="https://survey.stackoverflow.co/2022/#demographics-trans-learn">Stack Overflow Developer Survey</a> reveal that younger people are more likely than their elders to claim a transgender or nonbinary identity. Failure to accept these people, Black said, could have an impact on open source work, and tech work more generally.</p><p> </p><p>“If you're managing a project, and you want to attract younger developers who could then pick it up and carry on the work over time, you need to make sure that you're welcoming of all younger developers,” they said.</p><h2>Rethinking Codes of Conduct</h2><p><a href="https://thenewstack.io/open-source-communities-need-more-safe-spaces-and-codes-of-conducts-now/">Codes of Conduct</a>, must-haves for meetups, conferences and open source projects over the past few years, are too often thought of as tools for punishment, Black said in their keynote. For Makers, they advocated for thinking of those codes as tools for community stewardship.</p><p> </p><p>As a former member of the Kubernetes Code of Conduct committee, Black pointed out that “80% of what we did … while I served wasn't punishing people. It was stepping in when there was conflict, when people you know, stepped on someone else's toe, accidentally offended somebody. Like, ‘OK, hang on, let's sort this out.' 
So it was much more stewardship, incident response, mediation.”</p><p> </p><p>LGBT people are currently the targets of new legislation in several U.S. states. The tech world and its community leaders should protect community members who may be vulnerable in this new political climate, Black said.</p><p> </p><p>“The culture of a community is determined by the worst behavior its leaders tolerate. We have to understand (and it's often difficult to do so) how our actions impact those who have less privilege than us, the most marginalized in our community,” they said.</p><p> </p><p>For example, “When thinking of where to host a conference, think about the people in one's community, even those who may be new contributors. Will they be safe in that location?”</p><p> </p><p>Listen to the episode to hear more of The New Stack’s conversation with Black.</p>
]]></description>
      <pubDate>Wed, 13 Jul 2022 18:04:48 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/what-can-the-tech-community-do-to-protect-its-trans-members-RMVQnuu4</link>
      <content:encoded><![CDATA[<p>AUSTIN, TEX. — In one of the most compelling keynote addresses at The Linux Foundation’s Open Source Summit North America, held here in June, <a href="https://www.linkedin.com/in/aevaonline/">Aeva Black</a>, a veteran of the open source community, said that a friend of theirs recently commented that, “I feel like all the trans women I know on Twitter are software developers.”</p><p> </p><p>There’s a reason for that, Black said. It’s called “survivor bias”: The transgender software developers the friend knows on Twitter are only a small sample of the trans kids who survived into adulthood, or didn’t get pushed out of mainstream society.</p><p> </p><p>“It's a pretty common trope, at least on the internet: transwomen are all software developers, we all have high-paying jobs, we're on TikTok or Twitter. And that's really a sampling bias, the transgender people who have the privilege to be loud,” said Black, in this On the Road episode of The New Stack Makers podcast.</p><p> </p><p>Black, whose keynote alerted the conference attendees about how the rights of transgender individuals are under attack around the United States, and the role tech can play, currently works in Microsoft Azure's Office of the Chief Technology Officer and holds seats on the boards of the Open Source Initiative and on the OpenSSF's Technical Advisory Council. 
In this episode of Makers, they unpacked the keynote’s themes with <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a>, TNS features editor.</p><p> </p><p>Pew Research Center data released in June reports that <a href="https://www.pewresearch.org/fact-tank/2022/06/07/about-5-of-young-adults-in-the-u-s-say-their-gender-is-different-from-their-sex-assigned-at-birth/">5% of Americans under 30 identify as transgender or nonbinary</a> — <a href="https://www.worldatlas.com/articles/what-percentage-of-the-world-population-have-red-hair.html">roughly the same percentage that have red hair</a>.</p><p> </p><p>The Pew study and the latest <a href="https://survey.stackoverflow.co/2022/#demographics-trans-learn">Stack Overflow Developer Survey</a> reveal that younger people are more likely than their elders to claim a transgender or nonbinary identity. Failure to accept these people, Black said, could have an impact on open source work, and tech work more generally.</p><p> </p><p>“If you're managing a project, and you want to attract younger developers who could then pick it up and carry on the work over time, you need to make sure that you're welcoming of all younger developers,” they said.</p><h2>Rethinking Codes of Conduct</h2><p><a href="https://thenewstack.io/open-source-communities-need-more-safe-spaces-and-codes-of-conducts-now/">Codes of Conduct</a>, must-haves for meetups, conferences and open source projects over the past few years, are too often thought of as tools for punishment, Black said in their keynote. For Makers, they advocated for thinking of those codes as tools for community stewardship.</p><p> </p><p>As a former member of the Kubernetes Code of Conduct committee, Black pointed out that “80% of what we did … while I served wasn't punishing people. It was stepping in when there was conflict, when people you know, stepped on someone else's toe, accidentally offended somebody. Like, ‘OK, hang on, let's sort this out.' 
So it was much more stewardship, incident response, mediation.”</p><p> </p><p>LGBT people are currently the targets of new legislation in several U.S. states. The tech world and its community leaders should protect community members who may be vulnerable in this new political climate, Black said.</p><p> </p><p>“The culture of a community is determined by the worst behavior its leaders tolerate. We have to understand (and it's often difficult to do so) how our actions impact those who have less privilege than us, the most marginalized in our community,” they said.</p><p> </p><p>For example, “When thinking of where to host a conference, think about the people in one's community, even those who may be new contributors. Will they be safe in that location?”</p><p> </p><p>Listen to the episode to hear more of The New Stack’s conversation with Black.</p>
]]></content:encoded>
      <enclosure length="9760226" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/c0c751a3-c6a6-4940-8528-62e293bab6b0/audio/00c2d096-30b3-455c-a5bb-9cc9c4554754/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What Can the Tech Community Do to Protect Its Trans Members?</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:10:09</itunes:duration>
      <itunes:summary>AUSTIN, TEX. — In one of the most compelling keynote addresses at The Linux Foundation’s Open Source Summit North America, held here in June, Aeva Black, a veteran of the open source community, said that a friend of theirs recently commented that, “I feel like all the trans women I know on Twitter, are software developers.”

There’s a reason for that, Black said. It’s called “survivor bias”: The transgender software developers the friend knows on Twitter are only a small sample of the trans kids who survived into adulthood, or didn’t get pushed out of mainstream society.

“It&apos;s a pretty common trope, at least on the internet: transwomen are all software developers, we all have high-paying jobs, we&apos;re on TikTok or Twitter. And that&apos;s really a sampling bias, the transgender people who have the privilege to be loud,” said Black, in this On the Road episode of The New Stack Makers podcast.

Black, whose keynote alerted the conference attendees about how the rights of transgender individuals are under attack around the United States, and the role tech can play, currently works in Microsoft Azure&apos;s Office of the Chief Technology Officer and holds seats on the boards of the Open Source Initiative and on the OpenSSF&apos;s Technical Advisory Council. In this episode of Makers, they unpacked the keynote’s themes with Heather Joslyn, TNS features editor.</itunes:summary>
      <itunes:subtitle>AUSTIN, TEX. — In one of the most compelling keynote addresses at The Linux Foundation’s Open Source Summit North America, held here in June, Aeva Black, a veteran of the open source community, said that a friend of theirs recently commented that, “I feel like all the trans women I know on Twitter, are software developers.”

There’s a reason for that, Black said. It’s called “survivor bias”: The transgender software developers the friend knows on Twitter are only a small sample of the trans kids who survived into adulthood, or didn’t get pushed out of mainstream society.

“It&apos;s a pretty common trope, at least on the internet: transwomen are all software developers, we all have high-paying jobs, we&apos;re on TikTok or Twitter. And that&apos;s really a sampling bias, the transgender people who have the privilege to be loud,” said Black, in this On the Road episode of The New Stack Makers podcast.

Black, whose keynote alerted the conference attendees about how the rights of transgender individuals are under attack around the United States, and the role tech can play, currently works in Microsoft Azure&apos;s Office of the Chief Technology Officer and holds seats on the boards of the Open Source Initiative and on the OpenSSF&apos;s Technical Advisory Council. In this episode of Makers, they unpacked the keynote’s themes with Heather Joslyn, TNS features editor.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1336</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c0b0cfd5-d250-47cc-a798-f9c010b957d1</guid>
      <title>What’s Next in WebAssembly?</title>
      <description><![CDATA[<p>AUSTIN, TEX. — What’s the future of <a href="https://thenewstack.io/what-is-webassembly/">WebAssembly</a> — Wasm, to its friends — the binary instruction format for a stack-based virtual machine that allows developers to build in their favorite programming language and run their code anywhere?</p><p>For <a href="https://www.linkedin.com/in/mattbutcher/">Matt Butcher</a>, CEO and founder of Fermyon Technologies, the future of Wasm lies in running it outside of the browser and running it inside of everything, from proxy servers to video games.</p><p>And, he added, “the really exciting part is being able to run it in the cloud, as well as a cloud service alongside like virtual machines and <a href="https://thenewstack.io/category/containers/">containers</a>.”</p><p>For this On the Road episode of The New Stack Makers podcast, Butcher was interviewed by <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a>, features editor of TNS.</p><p>With key programming languages like Ruby, Python and C# adding support for WebAssembly’s new capabilities, Wasm is gaining critical mass, Butcher said.</p><p>“What we're talking about now is the realization of the potential that's been around in WebAssembly for a long time. But as people get excited, and open source projects start to adopt it, then what we're seeing now is like the beginning of the tidal wave.”</p><p>But before widespread adoption can happen, Butcher said, there’s still work to be done in preparing the environment for the next wave of Wasm: <a href="https://thenewstack.io/how-webassembly-could-streamline-cloud-native-computing/">cloud computing</a>.</p><p>Along with other members of the <a href="https://bytecodealliance.org/">Bytecode Alliance</a>, such as <a href="https://thenewstack.io/what-makes-wasm-different/">Cosmonic</a>, Fastly and Intel, Fermyon is working to improve the developer experience and environment this year. 
The next step, he added, is to “start to build this first wave of applications that really highlight where it can happen for us.”</p><p>The rise of Wasm represents a new era in cloud native technology, Butcher noted. “We love containers. Many of us have been involved in the <a href="/category/kubernetes/">Kubernetes</a> ecosystem for years and years. I built <a href="https://helm.sh/">Helm</a> originally; that's still, in a way, my baby.</p><p>“But also we're excited because now we're finding solutions to some problems that we didn't see get solved in the container ecosystem. And that's why we talk about it as sort of like the next wave.”</p><h2>Wasm and a ‘Frictionless’ Dev Experience</h2><p>Fermyon introduced its <a href="https://www.fermyon.com/platform">“frictionless” WebAssembly platform</a> in June here at The Linux Foundation’s Open Source Summit North America. The platform, built on technologies including HashiCorp’s Nomad and Consul, enables the writing of microservices and web applications. Fermyon’s open source tool, <a href="https://github.com/fermyon/spin">Spin</a>, helps developers push apps from their local dev environments into the Fermyon platform.</p><p>One aspect of Wasm’s future that Butcher highlighted in our Makers discussion is how it can be scalable while also remaining lightweight in terms of the cloud resources it consumes.</p><p>“Along with creating this great developer experience in a secure platform, we're also going to help people save money on their cloud costs, because cloud costs have just kind of ballooned out of control,” he said.</p><p>“If we can be really mindful of the resources we use, and help the developer understand what it means to write code that can be nimble, and can be light on resource usage. 
The real objective is to make it so when they write code, it just happens to have those characteristics.”</p><p>For those interested in taking WebAssembly for a spin, Fermyon has created an online game called <a href="https://www.finickywhiskers.com/index.html">Finicky Whiskers</a>, intended to show how <a href="/category/microservices/">microservices</a> can be reimagined with Wasm.</p>
]]></description>
      <pubDate>Tue, 12 Jul 2022 18:25:48 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/whats-next-in-webassembly-UL8gGbLK</link>
      <content:encoded><![CDATA[<p>AUSTIN, TEX. — What’s the future of <a href="https://thenewstack.io/what-is-webassembly/">WebAssembly</a> — Wasm, to its friends — the binary instruction format for a stack-based virtual machine that allows developers to build in their favorite programming language and run their code anywhere?</p><p>For <a href="https://www.linkedin.com/in/mattbutcher/">Matt Butcher</a>, CEO and founder of Fermyon Technologies, the future of Wasm lies in running it outside of the browser and running it inside of everything, from proxy servers to video games.</p><p>And, he added, “the really exciting part is being able to run it in the cloud, as well as a cloud service alongside like virtual machines and <a href="https://thenewstack.io/category/containers/">containers</a>.”</p><p>For this On the Road episode of The New Stack Makers podcast, Butcher was interviewed by <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a>, features editor of TNS.</p><p>With key programming languages like Ruby, Python and C# adding support for WebAssembly’s new capabilities, Wasm is gaining critical mass, Butcher said.</p><p>“What we're talking about now is the realization of the potential that's been around in WebAssembly for a long time. But as people get excited, and open source projects start to adopt it, then what we're seeing now is like the beginning of the tidal wave.”</p><p>But before widespread adoption can happen, Butcher said, there’s still work to be done in preparing the environment for the next wave of Wasm: <a href="https://thenewstack.io/how-webassembly-could-streamline-cloud-native-computing/">cloud computing</a>.</p><p>Along with other members of the <a href="https://bytecodealliance.org/">Bytecode Alliance</a>, such as <a href="https://thenewstack.io/what-makes-wasm-different/">Cosmonic</a>, Fastly and Intel, Fermyon is working to improve the developer experience and environment this year. 
The next step, he added, is to “start to build this first wave of applications that really highlight where it can happen for us.”</p><p>The rise of Wasm represents a new era in cloud native technology, Butcher noted. “We love containers. Many of us have been involved in the <a href="/category/kubernetes/">Kubernetes</a> ecosystem for years and years. I built <a href="https://helm.sh/">Helm</a> originally; that's still, in a way, my baby.</p><p>“But also we're excited because now we're finding solutions to some problems that we didn't see get solved in the container ecosystem. And that's why we talk about it as sort of like the next wave.”</p><h2>Wasm and a ‘Frictionless’ Dev Experience</h2><p>Fermyon introduced its <a href="https://www.fermyon.com/platform">“frictionless” WebAssembly platform</a> in June here at The Linux Foundation’s Open Source Summit North America. The platform, built on technologies including HashiCorp’s Nomad and Consul, enables the writing of microservices and web applications. Fermyon’s open source tool, <a href="https://github.com/fermyon/spin">Spin</a>, helps developers push apps from their local dev environments into the Fermyon platform.</p><p>One aspect of Wasm’s future that Butcher highlighted in our Makers discussion is how it can be scalable while also remaining lightweight in terms of the cloud resources it consumes.</p><p>“Along with creating this great developer experience in a secure platform, we're also going to help people save money on their cloud costs, because cloud costs have just kind of ballooned out of control,” he said.</p><p>“If we can be really mindful of the resources we use, and help the developer understand what it means to write code that can be nimble, and can be light on resource usage. 
The real objective is to make it so when they write code, it just happens to have those characteristics.”</p><p>For those interested in taking WebAssembly for a spin, Fermyon has created an online game called <a href="https://www.finickywhiskers.com/index.html">Finicky Whiskers</a>, intended to show how <a href="/category/microservices/">microservices</a> can be reimagined with Wasm.</p>
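<p>For illustration only (this sketch is ours, not from the episode), a minimal Spin application manifest looks roughly like the following; the app name, component id and module path are hypothetical:</p>

```toml
# spin.toml — hypothetical minimal manifest for a Spin HTTP application
spin_manifest_version = "1"
name = "hello-wasm"
version = "0.1.0"
# The HTTP trigger routes incoming requests to Wasm components
trigger = { type = "http", base = "/" }

[[component]]
id = "hello"
# Compiled Wasm module, e.g. built from Rust via `spin build`
source = "target/wasm32-wasi/release/hello_wasm.wasm"
[component.trigger]
route = "/..."
```

<p>With a manifest like this, <code>spin build</code> compiles the component and <code>spin up</code> runs it locally before the app is pushed to the Fermyon platform.</p>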
]]></content:encoded>
      <enclosure length="12994813" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/c507e012-1a02-4c0e-ae3a-bb84bf39e4e1/audio/6a08cdc5-972a-41b4-9c13-75758d1a5cf5/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What’s Next in WebAssembly?</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:13:32</itunes:duration>
      <itunes:summary>AUSTIN, TEX. — What’s the future of WebAssembly — Wasm, to its friends — the binary instruction format for a stack-based virtual machine that allows developers to build in their favorite programming language and run their code anywhere?

For Matt Butcher, CEO and founder of Fermyon Technologies, the future of Wasm lies in running it outside of the browser and running it inside of everything, from proxy servers to video games.

And, he added, “the really exciting part is being able to run it in the cloud, as well as a cloud service alongside like virtual machines and containers.”

For this On the Road episode of The New Stack Makers podcast, Butcher was interviewed by Heather Joslyn, features editor of TNS.

</itunes:summary>
      <itunes:subtitle>AUSTIN, TEX. — What’s the future of WebAssembly — Wasm, to its friends — the binary instruction format for a stack-based virtual machine that allows developers to build in their favorite programming language and run their code anywhere?

For Matt Butcher, CEO and founder of Fermyon Technologies, the future of Wasm lies in running it outside of the browser and running it inside of everything, from proxy servers to video games.

And, he added, “the really exciting part is being able to run it in the cloud, as well as a cloud service alongside like virtual machines and containers.”

For this On the Road episode of The New Stack Makers podcast, Butcher was interviewed by Heather Joslyn, features editor of TNS.

</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1335</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">220b3023-a030-434c-bf8d-cae44ea2b4bf</guid>
      <title>What Makes Wasm Different</title>
      <description><![CDATA[<p>VALENCIA, Spain — WebAssembly (Wasm) is among the hottest topics under the CNCF project umbrella. In this episode of The New Stack Makers podcast, recorded on the show floor of <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/">KubeCon + CloudNativeCon Europe 2022</a>, <a href="https://www.linkedin.com/in/hectaman">Liam Randall</a>, CEO and co-founder, Cosmonic, and <a href="https://www.linkedin.com/in/colin-murphy-08b3601b/">Colin Murphy</a>, senior software engineer, <a href="https://www.adobe.com">Adobe</a>, discuss why Wasm’s future looks bright.</p><p>A quintessential feature of Wasm is that it functions on a CPU level, not unlike Java or Flash. This means, Randall said, that Wasm “can run anywhere.” “Everybody can start using Wasm, which functionally works like a tiny CPU. You can even put WebAssembly inside other applications.”</p><p>The fact that Wasm has a binary format (the .wasm file format) and runs at a CPU level, as C or C++ code does, means it is highly portable. “WebAssembly really is exciting because it gives us two fundamental things that are truly amazing: One is portability across a diverse set of CPUs and architectures, and even portability into other places, like into a web browser,” said Randall. “It also gives us a security model that's portable, and works the same across all of those different landscape settings.”</p><p>This portability makes Wasm an excellent candidate for edge applications. Its inference capabilities for machine learning (ML) at the edge are particularly promising for applications distributed across many different environments, Murphy said. Wasm is also particularly apt for collaboration in ML edge and other applications. “Collaborative experiences are what WebAssembly is really perfectly in position for,” he continued.</p><p>In many ways, the name “WebAssembly” is not intuitively reflective of its meaning. 
“WebAssembly is neither web nor assembly — so, it's a somewhat awkwardly named technology, but a technology that is worth looking into,” Randall said. “There are incredible opportunities for your internal teams to transform the way they do business to save costs and be more secure by adopting this new standard.”</p>
]]></description>
      <pubDate>Thu, 7 Jul 2022 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/what-makes-wasm-different-MHcFjUbv</link>
      <content:encoded><![CDATA[<p>VALENCIA, Spain — WebAssembly (Wasm) is among the hottest topics under the CNCF project umbrella. In this episode of The New Stack Makers podcast, recorded on the show floor of <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/">KubeCon + CloudNativeCon Europe 2022</a>, <a href="https://www.linkedin.com/in/hectaman">Liam Randall</a>, CEO and co-founder, Cosmonic, and <a href="https://www.linkedin.com/in/colin-murphy-08b3601b/">Colin Murphy</a>, senior software engineer, <a href="https://www.adobe.com">Adobe</a>, discuss why Wasm’s future looks bright.</p><p>A quintessential feature of Wasm is that it functions on a CPU level, not unlike Java or Flash. This means, Randall said, that Wasm “can run anywhere.” “Everybody can start using Wasm, which functionally works like a tiny CPU. You can even put WebAssembly inside other applications.”</p><p>The fact that Wasm has a binary format (the .wasm file format) and runs at a CPU level, as C or C++ code does, means it is highly portable. “WebAssembly really is exciting because it gives us two fundamental things that are truly amazing: One is portability across a diverse set of CPUs and architectures, and even portability into other places, like into a web browser,” said Randall. “It also gives us a security model that's portable, and works the same across all of those different landscape settings.”</p><p>This portability makes Wasm an excellent candidate for edge applications. Its inference capabilities for machine learning (ML) at the edge are particularly promising for applications distributed across many different environments, Murphy said. Wasm is also particularly apt for collaboration in ML edge and other applications. “Collaborative experiences are what WebAssembly is really perfectly in position for,” he continued.</p><p>In many ways, the name “WebAssembly” is not intuitively reflective of its meaning. 
“WebAssembly is neither web nor assembly — so, it's a somewhat awkwardly named technology, but a technology that is worth looking into,” Randall said. “There are incredible opportunities for your internal teams to transform the way they do business to save costs and be more secure by adopting this new standard.”</p>
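<p>To make the portability point concrete, here is a small sketch (ours, not from the episode): the same Rust function compiles natively or to a .wasm module (e.g. with <code>rustc --target wasm32-wasi</code>), and the resulting module can then be embedded wherever a Wasm runtime is available:</p>

```rust
// A tiny function exported with a C-style symbol so a Wasm host
// (or any other embedder) can look it up and call it by name.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // The same logic runs natively here, or inside a browser,
    // proxy server or cloud runtime once compiled to .wasm.
    println!("add(2, 3) = {}", add(2, 3)); // prints "add(2, 3) = 5"
}
```

<p>The security-model point follows from the same mechanism: a host only sees the functions a module explicitly exports, regardless of the platform it runs on.</p>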
]]></content:encoded>
      <enclosure length="15731191" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/ad9ff72c-5707-4c05-a3da-b1ba5c2d06df/audio/06e1b4e9-b9a7-42fc-b618-05a3b626a626/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What Makes Wasm Different</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:16:23</itunes:duration>
      <itunes:summary>VALENCIA, Spain — WebAssembly (Wasm) is among the hottest topics under the CNCF project umbrella. In this episode of The New Stack Makers podcast, recorded on the show floor of KubeCon + CloudNativeCon Europe 2022, Liam Randall, CEO and co-founder, Cosmonic, and Colin Murphy, senior software engineer, Adobe, discuss why Wasm’s future looks bright.
</itunes:summary>
      <itunes:subtitle>VALENCIA, Spain — WebAssembly (Wasm) is among the hottest topics under the CNCF project umbrella. In this episode of The New Stack Makers podcast, recorded on the show floor of KubeCon + CloudNativeCon Europe 2022, Liam Randall, CEO and co-founder, Cosmonic, and Colin Murphy, senior software engineer, Adobe, discuss why Wasm’s future looks bright.
</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1334</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">67fff6b9-7852-454c-b4ca-bf79bf7c993d</guid>
      <title>The Social Model of Open Source</title>
      <description><![CDATA[<p>In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, <a href="https://www.linkedin.com/in/juliaferraioli/">Julia Ferraioli</a>, open source technical leader at <a href="https://www.cisco.com">Cisco’s</a> open source programs office, spoke with The New Stack about some alternative ways to define what is and is not ‘open source.’<br /><br />When someone says, well, that’s ‘technically’ open source, it’s usually to be snarky about a project that meets the legal criteria to be open source, but doesn’t follow the spirit of open source. Ferraioli doesn’t think that ‘classic’ open source projects, like Kubernetes or Linux, are the only valid models for open source. She gives the example of a research project — the code might be open sourced specifically so that others can see the code and reproduce the results themselves. However, for the research to remain valid, it can’t accept any contributions.<br /><br />“It’s no less open source than others,” Ferraioli said about the hypothetical research project. “If you break things down by purpose, it’s not always that you’re trying to build the robust community.” The social model of open source, Ferraioli says, is about understanding the different use cases for open source, as well as providing a framework for determining what appropriate success metrics could be depending on what the project’s motivations are. And if you’re just doing a project with friends for laughs, well, quantifying fun isn’t going to be easy.</p>
]]></description>
      <pubDate>Wed, 6 Jul 2022 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-social-model-of-open-source-0CjkLmXJ</link>
      <content:encoded><![CDATA[<p>In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, <a href="https://www.linkedin.com/in/juliaferraioli/">Julia Ferraioli</a>, open source technical leader at <a href="https://www.cisco.com">Cisco’s</a> open source programs office, spoke with The New Stack about some alternative ways to define what is and is not ‘open source.’<br /><br />When someone says, well, that’s ‘technically’ open source, it’s usually to be snarky about a project that meets the legal criteria to be open source, but doesn’t follow the spirit of open source. Ferraioli doesn’t think that ‘classic’ open source projects, like Kubernetes or Linux, are the only valid models for open source. She gives the example of a research project — the code might be open sourced specifically so that others can see the code and reproduce the results themselves. However, for the research to remain valid, it can’t accept any contributions.<br /><br />“It’s no less open source than others,” Ferraioli said about the hypothetical research project. “If you break things down by purpose, it’s not always that you’re trying to build the robust community.” The social model of open source, Ferraioli says, is about understanding the different use cases for open source, as well as providing a framework for determining what appropriate success metrics could be depending on what the project’s motivations are. And if you’re just doing a project with friends for laughs, well, quantifying fun isn’t going to be easy.</p>
]]></content:encoded>
      <enclosure length="11280762" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/d45e521b-3d68-48f9-a1c0-28598fd7059d/audio/719fd7af-a108-48ac-8dd7-b7f59c8065db/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The Social Model of Open Source</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:11:45</itunes:duration>
      <itunes:summary>In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, Julia Ferraioli, open source technical leader at Cisco’s open source programs office, spoke with The New Stack about some alternative ways to define what is and is not ‘open source.’

When someone says, well, that’s ‘technically’ open source, it’s usually to be snarky about a project that meets the legal criteria to be open source, but doesn’t follow the spirit of open source. Ferraioli doesn’t think that ‘classic’ open source projects, like Kubernetes or Linux, are the only valid models for open source. She gives the example of a research project — the code might be open sourced specifically so that others can see the code and reproduce the results themselves. However, for the research to remain valid, it can’t accept any contributions.

“It’s no less open source than others,” Ferraioli said about the hypothetical research project. “If you break things down by purpose, it’s not always that you’re trying to build the robust community.” The social model of open source, Ferraioli says, is about understanding the different use cases for open source, as well as providing a framework for determining what appropriate success metrics could be depending on what the project’s motivations are. And if you’re just doing a project with friends for laughs, well, quantifying fun isn’t going to be easy.</itunes:summary>
      <itunes:subtitle>In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, Julia Ferraioli, open source technical leader at Cisco’s open source programs office, spoke with The New Stack about some alternative ways to define what is and is not ‘open source.’

When someone says, well, that’s ‘technically’ open source, it’s usually to be snarky about a project that meets the legal criteria to be open source, but doesn’t follow the spirit of open source. Ferraioli doesn’t think that ‘classic’ open source projects, like Kubernetes or Linux, are the only valid models for open source. She gives the example of a research project — the code might be open sourced specifically so that others can see the code and reproduce the results themselves. However, for the research to remain valid, it can’t accept any contributions.

“It’s no less open source than others,” Ferraioli said about the hypothetical research project. “If you break things down by purpose, it’s not always that you’re trying to build the robust community.” The social model of open source, Ferraioli says, is about understanding the different use cases for open source, as well as providing a framework for determining what appropriate success metrics could be depending on what the project’s motivations are. And if you’re just doing a project with friends for laughs, well, quantifying fun isn’t going to be easy.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1333</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">21816743-6363-4790-a50c-37ae3d9f1dfc</guid>
      <title>What’s the State of Open Source Security? Don’t Ask.</title>
      <description><![CDATA[<p>AUSTIN, TEX. — How safe is the <a href="https://thenewstack.io/category/cloud-native/">open source</a> software that virtually every organization uses? You might not want to know, <a href="https://openssf.org/blog/2022/06/21/state-of-open-source-security-2022-from-snyk-and-the-linux-foundation/">according to the results of a survey</a> released by The Linux Foundation and Snyk, a cloud native cybersecurity company, at the foundation’s annual <a href="https://events.linuxfoundation.org/open-source-summit-north-america/">Open Source Summit North America</a>, held here in June.</p><p>Forty-one percent of the more than 500 organizations surveyed don’t have high confidence in the security of the open source software they use, according to the research. Only half of participating companies said they have a <a href="https://thenewstack.io/category/security/">security</a> policy that addresses open source.</p><p>Furthermore, it takes more than double the number of days — 98 — to fix a vulnerability compared to what was reported in the 2018 version of the survey.</p><p>The research was conducted at the request of the <a href="https://thenewstack.io/inside-a-150-million-plan-for-open-source-software-security/">Open Source Security Foundation (OpenSSF)</a>, a project of The Linux Foundation. 
For this On the Road episode of The New Stack Makers, <a href="https://www.linkedin.com/in/stephendhendrick/">Steve Hendrick</a>, vice president of research at The Linux Foundation, and <a href="https://www.linkedin.com/in/mattjarvis08">Matt Jarvis</a>, director of developer relations at Snyk, were interviewed by <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a>, features editor at TNS.</p><p>Despite the alarming statistics, Jarvis cautioned against treating all vulnerabilities as four-alarm fires.</p><p>“Having a kind of zero-vulnerability target is probably unrealistic, because not all vulnerabilities are treated equal,” Jarvis said. Some “vulnerabilities” may not necessarily be a risk in your particular environment. It’s best to focus on the most critical threats to your network, applications and data.</p><p>One bright spot in the new report: Nearly one in four respondents said they’re looking for resources to help them keep their open source software — and all that depends on it — safe. Perhaps even more relevant to vendors: 62% of survey participants said they are looking to use more intelligent security-focused tools.</p><p>“There's a lot from a process standpoint that they are responsible for,” said Hendrick. “But they were very quick to jump on the bandwagon and say, we want the vendor community to do a better job at providing us tools, that makes our life a lot easier. Because I think everybody recognizes that solving the security problem is going to require a lot more effort than we're putting into it today.”</p><h2>Jumping on the ‘SBOM Bandwagon’</h2><p>Many organizations still seem confused about which of their open source software’s dependencies are direct and which are transitive (dependencies of dependencies), said Hendrick. 
One of the best ways to clarify things, he said, “is to get on the <a href="https://thenewstack.io/sbom-everywhere-the-openssf-plan-for-sboms/">SBOM bandwagon</a>.”</p><p><a href="https://thenewstack.io/securing-the-software-supply-chain-with-a-software-bill-of-materials/">Understanding an open source tool’s software bill of materials, or SBOM</a>, is “going to give you great understanding of the components, it's going to give you usability, it's going to give you trust, you're gonna be able to know that the components are nonfalsified,” Hendrick said.</p><p>“And so that's all absolutely key from the standpoint of being able to deal with the whole componentization issue that is going on everywhere today.”</p><p>Additional results from the research, in which core project maintainers discussed their best practices, will be released in the third quarter of 2022. Listen to the podcast to learn more about the report’s results and what Linux Foundation is doing to help upskill the IT workforce in cybersecurity.</p>
]]></description>
      <pubDate>Tue, 5 Jul 2022 18:14:18 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/whats-the-state-of-open-source-security-dont-ask-GweoWGf2</link>
      <content:encoded><![CDATA[<p>AUSTIN, TEX. — How safe is the <a href="https://thenewstack.io/category/cloud-native/">open source</a> software that virtually every organization uses? You might not want to know, <a href="https://openssf.org/blog/2022/06/21/state-of-open-source-security-2022-from-snyk-and-the-linux-foundation/">according to the results of a survey</a> released by The Linux Foundation and Snyk, a cloud native cybersecurity company, at the foundation’s annual <a href="https://events.linuxfoundation.org/open-source-summit-north-america/">Open Source Summit North America</a>, held here in June.</p><p>Forty-one percent of the more than 500 organizations surveyed don’t have high confidence in the security of the open source software they use, according to the research. Only half of participating companies said they have a <a href="https://thenewstack.io/category/security/">security</a> policy that addresses open source.</p><p>Furthermore, it takes more than double the number of days — 98 — to fix a vulnerability compared to what was reported in the 2018 version of the survey.</p><p>The research was conducted at the request of the <a href="https://thenewstack.io/inside-a-150-million-plan-for-open-source-software-security/">Open Source Security Foundation (OpenSSF)</a>, a project of The Linux Foundation. 
For this On the Road episode of The New Stack Makers, <a href="https://www.linkedin.com/in/stephendhendrick/">Steve Hendrick</a>, vice president of research at The Linux Foundation, and <a href="https://www.linkedin.com/in/mattjarvis08">Matt Jarvis</a>, director of developer relations at Snyk, were interviewed by <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a>, features editor at TNS.</p><p>Despite the alarming statistics, Jarvis cautioned against treating all vulnerabilities as four-alarm fires.</p><p>“Having a kind of zero-vulnerability target is probably unrealistic, because not all vulnerabilities are treated equal,” Jarvis said. Some “vulnerabilities” may not necessarily be a risk in your particular environment. It’s best to focus on the most critical threats to your network, applications and data.</p><p>One bright spot in the new report: Nearly one in four respondents said they’re looking for resources to help them keep their open source software — and all that depends on it — safe. Perhaps even more relevant to vendors: 62% of survey participants said they are looking to use more intelligent security-focused tools.</p><p>“There's a lot from a process standpoint that they are responsible for,” said Hendrick. “But they were very quick to jump on the bandwagon and say, we want the vendor community to do a better job at providing us tools, that makes our life a lot easier. Because I think everybody recognizes that solving the security problem is going to require a lot more effort than we're putting into it today.”</p><h2>Jumping on the ‘SBOM Bandwagon’</h2><p>Many organizations still seem confused about which of their open source software’s dependencies are direct and which are transitive (dependencies of dependencies), said Hendrick. 
One of the best ways to clarify things, he said, “is to get on the <a href="https://thenewstack.io/sbom-everywhere-the-openssf-plan-for-sboms/">SBOM bandwagon</a>.”</p><p><a href="https://thenewstack.io/securing-the-software-supply-chain-with-a-software-bill-of-materials/">Understanding an open source tool’s software bill of materials, or SBOM</a>, is “going to give you great understanding of the components, it's going to give you usability, it's going to give you trust, you're gonna be able to know that the components are nonfalsified,” Hendrick said.</p><p>“And so that's all absolutely key from the standpoint of being able to deal with the whole componentization issue that is going on everywhere today.”</p><p>Additional results from the research, in which core project maintainers discussed their best practices, will be released in the third quarter of 2022. Listen to the podcast to learn more about the report’s results and what Linux Foundation is doing to help upskill the IT workforce in cybersecurity.</p>
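<p>As a hedged illustration (not from the episode), an SBOM is simply a structured inventory of components; a minimal CycloneDX-style document recording one hypothetical direct dependency might look like:</p>

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "example-http-lib",
      "version": "2.1.0",
      "purl": "pkg:npm/example-http-lib@2.1.0"
    }
  ]
}
```

<p>SBOM generators walk the full dependency graph, so transitive dependencies show up as additional entries in <code>components</code> — which is exactly what makes the direct-versus-transitive question answerable.</p>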
]]></content:encoded>
      <enclosure length="15175306" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/6b5308c7-8cec-4fb0-b590-9d13ef873ccc/audio/fab8092d-af12-4b0d-b27d-fb102f38f3db/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>What’s the State of Open Source Security? Don’t Ask.</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:15:48</itunes:duration>
      <itunes:summary>AUSTIN, TEX. — How safe is the open source software that virtually every organization uses? You might not want to know, according to the results of a survey released by The Linux Foundation and Snyk, a cloud native cybersecurity company, at the foundation’s annual Open Source Summit North America, held here in June.

Furthermore, it takes more than double the number of days — 98 — to fix a vulnerability compared to what was reported in the 2018 version of the survey.

The research was conducted at the request of the Open Source Security Foundation (OpenSSF), a project of The Linux Foundation. For this On the Road episode of The New Stack Makers, Steve Hendrick, vice president of research at The Linux Foundation, and Matt Jarvis, director of developer relations at Snyk, were interviewed by Heather Joslyn, features editor at TNS.

Forty-one percent of the more than 500 organizations surveyed don’t have high confidence in the security of the open source software they use, according to the research. Only half of participating companies said they have a security policy that addresses open source.</itunes:summary>
      <itunes:subtitle>AUSTIN, TEX. — How safe is the open source software that virtually every organization uses? You might not want to know, according to the results of a survey released by The Linux Foundation and Snyk, a cloud native cybersecurity company, at the foundation’s annual Open Source Summit North America, held here in June.

Furthermore, it takes more than double the number of days — 98 — to fix a vulnerability compared to what was reported in the 2018 version of the survey.

The research was conducted at the request of the Open Source Security Foundation (OpenSSF), a project of The Linux Foundation. For this On the Road episode of The New Stack Makers, Steve Hendrick, vice president of research at The Linux Foundation, and Matt Jarvis, director of developer relations at Snyk, were interviewed by Heather Joslyn, features editor at TNS.

Forty-one percent of the more than 500 organizations surveyed don’t have high confidence in the security of the open source software they use, according to the research. Only half of participating companies said they have a security policy that addresses open source.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1332</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9e2b0862-740e-4493-9559-3c2bb7da12ac</guid>
      <title>A Boom in Open Source Jobs Is Here. But Who Will Fill Them?</title>
      <description><![CDATA[<p>AUSTIN, TEX. — Forty-one percent of organizations in a new survey said <a href="https://thenewstack.io/companies-are-hiring-open-source-devs-but-skills-are-rare/">they expect to increase hiring for open source roles this year</a>. But the study, released in June by the Linux Foundation and online learning platform edX during the foundation’s <a href="https://events.linuxfoundation.org/open-source-summit-north-america/">Open Source Summit North America</a>, also found that 93% of employers surveyed said they struggle to find the talent to fill those roles.</p><p>At the Austin summit, The New Stack’s Makers podcast sat down with <a href="https://www.linkedin.com/in/hilarycartermsc">Hilary Carter</a>, vice president for research at the Linux Foundation, who oversaw the study. She was interviewed for this On the Road edition of Makers by <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a>, features editor at The New Stack.</p><p>“I think it's a very good time to be an open source developer, I think they hold all the cards right now,” Carter said. “And the fact that demand outstrips supply is nothing short of favorable for open source developers, to carry a bit of a big stick and make more demands and advocate for their improved work environments, for increased pay.”</p><p>But even sought-after developers are feeling a bit anxious about keeping pace with the cloud native ecosystem’s constant growth and change. The open source jobs study found that roughly three out of four open source developers said they need more <a href="https://thenewstack.io/category/security/">cybersecurity</a> training, up from about two-thirds in 2021’s version of the report.</p><p>“<a href="https://thenewstack.io/inside-a-150-million-plan-for-open-source-software-security/">Security is the problem of the day</a> that I think the whole community is acutely aware of, and highly focused on, and we need the talent, we need the skills,” Carter said. 
“And we need the resources to come together to solve the challenge of <a href="https://thenewstack.io/the-challenges-of-securing-the-open-source-supply-chain/">creating more secure software supply chains.</a>”</p><p>Carter also told the Makers audience about the role open source program offices, or OSPOs, can play in nurturing in-house open source talent, the impact a potential recession may have (or not have) on the tech job market, and new surveys in the works at Linux Foundation to essentially map the open source community outside of North America.</p><p>Its first study, of Europe’s open source communities, is slated to be released in September at <a href="https://events.linuxfoundation.org/open-source-summit-europe/">Open Source Summit Europe</a>, in Dublin. Linux Foundation Research is currently fielding its annual survey of OSPOs; <a href="https://www.research.net/r/FF3GMWQ">you can participate here</a>. It is also working with the Cloud Native Computing Foundation on its annual survey of cloud native adoption trends. <a href="https://www.research.net/r/T6D29LS">You can participate in that survey here</a>.</p>
]]></description>
      <pubDate>Fri, 1 Jul 2022 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/a-boom-in-open-source-jobs-is-here-but-who-will-fill-them-FIvPsZWF</link>
      <content:encoded><![CDATA[<p>AUSTIN, TEX. — Forty-one percent of organizations in a new survey said <a href="https://thenewstack.io/companies-are-hiring-open-source-devs-but-skills-are-rare/">they expect to increase hiring for open source roles this year</a>. But the study, released in June by the Linux Foundation and online learning platform edX during the foundation’s <a href="https://events.linuxfoundation.org/open-source-summit-north-america/">Open Source Summit North America</a>, also found that 93% of employers surveyed said they struggle to find the talent to fill those roles.</p><p>At the Austin summit, The New Stack’s Makers podcast sat down with <a href="https://www.linkedin.com/in/hilarycartermsc">Hilary Carter</a>, vice president for research at the Linux Foundation, who oversaw the study. She was interviewed for this On the Road edition of Makers by <a href="https://thenewstack.io/author/hjoslyn/">Heather Joslyn</a>, features editor at The New Stack.</p><p>“I think it's a very good time to be an open source developer, I think they hold all the cards right now,” Carter said. “And the fact that demand outstrips supply is nothing short of favorable for open source developers, to carry a bit of a big stick and make more demands and advocate for their improved work environments, for increased pay.”</p><p>But even sought-after developers are feeling a bit anxious about keeping pace with the cloud native ecosystem’s constant growth and change. The open source jobs study found that roughly three out of four open source developers said they need more <a href="https://thenewstack.io/category/security/">cybersecurity</a> training, up from about two-thirds in 2021’s version of the report.</p><p>“<a href="https://thenewstack.io/inside-a-150-million-plan-for-open-source-software-security/">Security is the problem of the day</a> that I think the whole community is acutely aware of, and highly focused on, and we need the talent, we need the skills,” Carter said. 
“And we need the resources to come together to solve the challenge of <a href="https://thenewstack.io/the-challenges-of-securing-the-open-source-supply-chain/">creating more secure software supply chains.</a>”</p><p>Carter also told the Makers audience about the role open source program offices, or OSPOs, can play in nurturing in-house open source talent, the impact a potential recession may have (or not have) on the tech job market, and new surveys in the works at the Linux Foundation to essentially map the open source community outside of North America.</p><p>The first such study, of Europe’s open source communities, is slated to be released in September at <a href="https://events.linuxfoundation.org/open-source-summit-europe/">Open Source Summit Europe</a>, in Dublin. Linux Foundation Research is currently fielding its annual survey of OSPOs; <a href="https://www.research.net/r/FF3GMWQ">you can participate here</a>. It is also working with the Cloud Native Computing Foundation on its annual survey of cloud native adoption trends. <a href="https://www.research.net/r/T6D29LS">You can participate in that survey here</a>.</p>
]]></content:encoded>
      <enclosure length="12337363" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/c75ee750-c6fb-49d4-ae80-22ebfdf21a78/audio/461fb742-16ee-4a0a-b252-2919daa4f653/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>A Boom in Open Source Jobs Is Here. But Who Will Fill Them?</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:12:51</itunes:duration>
      <itunes:summary>AUSTIN, TEX. — Forty-one percent of organizations in a new survey said they expect to increase hiring for open source roles this year. But the study, released in June by the Linux Foundation and online learning platform edX during the foundation’s Open Source Summit North America, also found that 93% of employers surveyed said they struggle to find the talent to fill those roles.

At the Austin summit, The New Stack’s Makers podcast sat down with Hilary Carter, vice president for research at the Linux Foundation, who oversaw the study. She was interviewed for this On the Road edition of Makers by Heather Joslyn, features editor at The New Stack.</itunes:summary>
      <itunes:subtitle>AUSTIN, TEX. — Forty-one percent of organizations in a new survey said they expect to increase hiring for open source roles this year. But the study, released in June by the Linux Foundation and online learning platform edX during the foundation’s Open Source Summit North America, also found that 93% of employers surveyed said they struggle to find the talent to fill those roles.

At the Austin summit, The New Stack’s Makers podcast sat down with Hilary Carter, vice president for research at the Linux Foundation, who oversaw the study. She was interviewed for this On the Road edition of Makers by Heather Joslyn, features editor at The New Stack.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1331</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">92d84c8f-f1c0-4fb0-bd8c-9d623cc38809</guid>
      <title>Economic Uncertainty and the Open Source Ecosystem</title>
      <description><![CDATA[<p>In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, <a href="https://www.linkedin.com/in/myonk/">Matt Yonkovit</a>, Head of Open Source at <a href="https://www.percona.com">Percona</a>, shared his thoughts on how economic uncertainty could affect the open source ecosystem. <br /><br />Open source, of course, is free. So what role does the economy play in whether or not open source software is contributed to, downloaded and used in production? “Generally, open source is considered a bit recession proof,” Yonkovit said. But that doesn’t mean that things won’t change. Over the past several years, the number of open source companies has increased dramatically, and the amount of funding sloshing around in the ecosystem has been huge. That might change. <br /><br />And if the funding situation does change? “I think the big differentiator for a lot of people in the open source space is going to be the communities,” Yonkovit said. When we talk about having ‘backing,’ it’s usually in reference to financial investors, but in open source the backing of a community is just as important. In the absence of deep pockets, a community of people who believe in the project can help it survive — and show that the idea is really solid. <br /><br />If you look back at the history of open source, Yonkovit said, it’s about people having an idea that inspires other people to contribute to make it a reality. Sometimes those ideas aren’t commercially viable, even in the best of times — even if they do get widespread adoption. The only thing that’s changing now is that financial investors are going to be a bit more picky in making sure the projects they fund aren’t just inspirational ideas, but also are commercially viable.</p>
]]></description>
      <pubDate>Thu, 30 Jun 2022 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack Podcast)</author>
      <link>https://thenewstack.simplecast.com/episodes/economic-uncertainty-and-the-open-source-ecosystem-THxtM8zV</link>
      <content:encoded><![CDATA[<p>In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, <a href="https://www.linkedin.com/in/myonk/">Matt Yonkovit</a>, Head of Open Source at <a href="https://www.percona.com">Percona</a>, shared his thoughts on how economic uncertainty could affect the open source ecosystem. <br /><br />Open source, of course, is free. So what role does the economy play in whether or not open source software is contributed to, downloaded and used in production? “Generally, open source is considered a bit recession proof,” Yonkovit said. But that doesn’t mean that things won’t change. Over the past several years, the number of open source companies has increased dramatically, and the amount of funding sloshing around in the ecosystem has been huge. That might change. <br /><br />And if the funding situation does change? “I think the big differentiator for a lot of people in the open source space is going to be the communities,” Yonkovit said. When we talk about having ‘backing,’ it’s usually in reference to financial investors, but in open source the backing of a community is just as important. In the absence of deep pockets, a community of people who believe in the project can help it survive — and show that the idea is really solid. <br /><br />If you look back at the history of open source, Yonkovit said, it’s about people having an idea that inspires other people to contribute to make it a reality. Sometimes those ideas aren’t commercially viable, even in the best of times — even if they do get widespread adoption. The only thing that’s changing now is that financial investors are going to be a bit more picky in making sure the projects they fund aren’t just inspirational ideas, but also are commercially viable.</p>
]]></content:encoded>
      <enclosure length="13802310" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/cbf3903d-ef39-41a3-9a1d-63186f4d0d5f/audio/c1a4339a-7cbc-4e37-b580-4c398df5d8f6/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Economic Uncertainty and the Open Source Ecosystem</itunes:title>
      <itunes:author>The New Stack Podcast</itunes:author>
      <itunes:duration>00:14:22</itunes:duration>
      <itunes:summary>In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, Matt Yonkovit, Head of Open Source at Percona, shared his thoughts on how economic uncertainty could affect the open source ecosystem.

Open source, of course, is free. So what role does the economy play in whether or not open source software is contributed to, downloaded and used in production? “Generally, open source is considered a bit recession proof,” Yonkovit said. But that doesn’t mean that things won’t change. Over the past several years, the number of open source companies has increased dramatically, and the amount of funding sloshing around in the ecosystem has been huge. That might change.

And if the funding situation does change? “I think the big differentiator for a lot of people in the open source space is going to be the communities,” Yonkovit said. When we talk about having ‘backing,’ it’s usually in reference to financial investors, but in open source the backing of a community is just as important. In the absence of deep pockets, a community of people who believe in the project can help it survive — and show that the idea is really solid.

If you look back at the history of open source, Yonkovit said, it’s about people having an idea that inspires other people to contribute to make it a reality. Sometimes those ideas aren’t commercially viable, even in the best of times — even if they do get widespread adoption. The only thing that’s changing now is that financial investors are going to be a bit more picky in making sure the projects they fund aren’t just inspirational ideas, but also are commercially viable.</itunes:summary>
      <itunes:subtitle>In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, Matt Yonkovit, Head of Open Source at Percona, shared his thoughts on how economic uncertainty could affect the open source ecosystem.

Open source, of course, is free. So what role does the economy play in whether or not open source software is contributed to, downloaded and used in production? “Generally, open source is considered a bit recession proof,” Yonkovit said. But that doesn’t mean that things won’t change. Over the past several years, the number of open source companies has increased dramatically, and the amount of funding sloshing around in the ecosystem has been huge. That might change.

And if the funding situation does change? “I think the big differentiator for a lot of people in the open source space is going to be the communities,” Yonkovit said. When we talk about having ‘backing,’ it’s usually in reference to financial investors, but in open source the backing of a community is just as important. In the absence of deep pockets, a community of people who believe in the project can help it survive — and show that the idea is really solid.

If you look back at the history of open source, Yonkovit said, it’s about people having an idea that inspires other people to contribute to make it a reality. Sometimes those ideas aren’t commercially viable, even in the best of times — even if they do get widespread adoption. The only thing that’s changing now is that financial investors are going to be a bit more picky in making sure the projects they fund aren’t just inspirational ideas, but also are commercially viable.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1329</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e178e9db-e10d-4d58-b22f-e83722241730</guid>
      <title>Inside a $150 Million Plan for Open Source Software Security</title>
      <description><![CDATA[<p>AUSTIN, TEX. — Everyone uses open source software — and it’s become increasingly apparent that <a href="https://thenewstack.io/log4shell-we-are-in-so-much-trouble/">not nearly enough attention has been paid to the security of that software</a>. In a survey released by The Linux Foundation and Snyk at the foundation’s Open Source Summit in Austin, Tex., this month, 41% of organizations said they aren’t confident in the <a href="/category/security/">security</a> of the open source software they use.</p><p>At the Austin event, The New Stack’s Makers podcast sat down with <a href="https://www.linkedin.com/in/brianbehlendorf/">Brian Behlendorf</a>, general manager of the Open Source Security Foundation (OpenSSF), to talk about a new plan to attack the problem from multiple angles. He was interviewed for this On the Road edition of Makers by <a href="/author/hjoslyn/">Heather Joslyn</a>, features editor at The New Stack.</p><p>Behlendorf, who has led OpenSSF since October and serves on the boards of the Electronic Frontier Foundation and the Mozilla Foundation, cited the discovery of the Log4j vulnerabilities late in 2021, and other recent security “earthquakes,” as key turning points. “I think the software industry this year really woke up to not only the fact these earthquakes were happening,” he said, “and how it's getting more and more expensive to recover from them.”</p><p>The <a href="https://openssf.org/oss-security-mobilization-plan/">Open Source Security Mobilization Plan</a> sprang from an open source security summit in May. 
It identifies 10 areas that will be targeted for attention, according to the report published by OpenSSF and the Linux Foundation:</p><ul><li>Security education.</li><li>Risk assessment.</li><li>Digital signatures, such as through the <a href="/kubernetes-adopts-sigstore-for-supply-chain-security/">open source Sigstore project</a>.</li><li>Memory safety.</li><li>Incident response.</li><li>Better scanning.</li><li>Code audits.</li><li>Data sharing.</li><li>Improved <a href="/the-challenges-of-securing-the-open-source-supply-chain/">software supply chains</a>.</li><li><a href="/sbom-everywhere-the-openssf-plan-for-sboms/">Software bills of materials (SBOMs) everywhere</a>.</li></ul><p>The price tag for these initiatives over the initial two years is expected to total $150 million, Behlendorf told our Makers audience.</p><p>The plan was sparked by queries from the White House about the various initiatives underway to improve open source software security — what they would cost, and the time frame the solution-builders had in mind. “We couldn't really answer that without being able to say, well, what would it take if we were to invest?” Behlendorf said. “Because most of the time we sit there, we wait for folks to show up and hope for the best.”</p><p>The ultimate price tag, he said, was much lower than he expected it would be. Various member organizations within OpenSSF, he said, have pledged funding. “The 150 was really an estimate. And these plans are still being refined,” Behlendorf said. But by stating specific steps and their costs, he is confident that interested parties will make good on those pledges when the time comes.</p><p>Listen to the podcast to get more details about the Open Source Security Mobilization Plan.</p>
]]></description>
      <pubDate>Tue, 28 Jun 2022 22:13:39 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/inside-a-150-million-plan-for-open-source-software-security-tD_se_A3</link>
      <content:encoded><![CDATA[<p>AUSTIN, TEX. — Everyone uses open source software — and it’s become increasingly apparent that <a href="https://thenewstack.io/log4shell-we-are-in-so-much-trouble/">not nearly enough attention has been paid to the security of that software</a>. In a survey released by The Linux Foundation and Snyk at the foundation’s Open Source Summit in Austin, Tex., this month, 41% of organizations said they aren’t confident in the <a href="/category/security/">security</a> of the open source software they use.</p><p>At the Austin event, The New Stack’s Makers podcast sat down with <a href="https://www.linkedin.com/in/brianbehlendorf/">Brian Behlendorf</a>, general manager of the Open Source Security Foundation (OpenSSF), to talk about a new plan to attack the problem from multiple angles. He was interviewed for this On the Road edition of Makers by <a href="/author/hjoslyn/">Heather Joslyn</a>, features editor at The New Stack.</p><p>Behlendorf, who has led OpenSSF since October and serves on the boards of the Electronic Frontier Foundation and the Mozilla Foundation, cited the discovery of the Log4j vulnerabilities late in 2021, and other recent security “earthquakes,” as key turning points. “I think the software industry this year really woke up to not only the fact these earthquakes were happening,” he said, “and how it's getting more and more expensive to recover from them.”</p><p>The <a href="https://openssf.org/oss-security-mobilization-plan/">Open Source Security Mobilization Plan</a> sprang from an open source security summit in May. 
It identifies 10 areas that will be targeted for attention, according to the report published by OpenSSF and the Linux Foundation:</p><ul><li>Security education.</li><li>Risk assessment.</li><li>Digital signatures, such as through the <a href="/kubernetes-adopts-sigstore-for-supply-chain-security/">open source Sigstore project</a>.</li><li>Memory safety.</li><li>Incident response.</li><li>Better scanning.</li><li>Code audits.</li><li>Data sharing.</li><li>Improved <a href="/the-challenges-of-securing-the-open-source-supply-chain/">software supply chains</a>.</li><li><a href="/sbom-everywhere-the-openssf-plan-for-sboms/">Software bills of materials (SBOMs) everywhere</a>.</li></ul><p>The price tag for these initiatives over the initial two years is expected to total $150 million, Behlendorf told our Makers audience.</p><p>The plan was sparked by queries from the White House about the various initiatives underway to improve open source software security — what they would cost, and the time frame the solution-builders had in mind. “We couldn't really answer that without being able to say, well, what would it take if we were to invest?” Behlendorf said. “Because most of the time we sit there, we wait for folks to show up and hope for the best.”</p><p>The ultimate price tag, he said, was much lower than he expected it would be. Various member organizations within OpenSSF, he said, have pledged funding. “The 150 was really an estimate. And these plans are still being refined,” Behlendorf said. But by stating specific steps and their costs, he is confident that interested parties will make good on those pledges when the time comes.</p><p>Listen to the podcast to get more details about the Open Source Security Mobilization Plan.</p>
]]></content:encoded>
      <enclosure length="12466094" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/25bd06a9-abcf-4015-a5e3-015e3ebb3a23/audio/5bd61cf0-86e0-47c6-a6a0-1af310f16d93/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Inside a $150 Million Plan for Open Source Software Security</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:12:59</itunes:duration>
      <itunes:summary>AUSTIN, TEX. — Everyone uses open source software — and it’s become increasingly apparent that not nearly enough attention has been paid to the security of that software. In a survey released by The Linux Foundation and Snyk at the foundation’s Open Source Summit in Austin, Tex., this month, 41% of organizations said they aren’t confident in the security of the open source software they use.

At the Austin event, The New Stack’s Makers podcast sat down with Brian Behlendorf, general manager of the Open Source Security Foundation (OpenSSF), to talk about a new plan to attack the problem from multiple angles. He was interviewed for this On the Road edition of Makers by Heather Joslyn, features editor at The New Stack.</itunes:summary>
      <itunes:subtitle>AUSTIN, TEX. — Everyone uses open source software — and it’s become increasingly apparent that not nearly enough attention has been paid to the security of that software. In a survey released by The Linux Foundation and Snyk at the foundation’s Open Source Summit in Austin, Tex., this month, 41% of organizations said they aren’t confident in the security of the open source software they use.

At the Austin event, The New Stack’s Makers podcast sat down with Brian Behlendorf, general manager of the Open Source Security Foundation (OpenSSF), to talk about a new plan to attack the problem from multiple angles. He was interviewed for this On the Road edition of Makers by Heather Joslyn, features editor at The New Stack.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1330</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b86b1bee-cb57-45a7-8c7e-41d282a63a6a</guid>
      <title>Counting on Developers to Lead Vodafone’s Transformation Journey</title>
      <description><![CDATA[<p>British telecommunications provider Vodafone, which owns and operates networks in over 20 countries and is on a journey to become a tech company focused on digital services, plans to hire thousands of software engineers and developers who can help put the company on the cloud-native track and utilize its network through APIs.</p><p>In this episode of The New Stack Makers podcast at <a href="https://www.mongodb.com/world-2022">MongoDB World 2022</a> in New York City, <a href="https://www.linkedin.com/in/lloydwoodroffe/?originalSubdomain=uk">Lloyd Woodroffe</a>, Global Product Manager at Vodafone, shares how the company is working with MongoDB on the development of a Telco as a Service (TaaS) platform to help its engineers increase their software development velocity and drive adoption of best-practice automation within DevSecOps pipelines. <a href="https://thenewstack.io/author/alex/">Alex Williams</a>, founder of The New Stack, hosted this podcast.</p><p>Vodafone has built a backbone to keep the business resilient and scalable. But one thing it is looking to do now is innovate and give its developers the freedom and flexibility to develop creatively. “The TaaS platform – which is the product we’re building – is essentially a developer-first framework that allows developers and Vodafone to build things that you think could help the business grow. But because we’re an enterprise, we need security and financial assurance, and TaaS is the framework that allows us to do it in a way that gives developers the tools they need but also the security we need,” said Woodroffe.</p><p>The idea of reuse as part of an inner sourcing model is key as Vodafone scales. 
The company’s key initiative, ‘one source,’ enables its developers to incorporate such a strategy: “We have a single repository across all our markets and teams where you can publish your code, and other teams from other countries can take that code, reuse it, and implement it into their applications,” said Woodroffe. “In terms of outsourcing to the community, our engineers want to start productizing APIs and build new, innovative applications, which we'll see in a bit,” he added.</p><p>“The TaaS developer platform that we’re building with MongoDB acts as our service registry for the platform. When you provision the tools for the developer, we register the organizations, the cost center and guardrails that we’ve set up from a security and finance perspective,” said Woodroffe. “Then we provision MongoDB for the developers to use as their database of choice.”</p><p>“What we'll see ultimately, as the developer has access to these tools [TaaS] and products more, is they'll be able to build new innovations that can be utilized through our network via APIs,” Woodroffe said.</p>
]]></description>
      <pubDate>Tue, 21 Jun 2022 19:51:16 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/counting-on-developers-to-lead-vodafones-transformation-journey-JHzyKGvF</link>
      <content:encoded><![CDATA[<p>British telecommunications provider Vodafone, which owns and operates networks in over 20 countries and is on a journey to become a tech company focused on digital services, plans to hire thousands of software engineers and developers who can help put the company on the cloud-native track and utilize its network through APIs.</p><p>In this episode of The New Stack Makers podcast at <a href="https://www.mongodb.com/world-2022">MongoDB World 2022</a> in New York City, <a href="https://www.linkedin.com/in/lloydwoodroffe/?originalSubdomain=uk">Lloyd Woodroffe</a>, Global Product Manager at Vodafone, shares how the company is working with MongoDB on the development of a Telco as a Service (TaaS) platform to help its engineers increase their software development velocity and drive adoption of best-practice automation within DevSecOps pipelines. <a href="https://thenewstack.io/author/alex/">Alex Williams</a>, founder of The New Stack, hosted this podcast.</p><p>Vodafone has built a backbone to keep the business resilient and scalable. But one thing it is looking to do now is innovate and give its developers the freedom and flexibility to develop creatively. “The TaaS platform – which is the product we’re building – is essentially a developer-first framework that allows developers and Vodafone to build things that you think could help the business grow. But because we’re an enterprise, we need security and financial assurance, and TaaS is the framework that allows us to do it in a way that gives developers the tools they need but also the security we need,” said Woodroffe.</p><p>The idea of reuse as part of an inner sourcing model is key as Vodafone scales. 
The company’s key initiative, ‘one source,’ enables its developers to incorporate such a strategy: “We have a single repository across all our markets and teams where you can publish your code, and other teams from other countries can take that code, reuse it, and implement it into their applications,” said Woodroffe. “In terms of outsourcing to the community, our engineers want to start productizing APIs and build new, innovative applications, which we'll see in a bit,” he added.</p><p>“The TaaS developer platform that we’re building with MongoDB acts as our service registry for the platform. When you provision the tools for the developer, we register the organizations, the cost center and guardrails that we’ve set up from a security and finance perspective,” said Woodroffe. “Then we provision MongoDB for the developers to use as their database of choice.”</p><p>“What we'll see ultimately, as the developer has access to these tools [TaaS] and products more, is they'll be able to build new innovations that can be utilized through our network via APIs,” Woodroffe said.</p>
]]></content:encoded>
      <enclosure length="12928357" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/7f8d4278-fc17-4671-8008-bf00840a43ce/audio/8e56ef1a-ee1c-475d-8f4c-2f0b5aae64fe/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Counting on Developers to Lead Vodafone’s Transformation Journey</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:13:27</itunes:duration>
      <itunes:summary>British telecommunications provider Vodafone, which owns and operates networks in over 20 countries and is on a journey to become a tech company focused on digital services, plans to hire thousands of software engineers and developers who can help put the company on the cloud-native track and utilize its network through APIs.

In this episode of The New Stack Makers podcast at MongoDB World 2022 in New York City, Lloyd Woodroffe, Global Product Manager at Vodafone, shares how the company is working with MongoDB on the development of a Telco as a Service (TaaS) platform to help its engineers increase their software development velocity and drive adoption of best-practice automation within DevSecOps pipelines. Alex Williams, founder of The New Stack, hosted this podcast.</itunes:summary>
      <itunes:subtitle>British telecommunications provider Vodafone, which owns and operates networks in over 20 countries and is on a journey to become a tech company focused on digital services, plans to hire thousands of software engineers and developers who can help put the company on the cloud-native track and utilize its network through APIs.

In this episode of The New Stack Makers podcast at MongoDB World 2022 in New York City, Lloyd Woodroffe, Global Product Manager at Vodafone, shares how the company is working with MongoDB on the development of a Telco as a Service (TaaS) platform to help its engineers increase their software development velocity and drive adoption of best-practice automation within DevSecOps pipelines. Alex Williams, founder of The New Stack, hosted this podcast.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1328</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ee138d0a-7008-465c-86f5-a7676e927f26</guid>
      <title>Pulumi Pursues Polyglotism to Expand Impact of DevOps</title>
      <description><![CDATA[<p>VALENCIA – The goal of DevOps was to break down silos between software development and operations. A side effect, for better or for worse, has been the blurring of lines between dev and ops: the role of the software developer keeps expanding, causing cognitive overload and burnout. This is why the developer tooling market has exploded to automate and assist developers right when and where they need to build, in whatever language they already know.</p><p>In this episode of <a href="https://thenewstack.io/tag/the-new-stack-makers">The New Stack Makers podcast</a>, recorded on the floor of <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/">KubeCon + CloudNativeCon Europe 2022</a>, <a href="https://twitter.com/mattstratton">Matty Stratton</a>, staff developer advocate at Pulumi, talks about the platform’s newly universal approach to Infrastructure as Code and its impact on both dev and ops teams.</p><p>Earlier this May, <a href="https://thenewstack.io/pulumi-takes-infrastructure-as-code-universal-with-crosscode/">Pulumi released updates</a> that took the platform closer to becoming a truly <a href="https://thenewstack.io/pulumi-uses-real-programming-languages-to-enforce-cloud-best-practices/">polyglot way to enforce cloud best practices</a>, including support for:</p><ul><li>The full Java ecosystem</li><li>YAML</li><li>Crosswalk for Amazon Web Services (AWS) in all Pulumi languages</li><li>Deploying AWS Cloud Development Kit (CDK) code in all Pulumi languages</li></ul><p>These are significant updates because they dramatically expand the languages that are available in this low-code way of creating, deploying and managing infrastructure on any cloud.</p><p>"A lot of times, in Infrastructure as Code, we're using a domain-specific 
language using a config file. We call it Infrastructure as Code and are not actually writing any code. So I like to think about Pulumi as Infrastructure as Software." For Stratton, that means writing Pulumi code using a general-purpose programming language, like TypeScript, Python, Go, .NET languages, or now Java. "The great thing about that is, not only do you maybe already know this programming language, because that's the language you use to build your applications, but you're able to use all the things that a programming language has available to it, like conditionals, and loops, and packages, and testing tools, and an IDE [integrated development environment] and a whole ecosystem. So that makes it a lot more powerful, and gives us a lot of great abstractions we can use," he continued.</p><p>Pulumi now follows the low-code development trend where, Stratton says, "We're enabling people to solve a problem with just enough tech," specifically in their common coding language, to limit the tool onboarding needed.</p><p>This is attractive not only to new customers but also as a way to expand Pulumi adoption across organizations without much adaptation of the way they work, just making it easier to work together.</p><p>"I've been part of the DevOps community for a long time. And all that I want to see out of DevOps and all of this work is how do we collaborate better together? How do we be more cross functional?"</p>
]]></description>
      <pubDate>Tue, 21 Jun 2022 18:50:19 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/pulumi-pursues-polyglotism-to-expand-impact-of-devops-JdYtuSBV</link>
      <content:encoded><![CDATA[<p>VALENCIA – The goal of DevOps was to break down silos between software development and operations. A side effect has been the blurring of lines between dev and ops, for better or for worse: the role of the software developer keeps expanding, causing cognitive overload and burnout. This is why the developer tooling market has exploded to automate and assist developers right when and where they need to build, in whatever language they already know.</p><p> </p><p>In this episode of <a href="https://thenewstack.io/tag/the-new-stack-makers">The New Stack Makers podcast</a>, recorded on the floor of <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/">KubeCon + CloudNativeCon Europe 2022</a>, <a href="https://twitter.com/mattstratton">Matty Stratton</a>, staff developer advocate at Pulumi, talks about this newly universal approach to Infrastructure as Code and its impact on both dev and ops teams.</p><p> </p><p>Earlier this May, <a href="https://thenewstack.io/pulumi-takes-infrastructure-as-code-universal-with-crosscode/">Pulumi released updates</a> that took the platform closer to becoming a truly <a href="https://thenewstack.io/pulumi-uses-real-programming-languages-to-enforce-cloud-best-practices/">polyglot way to enforce cloud best practices</a>, including support for:</p><ul><li>The full Java ecosystem</li><li>YAML</li><li>Crosswalk for Amazon Web Services (AWS) in all Pulumi languages</li><li>Deploying the AWS Cloud Development Kit (CDK) in all Pulumi languages</li></ul><p>These are significant updates because they dramatically expand the languages available in this low-code way of creating, deploying and managing infrastructure on any cloud.</p><p> </p><p>"A lot of times, in Infrastructure as Code, we're using a domain-specific language in a config file. We call it Infrastructure as Code and are not actually writing any code. So I like to think about Pulumi as Infrastructure as Software." For Stratton, that means writing Pulumi code in a general-purpose programming language, like TypeScript, Python, Go, .NET languages, or now Java. "The great thing about that is, not only do you maybe already know this programming language, because that's the language you use to build your applications, but you're able to use all the things that a programming language has available to it, like conditionals, and loops, and packages, and testing tools, and an IDE [integrated development environment] and a whole ecosystem. So that makes it a lot more powerful, and gives us a lot of great abstractions we can use," he continued.</p><p> </p><p>Pulumi now follows the low-code development trend where, Stratton says, "We're enabling people to solve a problem with just enough tech," and does so in teams' common coding languages, to limit the tool onboarding needed.</p><p> </p><p>This is attractive not only to new customers but also as a way to expand Pulumi adoption across organizations without much change to how teams work, simply by making it easier to work together.</p><p> </p><p>"I've been part of the DevOps community for a long time. And all that I want to see out of DevOps and all of this work is how do we collaborate better together? How do we be more cross-functional?"</p>
]]></content:encoded>
      <enclosure length="16404106" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/e4097fdd-3d10-4d4f-a2bd-623cd4b71b9c/audio/866d4ee9-ab73-4f60-a643-7aab6f2614d6/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Pulumi Pursues Polyglotism to Expand Impact of DevOps</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:17:05</itunes:duration>
      <itunes:summary>VALENCIA – The goal of DevOps was to break down silos between software development and operations. A side effect has been the blurring of lines between dev and ops, for better or for worse: the role of the software developer keeps expanding, causing cognitive overload and burnout. This is why the developer tooling market has exploded to automate and assist developers right when and where they need to build, in whatever language they already know.

In this episode of The New Stack Makers podcast, recorded on the floor of KubeCon + CloudNativeCon Europe 2022, Matty Stratton, staff developer advocate at Pulumi, talks about this newly universal approach to Infrastructure as Code and its impact on both dev and ops teams.</itunes:summary>
      <itunes:subtitle>VALENCIA – The goal of DevOps was to break down silos between software development and operations. A side effect has been the blurring of lines between dev and ops, for better or for worse: the role of the software developer keeps expanding, causing cognitive overload and burnout. This is why the developer tooling market has exploded to automate and assist developers right when and where they need to build, in whatever language they already know.

In this episode of The New Stack Makers podcast, recorded on the floor of KubeCon + CloudNativeCon Europe 2022, Matty Stratton, staff developer advocate at Pulumi, talks about this newly universal approach to Infrastructure as Code and its impact on both dev and ops teams.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1327</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">15c4d00b-714f-4dff-8097-a72fbd140b08</guid>
      <title>Unlocking the Developer</title>
      <description><![CDATA[<p>Proper tooling is perhaps the primary key to unlocking developer productivity. With the right tools and frameworks, developers can be productive in minutes versus having to toil over boilerplate code. And as data-hungry use cases such as AI and machine learning emerge, data tooling is becoming paramount.</p><p> </p><p>This was evident at the recent <a href="https://www.mongodb.com/world-2022">MongoDB World</a> conference in New York City, where TNS Founder and Publisher Alex Williams recorded this episode of The New Stack Makers podcast featuring <a href="https://www.linkedin.com/in/peggyrayzis/">Peggy Rayzis</a>, senior director of developer experience at Apollo GraphQL; <a href="https://www.linkedin.com/in/leeerob">Lee Robinson</a>, vice president of developer experience at Vercel; <a href="https://www.linkedin.com/in/imassingham/?originalSubdomain=uk">Ian Massingham</a>, vice president of developer relations and community at MongoDB; and <a href="https://twitter.com/sorenbs">Søren Bramer Schmidt</a>, co-founder and CEO of Prisma, discussing how their companies’ offerings help unlock developer productivity.</p><p><h3>Apollo GraphQL and Supergraphs</h3></p><p>Apollo GraphQL unlocks developers by helping them build supergraphs, Rayzis said. A <a href="https://www.apollographql.com/blog/announcement/backend/the-supergraph-a-new-way-to-think-about-graphql/">supergraph</a> is a unified network of a company's data services and capabilities, accessible via a consistent and discoverable place that any developer can reach with a GraphQL query. GraphQL is a query language for communicating about data.</p><p> </p><p>“And what's really great about the supergraph is even though it's unified, it's very modular and incrementally adoptable. So you don't have to like rewrite all of your backend system and APIs,” she said. 
“What's really great about the supergraph is you can connect your legacy infrastructure, like your relational databases, and connect that to a more modern stack, like MongoDB Atlas, for example, or even connect it to a mainframe, as we've seen with some of our customers. And it brings that together in one place that can evolve over time. And we found that it just makes developers so much more productive, helps them shave months off of their development time and create experiences that were impossible before.”</p><p><h3>Vercel: Strong Defaults</h3></p><p>Meanwhile, Robinson touted the virtues of Next.js, Vercel’s popular React-based framework, which provides developers with the tools and the production defaults to make a fast web experience. The goal is to enable frontend developers to move from an idea to a global application in seconds.</p><p> </p><p>Robinson said he believes it’s important for a tool or framework to have good, strong defaults, but also to be extensible, allowing developers to make changes such that they do not necessarily have to eject fully out of the tool that they're using, but can customize without having to leave the framework, library, or tool of their choice.</p><p> </p><p>“If you can provide that great experience for the 90% use case by default, but still allow maybe the extra 10%, you know, the power developer who needs to modify something without having to just rewrite from scratch, you can go pretty far,” he said.</p><p><h3>Data Tooling</h3></p><p>When it comes to data tooling, MongoDB is trying to help developers manipulate and work with data in a more productive and effective way, Massingham said.</p><p> </p><p>One of the ways MongoDB does this is through the provision of first-party drivers, he said. 
The company offers 12 different programming language drivers for MongoDB, covering everything from Rust to Java, JavaScript, Python, etc.</p><p> </p><p>“So, as a developer, you’re importing a library into your environment,” Massingham said. “And then rather than having to construct convoluted SQL statements -- essentially learning another language to interact with the data in your database or data store -- you're going to manipulate data idiomatically using objects or whatever other constructs are normal within the programming language that you're using. It just makes it way simpler for developers to interact with the data that's stored in MongoDB versus interacting with data in a relational database.”</p><p><h3>MongoDB and Prisma</h3></p><p>Bramer Schmidt said that while a truism in software engineering holds that code moves fast and data moves slow, we are now starting to see more innovation around the data tooling space.</p><p> </p><p>“And Mongo is a great example of that,” he said. “Mongo is a database that is much nicer to use for developers, you can express more different data constructs, and Mongo can handle things under the hood.”</p><p> </p><p>Prisma, too, is innovating around the developer experience for working with data, making it easier for developers to build applications that rely on data and to do that faster, Bramer Schmidt said.</p><p> </p><p>“The way we do that in Prisma is we have the tooling introspect your database, it will go and assemble documents in MongoDB, and then generate a schema based on that, and then it will pull that information into your development environment, such that, when you write queries, you will get autocompletion, and the IDE will tell you if you're making a mistake,” he said. “You will have that confidence in your environment instead of having to look at the documentation, try to remember what fields are where or how to do things. 
So that is increasing the confidence of the developer, enabling them to move faster.”</p>
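Massingham's contrast with hand-built SQL strings can be sketched with a toy matcher. A real application would use an official driver such as PyMongo against a live server; this self-contained stand-in only demonstrates the idiomatic, object-shaped query style MongoDB drivers expose (the operator subset and sample data are invented for illustration):

```python
# Illustrative sketch: MongoDB's drivers let you query with plain objects
# (e.g. {"year": {"$gt": 2009}}) instead of assembling SQL strings. A real
# app would use an official driver such as PyMongo against a live server;
# this tiny in-memory matcher just shows the idiomatic query shape.

OPS = {
    "$gt": lambda field, arg: field > arg,
    "$lt": lambda field, arg: field < arg,
    "$in": lambda field, arg: field in arg,
}

def matches(doc, query):
    """Return True if a document satisfies a MongoDB-style filter."""
    for key, cond in query.items():
        if isinstance(cond, dict):  # operator form, e.g. {"$gt": 2009}
            if not all(OPS[op](doc.get(key), arg) for op, arg in cond.items()):
                return False
        elif doc.get(key) != cond:  # plain equality, e.g. {"lang": "Rust"}
            return False
    return True

drivers = [
    {"lang": "Rust", "year": 2015},
    {"lang": "Java", "year": 2010},
    {"lang": "Python", "year": 2009},
]
recent = [d["lang"] for d in drivers if matches(d, {"year": {"$gt": 2009}})]
print(recent)  # ['Rust', 'Java']
```

The filter is an ordinary data structure in the host language, which is what lets an IDE autocomplete and type-check it, the same benefit Bramer Schmidt describes for Prisma's generated schemas.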
]]></description>
      <pubDate>Thu, 16 Jun 2022 17:56:01 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/unlocking-the-developer-hNdJBksG</link>
      <content:encoded><![CDATA[<p>Proper tooling is perhaps the primary key to unlocking developer productivity. With the right tools and frameworks, developers can be productive in minutes versus having to toil over boilerplate code. And as data-hungry use cases such as AI and machine learning emerge, data tooling is becoming paramount.</p><p> </p><p>This was evident at the recent <a href="https://www.mongodb.com/world-2022">MongoDB World</a> conference in New York City, where TNS Founder and Publisher Alex Williams recorded this episode of The New Stack Makers podcast featuring <a href="https://www.linkedin.com/in/peggyrayzis/">Peggy Rayzis</a>, senior director of developer experience at Apollo GraphQL; <a href="https://www.linkedin.com/in/leeerob">Lee Robinson</a>, vice president of developer experience at Vercel; <a href="https://www.linkedin.com/in/imassingham/?originalSubdomain=uk">Ian Massingham</a>, vice president of developer relations and community at MongoDB; and <a href="https://twitter.com/sorenbs">Søren Bramer Schmidt</a>, co-founder and CEO of Prisma, discussing how their companies’ offerings help unlock developer productivity.</p><p><h3>Apollo GraphQL and Supergraphs</h3></p><p>Apollo GraphQL unlocks developers by helping them build supergraphs, Rayzis said. A <a href="https://www.apollographql.com/blog/announcement/backend/the-supergraph-a-new-way-to-think-about-graphql/">supergraph</a> is a unified network of a company's data services and capabilities, accessible via a consistent and discoverable place that any developer can reach with a GraphQL query. GraphQL is a query language for communicating about data.</p><p> </p><p>“And what's really great about the supergraph is even though it's unified, it's very modular and incrementally adoptable. So you don't have to like rewrite all of your backend system and APIs,” she said. 
“What's really great about the supergraph is you can connect your legacy infrastructure, like your relational databases, and connect that to a more modern stack, like MongoDB Atlas, for example, or even connect it to a mainframe, as we've seen with some of our customers. And it brings that together in one place that can evolve over time. And we found that it just makes developers so much more productive, helps them shave months off of their development time and create experiences that were impossible before.”</p><p><h3>Vercel: Strong Defaults</h3></p><p>Meanwhile, Robinson touted the virtues of Next.js, Vercel’s popular React-based framework, which provides developers with the tools and the production defaults to make a fast web experience. The goal is to enable frontend developers to move from an idea to a global application in seconds.</p><p> </p><p>Robinson said he believes it’s important for a tool or framework to have good, strong defaults, but also to be extensible, allowing developers to make changes such that they do not necessarily have to eject fully out of the tool that they're using, but can customize without having to leave the framework, library, or tool of their choice.</p><p> </p><p>“If you can provide that great experience for the 90% use case by default, but still allow maybe the extra 10%, you know, the power developer who needs to modify something without having to just rewrite from scratch, you can go pretty far,” he said.</p><p><h3>Data Tooling</h3></p><p>When it comes to data tooling, MongoDB is trying to help developers manipulate and work with data in a more productive and effective way, Massingham said.</p><p> </p><p>One of the ways MongoDB does this is through the provision of first-party drivers, he said. 
The company offers 12 different programming language drivers for MongoDB, covering everything from Rust to Java, JavaScript, Python, etc.</p><p> </p><p>“So, as a developer, you’re importing a library into your environment,” Massingham said. “And then rather than having to construct convoluted SQL statements -- essentially learning another language to interact with the data in your database or data store -- you're going to manipulate data idiomatically using objects or whatever other constructs are normal within the programming language that you're using. It just makes it way simpler for developers to interact with the data that's stored in MongoDB versus interacting with data in a relational database.”</p><p><h3>MongoDB and Prisma</h3></p><p>Bramer Schmidt said that while a truism in software engineering holds that code moves fast and data moves slow, we are now starting to see more innovation around the data tooling space.</p><p> </p><p>“And Mongo is a great example of that,” he said. “Mongo is a database that is much nicer to use for developers, you can express more different data constructs, and Mongo can handle things under the hood.”</p><p> </p><p>Prisma, too, is innovating around the developer experience for working with data, making it easier for developers to build applications that rely on data and to do that faster, Bramer Schmidt said.</p><p> </p><p>“The way we do that in Prisma is we have the tooling introspect your database, it will go and assemble documents in MongoDB, and then generate a schema based on that, and then it will pull that information into your development environment, such that, when you write queries, you will get autocompletion, and the IDE will tell you if you're making a mistake,” he said. “You will have that confidence in your environment instead of having to look at the documentation, try to remember what fields are where or how to do things. 
So that is increasing the confidence of the developer, enabling them to move faster.”</p>
]]></content:encoded>
      <enclosure length="21296318" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/dde6f122-d1c1-40e2-8d40-0dc84686ed9d/audio/9a3ea0b3-bc06-4e0a-a6ef-00d38fc56062/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Unlocking the Developer</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:22:10</itunes:duration>
      <itunes:summary>Proper tooling is perhaps the primary key to unlocking developer productivity. With the right tools and frameworks, developers can be productive in minutes versus having to toil over boilerplate code. And as data-hungry use cases such as AI and machine learning emerge, data tooling is becoming paramount.

This was evident at the recent MongoDB World conference in New York City where TNS Founder and Publisher Alex Williams recorded this episode of The New Stack Makers podcast featuring Peggy Rayzis, senior director of developer experience at Apollo GraphQL; Lee Robinson, vice president of developer experience at Vercel; Ian Massingham, vice president of developer relations and community at MongoDB; and Søren Bramer Schmidt, co-founder and CEO of Prisma, discussing how their companies’ offerings help unlock developer productivity.

Apollo GraphQL and Supergraphs

Apollo GraphQL unlocks developers by helping them build supergraphs, Rayzis said. A supergraph is a unified network of a company&apos;s data services and capabilities that is accessible via a consistent and discoverable place that any developer can access with a GraphQL query. GraphQL is a query language for communicating about data.

“And what&apos;s really great about the supergraph is even though it&apos;s unified, it&apos;s very modular and incrementally adoptable. So you don&apos;t have to like rewrite all of your backend system and APIs,” she said. “What&apos;s really great about the supergraph is you can connect your legacy infrastructure, like your relational databases, and connect that to a more modern stack, like MongoDB Atlas, for example, or even connect it to a mainframe, as we&apos;ve seen with some of our customers. And it brings that together in one place that can evolve over time. And we found that it just makes developers so much more productive, helps them shave months off of their development time and create experiences that were impossible before.”

Meanwhile, Robinson touted the virtues of Next.js, Vercel’s popular React-based framework, which provides developers with the tools and the production defaults to make a fast web experience. The goal is to enable frontend developers to be able to move from an idea to a global application in seconds.

Robinson said he believes it’s important for a tool or framework to have good, strong defaults, but also to be extensible, allowing developers to make changes such that they do not necessarily have to eject fully out of the tool that they&apos;re using, but can customize without having to leave the framework, library, or tool of their choice.

“If you can provide that great experience for the 90% use case by default, but still allow maybe the extra 10%, you know, the power developer who needs to modify something without having to just rewrite from scratch, you can go pretty far,” he said.

Data Tooling

When it comes to data tooling, MongoDB is trying to help developers manipulate and work with data in a more productive and effective way, Massingham said.

One of the ways MongoDB does this is through the provision of first-party drivers, he said. The company offers 12 different programming language drivers for MongoDB, covering everything from Rust to Java, JavaScript, Python, etc.

“So, as a developer, you’re importing a library into your environment,” Massingham said. “And then rather than having to construct convoluted SQL statements -- essentially learning another language to interact with the data in your database or data store -- you&apos;re going to manipulate data idiomatically using objects or whatever other constructs that are normal within the programming language that you&apos;re using. It just makes it way simpler for developers to interact with the data that&apos;s stored in MongoDB versus interacting with data in a relational database.”

MongoDB and Prisma

Bramer Schmidt said that while a truism in software engineering holds that code moves fast and data moves slow, we are now starting to see more innovation around the data tooling space.

“And Mongo is a great example of that,” he said. “Mongo is a database that is much nicer to use for developers, you can express more different data constructs, and Mongo can handle things under the hood.”

Prisma, too, is innovating around the developer experience for working with data, making it easier for developers to build applications that rely on data and to do that faster, Bramer Schmidt said.

“The way we do that in Prisma is we have the tooling introspect your database, it will go and assemble documents in MongoDB, and then generate a schema based on that, and then it will pull that information into your development environment, such that, when you write queries, you will get autocompletion, and the IDE will tell you if you&apos;re making a mistake,” he said. “You will have that confidence in your environment instead of having to look at the documentation, try to remember what fields are where or how to do things. So that is increasing the confidence of the developer, enabling them to move faster.”</itunes:summary>
      <itunes:subtitle>Proper tooling is perhaps the primary key to unlocking developer productivity. With the right tools and frameworks, developers can be productive in minutes versus having to toil over boilerplate code. And as data-hungry use cases such as AI and machine learning emerge, data tooling is becoming paramount.

This was evident at the recent MongoDB World conference in New York City where TNS Founder and Publisher Alex Williams recorded this episode of The New Stack Makers podcast featuring Peggy Rayzis, senior director of developer experience at Apollo GraphQL; Lee Robinson, vice president of developer experience at Vercel; Ian Massingham, vice president of developer relations and community at MongoDB; and Søren Bramer Schmidt, co-founder and CEO of Prisma, discussing how their companies’ offerings help unlock developer productivity.

Apollo GraphQL and Supergraphs

Apollo GraphQL unlocks developers by helping them build supergraphs, Rayzis said. A supergraph is a unified network of a company&apos;s data services and capabilities that is accessible via a consistent and discoverable place that any developer can access with a GraphQL query. GraphQL is a query language for communicating about data.

“And what&apos;s really great about the supergraph is even though it&apos;s unified, it&apos;s very modular and incrementally adoptable. So you don&apos;t have to like rewrite all of your backend system and APIs,” she said. “What&apos;s really great about the supergraph is you can connect your legacy infrastructure, like your relational databases, and connect that to a more modern stack, like MongoDB Atlas, for example, or even connect it to a mainframe, as we&apos;ve seen with some of our customers. And it brings that together in one place that can evolve over time. And we found that it just makes developers so much more productive, helps them shave months off of their development time and create experiences that were impossible before.”

Meanwhile, Robinson touted the virtues of Next.js, Vercel’s popular React-based framework, which provides developers with the tools and the production defaults to make a fast web experience. The goal is to enable frontend developers to be able to move from an idea to a global application in seconds.

Robinson said he believes it’s important for a tool or framework to have good, strong defaults, but also to be extensible, allowing developers to make changes such that they do not necessarily have to eject fully out of the tool that they&apos;re using, but can customize without having to leave the framework, library, or tool of their choice.

“If you can provide that great experience for the 90% use case by default, but still allow maybe the extra 10%, you know, the power developer who needs to modify something without having to just rewrite from scratch, you can go pretty far,” he said.

Data Tooling

When it comes to data tooling, MongoDB is trying to help developers manipulate and work with data in a more productive and effective way, Massingham said.

One of the ways MongoDB does this is through the provision of first-party drivers, he said. The company offers 12 different programming language drivers for MongoDB, covering everything from Rust to Java, JavaScript, Python, etc.

“So, as a developer, you’re importing a library into your environment,” Massingham said. “And then rather than having to construct convoluted SQL statements -- essentially learning another language to interact with the data in your database or data store -- you&apos;re going to manipulate data idiomatically using objects or whatever other constructs that are normal within the programming language that you&apos;re using. It just makes it way simpler for developers to interact with the data that&apos;s stored in MongoDB versus interacting with data in a relational database.”

MongoDB and Prisma

Bramer Schmidt said that while a truism in software engineering holds that code moves fast and data moves slow, we are now starting to see more innovation around the data tooling space.

“And Mongo is a great example of that,” he said. “Mongo is a database that is much nicer to use for developers, you can express more different data constructs, and Mongo can handle things under the hood.”

Prisma, too, is innovating around the developer experience for working with data, making it easier for developers to build applications that rely on data and to do that faster, Bramer Schmidt said.

“The way we do that in Prisma is we have the tooling introspect your database, it will go and assemble documents in MongoDB, and then generate a schema based on that, and then it will pull that information into your development environment, such that, when you write queries, you will get autocompletion, and the IDE will tell you if you&apos;re making a mistake,” he said. “You will have that confidence in your environment instead of having to look at the documentation, try to remember what fields are where or how to do things. So that is increasing the confidence of the developer, enabling them to move faster.”</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1326</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1d945d36-7b71-4a5c-8396-a15a323d99f7</guid>
      <title>MongoDB 6.0 Offers Client-Side End-to-End Encryption</title>
      <description><![CDATA[<p>"Developers aren't cryptographers. We can only do so much security training, and frankly, they shouldn't have to make hard choices about this encryption mode or that encryption mode. It should just, like, work," said <a href="https://www.mongodb.com/blog/authors/kenneth-white">Kenneth White</a>, a security principal at MongoDB, explaining the need for MongoDB's new <a href="https://www.mongodb.com/products/queryable-encryption">Queryable Encryption</a> feature.</p><p> </p><p>In this latest edition of <a href="/podcasts">The New Stack Makers</a> podcast, we discuss MongoDB's new end-to-end client-side encryption, which allows an application to query an encrypted database and keep the queries encrypted in transit, an industry first, according to the company.</p><p> </p><p>White discussed this technology in depth with TNS publisher Alex Williams in a conversation recorded at MongoDB World, held last week in New York.</p><p> </p><p>MongoDB has offered the ability to <a href="https://www.mongodb.com/blog/post/field-level-encryption-is-ga">encrypt and decrypt documents</a> since MongoDB 4.2, though this release is the first to allow an application to query the encrypted data. Developers with no expertise in encryption can write apps that use this capability on the client side, and the <a href="https://www.mongodb.com/products/queryable-encryption">capability itself</a> (available in preview mode for MongoDB 6.0) adds no noticeable overhead to application performance, the company claims.</p><p> </p><p>Data remains encrypted at all times, even in memory and in the CPU; the keys never leave the application and cannot be accessed by the server. Nor can the database or cloud service administrator look at the raw data.</p><p> </p><p>For organizations, queryable encryption greatly expands the utility of using MongoDB for all sorts of sensitive and secret data. 
Customer service reps, for instance, could use the data to help customers with issues around sensitive data, such as Social Security numbers or credit card numbers.</p><p> </p><p>In this podcast, White also spoke about the considerable engineering effort to make this technology possible, and to make it easy for developers to use.</p><p> </p><p>"In terms of how we got here, the biggest breakthroughs weren't cryptography, they were the engineering pieces, the things that make it so that you can scale to do key management, to do indexes that really have these kinds of capabilities in a practical way," White said.</p><p> </p><p>The work was necessary to serve a user base that needs maximum scalability from its technology; many customers have "monster workloads," he noted.</p><p> </p><p>"We've got some customers that have over 800 shards, meaning 800 different physical servers around the world for one system. I mean, that's massive," he said. "So a lot of the engineering over the last year and a half [has been] to sort of translate those math and algorithm techniques into something that's practical in the database."</p>
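The principle behind querying data the server cannot read can be sketched with a deterministic keyed token. To be clear, this is not MongoDB's actual scheme: Queryable Encryption uses structured encryption with randomized ciphertexts and server-side index tokens, in part to avoid the equality-pattern leakage a naive deterministic approach like this one exhibits. The toy below only shows the core idea that a client-held key can make encrypted values matchable:

```python
# Illustrative sketch, not MongoDB's actual scheme. This toy version shows
# the underlying idea of equality queries over client-side-encrypted data:
# a deterministic keyed token lets the server match values it cannot read.

import hashlib
import hmac

CLIENT_KEY = b"never-leaves-the-application"  # hypothetical client-held key

def token(value: str) -> str:
    """Deterministic keyed digest: same plaintext always yields the same token."""
    return hmac.new(CLIENT_KEY, value.encode(), hashlib.sha256).hexdigest()

# The client tokenizes before insert; the server stores only opaque tokens.
server_store = [
    {"customer": "a1", "ssn_token": token("123-45-6789")},
    {"customer": "b2", "ssn_token": token("987-65-4321")},
]

# To query, the client tokenizes the search value the same way...
needle = token("123-45-6789")
# ...and the server matches tokens without ever holding the key or plaintext.
hits = [r["customer"] for r in server_store if r["ssn_token"] == needle]
print(hits)  # ['a1']
```

A customer service rep's lookup by Social Security number works this way in spirit: the match happens server-side, but decryption can only ever happen in the application.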
]]></description>
      <pubDate>Thu, 16 Jun 2022 00:10:27 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/mongodb-60-offers-client-side-end-to-end-encryption-RuLcsxbV</link>
      <content:encoded><![CDATA[<p>"Developers aren't cryptographers. We can only do so much security training, and frankly, they shouldn't have to make hard choices about this encryption mode or that encryption mode. It should just, like, work," said <a href="https://www.mongodb.com/blog/authors/kenneth-white">Kenneth White</a>, a security principal at MongoDB, explaining the need for MongoDB's new <a href="https://www.mongodb.com/products/queryable-encryption">Queryable Encryption</a> feature.</p><p> </p><p>In this latest edition of <a href="/podcasts">The New Stack Makers</a> podcast, we discuss [sponsor_inline_mention slug="mongodb" ]MongoDB[/sponsor_inline_mention]'s new end-to-end client-side encryption, which allows an application to query an encrypted database while keeping the queries in transit encrypted, an industry first, according to the company.</p><p> </p><p>White discussed this technology in depth with TNS publisher Alex Williams, in a conversation recorded at MongoDB World, held last week in New York.</p><p> </p><p>MongoDB has offered the ability to <a href="https://www.mongodb.com/blog/post/field-level-encryption-is-ga">encrypt and decrypt documents</a> since MongoDB 4.2, though this release is the first to allow an application to query the encrypted data. Developers with no expertise in encryption can write apps that use this capability on the client side, and the <a href="https://www.mongodb.com/products/queryable-encryption">capability itself</a> (available in preview mode for MongoDB 6.0) adds no noticeable overhead to application performance, the company claims.</p><p> </p><p>Data remains encrypted at all times, even in memory and in the CPU; the keys never leave the application and cannot be accessed by the server. Nor can the database or cloud service administrator look at the raw data.</p><p> </p><p>For organizations, queryable encryption greatly expands the utility of using MongoDB for all sorts of sensitive and secret data. 
Customer service reps, for instance, could use the data to help customers with issues around sensitive data, such as Social Security numbers or credit card numbers.</p><p> </p><p>In this podcast, White also spoke about the considerable engineering effort to make this technology possible — and make it easy to use for developers.</p><p> </p><p>"In terms of how we got here, the biggest breakthroughs weren't cryptography, they were the engineering pieces, the things that make it so that you can scale to do key management, to do indexes that really have these kinds of capabilities in a practical way," White said.</p><p> </p><p>It was necessary to serve a user base that needs maximum scalability in their technologies. Many have "monster workloads," he noted.</p><p> </p><p>"We've got some customers that have over 800 shards, meaning 800 different physical servers around the world for one system. I mean, that's massive," he said. "So it was a lot of the engineering over the last year and a half [has been] to sort of translate those math and algorithm techniques into something that's practical in the database."</p>
]]></content:encoded>
      <enclosure length="16702946" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/68332ac7-69b1-4818-95dc-c37f86dc716b/audio/b4116136-9e09-4633-82ce-3c55cde5a30f/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>MongoDB 6.0 Offers Client-Side End-to-End Encryption</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:17:23</itunes:duration>
      <itunes:summary>&quot;Developers aren&apos;t cryptographers. We can only do so much security training, and frankly, they shouldn&apos;t have to make hard choices about this encryption mode or that encryption mode. It should just, like, work,&quot; said Kenneth White,  a security principal at MongoDB, explaining the need for MongoDB&apos;s new Queryable Encryption feature. 

In this latest edition of The New Stack Makers podcast, we discuss MongoDB&apos;s new end-to-end client-side encryption, which allows an application to query an encrypted database while keeping the queries in transit encrypted, an industry first, according to the company.

White discussed this technology in depth with TNS publisher Alex Williams, in a conversation recorded at MongoDB World, held last week in New York.

MongoDB has offered the ability to encrypt and decrypt documents since MongoDB 4.2, though this release is the first to allow an application to query the encrypted data. Developers with no expertise in encryption can write apps that use this capability on the client side, and the capability itself (available in preview mode for MongoDB 6.0) adds no noticeable overhead to application performance, the company claims.

Data remains encrypted at all times, even in memory and in the CPU; the keys never leave the application and cannot be accessed by the server. Nor can the database or cloud service administrator look at the raw data.

For organizations, queryable encryption greatly expands the utility of using MongoDB for all sorts of sensitive and secret data. Customer service reps, for instance, could use the data to help customers with issues around sensitive data, such as social security numbers or credit card numbers.

In this podcast, White also spoke about the considerable engineering effort to make this technology possible — and make it easy to use for developers.

&quot;In terms of how we got here, the biggest breakthroughs weren&apos;t cryptography, they were the engineering pieces, the things that make it so that you can scale to do key management, to do indexes that really have these kinds of capabilities in a practical way,&quot; White said.

It was necessary to serve a user base that needs maximum scalability in their technologies. Many have &quot;monster workloads,&quot; he noted.

&quot;We&apos;ve got some customers that have over 800 shards, meaning 800 different physical servers around the world for one system. I mean, that&apos;s massive,&quot; he said. &quot;So it was a lot of the engineering over the last year and a half [has been] to sort of translate those math and algorithm techniques into something that&apos;s practical in the database.&quot;</itunes:summary>
      <itunes:subtitle>&quot;Developers aren&apos;t cryptographers. We can only do so much security training, and frankly, they shouldn&apos;t have to make hard choices about this encryption mode or that encryption mode. It should just, like, work,&quot; said Kenneth White,  a security principal at MongoDB, explaining the need for MongoDB&apos;s new Queryable Encryption feature. 

In this latest edition of The New Stack Makers podcast, we discuss MongoDB&apos;s new end-to-end client-side encryption, which allows an application to query an encrypted database while keeping the queries in transit encrypted, an industry first, according to the company.

White discussed this technology in depth with TNS publisher Alex Williams, in a conversation recorded at MongoDB World, held last week in New York.

MongoDB has offered the ability to encrypt and decrypt documents since MongoDB 4.2, though this release is the first to allow an application to query the encrypted data. Developers with no expertise in encryption can write apps that use this capability on the client side, and the capability itself (available in preview mode for MongoDB 6.0) adds no noticeable overhead to application performance, the company claims.

Data remains encrypted at all times, even in memory and in the CPU; the keys never leave the application and cannot be accessed by the server. Nor can the database or cloud service administrator look at the raw data.

For organizations, queryable encryption greatly expands the utility of using MongoDB for all sorts of sensitive and secret data. Customer service reps, for instance, could use the data to help customers with issues around sensitive data, such as social security numbers or credit card numbers.

In this podcast, White also spoke about the considerable engineering effort to make this technology possible — and make it easy to use for developers.

&quot;In terms of how we got here, the biggest breakthroughs weren&apos;t cryptography, they were the engineering pieces, the things that make it so that you can scale to do key management, to do indexes that really have these kinds of capabilities in a practical way,&quot; White said.

It was necessary to serve a user base that needs maximum scalability in their technologies. Many have &quot;monster workloads,&quot; he noted.

&quot;We&apos;ve got some customers that have over 800 shards, meaning 800 different physical servers around the world for one system. I mean, that&apos;s massive,&quot; he said. &quot;So it was a lot of the engineering over the last year and a half [has been] to sort of translate those math and algorithm techniques into something that&apos;s practical in the database.&quot;</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1325</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9eaf3f83-af2e-4e82-80bb-4fab290c568f</guid>
      <title>Simplifying Cloud Native Application Development with Ballerina</title>
      <description><![CDATA[<p style="font-weight: 400;">For the past six years, WSO2 has been developing Ballerina, an open-source programming language that streamlines the writing of new services and APIs. It aims to simplify how developers use, combine, and create network services, and to get highly distributed applications working together toward a determined outcome.</p><p style="font-weight: 400;">In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/enewcomer/">Eric Newcomer</a>, Chief Technology Officer of WSO2, discusses how the company created a new programming language from the ground up, and the plans for it to become a predominant cloud native language. <a href="https://thenewstack.io/author/darryl-taft/">Darryl Taft</a>, news editor of The New Stack, hosted this podcast.</p><p style="font-weight: 400;">Founded on the idea that development involving integration was too hard, Ballerina was created for programming in highly distributed environments. “Cloud computing is an evolution of distributed computing of integration. You're talking about microservices and APIs that need to talk to each other in the cloud,” said Newcomer. “And what Ballerina does, is it thinks about what functions outside of the program need to be talked to,” he added.</p><p style="font-weight: 400;">Developers can easily pick up Ballerina to create cloud applications. The language design is informed by TypeScript and JavaScript but with some additional capabilities, Newcomer said. 
“Developers can create records and schemas for JSON payloads in and out to support the APIs for cloud, mobile, or web apps, and it has concurrency for concurrent processing of multiple calls, transaction control, but in a very familiar syntax, like TypeScript or JavaScript.”</p><p style="font-weight: 400;">WSO2 is using Ballerina in the company’s <a href="/wso2s-choreo-offers-low-code-for-kubernetes/">low-code-like offering, Choreo</a>, which includes features such as the ability to create diagrams. “The long-time challenge in the industry is how do you represent your programming code in a graphical form. [Sanjiva Weerawarana, Founder of WSO2] has solved this problem by putting into the language syntax elements from which you can create diagrams. And he did it in such a way that you can edit the diagram and create code,” said Newcomer.</p><p style="font-weight: 400;">Engineering for the cloud requires a programming language that can help reengineer applications to achieve autoscaling, resiliency, and independent agility, said Newcomer. WSO2 is continuing to push its work forward to tackle this challenge. “We're thinking Choreo is going to help us because it's leveraging the magic of Ballerina to help people get their job done faster. Once they see that, they'll see Ballerina and get the benefits of it,” Newcomer said.</p>
]]></description>
      <pubDate>Tue, 7 Jun 2022 19:49:36 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/simplifying-cloud-native-application-development-with-ballerina-8uoFG55T</link>
      <content:encoded><![CDATA[<p style="font-weight: 400;">For the past six years, WSO2 has been developing Ballerina, an open-source programming language that streamlines the writing of new services and APIs. It aims to simplify how developers use, combine, and create network services, and to get highly distributed applications working together toward a determined outcome.</p><p style="font-weight: 400;">In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/enewcomer/">Eric Newcomer</a>, Chief Technology Officer of WSO2, discusses how the company created a new programming language from the ground up, and the plans for it to become a predominant cloud native language. <a href="https://thenewstack.io/author/darryl-taft/">Darryl Taft</a>, news editor of The New Stack, hosted this podcast.</p><p style="font-weight: 400;">Founded on the idea that development involving integration was too hard, Ballerina was created for programming in highly distributed environments. “Cloud computing is an evolution of distributed computing of integration. You're talking about microservices and APIs that need to talk to each other in the cloud,” said Newcomer. “And what Ballerina does, is it thinks about what functions outside of the program need to be talked to,” he added.</p><p style="font-weight: 400;">Developers can easily pick up Ballerina to create cloud applications. The language design is informed by TypeScript and JavaScript but with some additional capabilities, Newcomer said. 
“Developers can create records and schemas for JSON payloads in and out to support the APIs for cloud, mobile, or web apps, and it has concurrency for concurrent processing of multiple calls, transaction control, but in a very familiar syntax, like TypeScript or JavaScript.”</p><p style="font-weight: 400;">WSO2 is using Ballerina in the company’s <a href="/wso2s-choreo-offers-low-code-for-kubernetes/">low-code-like offering, Choreo</a>, which includes features such as the ability to create diagrams. “The long-time challenge in the industry is how do you represent your programming code in a graphical form. [Sanjiva Weerawarana, Founder of WSO2] has solved this problem by putting into the language syntax elements from which you can create diagrams. And he did it in such a way that you can edit the diagram and create code,” said Newcomer.</p><p style="font-weight: 400;">Engineering for the cloud requires a programming language that can help reengineer applications to achieve autoscaling, resiliency, and independent agility, said Newcomer. WSO2 is continuing to push its work forward to tackle this challenge. “We're thinking Choreo is going to help us because it's leveraging the magic of Ballerina to help people get their job done faster. Once they see that, they'll see Ballerina and get the benefits of it,” Newcomer said.</p>
]]></content:encoded>
      <enclosure length="30998044" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/9e6fd159-2ee9-4308-8c5a-cb734d8b92ad/audio/8981a0e4-bb32-4272-aa55-fb9cdeaa57f7/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Simplifying Cloud Native Application Development with Ballerina</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:32:17</itunes:duration>
      <itunes:summary>For the past six years, WSO2 has been developing Ballerina, an open-source programming language that streamlines the writing of new services and APIs. It aims to simplify how developers use, combine, and create network services, and to get highly distributed applications working together toward a determined outcome.

In this episode of The New Stack Makers podcast, Eric Newcomer, Chief Technology Officer of WSO2, discusses how the company created a new programming language from the ground up, and the plans for it to become a predominant cloud native language. Darryl Taft, news editor of The New Stack, hosted this podcast.</itunes:summary>
      <itunes:subtitle>For the past six years, WSO2 has been developing Ballerina, an open-source programming language that streamlines the writing of new services and APIs. It aims to simplify how developers use, combine, and create network services, and to get highly distributed applications working together toward a determined outcome.

In this episode of The New Stack Makers podcast, Eric Newcomer, Chief Technology Officer of WSO2, discusses how the company created a new programming language from the ground up, and the plans for it to become a predominant cloud native language. Darryl Taft, news editor of The New Stack, hosted this podcast.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1324</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3014ac91-e624-4520-9fa9-54b5d423d301</guid>
      <title>The Future of Open Source Contributions from KubeCon Europe</title>
      <description><![CDATA[<p>VALENCIA – Open source code is part of at least 70% of enterprise stacks. Yet a lot of open source contributors are still unpaid volunteers. Even more than tech as a whole, the future of open source relies on the community. Unless you're among the top tier of funded open source projects, your sustainability relies on building a community – whether you want to or not – and cultivating project leadership to help recruit new maintainers – whether you want to hand over the reins or not.</p><p> </p><p>That's where the Technical Advisory Group, or <a href="https://github.com/cncf/tag-contributor-strategy">TAG on Contributor Strategy</a>, comes in, acting as maintainer relations for the Cloud Native Computing Foundation. In this episode of <a href="/tag/the-new-stack-makers">The New Stack Makers podcast</a>, recorded on the floor of <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/">KubeCon + CloudNativeCon Europe 2022</a>, we talk to <a href="https://twitter.com/geekygirldawn">Dawn Foster</a><span style="font-weight: 400;">, VMware's director of open source community strategy; <a href="https://twitter.com/fuzzychef">Josh Berkus</a>, Red Hat's Kubernetes community manager; <a href="https://twitter.com/CathPaga">Catherine Paganini</a>, Buoyant's head of marketing and community; and <a href="https://twitter.com/ATechGirl">Deepthi Sigireddi</a>, a software engineer at PlanetScale</span><span style="font-weight: 400;">. Foster and Berkus are the co-chairs of the Contributor Strategy TAG, while Paganini works for Buoyant, the creator of Linkerd, and Sigireddi is a maintainer of Vitess; both Linkerd and Vitess are CNCF graduated projects. 
Each brought their unique experience in both open source contribution and leadership to talk about the open source contributor experience, sustainability, governance, and guidance.</span></p><p> </p><p>With <a href="https://twitter.com/rothgar/status/1527206327956676608">65% of KubeConEU attendees</a> at a CNCF event for the first time, albeit still during a pandemic, the figure makes for an uncertain signal about the future of open source. It either shows a burst of interest from newcomers or a dwindling interest in long-term contributions. CNCF executive director <a href="https://twitter.com/pritianka">Priyanka Sharma</a> even noted in her keynote that contributions to the foundation's biggest project, Kubernetes, have grown stagnant.</p><p> </p><p>"I see it as a positive thing. I think it's always good to get some new blood into the community. And I think you know, the projects are working to do whatever they can to get new contributors," Foster said.</p><p> </p><p>[sponsor_note slug="kubecon-cloudnativecon" ][/sponsor_note]</p><p> </p><p>But it's not just about how many contributors there are – it's also about who they are. One thing that was glaringly apparent at the event was the lack of diversity, with the vast majority of the 7,000 KubeConEU participants being young, white men. 
This isn't surprising at all, as open source is still based on a lot of voluntary work, which naturally excludes those most marginalized within the tech industry and society. That is why, according to <a href="https://opensourcesurvey.org/2017/">GitHub's State of the Octoverse</a>, open source sees only about 4% women and nonbinary contributors, and only about 2% of contributors from the African continent.</p><p> </p><p><iframe width="560" height="315" src="https://www.youtube.com/embed/fa-9gIx5cAg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="allowfullscreen"></iframe></p><p> </p><p>If open source is such an integral part of tech's future, that future is being built with more inequity than ever before.</p><p> </p><p>"The barrier to entry to open source right now is having free time. And to do free work? Yes, and let's face it, women still do a lot of childcare, a lot of housework, much more than men do, and they have less free time." <span style="font-weight: 400;">Sigireddi continued that there are other factors which discourage those widely underrepresented in tech from participating, including "not having role models, not seeing people who look like you, the communities tend to have in-jokes [and other] things that are cultural, which minorities may not be able to relate to." Most open source code, while usually forked globally, exists in English only.</span></p><p> </p><p>One message throughout KubeConEU was that if a company relies on an open source project, it should pay some of its staff to contribute to and support that project, because its business may depend on it. This would, in turn, help bring open source a bit closer to parity with the tech industry's own still-abysmal statistics.</p><p> </p><p>"I think from an ecosystem perspective, I think that companies paying people to do the work on open source makes a big difference," Foster said. 
"At VMware, we pay lots of people who work primarily on upstream open source projects. And I think that does help us get more diversity into the community, because then people can do it as part of their regular day jobs."</p><p> </p><p>Encouraging contributors who are underrepresented in open source to speak up and become more visible representatives of their projects is another way to attract more diverse contributors. Berkus said the Contributor Strategy TAG had a meeting at KubeConEU with a group of primarily Italian women who have started an inclusiveness effort, beginning with things like speaker coaching and placement.</p><p> </p><p>"It turns out that a lot of things that you need to do to have more diverse contributors are things you actually needed to do anyway, just to make things better for all new contributors," Berkus explained.</p><p> </p><p>Indeed, welcoming new open source contributors – at all levels and in both technical and non-technical roles – is an important focus of the TAG. Paganini, along with colleague <a href="https://twitter.com/RJasonMorgan">Jason Morgan</a>, is co-author of the <a href="https://landscape.cncf.io/guide">CNCF Landscape Guide</a>, which acts as a welcome to the massive, overwhelming cloud native landscape. What she has found is that people will use the open source technology, but they will contribute to it because of the community.</p><p> </p><p>"We see a lot of projects really focusing on code and docs, which of course is the basics, but people don't come for the technology per se. You can have the best technology, it's amazing, and people are super excited, but if the community isn't there, if they don't feel welcome," they won't stick around, Paganini said. "People want to be part of a tribe, right?"</p><p> </p><p>Then, once you've successfully recruited and onboarded your community, you've got to work to not only retain but promote from within. 
All this and more is jam-packed into this lively discussion that cannot be missed!</p><p> </p><p>More on open source diversity and inclusion efforts:</p><ul><li><a href="/strategies-to-beat-affinity-bias-for-more-open-source-diversity-and-inclusion/">Beat Affinity Bias with Open Source Diversity and Inclusion</a></li><li><a href="/open-source-communities-need-more-safe-spaces-and-codes-of-conducts-now/">Open Source Communities Need More Safe Spaces and Codes of Conducts. Now.</a></li><li><a href="https://blog.container-solutions.com/wtf-is-wrong-with-open-source-communities">WTF is Wrong with Open Source Communities</a></li><li><a href="/look-past-the-bros-and-concerns-about-open-source-inclusion-remain/">Look Past the Bros, and Concerns About Open Source Inclusion Remain</a></li><li><a href="/how-to-give-and-receive-technical-help-in-open-source-communities/">How to Give and Receive Technical Help in Open Source Communities</a></li><li><a href="/navigating-the-messy-world-of-open-source-contributor-data/">Navigating the Messy World of Open Source Contributor Data</a></li><li><a href="/how-to-find-a-mentor-and-get-started-in-open-source/">How to Find a Mentor and Get Started in Open Source</a></li></ul>
]]></description>
      <pubDate>Wed, 1 Jun 2022 19:43:21 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-future-of-open-source-contributions-from-kubecon-europe-lyf0zzvx</link>
      <content:encoded><![CDATA[<p>VALENCIA – Open source code is part of at least 70% of enterprise stacks. Yet a lot of open source contributors are still unpaid volunteers. Even more than tech as a whole, the future of open source relies on the community. Unless you're among the top tier of funded open source projects, your sustainability relies on building a community – whether you want to or not – and cultivating project leadership to help recruit new maintainers – whether you want to hand over the reins or not.</p><p> </p><p>That's where the Technical Advisory Group, or <a href="https://github.com/cncf/tag-contributor-strategy">TAG on Contributor Strategy</a>, comes in, acting as maintainer relations for the Cloud Native Computing Foundation. In this episode of <a href="/tag/the-new-stack-makers">The New Stack Makers podcast</a>, recorded on the floor of <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/">KubeCon + CloudNativeCon Europe 2022</a>, we talk to <a href="https://twitter.com/geekygirldawn">Dawn Foster</a><span style="font-weight: 400;">, VMware's director of open source community strategy; <a href="https://twitter.com/fuzzychef">Josh Berkus</a>, Red Hat's Kubernetes community manager; <a href="https://twitter.com/CathPaga">Catherine Paganini</a>, Buoyant's head of marketing and community; and <a href="https://twitter.com/ATechGirl">Deepthi Sigireddi</a>, a software engineer at PlanetScale</span><span style="font-weight: 400;">. Foster and Berkus are the co-chairs of the Contributor Strategy TAG, while Paganini works for Buoyant, the creator of Linkerd, and Sigireddi is a maintainer of Vitess; both Linkerd and Vitess are CNCF graduated projects. 
Each brought their unique experience in both open source contribution and leadership to talk about the open source contributor experience, sustainability, governance, and guidance.</span></p><p> </p><p>With <a href="https://twitter.com/rothgar/status/1527206327956676608">65% of KubeConEU attendees</a> at a CNCF event for the first time, albeit still during a pandemic, the figure makes for an uncertain signal about the future of open source. It either shows a burst of interest from newcomers or a dwindling interest in long-term contributions. CNCF executive director <a href="https://twitter.com/pritianka">Priyanka Sharma</a> even noted in her keynote that contributions to the foundation's biggest project, Kubernetes, have grown stagnant.</p><p> </p><p>"I see it as a positive thing. I think it's always good to get some new blood into the community. And I think you know, the projects are working to do whatever they can to get new contributors," Foster said.</p><p> </p><p>[sponsor_note slug="kubecon-cloudnativecon" ][/sponsor_note]</p><p> </p><p>But it's not just about how many contributors there are – it's also about who they are. One thing that was glaringly apparent at the event was the lack of diversity, with the vast majority of the 7,000 KubeConEU participants being young, white men. 
This isn't surprising at all, as open source is still based on a lot of voluntary work, which naturally excludes those most marginalized within the tech industry and society. That is why, according to <a href="https://opensourcesurvey.org/2017/">GitHub's State of the Octoverse</a>, open source sees only about 4% women and nonbinary contributors, and only about 2% of contributors from the African continent.</p><p> </p><p><iframe width="560" height="315" src="https://www.youtube.com/embed/fa-9gIx5cAg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="allowfullscreen"></iframe></p><p> </p><p>If open source is such an integral part of tech's future, that future is being built with more inequity than ever before.</p><p> </p><p>"The barrier to entry to open source right now is having free time. And to do free work? Yes, and let's face it, women still do a lot of childcare, a lot of housework, much more than men do, and they have less free time." <span style="font-weight: 400;">Sigireddi continued that there are other factors which discourage those widely underrepresented in tech from participating, including "not having role models, not seeing people who look like you, the communities tend to have in-jokes [and other] things that are cultural, which minorities may not be able to relate to." Most open source code, while usually forked globally, exists in English only.</span></p><p> </p><p>One message throughout KubeConEU was that if a company relies on an open source project, it should pay some of its staff to contribute to and support that project, because its business may depend on it. This would, in turn, help bring open source a bit closer to parity with the tech industry's own still-abysmal statistics.</p><p> </p><p>"I think from an ecosystem perspective, I think that companies paying people to do the work on open source makes a big difference," Foster said. 
"At VMware, we pay lots of people who work primarily on upstream open source projects. And I think that does help us get more diversity into the community, because then people can do it as part of their regular day jobs."</p><p> </p><p>Encouraging contributors who are underrepresented in open source to speak up and become more visible representatives of their projects is another way to attract more diverse contributors. Berkus said the Contributor Strategy TAG had a meeting at KubeConEU with a group of primarily Italian women who have started an inclusiveness effort, beginning with things like speaker coaching and placement.</p><p> </p><p>"It turns out that a lot of things that you need to do to have more diverse contributors are things you actually needed to do anyway, just to make things better for all new contributors," Berkus explained.</p><p> </p><p>Indeed, welcoming new open source contributors – at all levels and in both technical and non-technical roles – is an important focus of the TAG. Paganini, along with colleague <a href="https://twitter.com/RJasonMorgan">Jason Morgan</a>, is co-author of the <a href="https://landscape.cncf.io/guide">CNCF Landscape Guide</a>, which acts as a welcome to the massive, overwhelming cloud native landscape. What she has found is that people will use the open source technology, but they will contribute to it because of the community.</p><p> </p><p>"We see a lot of projects really focusing on code and docs, which of course is the basics, but people don't come for the technology per se. You can have the best technology, it's amazing, and people are super excited, but if the community isn't there, if they don't feel welcome," they won't stick around, Paganini said. "People want to be part of a tribe, right?"</p><p> </p><p>Then, once you've successfully recruited and onboarded your community, you've got to work to not only retain but promote from within. 
All this and more is jam-packed into this lively discussion that cannot be missed!</p><p> </p><p>More on open source diversity and inclusion efforts:</p><ul><li><a href="/strategies-to-beat-affinity-bias-for-more-open-source-diversity-and-inclusion/">Beat Affinity Bias with Open Source Diversity and Inclusion</a></li><li><a href="/open-source-communities-need-more-safe-spaces-and-codes-of-conducts-now/">Open Source Communities Need More Safe Spaces and Codes of Conducts. Now.</a></li><li><a href="https://blog.container-solutions.com/wtf-is-wrong-with-open-source-communities">WTF is Wrong with Open Source Communities</a></li><li><a href="/look-past-the-bros-and-concerns-about-open-source-inclusion-remain/">Look Past the Bros, and Concerns About Open Source Inclusion Remain</a></li><li><a href="/how-to-give-and-receive-technical-help-in-open-source-communities/">How to Give and Receive Technical Help in Open Source Communities</a></li><li><a href="/navigating-the-messy-world-of-open-source-contributor-data/">Navigating the Messy World of Open Source Contributor Data</a></li><li><a href="/how-to-find-a-mentor-and-get-started-in-open-source/">How to Find a Mentor and Get Started in Open Source</a></li></ul>
]]></content:encoded>
      <enclosure length="17772993" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/322c25c8-6e20-44f3-a0f2-8ee5831d7ae0/audio/c10226ae-1a38-4545-9eb8-380714ed8374/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The Future of Open Source Contributions from KubeCon Europe</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:18:30</itunes:duration>
      <itunes:summary>VALENCIA – Open source code is part of at least 70% of enterprise stacks. Yet, a lot of open source contributors are still unpaid volunteers. Even more than tech as a whole, the future of open source relies on the community. Unless you&apos;re among the top tier of funded open source projects, your sustainability relies on building a community – whether you want to or not – and cultivating project leadership to help recruit new maintainers – whether you want to hand over the reins or not.

That&apos;s where the Technical Advisory Group (TAG) on Contributor Strategy comes in, acting as maintainer relations for the Cloud Native Computing Foundation. In this episode of The New Stack Makers podcast, recorded on the floor of KubeCon + CloudNativeCon Europe 2022, we talk to Dawn Foster, VMware&apos;s director of open source community strategy; Josh Berkus, Red Hat&apos;s Kubernetes community manager; Catherine Paganini, Buoyant&apos;s head of marketing and community; and Deepthi Sigireddi, a software engineer at PlanetScale. Foster and Berkus are the co-chairs of the Contributor Strategy TAG, while Paganini&apos;s Buoyant is the creator of Linkerd and Sigireddi is a maintainer of Vitess, both CNCF graduated projects. Each brought their unique experience in both open source contribution and leadership to talk about the open source contributor experience, sustainability, governance, and guidance.</itunes:summary>
      <itunes:subtitle>VALENCIA – Open source code is part of at least 70% of enterprise stacks. Yet, a lot of open source contributors are still unpaid volunteers. Even more than tech as a whole, the future of open source relies on the community. Unless you&apos;re among the top tier of funded open source projects, your sustainability relies on building a community – whether you want to or not – and cultivating project leadership to help recruit new maintainers – whether you want to hand over the reins or not.

That&apos;s where the Technical Advisory Group (TAG) on Contributor Strategy comes in, acting as maintainer relations for the Cloud Native Computing Foundation. In this episode of The New Stack Makers podcast, recorded on the floor of KubeCon + CloudNativeCon Europe 2022, we talk to Dawn Foster, VMware&apos;s director of open source community strategy; Josh Berkus, Red Hat&apos;s Kubernetes community manager; Catherine Paganini, Buoyant&apos;s head of marketing and community; and Deepthi Sigireddi, a software engineer at PlanetScale. Foster and Berkus are the co-chairs of the Contributor Strategy TAG, while Paganini&apos;s Buoyant is the creator of Linkerd and Sigireddi is a maintainer of Vitess, both CNCF graduated projects. Each brought their unique experience in both open source contribution and leadership to talk about the open source contributor experience, sustainability, governance, and guidance.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1323</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5610910d-ff79-4fb2-929a-f3a5302a2447</guid>
      <title>Simplifying Kubernetes through Automation</title>
      <description><![CDATA[<p>VALENCIA, SPAIN — Managing the cloud virtual machines (VMs) your containers run on. Running <a href="/category/data/">data-intensive workloads</a>. Scaling services in response to spikes in traffic — but doing so in a way that doesn’t jack up your organization’s cloud spend. <a href="/category/kubernetes/">Kubernetes (K8s)</a> seems so easy at the beginning, but it brings challenges that ratchet up complexity as you go.</p><p> </p><p>The <a href="/category/cloud-native/">cloud native</a> ecosystem is filling up with tools aimed at making these challenges easier on developers, data scientists and Ops engineers. Increasingly, automation is the secret sauce helping teams and their companies work faster, safer and more productively.</p><p> </p><p>In this special On the Road edition of The New Stack Makers podcast recorded at [sponsor_inline_mention slug="kubecon-cloudnativecon" ]KubeCon + CloudNativeCon EU[/sponsor_inline_mention], we unpacked some of the ways automation helps simplify Kubernetes. We were joined by a trio of guests from [sponsor_inline_mention slug="netapp" ]Spot.io by NetApp[/sponsor_inline_mention]: <a href="https://www.linkedin.com/in/jystephan/">Jean-Yves “JY” Stephan</a>, senior product manager for Ocean for Apache Spark, along with <a href="https://www.linkedin.com/in/gilad-shahar-0320a589/">Gilad Shahar</a> and <a href="https://www.linkedin.com/in/yarin-pinyan-385b28176/">Yarin Pinyan</a> — product manager and product architect, respectively, for Spot.io.</p><p> </p><p>Until recently, Stephan noted, <a href="https://spark.apache.org/">Apache Spark</a>, the open source, unified analytics engine for large-scale data processing, couldn’t be deployed on K8s. “So all these regular software engineers were getting the cool technology with Kubernetes, cloud native solutions,” he said. 
“And the big data engineers, they were stuck with technologies from 10 years ago.”</p><p> </p><p>Spot.io, he said, lets Apache Spark run atop Kubernetes: “It’s a lot more developer friendly, it’s a lot more flexible and it can also be more cost effective.”</p><p> </p><p>The company’s Ocean CD, expected to be generally available in August, is aimed at solving another Kubernetes problem, said Pinyan: canary deployments.</p><p> </p><p>“Previously, if you were running normal VMs, without Kubernetes, it was pretty easy to do canary deployments because you had to scale up a VM and then see if the new version worked fine on it, and then gradually scale the others,” he said. “In Kubernetes, it’s pretty complex, because you have to deal with many pods and deployments.”</p><p> </p><p>In enterprises, where DevOps and SRE team members are likely serving multitudes of developers, automating as much toil as possible for devs is essential, said Shahar. For instance, Spot.io’s tools allow users to “break the configuration into parts,” he said, giving developers whatever share of responsibility for the config is deemed best for their use case.</p><p> </p><p>“We try to design our solutions in a way that will allow the DevOps [team] to set things once and basically provide pre-baked solutions for the developers,” he said. “Because the developer, at the end of the day, knows best what their application will require.”</p>
]]></description>
      <pubDate>Wed, 1 Jun 2022 19:24:30 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/simplifying-kubernetes-through-automation-zwW_R_3C</link>
      <content:encoded><![CDATA[<p>VALENCIA, SPAIN — Managing the cloud virtual machines (VMs) your containers run on. Running <a href="/category/data/">data-intensive workloads</a>. Scaling services in response to spikes in traffic — but doing so in a way that doesn’t jack up your organization’s cloud spend. <a href="/category/kubernetes/">Kubernetes (K8s)</a> seems so easy at the beginning, but it brings challenges that ratchet up complexity as you go.</p><p> </p><p>The <a href="/category/cloud-native/">cloud native</a> ecosystem is filling up with tools aimed at making these challenges easier on developers, data scientists and Ops engineers. Increasingly, automation is the secret sauce helping teams and their companies work faster, safer and more productively.</p><p> </p><p>In this special On the Road edition of The New Stack Makers podcast recorded at [sponsor_inline_mention slug="kubecon-cloudnativecon" ]KubeCon + CloudNativeCon EU[/sponsor_inline_mention], we unpacked some of the ways automation helps simplify Kubernetes. We were joined by a trio of guests from [sponsor_inline_mention slug="netapp" ]Spot.io by NetApp[/sponsor_inline_mention]: <a href="https://www.linkedin.com/in/jystephan/">Jean-Yves “JY” Stephan</a>, senior product manager for Ocean for Apache Spark, along with <a href="https://www.linkedin.com/in/gilad-shahar-0320a589/">Gilad Shahar</a> and <a href="https://www.linkedin.com/in/yarin-pinyan-385b28176/">Yarin Pinyan</a> — product manager and product architect, respectively, for Spot.io.</p><p> </p><p>Until recently, Stephan noted, <a href="https://spark.apache.org/">Apache Spark</a>, the open source, unified analytics engine for large-scale data processing, couldn’t be deployed on K8s. “So all these regular software engineers were getting the cool technology with Kubernetes, cloud native solutions,” he said. 
“And the big data engineers, they were stuck with technologies from 10 years ago.”</p><p> </p><p>Spot.io, he said, lets Apache Spark run atop Kubernetes: “It’s a lot more developer friendly, it’s a lot more flexible and it can also be more cost effective.”</p><p> </p><p>The company’s Ocean CD, expected to be generally available in August, is aimed at solving another Kubernetes problem, said Pinyan: canary deployments.</p><p> </p><p>“Previously, if you were running normal VMs, without Kubernetes, it was pretty easy to do canary deployments because you had to scale up a VM and then see if the new version worked fine on it, and then gradually scale the others,” he said. “In Kubernetes, it’s pretty complex, because you have to deal with many pods and deployments.”</p><p> </p><p>In enterprises, where DevOps and SRE team members are likely serving multitudes of developers, automating as much toil as possible for devs is essential, said Shahar. For instance, Spot.io’s tools allow users to “break the configuration into parts,” he said, giving developers whatever share of responsibility for the config is deemed best for their use case.</p><p> </p><p>“We try to design our solutions in a way that will allow the DevOps [team] to set things once and basically provide pre-baked solutions for the developers,” he said. “Because the developer, at the end of the day, knows best what their application will require.”</p>
]]></content:encoded>
      <enclosure length="13955701" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/9578e708-d7f5-4713-a0a2-c85ba6e3c1a1/audio/26476ce5-dcdb-4535-9aa9-3716fca82c75/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Simplifying Kubernetes through Automation</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:14:32</itunes:duration>
      <itunes:summary>VALENCIA, SPAIN — Managing the cloud virtual machines (VMs) your containers run on. Running data-intensive workloads. Scaling services in response to spikes in traffic — but doing so in a way that doesn’t jack up your organization’s cloud spend. Kubernetes (K8s) seems so easy at the beginning, but it brings challenges that ratchet up complexity as you go.

The cloud native ecosystem is filling up with tools aimed at making these challenges easier on developers, data scientists and Ops engineers. Increasingly, automation is the secret sauce helping teams and their companies work faster, safer and more productively.

In this special On the Road edition of The New Stack Makers podcast recorded at KubeCon + CloudNativeCon EU, we unpacked some of the ways automation helps simplify Kubernetes. We were joined by a trio of guests from Spot.io by NetApp: Jean-Yves “JY” Stephan, senior product manager for Ocean for Apache Spark, along with Gilad Shahar and Yarin Pinyan — product manager and product architect, respectively, for Spot.io.</itunes:summary>
      <itunes:subtitle>VALENCIA, SPAIN — Managing the cloud virtual machines (VMs) your containers run on. Running data-intensive workloads. Scaling services in response to spikes in traffic — but doing so in a way that doesn’t jack up your organization’s cloud spend. Kubernetes (K8s) seems so easy at the beginning, but it brings challenges that ratchet up complexity as you go.

The cloud native ecosystem is filling up with tools aimed at making these challenges easier on developers, data scientists and Ops engineers. Increasingly, automation is the secret sauce helping teams and their companies work faster, safer and more productively.

In this special On the Road edition of The New Stack Makers podcast recorded at KubeCon + CloudNativeCon EU, we unpacked some of the ways automation helps simplify Kubernetes. We were joined by a trio of guests from Spot.io by NetApp: Jean-Yves “JY” Stephan, senior product manager for Ocean for Apache Spark, along with Gilad Shahar and Yarin Pinyan — product manager and product architect, respectively, for Spot.io.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1321</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">0951e69d-c55b-4633-8bfd-702c2a2b6ec8</guid>
      <title>One of Europe’s Largest Telcos’ Cloud Native Journey</title>
      <description><![CDATA[<p><span style="font-weight: 400;">Telecoms are not necessarily associated with adopting new-generation technologies. However, Deutsche Telekom has made considerable investments in cloud native environments, creating and supporting Kubernetes clusters to support its operations infrastructure. </span></p><p><span style="font-weight: 400;">In this episode of</span><a href="/tag/the-new-stack-makers"> <span style="font-weight: 400;">The New Stack Makers podcast</span></a><span style="font-weight: 400;">, recorded on the floor of</span><a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/"> <span style="font-weight: 400;">KubeCon + CloudNativeCon Europe 2022</span></a><span style="font-weight: 400;">,</span> <span style="font-weight: 400;">DevOps engineers </span><a href="https://www.linkedin.com/in/christopher-dziomba-5a9220153/?originalSubdomain=de"><span style="font-weight: 400;">Christopher Dziomba</span></a><span style="font-weight: 400;"> and </span><a href="https://www.linkedin.com/in/samuel-nitsche-457ba3158/?locale=en_US"><span style="font-weight: 400;">Samy Nitsche</span></a><span style="font-weight: 400;"> of </span><a href="https://www.telekom.com/en"><span style="font-weight: 400;">Deutsche Telekom</span></a><span style="font-weight: 400;"> discuss how one of Europe’s largest telecom providers made the shift to cloud native.</span></p><p><span style="font-weight: 400;">Deutsche Telekom obviously didn’t start from scratch. It had decades’ worth of telecom infrastructure and networks that all needed to be integrated into the new world of Kubernetes. This involved a lot of “discussion with the other teams,” </span><span style="font-weight: 400;">Dziomba said. “We had to work together [with other departments] to see how we wanted to manage legacy integration and, especially, policy and process integration.”
</span></p><p><span style="font-weight: 400;">As it turned out, many of the existing services Deutsche Telekom offered were conducive to integration with the distributed Kubernetes infrastructure. “It was suited to be deployed on something like Kubernetes,” Dziomba said. “The decision was also made to build the Kubernetes platform by ourselves inside Deutsche Telekom and not to buy one. This really facilitated the move towards cloud native infrastructure.”</span></p><p><span style="font-weight: 400;">The shift also heavily involved the vendors that were “coming from the old route,” </span><span style="font-weight: 400;">Nitsche said. “It's sometimes a challenge to make sure that the application is really also cloud native and to make sure it can use all the benefits Kubernetes offers.”</span></p>
]]></description>
      <pubDate>Wed, 1 Jun 2022 19:24:21 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/one-of-europes-largest-telcos-cloud-native-journey-VVkUyhqu</link>
      <content:encoded><![CDATA[<p><span style="font-weight: 400;">Telecoms are not necessarily associated with adopting new-generation technologies. However, Deutsche Telekom has made considerable investments in cloud native environments, creating and supporting Kubernetes clusters to support its operations infrastructure. </span></p><p><span style="font-weight: 400;">In this episode of</span><a href="/tag/the-new-stack-makers"> <span style="font-weight: 400;">The New Stack Makers podcast</span></a><span style="font-weight: 400;">, recorded on the floor of</span><a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/"> <span style="font-weight: 400;">KubeCon + CloudNativeCon Europe 2022</span></a><span style="font-weight: 400;">,</span> <span style="font-weight: 400;">DevOps engineers </span><a href="https://www.linkedin.com/in/christopher-dziomba-5a9220153/?originalSubdomain=de"><span style="font-weight: 400;">Christopher Dziomba</span></a><span style="font-weight: 400;"> and </span><a href="https://www.linkedin.com/in/samuel-nitsche-457ba3158/?locale=en_US"><span style="font-weight: 400;">Samy Nitsche</span></a><span style="font-weight: 400;"> of </span><a href="https://www.telekom.com/en"><span style="font-weight: 400;">Deutsche Telekom</span></a><span style="font-weight: 400;"> discuss how one of Europe’s largest telecom providers made the shift to cloud native.</span></p><p><span style="font-weight: 400;">Deutsche Telekom obviously didn’t start from scratch. It had decades’ worth of telecom infrastructure and networks that all needed to be integrated into the new world of Kubernetes. This involved a lot of “discussion with the other teams,” </span><span style="font-weight: 400;">Dziomba said. “We had to work together [with other departments] to see how we wanted to manage legacy integration and, especially, policy and process integration.”
</span></p><p><span style="font-weight: 400;">As it turned out, many of the existing services Deutsche Telekom offered were conducive to integration with the distributed Kubernetes infrastructure. “It was suited to be deployed on something like Kubernetes,” Dziomba said. “The decision was also made to build the Kubernetes platform by ourselves inside Deutsche Telekom and not to buy one. This really facilitated the move towards cloud native infrastructure.”</span></p><p><span style="font-weight: 400;">The shift also heavily involved the vendors that were “coming from the old route,” </span><span style="font-weight: 400;">Nitsche said. “It's sometimes a challenge to make sure that the application is really also cloud native and to make sure it can use all the benefits Kubernetes offers.”</span></p>
]]></content:encoded>
      <enclosure length="16020066" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/64fc9723-f65e-4280-ba2c-356863908114/audio/c4a75d80-946c-4481-94c6-caf5739f8c84/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>One of Europe’s Largest Telcos’ Cloud Native Journey</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:16:41</itunes:duration>
      <itunes:summary>Telecoms are not necessarily associated with adopting new-generation technologies. However, Deutsche Telekom has made considerable investments in cloud native environments, creating and supporting Kubernetes clusters to support its operations infrastructure.

In this episode of The New Stack Makers podcast, recorded on the floor of KubeCon + CloudNativeCon Europe 2022, DevOps engineers Christopher Dziomba and Samy Nitsche of Deutsche Telekom discuss how one of Europe’s largest telecom providers made the shift to cloud native.</itunes:summary>
      <itunes:subtitle>Telecoms are not necessarily associated with adopting new-generation technologies. However, Deutsche Telekom has made considerable investments in cloud native environments, creating and supporting Kubernetes clusters to support its operations infrastructure.

In this episode of The New Stack Makers podcast, recorded on the floor of KubeCon + CloudNativeCon Europe 2022, DevOps engineers Christopher Dziomba and Samy Nitsche of Deutsche Telekom discuss how one of Europe’s largest telecom providers made the shift to cloud native.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1322</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">469f9214-a4f6-4151-9338-8f497d9136ab</guid>
      <title>OpenTelemetry Gets Better Metrics</title>
      <description><![CDATA[<p>OpenTelemetry is defined by its creators as a collection of APIs, SDKs and tools used to instrument, generate, collect and export telemetry data — metrics, logs and traces — for observability, and it has emerged as a popular CNCF project. For this interview, we're delving deeper into OpenTelemetry and its metrics support, which has just become generally available.</p><p>The metrics specifications are designed to connect metrics to other signals, to provide a migration path from OpenCensus to OpenTelemetry, and to work with existing metrics-instrumentation protocols and standards, including, of course, Prometheus.</p><p>In this episode of<a href="/tag/the-new-stack-makers"> The New Stack Makers podcast</a>, recorded on the show floor of<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/"> KubeCon + CloudNativeCon Europe 2022 in Valencia, Spain</a>, <a href="https://ca.linkedin.com/in/morganmclean">Morgan McLean</a>, director of product management, <a href="https://www.splunk.com/fr_fr">Splunk</a>; <a href="https://www.linkedin.com/in/ted-young/">Ted Young</a>, director of developer education, <a href="https://lightstep.com/">LightStep</a>; and <a href="https://www.linkedin.com/in/danieldyla">Daniel Dyla</a>, senior open source architect, <a href="https://www.dynatrace.com/">Dynatrace</a>, discussed how OpenTelemetry is evolving and the magic of observability in general for DevOps.</p>
]]></description>
      <pubDate>Wed, 25 May 2022 17:12:52 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/opentelemetry-gets-better-metrics-y3QFtkVQ</link>
      <content:encoded><![CDATA[<p>OpenTelemetry is defined by its creators as a collection of APIs, SDKs and tools used to instrument, generate, collect and export telemetry data — metrics, logs and traces — for observability, and it has emerged as a popular CNCF project. For this interview, we're delving deeper into OpenTelemetry and its metrics support, which has just become generally available.</p><p>The metrics specifications are designed to connect metrics to other signals, to provide a migration path from OpenCensus to OpenTelemetry, and to work with existing metrics-instrumentation protocols and standards, including, of course, Prometheus.</p><p>In this episode of<a href="/tag/the-new-stack-makers"> The New Stack Makers podcast</a>, recorded on the show floor of<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/"> KubeCon + CloudNativeCon Europe 2022 in Valencia, Spain</a>, <a href="https://ca.linkedin.com/in/morganmclean">Morgan McLean</a>, director of product management, <a href="https://www.splunk.com/fr_fr">Splunk</a>; <a href="https://www.linkedin.com/in/ted-young/">Ted Young</a>, director of developer education, <a href="https://lightstep.com/">LightStep</a>; and <a href="https://www.linkedin.com/in/danieldyla">Daniel Dyla</a>, senior open source architect, <a href="https://www.dynatrace.com/">Dynatrace</a>, discussed how OpenTelemetry is evolving and the magic of observability in general for DevOps.</p>
]]></content:encoded>
      <enclosure length="19382483" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/e4ef5a01-87f9-4733-bed8-134d6920fc0d/audio/4e549817-eb91-477c-9d40-aa80c088a181/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>OpenTelemetry Gets Better Metrics</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/3f598954-b72d-4b88-b419-c65587983d17/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:20:11</itunes:duration>
      <itunes:summary>OpenTelemetry is defined by its creators as a collection of APIs, SDKs and tools used to instrument, generate, collect and export telemetry data — metrics, logs and traces — for observability, and it has emerged as a popular CNCF project. For this interview, we&apos;re delving deeper into OpenTelemetry and its metrics support, which has just become generally available.

The metrics specifications are designed to connect metrics to other signals, to provide a migration path from OpenCensus to OpenTelemetry, and to work with existing metrics-instrumentation protocols and standards, including, of course, Prometheus.

In this episode of The New Stack Makers podcast, recorded on the show floor of KubeCon + CloudNativeCon Europe 2022 in Valencia, Spain, Morgan McLean, director of product management, Splunk; Ted Young, director of developer education, LightStep; and Daniel Dyla, senior open source architect, Dynatrace, discussed how OpenTelemetry is evolving and the magic of observability in general for DevOps.</itunes:summary>
      <itunes:subtitle>OpenTelemetry is defined by its creators as a collection of APIs, SDKs and tools used to instrument, generate, collect and export telemetry data — metrics, logs and traces — for observability, and it has emerged as a popular CNCF project. For this interview, we&apos;re delving deeper into OpenTelemetry and its metrics support, which has just become generally available.

The metrics specifications are designed to connect metrics to other signals, to provide a migration path from OpenCensus to OpenTelemetry, and to work with existing metrics-instrumentation protocols and standards, including, of course, Prometheus.

In this episode of The New Stack Makers podcast, recorded on the show floor of KubeCon + CloudNativeCon Europe 2022 in Valencia, Spain, Morgan McLean, director of product management, Splunk; Ted Young, director of developer education, LightStep; and Daniel Dyla, senior open source architect, Dynatrace, discussed how OpenTelemetry is evolving and the magic of observability in general for DevOps.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1320</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">eec43ab1-8e3b-44fe-b0ae-82d734bb79c1</guid>
      <title>Living with Kubernetes After the &apos;Honeymoon&apos; Ends</title>
      <description><![CDATA[<p>Nearly seven years after Google released <a href="https://thenewstack.io/category/kubernetes/">Kubernetes</a>, the open source <a href="https://thenewstack.io/category/containers/">container</a> orchestrator, into an unsuspecting world, <a href="https://www.cncf.io/wp-content/uploads/2022/02/CNCF-AR_FINAL-edits-15.2.21.pdf">5.6 million developers worldwide use it</a>.</p><p>But that number, from the latest Cloud Native Computing Foundation (CNCF) annual survey, masks a lot of frustration. Kubernetes (K8s) can make life easier for the organization that adopts it — after it makes it a lot harder. And as it scales, it can create an unending cadence of triumph and challenge.</p><p>In other words: It’s complicated.</p><p>At KubeCon + CloudNativeCon EU in Valencia, Spain last week, a trio of experts — <a href="https://www.linkedin.com/in/saad-a-malik/">Saad Malik</a>, chief technology officer and co-founder of Spectro Cloud; <a href="https://www.linkedin.com/in/baileyhayes/">Bailey Hayes</a>, principal software engineer at SingleStore; and <a href="https://www.linkedin.com/in/fabrizio-pandini-5156037b/">Fabrizio Pandini</a>, a staff engineer at VMware — joined <a href="https://twitter.com/alexwilliams">Alex Williams</a>, founder and publisher of The New Stack, and me for a livestream event.</p>
]]></description>
      <pubDate>Wed, 25 May 2022 17:04:33 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/living-with-kubernetes-after-the-honeymoon-ends-XV6pmatc</link>
      <content:encoded><![CDATA[<p>Nearly seven years after Google released <a href="https://thenewstack.io/category/kubernetes/">Kubernetes</a>, the open source <a href="https://thenewstack.io/category/containers/">container</a> orchestrator, into an unsuspecting world, <a href="https://www.cncf.io/wp-content/uploads/2022/02/CNCF-AR_FINAL-edits-15.2.21.pdf">5.6 million developers worldwide use it</a>.</p><p>But that number, from the latest Cloud Native Computing Foundation (CNCF) annual survey, masks a lot of frustration. Kubernetes (K8s) can make life easier for the organization that adopts it — after it makes it a lot harder. And as it scales, it can create an unending cadence of triumph and challenge.</p><p>In other words: It’s complicated.</p><p>At KubeCon + CloudNativeCon EU in Valencia, Spain last week, a trio of experts — <a href="https://www.linkedin.com/in/saad-a-malik/">Saad Malik</a>, chief technology officer and co-founder of Spectro Cloud; <a href="https://www.linkedin.com/in/baileyhayes/">Bailey Hayes</a>, principal software engineer at SingleStore; and <a href="https://www.linkedin.com/in/fabrizio-pandini-5156037b/">Fabrizio Pandini</a>, a staff engineer at VMware — joined <a href="https://twitter.com/alexwilliams">Alex Williams</a>, founder and publisher of The New Stack, and me for a livestream event.</p>
]]></content:encoded>
      <enclosure length="47536213" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/7ae63425-ae8b-412c-9bf4-336ca883fada/audio/40e8bb31-dfde-44d2-82a9-afd3560b2604/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Living with Kubernetes After the &apos;Honeymoon&apos; Ends</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/db60a969-1710-4421-a686-3f250ec05461/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:49:30</itunes:duration>
      <itunes:summary>Nearly seven years after Google released Kubernetes, the open source container orchestrator, into an unsuspecting world, 5.6 million developers worldwide use it.

But that number, from the latest Cloud Native Computing Foundation (CNCF) annual survey, masks a lot of frustration. Kubernetes (K8s) can make life easier for the organization that adopts it — after it makes it a lot harder. And as it scales, it can create an unending cadence of triumph and challenge.

In other words: It’s complicated.

At KubeCon + CloudNativeCon EU in Valencia, Spain, last week, a trio of experts — Saad Malik, chief technology officer and co-founder of Spectro Cloud; Bailey Hayes, principal software engineer at SingleStore; and Fabrizio Pandini, a staff engineer at VMware — joined Alex Williams, founder and publisher of The New Stack, and Heather Joslyn, features editor, for a livestream event.</itunes:summary>
      <itunes:subtitle>Nearly seven years after Google released Kubernetes, the open source container orchestrator, into an unsuspecting world, 5.6 million developers worldwide use it.

But that number, from the latest Cloud Native Computing Foundation (CNCF) annual survey, masks a lot of frustration. Kubernetes (K8s) can make life easier for the organization that adopts it — after it makes it a lot harder. And as it scales, it can create an unending cadence of triumph and challenge.

In other words: It’s complicated.

At KubeCon + CloudNativeCon EU in Valencia, Spain, last week, a trio of experts — Saad Malik, chief technology officer and co-founder of Spectro Cloud; Bailey Hayes, principal software engineer at SingleStore; and Fabrizio Pandini, a staff engineer at VMware — joined Alex Williams, founder and publisher of The New Stack, and Heather Joslyn, features editor, for a livestream event.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1318</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1aa77ca8-fdd1-4ecd-b0a0-8052b5c6a560</guid>
      <title>Kubernetes and the Cloud Native Community</title>
      <description><![CDATA[<p>The pandemic significantly accelerated the adoption of Kubernetes and cloud native environments as a way to accommodate the surge in remote workers and other infrastructure constraints. Since then, organizations that already had cloud native infrastructure in place have kept up their investments, having found them well worth maintaining. Meanwhile, Kubernetes adoption continues on an upward curve. And yet challenges remain. In this context, we look at the status of cloud native adoption, and Kubernetes in particular, compared to a year ago.</p><p>In this episode of<a href="/tag/the-new-stack-makers"> The New Stack Makers podcast</a>, recorded on the floor of<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/"> KubeCon + CloudNativeCon Europe 2022</a>, we discussed these themes, along with the state of Kubernetes and its community, with <a href="https://twitter.com/JamesLaverack?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor">James Laverack</a>, staff solutions engineer at <a href="https://www.jetstack.io/">Jetstack</a> and a member of the Kubernetes release team, and <a href="https://www.linkedin.com/in/cblecker/?originalSubdomain=ca">Christoph Blecker</a>, site reliability engineer at <a href="https://www.redhat.com/fr">Red Hat</a> and a member of the Kubernetes steering committee.</p>
]]></description>
      <pubDate>Wed, 25 May 2022 17:03:58 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/kubernetes-and-the-cloud-native-community-VQIAVQ2v</link>
      <content:encoded><![CDATA[<p>The pandemic significantly accelerated the adoption of Kubernetes and cloud native environments as a way to accommodate the surge in remote workers and other infrastructure constraints. Since then, organizations that already had cloud native infrastructure in place have kept up their investments, having found them well worth maintaining. Meanwhile, Kubernetes adoption continues on an upward curve. And yet challenges remain. In this context, we look at the status of cloud native adoption, and Kubernetes in particular, compared to a year ago.</p><p>In this episode of<a href="/tag/the-new-stack-makers"> The New Stack Makers podcast</a>, recorded on the floor of<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/"> KubeCon + CloudNativeCon Europe 2022</a>, we discussed these themes, along with the state of Kubernetes and its community, with <a href="https://twitter.com/JamesLaverack?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor">James Laverack</a>, staff solutions engineer at <a href="https://www.jetstack.io/">Jetstack</a> and a member of the Kubernetes release team, and <a href="https://www.linkedin.com/in/cblecker/?originalSubdomain=ca">Christoph Blecker</a>, site reliability engineer at <a href="https://www.redhat.com/fr">Red Hat</a> and a member of the Kubernetes steering committee.</p>
]]></content:encoded>
      <enclosure length="15081265" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/ad425b11-e012-44ff-a4f7-f6deca5e0264/audio/dba585c6-d6c0-42e1-9eba-b96caddc6249/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Kubernetes and the Cloud Native Community</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/c1d9dff3-9a9a-4df7-aeb4-a9d456d160eb/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:15:42</itunes:duration>
      <itunes:summary>The pandemic significantly accelerated the adoption of Kubernetes and cloud native environments as a way to accommodate the surge in remote workers and other infrastructure constraints. Since then, organizations that already had cloud native infrastructure in place have kept up their investments, having found them well worth maintaining. Meanwhile, Kubernetes adoption continues on an upward curve. And yet challenges remain. In this context, we look at the status of cloud native adoption, and Kubernetes in particular, compared to a year ago.

In this episode of The New Stack Makers podcast, recorded on the floor of KubeCon + CloudNativeCon Europe 2022, we discussed these themes, along with the state of Kubernetes and its community, with James Laverack, staff solutions engineer at Jetstack and a member of the Kubernetes release team, and Christoph Blecker, site reliability engineer at Red Hat and a member of the Kubernetes steering committee.</itunes:summary>
      <itunes:subtitle>The pandemic significantly accelerated the adoption of Kubernetes and cloud native environments as a way to accommodate the surge in remote workers and other infrastructure constraints. Since then, organizations that already had cloud native infrastructure in place have kept up their investments, having found them well worth maintaining. Meanwhile, Kubernetes adoption continues on an upward curve. And yet challenges remain. In this context, we look at the status of cloud native adoption, and Kubernetes in particular, compared to a year ago.

In this episode of The New Stack Makers podcast, recorded on the floor of KubeCon + CloudNativeCon Europe 2022, we discussed these themes, along with the state of Kubernetes and its community, with James Laverack, staff solutions engineer at Jetstack and a member of the Kubernetes release team, and Christoph Blecker, site reliability engineer at Red Hat and a member of the Kubernetes steering committee.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1319</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">de91e2ae-f94c-499f-b73f-3c7d1acea8e3</guid>
      <title>Go Language Fuels Cloud Native Development</title>
      <description><![CDATA[<p>Go was created at Google in 2007 to improve programming productivity in an era of multi-core networked machines and large codebases. Since then, engineering teams across Google, as well as across the industry, have adopted Go to build products and services at massive scale, including at the Cloud Native Computing Foundation, where over 75% of projects are written in the language.</p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/stevefrancia/">Steve Francia</a>, head of product for the Go language at Google, an alumnus of MongoDB and Docker, and a Drupal board member, discusses the programming language, the new features in Go 1.18 and why Go continues on a path of accelerated adoption among developers. <a href="https://www.linkedin.com/in/darryltaft/">Darryl Taft</a>, news editor of The New Stack, hosted this podcast.</p><p>In the <a href="https://www.jetbrains.com/lp/devecosystem-2021/">State of Developer Ecosystem 2021</a>, Go ranked among the top five languages developers planned to adopt, and it remains one of the fastest-growing languages. According to Francia, it was created to see whether a new systems programming language could be built that compiles quickly, with security as a top focus. With developers coming and going at Google, the simplicity and scalability of the language let many contribute across several projects at any given time.</p><p>“What separates Go from most languages is the experience of the creators behind it, who all brought their collective experience to building it,” Francia said. Today, “Go is influencing a lot of the mainstream languages. Elements of it can be found in a tool that formats everyone’s source code to be identical and more readable. Since then, a lot of languages have adopted that same practice,” said Francia. “And then there’s Rust. <a href="https://thenewstack.io/rust-vs-go-why-theyre-better-together/">Go and Rust</a> are on parallel tracks, and we're learning from each other. There's also a new language called V, recently open sourced, which is the first major language inspired by Go,” Francia said.</p><p>The latest release of <a href="https://thenewstack.io/go-1-18-the-programming-languages-biggest-release-yet/">Go 1.18</a> was Google’s biggest yet. “It included four major features, each of which you could build a release around,” said Francia. In this release, “Generics is the biggest change to the Go language, which has been in the works for 10 years,” Francia added. “Because we knew that generics have the potential to make a language more complicated, we spent a long time going through different proposals,” he said. Fuzzing, workspaces and performance were three other features in this release of Go.</p><p>“From improving our documentation and learning – you can go to <a href="https://go.dev/learn/">go.dev/learn/</a> to get the latest resources – we’re really focused on the broad view of the developer experience,” Francia said. “And in the future, we're seeing not our team so much as the community taking Go in new ways,” he added.</p>
]]></description>
      <pubDate>Tue, 17 May 2022 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/go-language-fuels-cloud-native-development-xF13ukLI</link>
      <content:encoded><![CDATA[<p>Go was created at Google in 2007 to improve programming productivity in an era of multi-core networked machines and large codebases. Since then, engineering teams across Google, as well as across the industry, have adopted Go to build products and services at massive scale, including at the Cloud Native Computing Foundation, where over 75% of projects are written in the language.</p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/stevefrancia/">Steve Francia</a>, head of product for the Go language at Google, an alumnus of MongoDB and Docker, and a Drupal board member, discusses the programming language, the new features in Go 1.18 and why Go continues on a path of accelerated adoption among developers. <a href="https://www.linkedin.com/in/darryltaft/">Darryl Taft</a>, news editor of The New Stack, hosted this podcast.</p><p>In the <a href="https://www.jetbrains.com/lp/devecosystem-2021/">State of Developer Ecosystem 2021</a>, Go ranked among the top five languages developers planned to adopt, and it remains one of the fastest-growing languages. According to Francia, it was created to see whether a new systems programming language could be built that compiles quickly, with security as a top focus. With developers coming and going at Google, the simplicity and scalability of the language let many contribute across several projects at any given time.</p><p>“What separates Go from most languages is the experience of the creators behind it, who all brought their collective experience to building it,” Francia said. Today, “Go is influencing a lot of the mainstream languages. Elements of it can be found in a tool that formats everyone’s source code to be identical and more readable. Since then, a lot of languages have adopted that same practice,” said Francia. “And then there’s Rust. <a href="https://thenewstack.io/rust-vs-go-why-theyre-better-together/">Go and Rust</a> are on parallel tracks, and we're learning from each other. There's also a new language called V, recently open sourced, which is the first major language inspired by Go,” Francia said.</p><p>The latest release of <a href="https://thenewstack.io/go-1-18-the-programming-languages-biggest-release-yet/">Go 1.18</a> was Google’s biggest yet. “It included four major features, each of which you could build a release around,” said Francia. In this release, “Generics is the biggest change to the Go language, which has been in the works for 10 years,” Francia added. “Because we knew that generics have the potential to make a language more complicated, we spent a long time going through different proposals,” he said. Fuzzing, workspaces and performance were three other features in this release of Go.</p><p>“From improving our documentation and learning – you can go to <a href="https://go.dev/learn/">go.dev/learn/</a> to get the latest resources – we’re really focused on the broad view of the developer experience,” Francia said. “And in the future, we're seeing not our team so much as the community taking Go in new ways,” he added.</p>
]]></content:encoded>
      <enclosure length="29571910" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/929928e1-734a-4630-b198-fb9837f056e2/audio/3ec0338f-05c5-41e1-9da7-adad90309473/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Go Language Fuels Cloud Native Development</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:30:48</itunes:duration>
      <itunes:summary>Go was created at Google in 2007 to improve programming productivity in an era of multi-core networked machines and large codebases. Since then, engineering teams across Google, as well as across the industry, have adopted Go to build products and services at massive scale, including at the Cloud Native Computing Foundation, where over 75% of projects are written in the language.

In this episode of The New Stack Makers podcast, Steve Francia, head of product for the Go language at Google, an alumnus of MongoDB and Docker, and a Drupal board member, discusses the programming language, the new features in Go 1.18 and why Go continues on a path of accelerated adoption among developers. Darryl Taft, news editor of The New Stack, hosted this podcast.

In the State of Developer Ecosystem 2021, Go ranked among the top five languages developers planned to adopt, and it remains one of the fastest-growing languages. According to Francia, it was created to see whether a new systems programming language could be built that compiles quickly, with security as a top focus. With developers coming and going at Google, the simplicity and scalability of the language let many contribute across several projects at any given time.

“What separates Go from most languages is the experience of the creators behind it, who all brought their collective experience to building it,” Francia said. Today, “Go is influencing a lot of the mainstream languages. Elements of it can be found in a tool that formats everyone’s source code to be identical and more readable. Since then, a lot of languages have adopted that same practice,” said Francia. “And then there’s Rust. Go and Rust are on parallel tracks, and we&apos;re learning from each other. There&apos;s also a new language called V, recently open sourced, which is the first major language inspired by Go,” Francia said.

The latest release of Go 1.18 was Google’s biggest yet. “It included four major features, each of which you could build a release around,” said Francia. In this release, “Generics is the biggest change to the Go language, which has been in the works for 10 years,” Francia added. “Because we knew that generics have the potential to make a language more complicated, we spent a long time going through different proposals,” he said. Fuzzing, workspaces and performance were three other features in this release of Go.

“From improving our documentation and learning – you can go to go.dev/learn/ to get the latest resources – we’re really focused on the broad view of the developer experience,” Francia said. “And in the future, we&apos;re seeing not our team so much as the community taking Go in new ways,” he added.</itunes:summary>
      <itunes:subtitle>Go was created at Google in 2007 to improve programming productivity in an era of multi-core networked machines and large codebases. Since then, engineering teams across Google, as well as across the industry, have adopted Go to build products and services at massive scale, including at the Cloud Native Computing Foundation, where over 75% of projects are written in the language.

In this episode of The New Stack Makers podcast, Steve Francia, head of product for the Go language at Google, an alumnus of MongoDB and Docker, and a Drupal board member, discusses the programming language, the new features in Go 1.18 and why Go continues on a path of accelerated adoption among developers. Darryl Taft, news editor of The New Stack, hosted this podcast.

In the State of Developer Ecosystem 2021, Go ranked among the top five languages developers planned to adopt, and it remains one of the fastest-growing languages. According to Francia, it was created to see whether a new systems programming language could be built that compiles quickly, with security as a top focus. With developers coming and going at Google, the simplicity and scalability of the language let many contribute across several projects at any given time.

“What separates Go from most languages is the experience of the creators behind it, who all brought their collective experience to building it,” Francia said. Today, “Go is influencing a lot of the mainstream languages. Elements of it can be found in a tool that formats everyone’s source code to be identical and more readable. Since then, a lot of languages have adopted that same practice,” said Francia. “And then there’s Rust. Go and Rust are on parallel tracks, and we&apos;re learning from each other. There&apos;s also a new language called V, recently open sourced, which is the first major language inspired by Go,” Francia said.

The latest release of Go 1.18 was Google’s biggest yet. “It included four major features, each of which you could build a release around,” said Francia. In this release, “Generics is the biggest change to the Go language, which has been in the works for 10 years,” Francia added. “Because we knew that generics have the potential to make a language more complicated, we spent a long time going through different proposals,” he said. Fuzzing, workspaces and performance were three other features in this release of Go.

“From improving our documentation and learning – you can go to go.dev/learn/ to get the latest resources – we’re really focused on the broad view of the developer experience,” Francia said. “And in the future, we&apos;re seeing not our team so much as the community taking Go in new ways,” he added.</itunes:subtitle>
      <itunes:keywords>newstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1317</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">08599ffe-b07d-4c3f-ad62-a6adbb85fbb3</guid>
      <title>Svelte and the Future of Front-end Development</title>
      <description><![CDATA[<p>First released in 2016, the <a href="https://svelte.dev/">Svelte</a> Web framework has steadily gained popularity as an alternative approach to building Web applications, one that prides itself on being more intuitive (and less verbose) than the current framework du jour, Facebook's <a href="https://reactjs.org/">React</a>. You could say that it reaches back to the era before the web app — when desktop and server applications were compiled — to make the web app easier to develop and more enjoyable to use.</p><p> </p><p>In this latest episode of <a href="https://thenewstack.io/podcasts">The New Stack Makers</a> podcast, we interview the creator of Svelte himself, <a href="https://github.com/Rich-Harris">Rich Harris</a>. Harris started out not as a web developer, but as a journalist who created the framework to do immersive web journalism. So we were interested in that origin story.</p><p> </p><p>In addition to delving into history, we also discussed the current landscape of Web frameworks, the Web's Document Object Model, the way React.js updates variables, the value of TypeScript, and the importance of SvelteKit. We also chatted about why <a href="https://vercel.com/">Vercel</a>, where Harris now works maintaining Svelte, wants to make a home for Svelte.</p><p> </p><p>TNS Editor <a href="https://thenewstack.io/author/joab/">Joab Jackson</a> hosted this conversation.</p><p> </p><p>Below are a few excerpts from our conversation, edited for brevity and clarity.</p><p> </p><p><strong>So set the stage for us. What was the point that inspired you to create Svelte?</strong></p><p> </p><p>To fully tell the story, we need to go way back into the mists of time, back to when I started programming. My background is in journalism. And about a decade ago, I was working in a newsroom at a financial publication in London. 
I was very inspired by some of the interactive journalism that was being produced at places like the New York Times, but also the BBC and the Guardian and lots of other news organizations, where they were using Flash, and increasingly JavaScript, to tell these data-rich interactive stories that couldn't really be done any other way.</p><p> </p><p>And to me, this felt like the future of journalism, something that was using the full power of the web platform as a storytelling medium in a way that just hadn't been done before. And I was very excited about all that, and I wanted a piece of it.</p><p> </p><p>So I started learning JavaScript with the help of some friends, and discovered that it's really difficult. Particularly if you're doing things that have a lot of interactivity. If you're managing lots of state that can be updated in lots of different ways, you end up writing what is often referred to as spaghetti code.</p><p> </p><p>And so I started building a toolkit, really, for myself. And this was a project called Ractive, short for interactive, something out of a Neal Stephenson book, in fact, and it actually got a little bit of traction. It was never huge, but you know, it was my first foray into open source, and it got used in a few different places.</p><p> </p><p>And I maintained that for some years, and eventually, I left that company and joined the Guardian in the U.K. We used Ractive to build interactive pieces of journalism there. I transferred to the U.S. to continue at the Guardian in New York, and we used Ractive quite heavily there as well. After a while, though, it became apparent that, you know, as with many frameworks of that era, it had certain flaws.</p><p> </p><p>A lot of these frameworks were built for an era in which desktop computing was prevalent. And we were now firmly in this age of mobile-first web development. 
And these frameworks weren't really up to the task, primarily because they were just too big, too bulky and too slow.</p><p> </p><p>And so in 2016, I started working on what was essentially a successor to that project. And we chose the name Svelte because it has all the right connotations. It's elegant, it's sophisticated. And the idea was to basically provide the same kind of development experience that people were used to, but change the way that translated into the experience end users have when they run it in the browser.</p><p> </p><p>It did this by adopting techniques from the compiler world. The code that you write doesn't need to be the code that actually runs in the browser. Svelte was really one of the first frameworks to lean into the compiler paradigm. And as a result, we were able to do things with much less JavaScript, and in a way that was much more performant, which is very important if you're producing these kinds of interactive stories that typically involve a lot of data and a lot of animation.</p><p> </p><p><strong>Can you talk a bit more about the compiler aspect? How does that work with a web application or web page?</strong></p><p> </p><p>So, you know, browsers run JavaScript. And nowadays, they can run <a href="https://thenewstack.io/what-is-webassembly/">WASM</a>, too. But JavaScript is the language that you need to write stuff in if you want to have interactivity on a web page. That doesn't mean, though, that you need to write JavaScript yourself: if you can design a language that allows you to describe user interfaces in a more natural way, then the compiler can turn that intention into the code that actually runs. 
And so you get all the benefits of declarative programming, but without the drawbacks that historically have accompanied that.</p><p> </p><p>There is this trade-off that historically existed: the developer wants to write this nice, state-driven declarative code, and the user doesn't want to have to wait for this bulky JavaScript framework to load over the wire, and then to do all of this extra work to translate your declarative intentions into what actually happens within the browser. The compiler approach basically allows you to square that circle. It means that you get the best of both worlds: you're maximizing the developer experience without compromising on the user experience.</p><p> </p><p><strong>Stupid question: As a developer, if I'm writing JavaScript code, at least initially, how do I compile it?</strong></p><p> </p><p>So pretty much every web app has a build step. It is possible to write web applications that do not involve a build step: you can just write JavaScript, and you can write HTML, and you can import the JavaScript into the HTML and you've got a web app. But that approach really doesn't scale, much as some people will try and convince you otherwise.</p><p> </p><p>At some point, you're going to have to have a build step so that you can use libraries that you've installed from NPM, and so that you can use things like TypeScript to optimize your JavaScript. And Svelte fits into your existing build step. So if you have components that are written in Svelte files (it's literally a .svelte extension), then during the build step, those components will get transformed into JavaScript files.</p><p> </p><p><strong>Svelte seemed to take off right around the time we heard complaints about Angular.js. Did the frustrations around Angular help the adoption of Svelte?</strong></p><p> </p><p>Svelte hasn't been a replacement for Angular because Angular is a full-featured framework. 
It wants to own the entirety of your web application, whereas Svelte is really just a component framework.</p><p> </p><p>So on the spectrum, you have things that are very focused on individual components, like React and Vue.js and Svelte. And then at the other end of the spectrum, you have frameworks like Angular and Ember. And historically, you had to do the work of taking your component framework and figuring out how to build the rest of the application, unless you were using one of these full-featured frameworks.</p><p> </p><p>Nowadays, that's less true because we have things like Next.js and <a href="https://github.com/remix-vue">remix-vue</a>. And the Svelte team is currently working on <a href="https://kit.svelte.dev/">SvelteKit</a>, which is the answer to that question of how do I actually build an app with this?</p><p> </p><p>I would attribute Svelte's growth in popularity to different forces. Essentially, what happened is it trundled along with a small but dedicated user base for a few years. And then in 2019, we released version three of the framework, which really rethought the authoring experience, the syntax that you use to write components, and the APIs that are available.</p><p> </p><p>Around that time, I gave a couple of conference talks around it. And that's when it really started to pick up steam. Now, of course, we're growing very rapidly, and we're consistently at the top of developer-happiness surveys. And so now a lot of people are aware of it, but we're still a very tiny framework compared to the big dogs like React and Vue.</p><p> </p><p><strong>You have said that part of the Svelte mission has been to make web development fun. What are some of Svelte's attributes that make it less aggravating for the developer?</strong></p><p> </p><p>The first thing is that you can write a lot less code. If you're using Svelte, then you can express the same concepts with typically about 40% less code. 
There's just a lot less ceremony, a lot less boilerplate.</p><p> </p><p>We're not constrained by JavaScript. For example, to use state inside a component with React, you have to use hooks, and there's this slightly idiosyncratic way of declaring a local piece of state inside the component. With Svelte, you just declare a variable. And if you assign a new value to that variable, or if it's an object and you mutate that object, then the compiler interprets that as a sign that it needs to update the component.</p>
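<p>The assignment-driven reactivity Harris describes can be sketched in plain JavaScript. This is a hypothetical illustration, not Svelte's actual generated code: imagine the compiler rewriting each assignment so that it also calls an injected invalidate() helper, letting a scheduler know which components need re-rendering.</p>

```javascript
// Hypothetical sketch of compiler-injected reactivity (not Svelte's real output).
// A developer might write, in a .svelte file:
//     let count = 0;
//     function increment() { count += 1; }
// A compiler could rewrite the assignment so it also flags the variable
// as changed, and a scheduler re-renders only when something is dirty.

const dirty = new Set(); // names of state variables that have changed

let count = 0;

// Injected by the "compiler" after every assignment to component state.
function invalidate(name) {
  dirty.add(name);
}

function increment() {
  count += 1;
  invalidate('count'); // the rewrite: a plain assignment now signals a change
}

// A minimal update pass: re-render only if some state is dirty.
function flush(render) {
  if (dirty.size > 0) {
    render();
    dirty.clear();
  }
}

increment();
increment();
let rendered = null;
flush(() => { rendered = `count is ${count}`; }); // renders once, with the latest value
```

<p>The point of the sketch is that the change signal comes from a compile-time rewrite of ordinary assignments, rather than from a runtime API the developer has to call, which is why a Svelte component can treat a local variable as reactive state.</p>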
]]></description>
      <pubDate>Tue, 10 May 2022 21:22:31 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/svelte-and-the-future-of-front-end-development-3gKzN2js</link>
      <content:encoded><![CDATA[<p>First released in 2016, the <a href="https://svelte.dev/">Svelte</a> Web framework has steadily gained popularity as an alternative approach to building Web applications, one that prides itself on being more intuitive (and less verbose) than the current framework du jour, Facebook's <a href="https://reactjs.org/">React</a>. You can say that it reaches back to the era before the web app — when desktop and server applications were compiled — to make the web app easier to develop and more enjoyable to use.</p><p> </p><p>In this latest episode of <a href="https://thenewstack.io/podcasts">The New Stack Makers</a> podcast, we interview the creator of Svelte himself, <a href="https://github.com/Rich-Harris">Rich Harris</a>. Harris started out not as a web developer, but as a journalist who created the framework to do immersive web journalism. So we were interested in that.</p><p> </p><p>In addition to delving into history, we also discussed the current landscape of Web frameworks, the Web's Document Object Model, the way React.js updates variables, the value of TypeScript, and the importance of SvelteKit. We also chatted about why <a href="https://vercel.com/">Vercel</a>, where Harris now works maintaining Svelte, wants to make a home for Svelte.</p><p> </p><p>TNS Editor <a href="https://thenewstack.io/author/joab/">Joab Jackson</a> hosted this conversation.</p><p> </p><p>Below are a few excerpts from our conversation, edited for brevity and clarity.</p><p> </p><p><strong>So set the stage for us. What was the point that inspired you to create Svelte?</strong></p><p> </p><p>To fully tell the story, we need to go way back into the mists of time, back to when I started programming. My background is in journalism. And about a decade ago, I was working in a newsroom at a financial publication in London. 
I was very inspired by some of the interactive journalism that was being produced at places like the New York Times, but also the BBC and the Guardian and lots of other news organizations, where they were using Flash and, increasingly, JavaScript to tell these data-rich interactive stories that couldn't really be done any other way.</p><p> </p><p>And to me, this felt like the future of journalism. It was something that was using the full power of the web platform as a storytelling medium in a way that just hadn't been done before. And I was very excited about all that, and I wanted a piece of it.</p><p> </p><p>So I started learning JavaScript with the help of some friends, and discovered that it's really difficult. Particularly if you're doing things that have a lot of interactivity. If you're managing lots of state that can be updated in lots of different ways, you end up writing what is often referred to as spaghetti code.</p><p> </p><p>And so I started building a toolkit, really, for myself. And this was a project called Ractive, short for interactive, something out of a Neal Stephenson book, in fact. And it actually got a little bit of traction. It was never huge, but you know, it was my first foray into open source, and it got used in a few different places.</p><p> </p><p>And I maintained that for some years, and eventually, I left that company and joined the Guardian in the U.K. And we used Ractive to build interactive pieces of journalism there. I transferred to the U.S. to continue at the Guardian in New York, and we used Ractive quite heavily there as well. After a while, though, it became apparent that, you know, as with many frameworks of that era, it had certain flaws.</p><p> </p><p>A lot of these frameworks were built for an era in which desktop computing was prevalent. And we were now firmly in this age of mobile-first web development. 
And these frameworks weren't really up to the task, primarily because they were just too big, too bulky and too slow.</p><p> </p><p>And so in 2016, I started working on what was essentially a successor to that project. And we chose the name Svelte because it has all the right connotations. It's elegant, it's sophisticated. And the idea was to basically provide the same kind of development experience that people were used to, but change the way that translated into the experience end users have when they run it in the browser.</p><p> </p><p>It did this by adopting techniques from the compiler world. The code that you write doesn't need to be the code that actually runs in the browser. Svelte was really one of the first frameworks to lean into the compiler paradigm. And as a result, we were able to do things with much less JavaScript, and in a way that was much more performant, which is very important if you're producing these kinds of interactive stories that typically involve a lot of data and a lot of animation.</p><p> </p><p><strong>Can you talk a bit more about the compiler aspect? How does that work with a web application or web page?</strong></p><p> </p><p>So, you know, browsers run JavaScript. And nowadays, they can run <a href="https://thenewstack.io/what-is-webassembly/">WASM</a>, too. But JavaScript is the language that you need to write stuff in if you want to have interactivity on a web page. But that doesn't mean that you need to write JavaScript. If you can design a language that allows you to describe user interfaces in a more natural way, then the compiler can turn that intention into the code that actually runs. 
And so you get all the benefits of declarative programming but without the drawbacks that historically have accompanied that.</p><p> </p><p>There is this trade-off that historically existed: the developer wants to write this nice, state-driven declarative code, and the user doesn't want to have to wait for this bulky JavaScript framework to load over the wire, and then to do all of this extra work to translate your declarative intentions into what actually happens within the browser. And the compiler approach basically allows you to square that circle. It means that you get the best of both worlds: you're maximizing the developer experience without compromising on the user experience.</p><p> </p><p><strong>Stupid question: As a developer, if I'm writing JavaScript code, at least initially, how do I compile it?</strong></p><p> </p><p>So pretty much every web app has a build step. It is possible to write web applications that do not involve a build step: you can just write JavaScript, and you can write HTML, and you can import the JavaScript into the HTML and you've got a web app. But that approach really doesn't scale, much as some people will try and convince you otherwise.</p><p> </p><p>At some point, you're going to have to have a build step so that you can use libraries that you've installed from NPM, so that you can use things like TypeScript and optimize your JavaScript. And so Svelte fits into your existing build step. If you have components that are written in Svelte files (it's literally a .svelte extension), then during the build step those components will get transformed into JavaScript files.</p><p> </p><p><strong>Svelte seemed to take off right around the time we heard complaints about Angular.js. Did the frustrations around Angular help the adoption of Svelte?</strong></p><p> </p><p>Svelte hasn't been a replacement for Angular because Angular is a full-featured framework. 
It wants to own the entirety of your web application, whereas Svelte is really just a component framework.</p><p> </p><p>So on the spectrum, you have things that are very focused on individual components, like React and Vue.js and Svelte. And then at the other end of the spectrum, you have frameworks like Angular and Ember. And historically, you had to do the work of taking your component framework and figuring out how to build the rest of the application unless you were using one of these full-featured frameworks.</p><p> </p><p>Nowadays, that's less true because we have things like Next.js and <a href="https://remix.run/">Remix</a>. And we on the Svelte team are currently working on <a href="https://kit.svelte.dev/">SvelteKit</a>, which is the answer to that question of how do I actually build an app with this?</p><p> </p><p>I would attribute the growth in Svelte's popularity to different forces. Essentially, what happened is it trundled along with a small but dedicated user base for a few years. And then in 2019, we released version three of the framework, which really rethought the authoring experience: the syntax that you use to write components and the APIs that are available.</p><p> </p><p>Around that time, I gave a couple of conference talks around it. And that's when it really started to pick up steam. Now, of course, we're growing very rapidly. And we're consistently at the top of developer-happiness surveys. And so now a lot of people are aware of it, but we're still a very tiny framework compared to the big dogs like React and Vue.</p><p> </p><p><strong>You have said that part of the Svelte mission has been to make web development fun. What are some of Svelte's attributes that make it less aggravating for the developer?</strong></p><p> </p><p>The first thing is that you can write a lot less code. If you're using Svelte, then you can express the same concepts with typically about 40% less code. 
There's just a lot less ceremony, a lot less boilerplate.</p><p> </p><p>We're not constrained by JavaScript. For example, with React, to use state inside a component you have to use hooks, and there's this slightly idiosyncratic way of declaring a local piece of state inside the component. With Svelte, you just declare a variable. And if you assign a new value to that variable, or if it's an object and you mutate that object, then the compiler interprets that as a sign that it needs to update the component.</p><p> </p><p> </p>
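<p>To make that contrast concrete, here is a rough editorial sketch (not from the interview) of the idea Harris describes: in Svelte you write a plain assignment, and the compiler injects the invalidation call that tells the runtime to update the DOM. The component syntax in the comment and the function names below are illustrative, not Svelte's actual compiled output.</p>

```javascript
// In a Svelte component you would write ordinary JavaScript:
//
//   <script>
//     let count = 0;
//     const increment = () => count += 1;
//   </script>
//   <button on:click={increment}>{count}</button>
//
// Conceptually, the compiler turns each assignment to `count` into
// "assign, then invalidate", roughly like this hypothetical output:

function createCounter(update) {
  let count = 0;
  // `invalidate` stands in for Svelte's scheduling of a DOM patch
  const invalidate = () => update(count);
  return {
    increment() {
      count += 1;   // the assignment the developer wrote...
      invalidate(); // ...plus the call the compiler inserts
    },
    get count() {
      return count;
    },
  };
}

// Simulate two clicks and record what the "DOM" would show:
const renders = [];
const counter = createCounter((value) => renders.push(value));
counter.increment();
counter.increment();
// renders is now [1, 2]: each plain assignment triggered an update
```

<p>The point is the developer-facing half: no hook, no setter function, just a variable and an assignment. The bookkeeping is generated at build time rather than performed by a runtime library.</p>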
]]></content:encoded>
      <enclosure length="27063737" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/dd66e9f2-0631-4117-9a25-bf5a0a956100/audio/16286674-e122-4f14-9d0e-71b40870bc47/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Svelte and the Future of Front-end Development</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:28:11</itunes:duration>
      <itunes:summary>First released in 2016, the Svelte Web framework has steadily gained popularity as an alternative approach to building Web applications, one that prides itself on being more intuitive (and less verbose) than the current framework du jour, Facebook&apos;s React. You can say that it reaches back to the era before the web app — when desktop and server applications were compiled — to make the web app easier to develop and more enjoyable to use.

In this latest episode of The New Stack Makers podcast, we interview the creator of Svelte himself, Rich Harris. Harris started out not as a web developer, but as a journalist who created the framework to do immersive web journalism. So we were interested in that.

In addition to delving into history, we also discussed the current landscape of Web frameworks, the Web&apos;s Document Object Model, the way React.js updates variables, the value of TypeScript, and the importance of SvelteKit. We also chatted about why Vercel, where Harris now works maintaining Svelte, wants to make a home for Svelte.

TNS Editor Joab Jackson hosted this conversation.

Below are a few excerpts from our conversation, edited for brevity and clarity. 

So set the stage for us. What was the point that inspired you to create Svelte?

To fully tell the story, we need to go way back into the mists of time, back to when I started programming. My background is in journalism. And about a decade ago, I was working in a newsroom at a financial publication in London. I was very inspired by some of the interactive journalism that was being produced at places like the New York Times, but also the BBC and the Guardian and lots of other news organizations, where they were using Flash and, increasingly, JavaScript to tell these data-rich interactive stories that couldn&apos;t really be done any other way.

And to me, this felt like the future of journalism, it&apos;s something that was using the full power of the web platform as a storytelling medium in a way that just hadn&apos;t been done before. And I was very excited about all that, and I wanted a piece of it.

So I started learning JavaScript with the help of some friends, and discovered that it&apos;s really difficult. Particularly if you&apos;re doing things that have a lot of interactivity. If you&apos;re managing lots of state that can be updated in lots of different ways, you end up writing what is often referred to as spaghetti code.

And so I started building a toolkit, really, for myself. And this was a project called Ractive, short for interactive, something out of a Neal Stephenson book, in fact. And it actually got a little bit of traction. It was never huge, but you know, it was my first foray into open source, and it got used in a few different places.

And I maintained that for some years, and eventually, I left that company and joined the Guardian in the U.K. And we used Ractive to build interactive pieces of journalism there. I transferred to the U.S. to continue at the Guardian in New York, and we used Ractive quite heavily there as well. After a while, though, it became apparent that, you know, as with many frameworks of that era, it had certain flaws.

A lot of these frameworks were built for an era in which desktop computing was prevalent. And we were now firmly in this age of mobile-first web development. And these frameworks weren&apos;t really up to the task, primarily because they were just too big, too bulky and too slow.

And so in 2016, I started working on what was essentially a successor to that project. And we chose the name Svelte because it has all the right connotations. It&apos;s elegant, it&apos;s sophisticated. And the idea was to basically provide the same kind of development experience that people were used to, but change the way that translated into the experience end users have when they run it in the browser.

It did this by adopting techniques from the compiler world. The code that you write doesn&apos;t need to be the code that actually runs in the browser. Svelte was really one of the first frameworks to lean into the compiler paradigm. And as a result, we were able to do things with much less JavaScript, and in a way that was much more performant, which is very important if you&apos;re producing these kinds of interactive stories that typically involve a lot of data and a lot of animation.

Can you talk a bit more about the compiler aspect? How does that work with a web application or web page?

So, you know, browsers run JavaScript. And nowadays, they can run WASM, too. But JavaScript is the language that you need to write stuff in if you want to have interactivity on a web page. But that doesn&apos;t mean that you need to write JavaScript. If you can design a language that allows you to describe user interfaces in a more natural way, then the compiler can turn that intention into the code that actually runs. And so you get all the benefits of declarative programming but without the drawbacks that historically have accompanied that.

There is this trade-off that historically existed: the developer wants to write this nice, state-driven declarative code, and the user doesn&apos;t want to have to wait for this bulky JavaScript framework to load over the wire, and then to do all of this extra work to translate your declarative intentions into what actually happens within the browser. And the compiler approach basically allows you to square that circle. It means that you get the best of both worlds: you&apos;re maximizing the developer experience without compromising on the user experience.

Stupid question: As a developer, if I&apos;m writing JavaScript code, at least initially, how do I compile it?

So pretty much every web app has a build step. It is possible to write web applications that do not involve a build step: you can just write JavaScript, and you can write HTML, and you can import the JavaScript into the HTML and you&apos;ve got a web app. But that approach really doesn&apos;t scale, much as some people will try and convince you otherwise.

At some point, you&apos;re going to have to have a build step so that you can use libraries that you&apos;ve installed from NPM, so that you can use things like TypeScript and optimize your JavaScript. And so Svelte fits into your existing build step. If you have components that are written in Svelte files (it&apos;s literally a .svelte extension), then during the build step those components will get transformed into JavaScript files.

Svelte seemed to take off right around the time we heard complaints about Angular.js. Did the frustrations around Angular help the adoption of Svelte?

Svelte hasn&apos;t been a replacement for Angular because Angular is a full featured framework. It wants to own the entirety of your web application, whereas Svelte is really just a component framework.

So on the spectrum, you have things that are very focused on individual components like React and Vue.js and Svelte. And then at the other end of the spectrum, you have frameworks like Angular, and Ember. And historically, you had to do the work of taking your component framework and figuring out how to build the rest of the application unless you were using one of these full-featured frameworks.

Nowadays, that&apos;s less true because we have things like Next.js and Remix. And we on the Svelte team are currently working on SvelteKit, which is the answer to that question of how do I actually build an app with this?

I would attribute the growth in Svelte&apos;s popularity to different forces. Essentially, what happened is it trundled along with a small but dedicated user base for a few years. And then in 2019, we released version three of the framework, which really rethought the authoring experience: the syntax that you use to write components and the APIs that are available.

Around that time, I gave a couple of conference talks around it. And that&apos;s when it really started to pick up steam. Now, of course, we&apos;re growing very rapidly. And we&apos;re consistently at the top of developer-happiness surveys. And so now a lot of people are aware of it, but we&apos;re still a very tiny framework compared to the big dogs like React and Vue.

You have said that part of the Svelte mission has been to make web development fun. What are some of Svelte&apos;s attributes that make it less aggravating for the developer?

The first thing is that you can write a lot less code. If you&apos;re using Svelte, then you can express the same concepts with typically about 40% less code. There&apos;s just a lot less ceremony, a lot less boilerplate.

We&apos;re not constrained by JavaScript. For example, with React, to use state inside a component you have to use hooks, and there&apos;s this slightly idiosyncratic way of declaring a local piece of state inside the component. With Svelte, you just declare a variable. And if you assign a new value to that variable, or if it&apos;s an object and you mutate that object, then the compiler interprets that as a sign that it needs to update the component.


</itunes:summary>
      <itunes:subtitle>First released in 2016, the Svelte Web framework has steadily gained popularity as an alternative approach to building Web applications, one that prides itself on being more intuitive (and less verbose) than the current framework du jour, Facebook&apos;s React.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1316</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ad351444-56b4-4233-a6c7-0e63251964e3</guid>
      <title>Is Java Ready for Cloud Native Computing?</title>
      <description><![CDATA[<p>First released in 1995, the Java programming language has been a leading developer platform, one that has become a workhorse for hundreds of enterprise applications. With each new technology evolution, Java has successfully adapted to change. But even while a recent Java ecosystem <a href="https://newrelic.com/resources/report/2022-state-of-java-ecosystem">study</a> found that more than 70% of Java applications in production environments are running inside a container, there continue to be hurdles the language must overcome to adapt to the cloud-native world.</p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/siritter/?originalSubdomain=uk">Simon Ritter</a>, deputy CTO of Azul Systems, and <a href="https://daliashea.com/about/">Dalia Abo Sheasha</a>, Java developer advocate at JetBrains, discuss some of the challenges the language is working to overcome, and share some insight into the new features that developers are requesting. <a href="https://thenewstack.io/author/darryl-taft/">Darryl Taft</a>, news editor of The New Stack, hosted this podcast.</p><p>The complexity of modern applications requires developers to master a growing array of skills, technologies, and concepts to develop in the cloud. And “what I've seen is that there is a gap in skills, and what it would take to get existing Java applications into the cloud,” said Abo Sheasha.</p><p>“What developers really want is to focus on the idea of developing the Java code,” said Ritter. “Having the ability to plug in to different cloud providers, but also the ability to integrate with things like your CI/CD tooling so that you've got continuous integration, continuous deployment built in,” he added.</p><p> </p><p>Getting Java ready for the cloud is a “distributed responsibility across the people – from cloud providers to tooling providers,” said Ritter. 
“Everyone recognizes that the more folks we have on it, the more minds we have on it, the better outcome we're going to have for the developer’s language,” Abo Sheasha said.</p><p> </p><p>Making developers more efficient and productive is coming into the fold with the introduction of JEPs, or JDK Enhancement Proposals, a lightweight approach to adding new features in the development of the Java platform itself. “But there are some bigger projects, like Project Amber, which is all about small changes to the language syntax of Java with the idea of making it more productive by taking some of the boilerplate code out,” Ritter said.</p><p> </p><p>The journey to the next chapter of Java is multi-dimensional. While “most developers are focused on getting the job done, picking up skills for new things is a challenge because it takes time. Many still have the issue of using whichever Java version their company is stuck on,” said Ritter. “It's not because the developers don't want to do it; it’s that they need to convince management that it's worth investing in,” added Abo Sheasha.</p>
]]></description>
      <pubDate>Tue, 3 May 2022 22:00:21 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/is-java-ready-for-cloud-native-computing-VRaVxsn1</link>
      <content:encoded><![CDATA[<p>First released in 1995, the Java programming language has been a leading developer platform and a workhorse for hundreds of enterprise applications. With each new technology evolution, Java has successfully adapted to change. But even though a recent Java ecosystem <a href="https://newrelic.com/resources/report/2022-state-of-java-ecosystem">study</a> found that more than 70% of Java applications in production environments are running inside a container, there continue to be hurdles the language must overcome to adapt to the cloud-native world.</p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/siritter/?originalSubdomain=uk">Simon Ritter</a>, deputy CTO of Azul Systems, and <a href="https://daliashea.com/about/">Dalia Abo Sheasha</a>, Java developer advocate at JetBrains, discuss some of the challenges the language is working to overcome and share some insight into the new features that developers are requesting. <a href="https://thenewstack.io/author/darryl-taft/">Darryl Taft</a>, news editor of The New Stack, hosted this podcast.</p><p>The complexity of modern applications requires developers to master a growing array of skills, technologies, and concepts to develop in the cloud. And “what I've seen is that there is a gap in skills, and what it would take to get existing Java applications into the cloud,” said Abo Sheasha.</p><p>“What developers really want is to focus on the idea of developing the Java code,” said Ritter. “Having the ability to plug in to different cloud providers, but also the ability to integrate with things like your CI/CD tooling so that you've got continuous integration, continuous deployment built in,” he added.</p><p>Getting Java ready for the cloud is a “distributed responsibility across the people – from cloud providers to tooling providers,” said Ritter. 
“Everyone recognizes that the more folks we have on it, the more minds we have on it, the better outcome we're going to have for the developer’s language,” Abo Sheasha said.</p><p>Making developers more efficient and productive is coming into the fold with the introduction of JDK Enhancement Proposals (JEPs), a lightweight approach to adding new features to the Java platform itself. “But there's some bigger projects like Project Amber, which is all about small changes to the language syntax of Java with the idea of making it more productive by taking some of the boilerplate code out,” Ritter said.</p><p>The journey to the next chapter of Java is multi-dimensional. While “most developers are focused on getting the job done, picking up skills for new things is a challenge because it takes time. Many still have the issue of using whichever Java version their company is stuck on,” said Ritter. “It's not because the developers don't want to do it; it’s that they need to convince management that it's worth investing in,” added Abo Sheasha.</p>
]]></content:encoded>
      <enclosure length="34177820" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/1eff397d-9a93-43ca-9613-47d6f238fb3d/audio/5bad9dcb-a4e5-4075-8087-6269bf35d250/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Is Java Ready for Cloud Native Computing?</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:35:36</itunes:duration>
      <itunes:summary>First released in 1995, the Java programming language has been a leading developer platform and a workhorse for hundreds of enterprise applications. With each new technology evolution, Java has successfully adapted to change. But even though a recent Java ecosystem study found that more than 70% of Java applications in production environments are running inside a container, there continue to be hurdles the language must overcome to adapt to the cloud-native world.

In this episode of The New Stack Makers podcast, Simon Ritter, deputy CTO of Azul Systems, and Dalia Abo Sheasha, Java developer advocate at JetBrains, discuss some of the challenges the language is working to overcome and share some insight into the new features that developers are requesting. Darryl Taft, news editor of The New Stack, hosted this podcast.

The complexity of modern applications requires developers to master a growing array of skills, technologies, and concepts to develop in the cloud. And “what I&apos;ve seen is that there is a gap in skills, and what it would take to get existing Java applications into the cloud,” said Abo Sheasha.

“What developers really want is to focus on the idea of developing the Java code,” said Ritter. “Having the ability to plug in to different cloud providers, but also the ability to integrate with things like your CI/CD tooling so that you&apos;ve got continuous integration, continuous deployment built in,” he added.

Getting Java ready for the cloud is a “distributed responsibility across the people – from cloud providers to tooling providers,” said Ritter. “Everyone recognizes that the more folks we have on it, the more minds we have on it, the better outcome we&apos;re going to have for the developer’s language,” Abo Sheasha said.

Making developers more efficient and productive is coming into the fold with the introduction of JDK Enhancement Proposals (JEPs), a lightweight approach to adding new features to the Java platform itself. “But there&apos;s some bigger projects like Project Amber, which is all about small changes to the language syntax of Java with the idea of making it more productive by taking some of the boilerplate code out,” Ritter said.

The journey to the next chapter of Java is multi-dimensional. While “most developers are focused on getting the job done, picking up skills for new things is a challenge because it takes time. Many still have the issue of using whichever Java version their company is stuck on,” said Ritter. “It&apos;s not because the developers don&apos;t want to do it; it’s that they need to convince management that it&apos;s worth investing in,” added Abo Sheasha.</itunes:summary>
      <itunes:subtitle>First released in 1995, the Java programming language has been a leading developer platform and a workhorse for hundreds of enterprise applications. With each new technology evolution, Java has successfully adapted to change. But even though a recent Java ecosystem study found that more than 70% of Java applications in production environments are running inside a container, there continue to be hurdles the language must overcome to adapt to the cloud-native world.

In this episode of The New Stack Makers podcast, Simon Ritter, deputy CTO of Azul Systems, and Dalia Abo Sheasha, Java developer advocate at JetBrains, discuss some of the challenges the language is working to overcome and share some insight into the new features that developers are requesting. Darryl Taft, news editor of The New Stack, hosted this podcast.

The complexity of modern applications requires developers to master a growing array of skills, technologies, and concepts to develop in the cloud. And “what I&apos;ve seen is that there is a gap in skills, and what it would take to get existing Java applications into the cloud,” said Abo Sheasha.

“What developers really want is to focus on the idea of developing the Java code,” said Ritter. “Having the ability to plug in to different cloud providers, but also the ability to integrate with things like your CI/CD tooling so that you&apos;ve got continuous integration, continuous deployment built in,” he added.

Getting Java ready for the cloud is a “distributed responsibility across the people – from cloud providers to tooling providers,” said Ritter. “Everyone recognizes that the more folks we have on it, the more minds we have on it, the better outcome we&apos;re going to have for the developer’s language,” Abo Sheasha said.

Making developers more efficient and productive is coming into the fold with the introduction of JDK Enhancement Proposals (JEPs), a lightweight approach to adding new features to the Java platform itself. “But there&apos;s some bigger projects like Project Amber, which is all about small changes to the language syntax of Java with the idea of making it more productive by taking some of the boilerplate code out,” Ritter said.

The journey to the next chapter of Java is multi-dimensional. While “most developers are focused on getting the job done, picking up skills for new things is a challenge because it takes time. Many still have the issue of using whichever Java version their company is stuck on,” said Ritter. “It&apos;s not because the developers don&apos;t want to do it; it’s that they need to convince management that it&apos;s worth investing in,” added Abo Sheasha.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1315</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">50ee4180-8f65-4677-a2f6-0606343fab80</guid>
      <title>KubeCon + CloudNativeCon 2022 Europe, in Valencia: Bring a Mask</title>
      <description><![CDATA[<p>Last week, Spain dropped its mandate requiring residents and visitors to wear masks to ward off further Coronavirus infections. So, for this year's KubeCon + CloudNativeCon Europe conference, to be held May 16-20 in <a href="https://theculturetrip.com/europe/spain/articles/15-reasons-you-should-visit-valencia-at-least-once-in-your-lifetime/">Valencia, Spain</a>, the Cloud Native Computing Foundation dropped its own original mandate that attendees wear masks, a rule that had been in place for its other recent conferences.</p><p>This turned out to be the wrong decision, CNCF <a href="https://www.cncf.io/blog/2022/04/25/clarifying-mask-mandate-update/">admitted a week later</a>. A lot of people who had already bought tickets <a href="https://twitter.com/jpetazzo/status/1518251896611516418">were upset</a> at this relaxing of the rules for the conference, which could put them in greater danger of contracting the disease.</p><p>So the CNCF put the mandate back in place and offered refunds to those who felt Spain's decision would put them in harm's way. CNCF will even send you a week's worth of N95 masks if you request them.</p><p>So, long story short: bring a mask to KubeCon. And, as always, attendees are still required to show proof of vaccination, and temperature checks will be made as well.</p><p>Tricky business running a conference in this time, no?</p><p>In this latest episode of <a href="https://thenewstack.io/podcasts">The New Stack Makers</a> podcast, we take a look at what to expect from this year's KubeCon EU 2022. Our guests for this podcast are <a href="https://www.linkedin.com/in/pritianka/">Priyanka Sharma</a>, the executive director of CNCF, and <a href="https://www.linkedin.com/in/ricardo-rocha-739aa718/?originalSubdomain=ch">Ricardo Rocha</a>, a KubeCon co-chair and computer engineer at <a href="https://home.cern/">CERN</a>. 
TNS Editor-in-Chief <a href="https://twitter.com/Joab_Jackson">Joab Jackson</a> hosted this podcast.</p><p>We recorded this podcast prior to the discussion around masks, and at the time, Sharma said that the CNCF based the mask ruling on Spain's own country-wide mandates. "So we are being very cautious with the health requirements for the event," she said.</p><p>The conference team is also keeping an eye on Russia's aggressive moves in Ukraine, though it is unlikely that the chaos will reach all the way to Spain. Still, "this is why it's essential to always have the hybrid option ... [to] have the virtual elements sorted," Sharma said.</p><p>As the CNCF flagship conference, KubeCon brings together managers and users of a wide variety of cloud native technologies, including containerd, CoreDNS, Envoy, etcd, Fluentd, Harbor, Helm, Istio, Jaeger, Kubernetes, Linkerd, Open Policy Agent, Prometheus, Rook, Vitess, Argo, CRI-O, Crossplane, dapr, Dragonfly, Falco, Flagger, Flux, gRPC, KEDA, SPIFFE, SPIRE, and Thanos, and many, many more. Most have been featured on TNS at one time or another.</p><p>In this podcast, we also discuss what to expect from the virtual sessions at the conference, what to do in Valencia, and the current state of Kubernetes, and we get some unofficial picks from Sharma and Rocha as to which keynotes not to miss and which sessions to attend.</p><p>"The virtual option is great," Rocha said. "But I think the in-person conferences have their own value. And there's a lot to be gained from meeting people directly and exchanging ideas and going to these events on the side of the conference as well."</p>
]]></description>
      <pubDate>Tue, 26 Apr 2022 19:28:19 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/kubecon-cloudnativecon-2022-europe-in-valencia-bring-a-mask-0QuRJfTS</link>
      <content:encoded><![CDATA[<p>Last week, Spain dropped its mandate requiring residents and visitors to wear masks to ward off further Coronavirus infections. So, for this year's KubeCon + CloudNativeCon Europe conference, to be held May 16-20 in <a href="https://theculturetrip.com/europe/spain/articles/15-reasons-you-should-visit-valencia-at-least-once-in-your-lifetime/">Valencia, Spain</a>, the Cloud Native Computing Foundation dropped its own original mandate that attendees wear masks, a rule that had been in place for its other recent conferences.</p><p>This turned out to be the wrong decision, CNCF <a href="https://www.cncf.io/blog/2022/04/25/clarifying-mask-mandate-update/">admitted a week later</a>. A lot of people who had already bought tickets <a href="https://twitter.com/jpetazzo/status/1518251896611516418">were upset</a> at this relaxing of the rules for the conference, which could put them in greater danger of contracting the disease.</p><p>So the CNCF put the mandate back in place and offered refunds to those who felt Spain's decision would put them in harm's way. CNCF will even send you a week's worth of N95 masks if you request them.</p><p>So, long story short: bring a mask to KubeCon. And, as always, attendees are still required to show proof of vaccination, and temperature checks will be made as well.</p><p>Tricky business running a conference in this time, no?</p><p>In this latest episode of <a href="https://thenewstack.io/podcasts">The New Stack Makers</a> podcast, we take a look at what to expect from this year's KubeCon EU 2022. Our guests for this podcast are <a href="https://www.linkedin.com/in/pritianka/">Priyanka Sharma</a>, the executive director of CNCF, and <a href="https://www.linkedin.com/in/ricardo-rocha-739aa718/?originalSubdomain=ch">Ricardo Rocha</a>, a KubeCon co-chair and computer engineer at <a href="https://home.cern/">CERN</a>. 
TNS Editor-in-Chief <a href="https://twitter.com/Joab_Jackson">Joab Jackson</a> hosted this podcast.</p><p>We recorded this podcast prior to the discussion around masks, and at the time, Sharma said that the CNCF based the mask ruling on Spain's own country-wide mandates. "So we are being very cautious with the health requirements for the event," she said.</p><p>The conference team is also keeping an eye on Russia's aggressive moves in Ukraine, though it is unlikely that the chaos will reach all the way to Spain. Still, "this is why it's essential to always have the hybrid option ... [to] have the virtual elements sorted," Sharma said.</p><p>As the CNCF flagship conference, KubeCon brings together managers and users of a wide variety of cloud native technologies, including containerd, CoreDNS, Envoy, etcd, Fluentd, Harbor, Helm, Istio, Jaeger, Kubernetes, Linkerd, Open Policy Agent, Prometheus, Rook, Vitess, Argo, CRI-O, Crossplane, dapr, Dragonfly, Falco, Flagger, Flux, gRPC, KEDA, SPIFFE, SPIRE, and Thanos, and many, many more. Most have been featured on TNS at one time or another.</p><p>In this podcast, we also discuss what to expect from the virtual sessions at the conference, what to do in Valencia, and the current state of Kubernetes, and we get some unofficial picks from Sharma and Rocha as to which keynotes not to miss and which sessions to attend.</p><p>"The virtual option is great," Rocha said. "But I think the in-person conferences have their own value. And there's a lot to be gained from meeting people directly and exchanging ideas and going to these events on the side of the conference as well."</p>
]]></content:encoded>
      <enclosure length="28173000" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/5fb9f527-ce8b-46f3-b7ed-7d6a96f5b10a/audio/52b74ab3-b8a3-4092-af10-a399a9306da2/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>KubeCon + CloudNativeCon 2022 Europe, in Valencia: Bring a Mask</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:duration>00:29:20</itunes:duration>
      <itunes:summary>Last week, Spain dropped its mandate requiring residents and visitors to wear masks to ward off further Coronavirus infections. So, for this year&apos;s KubeCon + CloudNativeCon Europe conference, to be held May 16-20 in Valencia, Spain, the Cloud Native Computing Foundation dropped its own original mandate that attendees wear masks, a rule that had been in place for its other recent conferences.

This turned out to be the wrong decision, CNCF admitted a week later. A lot of people who had already bought tickets were upset at this relaxing of the rules for the conference, which could put them in greater danger of contracting the disease.

So the CNCF put the mandate back in place and offered refunds to those who felt Spain&apos;s decision would put them in harm&apos;s way. CNCF will even send you a week&apos;s worth of N95 masks if you request them.

So, long story short: bring a mask to KubeCon. And, as always, attendees are still required to show proof of vaccination, and temperature checks will be made as well.

Tricky business running a conference in this time, no?

In this latest episode of The New Stack Makers podcast, we take a look at what to expect from this year&apos;s KubeCon EU 2022. Our guests for this podcast are Priyanka Sharma, the executive director of CNCF, and Ricardo Rocha, a KubeCon co-chair and computer engineer at CERN. TNS Editor-in-Chief Joab Jackson hosted this podcast.

We recorded this podcast prior to the discussion around masks, and at the time, Sharma said that the CNCF based the mask ruling on Spain&apos;s own country-wide mandates. &quot;So we are being very cautious with the health requirements for the event,&quot; she said.

The conference team is also keeping an eye on Russia&apos;s aggressive moves in Ukraine, though it is unlikely that the chaos will reach all the way to Spain. Still, &quot;this is why it&apos;s essential to always have the hybrid option ... [to] have the virtual elements sorted,&quot; Sharma said.

As the CNCF flagship conference, KubeCon brings together managers and users of a wide variety of cloud native technologies, including containerd, CoreDNS, Envoy, etcd, Fluentd, Harbor, Helm, Istio, Jaeger, Kubernetes, Linkerd, Open Policy Agent, Prometheus, Rook, Vitess, Argo, CRI-O, Crossplane, dapr, Dragonfly, Falco, Flagger, Flux, gRPC, KEDA, SPIFFE, SPIRE, and Thanos, and many, many more. Most have been featured on TNS at one time or another.

In this podcast, we also discuss what to expect from the virtual sessions at the conference, what to do in Valencia, and the current state of Kubernetes, and we get some unofficial picks from Sharma and Rocha as to which keynotes not to miss and which sessions to attend.

&quot;The virtual option is great,&quot; Rocha said. &quot;But I think the in-person conferences have their own value. And there&apos;s a lot to be gained from meeting people directly and exchanging ideas and going to these events on the side of the conference as well.&quot;</itunes:summary>
      <itunes:subtitle>Last week, Spain dropped its mandate requiring residents and visitors to wear masks to ward off further Coronavirus infections. So, for this year&apos;s KubeCon + CloudNativeCon Europe conference, to be held May 16-20 in Valencia, Spain, the Cloud Native Computing Foundation dropped its own original mandate that attendees wear masks, a rule that had been in place for its other recent conferences.

This turned out to be the wrong decision, CNCF admitted a week later. A lot of people who had already bought tickets were upset at this relaxing of the rules for the conference, which could put them in greater danger of contracting the disease.

So the CNCF put the mandate back in place and offered refunds to those who felt Spain&apos;s decision would put them in harm&apos;s way. CNCF will even send you a week&apos;s worth of N95 masks if you request them.

So, long story short: bring a mask to KubeCon. And, as always, attendees are still required to show proof of vaccination, and temperature checks will be made as well.

Tricky business running a conference in this time, no?

In this latest episode of The New Stack Makers podcast, we take a look at what to expect from this year&apos;s KubeCon EU 2022. Our guests for this podcast are Priyanka Sharma, the executive director of CNCF, and Ricardo Rocha, a KubeCon co-chair and computer engineer at CERN. TNS Editor-in-Chief Joab Jackson hosted this podcast.

We recorded this podcast prior to the discussion around masks, and at the time, Sharma said that the CNCF based the mask ruling on Spain&apos;s own country-wide mandates. &quot;So we are being very cautious with the health requirements for the event,&quot; she said.

The conference team is also keeping an eye on Russia&apos;s aggressive moves in Ukraine, though it is unlikely that the chaos will reach all the way to Spain. Still, &quot;this is why it&apos;s essential to always have the hybrid option ... [to] have the virtual elements sorted,&quot; Sharma said.

As the CNCF flagship conference, KubeCon brings together managers and users of a wide variety of cloud native technologies, including containerd, CoreDNS, Envoy, etcd, Fluentd, Harbor, Helm, Istio, Jaeger, Kubernetes, Linkerd, Open Policy Agent, Prometheus, Rook, Vitess, Argo, CRI-O, Crossplane, dapr, Dragonfly, Falco, Flagger, Flux, gRPC, KEDA, SPIFFE, SPIRE, and Thanos, and many, many more. Most have been featured on TNS at one time or another.

In this podcast, we also discuss what to expect from the virtual sessions at the conference, what to do in Valencia, and the current state of Kubernetes, and we get some unofficial picks from Sharma and Rocha as to which keynotes not to miss and which sessions to attend.

&quot;The virtual option is great,&quot; Rocha said. &quot;But I think the in-person conferences have their own value. And there&apos;s a lot to be gained from meeting people directly and exchanging ideas and going to these events on the side of the conference as well.&quot;</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1314</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8578d6c9-a043-448f-9c4d-5cc3b61bee78</guid>
      <title>Microsoft Accelerates the Journey to Low-Code</title>
      <description><![CDATA[<p>Low-code and no-code development is becoming increasingly popular, particularly in enterprises that are looking to expand the number of people who can create applications for digital transformation efforts. While <a href="https://www.gartner.com/en/newsroom/press-releases/2021-11-10-gartner-says-cloud-will-be-the-centerpiece-of-new-digital-experiences">in 2020, less than 25%</a> of new apps were developed using no-code/low-code tools, Gartner predicts that by 2025, 70% will be. Microsoft is one vendor that has been paving the way in this shift, reducing the burden on both line-of-business staff and developers in exchange for speed. But what are the potential benefits and best practices of low-code/no-code software development?</p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/charleslamanna/">Charles Lamanna</a>, Corporate Vice President, Business Apps and Platform at <a href="https://www.microsoft.com/en-us/">Microsoft</a>, discusses what the company is doing in the low-code/no-code space with its Power Platform offering, including bringing no-code/low-code professionals together to deliver applications.</p><p><a href="/author/joab/">Joab Jackson</a>, Editor-in-Chief of The New Stack, and <a href="/author/darryl-taft/">Darryl Taft</a>, News Editor of The New Stack, hosted this podcast.</p>
]]></description>
      <pubDate>Tue, 19 Apr 2022 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/microsoft-accelerates-the-journey-to-low-code-owNXcX5T</link>
      <content:encoded><![CDATA[<p>Low-code and no-code development is becoming increasingly popular, particularly in enterprises that are looking to expand the number of people who can create applications for digital transformation efforts. While <a href="https://www.gartner.com/en/newsroom/press-releases/2021-11-10-gartner-says-cloud-will-be-the-centerpiece-of-new-digital-experiences">in 2020, less than 25%</a> of new apps were developed using no-code/low-code tools, Gartner predicts that by 2025, 70% will be. Microsoft is one vendor that has been paving the way in this shift, reducing the burden on both line-of-business staff and developers in exchange for speed. But what are the potential benefits and best practices of low-code/no-code software development?</p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/charleslamanna/">Charles Lamanna</a>, Corporate Vice President, Business Apps and Platform at <a href="https://www.microsoft.com/en-us/">Microsoft</a>, discusses what the company is doing in the low-code/no-code space with its Power Platform offering, including bringing no-code/low-code professionals together to deliver applications.</p><p><a href="/author/joab/">Joab Jackson</a>, Editor-in-Chief of The New Stack, and <a href="/author/darryl-taft/">Darryl Taft</a>, News Editor of The New Stack, hosted this podcast.</p>
]]></content:encoded>
      <enclosure length="34752096" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/b1c68af2-f80c-4aa4-84e3-18a461c1f6e4/audio/af978fd1-92c0-418f-b2ce-1b17e06c00a1/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Microsoft Accelerates the Journey to Low-Code</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/e3751664-7ebb-4cb6-a8cb-79ad6d075756/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:36:11</itunes:duration>
      <itunes:summary>Low-code and no-code development is becoming increasingly popular, particularly in enterprises that are looking to expand the number of people who can create applications for digital transformation efforts. While in 2020, less than 25% of new apps were developed using no-code/low-code tools, Gartner predicts that by 2025, 70% will be. Microsoft is one vendor that has been paving the way in this shift, reducing the burden on both line-of-business staff and developers in exchange for speed. But what are the potential benefits and best practices of low-code/no-code software development?

In this episode of The New Stack Makers podcast, Charles Lamanna, Corporate Vice President, Business Apps and Platform at Microsoft, discusses what the company is doing in the low-code/no-code space with its Power Platform offering, including bringing no-code/low-code professionals together to deliver applications.

Joab Jackson, Editor-in-Chief of The New Stack, and Darryl Taft, News Editor of The New Stack, hosted this podcast.</itunes:summary>
      <itunes:subtitle>Low-code and no-code development is becoming increasingly popular, particularly in enterprises that are looking to expand the number of people who can create applications for digital transformation efforts. While in 2020, less than 25% of new apps were developed using no-code/low-code tools, Gartner predicts that by 2025, 70% will be. Microsoft is one vendor that has been paving the way in this shift, reducing the burden on both line-of-business staff and developers in exchange for speed. But what are the potential benefits and best practices of low-code/no-code software development?

In this episode of The New Stack Makers podcast, Charles Lamanna, Corporate Vice President, Business Apps and Platform at Microsoft, discusses what the company is doing in the low-code/no-code space with its Power Platform offering, including bringing no-code/low-code professionals together to deliver applications.

Joab Jackson, Editor-in-Chief of The New Stack, and Darryl Taft, News Editor of The New Stack, hosted this podcast.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1313</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f7114d3d-df12-47a3-9e48-5ec4c02dc65b</guid>
      <title>Meet Cadence: The Open-Source Orchestration Workflow Engine</title>
      <description><![CDATA[<p>Developers are often faced with complexity when building and operating long-running processes that involve multiple service calls and require continuous coordination. To solve this challenge, Uber built Cadence, an open-source workflow orchestration engine introduced in 2016 that enables developers to express complex, long-running business logic directly as simple code. Since its debut, it has found increasing traction with developers operating large-scale, microservices-based architectures. More recently, Instaclustr announced support for a hosted version of Cadence.</p><p style="font-weight: 400;">In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/ben-slater-2720562/?originalSubdomain=au">Ben Slater</a>, Chief Product Officer at Instaclustr, and <a href="https://www.linkedin.com/in/emrahseker/">Emrah Seker</a>, Staff Software Engineer at Uber, discuss Cadence and how developers use it to solve various business problems, letting them focus on writing code for business logic without worrying about the complexity of distributed systems.</p><p style="font-weight: 400;"><a href="https://thenewstack.io/author/alex/">Alex Williams</a>, founder and publisher of The New Stack, hosted this podcast, along with co-host <a href="https://thenewstack.io/author/joab/">Joab Jackson</a>, Editor-in-Chief of The New Stack.</p>
]]></description>
      <pubDate>Tue, 12 Apr 2022 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/meet-cadence-the-open-source-orchestration-workflow-engine-4rYZyy_D</link>
      <content:encoded><![CDATA[<p>Developers are often faced with complexity when building and operating long-running processes that involve multiple service calls and require continuous coordination. To solve this challenge, Uber built Cadence, an open-source workflow orchestration engine introduced in 2016 that enables developers to express complex, long-running business logic directly as simple code. Since its debut, it has found increasing traction with developers operating large-scale, microservices-based architectures. More recently, Instaclustr announced support for a hosted version of Cadence.</p><p style="font-weight: 400;">In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/ben-slater-2720562/?originalSubdomain=au">Ben Slater</a>, Chief Product Officer at Instaclustr, and <a href="https://www.linkedin.com/in/emrahseker/">Emrah Seker</a>, Staff Software Engineer at Uber, discuss Cadence and how developers use it to solve various business problems, letting them focus on writing code for business logic without worrying about the complexity of distributed systems.</p><p style="font-weight: 400;"><a href="https://thenewstack.io/author/alex/">Alex Williams</a>, founder and publisher of The New Stack, hosted this podcast, along with co-host <a href="https://thenewstack.io/author/joab/">Joab Jackson</a>, Editor-in-Chief of The New Stack.</p>
]]></content:encoded>
      <enclosure length="26964043" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/9e386aa1-8ad9-47a4-8a7d-e1a5a21258a6/audio/b4d1cb6f-6835-4d05-a5e6-66c4cce3b98d/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Meet Cadence: The Open-Source Orchestration Workflow Engine</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/ed37a3df-4d88-4064-8463-744af876e609/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:28:05</itunes:duration>
      <itunes:summary>Developers often face complexity when building and operating long-running processes that involve multiple service calls and require continuous coordination. To solve this challenge, in 2016 Uber built and open sourced Cadence, a workflow orchestration engine that lets developers express complex, long-running business logic as simple code. Since its debut, Cadence has found increasing traction with developers operating large-scale, microservices-based architectures. More recently, Instaclustr announced support for a hosted version of Cadence. In this episode of The New Stack Makers podcast, Ben Slater, Chief Product Officer at Instaclustr, and Emrah Seker, Staff Software Engineer at Uber, discuss Cadence and how developers use it to solve business problems by focusing on writing code for business logic, without worrying about the complexity of distributed systems. Alex Williams, founder and publisher of The New Stack, hosted this podcast, along with co-host Joab Jackson, Editor-in-Chief of The New Stack.</itunes:summary>
      <itunes:subtitle>Developers often face complexity when building and operating long-running processes that involve multiple service calls and require continuous coordination. To solve this challenge, in 2016 Uber built and open sourced Cadence, a workflow orchestration engine that lets developers express complex, long-running business logic as simple code. Since its debut, Cadence has found increasing traction with developers operating large-scale, microservices-based architectures. More recently, Instaclustr announced support for a hosted version of Cadence. In this episode of The New Stack Makers podcast, Ben Slater, Chief Product Officer at Instaclustr, and Emrah Seker, Staff Software Engineer at Uber, discuss Cadence and how developers use it to solve business problems by focusing on writing code for business logic, without worrying about the complexity of distributed systems. Alex Williams, founder and publisher of The New Stack, hosted this podcast, along with co-host Joab Jackson, Editor-in-Chief of The New Stack.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1312</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e9085c91-ff83-4e3b-93a8-3ab27235c4ef</guid>
      <title>Removing the Complexity to Securely Access the Infrastructure</title>
      <description><![CDATA[<p>As the tech stack has grown, the list of technologies that must be configured in cloud computing environments has expanded exponentially, increasing the complexity of IT infrastructure. While every layer of the stack comes with its own implementation of encrypted connectivity, client authentication, authorization and audit, the challenge for developers and DevOps teams of properly setting up secure access to hardware and software throughout the organization will continue to grow, making IT environments increasingly vulnerable.</p><p> </p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/benarent/">Ben Arent</a>, Developer Relations Manager at Teleport, discusses how to address the hardware, software and peopleware complexity that comes with the cloud by using tools like Teleport 9.0 and the company’s first release of Teleport Machine ID.</p><p> </p>
]]></description>
      <pubDate>Tue, 05 Apr 2022 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/removing-the-complexity-to-securely-access-the-infrastructure-v0kN0cZS</link>
      <content:encoded><![CDATA[<p>As the tech stack has grown, the list of technologies that must be configured in cloud computing environments has expanded exponentially, increasing the complexity of IT infrastructure. While every layer of the stack comes with its own implementation of encrypted connectivity, client authentication, authorization and audit, the challenge for developers and DevOps teams of properly setting up secure access to hardware and software throughout the organization will continue to grow, making IT environments increasingly vulnerable.</p><p> </p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/benarent/">Ben Arent</a>, Developer Relations Manager at Teleport, discusses how to address the hardware, software and peopleware complexity that comes with the cloud by using tools like Teleport 9.0 and the company’s first release of Teleport Machine ID.</p><p> </p>
]]></content:encoded>
      <enclosure length="15529098" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/cdf68f34-4396-4b2b-8c8c-b235289842b7/audio/cd2afef2-3d3b-4413-abd2-fcc0b8b912b1/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Removing the Complexity to Securely Access the Infrastructure</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/6a312781-6ea4-49a1-bb36-2417b7d148d2/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:16:10</itunes:duration>
      <itunes:summary>As the tech stack has grown, the list of technologies that must be configured in cloud computing environments has expanded exponentially, increasing the complexity of IT infrastructure. While every layer of the stack comes with its own implementation of encrypted connectivity, client authentication, authorization and audit, the challenge for developers and DevOps teams of properly setting up secure access to hardware and software throughout the organization will continue to grow, making IT environments increasingly vulnerable.

In this episode of The New Stack Makers podcast, Ben Arent, Developer Relations Manager at Teleport, discusses how to address the hardware, software and peopleware complexity that comes with the cloud by using tools like Teleport 9.0 and the company’s first release of Teleport Machine ID.
</itunes:summary>
      <itunes:subtitle>As the tech stack has grown, the list of technologies that must be configured in cloud computing environments has expanded exponentially, increasing the complexity of IT infrastructure. While every layer of the stack comes with its own implementation of encrypted connectivity, client authentication, authorization and audit, the challenge for developers and DevOps teams of properly setting up secure access to hardware and software throughout the organization will continue to grow, making IT environments increasingly vulnerable.

In this episode of The New Stack Makers podcast, Ben Arent, Developer Relations Manager at Teleport, discusses how to address the hardware, software and peopleware complexity that comes with the cloud by using tools like Teleport 9.0 and the company’s first release of Teleport Machine ID.
</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1311</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">57fab0a8-f4eb-4142-a3bf-e91b3453992c</guid>
      <title>Rethinking Trust in Cloud Security</title>
      <description><![CDATA[<p>From cloud security providers to open source, trust has become a foundation on which an organization's security is built. But with the rise of cloud-native technologies, new ways of building applications are challenging traditional approaches to security. The changing cloud-native landscape requires broader security coverage across the technology stack and more contextual awareness of the environment. So how should DevOps and InfoSec teams across commercial businesses and governments rethink their security approach?</p><p> </p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/tombossert/">Tom Bossert</a>, president of Trinity Cyber (and former Homeland Security Advisor to two Presidents); <a href="https://www.linkedin.com/in/patrick-hylant-406a49126/">Patrick Hylant</a>, client executive at VMware; and <a href="https://www.linkedin.com/in/chenxiwang88">Chenxi Wang</a>, managing general partner at Rain Capital, discuss how businesses and the U.S. government can adapt to the evolving threat landscape, including new initiatives and lessons that can be applied in this high-risk environment.</p><p> </p><p><a href="https://thenewstack.io/author/alex/">Alex Williams</a>, founder and publisher of The New Stack, hosted this podcast. <a href="https://www.linkedin.com/in/jidouglas/">Jim Douglas</a>, CEO of Armory, also joined as co-host of this livestream event.</p>
]]></description>
      <pubDate>Tue, 29 Mar 2022 19:01:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/rethinking-trust-in-cloud-security-FeFABLD2</link>
      <content:encoded><![CDATA[<p>From cloud security providers to open source, trust has become a foundation on which an organization's security is built. But with the rise of cloud-native technologies, new ways of building applications are challenging traditional approaches to security. The changing cloud-native landscape requires broader security coverage across the technology stack and more contextual awareness of the environment. So how should DevOps and InfoSec teams across commercial businesses and governments rethink their security approach?</p><p> </p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/tombossert/">Tom Bossert</a>, president of Trinity Cyber (and former Homeland Security Advisor to two Presidents); <a href="https://www.linkedin.com/in/patrick-hylant-406a49126/">Patrick Hylant</a>, client executive at VMware; and <a href="https://www.linkedin.com/in/chenxiwang88">Chenxi Wang</a>, managing general partner at Rain Capital, discuss how businesses and the U.S. government can adapt to the evolving threat landscape, including new initiatives and lessons that can be applied in this high-risk environment.</p><p> </p><p><a href="https://thenewstack.io/author/alex/">Alex Williams</a>, founder and publisher of The New Stack, hosted this podcast. <a href="https://www.linkedin.com/in/jidouglas/">Jim Douglas</a>, CEO of Armory, also joined as co-host of this livestream event.</p>
]]></content:encoded>
      <enclosure length="52489030" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/dd609c87-4985-4c66-943d-2e72776b61dc/audio/102cb659-c0fe-44fe-828c-52e3bc52da34/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Rethinking Trust in Cloud Security</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/de015819-8d94-4c9f-b5f0-55f5f8345ee5/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:54:40</itunes:duration>
      <itunes:summary>From cloud security providers to open source, trust has become a foundation on which an organization&apos;s security is built. But with the rise of cloud-native technologies, new ways of building applications are challenging traditional approaches to security. The changing cloud-native landscape requires broader security coverage across the technology stack and more contextual awareness of the environment. So how should DevOps and InfoSec teams across commercial businesses and governments rethink their security approach?

In this episode of The New Stack Makers podcast, Tom Bossert, president of Trinity Cyber (and former Homeland Security Advisor to two Presidents); Patrick Hylant, client executive at VMware; and Chenxi Wang, managing general partner at Rain Capital, discuss how businesses and the U.S. government can adapt to the evolving threat landscape, including new initiatives and lessons that can be applied in this high-risk environment.

Alex Williams, founder and publisher of The New Stack, hosted this podcast. Jim Douglas, CEO of Armory, also joined as co-host of this livestream event.</itunes:summary>
      <itunes:subtitle>From cloud security providers to open source, trust has become a foundation on which an organization&apos;s security is built. But with the rise of cloud-native technologies, new ways of building applications are challenging traditional approaches to security. The changing cloud-native landscape requires broader security coverage across the technology stack and more contextual awareness of the environment. So how should DevOps and InfoSec teams across commercial businesses and governments rethink their security approach?

In this episode of The New Stack Makers podcast, Tom Bossert, president of Trinity Cyber (and former Homeland Security Advisor to two Presidents); Patrick Hylant, client executive at VMware; and Chenxi Wang, managing general partner at Rain Capital, discuss how businesses and the U.S. government can adapt to the evolving threat landscape, including new initiatives and lessons that can be applied in this high-risk environment.

Alex Williams, founder and publisher of The New Stack, hosted this podcast. Jim Douglas, CEO of Armory, also joined as co-host of this livestream event.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1310</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2f746bc0-55f5-4dae-a905-b139d20dfdf1</guid>
      <title>The Work-War Balance of Open Source Developers in Ukraine</title>
      <description><![CDATA[<p>"Many Ukrainians continue working. A very good opportunity is to continue working with them, to buy Ukrainian software products, to engage with people who are working [via] <a href="https://www.upwork.com/hire/ua/">UpWork</a>. Help Ukrainians by giving them the ability to work, to do some paid work," whether still in the country or as refugees abroad. If you take something from this conversation, <a href="https://twitter.com/vixentael">Anastasiia Voitova</a>'s words may be the ones that should stick. After all, Ukraine has a renowned IT workforce, with <a href="https://reports.itukraine.org.ua/en">IT outsourcing among its most important exports</a>.</p><p>Voitova, the head of customer solutions and security software engineer at Cossack Labs, just grabbed her laptop and some essentials when she suddenly fled to the mountains last month to "a small village that doesn't even have a name." She doesn't have much with her, but she has more work to do than ever — to meet her clients' increasing demand for cybersecurity defenses and to support the Ukrainian defense effort. Earlier this month, her Ukraine-based team even released a <a href="https://github.com/cossacklabs/themis">new open source cryptographic framework for data protection</a>, on time, amidst the war.</p><p>Voitova was joined in this episode of <a href="https://thenewstack.io/tag/the-new-stack-makers/">The New Stack Makers</a> by <a href="https://twitter.com/Tyrrrz">Oleksii Holub</a>, open source developer, software consultant and GitHub Star, and <a href="https://twitter.com/denysdovhan">Denys Dovhan</a>, front-end engineer at Wix. All three of them are globally known open source community contributors and maintainers. And all three had to suddenly relocate from Kyiv this February. This conversation is a reflection on the lives of these three open source community leaders during the first three weeks of the Russian invasion.</p><p>This conversation aims to help answer what the open source community and the tech community as a whole can do <a href="https://thenewstack.io/how-to-support-teammates-living-in-ukraine-or-any-war-zone/">to support our Ukrainian colleagues and friends</a>. Because open source is a community first and foremost.</p><p>"Open source for me is a very big part of my life. I don't try to like gain anything out of it, I just code things. If I had a problem, I solve it, and I think to myself, why not share it with other people," Holub said.</p><p>He sees open source as an opportunity for influence in this war, but is also acutely aware that his unpaid labor could be used to support the aggression against his country. That's why he added terms of use to <a href="https://tyrrrz.me/projects">his open source projects</a> stating that use of his code implies you condemn the Russian invasion. This may be controversial in the strict <a href="https://thenewstack.io/a-guide-to-leveraging-open-source-licensing/">open source licensing</a> world, but the semantics of OSS seem less important to Holub right now.</p><p>Of course, when talking about open source, the world's largest code repository, GitHub, comes up. <a href="https://github.com/github/feedback/discussions/12042">Whether GitHub should block Russia</a> is an ongoing OSS debate. On the one hand, many are concerned about further cutting off Russia — which has already restricted access to Facebook, Instagram, and Twitter — from external news and facts about the ongoing conflict. On the other hand, with the <a href="https://tadviser.com/index.php/Article:Open_Source_Software_in_Russia">widespread adoption of OSS in Russia</a>, it's reasonable to assume swaths of open source code are directly supporting the invasion or at least supporting the Russian government through income, taxes, and some of the Kremlin's technical stack.</p><p>For Dovhan, there's a middle ground. His employer, website builder Wix, has blocked all payments in Russia, but has maintained its freemium offering there. "There is no possibility to pay for your premium website. But you still can make a free one, and that's a possibility for Russians to express themselves, and this is a space for free speech, which is limited in Russia." He proposes that GitHub similarly allow the creation of public repos in Russia, but block payments and private repos there.</p><p>Dovhan continued: "I believe [the] open source community is deeply connected and blocking access for Russian developers might cause serious issues in infrastructure. A lot of projects are actually made by Russian developers, for example, <a href="https://github.com/postcss/postcss">PostCSS</a>, <a href="https://github.com/nginx">Nginx</a>, and PostHTML."</p><p>These conversations will continue as this war changes the landscape of the tech world as we know it. One thing is for sure: Voitova, Dovhan and Holub have joined the <a href="https://www.n-ix.com/20-facts-ukrainian-software-developers-based-2018-survey/">hundreds of thousands of Ukrainian software developers</a> in making a routine of work-war balance, doing everything they can, every waking hour of the day.</p><p> </p>
]]></description>
      <pubDate>Wed, 23 Mar 2022 12:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/the-work-war-balance-of-open-source-developers-in-ukraine-7LFQQRlI</link>
      <content:encoded><![CDATA[<p>"Many Ukrainians continue working. A very good opportunity is to continue working with them, to buy Ukrainian software products, to engage with people who are working [via] <a href="https://www.upwork.com/hire/ua/">UpWork</a>. Help Ukrainians by giving them the ability to work, to do some paid work," whether still in the country or as refugees abroad. If you take something from this conversation, <a href="https://twitter.com/vixentael">Anastasiia Voitova</a>'s words may be the ones that should stick. After all, Ukraine has a renowned IT workforce, with <a href="https://reports.itukraine.org.ua/en">IT outsourcing among its most important exports</a>.</p><p>Voitova, the head of customer solutions and security software engineer at Cossack Labs, just grabbed her laptop and some essentials when she suddenly fled to the mountains last month to "a small village that doesn't even have a name." She doesn't have much with her, but she has more work to do than ever — to meet her clients' increasing demand for cybersecurity defenses and to support the Ukrainian defense effort. Earlier this month, her Ukraine-based team even released a <a href="https://github.com/cossacklabs/themis">new open source cryptographic framework for data protection</a>, on time, amidst the war.</p><p>Voitova was joined in this episode of <a href="https://thenewstack.io/tag/the-new-stack-makers/">The New Stack Makers</a> by <a href="https://twitter.com/Tyrrrz">Oleksii Holub</a>, open source developer, software consultant and GitHub Star, and <a href="https://twitter.com/denysdovhan">Denys Dovhan</a>, front-end engineer at Wix. All three of them are globally known open source community contributors and maintainers. And all three had to suddenly relocate from Kyiv this February. This conversation is a reflection on the lives of these three open source community leaders during the first three weeks of the Russian invasion.</p><p>This conversation aims to help answer what the open source community and the tech community as a whole can do <a href="https://thenewstack.io/how-to-support-teammates-living-in-ukraine-or-any-war-zone/">to support our Ukrainian colleagues and friends</a>. Because open source is a community first and foremost.</p><p>"Open source for me is a very big part of my life. I don't try to like gain anything out of it, I just code things. If I had a problem, I solve it, and I think to myself, why not share it with other people," Holub said.</p><p>He sees open source as an opportunity for influence in this war, but is also acutely aware that his unpaid labor could be used to support the aggression against his country. That's why he added terms of use to <a href="https://tyrrrz.me/projects">his open source projects</a> stating that use of his code implies you condemn the Russian invasion. This may be controversial in the strict <a href="https://thenewstack.io/a-guide-to-leveraging-open-source-licensing/">open source licensing</a> world, but the semantics of OSS seem less important to Holub right now.</p><p>Of course, when talking about open source, the world's largest code repository, GitHub, comes up. <a href="https://github.com/github/feedback/discussions/12042">Whether GitHub should block Russia</a> is an ongoing OSS debate. On the one hand, many are concerned about further cutting off Russia — which has already restricted access to Facebook, Instagram, and Twitter — from external news and facts about the ongoing conflict. On the other hand, with the <a href="https://tadviser.com/index.php/Article:Open_Source_Software_in_Russia">widespread adoption of OSS in Russia</a>, it's reasonable to assume swaths of open source code are directly supporting the invasion or at least supporting the Russian government through income, taxes, and some of the Kremlin's technical stack.</p><p>For Dovhan, there's a middle ground. His employer, website builder Wix, has blocked all payments in Russia, but has maintained its freemium offering there. "There is no possibility to pay for your premium website. But you still can make a free one, and that's a possibility for Russians to express themselves, and this is a space for free speech, which is limited in Russia." He proposes that GitHub similarly allow the creation of public repos in Russia, but block payments and private repos there.</p><p>Dovhan continued: "I believe [the] open source community is deeply connected and blocking access for Russian developers might cause serious issues in infrastructure. A lot of projects are actually made by Russian developers, for example, <a href="https://github.com/postcss/postcss">PostCSS</a>, <a href="https://github.com/nginx">Nginx</a>, and PostHTML."</p><p>These conversations will continue as this war changes the landscape of the tech world as we know it. One thing is for sure: Voitova, Dovhan and Holub have joined the <a href="https://www.n-ix.com/20-facts-ukrainian-software-developers-based-2018-survey/">hundreds of thousands of Ukrainian software developers</a> in making a routine of work-war balance, doing everything they can, every waking hour of the day.</p><p> </p>
]]></content:encoded>
      <enclosure length="35279759" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/2ad4b641-5e60-498d-94a3-0eb36bcbda91/audio/5b5a1409-0c2a-4ffd-992d-90d11fe91565/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>The Work-War Balance of Open Source Developers in Ukraine</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/4b3fc2da-af88-4425-a532-fc91f9f56790/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:36:44</itunes:duration>
      <itunes:summary>&quot;Many Ukrainians continue working. A very good opportunity is to continue working with them, to buy Ukrainian software products, to engage with people who are working [via] UpWork. Help Ukrainians by giving them the ability to work, to do some paid work,&quot; whether still in the country or as refugees abroad. If you take something from this conversation, Anastasiia Voitova&apos;s words may be the ones that should stick. After all, Ukraine has a renowned IT workforce, with IT outsourcing among its most important exports.

Voitova, the head of customer solutions and security software engineer at Cossack Labs, just grabbed her laptop and some essentials when she suddenly fled to the mountains last month to &quot;a small village that doesn&apos;t even have a name.&quot; She doesn&apos;t have much with her, but she has more work to do than ever — to meet her clients&apos; increasing demand for cybersecurity defenses and to support the Ukrainian defense effort. Earlier this month, her Ukraine-based team even released a new open source cryptographic framework for data protection, on time, amidst the war.

Voitova was joined in this episode of The New Stack Makers by Oleksii Holub, open source developer, software consultant and GitHub Star, and Denys Dovhan, front-end engineer at Wix. All three of them are globally known open source community contributors and maintainers. And all three had to suddenly relocate from Kyiv this February. This conversation is a reflection on the lives of these three open source community leaders during the first three weeks of the Russian invasion.

This conversation aims to help answer what the open source community and the tech community as a whole can do to support our Ukrainian colleagues and friends. Because open source is a community first and foremost. 

&quot;Open source for me is a very big part of my life. I don&apos;t try to like gain anything out of it, I just code things. If I had a problem, I solve it, and I think to myself, why not share it with other people,&quot; Holub said.

He sees open source as an opportunity for influence in this war, but is also acutely aware that his unpaid labor could be used to support the aggression against his country. That&apos;s why he added terms of use to his open source projects stating that use of his code implies you condemn the Russian invasion. This may be controversial in the strict open source licensing world, but the semantics of OSS seem less important to Holub right now.

Of course, when talking about open source, the world&apos;s largest code repository, GitHub, comes up. Whether GitHub should block Russia is an ongoing OSS debate. On the one hand, many are concerned about further cutting off Russia — which has already restricted access to Facebook, Instagram, and Twitter — from external news and facts about the ongoing conflict. On the other hand, with the widespread adoption of OSS in Russia, it&apos;s reasonable to assume swaths of open source code are directly supporting the invasion or at least supporting the Russian government through income, taxes, and some of the Kremlin&apos;s technical stack.

For Dovhan, there&apos;s a middle ground. His employer, website builder Wix, has blocked all payments in Russia, but has maintained its freemium offering there. &quot;There is no possibility to pay for your premium website. But you still can make a free one, and that&apos;s a possibility for Russians to express themselves, and this is a space for free speech, which is limited in Russia.&quot; He proposes that GitHub similarly allows the creation of public repos in Russia, but that it blocks payments and private repos there.

Dovhan continued: &quot;I believe [the] open source community is deeply connected and blocking access for Russian developers might cause serious issues in infrastructure. A lot of projects are actually made by Russian developers, for example, PostCSS, Nginx, and PostHTML.&quot;

These conversations will continue as this war changes the landscape of the tech world as we know it. One thing is for sure: Voitova, Dovhan and Holub have joined the hundreds of thousands of Ukrainian software developers in making a routine of work-war balance, doing everything they can, every waking hour of the day.</itunes:summary>
      <itunes:subtitle>&quot;Many Ukrainians continue working. A very good opportunity is to continue working with them, to buy Ukrainian software products, to engage with people who are working [via] UpWork. Help Ukrainians by giving them the ability to work, to do some paid work,&quot; whether still in the country or as refugees abroad. If you take something from this conversation, Anastasiia Voitova&apos;s words may be the ones that should stick. After all, Ukraine has a renowned IT workforce, with IT outsourcing among its most important exports.

Voitova, the head of customer solutions and security software engineer at Cossack Labs, just grabbed her laptop and some essentials when she suddenly fled to the mountains last month to &quot;a small village that doesn&apos;t even have a name.&quot; She doesn&apos;t have much with her, but she has more work to do than ever — to meet her clients&apos; increasing demand for cybersecurity defenses and to support the Ukrainian defense effort. Earlier this month, her Ukraine-based team even released a new open source cryptographic framework for data protection, on time, amidst the war.

Voitova was joined in this episode of The New Stack Makers by Oleksii Holub, open source developer, software consultant and GitHub Star, and Denys Dovhan, front-end engineer at Wix. All three of them are globally known open source community contributors and maintainers. And all three had to suddenly relocate from Kyiv this February. This conversation is a reflection on the lives of these three open source community leaders during the first three weeks of the Russian invasion.

This conversation aims to help answer what the open source community and the tech community as a whole can do to support our Ukrainian colleagues and friends. Because open source is a community first and foremost. 

&quot;Open source for me is a very big part of my life. I don&apos;t try to like gain anything out of it, I just code things. If I had a problem, I solve it, and I think to myself, why not share it with other people,&quot; Holub said.

He sees open source as an opportunity for influence in this war, but is also acutely aware that his unpaid labor could be used to support the aggression against his country. That&apos;s why he added terms of use to his open source projects stating that use of his code implicitly means you condemn the Russian invasion. This may be controversial in the strict open source licensing world, but the semantics of OSS seem less important to Holub right now.

Of course, when talking about open source, the world&apos;s largest code repository, GitHub, comes up. Whether GitHub should block Russia is an ongoing OSS debate. On the one hand, many are concerned about further cutting off Russia — which has already restricted access to Facebook, Instagram, and Twitter — from external news and facts about the ongoing conflict. On the other hand, with the widespread adoption of OSS in Russia, it&apos;s reasonable to assume swaths of open source code are directly supporting the invasion, or at least supporting the Russian government through income, taxes, and parts of the Kremlin&apos;s technical stack.

For Dovhan, there&apos;s a middle ground. His employer, website builder Wix, has blocked all payments in Russia but has maintained its freemium offering there. &quot;There is no possibility to pay for your premium website. But you still can make a free one, and that&apos;s a possibility for Russians to express themselves, and this is a space for free speech, which is limited in Russia.&quot; He proposes that GitHub similarly allow the creation of public repos in Russia, but block payments and private repos there.

Dovhan continued: &quot;I believe [the] open source community is deeply connected, and blocking access for Russian developers might cause serious issues in infrastructure. A lot of projects are actually made by Russian developers, for example, PostCSS, Nginx, and PostHTML.&quot;

These conversations will continue as this war changes the landscape of the tech world as we know it. One thing is for sure: Voitova, Dovhan and Holub have joined the hundreds of thousands of Ukrainian software developers in making a routine of work-war balance, doing everything they can, every waking hour of the day.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1309</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f133cd05-fd39-48fd-82ea-a3e763ecc59c</guid>
      <title>Securing the Modern Enterprise with Trust: A Look at the Upcoming Code to Cloud Summit</title>
<description><![CDATA[<p>From cloud security providers to open source, trust has become the foundation on which an organization's security is built. But the rise of cloud-native technologies such as containers and infrastructure as code (IaC) has ushered in new ways to build applications, along with requirements that challenge traditional approaches to security. The changing cloud-native landscape requires broader security coverage across the technology stack and more contextual awareness of the environment. But how should teams like infosec and DevOps rethink their approach to security?</p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/guy-eisen-3012a597/">Guy Eisenkot</a>, co-founder and vice president of product at <a href="https://bridgecrew.io/">Bridgecrew</a>; <a href="https://www.linkedin.com/in/barakschoster/?originalSubdomain=il">Barak Schoster Goihman</a>, senior director, chief architect at <a href="https://www.paloaltonetworks.com/">Palo Alto Networks</a>; and <a href="https://www.linkedin.com/in/ashishrajan/?originalSubdomain=au">Ashish Rajan</a>, head of security and compliance at <a href="https://www.pageuppeople.com/">PageUp</a> and producer and host of the <a href="https://cloudsecuritypodcast.tv/">Cloud Security Podcast</a>, preview what’s to come at <a href="https://start.paloaltonetworks.com/code-to-cloud-summit.html">Palo Alto Networks’ Code to Cloud Summit</a> on March 23-24, 2022, including the role of security and trust as it relates to DevOps, cloud service providers, the software supply chain, SBOMs (software bills of materials) and IBOMs (infrastructure bills of materials).</p><p><a href="https://thenewstack.io/author/alex/">Alex Williams</a>, founder and publisher of The New Stack, hosted this podcast.</p>
]]></description>
      <pubDate>Tue, 15 Mar 2022 19:27:51 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New  Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/securing-the-modern-enterprise-with-trust-a-look-at-the-upcoming-code-to-cloud-summit-oVWsGU37</link>
<content:encoded><![CDATA[<p>From cloud security providers to open source, trust has become the foundation on which an organization's security is built. But the rise of cloud-native technologies such as containers and infrastructure as code (IaC) has ushered in new ways to build applications, along with requirements that challenge traditional approaches to security. The changing cloud-native landscape requires broader security coverage across the technology stack and more contextual awareness of the environment. But how should teams like infosec and DevOps rethink their approach to security?</p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/guy-eisen-3012a597/">Guy Eisenkot</a>, co-founder and vice president of product at <a href="https://bridgecrew.io/">Bridgecrew</a>; <a href="https://www.linkedin.com/in/barakschoster/?originalSubdomain=il">Barak Schoster Goihman</a>, senior director, chief architect at <a href="https://www.paloaltonetworks.com/">Palo Alto Networks</a>; and <a href="https://www.linkedin.com/in/ashishrajan/?originalSubdomain=au">Ashish Rajan</a>, head of security and compliance at <a href="https://www.pageuppeople.com/">PageUp</a> and producer and host of the <a href="https://cloudsecuritypodcast.tv/">Cloud Security Podcast</a>, preview what’s to come at <a href="https://start.paloaltonetworks.com/code-to-cloud-summit.html">Palo Alto Networks’ Code to Cloud Summit</a> on March 23-24, 2022, including the role of security and trust as it relates to DevOps, cloud service providers, the software supply chain, SBOMs (software bills of materials) and IBOMs (infrastructure bills of materials).</p><p><a href="https://thenewstack.io/author/alex/">Alex Williams</a>, founder and publisher of The New Stack, hosted this podcast.</p>
]]></content:encoded>
      <enclosure length="28116357" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/0ff4267d-8457-425b-92ba-50f281cea76f/audio/1d934989-0db3-45f6-a3c2-c7577b1457da/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Securing the Modern Enterprise with Trust: A Look at the Upcoming Code to Cloud Summit</itunes:title>
      <itunes:author>The New  Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/a05e1635-b701-4278-a4da-fa0da9b624a0/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:29:17</itunes:duration>
<itunes:summary>From cloud security providers to open source, trust has become the foundation on which an organization&apos;s security is built. But the rise of cloud-native technologies such as containers and infrastructure as code (IaC) has ushered in new ways to build applications, along with requirements that challenge traditional approaches to security. The changing cloud-native landscape requires broader security coverage across the technology stack and more contextual awareness of the environment. But how should teams like infosec and DevOps rethink their approach to security?

In this episode of The New Stack Makers podcast, Guy Eisenkot, co-founder and vice president of product at Bridgecrew; Barak Schoster Goihman, senior director, chief architect at Palo Alto Networks; and Ashish Rajan, head of security and compliance at PageUp and producer and host of the Cloud Security Podcast, preview what’s to come at Palo Alto Networks’ Code to Cloud Summit on March 23-24, 2022, including the role of security and trust as it relates to DevOps, cloud service providers, the software supply chain, SBOMs (software bills of materials) and IBOMs (infrastructure bills of materials).

Alex Williams, founder and publisher of The New Stack, hosted this podcast.</itunes:summary>
<itunes:subtitle>From cloud security providers to open source, trust has become the foundation on which an organization&apos;s security is built. But the rise of cloud-native technologies such as containers and infrastructure as code (IaC) has ushered in new ways to build applications, along with requirements that challenge traditional approaches to security. The changing cloud-native landscape requires broader security coverage across the technology stack and more contextual awareness of the environment. But how should teams like infosec and DevOps rethink their approach to security?

In this episode of The New Stack Makers podcast, Guy Eisenkot, co-founder and vice president of product at Bridgecrew; Barak Schoster Goihman, senior director, chief architect at Palo Alto Networks; and Ashish Rajan, head of security and compliance at PageUp and producer and host of the Cloud Security Podcast, preview what’s to come at Palo Alto Networks’ Code to Cloud Summit on March 23-24, 2022, including the role of security and trust as it relates to DevOps, cloud service providers, the software supply chain, SBOMs (software bills of materials) and IBOMs (infrastructure bills of materials).

Alex Williams, founder and publisher of The New Stack, hosted this podcast.</itunes:subtitle>
      <itunes:keywords>software supply chain, thenewstack, devops, iac, sbom, infosec, makers, security</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1308</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">dd5185b2-78d6-43dc-af9a-08bf79639fa9</guid>
      <title>Optimizing Resource Management Using Machine Learning to Scale Kubernetes</title>
<description><![CDATA[<p>Kubernetes is great for large-scale systems, but its complexity and lack of transparency have caused higher cloud costs, deployment delays and developer frustration. As Kubernetes has taken off and workloads continue to move to containerized environments, optimizing resources is becoming increasingly important. In fact, the recent <a href="https://www.cncf.io/reports/cncf-annual-survey-2021/">2021 Cloud Native Survey</a> revealed that Kubernetes has already crossed the chasm to the mainstream, with 96 percent of organizations using or evaluating the technology.</p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/mprovo/">Matt Provo, founder and CEO of StormForge</a>, discusses new ways to think about Kubernetes, including resource optimization, which can be achieved by empowering developers through automation. He also shares the company’s new machine learning-powered multi-dimensional optimization solution, Optimize Live.</p><p><a href="https://thenewstack.io/author/alex/">Alex Williams</a>, founder and publisher of The New Stack, hosted this podcast.</p>
]]></description>
      <pubDate>Tue, 8 Mar 2022 09:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/optimizing-resources-in-kubernetes-environments-with-machine-learning-es5xkqj0</link>
<content:encoded><![CDATA[<p>Kubernetes is great for large-scale systems, but its complexity and lack of transparency have caused higher cloud costs, deployment delays and developer frustration. As Kubernetes has taken off and workloads continue to move to containerized environments, optimizing resources is becoming increasingly important. In fact, the recent <a href="https://www.cncf.io/reports/cncf-annual-survey-2021/">2021 Cloud Native Survey</a> revealed that Kubernetes has already crossed the chasm to the mainstream, with 96 percent of organizations using or evaluating the technology.</p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/mprovo/">Matt Provo, founder and CEO of StormForge</a>, discusses new ways to think about Kubernetes, including resource optimization, which can be achieved by empowering developers through automation. He also shares the company’s new machine learning-powered multi-dimensional optimization solution, Optimize Live.</p><p><a href="https://thenewstack.io/author/alex/">Alex Williams</a>, founder and publisher of The New Stack, hosted this podcast.</p>
]]></content:encoded>
      <enclosure length="26597294" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/a474673e-5c39-4591-ad02-0ec023bfb7e7/audio/ed5a1316-6546-4d49-b48e-c994ee4255cb/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Optimizing Resource Management Using Machine Learning to Scale Kubernetes</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/7693c9ba-a226-4018-af70-46eb3eba80e7/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:27:42</itunes:duration>
<itunes:summary>Kubernetes is great for large-scale systems, but its complexity and lack of transparency have caused higher cloud costs, deployment delays and developer frustration. As Kubernetes has taken off and workloads continue to move to containerized environments, optimizing resources is becoming increasingly important. In fact, the recent 2021 Cloud Native Survey revealed that Kubernetes has already crossed the chasm to the mainstream, with 96 percent of organizations using or evaluating the technology.

In this episode of The New Stack Makers podcast, Matt Provo, founder and CEO of StormForge, discusses new ways to think about Kubernetes, including resource optimization, which can be achieved by empowering developers through automation. He also shares the company’s new machine learning-powered multi-dimensional optimization solution, Optimize Live.

Alex Williams, founder and publisher of The New Stack, hosted this podcast.</itunes:summary>
<itunes:subtitle>Kubernetes is great for large-scale systems, but its complexity and lack of transparency have caused higher cloud costs, deployment delays and developer frustration. As Kubernetes has taken off and workloads continue to move to containerized environments, optimizing resources is becoming increasingly important. In fact, the recent 2021 Cloud Native Survey revealed that Kubernetes has already crossed the chasm to the mainstream, with 96 percent of organizations using or evaluating the technology.

In this episode of The New Stack Makers podcast, Matt Provo, founder and CEO of StormForge, discusses new ways to think about Kubernetes, including resource optimization, which can be achieved by empowering developers through automation. He also shares the company’s new machine learning-powered multi-dimensional optimization solution, Optimize Live.

Alex Williams, founder and publisher of The New Stack, hosted this podcast.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1307</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b8331850-1243-4897-8734-919699775512</guid>
      <title>Java Adapts to Cloud Native Computing</title>
<description><![CDATA[<p>While Java continues to be the most widely used programming language in the enterprise, how is it faring in the emerging cloud native ecosystem? Quite well, observed a panel of Oracle engineers who work on the language. In fact, they estimate that there are more than 50 million Java virtual machines running concurrently in the cloud at present.</p><p>In this latest edition of The New Stack Makers podcast, we discussed the current state of Java with <a href="https://www.linkedin.com/in/georgessaab/">Georges Saab</a>, Oracle's vice president of software development for the Java Platform Group; <a href="https://www.linkedin.com/in/donaldojdk/?originalSubdomain=ca">Donald Smith</a>, Oracle senior director of product management; and <a href="https://twitter.com/Sharat_Chander">Sharat Chander</a>, Oracle senior director of product management. TNS editors <a href="https://thenewstack.io/author/darryl-taft/">Darryl Taft</a> and <a href="https://thenewstack.io/author/joab/">Joab Jackson</a> hosted the conversation.</p>
]]></description>
      <pubDate>Tue, 1 Mar 2022 14:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/java-adapts-to-the-cloud-native-computing-QsexSZFo</link>
<content:encoded><![CDATA[<p>While Java continues to be the most widely used programming language in the enterprise, how is it faring in the emerging cloud native ecosystem? Quite well, observed a panel of Oracle engineers who work on the language. In fact, they estimate that there are more than 50 million Java virtual machines running concurrently in the cloud at present.</p><p>In this latest edition of The New Stack Makers podcast, we discussed the current state of Java with <a href="https://www.linkedin.com/in/georgessaab/">Georges Saab</a>, Oracle's vice president of software development for the Java Platform Group; <a href="https://www.linkedin.com/in/donaldojdk/?originalSubdomain=ca">Donald Smith</a>, Oracle senior director of product management; and <a href="https://twitter.com/Sharat_Chander">Sharat Chander</a>, Oracle senior director of product management. TNS editors <a href="https://thenewstack.io/author/darryl-taft/">Darryl Taft</a> and <a href="https://thenewstack.io/author/joab/">Joab Jackson</a> hosted the conversation.</p>
]]></content:encoded>
      <enclosure length="27570422" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/6fbb2dfd-af68-4f4e-8d5b-7fd7336812e5/audio/420500a4-27c3-47e2-a552-9f72dcd32cb5/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Java Adapts to Cloud Native Computing</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/6c6a18d5-dfe3-4b0e-80a2-53b469a04ec6/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:28:43</itunes:duration>
<itunes:summary>While Java continues to be the most widely used programming language in the enterprise, how is it faring in the emerging cloud native ecosystem? Quite well, observed a panel of Oracle engineers who work on the language. In fact, they estimate that there are more than 50 million Java virtual machines running concurrently in the cloud at present.

In this latest edition of The New Stack Makers podcast, we discussed the current state of Java with Georges Saab, Oracle&apos;s vice president of software development for the Java Platform Group; Donald Smith, Oracle senior director of product management; and Sharat Chander, Oracle senior director of product management. TNS editors Darryl Taft and Joab Jackson hosted the conversation.</itunes:summary>
<itunes:subtitle>While Java continues to be the most widely used programming language in the enterprise, how is it faring in the emerging cloud native ecosystem? Quite well, observed a panel of Oracle engineers who work on the language. In fact, they estimate that there are more than 50 million Java virtual machines running concurrently in the cloud at present.

In this latest edition of The New Stack Makers podcast, we discussed the current state of Java with Georges Saab, Oracle&apos;s vice president of software development for the Java Platform Group; Donald Smith, Oracle senior director of product management; and Sharat Chander, Oracle senior director of product management. TNS editors Darryl Taft and Joab Jackson hosted the conversation.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1306</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a794d24d-1068-470a-8878-1c85042dacfb</guid>
      <title>Mitigating Risks in Cloud Native Applications</title>
<description><![CDATA[<p>Two decades ago, security was an afterthought; it was often ‘bolted on’ to existing applications, leaving businesses with a reactive approach to threat visibility and enforcement. But with the proliferation of cloud native applications and businesses employing work-from-anywhere models, the traditional approach to security is being reimagined to play an integral role from development through operations. By identifying, assessing, prioritizing, and adapting to risk across their applications, organizations are moving to a full view of their risk posture, employing security across the entire lifecycle.</p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/ratantipirneni/">Ratan Tipirneni, president and CEO</a> of <a href="https://www.tigera.io/">Tigera</a>, discusses how organizations can take an active approach to security: applying zero-trust principles to reduce the application’s attack surface, harnessing machine learning to combat runtime security risks, and enabling continuous compliance while mitigating risks from vulnerabilities and attacks through security policy changes.</p><p><a href="https://thenewstack.io/author/alex/">Alex Williams</a>, founder and publisher of The New Stack, hosted this podcast.</p>
]]></description>
      <pubDate>Tue, 22 Feb 2022 20:45:07 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack Podcast)</author>
      <link>https://thenewstack.simplecast.com/episodes/mitigating-risks-in-cloud-native-applications-GHKMGAKv</link>
<content:encoded><![CDATA[<p>Two decades ago, security was an afterthought; it was often ‘bolted on’ to existing applications, leaving businesses with a reactive approach to threat visibility and enforcement. But with the proliferation of cloud native applications and businesses employing work-from-anywhere models, the traditional approach to security is being reimagined to play an integral role from development through operations. By identifying, assessing, prioritizing, and adapting to risk across their applications, organizations are moving to a full view of their risk posture, employing security across the entire lifecycle.</p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/ratantipirneni/">Ratan Tipirneni, president and CEO</a> of <a href="https://www.tigera.io/">Tigera</a>, discusses how organizations can take an active approach to security: applying zero-trust principles to reduce the application’s attack surface, harnessing machine learning to combat runtime security risks, and enabling continuous compliance while mitigating risks from vulnerabilities and attacks through security policy changes.</p><p><a href="https://thenewstack.io/author/alex/">Alex Williams</a>, founder and publisher of The New Stack, hosted this podcast.</p>
]]></content:encoded>
      <enclosure length="27017832" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/91146608-e5cb-4438-b0c7-38a95bbb8ffb/audio/d65438fe-07a2-443d-98a3-2943b8ae3c75/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Mitigating Risks in Cloud Native Applications</itunes:title>
      <itunes:author>The New Stack Podcast</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/cf48cd94-87c0-4bf9-b877-989693a0eb7a/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:28:08</itunes:duration>
<itunes:summary>Two decades ago, security was an afterthought; it was often ‘bolted on’ to existing applications, leaving businesses with a reactive approach to threat visibility and enforcement. But with the proliferation of cloud native applications and businesses employing work-from-anywhere models, the traditional approach to security is being reimagined to play an integral role from development through operations. By identifying, assessing, prioritizing, and adapting to risk across their applications, organizations are moving to a full view of their risk posture, employing security across the entire lifecycle.

In this episode of The New Stack Makers podcast, Ratan Tipirneni, president and CEO of Tigera, discusses how organizations can take an active approach to security: applying zero-trust principles to reduce the application’s attack surface, harnessing machine learning to combat runtime security risks, and enabling continuous compliance while mitigating risks from vulnerabilities and attacks through security policy changes.

Alex Williams, founder and publisher of The New Stack, hosted this podcast.</itunes:summary>
<itunes:subtitle>Two decades ago, security was an afterthought; it was often ‘bolted on’ to existing applications, leaving businesses with a reactive approach to threat visibility and enforcement. But with the proliferation of cloud native applications and businesses employing work-from-anywhere models, the traditional approach to security is being reimagined to play an integral role from development through operations. By identifying, assessing, prioritizing, and adapting to risk across their applications, organizations are moving to a full view of their risk posture, employing security across the entire lifecycle.

In this episode of The New Stack Makers podcast, Ratan Tipirneni, president and CEO of Tigera, discusses how organizations can take an active approach to security: applying zero-trust principles to reduce the application’s attack surface, harnessing machine learning to combat runtime security risks, and enabling continuous compliance while mitigating risks from vulnerabilities and attacks through security policy changes.

Alex Williams, founder and publisher of The New Stack, hosted this podcast.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1305</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b1090336-505b-447c-ad94-b1e9d2f2da8e</guid>
      <title>Engineering the Reliability of Chaotic Cloud Native Environments</title>
<description><![CDATA[<p>Cloud-native applications provide an advantage in terms of their scalability and velocity. Yet, despite their resiliency, the complexity of these systems has grown as the number of application components continues to increase. Understanding how these components fit together has stretched beyond what can be easily digested, further challenging organizations’ ability to prepare for technical issues that may arise from these systems’ complexity.</p><p>Last month, <a href="https://www.chaosnative.com/">ChaosNative</a> hosted its second annual engineering event, <a href="https://chaoscarnival.io/">Chaos Carnival</a>, where we discussed the principles of chaos engineering and using them to optimize cloud applications in today’s complex IT systems.</p><p>The panelists for this discussion:</p><ul><li><a href="https://www.linkedin.com/in/karthik-satchitanand/?originalSubdomain=in">Karthik Satchitanand</a>, Co-founder and Open-Source Lead, ChaosNative</li><li><a href="https://www.linkedin.com/in/ramyamoorthy/?originalSubdomain=in">Ramya Ramalinga Moorthy</a>, Industrialization Head - Reliability & Resilience Engineering, LTI – Larsen & Toubro Infotech</li><li><a href="https://www.linkedin.com/in/charlotte-d-mach/?originalSubdomain=nl">Charlotte Mach</a>, Engineering Manager, Container Solutions</li><li><a href="https://www.linkedin.com/in/norajones1/">Nora Jones</a>, Founder and CEO, Jeli</li></ul><p>In this episode of The New Stack Makers podcast, <a href="https://thenewstack.io/author/alex/">Alex Williams</a>, founder and publisher of The New Stack, served as moderator, with the help of Joab Jackson, editor-in-chief of The New Stack.</p>
]]></description>
      <pubDate>Tue, 15 Feb 2022 20:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/engineering-the-reliability-of-chaotic-cloud-native-environments-OaIZQqc_</link>
<content:encoded><![CDATA[<p>Cloud-native applications provide an advantage in terms of their scalability and velocity. Yet, despite their resiliency, the complexity of these systems has grown as the number of application components continues to increase. Understanding how these components fit together has stretched beyond what can be easily digested, further challenging organizations’ ability to prepare for technical issues that may arise from these systems’ complexity.</p><p>Last month, <a href="https://www.chaosnative.com/">ChaosNative</a> hosted its second annual engineering event, <a href="https://chaoscarnival.io/">Chaos Carnival</a>, where we discussed the principles of chaos engineering and using them to optimize cloud applications in today’s complex IT systems.</p><p>The panelists for this discussion:</p><ul><li><a href="https://www.linkedin.com/in/karthik-satchitanand/?originalSubdomain=in">Karthik Satchitanand</a>, Co-founder and Open-Source Lead, ChaosNative</li><li><a href="https://www.linkedin.com/in/ramyamoorthy/?originalSubdomain=in">Ramya Ramalinga Moorthy</a>, Industrialization Head - Reliability & Resilience Engineering, LTI – Larsen & Toubro Infotech</li><li><a href="https://www.linkedin.com/in/charlotte-d-mach/?originalSubdomain=nl">Charlotte Mach</a>, Engineering Manager, Container Solutions</li><li><a href="https://www.linkedin.com/in/norajones1/">Nora Jones</a>, Founder and CEO, Jeli</li></ul><p>In this episode of The New Stack Makers podcast, <a href="https://thenewstack.io/author/alex/">Alex Williams</a>, founder and publisher of The New Stack, served as moderator, with the help of Joab Jackson, editor-in-chief of The New Stack.</p>
]]></content:encoded>
      <enclosure length="51351552" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/fa966189-d07f-4f34-91eb-75045f4bce0d/audio/649f6c78-a215-451c-b5f9-cf73422b0885/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Engineering the Reliability of Chaotic Cloud Native Environments</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/5f00ff97-5ccc-4507-92b0-39b41e9fe81b/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:53:29</itunes:duration>
<itunes:summary>Cloud-native applications provide an advantage in terms of their scalability and velocity. Yet, despite their resiliency, the complexity of these systems has grown as the number of application components continues to increase. Understanding how these components fit together has stretched beyond what can be easily digested, further challenging organizations’ ability to prepare for technical issues that may arise from these systems’ complexity.

Last month, ChaosNative hosted its second annual engineering event, Chaos Carnival, where we discussed the principles of chaos engineering and using them to optimize cloud applications in today’s complex IT systems.

The panelists for this discussion:

Karthik Satchitanand, Co-founder and Open-Source Lead, ChaosNative
Ramya Ramalinga Moorthy, Industrialization Head - Reliability &amp; Resilience Engineering, LTI – Larsen &amp; Toubro Infotech
Charlotte Mach, Engineering Manager, Container Solutions
Nora Jones, Founder and CEO, Jeli

In this episode of The New Stack Makers podcast, Alex Williams, founder and publisher of The New Stack, served as moderator, with the help of Joab Jackson, editor-in-chief of The New Stack.</itunes:summary>
      <itunes:subtitle>Cloud-native applications provide an advantage in terms of their scalability and velocity. Yet, despite their resiliency, the complexity of these systems has grown as the number of application components continues to increase. Understanding how these components fit together has stretched beyond what can be easily digested, further challenging organizations' ability to prepare for technical issues that may arise from this complexity.

Last month, ChaosNative hosted its second annual engineering event, Chaos Carnival, where we discussed the principles of chaos engineering and using them to optimize cloud applications in today’s complex IT systems.

The panelists for this discussion:

Karthik Satchitanand, Co-founder and Open-Source Lead, ChaosNative
Ramya Ramalinga Moorthy, Industrialization Head - Reliability &amp; Resilience Engineering, LTI – Larsen &amp; Toubro Infotech
Charlotte Mach, Engineering Manager, Container Solutions
Nora Jones, Founder and CEO, Jeli

In this episode of The New Stack Makers podcast, Alex Williams, founder and publisher of The New Stack, served as moderator, with the help of Joab Jackson, editor-in-chief of The New Stack.</itunes:subtitle>
      <itunes:keywords>thenewstack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1304</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2b7a2d21-2741-45d9-807d-4713b3cb9838</guid>
      <title>TypeScript and the Power of a Statically-Typed Language</title>
      <description><![CDATA[<p>If there is a <a href="https://thenewstack.io/how-typescript-helps-enterprise-developers/">secret to the success</a> of TypeScript, it is in the type checking, ensuring that the data flowing through the program is of the correct kind. Type checking cuts down on errors, sets the stage for better tooling, and allows developers to map their programs at a higher level. And <a href="https://www.typescriptlang.org/">TypeScript</a> itself, a statically typed superset of JavaScript, ensures that an army of JavaScript programmers can easily enjoy these advanced programming benefits with a minimal learning curve.</p><p>In this latest edition of <a href="/podcasts">The New Stack Makers podcast</a>, we spoke with a few of TypeScript's designers and maintainers to learn a bit more about the design of the language: <a href="https://www.linkedin.com/in/ryan-cavanaugh-aa4a37106/">Ryan Cavanaugh</a>, a principal software engineering manager for <a href="https://www.microsoft.com/">Microsoft</a>; <a href="https://www.linkedin.com/in/lukejhoban/">Luke Hoban</a>, chief technology officer for <a href="https://www.pulumi.com/">Pulumi</a>, who was one of the original creators of TypeScript; and <a href="https://www.linkedin.com/in/daniel-rosenwasser-b56b7837/">Daniel Rosenwasser</a>, senior program manager, Microsoft. TNS editors <a href="/author/darryl-taft/">Darryl Taft</a> and <a href="/author/joab/">Joab Jackson</a> hosted the discussion.</p>
]]></description>
      <pubDate>Tue, 08 Feb 2022 16:52:39 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/typescript-and-the-power-of-a-statically-typed-language-uhAms8bN</link>
      <content:encoded><![CDATA[<p>If there is a <a href="https://thenewstack.io/how-typescript-helps-enterprise-developers/">secret to the success</a> of TypeScript, it is in the type checking, ensuring that the data flowing through the program is of the correct kind. Type checking cuts down on errors, sets the stage for better tooling, and allows developers to map their programs at a higher level. And <a href="https://www.typescriptlang.org/">TypeScript</a> itself, a statically typed superset of JavaScript, ensures that an army of JavaScript programmers can easily enjoy these advanced programming benefits with a minimal learning curve.</p><p>In this latest edition of <a href="/podcasts">The New Stack Makers podcast</a>, we spoke with a few of TypeScript's designers and maintainers to learn a bit more about the design of the language: <a href="https://www.linkedin.com/in/ryan-cavanaugh-aa4a37106/">Ryan Cavanaugh</a>, a principal software engineering manager for <a href="https://www.microsoft.com/">Microsoft</a>; <a href="https://www.linkedin.com/in/lukejhoban/">Luke Hoban</a>, chief technology officer for <a href="https://www.pulumi.com/">Pulumi</a>, who was one of the original creators of TypeScript; and <a href="https://www.linkedin.com/in/daniel-rosenwasser-b56b7837/">Daniel Rosenwasser</a>, senior program manager, Microsoft. TNS editors <a href="/author/darryl-taft/">Darryl Taft</a> and <a href="/author/joab/">Joab Jackson</a> hosted the discussion.</p>
]]></content:encoded>
      <enclosure length="28960642" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/ba2601a8-b82f-4f6a-b22d-9e4e0ca31f4b/audio/41fad0b0-1070-49b8-8e6a-2a06ed62eeba/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>TypeScript and the Power of a Statically-Typed Language</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/ee584016-9259-465e-8013-482558f02db4/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:30:10</itunes:duration>
      <itunes:summary>If there is a secret to the success of TypeScript, it is in the type checking, ensuring that the data flowing through the program is of the correct kind. Type checking cuts down on errors, sets the stage for better tooling, and allows developers to map their programs at a higher level. And TypeScript itself, a statically typed superset of JavaScript, ensures that an army of JavaScript programmers can easily enjoy these advanced programming benefits with a minimal learning curve.

In this latest edition of The New Stack Makers podcast, we spoke with a few of TypeScript&apos;s designers and maintainers to learn a bit more about the design of the language: Ryan Cavanaugh, a principal software engineering manager for Microsoft; Luke Hoban, chief technology officer for Pulumi, who was one of the original creators of TypeScript; and Daniel Rosenwasser, senior program manager, Microsoft. TNS editors Darryl Taft and Joab Jackson hosted the discussion.</itunes:summary>
      <itunes:subtitle>If there is a secret to the success of TypeScript, it is in the type checking, ensuring that the data flowing through the program is of the correct kind. Type checking cuts down on errors, sets the stage for better tooling, and allows developers to map their programs at a higher level. And TypeScript itself, a statically typed superset of JavaScript, ensures that an army of JavaScript programmers can easily enjoy these advanced programming benefits with a minimal learning curve.

In this latest edition of The New Stack Makers podcast, we spoke with a few of TypeScript&apos;s designers and maintainers to learn a bit more about the design of the language: Ryan Cavanaugh, a principal software engineering manager for Microsoft; Luke Hoban, chief technology officer for Pulumi, who was one of the original creators of TypeScript; and Daniel Rosenwasser, senior program manager, Microsoft. TNS editors Darryl Taft and Joab Jackson hosted the discussion.</itunes:subtitle>
      <itunes:keywords>the new stack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1303</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">98affa8e-5bbd-4b46-b5c8-47b42b18552e</guid>
      <title>When to Use Kubernetes, and When to Use Cloud Foundry</title>
      <description><![CDATA[<p>While Kubernetes brings a great deal of flexibility to application management, the Cloud Foundry platform-as-a-service (PaaS) software offers the best level of standardization, observed <a href="https://www.linkedin.com/in/julianfischer">Julian Fischer</a>, CEO of cloud native services provider anynines.</p><p>We chatted with Fischer for this latest episode of <a href="/podcasts">The New Stack Makers</a> podcast to learn about the company's experience in managing large-scale deployments of both Kubernetes and Cloud Foundry.</p><p>"A lot of the conversation today is about Kubernetes. But the Cloud Foundry ecosystem has been very strong," especially for enterprises, noted Fischer.</p>
]]></description>
      <pubDate>Tue, 01 Feb 2022 13:00:00 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/when-to-use-kubernetes-and-when-to-use-cloud-foundry-uCC5Xqga</link>
      <content:encoded><![CDATA[<p>While Kubernetes brings a great deal of flexibility to application management, the Cloud Foundry platform-as-a-service (PaaS) software offers the best level of standardization, observed <a href="https://www.linkedin.com/in/julianfischer">Julian Fischer</a>, CEO of cloud native services provider anynines.</p><p>We chatted with Fischer for this latest episode of <a href="/podcasts">The New Stack Makers</a> podcast to learn about the company's experience in managing large-scale deployments of both Kubernetes and Cloud Foundry.</p><p>"A lot of the conversation today is about Kubernetes. But the Cloud Foundry ecosystem has been very strong," especially for enterprises, noted Fischer.</p>
]]></content:encoded>
      <enclosure length="23313867" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/80c81588-4acd-47ef-a4a7-acfa1f19e599/audio/e888b076-e34d-4ccb-b256-3c9c304cb6d0/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>When to Use Kubernetes, and When to Use Cloud Foundry</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/2119eaae-892d-4420-9a60-ca29309cf629/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:24:17</itunes:duration>
      <itunes:summary>While Kubernetes brings a great deal of flexibility to application management, the Cloud Foundry platform-as-a-service (PaaS) software offers the best level of standardization, observed Julian Fischer, CEO of cloud native services provider anynines.

We chatted with Fischer for this latest episode of The New Stack Makers podcast to learn about the company&apos;s experience in managing large-scale deployments of both Kubernetes and Cloud Foundry.

&quot;A lot of the conversation today is about Kubernetes. But the Cloud Foundry ecosystem has been very strong,&quot; especially for enterprises, noted Fischer.</itunes:summary>
      <itunes:subtitle>While Kubernetes brings a great deal of flexibility to application management, the Cloud Foundry platform-as-a-service (PaaS) software offers the best level of standardization, observed Julian Fischer, CEO of cloud native services provider anynines.

We chatted with Fischer for this latest episode of The New Stack Makers podcast to learn about the company&apos;s experience in managing large-scale deployments of both Kubernetes and Cloud Foundry.

&quot;A lot of the conversation today is about Kubernetes. But the Cloud Foundry ecosystem has been very strong,&quot; especially for enterprises, noted Fischer.</itunes:subtitle>
      <itunes:keywords>the new stack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1302</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d97abe35-cce9-4e73-9d5e-14137967c15c</guid>
      <title>Makings of a Web3 Stack: Agoric, IPFS, Cosmos Network</title>
      <description><![CDATA[<p>Want an easy way to get started in Web3? Download a <a href="https://ipfs.io/#install">desktop copy</a> of <a href="https://thenewstack.io/interplanetary-file-system-could-pave-the-way-for-a-distributed-permanent-web/">IPFS</a> (Interplanetary File System) and install it on your computer, advises <a href="https://www.linkedin.com/in/dietrich/">Dietrich Ayala</a>, IPFS Ecosystem Growth Engineer, <a href="https://protocol.ai/">Protocol Labs</a>, in our most recent edition of <a href="https://thenewstack.io/podcasts">The New Stack Makers podcast</a>.</p><p>We've been hearing a lot of hype about Web3 and its promise of decentralization — how it will bring the power of the web back to the people through the use of a blockchain. So what's up with that? How do you build a Web3 stack? What can you build with a Web3 stack? How far along is the community with tooling and ease of use?</p><p>This virtual panel podcast sets out to answer all these questions.</p><p>In addition to speaking to Ayala, we spoke with <a href="https://www.linkedin.com/in/rowlandgraus/">Rowland Graus</a>, head of product for <a href="https://agoric.com/">Agoric</a>, and <a href="https://www.linkedin.com/in/marko-baricevic-ab0b49214/?originalSubdomain=de">Marko Baricevic</a>, software engineer for <a href="https://interchain.io/">The Interchain Foundation</a>, which manages <a href="https://cosmos.network/">Cosmos Network</a>, an open source technology that helps blockchains interoperate. Each participant describes the role their respective technologies play in the Web3 ecosystem. These technologies are often used together, so they represent an emerging blockchain stack of sorts.</p><p>TNS Editor-in-Chief <a href="https://thenewstack.io/author/joab/">Joab Jackson</a> hosted the discussion.</p>
]]></description>
      <pubDate>Tue, 25 Jan 2022 20:54:13 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/makings-of-a-web3-stack-agora-ipfs-cosmos-network-VWWVQHAu</link>
      <content:encoded><![CDATA[<p>Want an easy way to get started in Web3? Download a <a href="https://ipfs.io/#install">desktop copy</a> of <a href="https://thenewstack.io/interplanetary-file-system-could-pave-the-way-for-a-distributed-permanent-web/">IPFS</a> (Interplanetary File System) and install it on your computer, advises <a href="https://www.linkedin.com/in/dietrich/">Dietrich Ayala</a>, IPFS Ecosystem Growth Engineer, <a href="https://protocol.ai/">Protocol Labs</a>, in our most recent edition of <a href="https://thenewstack.io/podcasts">The New Stack Makers podcast</a>.</p><p>We've been hearing a lot of hype about Web3 and its promise of decentralization — how it will bring the power of the web back to the people through the use of a blockchain. So what's up with that? How do you build a Web3 stack? What can you build with a Web3 stack? How far along is the community with tooling and ease of use?</p><p>This virtual panel podcast sets out to answer all these questions.</p><p>In addition to speaking to Ayala, we spoke with <a href="https://www.linkedin.com/in/rowlandgraus/">Rowland Graus</a>, head of product for <a href="https://agoric.com/">Agoric</a>, and <a href="https://www.linkedin.com/in/marko-baricevic-ab0b49214/?originalSubdomain=de">Marko Baricevic</a>, software engineer for <a href="https://interchain.io/">The Interchain Foundation</a>, which manages <a href="https://cosmos.network/">Cosmos Network</a>, an open source technology that helps blockchains interoperate. Each participant describes the role their respective technologies play in the Web3 ecosystem. These technologies are often used together, so they represent an emerging blockchain stack of sorts.</p><p>TNS Editor-in-Chief <a href="https://thenewstack.io/author/joab/">Joab Jackson</a> hosted the discussion.</p>
]]></content:encoded>
      <enclosure length="31396719" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/8ffb3851-5621-4fe5-85e9-2814345e944f/audio/6d06abe4-e6a0-4d20-acd6-0a9c88798509/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Makings of a Web3 Stack: Agoric, IPFS, Cosmos Network</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/d4c52e1f-75c4-4dd7-bf29-b8419d91b4a6/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:32:42</itunes:duration>
      <itunes:summary>Want an easy way to get started in Web3? Download a desktop copy of IPFS (Interplanetary File System) and install it on your computer, advises Dietrich Ayala, IPFS Ecosystem Growth Engineer, Protocol Labs, in our most recent edition of The New Stack Makers podcast.

We&apos;ve been hearing a lot of hype about Web3 and its promise of decentralization — how it will bring the power of the web back to the people through the use of a blockchain. So what&apos;s up with that? How do you build a Web3 stack? What can you build with a Web3 stack? How far along is the community with tooling and ease of use?

This virtual panel podcast sets out to answer all these questions.

In addition to speaking to Ayala, we spoke with Rowland Graus, head of product for Agoric, and Marko Baricevic, software engineer for The Interchain Foundation, which manages Cosmos Network, an open source technology that helps blockchains interoperate. Each participant describes the role their respective technologies play in the Web3 ecosystem. These technologies are often used together, so they represent an emerging blockchain stack of sorts.

TNS Editor-in-Chief Joab Jackson hosted the discussion.</itunes:summary>
      <itunes:subtitle>Want an easy way to get started in Web3? Download a desktop copy of IPFS (Interplanetary File System) and install it on your computer, advises Dietrich Ayala, IPFS Ecosystem Growth Engineer, Protocol Labs, in our most recent edition of The New Stack Makers podcast.

We&apos;ve been hearing a lot of hype about Web3 and its promise of decentralization — how it will bring the power of the web back to the people through the use of a blockchain. So what&apos;s up with that? How do you build a Web3 stack? What can you build with a Web3 stack? How far along is the community with tooling and ease of use?

This virtual panel podcast sets out to answer all these questions.

In addition to speaking to Ayala, we spoke with Rowland Graus, head of product for Agoric, and Marko Baricevic, software engineer for The Interchain Foundation, which manages Cosmos Network, an open source technology that helps blockchains interoperate. Each participant describes the role their respective technologies play in the Web3 ecosystem. These technologies are often used together, so they represent an emerging blockchain stack of sorts.

TNS Editor-in-Chief Joab Jackson hosted the discussion.</itunes:subtitle>
      <itunes:keywords>the new stack, makers</itunes:keywords>
      <itunes:explicit>true</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1301</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b94b805d-1a97-4967-a146-e84370983871</guid>
      <title>Managing Cloud Security Risk Posture Through a Full Stack Approach</title>
      <description><![CDATA[<p>Kubernetes, containers, and cloud-native technologies offer organizations the benefits of portability, flexibility and increased developer productivity, but the security risks associated with adopting them continue to be a top concern for companies. In the recent <a href="https://www.redhat.com/en/engage/state-kubernetes-security-s-202106210910">State of Kubernetes Security report</a>, 94% of respondents experienced at least one security incident in their Kubernetes environment in the last 12 months.</p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/avishua/">Avi Shua</a>, CEO and Co-Founder of <a href="https://orca.security/">Orca Security</a>, talks about how organizations can enhance the security of their cloud environment by acting on critical risks such as vulnerabilities, malware and misconfigurations, identified by taking a snapshot of Kubernetes clusters and analyzing them, without the need for an agent.</p>
]]></description>
      <pubDate>Wed, 19 Jan 2022 19:13:52 +0000</pubDate>
      <author>podcasts@thenewstack.io (The New Stack)</author>
      <link>https://thenewstack.simplecast.com/episodes/managing-cloud-security-risk-posture-through-a-full-stack-approach-9k4lqkyg</link>
      <content:encoded><![CDATA[<p>Kubernetes, containers, and cloud-native technologies offer organizations the benefits of portability, flexibility and increased developer productivity, but the security risks associated with adopting them continue to be a top concern for companies. In the recent <a href="https://www.redhat.com/en/engage/state-kubernetes-security-s-202106210910">State of Kubernetes Security report</a>, 94% of respondents experienced at least one security incident in their Kubernetes environment in the last 12 months.</p><p>In this episode of The New Stack Makers podcast, <a href="https://www.linkedin.com/in/avishua/">Avi Shua</a>, CEO and Co-Founder of <a href="https://orca.security/">Orca Security</a>, talks about how organizations can enhance the security of their cloud environment by acting on critical risks such as vulnerabilities, malware and misconfigurations, identified by taking a snapshot of Kubernetes clusters and analyzing them, without the need for an agent.</p>
]]></content:encoded>
      <enclosure length="9093999" type="audio/mpeg" url="https://cdn.simplecast.com/audio/5672b58f-7201-4e0e-b0af-da702259d97f/episodes/a6731779-6cb6-4984-8f6e-4c45fdc78ebb/audio/314c4a73-0b02-478b-a348-254891cf27df/default_tc.mp3?aid=rss_feed&amp;feed=IgzWks06"/>
      <itunes:title>Managing Cloud Security Risk Posture Through a Full Stack Approach</itunes:title>
      <itunes:author>The New Stack</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/1425ebfd-95bd-4a66-b963-a0b885c75680/74acb7d4-f6f8-487c-ba69-f9f55614bc56/3000x3000/tns-makers-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:09:28</itunes:duration>
      <itunes:summary>Kubernetes, containers, and cloud-native technologies offer organizations the benefits of portability, flexibility and increased developer productivity, but the security risks associated with adopting them continue to be a top concern for companies. In the recent State of Kubernetes Security report, 94% of respondents experienced at least one security incident in their Kubernetes environment in the last 12 months.

In this episode of The New Stack Makers podcast, Avi Shua, CEO and Co-Founder of Orca Security, talks about how organizations can enhance the security of their cloud environment by acting on critical risks such as vulnerabilities, malware and misconfigurations, identified by taking a snapshot of Kubernetes clusters and analyzing them, without the need for an agent.</itunes:summary>
      <itunes:subtitle>Kubernetes, containers, and cloud-native technologies offer organizations the benefits of portability, flexibility and increased developer productivity, but the security risks associated with adopting them continue to be a top concern for companies. In the recent State of Kubernetes Security report, 94% of respondents experienced at least one security incident in their Kubernetes environment in the last 12 months.

In this episode of The New Stack Makers podcast, Avi Shua, CEO and Co-Founder of Orca Security, talks about how organizations can enhance the security of their cloud environment by acting on critical risks such as vulnerabilities, malware and misconfigurations, identified by taking a snapshot of Kubernetes clusters and analyzing them, without the need for an agent.</itunes:subtitle>
      <itunes:keywords>the new stack, makers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1300</itunes:episode>
    </item>
  </channel>
</rss>