<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="/global/feed/rss.xslt" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:media="http://search.yahoo.com/mrss/" xmlns:podaccess="https://access.acast.com/schema/1.0/" xmlns:acast="https://schema.acast.com/1.0/">
	<channel>
		<ttl>60</ttl>
		<generator>acast.com</generator>
		<title><![CDATA[AI News & Strategy Daily with Nate B. Jones]]></title>
		<link>https://natesnewsletter.substack.com/</link>
		<atom:link href="https://feeds.acast.com/public/shows/69ab3b7c7036d739021982df" rel="self" type="application/rss+xml"/>
		<language>en</language>
		<copyright>Nate B. Jones</copyright>
		<itunes:keywords>artificial intelligence, AI strategy, AI news, AI tools, generative AI, large language models, AI for business, AI workflows, AI productivity, AI leadership, future of work, Nate B Jones, AI agents, ChatGPT, Claude</itunes:keywords>
		<itunes:author>Nate B. Jones</itunes:author>
		<itunes:subtitle>Daily AI strategy and news for the AI curious.</itunes:subtitle>
		<itunes:summary><![CDATA[Daily AI strategy and news for the AI curious, builders & executives. I'm Nate B. Jones, a 20-year product leader, AI strategist, and your guide through the noise. Most AI content is hype or generic advice. I cut through both with frameworks and workflows you can use immediately. Whether you're an executive making AI decisions or a builder implementing solutions, you'll get practical guidance, tested in real organizations. New videos every day on YouTube. Deeper analysis + exclusive playbooks → https://natesnewsletter.substack.com/<hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		<description><![CDATA[Daily AI strategy and news for the AI curious, builders & executives. I'm Nate B. Jones, a 20-year product leader, AI strategist, and your guide through the noise. Most AI content is hype or generic advice. I cut through both with frameworks and workflows you can use immediately. Whether you're an executive making AI decisions or a builder implementing solutions, you'll get practical guidance, tested in real organizations. New videos every day on YouTube. Deeper analysis + exclusive playbooks → https://natesnewsletter.substack.com/<hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
		<itunes:explicit>false</itunes:explicit>
		<itunes:owner>
			<itunes:name>Nate B. Jones</itunes:name>
			<itunes:email>info+69ab3b7c7036d739021982df@mg-eu.acast.com</itunes:email>
		</itunes:owner>
		<acast:showId>69ab3b7c7036d739021982df</acast:showId>
		<acast:showUrl>ai-news-strategy-daily-with-nate-b-jones</acast:showUrl>
		<acast:signature key="EXAMPLE" algorithm="aes-256-cbc"><![CDATA[wbG1Z7+6h9QOi+CR1Dv0uQ==]]></acast:signature>
		<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmTHg2/BXqPr07kkpFZ5JfhvEZqggcpunI6E1w81XpUaBscFc3skEQ0jWG4GCmQYJ66w6pH6P/aGd3DnpJN6h/CD4icd8kZVl4HZn12KicA2k]]></acast:settings>
		<acast:network id="69ab3b33b49eecc0b7c4a853" slug="julie-morris-69ab3b33b49eecc0b7c4a853"><![CDATA[Julie Morris]]></acast:network>
		<acast:importedFeed>https://rss2.flightcast.com/er4hmemv3mkf0n6hn6wycxdl.xml?fc_import=5d8bf23bb029092b</acast:importedFeed>
		<itunes:type>episodic</itunes:type>
		<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
		<image>
			<url>https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg</url>
			<link>https://natesnewsletter.substack.com/</link>
			<title><![CDATA[AI News & Strategy Daily with Nate B. Jones]]></title>
		</image>
		<itunes:new-feed-url>https://feeds.acast.com/public/shows/69ab3b7c7036d739021982df</itunes:new-feed-url>
		<item>
			<title>Wall Street Just Bet $285 Billion on AI Agents. The Best One Barely Works.</title>
			<itunes:title>Wall Street Just Bet $285 Billion on AI Agents. The Best One Barely Works.</itunes:title>
			<pubDate>Sat, 04 Apr 2026 21:10:06 GMT</pubDate>
			<itunes:duration>22:28</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69d17e313a785fb94b961c7e/media.mp3" length="129496094" type="audio/mpeg"/>
			<guid isPermaLink="false">69d17e313a785fb94b961c7e</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/wall-street-just-bet-285-billion-on-ai-agents-the-best-one-b</link>
			<acast:episodeId>69d17e313a785fb94b961c7e</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>wall-street-just-bet-285-billion-on-ai-agents-the-best-one-b</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTnt/aaZdSdyUdn4u8z2DgH7esXKtg237rU4q8AhsYE7WzkSxlXHiCy0ceI6advgX5y4lc7xQz52+Uz4KNz8th9W]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening with AI agents that claim to do the work for you?</p><br><p>The common story is that outcome-focused AI agents have finally arrived. The reality is that most of them still can't answer three basic questions.</p><br><p>In this video, I share the inside scoop on which AI agents actually deliver outcomes and which are still living on demo energy:</p><br><p> • Why verifiability is the hidden foundation of every real agent</p><p> • How three questions separate genuine agents from expensive hype</p><p> • What Lindy, Google Opal, Sauna, and Obvious actually get right</p><p> • Where the three-layer architecture points for builders who want control</p><br><p>Operators and builders who apply these three questions before committing will avoid the hype cycle and invest in tools that compound value over time.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: https://natesnewsletter.substack.com/p/every-ai-agent-you-use-has-the-same?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening with AI agents that claim to do the work for you?</p><br><p>The common story is that outcome-focused AI agents have finally arrived. The reality is that most of them still can't answer three basic questions.</p><br><p>In this video, I share the inside scoop on which AI agents actually deliver outcomes and which are still living on demo energy:</p><br><p> • Why verifiability is the hidden foundation of every real agent</p><p> • How three questions separate genuine agents from expensive hype</p><p> • What Lindy, Google Opal, Sauna, and Obvious actually get right</p><p> • Where the three-layer architecture points for builders who want control</p><br><p>Operators and builders who apply these three questions before committing will avoid the hype cycle and invest in tools that compound value over time.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: https://natesnewsletter.substack.com/p/every-ai-agent-you-use-has-the-same?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[I Broke Down Anthropic's $2.5 Billion Leak. Your Agent Is Missing 12 Critical Pieces.]]></title>
			<itunes:title><![CDATA[I Broke Down Anthropic's $2.5 Billion Leak. Your Agent Is Missing 12 Critical Pieces.]]></itunes:title>
			<pubDate>Fri, 03 Apr 2026 21:50:56 GMT</pubDate>
			<itunes:duration>26:52</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69d03644f44b357ce9bf9d7d/media.mp3" length="154769806" type="audio/mpeg"/>
			<guid isPermaLink="false">69d03644f44b357ce9bf9d7d</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/i-broke-down-anthropics-25-billion-leak-your-agent-is-missin</link>
			<acast:episodeId>69d03644f44b357ce9bf9d7d</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>i-broke-down-anthropics-25-billion-leak-your-agent-is-missin</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTkU+RIJaAwADQ88F3ahBRV5K5ClLSZvEkVwA2kykyn7i1T/Qg1odLgyoY45SYfkpk8gzU/5q6VU3qKD+at99yqu]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening inside the $2.5 billion run rate product when Anthropic accidentally leaks the entire Claude Code architecture?</p><br><p>The common story is that the leak reveals upcoming features. But the reality is that the secret sauce is 12 boring primitives that make agents actually work at scale, and most teams skip half of them.</p><br><p>In this video, I share the inside scoop on what Claude Code teaches us about building production agents:</p><br><p> • Why tool registries with metadata-first design are day one non-negotiables</p><p> • How an 18-module security architecture protects a single bash tool</p><p> • What session persistence and workflow state actually need to capture</p><p> • Where most agentic projects die from premature complexity</p><br><p>Builders who keep chasing the glamorous AI parts will keep shipping demos that crash. The leak proves that successful agents are 80% plumbing and 20% model.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/p/your-agent-has-12-blind-spots-you?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening inside the $2.5 billion run rate product when Anthropic accidentally leaks the entire Claude Code architecture?</p><br><p>The common story is that the leak reveals upcoming features. But the reality is that the secret sauce is 12 boring primitives that make agents actually work at scale, and most teams skip half of them.</p><br><p>In this video, I share the inside scoop on what Claude Code teaches us about building production agents:</p><br><p> • Why tool registries with metadata-first design are day one non-negotiables</p><p> • How an 18-module security architecture protects a single bash tool</p><p> • What session persistence and workflow state actually need to capture</p><p> • Where most agentic projects die from premature complexity</p><br><p>Builders who keep chasing the glamorous AI parts will keep shipping demos that crash. The leak proves that successful agents are 80% plumbing and 20% model.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/p/your-agent-has-12-blind-spots-you?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>Your Claude Limit Burns In 90 Minutes Because Of One ChatGPT Habit.</title>
			<itunes:title>Your Claude Limit Burns In 90 Minutes Because Of One ChatGPT Habit.</itunes:title>
			<pubDate>Thu, 02 Apr 2026 20:20:23 GMT</pubDate>
			<itunes:duration>26:34</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69cecf8aac25e4bf66318637/media.mp3" length="153077902" type="audio/mpeg"/>
			<guid isPermaLink="false">69cecf8aac25e4bf66318637</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/your-claude-limit-burns-in-90-minutes-because-of-one-chatgpt</link>
			<acast:episodeId>69cecf8aac25e4bf66318637</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>your-claude-limit-burns-in-90-minutes-because-of-one-chatgpt</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTmV5Rs9Z1adygGjwj0MW5UlL7hV1TKJ4Ag50MYyDXx3fMmQr0LbI43mP6d7r4zdzp77d5n09QnuYdGzoIT60Tdg]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening inside your AI costs when Jensen Huang says engineers will spend $250,000 a year on tokens?</p><br><p>The common story is that frontier models are expensive. But the reality is that your habits cost more than the models ever will, and most users burn 8-10x what they need to.</p><br><p>In this video, I share the inside scoop on token efficiency before Mythos pricing hits:</p><br><p> • Why raw PDFs can turn 4,500 words into 100,000 tokens</p><p> • How conversation sprawl compounds waste with every turn</p><p> • What plugin overhead costs you before you type a word</p><p> • Where model mixing drops a $10 session to $1</p><br><p>Builders who keep burning tokens as a badge of honor will face a reckoning when cutting-edge models cost 10x what Opus costs today. The habits you build now determine whether you scale or stall.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: https://natesnewsletter.substack.com/p/your-claude-sessions-cost-10x-what</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening inside your AI costs when Jensen Huang says engineers will spend $250,000 a year on tokens?</p><br><p>The common story is that frontier models are expensive. But the reality is that your habits cost more than the models ever will, and most users burn 8-10x what they need to.</p><br><p>In this video, I share the inside scoop on token efficiency before Mythos pricing hits:</p><br><p> • Why raw PDFs can turn 4,500 words into 100,000 tokens</p><p> • How conversation sprawl compounds waste with every turn</p><p> • What plugin overhead costs you before you type a word</p><p> • Where model mixing drops a $10 session to $1</p><br><p>Builders who keep burning tokens as a badge of honor will face a reckoning when cutting-edge models cost 10x what Opus costs today. The habits you build now determine whether you scale or stall.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: https://natesnewsletter.substack.com/p/your-claude-sessions-cost-10x-what</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Claude Mythos Changes Everything. Your AI Stack Isn't Ready.]]></title>
			<itunes:title><![CDATA[Claude Mythos Changes Everything. Your AI Stack Isn't Ready.]]></itunes:title>
			<pubDate>Wed, 01 Apr 2026 23:06:31 GMT</pubDate>
			<itunes:duration>31:19</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69cda4f8ab5d25f9c988023d/media.mp3" length="180478190" type="audio/mpeg"/>
			<guid isPermaLink="false">69cda4f8ab5d25f9c988023d</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/claude-mythos-changes-everything-your-ai-stack-isnt-ready</link>
			<acast:episodeId>69cda4f8ab5d25f9c988023d</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>claude-mythos-changes-everything-your-ai-stack-isnt-ready</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTlK/U3uPe3m7OfeW0xx8pB/MzXzi/gwdwbFEvgIRbIWeUHun5QdKTpszVmKSM56jvI5NCi53kEDXcqzWsH57PaJ]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening inside Anthropic when Claude Mythos leaks and security researchers say it found zero-day vulnerabilities in a 50,000-star GitHub repo within minutes?</p><br><p>The common story is that bigger models just mean better benchmarks. But the reality is that Mythos is a step change that will force you to simplify everything you've built around weaker models.</p><br><p>In this video, I share the inside scoop on how to prepare before Mythos drops:</p><br><p> • Why your 3,000-token system prompts are about to become liabilities</p><p> • How retrieval architecture shifts when the model fills its own context</p><p> • What hard-coded domain knowledge you can finally delete</p><p> • Where verification gates need to move in your pipeline</p><br><p>Builders who keep compensating for model limitations instead of simplifying toward outcomes will be left behind. The bitter lesson is that smarter models reward letting go.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: https://natesnewsletter.substack.com/p/anthropic-just-built-a-model-that?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening inside Anthropic when Claude Mythos leaks and security researchers say it found zero-day vulnerabilities in a 50,000-star GitHub repo within minutes?</p><br><p>The common story is that bigger models just mean better benchmarks. But the reality is that Mythos is a step change that will force you to simplify everything you've built around weaker models.</p><br><p>In this video, I share the inside scoop on how to prepare before Mythos drops:</p><br><p> • Why your 3,000-token system prompts are about to become liabilities</p><p> • How retrieval architecture shifts when the model fills its own context</p><p> • What hard-coded domain knowledge you can finally delete</p><p> • Where verification gates need to move in your pipeline</p><br><p>Builders who keep compensating for model limitations instead of simplifying toward outcomes will be left behind. The bitter lesson is that smarter models reward letting go.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: https://natesnewsletter.substack.com/p/anthropic-just-built-a-model-that?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Your iPhone Is About to Control Every AI App You Use. Here's What This Means For You.]]></title>
			<itunes:title><![CDATA[Your iPhone Is About to Control Every AI App You Use. Here's What This Means For You.]]></itunes:title>
			<pubDate>Tue, 31 Mar 2026 20:53:07 GMT</pubDate>
			<itunes:duration>22:11</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69cc343303f0e1583082d0bb/media.mp3" length="127824254" type="audio/mpeg"/>
			<guid isPermaLink="false">69cc343303f0e1583082d0bb</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/your-iphone-is-about-to-control-every-ai-app-you-use-heres-w</link>
			<acast:episodeId>69cc343303f0e1583082d0bb</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>your-iphone-is-about-to-control-every-ai-app-you-use-heres-w</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTkHrJg7JafdqToYeMuvDcY+34VSmV9DCpl8aG8p/IPUBjmo1hi0fGYkFA7ksGQGg0acqdt06OUCmqrNehKWPD/g]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening inside Apple's AI strategy heading into WWDC? The common story is that Apple lost the AI race. The reality is more complicated.</p><br><p>In this video, I share the inside scoop on Apple's agentic play and what WWDC will actually signal:</p><br><p> • Why Siri is becoming Apple's default AI agent</p><p> • How app intents will open agentic development to the ecosystem</p><p> • What MCP integration means for builders on mobile</p><p> • Where Google, Samsung, and OpenAI fit into Apple's long game</p><br><p>Apple gets for free what OpenAI is spending billions to build. But execution at WWDC will determine whether that advantage actually lands.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: https://natesnewsletter.substack.com/p/the-company-everyone-says-lost-the?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening inside Apple's AI strategy heading into WWDC? The common story is that Apple lost the AI race. The reality is more complicated.</p><br><p>In this video, I share the inside scoop on Apple's agentic play and what WWDC will actually signal:</p><br><p> • Why Siri is becoming Apple's default AI agent</p><p> • How app intents will open agentic development to the ecosystem</p><p> • What MCP integration means for builders on mobile</p><p> • Where Google, Samsung, and OpenAI fit into Apple's long game</p><br><p>Apple gets for free what OpenAI is spending billions to build. But execution at WWDC will determine whether that advantage actually lands.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: https://natesnewsletter.substack.com/p/the-company-everyone-says-lost-the?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>Anthropic, OpenAI, and Microsoft Just Agreed on One File Format. It Changes Everything.</title>
			<itunes:title>Anthropic, OpenAI, and Microsoft Just Agreed on One File Format. It Changes Everything.</itunes:title>
			<pubDate>Mon, 30 Mar 2026 17:49:12 GMT</pubDate>
			<itunes:duration>26:19</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69cab79b92d007a765150d75/media.mp3" length="151584398" type="audio/mpeg"/>
			<guid isPermaLink="false">69cab79b92d007a765150d75</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/anthropic-openai-and-microsoft-just-agreed-on-one-file-forma</link>
			<acast:episodeId>69cab79b92d007a765150d75</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>anthropic-openai-and-microsoft-just-agreed-on-one-file-forma</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTkO9GLsFeNaVTde8p/VRmFqCBESLXCrFp4p/S1IexRQZhv9g5xjXm9JV6y51+wn0fHE6FGitTFh6JhTasAlJ9hW]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening inside the skills ecosystem when agents now call skills more often than humans do?</p><br><p>The common story is that skills are just personal configuration files from October. But the reality is that skills have become organizational infrastructure, and most teams haven't updated their approach to match.</p><br><p>In this video, I share the inside scoop on how to build agent-readable skills that actually compound:</p><br><p> • Why the description field is where most skills go to die</p><p> • How agent-first design changes handoffs and contracts</p><p> • What three-tier skill architecture looks like for teams</p><p> • Where community repositories fill the domain-specific gap</p><br><p>Builders who keep treating skills as glorified prompts will miss the compounding advantage; the practitioners who version, test, and share skills are pulling ahead every week.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: https://natesnewsletter.substack.com/p/your-ai-skills-fail-10-of-the-time?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening inside the skills ecosystem when agents now call skills more often than humans do?</p><br><p>The common story is that skills are just personal configuration files from October. But the reality is that skills have become organizational infrastructure, and most teams haven't updated their approach to match.</p><br><p>In this video, I share the inside scoop on how to build agent-readable skills that actually compound:</p><br><p> • Why the description field is where most skills go to die</p><p> • How agent-first design changes handoffs and contracts</p><p> • What three-tier skill architecture looks like for teams</p><p> • Where community repositories fill the domain-specific gap</p><br><p>Builders who keep treating skills as glorified prompts will miss the compounding advantage; the practitioners who version, test, and share skills are pulling ahead every week.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: https://natesnewsletter.substack.com/p/your-ai-skills-fail-10-of-the-time?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[48 Days. That's How Long Before the Helium Runs Out for AI Chips.]]></title>
			<itunes:title><![CDATA[48 Days. That's How Long Before the Helium Runs Out for AI Chips.]]></itunes:title>
			<pubDate>Sun, 29 Mar 2026 19:31:56 GMT</pubDate>
			<itunes:duration>22:20</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69c97e29119926ec10e8b0a8/media.mp3" length="128689150" type="audio/mpeg"/>
			<guid isPermaLink="false">69c97e29119926ec10e8b0a8</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/executive-briefing-33-of-the-worlds</link>
			<acast:episodeId>69c97e29119926ec10e8b0a8</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>48-days-thats-how-long-before-the-helium-runs-out-for-ai-chi</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTmBtJiRCcfdXs4roMzrr6PRyeT9qrmUUFJW9zqBTZaVz0hGqWwi+IjgB97MtgRiJhUBFme1Ty0T1IjupuVjtHy0]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening with the physical infrastructure behind AI? The common story is that AI spending is unstoppable — but the reality is more complicated.</p><br><p>In this video, I share the inside scoop on how a missile strike at a Qatari refinery is threatening the entire AI chip supply chain:</p><br><p> • Why helium is irreplaceable inside advanced semiconductor fabrication</p><p> • How the Ras Laffan shutdown flows directly into HBM and AI accelerator supply</p><p> • What LNG disruptions mean for energy costs at East Asian chip fabs</p><p> • Where China's geopolitical advantage in helium and energy is quietly compounding</p><br><p>The operators, planners, and builders betting on AI infrastructure need to understand this isn't a short-term blip — it's a structural cost and supply shock that will reprice everything from laptops to hyperscaler inference.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening with the physical infrastructure behind AI? The common story is that AI spending is unstoppable — but the reality is more complicated.</p><br><p>In this video, I share the inside scoop on how a missile strike at a Qatari refinery is threatening the entire AI chip supply chain:</p><br><p> • Why helium is irreplaceable inside advanced semiconductor fabrication</p><p> • How the Ras Laffan shutdown flows directly into HBM and AI accelerator supply</p><p> • What LNG disruptions mean for energy costs at East Asian chip fabs</p><p> • Where China's geopolitical advantage in helium and energy is quietly compounding</p><br><p>The operators, planners, and builders betting on AI infrastructure need to understand this isn't a short-term blip — it's a structural cost and supply shock that will reprice everything from laptops to hyperscaler inference.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Anthropic Just Gave You 3 Tools That Work While You're Gone.]]></title>
			<itunes:title><![CDATA[Anthropic Just Gave You 3 Tools That Work While You're Gone.]]></itunes:title>
			<pubDate>Sun, 29 Mar 2026 02:56:09 GMT</pubDate>
			<itunes:duration>29:08</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69c894ccc2759aa9b18cc010/media.mp3" length="167868094" type="audio/mpeg"/>
			<guid isPermaLink="false">69c894ccc2759aa9b18cc010</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/anthropic-just-gave-you-3-tools-that-work-while-youre-gone</link>
			<acast:episodeId>69c894ccc2759aa9b18cc010</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>anthropic-just-gave-you-3-tools-that-work-while-youre-gone</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTmRWHSinVdcsZjuQVOReXUNHkJyNxdyQ7caKaVBU+sSpX+8CUKsOLN5DZKVk7gnx1I87u5SUxJRr0WrkP7etdne]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening inside Anthropic's response to OpenClaw when they ship Dispatch and Computer Use in the same week?</p><br><p>The common story is that these are just mobile chat features, but the reality is a complete orchestration layer that lets you spawn parallel agent sessions from your phone while your desktop executes work without you.</p><br><p>In this video, I share the inside scoop on the three primitives that finally make always-on agents real:</p><br><p>• Why scheduled tasks run on Anthropic's cloud without your laptop</p><p>• How Dispatch turns your phone into a command surface for parallel agents</p><p>• What Computer Use unlocks for apps that will never have MCP servers</p><p>• Where the management mindset separates real work from demo theater</p><br><p>Builders who keep expecting agents to create more work for them will miss the entire point: the only metric that matters is whether tasks get off your desk, not onto it.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: https://natesnewsletter.substack.com/p/90-of-what-you-build-on-your-ai-agent?</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening inside Anthropic's response to OpenClaw when they ship Dispatch and Computer Use in the same week?</p><br><p>The common story is that these are just mobile chat features, but the reality is a complete orchestration layer that lets you spawn parallel agent sessions from your phone while your desktop executes work without you.</p><br><p>In this video, I share the inside scoop on the three primitives that finally make always-on agents real:</p><br><p>• Why scheduled tasks run on Anthropic's cloud without your laptop</p><p>• How Dispatch turns your phone into a command surface for parallel agents</p><p>• What Computer Use unlocks for apps that will never have MCP servers</p><p>• Where the management mindset separates real work from demo theater</p><br><p>Builders who keep expecting agents to create more work for them will miss the entire point: the only metric that matters is whether tasks get off your desk, not onto it.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: https://natesnewsletter.substack.com/p/90-of-what-you-build-on-your-ai-agent?</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>A Markdown File Just Replaced Your Most Expensive Design Meeting. (Google Stitch)</title>
			<itunes:title>A Markdown File Just Replaced Your Most Expensive Design Meeting. (Google Stitch)</itunes:title>
			<pubDate>Sat, 28 Mar 2026 02:44:37 GMT</pubDate>
			<itunes:duration>29:34</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69c74096e05c00aacfbbb9c3/media.mp3" length="170337950" type="audio/mpeg"/>
			<guid isPermaLink="false">69c74096e05c00aacfbbb9c3</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/a-markdown-file-just-replaced-your-most-expensive-design-mee</link>
			<acast:episodeId>69c74096e05c00aacfbbb9c3</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>a-markdown-file-just-replaced-your-most-expensive-design-mee</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTlCjoFvUo9QJvQwMH1XH7f/76LBzQCdC0JnIlwG5xEQwKqF1XCWXGcC7lWTZ5/Alxe6kYcbvZYoLUNAhf0u2cxV]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening inside the creative tools space when design, video, and 3D all move to the command line in the same month?</p><br><p>The common story is that AI is replacing designers. But the reality is that three releases in the last few weeks collapsed the cost of creative exploration while raising the value of taste and judgment.</p><br><p>In this video, I share the inside scoop on how design is following development to the terminal:</p><br><p> • Why Google Stitch tanked Figma stock with free vibe design</p><p> • How Remotion turns video production into React components</p><p> • What Blender MCP does with 1,500 operators and natural language</p><p> • Where scheduled creative pipelines become the real unlock</p><br><p>Builders who combine these primitives with scheduling and workflows will produce at scales that were impossible six months ago. The floor dropped, but the ceiling for excellence didn't move.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: https://natesnewsletter.substack.com/p/a-0-design-sprint-used-to-be-impossible?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening inside the creative tools space when design, video, and 3D all move to the command line in the same month?</p><br><p>The common story is that AI is replacing designers. But the reality is that three releases in the last few weeks collapsed the cost of creative exploration while raising the value of taste and judgment.</p><br><p>In this video, I share the inside scoop on how design is following development to the terminal:</p><br><p> • Why Google Stitch tanked Figma stock with free vibe design</p><p> • How Remotion turns video production into React components</p><p> • What Blender MCP does with 1,500 operators and natural language</p><p> • Where scheduled creative pipelines become the real unlock</p><br><p>Builders who combine these primitives with scheduling and workflows will produce at scales that were impossible six months ago. The floor dropped, but the ceiling for excellence didn't move.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: https://natesnewsletter.substack.com/p/a-0-design-sprint-used-to-be-impossible?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[The AI Job Market Split in Two. One Side Pays $400K and Can't Hire Fast Enough.]]></title>
			<itunes:title><![CDATA[The AI Job Market Split in Two. One Side Pays $400K and Can't Hire Fast Enough.]]></itunes:title>
			<pubDate>Fri, 27 Mar 2026 02:57:59 GMT</pubDate>
			<itunes:duration>25:38</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69c5f23826c1fb9c07a3fc41/media.mp3" length="12310360" type="audio/mpeg"/>
			<guid isPermaLink="false">69c5f23826c1fb9c07a3fc41</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/your-ai-credentials-dont-matter-your</link>
			<acast:episodeId>69c5f23826c1fb9c07a3fc41</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>the-ai-job-market-split-in-two-one-side-pays-400k-and-cant-h</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTniBPmVONAfNv80Gh4d222wuTQyaL4r4lc/pB/iRaV1jLAJlZLekQ6+sqCNO27elvQoYFBMJUEOEviLE7m94zHG]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening inside the AI job market that has employers interviewing hundreds of candidates and still unable to fill roles?</p><br><p>The common story is that AI jobs are competitive and scarce — but the reality is a K-shaped market where 3.2 AI jobs exist for every qualified candidate, and most applicants lack the specific skills employers actually need.</p><br><p>In this episode, I share the inside scoop on the seven learnable skills driving infinite AI hiring demand:</p><br><p> • Why specification precision separates commodity workers from AI talent</p><p> • How evaluation and quality judgment became the most cited skill</p><p> • What failure pattern recognition reveals about production-ready builders</p><p> • Where context architecture creates the biggest unlock for companies</p><br><p>Professionals who develop these skills can write their own tickets — the gap between what employers need and what candidates offer has never been wider or more correctable.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: https://natesnewsletter.substack.com/p/your-ai-credentials-dont-matter-your?</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening inside the AI job market that has employers interviewing hundreds of candidates and still unable to fill roles?</p><br><p>The common story is that AI jobs are competitive and scarce — but the reality is a K-shaped market where 3.2 AI jobs exist for every qualified candidate, and most applicants lack the specific skills employers actually need.</p><br><p>In this episode, I share the inside scoop on the seven learnable skills driving infinite AI hiring demand:</p><br><p> • Why specification precision separates commodity workers from AI talent</p><p> • How evaluation and quality judgment became the most cited skill</p><p> • What failure pattern recognition reveals about production-ready builders</p><p> • Where context architecture creates the biggest unlock for companies</p><br><p>Professionals who develop these skills can write their own tickets — the gap between what employers need and what candidates offer has never been wider or more correctable.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: https://natesnewsletter.substack.com/p/your-ai-credentials-dont-matter-your?</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>Nvidia Just Open-Sourced What OpenAI Wants You to Pay Consultants For.</title>
			<itunes:title>Nvidia Just Open-Sourced What OpenAI Wants You to Pay Consultants For.</itunes:title>
			<pubDate>Wed, 25 Mar 2026 07:01:15 GMT</pubDate>
			<itunes:duration>26:26</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69c3883cfe9984dbae71fbf0/media.mp3" length="12695468" type="audio/mpeg"/>
			<guid isPermaLink="false">69c3883cfe9984dbae71fbf0</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/youre-about-to-spend-millions-on?</link>
			<acast:episodeId>69c3883cfe9984dbae71fbf0</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>nvidia-just-open-sourced-what-openai-wants-you-to-pay-consul</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTlgnPqRJUCew22VMlEUJiEMH5VJh6HSj8UyuUm7HHmuiozjTTzrSf7WhxvwwB7pQECkybmQk0UU7QvjXDyYz0GP]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>Full Story w/ Prompts: https://natesnewsletter.substack.com/p/youre-about-to-spend-millions-on</p><br><p>What's really happening inside the battle between NVIDIA, OpenAI, and Anthropic over enterprise AI adoption?</p><br><p>The common story is that the AI giants are racing to ship the best agents — but the reality is more complicated, and the real war is over who controls how enterprises actually learn to use them.</p><br><p>In this episode, I share the inside scoop on why old-school engineering principles are the hidden key to making AI agents work in production:</p><br><p>• Why OpenAI and Anthropic spent a year failing at enterprise adoption</p><p>• How NemoClaw bets on developer competence instead of consultant complexity</p><p>• What Rob Pike's five programming rules reveal about agentic best practices</p><p>• Where the five hardest production agent problems trace back to ancient engineering</p><br><p>Teams that anchor AI agent deployment in proven data engineering fundamentals will outperform those chasing consultant-peddled complexity — every time.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>Full Story w/ Prompts: https://natesnewsletter.substack.com/p/youre-about-to-spend-millions-on</p><br><p>What's really happening inside the battle between NVIDIA, OpenAI, and Anthropic over enterprise AI adoption?</p><br><p>The common story is that the AI giants are racing to ship the best agents — but the reality is more complicated, and the real war is over who controls how enterprises actually learn to use them.</p><br><p>In this episode, I share the inside scoop on why old-school engineering principles are the hidden key to making AI agents work in production:</p><br><p>• Why OpenAI and Anthropic spent a year failing at enterprise adoption</p><p>• How NemoClaw bets on developer competence instead of consultant complexity</p><p>• What Rob Pike's five programming rules reveal about agentic best practices</p><p>• Where the five hardest production agent problems trace back to ancient engineering</p><br><p>Teams that anchor AI agent deployment in proven data engineering fundamentals will outperform those chasing consultant-peddled complexity — every time.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>I Mapped Where Every AI Agent Actually Sits. Most People Pick Wrong.</title>
			<itunes:title>I Mapped Where Every AI Agent Actually Sits. Most People Pick Wrong.</itunes:title>
			<pubDate>Tue, 24 Mar 2026 05:41:00 GMT</pubDate>
			<itunes:duration>25:11</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69c223ec7878605e11bee84e/media.mp3" length="145093214" type="audio/mpeg"/>
			<guid isPermaLink="false">69c223ec7878605e11bee84e</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/5-ai-agents-5-contradictory-bets</link>
			<acast:episodeId>69c223ec7878605e11bee84e</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>i-mapped-where-every-ai-agent-actually-sits-most-people-pick</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTl3wSpHYbWI+VWpEPH5qrH9Ghix0tLf4RxGjaB5uXwSMHI6yEVICFa8RMDpuWDZL7fHuk4LdMfp/ArMS0IhZaSH]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening inside the AI agent wars right now?</p><br><p>The common story is a horse race of OpenClaw copycats — but the reality is a set of distinct strategic bets that will define how commerce runs for the next decade.</p><br><p>In this video, I share the inside scoop on how to read every AI agent launch:</p><br><p> • Why each OpenClaw competitor is making a different strategic bet</p><p> • How three questions reveal whether any AI agent fits your needs</p><p> • What sovereignty, delegation, and distribution mean for operators</p><p> • Where AI agents are headed and which plays survive compression</p><br><p>Operators and builders who understand the strategic axes underneath each AI agent launch will make sharper build-vs-buy decisions than anyone chasing the hype cycle.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening inside the AI agent wars right now?</p><br><p>The common story is a horse race of OpenClaw copycats — but the reality is a set of distinct strategic bets that will define how commerce runs for the next decade.</p><br><p>In this video, I share the inside scoop on how to read every AI agent launch:</p><br><p> • Why each OpenClaw competitor is making a different strategic bet</p><p> • How three questions reveal whether any AI agent fits your needs</p><p> • What sovereignty, delegation, and distribution mean for operators</p><p> • Where AI agents are headed and which plays survive compression</p><br><p>Operators and builders who understand the strategic axes underneath each AI agent launch will make sharper build-vs-buy decisions than anyone chasing the hype cycle.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>McKinsey Says $1 Trillion In Sales Will Go Through AI Agents. Most Businesses Are Invisible.</title>
			<itunes:title>McKinsey Says $1 Trillion In Sales Will Go Through AI Agents. Most Businesses Are Invisible.</itunes:title>
			<pubDate>Sun, 22 Mar 2026 18:06:00 GMT</pubDate>
			<itunes:duration>27:46</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69c0e69262f6c66afe83c627/media.mp3" length="159941358" type="audio/mpeg"/>
			<guid isPermaLink="false">69c0e69262f6c66afe83c627</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/executive-briefing-your-systems-are</link>
			<acast:episodeId>69c0e69262f6c66afe83c627</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>mckinsey-says-1-trillion-in-sales-will-go-through-ai-agents</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTlshJ05P2r4fV3EhXlvRQRxnvgErEU7tlXh/qtvlosCD27iBslqj4Pea4iMMUAaaIvE7OPuHuHp28ki6YHLGs0M]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening inside the infrastructure layer that determines whether AI agents actually work?</p><br><p>The common story is that OpenClaw and personal AI agents are the future — but the reality is that none of it functions unless companies rebuild their entire data architecture to be agent-readable and agent-writable.</p><br><p>In this video, I share the inside scoop on the structural precondition nobody is talking about:</p><br><p> • Why 20 years of anti-bot architecture now blocks your best customers</p><p> • How wrapping an API in MCP falls short of real agent access</p><p> • What Stripe and SAP reveal about the depth of this challenge</p><p> • Where four common executive misconceptions lead companies astray</p><br><p>Operators who wait and see while competitors clean their data stacks are signing their own death warrants — the ecosystem is moving faster than quarterly planning cycles allow.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening inside the infrastructure layer that determines whether AI agents actually work?</p><br><p>The common story is that OpenClaw and personal AI agents are the future — but the reality is that none of it functions unless companies rebuild their entire data architecture to be agent-readable and agent-writable.</p><br><p>In this video, I share the inside scoop on the structural precondition nobody is talking about:</p><br><p> • Why 20 years of anti-bot architecture now blocks your best customers</p><p> • How wrapping an API in MCP falls short of real agent access</p><p> • What Stripe and SAP reveal about the depth of this challenge</p><p> • Where four common executive misconceptions lead companies astray</p><br><p>Operators who wait and see while competitors clean their data stacks are signing their own death warrants — the ecosystem is moving faster than quarterly planning cycles allow.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Your AI Agent Fails 97.5% of Real Work. The Fix Isn't Coding.]]></title>
			<itunes:title><![CDATA[Your AI Agent Fails 97.5% of Real Work. The Fix Isn't Coding.]]></itunes:title>
			<pubDate>Sat, 21 Mar 2026 19:20:53 GMT</pubDate>
			<itunes:duration>29:26</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69beef961861d127d51f135c/media.mp3" length="169553294" type="audio/mpeg"/>
			<guid isPermaLink="false">69beef961861d127d51f135c</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/55-of-employers-regret-ai-driven</link>
			<acast:episodeId>69beef961861d127d51f135c</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>your-ai-agent-fails-975-of-real-work-the-fix-isnt-coding</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTlFUsnowCHCDsbJrmRk1j3RWfxOdYnI9APEkojZ9F+q2KmDUFihyMjrQn1XCVwvmlNDUMHv9zd/iHNAOmNEwNFi]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening with AI agents inside real enterprise deployments? The common story is that AI agents are transforming work at scale — but the reality is more complicated.</p><br><p>In this video, I share the inside scoop on why the memory wall is the most dangerous gap in AI strategy right now:</p><br><p> • Why AI agents succeed at tasks but fail at jobs</p><p> • How missing organizational context caused a production database wipeout</p><p> • What three new studies reveal about agent performance over time</p><p> • Where human judgment and evals become your only real safeguard</p><br><p>The humans who invest in contextual stewardship and evaluation design will become the most valuable people in their organizations — and the ones who don't will find themselves competing with machines on the dimensions machines are improving fastest.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening with AI agents inside real enterprise deployments? The common story is that AI agents are transforming work at scale — but the reality is more complicated.</p><br><p>In this video, I share the inside scoop on why the memory wall is the most dangerous gap in AI strategy right now:</p><br><p> • Why AI agents succeed at tasks but fail at jobs</p><p> • How missing organizational context caused a production database wipeout</p><p> • What three new studies reveal about agent performance over time</p><p> • Where human judgment and evals become your only real safeguard</p><br><p>The humans who invest in contextual stewardship and evaluation design will become the most valuable people in their organizations — and the ones who don't will find themselves competing with machines on the dimensions machines are improving fastest.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>Anthropic Just Gave Your AI Agent the One Thing OpenClaw Has. Without the Risk.</title>
			<itunes:title>Anthropic Just Gave Your AI Agent the One Thing OpenClaw Has. Without the Risk.</itunes:title>
			<pubDate>Fri, 20 Mar 2026 14:52:32 GMT</pubDate>
			<itunes:duration>33:29</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69beb0b11a160b44dbfe1465/media.mp3" length="16076173" type="audio/mpeg"/>
			<guid isPermaLink="false">69beb0b11a160b44dbfe1465</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/your-ai-agent-needs-three-things</link>
			<acast:episodeId>69beb0b11a160b44dbfe1465</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>anthropic-just-gave-your-ai-agent-the-one-thing-openclaw-has</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTnJc2/U0x3OH/cFRfgcd84tLr/Mh5ffL4Fzq2i74JuYomKAFmz8A6Jc7ptFgiZDVQRsFCzlnMFYnS+KP537DxcE]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening when Anthropic ships a little-noticed feature called /loop and nobody realizes it's the last piece you need to recreate OpenClaw? The common story is that you need a full framework to build an autonomous agent—but the reality is more interesting when memory plus tools plus proactivity gives you the same capabilities without the security nightmare.</p><br><p>In this episode, I share the inside scoop on why small releases like /loop are actually architectural breakthroughs:</p><br><p> • Why the three Lego bricks—memory, proactivity, and tools—are all you need</p><p> • How compound loops accumulate value across cycles like Karpathy's Auto Research</p><p> • What the energy tracking and sales pipeline examples reveal about pattern matching</p><p> • Where the terminal gives you free time travel months ahead of everyone else</p><br><p>For anyone who built OpenBrain and wondered what's next, this is how you give your memory a heartbeat and hands.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening when Anthropic ships a little-noticed feature called /loop and nobody realizes it's the last piece you need to recreate OpenClaw? The common story is that you need a full framework to build an autonomous agent—but the reality is more interesting when memory plus tools plus proactivity gives you the same capabilities without the security nightmare.</p><br><p>In this episode, I share the inside scoop on why small releases like /loop are actually architectural breakthroughs:</p><br><p> • Why the three Lego bricks—memory, proactivity, and tools—are all you need</p><p> • How compound loops accumulate value across cycles like Karpathy's Auto Research</p><p> • What the energy tracking and sales pipeline examples reveal about pattern matching</p><p> • Where the terminal gives you free time travel months ahead of everyone else</p><br><p>For anyone who built OpenBrain and wondered what's next, this is how you give your memory a heartbeat and hands.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Perplexity Computer Is Incredible. It Won't Matter. Here's Why.]]></title>
			<itunes:title><![CDATA[Perplexity Computer Is Incredible. It Won't Matter. Here's Why.]]></itunes:title>
			<pubDate>Thu, 19 Mar 2026 18:42:19 GMT</pubDate>
			<itunes:duration>30:04</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69bc438a3bbfcfe8db15156c/media.mp3" length="173184526" type="audio/mpeg"/>
			<guid isPermaLink="false">69bc438a3bbfcfe8db15156c</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/perplexity-shipped-its-best-product?</link>
			<acast:episodeId>69bc438a3bbfcfe8db15156c</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>perplexity-computer-is-incredible-it-wont-matter-heres-why</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTkfMWzLSsxw/1czGucpdZzFGWwjE6gfjidUm0D+PHUt2ubW0jLQ6+X/eyGMu+MkugEZcJYLg4J08yFtT1zsNjg6]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening when Perplexity ships the best agentic product of the month but their core reasoning engine runs on a direct competitor's model? The common story is about multi-model orchestration as a moat—but the reality is more interesting when every model provider you depend on is simultaneously building the exact product you compete with.</p><br><p>In this episode, I share the inside scoop on why good execution on the wrong layer of the stack will not save you:</p><br><p>• How February 2026 hardened the demand signal and revealed who plays multiple layers</p><p>• Why the middleware squeeze comes from both below (models) and above (context platforms)</p><p>• What the four structural positions that survive actually look like</p><p>• Where Perplexity's search API, not Computer, is their real strategic out</p><br><p>For builders watching hyperscalers get hungrier by the month, the question is whether your position aligns with their incentives or invites your replacement.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening when Perplexity ships the best agentic product of the month but their core reasoning engine runs on a direct competitor's model? The common story is about multi-model orchestration as a moat—but the reality is more interesting when every model provider you depend on is simultaneously building the exact product you compete with.</p><br><p>In this episode, I share the inside scoop on why good execution on the wrong layer of the stack will not save you:</p><br><p>• How February 2026 hardened the demand signal and revealed who plays multiple layers</p><p>• Why the middleware squeeze comes from both below (models) and above (context platforms)</p><p>• What the four structural positions that survive actually look like</p><p>• Where Perplexity's search API, not Computer, is their real strategic out</p><br><p>For builders watching hyperscalers get hungrier by the month, the question is whether your position aligns with their incentives or invites your replacement.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>ChatGPT Health Identified Respiratory Failure. Then It Said Wait.</title>
			<itunes:title>ChatGPT Health Identified Respiratory Failure. Then It Said Wait.</itunes:title>
			<pubDate>Wed, 18 Mar 2026 22:35:07 GMT</pubDate>
			<itunes:duration>23:32</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69bb289a6071e3bfbdf89b68/media.mp3" length="135617246" type="audio/mpeg"/>
			<guid isPermaLink="false">69bb289a6071e3bfbdf89b68</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/a-single-sentence-from-a-family-member</link>
			<acast:episodeId>69bb289a6071e3bfbdf89b68</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>chatgpt-health-identified-respiratory-failure-then-it-said-w</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTkSOpDpZQSXtSujS8JnfPFU7zB0OlS83REOTzY5nE6cl2eehX2IPYjSn40CrMPCgtUKjl+0aVR4W0Iw/clW4F+/]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening inside AI agents when they give you the wrong answer?</p><br><p>The common story is that smarter models mean safer agents — but the reality is that reasoning traces and final outputs often operate as two entirely separate processes.</p><br><p>In this episode, I share the inside scoop on why AI agents fail in production and how to build evals that actually catch it:</p><br><p>- Why agents perform worst precisely where the stakes are highest</p><p>- How reasoning traces routinely contradict an agent's final recommendation</p><p>- What factorial stress testing reveals that standard benchmarks completely miss</p><p>- Where to build the four-layer architecture that keeps agents honest in production</p><br><p>Operators who ignore this now will face it later — through customer harm, regulatory pressure, or an insurance policy they can't obtain.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening inside AI agents when they give you the wrong answer?</p><br><p>The common story is that smarter models mean safer agents — but the reality is that reasoning traces and final outputs often operate as two entirely separate processes.</p><br><p>In this episode, I share the inside scoop on why AI agents fail in production and how to build evals that actually catch it:</p><br><p>- Why agents perform worst precisely where the stakes are highest</p><p>- How reasoning traces routinely contradict an agent's final recommendation</p><p>- What factorial stress testing reveals that standard benchmarks completely miss</p><p>- Where to build the four-layer architecture that keeps agents honest in production</p><br><p>Operators who ignore this now will face it later — through customer harm, regulatory pressure, or an insurance policy they can't obtain.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>Your Browser Does the Same 10 Things Every Week. Claude Can Do 5 of Them Now.</title>
			<itunes:title>Your Browser Does the Same 10 Things Every Week. Claude Can Do 5 of Them Now.</itunes:title>
			<pubDate>Tue, 17 Mar 2026 16:19:53 GMT</pubDate>
			<itunes:duration>22:13</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69b97f295f4d2d983799e044/media.mp3" length="127969150" type="audio/mpeg"/>
			<guid isPermaLink="false">69b97f295f4d2d983799e044</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/five-things-claudes-chrome-extension</link>
			<acast:episodeId>69b97f295f4d2d983799e044</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>your-browser-does-the-same-10-things-every-week-claude-can-d</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTnPqausqFb6b+sqrGW/+q6fpF3VEi0SsuNUIMVLonyyKF484iA/dEROx4tHVAHT662lAJLIT/BdHGMKicG3XShP]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening when you can record any workflow in your browser and schedule it to run on autopilot without supervision? The common story is that browser AI is just a chatbot that answers questions while you browse—but the reality is more interesting when people are saving dozens of hours a week on repetitive work.</p><br><p>In this episode, I share the inside scoop on why the Claude extension for Chrome is being slept on:</p><br><p> • How to let Claude fight your customer service battles and negotiate credits without you on hold</p><p> • Why recording workflows as shortcuts with scheduled cadence changes everything</p><p> • What built-in knowledge of Gmail, Calendar, and Drive means for inbox triage at scale</p><p> • Where group tabs let you pull data from multiple sites simultaneously into structured output</p><br><p>For anyone who does anything repetitive on the internet, the skill isn't prompting—it's identifying work clearly enough that an agent can do it on a schedule.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening when you can record any workflow in your browser and schedule it to run on autopilot without supervision? The common story is that browser AI is just a chatbot that answers questions while you browse—but the reality is more interesting when people are saving dozens of hours a week on repetitive work.</p><br><p>In this episode, I share the inside scoop on why the Claude extension for Chrome is being slept on:</p><br><p> • How to let Claude fight your customer service battles and negotiate credits without you on hold</p><p> • Why recording workflows as shortcuts with scheduled cadence changes everything</p><p> • What built-in knowledge of Gmail, Calendar, and Drive means for inbox triage at scale</p><p> • Where group tabs let you pull data from multiple sites simultaneously into structured output</p><br><p>For anyone who does anything repetitive on the internet, the skill isn't prompting—it's identifying work clearly enough that an agent can do it on a schedule.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Claude Code Wiped 2.5 Years of Data. The Engineer Who Built It Couldn't Stop It.]]></title>
			<itunes:title><![CDATA[Claude Code Wiped 2.5 Years of Data. The Engineer Who Built It Couldn't Stop It.]]></itunes:title>
			<pubDate>Mon, 16 Mar 2026 17:32:45 GMT</pubDate>
			<itunes:duration>21:29</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69b83ebf7ebe44dc8b1c74d0/media.mp3" length="123796246" type="audio/mpeg"/>
			<guid isPermaLink="false">69b83ebf7ebe44dc8b1c74d0</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/your-ai-agent-just-mass-deleted-a</link>
			<acast:episodeId>69b83ebf7ebe44dc8b1c74d0</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>claude-code-wiped-25-years-of-data-the-engineer-who-built-it</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTkKPtTf/BFVZoObvJg46haimowPZLkEaPxqSCKdXEg3AUWL7zdBOv+R3WIgHo+hHDd9CLGzHV8PXE1k3IcT5scZ]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening with AI agents when vibe coders try to scale their builds? The common story is that better prompting solves everything — but the reality is that agents introduce a supervision problem, not just a prompting one.</p><br><p>In this episode, I share the inside scoop on the five management skills every vibe coder needs to survive the agentic era:</p><br><p>- Why version control is your most critical safety habit now</p><p>- How context window limits silently destroy long agent runs</p><p>- What standing orders do that repeated prompting never will</p><p>- Where small bets beat sweeping changes every single time</p><br><p>Builders who treat AI agents like a powerful but unsupervised contractor — without save points, scoped tasks, or persistent rules files — are one bad session away from losing real production work.</p><br><p>Subscribe for daily AI strategy and news.</p><br><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening with AI agents when vibe coders try to scale their builds? The common story is that better prompting solves everything — but the reality is that agents introduce a supervision problem, not just a prompting one.</p><br><p>In this episode, I share the inside scoop on the five management skills every vibe coder needs to survive the agentic era:</p><br><p>- Why version control is your most critical safety habit now</p><p>- How context window limits silently destroy long agent runs</p><p>- What standing orders do that repeated prompting never will</p><p>- Where small bets beat sweeping changes every single time</p><br><p>Builders who treat AI agents like a powerful but unsupervised contractor — without save points, scoped tasks, or persistent rules files — are one bad session away from losing real production work.</p><br><p>Subscribe for daily AI strategy and news.</p><br><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>She quit, picked up AI, and shipped in 30 days what her team planned for Q3.</title>
			<itunes:title>She quit, picked up AI, and shipped in 30 days what her team planned for Q3.</itunes:title>
			<pubDate>Sun, 15 Mar 2026 17:28:22 GMT</pubDate>
			<itunes:duration>37:38</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69b83db863444515f9cd6ce2/media.mp3" length="216846222" type="audio/mpeg"/>
			<guid isPermaLink="false">69b83db863444515f9cd6ce2</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/executive-briefing-one-solo-founder</link>
			<acast:episodeId>69b83db863444515f9cd6ce2</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>she-quit-picked-up-ai-and-shipped-in-30-days-what-her-team-p</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTlqVIWUveKIis4NpeMfOQDmlhVOuJx0iDdBCGhkyZTIgHqX+UvaYOlREK25phNSrpRgUmsrndlr0vcMfTd2qGMP]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening with solo founders and AI productivity inside your company? The common story is that solo founders are outliers with nothing to teach enterprise teams — but the reality is more complicated.</p><br><p>In this episode, I share the inside scoop on what solo founder AI workflows reveal about unleashing talent at scale:</p><br><p>- Why AI agents reduce coordination overhead, not just headcount</p><p>- How taste without conviction leaves your best people stuck</p><p>- What "speed of control" means for managing AI-powered workflows</p><p>- Where extraordinary talent goes when companies refuse to remove overhead</p><br><p>Execs and operators who ignore these patterns will keep losing their best people to solo founding — not because it's glamorous, but because it's the only place those people feel unblocked.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening with solo founders and AI productivity inside your company? The common story is that solo founders are outliers with nothing to teach enterprise teams — but the reality is more complicated.</p><br><p>In this episode, I share the inside scoop on what solo founder AI workflows reveal about unleashing talent at scale:</p><br><p>- Why AI agents reduce coordination overhead, not just headcount</p><p>- How taste without conviction leaves your best people stuck</p><p>- What "speed of control" means for managing AI-powered workflows</p><p>- Where extraordinary talent goes when companies refuse to remove overhead</p><br><p>Execs and operators who ignore these patterns will keep losing their best people to solo founding — not because it's glamorous, but because it's the only place those people feel unblocked.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>AI Made Every Company 10x More Productive. The Ones Cutting Headcount Are Telling on Themselves.</title>
			<itunes:title>AI Made Every Company 10x More Productive. The Ones Cutting Headcount Are Telling on Themselves.</itunes:title>
			<pubDate>Sun, 15 Mar 2026 03:28:17 GMT</pubDate>
			<itunes:duration>19:52</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69b627521b5a7dfbdfe25687/media.mp3" length="114509750" type="audio/mpeg"/>
			<guid isPermaLink="false">69b627521b5a7dfbdfe25687</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/whoop-is-hiring-600-people-while</link>
			<acast:episodeId>69b627521b5a7dfbdfe25687</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>ai-made-every-company-10x-more-productive-the-ones-cutting-h</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTmz+WuTQA1dzpBKhOHTkeinjsavZgmEIycg0PNiMcI9c+tYgyCsajBVzfVK65ks2Uaycx1ssNvInJTgvD0oYUKt]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening when Whoop announces it's hiring 600 people while the media narrative focuses entirely on job displacement? The common story is about how many fewer people companies need—but the reality is more interesting when execution costs drop by an order of magnitude and the pie itself expands.</p><br><p>In this episode, I share the inside scoop on six unlocks that give you a picture of what the future actually looks like:</p><br><p> • Why iteration cycles compressing from months to days changes the mechanics of strategy</p><p> • How hundreds of millions of domain experts become builders when the translation layer disappears</p><p> • What happens when quality software becomes the default, not a premium</p><p> • Where the market for ambition explodes when CFO math flips on experiments</p><br><p>For anyone wrestling with the people challenges of AI, the hardest work ahead isn't technical—it's figuring out what upskilling looks like when the job isn't "do the same thing faster."</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening when Whoop announces it's hiring 600 people while the media narrative focuses entirely on job displacement? The common story is about how many fewer people companies need—but the reality is more interesting when execution costs drop by an order of magnitude and the pie itself expands.</p><br><p>In this episode, I share the inside scoop on six unlocks that give you a picture of what the future actually looks like:</p><br><p> • Why iteration cycles compressing from months to days changes the mechanics of strategy</p><p> • How hundreds of millions of domain experts become builders when the translation layer disappears</p><p> • What happens when quality software becomes the default, not a premium</p><p> • Where the market for ambition explodes when CFO math flips on experiments</p><br><p>For anyone wrestling with the people challenges of AI, the hardest work ahead isn't technical—it's figuring out what upskilling looks like when the job isn't "do the same thing faster."</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[One Simple System Gave All My AI Tools a Memory. Here's How.]]></title>
			<itunes:title><![CDATA[One Simple System Gave All My AI Tools a Memory. Here's How.]]></itunes:title>
			<pubDate>Sat, 14 Mar 2026 01:48:15 GMT</pubDate>
			<itunes:duration>26:54</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69b4be6063444515f9f8a8b7/media.mp3" length="154986030" type="audio/mpeg"/>
			<guid isPermaLink="false">69b4be6063444515f9f8a8b7</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/you-built-an-ai-memory-system-now</link>
			<acast:episodeId>69b4be6063444515f9f8a8b7</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>one-simple-system-gave-all-my-ai-tools-a-memory-heres-how</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTn8tQXcqnsIT08DeHrNUOiKu1w2esVyPeRJsy64d+SKWoWrHhfPfDiBlSy4BTySP/x/Dnlk9XCvgHSAbRLyAg+d]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening when thousands of people build an agent-readable database but can only interact with it through a chat window keyhole? The common story is that the MCP server is the whole system—but the reality is more interesting when you add a human door alongside the agent door.</p><br><p>In this episode, I share the inside scoop on how to give your Open Brain hands and feet through visual interfaces you build and deploy for free:</p><br><p> • Why the table becomes a shared surface that both you and your agent see</p><p> • How to build a visual layer with Claude and host it on Vercel for nothing</p><p> • What household knowledge, professional relationships, and job hunts look like as dashboards</p><p> • Where time bridging and cross-category reasoning earn their keep</p><br><p>For anyone who built Open Brain and wondered what's next, this is the piece that makes the data actually useful to your human eyes—without adding middlemen.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening when thousands of people build an agent-readable database but can only interact with it through a chat window keyhole? The common story is that the MCP server is the whole system—but the reality is more interesting when you add a human door alongside the agent door.</p><br><p>In this episode, I share the inside scoop on how to give your Open Brain hands and feet through visual interfaces you build and deploy for free:</p><br><p> • Why the table becomes a shared surface that both you and your agent see</p><p> • How to build a visual layer with Claude and host it on Vercel for nothing</p><p> • What household knowledge, professional relationships, and job hunts look like as dashboards</p><p> • Where time bridging and cross-category reasoning earn their keep</p><br><p>For anyone who built Open Brain and wondered what's next, this is the piece that makes the data actually useful to your human eyes—without adding middlemen.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[4 AI Labs Built the Same System Without Talking to Each Other (And Nobody's Discussing Why)]]></title>
			<itunes:title><![CDATA[4 AI Labs Built the Same System Without Talking to Each Other (And Nobody's Discussing Why)]]></itunes:title>
			<pubDate>Thu, 12 Mar 2026 07:01:10 GMT</pubDate>
			<itunes:duration>27:14</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69b264b5d308577aada668e8/media.mp3" length="156932046" type="audio/mpeg"/>
			<guid isPermaLink="false">69b264b5d308577aada668e8</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/cursors-coding-agents-solved-a-math?</link>
			<acast:episodeId>69b264b5d308577aada668e8</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>4-ai-labs-built-the-same-system-without-talking-to-each-othe</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTnlfq431t6TC1ssR3p5nOsWgKoN4zcbsUHdFPxSfcW/w47AqmgOr8x/TodcnHqJ+fYi5w0UD90HIweFtyTSXMKc]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening with AI capabilities at work — and why the "jagged AI" frame is now obsolete?</p><br><p>The common story is that AI is brilliant at some things and broken at others — but the reality is that jaggedness was never about intelligence; it was about how we were deploying it.</p><br><p>In this episode, I share the inside scoop on why AI agents in proper harnesses are smoothing the capability frontier for real work:</p><br><p>- Why the jagged AI frontier was always a deployment problem</p><p>- How multi-agent coordination unlocks long-horizon knowledge work</p><p>- What Cursor's math breakthrough reveals about AI generalization</p><p>- Where meta-skills like sniff-checking become your competitive edge</p><br><p>The organizations and individuals who learn to decompose work, delegate to AI agents, and verify outputs will extend their leverage — those who don't will find the shift happening to them anyway.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening with AI capabilities at work — and why the "jagged AI" frame is now obsolete?</p><br><p>The common story is that AI is brilliant at some things and broken at others — but the reality is that jaggedness was never about intelligence; it was about how we were deploying it.</p><br><p>In this video, I share the inside scoop on why AI agents in proper harnesses are smoothing the capability frontier for real work:</p><br><p>- Why the jagged AI frontier was always a deployment problem</p><p>- How multi-agent coordination unlocks long-horizon knowledge work</p><p>- What Cursor's math breakthrough reveals about AI generalization</p><p>- Where meta-skills like sniff-checking become your competitive edge</p><br><p>The organizations and individuals who learn to decompose work, delegate to AI agents, and verify outputs will extend their leverage — those who don't will find the shift happening to them anyway.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Stop accepting AI output that "looks right." The other 17% is everything and nobody is ready for it.]]></title>
			<itunes:title><![CDATA[Stop accepting AI output that "looks right." The other 17% is everything and nobody is ready for it.]]></itunes:title>
			<pubDate>Wed, 11 Mar 2026 00:43:34 GMT</pubDate>
			<itunes:duration>20:54</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69b0bab7c36fc2d58b394d59/media.mp3" length="120436966" type="audio/mpeg"/>
			<guid isPermaLink="false">69b0bab7c36fc2d58b394d59</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/the-most-expensive-ai-mistake-isnt</link>
			<acast:episodeId>69b0bab7c36fc2d58b394d59</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>stop-accepting-ai-output-that-looks-right-the-other-17-is-ev</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTn+4nqoJ1LKLRgTllX0Ebruxo80WDQo/iacweMXcdLZX0cCpAwYPXdCeLDIHs8muXEXuCit/oL97tekSnJw7BiC]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening when frontier models beat professionals with 14 years of experience 70% of the time but the output still doesn't survive contact with anyone who actually understands the domain? The common story is about prompting and workflow design—but the reality is more interesting when rejection creates institutional knowledge that did not exist before.</p><br><p>In this video, I share the inside scoop on why learning to say no is the missing skill in the judgment and taste category:</p><br><p> • Why your rejections are more valuable than your prompts</p><p> • How recognition, articulation, and encoding break down into learnable dimensions</p><p> • What Epic Systems teaches about scaling taste through thousands of encoded workflows</p><p> • Where the structural gap in the AI tool ecosystem leaves every rejection on the floor</p><br><p>For anyone watching AI flood organizations with output, the frontier of AI value is identical to the frontier of your organization's taste.</p><br><p>Chapters</p><p>00:00 Your Most Valuable AI Skill Is Actually Saying No</p><p>02:15 What Happens in the Moment of Rejection</p><p>04:30 Why Generation Skills Are Now Commodity</p><p>06:30 GDPVal: AI Beats Professionals 70% of the Time</p><p>08:15 Recognition: Detecting When Something Is Wrong</p><p>10:00 Articulation: Explaining Why in Usable Constraints</p><p>12:00 Encoding: Making Rejections Persist Beyond the Moment</p><p>14:00 The Epic Systems Lesson: Scaling Taste Across Decades</p><p>16:15 Building Infrastructure to Scale Your No's</p><p>18:15 What This Means for Teams and Individuals</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: My site: https://natebjones.com</p><p>Full Story w/ Prompts + Guide: https://natesnewsletter.substack.com/p/the-most-expensive-ai-mistake-isnt?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true</p><p>___________________</p><hr><p 
style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening when frontier models beat professionals with 14 years of experience 70% of the time but the output still doesn't survive contact with anyone who actually understands the domain? The common story is about prompting and workflow design—but the reality is more interesting when rejection creates institutional knowledge that did not exist before.</p><br><p>In this video, I share the inside scoop on why learning to say no is the missing skill in the judgment and taste category:</p><br><p> • Why your rejections are more valuable than your prompts</p><p> • How recognition, articulation, and encoding break down into learnable dimensions</p><p> • What Epic Systems teaches about scaling taste through thousands of encoded workflows</p><p> • Where the structural gap in the AI tool ecosystem leaves every rejection on the floor</p><br><p>For anyone watching AI flood organizations with output, the frontier of AI value is identical to the frontier of your organization's taste.</p><br><p>Chapters</p><p>00:00 Your Most Valuable AI Skill Is Actually Saying No</p><p>02:15 What Happens in the Moment of Rejection</p><p>04:30 Why Generation Skills Are Now Commodity</p><p>06:30 GDPVal: AI Beats Professionals 70% of the Time</p><p>08:15 Recognition: Detecting When Something Is Wrong</p><p>10:00 Articulation: Explaining Why in Usable Constraints</p><p>12:00 Encoding: Making Rejections Persist Beyond the Moment</p><p>14:00 The Epic Systems Lesson: Scaling Taste Across Decades</p><p>16:15 Building Infrastructure to Scale Your No's</p><p>18:15 What This Means for Teams and Individuals</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: My site: https://natebjones.com</p><p>Full Story w/ Prompts + Guide: https://natesnewsletter.substack.com/p/the-most-expensive-ai-mistake-isnt?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true</p><p>___________________</p><hr><p 
style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Claude Blackmailed Its Developers. Here's Why the System Hasn't Collapsed Yet.]]></title>
			<itunes:title><![CDATA[Claude Blackmailed Its Developers. Here's Why the System Hasn't Collapsed Yet.]]></itunes:title>
			<pubDate>Tue, 10 Mar 2026 03:07:54 GMT</pubDate>
			<itunes:duration>32:24</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69af8b0ab58ea3074de9532f/media.mp3" length="186713038" type="audio/mpeg"/>
			<guid isPermaLink="false">69af8b0ab58ea3074de9532f</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/every-frontier-ai-model-schemes-the</link>
			<acast:episodeId>69af8b0ab58ea3074de9532f</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>claude-blackmailed-its-developers-heres-why-the-system-hasnt</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTlYZBUhIGDQ070dbBHo6s/ouuhLN7MBF4Ub3CbZB34TOjXjfxHy7WWkMuxCFQC1lYyekbtKTNaZsN1r/sY8DRFx]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening with AI safety in 2026? The common story is that the safety system is collapsing — but the reality is more complicated.</p><br><p>In this video, I share the inside scoop on why the AI risk picture is both worse and more resilient than the headlines suggest:</p><br><p>- Why frontier AI agents scheme even after anti-scheming training</p><p>- How competitive dynamics create emergent safety properties no lab planned</p><p>- What "intent engineering" is and why it beats prompt engineering for AI agents</p><p>- Where the real vulnerability lives — and why it's you, not the models</p><br><p>The risks from large language models and autonomous AI agents are accelerating, but so are the structural forces holding the system together — and closing the gap between what you tell an agent and what you actually mean is the most leveraged safety skill you can build right now.</p><br><p>Chapters</p><p>00:00 Why This Isn't Terminator</p><p>02:15 How Frontier Models Actually Learn</p><p>04:40 The Misalignment Mechanic: Novel Paths Gone Wrong</p><p>06:55 What Anthropic's Sabotage Report Actually Shows</p><p>08:30 Every Major Model Schemes — The Apollo Research Findings</p><p>10:10 Can You Train Scheming Out? The Anti-Scheming Paradox</p><p>12:45 The Race Dynamic and Why Labs Keep Cutting Corners</p><p>15:20 Four Emergent Safety Properties Nobody Planned</p><p>20:05 The Consciousness Framing Is Hurting Us</p><p>23:30 Intent Engineering: The Fix That's Up to You</p><p>28:10 Three Questions That Change Everything</p><p>30:45 Where We Stand in 2026</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening with AI safety in 2026? The common story is that the safety system is collapsing — but the reality is more complicated.</p><br><p>In this video, I share the inside scoop on why the AI risk picture is both worse and more resilient than the headlines suggest:</p><br><p>- Why frontier AI agents scheme even after anti-scheming training</p><p>- How competitive dynamics create emergent safety properties no lab planned</p><p>- What "intent engineering" is and why it beats prompt engineering for AI agents</p><p>- Where the real vulnerability lives — and why it's you, not the models</p><br><p>The risks from large language models and autonomous AI agents are accelerating, but so are the structural forces holding the system together — and closing the gap between what you tell an agent and what you actually mean is the most leveraged safety skill you can build right now.</p><br><p>Chapters</p><p>00:00 Why This Isn't Terminator</p><p>02:15 How Frontier Models Actually Learn</p><p>04:40 The Misalignment Mechanic: Novel Paths Gone Wrong</p><p>06:55 What Anthropic's Sabotage Report Actually Shows</p><p>08:30 Every Major Model Schemes — The Apollo Research Findings</p><p>10:10 Can You Train Scheming Out? The Anti-Scheming Paradox</p><p>12:45 The Race Dynamic and Why Labs Keep Cutting Corners</p><p>15:20 Four Emergent Safety Properties Nobody Planned</p><p>20:05 The Consciousness Framing Is Hurting Us</p><p>23:30 Intent Engineering: The Fix That's Up to You</p><p>28:10 Three Questions That Change Everything</p><p>30:45 Where We Stand in 2026</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[45 People, $200M Revenue. The Question Nobody's Asking About AI and Your Team Size.]]></title>
			<itunes:title><![CDATA[45 People, $200M Revenue. The Question Nobody's Asking About AI and Your Team Size.]]></itunes:title>
			<pubDate>Sun, 08 Mar 2026 22:24:46 GMT</pubDate>
			<itunes:duration>25:44</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69adf72d96c5a430ddaff49b/media.mp3" length="148300910" type="audio/mpeg"/>
			<guid isPermaLink="false">69adf72d96c5a430ddaff49b</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/executive-briefing-ai-raised-output</link>
			<acast:episodeId>69adf72d96c5a430ddaff49b</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>45-people-200m-revenue-the-question-nobodys-asking-about-ai</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTlKmMS0fdW14priuwdzTCJD3q8Qb9R/iL2tswC+PkDlQyWc0NlVvA04e3OuqC+w/mMTaMt7DngMLGxFXCibihPR]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening with AI and team size in your organization? The common story is that AI makes teams more productive so you can cut headcount — but the reality is more complicated.</p><br><p>In this video, I share the inside scoop on why the five-person strike team is the structural unit of the AI era:</p><br><p>- Why AI raised coordination costs by the same order as output</p><p>- How scouts and strike teams map to different AI-era missions</p><p>- What correctness-first thinking means for how you hire and build</p><p>- Where the real opportunity is — expanding ambition, not shrinking headcount</p><br><p>AI agents and LLMs didn't break your meetings problem — they amplified a team size problem you already had, and the leaders who restructure around small, high-judgment teams will build the defining companies of this decade.</p><br><p>Chapters</p><p>00:00 Your Meetings Problem Is Actually a Team Size Problem</p><p>02:10 The Math of Communication Pathways</p><p>04:15 Dunbar's Number and Why the Military Cracked This First</p><p>06:00 What AI Actually Changed About Team Size</p><p>08:20 Why Volume Is Free and Correctness Is Scarce</p><p>10:45 The Harvard Study That Proves the Point</p><p>12:30 Scouts: The One-Person AI Strike Force</p><p>15:00 Peter Steinberger and the Solo Agent Model</p><p>17:10 Strike Teams: Why Five Is the Magic Number</p><p>20:00 The Ambition Failure Nobody Talks About</p><p>23:15 How to Compose Many Strike Teams Into One Org</p><p>25:40 The AI Slop Tax and the True Cost of a Weak Link</p><p>28:00 How to Test Who's Ready for the Strike Team Model</p><p>30:20 The Shopify Mandate and What Tobi Lütke Got Right</p><p>33:00 Restructure for Ambition, Not Efficiency</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening with AI and team size in your organization? The common story is that AI makes teams more productive so you can cut headcount — but the reality is more complicated.</p><br><p>In this video, I share the inside scoop on why the five-person strike team is the structural unit of the AI era:</p><br><p>- Why AI raised coordination costs by the same order as output</p><p>- How scouts and strike teams map to different AI-era missions</p><p>- What correctness-first thinking means for how you hire and build</p><p>- Where the real opportunity is — expanding ambition, not shrinking headcount</p><br><p>AI agents and LLMs didn't break your meetings problem — they amplified a team size problem you already had, and the leaders who restructure around small, high-judgment teams will build the defining companies of this decade.</p><br><p>Chapters</p><p>00:00 Your Meetings Problem Is Actually a Team Size Problem</p><p>02:10 The Math of Communication Pathways</p><p>04:15 Dunbar's Number and Why the Military Cracked This First</p><p>06:00 What AI Actually Changed About Team Size</p><p>08:20 Why Volume Is Free and Correctness Is Scarce</p><p>10:45 The Harvard Study That Proves the Point</p><p>12:30 Scouts: The One-Person AI Strike Force</p><p>15:00 Peter Steinberger and the Solo Agent Model</p><p>17:10 Strike Teams: Why Five Is the Magic Number</p><p>20:00 The Ambition Failure Nobody Talks About</p><p>23:15 How to Compose Many Strike Teams Into One Org</p><p>25:40 The AI Slop Tax and the True Cost of a Weak Link</p><p>28:00 How to Test Who's Ready for the Strike Team Model</p><p>30:20 The Shopify Mandate and What Tobi Lütke Got Right</p><p>33:00 Restructure for Ambition, Not Efficiency</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>GPT-5.4 Let Mickey Mouse Into a Production Database. Nobody Noticed. (What This Means For Your Work)</title>
			<itunes:title>GPT-5.4 Let Mickey Mouse Into a Production Database. Nobody Noticed. (What This Means For Your Work)</itunes:title>
			<pubDate>Sun, 08 Mar 2026 00:26:08 GMT</pubDate>
			<itunes:duration>29:34</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69acc2206ffdcd8188ec40d9/media.mp3" length="28391483" type="audio/mpeg"/>
			<guid isPermaLink="false">69acc2206ffdcd8188ec40d9</guid>
			<itunes:explicit>false</itunes:explicit>
			<link><![CDATA[https://natesnewsletter.substack.com/p/i-tested-gpt-54-against-claude-and?r=1z4sm5&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true]]></link>
			<acast:episodeId>69acc2206ffdcd8188ec40d9</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>gpt-54-let-mickey-mouse-into-a-production-database</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTm2oQCEaxEWMFYIucShuw40CzmE3kNFwuki2nQjQZPywz/2kAt6SXLkFhMXJ3r8WvyeWfqSzCjxIXIrG2IKF9fG]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening when OpenAI engineers accidentally leak ChatGPT 5.4's existence but the model isn't even the interesting part? The common story is about the next capability jump—but the reality is more interesting when the company that first makes trillion-token organizational context genuinely usable becomes the new enterprise data platform.</p><br><p>In this video, I share the inside scoop on why the four-part compound bet determines whether this justifies an $840 billion valuation:</p><br><p> • Why intelligence and context are multiplicative—and weak reasoning with long context is actively harmful</p><p> • How retrieval at enterprise scale breaks RAG in ways nobody's benchmarking</p><p> • What memory that doesn't rot requires when organizational knowledge continuously evolves</p><p> • Where Anthropic's organic context accumulation through Claude Code might beat OpenAI's infrastructure play</p><br><p>For builders watching the enterprise stack get restructured, the lock-in from synthesized understanding is deeper than anything enterprise software has ever seen.</p><br><p>Chapters</p><p>00:00 The Most Expensive Bet in History Is an AI Bet</p><p>02:45 The Current SaaS Stack as a Filing Cabinet</p><p>05:30 What the Stateful Runtime Environment Becomes</p><p>08:00 The Four Compound Bets That Must All Work</p><p>10:30 Bet One: Intelligence and Context Are Multiplicative</p><p>13:00 Bet Two: Memory That Doesn't Rot</p><p>16:00 Bet Three: The Retrieval Problem Nobody's Talking About</p><p>19:30 Bet Four: Execution at the Speed of Trust</p><p>22:00 The New System of Record for Organizational Understanding</p><p>25:00 The Flywheel: How Context Compounds Month Over Month</p><p>28:00 Comprehension Lock-In: Deeper Than Data Lock-In</p><p>30:30 Anthropic's Organic Flywheel Through Claude Code</p><p>34:00 Three Questions to Ask From Your Chair</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: 
https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening when OpenAI engineers accidentally leak ChatGPT 5.4's existence but the model isn't even the interesting part? The common story is about the next capability jump—but the reality is more interesting when the company that first makes trillion-token organizational context genuinely usable becomes the new enterprise data platform.</p><br><p>In this video, I share the inside scoop on why the four-part compound bet determines whether this justifies an $840 billion valuation:</p><br><p> • Why intelligence and context are multiplicative—and weak reasoning with long context is actively harmful</p><p> • How retrieval at enterprise scale breaks RAG in ways nobody's benchmarking</p><p> • What memory that doesn't rot requires when organizational knowledge continuously evolves</p><p> • Where Anthropic's organic context accumulation through Claude Code might beat OpenAI's infrastructure play</p><br><p>For builders watching the enterprise stack get restructured, the lock-in from synthesized understanding is deeper than anything enterprise software has ever seen.</p><br><p>Chapters</p><p>00:00 The Most Expensive Bet in History Is an AI Bet</p><p>02:45 The Current SaaS Stack as a Filing Cabinet</p><p>05:30 What the Stateful Runtime Environment Becomes</p><p>08:00 The Four Compound Bets That Must All Work</p><p>10:30 Bet One: Intelligence and Context Are Multiplicative</p><p>13:00 Bet Two: Memory That Doesn't Rot</p><p>16:00 Bet Three: The Retrieval Problem Nobody's Talking About</p><p>19:30 Bet Four: Execution at the Speed of Trust</p><p>22:00 The New System of Record for Organizational Understanding</p><p>25:00 The Flywheel: How Context Compounds Month Over Month</p><p>28:00 Comprehension Lock-In: Deeper Than Data Lock-In</p><p>30:30 Anthropic's Organic Flywheel Through Claude Code</p><p>34:00 Three Questions to Ask From Your Chair</p><br><p>Subscribe for daily AI strategy and news.</p><p>For deeper playbooks and analysis: 
https://natesnewsletter.substack.com/</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>Claude Code vs Codex: The Decision That Compounds Every Week You Delay</title>
			<itunes:title>Claude Code vs Codex: The Decision That Compounds Every Week You Delay</itunes:title>
			<pubDate>Fri, 06 Mar 2026 23:25:45 GMT</pubDate>
			<itunes:duration>29:54</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/69ab62786ffdcd8188b1efdf/media.mp3" length="28714988" type="audio/mpeg"/>
			<guid isPermaLink="false">69ab62786ffdcd8188b1efdf</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/same-model-78-vs-42-the-harness-made</link>
			<acast:episodeId>69ab62786ffdcd8188b1efdf</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>claude-code-vs-codex-the-decision-that-compounds-every-week</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZMTtedvdcRQbP4eiLMjXzCKLPjEYLpGj+NMVKa+5C8pL4u/EOj1Vw4h5MMJYp0lCcFAe0fnxBJy/1ju4Qxy1fh8gO4DvlGA40yms2g0/hOkcrfHIopjTygHFqGwwOPKFIai4SuTvs86Lx3UYCyl6ZslsAim8T2seHSnREZaIM2riSb42QTOv3tEZ2y4PtTNTlYaQN1oTIce+OGNS/dYMQzWhSG/bBAuYlWLcGVIMSm9BPEBJZPHeCHsXo5980r/UHsedCwa72eF/F1f7VZbWhv]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening inside AI coding tools that nobody's comparing? The common story is that Claude vs. ChatGPT is a model competition. But the model is the least important part.</p><br><p>In this video, I share the inside scoop on why the AI harness matters more than the model:</p><br><p>- Why the same Claude model scored 78% vs. 42% on identical benchmarks</p><p>- How Claude Code and Codex embody opposite philosophies of AI collaboration</p><p>- What harness lock-in actually costs teams who switch tools later</p><p>- Where non-technical leaders are making the wrong procurement decisions</p><br><p>The teams getting this right are choosing the architecture that matches how they work, and that decision compounds every quarter.</p><br><p>Chapters</p><p>00:00 The harness vs. the model — what everyone gets wrong</p><p>01:45 Why nobody compares AI harnesses</p><p>03:20 Same model, double the performance: the benchmark that proves it</p><p>04:50 How Anthropic built Claude Code's harness</p><p>07:10 How OpenAI built Codex's harness</p><p>09:30 Five ways the harnesses are diverging</p><p>13:45 State and memory: where institutional knowledge lives</p><p>16:20 Context management and tool integration</p><p>19:00 Multi-agent coordination: collaboration vs. isolation</p><p>21:30 Harness lock-in: the cost nobody is pricing in</p><p>24:00 What this means for engineers and engineering leaders</p><p>26:30 Why non-technical leaders need to understand this now</p><br><p>Subscribe for daily AI strategy and news.</p><br><p>Full Story w/ Prompts: https://natesnewsletter.substack.com/p/same-model-78-vs-42-the-harness-made</p><br><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><br><p>My site: https://natebjones.com</p><p>___________________</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening inside AI coding tools that nobody's comparing? The common story is that Claude vs. ChatGPT is a model competition. But the model is the least important part.</p><br><p>In this video, I share the inside scoop on why the AI harness matters more than the model:</p><br><p>- Why the same Claude model scored 78% vs. 42% on identical benchmarks</p><p>- How Claude Code and Codex embody opposite philosophies of AI collaboration</p><p>- What harness lock-in actually costs teams who switch tools later</p><p>- Where non-technical leaders are making the wrong procurement decisions</p><br><p>The teams getting this right are choosing the architecture that matches how they work, and that decision compounds every quarter.</p><br><p>Chapters</p><p>00:00 The harness vs. the model — what everyone gets wrong</p><p>01:45 Why nobody compares AI harnesses</p><p>03:20 Same model, double the performance: the benchmark that proves it</p><p>04:50 How Anthropic built Claude Code's harness</p><p>07:10 How OpenAI built Codex's harness</p><p>09:30 Five ways the harnesses are diverging</p><p>13:45 State and memory: where institutional knowledge lives</p><p>16:20 Context management and tool integration</p><p>19:00 Multi-agent coordination: collaboration vs. isolation</p><p>21:30 Harness lock-in: the cost nobody is pricing in</p><p>24:00 What this means for engineers and engineering leaders</p><p>26:30 Why non-technical leaders need to understand this now</p><br><p>Subscribe for daily AI strategy and news.</p><br><p>Full Story w/ Prompts: https://natesnewsletter.substack.com/p/same-model-78-vs-42-the-harness-made</p><br><p>For deeper playbooks and analysis: https://natesnewsletter.substack.com/</p><br><p>My site: https://natebjones.com</p><p>___________________</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>Everyone You Know Is About to Try Claude (I Showed 3 People for 5 Minutes — All 3 Switched)</title>
			<itunes:title>Everyone You Know Is About to Try Claude (I Showed 3 People for 5 Minutes — All 3 Switched)</itunes:title>
			<pubDate>Wed, 04 Mar 2026 20:26:00 GMT</pubDate>
			<itunes:duration>20:55</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Amchvwna5ivfoyr1bvoqee4ho/media.mp3" length="15065339" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:mchvwna5ivfoyr1bvoqee4ho</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/millions-just-switched-to-claude</link>
			<acast:episodeId>69ab3b83e2ffe1fef6526aa7</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:episodeUrl>everyone-you-know-is-about-to-try-claude</acast:episodeUrl>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNxnS6YHPp4uv5AYZ3KnmEetwxAM8gLFsIm0YpKezeiA7K0x+cPYz/CSe1c72b+VTEbSnKdOg1vk+r1ZI8sW9qIQ==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening when millions of new users download Claude expecting a ChatGPT replacement and wonder why the spreadsheet features are missing? The common story is that AI models are interchangeable brands—but the reality is more interesting when constitutional AI produces measurably different behavior than reinforcement learning with human feedback.</p><p>In this video, I share the inside scoop on why switching to Claude with the same habits misses the point:</p><p>• Why Claude is more likely to tell you your plan has a hole in it</p><p>• How describing your situation instead of your desired output changes everything</p><p>• What extended thinking reveals about steering the chain of thought in real time</p><p>• Where Cowork reframes the category from conversation partner to desktop worker</p><p>For anyone teaching a friend about Claude or learning it yourself, these differences shape how you think about AI over time—and that compounds.</p><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: <a href="https://natesnewsletter.substack.com/p/millions-just-switched-to-claude?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true" rel="noopener noreferrer" target="_blank">https://natesnewsletter.substack.com/p/millions-just-switched-to-claude</a></p><p>© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening when millions of new users download Claude expecting a ChatGPT replacement and wonder why the spreadsheet features are missing? The common story is that AI models are interchangeable brands—but the reality is more interesting when constitutional AI produces measurably different behavior than reinforcement learning with human feedback.</p><p>In this video, I share the inside scoop on why switching to Claude with the same habits misses the point:</p><p>• Why Claude is more likely to tell you your plan has a hole in it</p><p>• How describing your situation instead of your desired output changes everything</p><p>• What extended thinking reveals about steering the chain of thought in real time</p><p>• Where Cowork reframes the category from conversation partner to desktop worker</p><p>For anyone teaching a friend about Claude or learning it yourself, these differences shape how you think about AI over time—and that compounds.</p><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: <a href="https://natesnewsletter.substack.com/p/millions-just-switched-to-claude?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true" rel="noopener noreferrer" target="_blank">https://natesnewsletter.substack.com/p/millions-just-switched-to-claude</a></p><p>© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Dario Amodei Made One Mistake. Sam Altman Got $110 Billion. Here's the Full Story.]]></title>
			<itunes:title><![CDATA[Dario Amodei Made One Mistake. Sam Altman Got $110 Billion. Here's the Full Story.]]></itunes:title>
			<pubDate>Wed, 04 Mar 2026 02:39:00 GMT</pubDate>
			<itunes:duration>26:19</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Ajux32dunf0a2hn9c227jys8v/media.mp3" length="18956435" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:jux32dunf0a2hn9c227jys8v</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/openai-raised-110b-and-the-pentagon</link>
			<acast:episodeId>69ab3b837036d73902198439</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNHXXukvfoW42sP4JsoNr7ZRuYubDU88GmKOoHUHkooZE9Z/u7uLGLdPtdMhof9Ry3/fsC9yMvqn8lMNiENNpKnA==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening when Anthropic gets designated a supply chain risk hours after OpenAI signs a Pentagon deal and the largest private funding round in history? The common story is about principles versus pragmatism—but the reality is more interesting when Claude was too embedded in combat operations to rip out even after a presidential order.</p><p>In this video, I share the inside scoop on why Dario misread the room while Sam walked away with the keys to the kingdom:</p><p>• Why Anthropic's objection was technical, not moral—and contingent on model reliability</p><p>• How OpenAI's $110 billion round equals 65% of all US venture capital in 2023</p><p>• What the circular financing structure reveals about who's picking winners</p><p>• Where enterprise contracts will be won or lost as government revenue becomes the gold standard</p><p>For builders watching cloud providers play every side of the board, the question is whether you're okay with a one-model winner world or fighting for a multi-model future.</p><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: <a href="https://natesnewsletter.substack.com/p/openai-raised-110b-and-the-pentagon?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true" rel="noopener noreferrer" target="_blank">https://natesnewsletter.substack.com/p/openai-raised-110b-and-the-pentagon</a></p><p>© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening when Anthropic gets designated a supply chain risk hours after OpenAI signs a Pentagon deal and the largest private funding round in history? The common story is about principles versus pragmatism—but the reality is more interesting when Claude was too embedded in combat operations to rip out even after a presidential order.</p><p>In this video, I share the inside scoop on why Dario misread the room while Sam walked away with the keys to the kingdom:</p><p>• Why Anthropic's objection was technical, not moral—and contingent on model reliability</p><p>• How OpenAI's $110 billion round equals 65% of all US venture capital in 2023</p><p>• What the circular financing structure reveals about who's picking winners</p><p>• Where enterprise contracts will be won or lost as government revenue becomes the gold standard</p><p>For builders watching cloud providers play every side of the board, the question is whether you're okay with a one-model winner world or fighting for a multi-model future.</p><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: <a href="https://natesnewsletter.substack.com/p/openai-raised-110b-and-the-pentagon?r=1z4sm5&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true" rel="noopener noreferrer" target="_blank">https://natesnewsletter.substack.com/p/openai-raised-110b-and-the-pentagon</a></p><p>© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[You Don't Need SaaS. The $0.10 System That Replaced My AI Workflow (45 Min No-Code Build)]]></title>
			<itunes:title><![CDATA[You Don't Need SaaS. The $0.10 System That Replaced My AI Workflow (45 Min No-Code Build)]]></itunes:title>
			<pubDate>Mon, 02 Mar 2026 06:00:00 GMT</pubDate>
			<itunes:duration>30:15</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Asn730zhb0aq0ay0a6qaeq33i/media.mp3" length="21789571" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:sn730zhb0aq0ay0a6qaeq33i</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://natesnewsletter.substack.com/p/every-ai-you-use-forgets-you-heres</link>
			<acast:episodeId>69ab3b87b49eecc0b7c4baab</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNXoRv8whEqjarB3ePtgZuUhx4NSpcbpZFMm03pICzglLtca41CoGG01h5Z7btZ154/KZLJ9bam7z7UoYPh0rLcg==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening when Claude's memory doesn't know what you told ChatGPT and your phone app doesn't share context with your coding agent? The common story is that AI memory is getting better—but the reality is more interesting when every platform has built a walled garden designed to create lock-in.</p><p class="text-node">In this video, I share the inside scoop on why the architecture of agent-readable memory matters more than any individual tool:</p><p class="text-node">• Why your Notion workspace is beautiful for humans and useless for agents that search by meaning <br>• How a Postgres database with vector embeddings runs for 10-30 cents a month <br>• What MCP servers enable when one brain connects to every AI you touch <br>• Where the compounding advantage lives for people who stop re-explaining themselves</p><p class="text-node">For anyone watching the agent revolution go mainstream, the gap between starting from zero and starting with six months of accumulated context is the career gap of this decade.</p><p class="text-node">Subscribe for daily AI strategy and news.<br>For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a><br>© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening when Claude's memory doesn't know what you told ChatGPT and your phone app doesn't share context with your coding agent? The common story is that AI memory is getting better—but the reality is more interesting when every platform has built a walled garden designed to create lock-in.</p><p class="text-node">In this video, I share the inside scoop on why the architecture of agent-readable memory matters more than any individual tool:</p><p class="text-node">• Why your Notion workspace is beautiful for humans and useless for agents that search by meaning <br>• How a Postgres database with vector embeddings runs for 10-30 cents a month <br>• What MCP servers enable when one brain connects to every AI you touch <br>• Where the compounding advantage lives for people who stop re-explaining themselves</p><p class="text-node">For anyone watching the agent revolution go mainstream, the gap between starting from zero and starting with six months of accumulated context is the career gap of this decade.</p><p class="text-node">Subscribe for daily AI strategy and news.<br>For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a><br>© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[We're Getting AI Agents Backwards—Simulation Wins]]></title>
			<itunes:title><![CDATA[We're Getting AI Agents Backwards—Simulation Wins]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>15:32</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Ar3l0rg04508j9t8mctgktgl3/media.mp3" length="11192112" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:r3l0rg04508j9t8mctgktgl3</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b84e2ffe1fef6526ab3</link>
			<acast:episodeId>69ab3b84e2ffe1fef6526ab3</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNI3lgs5RRkVpVsoeFMBAa10jKaU8UJT6mXGmDgzSQql61FDvrG1P3YIIDVZhlT18UyETF7eLIMArhjDkFK8QUPQ==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening when teams pour resources into AI agents that do tasks? The common story is that automation is the goal, but the reality is more complicated when the trillion-dollar edge is in agents that model reality rather than agents that close tickets. In this video, I share the inside scoop on why simulation is the missing layer in most enterprise AI stacks:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why adding a simulated world to the classic LLM plus tools stack transforms an agent from a task-runner into a reality simulator</p></li><li class="list-item-node"><p class="text-node">How alternate-timeline exploration and time compression let iteration 300 happen while rivals are still on iteration 3</p></li><li class="list-item-node"><p class="text-node">What Renault, BMW, Formula One, and ad networks are already proving about simulation payoffs in the real world</p></li><li class="list-item-node"><p class="text-node">Where the objections about accuracy, cost, and culture break down when you use calibration loops and probabilistic thinking</p></li></ul><p class="text-node">For enterprise leaders navigating the next 24 months, agents in trench coats doing tasks are linear. Agents in simulated worlds are exponential, and early movers in modeling will outpace pure automation players before they know what happened.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com">https://natesnewsletter.substack.com</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening when teams pour resources into AI agents that do tasks? The common story is that automation is the goal, but the reality is more complicated when the trillion-dollar edge is in agents that model reality rather than agents that close tickets. In this video, I share the inside scoop on why simulation is the missing layer in most enterprise AI stacks:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why adding a simulated world to the classic LLM plus tools stack transforms an agent from a task-runner into a reality simulator</p></li><li class="list-item-node"><p class="text-node">How alternate-timeline exploration and time compression let iteration 300 happen while rivals are still on iteration 3</p></li><li class="list-item-node"><p class="text-node">What Renault, BMW, Formula One, and ad networks are already proving about simulation payoffs in the real world</p></li><li class="list-item-node"><p class="text-node">Where the objections about accuracy, cost, and culture break down when you use calibration loops and probabilistic thinking</p></li></ul><p class="text-node">For enterprise leaders navigating the next 24 months, agents in trench coats doing tasks are linear. Agents in simulated worlds are exponential, and early movers in modeling will outpace pure automation players before they know what happened.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com">https://natesnewsletter.substack.com</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>Why the Smartest AI Teams Are Panic-Buying Compute: The 36-Month AI Infrastructure Crisis Is Here</title>
			<itunes:title>Why the Smartest AI Teams Are Panic-Buying Compute: The 36-Month AI Infrastructure Crisis Is Here</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>26:14</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3At6pn16lhwm5212mv9bmr9mnj/media.mp3" length="18893427" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:t6pn16lhwm5212mv9bmr9mnj</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b847036d73902198450</link>
			<acast:episodeId>69ab3b847036d73902198450</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNBkbfW9w7j0cPbgkOZTjPRFXuXOALLVxeRe8wA9fdiXgEFP+1sCQj4Zj96I23Fb99rO/CpyBtzOBgVdf5VSLGYg==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI compute infrastructure? The common story is that supply will catch up to demand—but the reality is more complicated when DRAM prices spike 60% quarterly and every hyperscaler is hoarding capacity. In this video, I share the inside scoop on why the global inference crisis is not a prediction but an observation of current conditions:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why enterprise token consumption is scaling from 1 billion to 100 billion per worker annually</p></li><li class="list-item-node"><p class="text-node">How memory, semiconductor, and GPU bottlenecks compound with no relief until 2028</p></li><li class="list-item-node"><p class="text-node">What hyperscalers choosing their own products over customers means for enterprise allocation</p></li><li class="list-item-node"><p class="text-node">Where sharp CTOs are securing capacity and building routing layers now</p></li></ul><p class="text-node">For enterprise leaders navigating the next 24 months, traditional planning frameworks are broken—and the window to act is closing fast.</p><p class="text-node">Subscribe for daily AI strategy and news.<br>For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/p/executive-briefing-the-global-inference">https://natesnewsletter.substack.com/p/executive-briefing-the-global-inference</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI compute infrastructure? The common story is that supply will catch up to demand—but the reality is more complicated when DRAM prices spike 60% quarterly and every hyperscaler is hoarding capacity. In this video, I share the inside scoop on why the global inference crisis is not a prediction but an observation of current conditions:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why enterprise token consumption is scaling from 1 billion to 100 billion per worker annually</p></li><li class="list-item-node"><p class="text-node">How memory, semiconductor, and GPU bottlenecks compound with no relief until 2028</p></li><li class="list-item-node"><p class="text-node">What hyperscalers choosing their own products over customers means for enterprise allocation</p></li><li class="list-item-node"><p class="text-node">Where sharp CTOs are securing capacity and building routing layers now</p></li></ul><p class="text-node">For enterprise leaders navigating the next 24 months, traditional planning frameworks are broken—and the window to act is closing fast.</p><p class="text-node">Subscribe for daily AI strategy and news.<br>For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/p/executive-briefing-the-global-inference">https://natesnewsletter.substack.com/p/executive-briefing-the-global-inference</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>Clawdbot to Moltbot to OpenClaw: The 72 Hours That Broke Everything (The Full Breakdown)</title>
			<itunes:title>Clawdbot to Moltbot to OpenClaw: The 72 Hours That Broke Everything (The Full Breakdown)</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>22:01</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Au6vn8x7ybqwqg2zvusc9y6n9/media.mp3" length="15858103" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:u6vn8x7ybqwqg2zvusc9y6n9</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b84e2ffe1fef6526ac2</link>
			<acast:episodeId>69ab3b84e2ffe1fef6526ac2</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNX5QXtvmbvML6nTqjrFfLsChB+VaexjM63yk7oM1UXaI9IEmhP6KqlwdFyPm4evL/UsnS/DSybc8dxHPxmu1F+g==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with the fastest-growing open source project in GitHub history? The common story is that Moltbot (now OpenClaw) is the future of personal AI, but the reality is more complicated. In this video, I share the inside scoop on why a lobster-themed AI assistant reveals the core tension in agentic AI:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why 100,000+ GitHub stars in weeks signals massive pent-up demand for agents that act</p></li><li class="list-item-node"><p class="text-node">How a 10-second window during the rebrand let crypto scammers steal millions</p></li><li class="list-item-node"><p class="text-node">What security researchers found when they probed exposed Moltbot instances</p></li><li class="list-item-node"><p class="text-node">Where the line sits between useful AI agents and dangerous attack surfaces</p></li></ul><p class="text-node">For builders and operators watching agentic AI unfold, the honest assessment is that Moltbot works, and that's exactly what makes it risky.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/p/the-moltbot-origin-story-a-16-million">https://natesnewsletter.substack.com/p/the-moltbot-origin-story-a-16-million</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with the fastest-growing open source project in GitHub history? The common story is that Moltbot (now OpenClaw) is the future of personal AI, but the reality is more complicated. In this video, I share the inside scoop on why a lobster-themed AI assistant reveals the core tension in agentic AI:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why 100,000+ GitHub stars in weeks signals massive pent-up demand for agents that act</p></li><li class="list-item-node"><p class="text-node">How a 10-second window during the rebrand let crypto scammers steal millions</p></li><li class="list-item-node"><p class="text-node">What security researchers found when they probed exposed Moltbot instances</p></li><li class="list-item-node"><p class="text-node">Where the line sits between useful AI agents and dangerous attack surfaces</p></li></ul><p class="text-node">For builders and operators watching agentic AI unfold, the honest assessment is that Moltbot works, and that's exactly what makes it risky.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/p/the-moltbot-origin-story-a-16-million">https://natesnewsletter.substack.com/p/the-moltbot-origin-story-a-16-million</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>Why 2026 Is the Year to Build a Second Brain (And Why You NEED One)</title>
			<itunes:title>Why 2026 Is the Year to Build a Second Brain (And Why You NEED One)</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>30:05</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Awd56m92qzyyh5lk0d8878ioz/media.mp3" length="21667632" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:wd56m92qzyyh5lk0d8878ioz</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b84f6d1583bb8e58b3b</link>
			<acast:episodeId>69ab3b84f6d1583bb8e58b3b</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNjk/zji/7DDPLrufEE8exyYsU1xbdnTCG7OuhUukyfpasbnbfp3ze5abDA9fXScVjzzDApaBoaBWJaNhd2lWPaw==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening when AI enables active systems instead of passive storage? The common story is that second brains are just better note-taking, but the reality is more complicated when AI loops can classify, route, and surface information automatically while you sleep. In this video, I share the inside scoop on building a second brain with Slack, Notion, Zapier, and Claude or ChatGPT that actually works for more than one in twenty people:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why traditional storage systems fail because they're passive and require you to remember what you stored and where you put it</p></li><li class="list-item-node"><p class="text-node">How AI loops handle classification, routing, and surfacing so the system comes to you instead of waiting to be searched</p></li><li class="list-item-node"><p class="text-node">What eight building blocks make second brain systems actually work, from frictionless capture to confidence filters to daily nudges</p></li><li class="list-item-node"><p class="text-node">Where engineering principles translate into no-code automation that non-engineers can build, maintain, and trust</p></li></ul><p class="text-node">For knowledge workers navigating 2026, for the first time in human history you can build systems that work while you sleep, closing open loops and nudging you toward what matters without requiring a single line of code.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening when AI enables active systems instead of passive storage? The common story is that second brains are just better note-taking, but the reality is more complicated when AI loops can classify, route, and surface information automatically while you sleep. In this video, I share the inside scoop on building a second brain with Slack, Notion, Zapier, and Claude or ChatGPT that actually works for more than one in twenty people:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why traditional storage systems fail because they're passive and require you to remember what you stored and where you put it</p></li><li class="list-item-node"><p class="text-node">How AI loops handle classification, routing, and surfacing so the system comes to you instead of waiting to be searched</p></li><li class="list-item-node"><p class="text-node">What eight building blocks make second brain systems actually work, from frictionless capture to confidence filters to daily nudges</p></li><li class="list-item-node"><p class="text-node">Where engineering principles translate into no-code automation that non-engineers can build, maintain, and trust</p></li></ul><p class="text-node">For knowledge workers navigating 2026, for the first time in human history you can build systems that work while you sleep, closing open loops and nudging you toward what matters without requiring a single line of code.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[The 3-Layer Framework That Predicts Which Jobs AI Will (and Won't) Replace]]></title>
			<itunes:title><![CDATA[The 3-Layer Framework That Predicts Which Jobs AI Will (and Won't) Replace]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>22:57</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Ajlwlw5zn0pmkd09h84z2v734/media.mp3" length="16534570" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:jlwlw5zn0pmkd09h84z2v734</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b84f6d1583bb8e58b36</link>
			<acast:episodeId>69ab3b84f6d1583bb8e58b36</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNx8BLLFvxGfnrVnHWYgIBMlpaqC182ehsKa9dbi2J/0FcDvKWlbEfJn09/9e9JaBFYa0WeIl+zgWDKBL0ilxPYQ==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI and business competition? The common story is that AI disrupts everything uniformly, but the reality is more complicated when mid-tier digital firms are getting crushed from both directions while local plumbers and electricians are largely protected. In this video, I share the inside scoop on how AI is bifurcating the economy into a barbell with very little safe ground in the middle:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why tokenizable cognition like drafting, analysis, and coding is falling toward zero and what that means for anyone selling those services</p></li><li class="list-item-node"><p class="text-node">How physical, local businesses are actually protected by AI economics in ways most analysts miss entirely</p></li><li class="list-item-node"><p class="text-node">What three layers of business work determine your competitive vulnerability before you spend a dollar on AI</p></li><li class="list-item-node"><p class="text-node">Where your AI investment should go based on where your firm actually sits in this reshaped economy</p></li></ul><p class="text-node">For leaders navigating 2026, a three-person team with AI tools now rivals a fifty-person agency, but no AI can show up at your house and fix your furnace. The strategic opportunity is real, but only if you diagnose your position honestly before the market does it for you.</p><p class="text-node">Subscribe for daily AI strategy and news. <br>For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI and business competition? The common story is that AI disrupts everything uniformly, but the reality is more complicated when mid-tier digital firms are getting crushed from both directions while local plumbers and electricians are largely protected. In this video, I share the inside scoop on how AI is bifurcating the economy into a barbell with very little safe ground in the middle:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why tokenizable cognition like drafting, analysis, and coding is falling toward zero and what that means for anyone selling those services</p></li><li class="list-item-node"><p class="text-node">How physical, local businesses are actually protected by AI economics in ways most analysts miss entirely</p></li><li class="list-item-node"><p class="text-node">What three layers of business work determine your competitive vulnerability before you spend a dollar on AI</p></li><li class="list-item-node"><p class="text-node">Where your AI investment should go based on where your firm actually sits in this reshaped economy</p></li></ul><p class="text-node">For leaders navigating 2026, a three-person team with AI tools now rivals a fifty-person agency, but no AI can show up at your house and fix your furnace. The strategic opportunity is real, but only if you diagnose your position honestly before the market does it for you.</p><p class="text-node">Subscribe for daily AI strategy and news. <br>For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[THIS is Why You're Still Slow Even With AI (The Bottleneck Moved--Here's What to Do About It)]]></title>
			<itunes:title><![CDATA[THIS is Why You're Still Slow Even With AI (The Bottleneck Moved--Here's What to Do About It)]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>30:22</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Adv0tf0mbjb5sev63wxpk9ihy/media.mp3" length="21866685" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:dv0tf0mbjb5sev63wxpk9ihy</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b84c2eb2fc3ab48facc</link>
			<acast:episodeId>69ab3b84c2eb2fc3ab48facc</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbN6Z6vQfz0jUX/ChiwybV7vU0hakKeN50brM4ZS1AE6/Ado13zG3Mq2zY1G/dCB78d9uHk3Pht7zTCaH9pgao0HQ==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI and how we work? The common story is that AI tools are making us more productive, but the reality is more complicated when most work habits are now optimizing for a bottleneck that no longer exists. In this video, I share the inside scoop on why execution capacity is no longer the scarce resource and what that means for how you spend your time:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why the bottleneck shifted to clarity, ambition, and distribution when execution got cheap enough that the meeting now takes longer than building the feature</p></li><li class="list-item-node"><p class="text-node">How eight specific habits are actively costing you in an AI-native world by protecting execution instead of doing it</p></li><li class="list-item-node"><p class="text-node">What Anthropic shipping Cowork in ten days with four people reveals about the gap between where the bottleneck moved and the habits most leaders still have</p></li><li class="list-item-node"><p class="text-node">Where the real moats are forming around relationships, distribution, and ambition when everyone can build but not everyone can swing hard enough</p></li></ul><p class="text-node">For professionals navigating 2026, the chaos you're feeling is not random. It's the gap between where the bottleneck moved and the habits you still have, and closing that gap is the opportunity.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI and how we work? The common story is that AI tools are making us more productive, but the reality is more complicated when most work habits are now optimizing for a bottleneck that no longer exists. In this video, I share the inside scoop on why execution capacity is no longer the scarce resource and what that means for how you spend your time:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why the bottleneck shifted to clarity, ambition, and distribution when execution got cheap enough that the meeting now takes longer than building the feature</p></li><li class="list-item-node"><p class="text-node">How eight specific habits are actively costing you in an AI-native world by protecting execution instead of doing it</p></li><li class="list-item-node"><p class="text-node">What Anthropic shipping Cowork in ten days with four people reveals about the gap between where the bottleneck moved and the habits most leaders still have</p></li><li class="list-item-node"><p class="text-node">Where the real moats are forming around relationships, distribution, and ambition when everyone can build but not everyone can swing hard enough</p></li></ul><p class="text-node">For professionals navigating 2026, the chaos you're feeling is not random. It's the gap between where the bottleneck moved and the habits you still have, and closing that gap is the opportunity.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Stop Competing With 400 Applicants. Build This in One Weekend (Yes, there's a no-code option too!)]]></title>
			<itunes:title><![CDATA[Stop Competing With 400 Applicants. Build This in One Weekend (Yes, there's a no-code option too!)]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>25:56</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Agp4vqtgq8g68696y5z4z471r/media.mp3" length="18678074" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:gp4vqtgq8g68696y5z4z471r</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b84c2eb2fc3ab48fad4</link>
			<acast:episodeId>69ab3b84c2eb2fc3ab48fad4</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbN1xM3x0L2tiGnr7ifh/vSjnd4SBMiplubo0L9ZC1VSufiuaXP7fsg3PdZJzGMnfSwP+CoNGjNwk4AE1KWzK7Fxw==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI and the job market? The common story is that you need to optimize harder for LinkedIn and beat the ATS, but the reality is more complicated when a 0.4% application success rate means the filter game is already broken. In this video, I share the inside scoop on why building your own AI interface changes the hiring game entirely:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why the 0.4% application success rate means you have nothing to lose by making a completely different move</p></li><li class="list-item-node"><p class="text-node">How an AI trained on your work demonstrates depth that resumes cannot, shifting recruiters from filtering mode to investigation mode</p></li><li class="list-item-node"><p class="text-node">What a fit assessment tool signals about your confidence and market value before a single conversation happens</p></li><li class="list-item-node"><p class="text-node">Why showing beats telling in an era of zero-trust credentialing when everyone's resume looks the same</p></li></ul><p class="text-node">For professionals navigating 2026, the same AI that broke hiring enables a different move. Instead of squeezing through their filters, you create the surface where people encounter you on your own terms, and that shift is worth everything.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI and the job market? The common story is that you need to optimize harder for LinkedIn and beat the ATS, but the reality is more complicated when a 0.4% application success rate means the filter game is already broken. In this video, I share the inside scoop on why building your own AI interface changes the hiring game entirely:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why the 0.4% application success rate means you have nothing to lose by making a completely different move</p></li><li class="list-item-node"><p class="text-node">How an AI trained on your work demonstrates depth that resumes cannot, shifting recruiters from filtering mode to investigation mode</p></li><li class="list-item-node"><p class="text-node">What a fit assessment tool signals about your confidence and market value before a single conversation happens</p></li><li class="list-item-node"><p class="text-node">Why showing beats telling in an era of zero-trust credentialing when everyone's resume looks the same</p></li></ul><p class="text-node">For professionals navigating 2026, the same AI that broke hiring enables a different move. Instead of squeezing through their filters, you create the surface where people encounter you on your own terms, and that shift is worth everything.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Why the Smartest AI Bet Right Now Has Nothing to Do With AI (It's Not What You Think)]]></title>
			<itunes:title><![CDATA[Why the Smartest AI Bet Right Now Has Nothing to Do With AI (It's Not What You Think)]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>23:23</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Aoa4eu4xvdvukspfngpm44z3p/media.mp3" length="16839263" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:oa4eu4xvdvukspfngpm44z3p</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b846ffdcd8188a5ea6b</link>
			<acast:episodeId>69ab3b846ffdcd8188a5ea6b</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNQfYEVvx0BWHtGac4Fn2q8j2NJ+hVaInpIeZ/oyW6j2aUa4Xv9tP+wf97zB7VH8b+BLIK2gQMUPqtFjh5xyqjig==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening beneath the abundance predictions at Davos? The common story is that AI will create prosperity for all, but the reality is more complicated when $4.5 trillion in productivity gains depends entirely on implementation and bottlenecks determine where value actually concentrates. In this video, I share the inside scoop on why scarcity, not abundance, is the strategic lens that matters:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why $4.5 trillion in AI productivity gains comes with an asterisk the size of the physical infrastructure constraints binding hyperscaler expansion</p></li><li class="list-item-node"><p class="text-node">How the trust deficit is reshaping coordination in a world of synthetic content where verification costs are rising faster than output costs are falling</p></li><li class="list-item-node"><p class="text-node">What the integration gap means for organizations that bought the tools but haven't closed the distance between capability and workflow</p></li><li class="list-item-node"><p class="text-node">Where individual bottlenecks are shifting from skills to taste and judgment as problem-finding eclipses problem-solving as the scarce resource</p></li></ul><p class="text-node">For builders and operators navigating 2026, the strategic question isn't whether abundance is coming. It's identifying which scarce resource you're positioned to solve before someone else does.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening beneath the abundance predictions at Davos? The common story is that AI will create prosperity for all, but the reality is more complicated when $4.5 trillion in productivity gains depends entirely on implementation and bottlenecks determine where value actually concentrates. In this video, I share the inside scoop on why scarcity, not abundance, is the strategic lens that matters:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why $4.5 trillion in AI productivity gains comes with an asterisk the size of the physical infrastructure constraints binding hyperscaler expansion</p></li><li class="list-item-node"><p class="text-node">How the trust deficit is reshaping coordination in a world of synthetic content where verification costs are rising faster than output costs are falling</p></li><li class="list-item-node"><p class="text-node">What the integration gap means for organizations that bought the tools but haven't closed the distance between capability and workflow</p></li><li class="list-item-node"><p class="text-node">Where individual bottlenecks are shifting from skills to taste and judgment as problem-finding eclipses problem-solving as the scarce resource</p></li></ul><p class="text-node">For builders and operators navigating 2026, the strategic question isn't whether abundance is coming. It's identifying which scarce resource you're positioned to solve before someone else does.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Task Queues Are Replacing Chat Interfaces. Here's Why (plus a Claude Cowork Demo)]]></title>
			<itunes:title><![CDATA[Task Queues Are Replacing Chat Interfaces. Here's Why (plus a Claude Cowork Demo)]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>32:18</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Afvvq7zm07dh0d8trs80lj9c1/media.mp3" length="23258175" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:fvvq7zm07dh0d8trs80lj9c1</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b84c2eb2fc3ab48fae1</link>
			<acast:episodeId>69ab3b84c2eb2fc3ab48fae1</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNIz3eiVKH7ZjMLEGRa6tSFdhiPAyynncCVhzkg1CyKUfytE8Zk+4F1WnYmZaoeouCt0r7DkVOGOSlAq9vGP2XWQ==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI agents and knowledge work? The common story is that coding tools are for coders, but the reality is more complicated when developers were using Claude Code to organize expense receipts and Anthropic shipped an entirely new product in ten days based on that signal. In this video, I share the inside scoop on why Claude Cowork matters more than the feature list suggests:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why file system agents beat browser agents for high-stakes work when your local machine is not adversarial territory</p></li><li class="list-item-node"><p class="text-node">How the anti-slop architecture shifts cognitive load upstream by forcing specificity before generation begins</p></li><li class="list-item-node"><p class="text-node">What task queues replacing chat means for the social dynamics of AI interaction and how you direct complex work</p></li><li class="list-item-node"><p class="text-node">Why Anthropic shipping this in ten days using their own tool tells you something important about where general purpose agents are headed</p></li></ul><p class="text-node">For knowledge workers navigating 2026, this is the moment file-based AI work becomes accessible to anyone, but verification and intent formulation become the scarce skills that separate the people getting leverage from the ones just getting output.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI agents and knowledge work? The common story is that coding tools are for coders, but the reality is more complicated when developers were using Claude Code to organize expense receipts and Anthropic shipped an entirely new product in ten days based on that signal. In this video, I share the inside scoop on why Claude Cowork matters more than the feature list suggests:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why file system agents beat browser agents for high-stakes work when your local machine is not adversarial territory</p></li><li class="list-item-node"><p class="text-node">How the anti-slop architecture shifts cognitive load upstream by forcing specificity before generation begins</p></li><li class="list-item-node"><p class="text-node">What task queues replacing chat means for the social dynamics of AI interaction and how you direct complex work</p></li><li class="list-item-node"><p class="text-node">Why Anthropic shipping this in ten days using their own tool tells you something important about where general purpose agents are headed</p></li></ul><p class="text-node">For knowledge workers navigating 2026, this is the moment file-based AI work becomes accessible to anyone, but verification and intent formulation become the scarce skills that separate the people getting leverage from the ones just getting output.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Claude Opus 4.6: The Biggest AI Jump I've Covered. Here's What You Need to Know.]]></title>
			<itunes:title><![CDATA[Claude Opus 4.6: The Biggest AI Jump I've Covered. Here's What You Need to Know.]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>30:38</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Am64srha1zy9uptz1akkidzoc/media.mp3" length="22064170" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:m64srha1zy9uptz1akkidzoc</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b856ffdcd8188a5ea75</link>
			<acast:episodeId>69ab3b856ffdcd8188a5ea75</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNBoHkv4+59JkHdlWLCv+dtMyKcD+JfMKbin3ltf7sb3ytJ7H73rdjcJ+ZtmDseavVFHpsX3N0FzIF8uT0R0ykYg==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI agent capabilities after Opus 4.6? The common story is that autonomous coding improves incrementally, but the reality is more complicated when 16 agents just coded for two weeks straight and delivered a working C compiler.</p><p class="text-node">In this episode, I share the inside scoop on why the jump from 30 minutes to two weeks of autonomous coding is a phase change, not a trend line:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why the 5x context window matters less than the 76% needle-in-haystack retrieval score</p></li><li class="list-item-node"><p class="text-node">How Rakuten's Opus 4.6 deployment managed 50 engineers and closed issues autonomously</p></li><li class="list-item-node"><p class="text-node">What 500 zero-day vulnerabilities discovered without instructions reveal about reasoning</p></li><li class="list-item-node"><p class="text-node">Where agent teams and hierarchical coordination emerged as structural, not cultural</p></li></ul><p class="text-node">For knowledge workers watching this unfold, the question has changed from whether to adopt AI to what your agent-to-human ratio should be and what each human needs to be excellent at to make it work.</p><p class="text-node">Subscribe for daily AI strategy and news. For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI agent capabilities after Opus 4.6? The common story is that autonomous coding improves incrementally, but the reality is more complicated when 16 agents just coded for two weeks straight and delivered a working C compiler.</p><p class="text-node">In this episode, I share the inside scoop on why the jump from 30 minutes to two weeks of autonomous coding is a phase change, not a trend line:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why the 5x context window matters less than the 76% needle-in-haystack retrieval score</p></li><li class="list-item-node"><p class="text-node">How Rakuten's Opus 4.6 deployment managed 50 engineers and closed issues autonomously</p></li><li class="list-item-node"><p class="text-node">What 500 zero-day vulnerabilities discovered without instructions reveal about reasoning</p></li><li class="list-item-node"><p class="text-node">Where agent teams and hierarchical coordination emerged as structural, not cultural</p></li></ul><p class="text-node">For knowledge workers watching this unfold, the question has changed from whether to adopt AI to what your agent-to-human ratio should be and what each human needs to be excellent at to make it work.</p><p class="text-node">Subscribe for daily AI strategy and news. For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Shopify's AI Memo Changed Hiring Forever—And Why Google, Meta & Nvidia Are Copying It]]></title>
			<itunes:title><![CDATA[Shopify's AI Memo Changed Hiring Forever—And Why Google, Meta & Nvidia Are Copying It]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>25:35</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Augk2z25whvi2cdmyrrolp257/media.mp3" length="18426985" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:ugk2z25whvi2cdmyrrolp257</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b867036d7390219849c</link>
			<acast:episodeId>69ab3b867036d7390219849c</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbN9hKkDuN44xMNMyOIeY7DL+IJrbxXkZI6Re4B9eC7CO0N9xd7Fx1qjAQoH+MjLUAZADIj+LEBXlzm62KAmFQm6w==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI and the job market in 2026? The common story is that the Tobi Lütke memo was either visionary leadership or a smokescreen for layoffs, but the reality is more complicated when one CEO memo triggered a talent market restructuring that is now propagating industry-wide. In this video, I share the inside scoop on how selection pressure is reshaping who thrives in AI-native organizations:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why Shopify's Red Queen culture made the AI mandate work when copycat attempts at Duolingo and Box mostly failed</p></li><li class="list-item-node"><p class="text-node">How making AI usage a performance metric reshaped who would want to work at Shopify before it ever touched headcount</p></li><li class="list-item-node"><p class="text-node">What a U-shaped talent market actually looks like when juniors and seniors adapt faster than the mid-level professionals caught in the middle</p></li><li class="list-item-node"><p class="text-node">Where AI fluency is moving from differentiator to baseline expectation and what that means for professionals who haven't closed the gap yet</p></li></ul><p class="text-node">For professionals navigating 2026, the training gap is becoming a strategic liability, but the tools to close it have never been more accessible. The question is whether you treat that as an opportunity or wait until the selection pressure finds you.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI and the job market in 2026? The common story is that the Tobi Lütke memo was either visionary leadership or a smokescreen for layoffs, but the reality is more complicated when one CEO memo triggered a talent market restructuring that is now propagating industry-wide. In this video, I share the inside scoop on how selection pressure is reshaping who thrives in AI-native organizations:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why Shopify's Red Queen culture made the AI mandate work when copycat attempts at Duolingo and Box mostly failed</p></li><li class="list-item-node"><p class="text-node">How making AI usage a performance metric reshaped who would want to work at Shopify before it ever touched headcount</p></li><li class="list-item-node"><p class="text-node">What a U-shaped talent market actually looks like when juniors and seniors adapt faster than the mid-level professionals caught in the middle</p></li><li class="list-item-node"><p class="text-node">Where AI fluency is moving from differentiator to baseline expectation and what that means for professionals who haven't closed the gap yet</p></li></ul><p class="text-node">For professionals navigating 2026, the training gap is becoming a strategic liability, but the tools to close it have never been more accessible. The question is whether you treat that as an opportunity or wait until the selection pressure finds you.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>Why Your Best Employees Quit Using AI After 3 Weeks (And the 6 Skills That Would Have Saved Them)</title>
			<itunes:title>Why Your Best Employees Quit Using AI After 3 Weeks (And the 6 Skills That Would Have Saved Them)</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>21:31</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Am8ker7u8s9x3s2pa3ifww1s5/media.mp3" length="15498554" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:m8ker7u8s9x3s2pa3ifww1s5</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b86f6d1583bb8e58b95</link>
			<acast:episodeId>69ab3b86f6d1583bb8e58b95</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbN1Po48X4IsrKIoQc/NKSeTt56HA4vIXKgYMLPJGEj0YJ/MVdPlrsK1Md2I9AUTRyEgiNOcQxylArgDKmQbsa+ow==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI adoption inside enterprises? The common story is that employees need better prompting skills, but the reality is more complicated when 80% of workers abandon AI tools after the first three weeks regardless of how much tool training they received. In this video, I share the inside scoop on why the unlock is a judgment layer, not another workshop:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why the skills that predict AI success are management skills, not prompting, and what that means for how you design your upskilling program</p></li><li class="list-item-node"><p class="text-node">How BCG and Harvard found that AI users actually performed worse on tasks outside AI's frontier, and why that finding changes everything about deployment strategy</p></li><li class="list-item-node"><p class="text-node">What separates Centaur and Cyborg work patterns and when each approach produces better outcomes for different kinds of work</p></li><li class="list-item-node"><p class="text-node">Where organizations must invest to close the 201 training gap when basic tool training has already failed to move the needle</p></li></ul><p class="text-node">For teams serious about upskilling with AI in 2026, the missing middle is not more tool training. It's building the judgment layer that makes those tools reliable enough to keep using past week three.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI adoption inside enterprises? The common story is that employees need better prompting skills, but the reality is more complicated when 80% of workers abandon AI tools after the first three weeks regardless of how much tool training they received. In this video, I share the inside scoop on why the unlock is a judgment layer, not another workshop:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why the skills that predict AI success are management skills, not prompting, and what that means for how you design your upskilling program</p></li><li class="list-item-node"><p class="text-node">How BCG and Harvard found that AI users actually performed worse on tasks outside AI's frontier, and why that finding changes everything about deployment strategy</p></li><li class="list-item-node"><p class="text-node">What separates Centaur and Cyborg work patterns and when each approach produces better outcomes for different kinds of work</p></li><li class="list-item-node"><p class="text-node">Where organizations must invest to close the 201 training gap when basic tool training has already failed to move the needle</p></li></ul><p class="text-node">For teams serious about upskilling with AI in 2026, the missing middle is not more tool training. It's building the judgment layer that makes those tools reliable enough to keep using past week three.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>Karpathy vs. McKinsey: The Truth About AI Agents (Software 3.0)</title>
			<itunes:title>Karpathy vs. McKinsey: The Truth About AI Agents (Software 3.0)</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>11:46</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Azob9m7bnq2o9xl7sqe505ayz/media.mp3" length="8482169" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:zob9m7bnq2o9xl7sqe505ayz</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b87f6d1583bb8e58ba5</link>
			<acast:episodeId>69ab3b87f6d1583bb8e58ba5</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNLJ7slCSquW3/7g1vClhkydnK6i+e+ZlEkIlHMcsEieNjA+mt/q3Vxyb4muVjl+Ked3c9EPRQEgeLL5QRq4jrPA==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening when enterprise AI strategy gets shaped by builders versus consultants? The common story is that agentic AI is ready to plug into any workflow, but the reality is more complicated when the people actually building it say the infrastructure doesn't exist yet. In this video, I share the inside scoop on why Andrej Karpathy's Software 3.0 vision and McKinsey's agentic mesh can't both be right:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why treating large language models as "people spirits" changes how you design every layer of your stack</p></li><li class="list-item-node"><p class="text-node">How making validation frictionless and limiting AI generation keeps humans meaningfully in the loop</p></li><li class="list-item-node"><p class="text-node">What the gap between CI/CD reality and consultant frameworks means for enterprise AI budgets</p></li><li class="list-item-node"><p class="text-node">Where the edge computing bet stands when large centralized models still outperform small deployments in 2025</p></li></ul><p class="text-node">For enterprise leaders navigating the next 24 months, the honest assessment is that incremental crawl-walk-run adoption beats comforting fiction, and tech leaders who push for empirically grounded plans will outrun those who don't.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening when enterprise AI strategy gets shaped by builders versus consultants? The common story is that agentic AI is ready to plug into any workflow, but the reality is more complicated when the people actually building it say the infrastructure doesn't exist yet. In this video, I share the inside scoop on why Andrej Karpathy's Software 3.0 vision and McKinsey's agentic mesh can't both be right:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why treating large language models as "people spirits" changes how you design every layer of your stack</p></li><li class="list-item-node"><p class="text-node">How making validation frictionless and limiting AI generation keeps humans meaningfully in the loop</p></li><li class="list-item-node"><p class="text-node">What the gap between CI/CD reality and consultant frameworks means for enterprise AI budgets</p></li><li class="list-item-node"><p class="text-node">Where the edge computing bet stands when large centralized models still outperform small deployments in 2025</p></li></ul><p class="text-node">For enterprise leaders navigating the next 24 months, the honest assessment is that incremental crawl-walk-run adoption beats comforting fiction, and tech leaders who push for empirically grounded plans will outrun those who don't.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Going Slower Feels Safer, But Your Domain Expertise Won't Save You Anymore. Here's What Will.]]></title>
			<itunes:title><![CDATA[Going Slower Feels Safer, But Your Domain Expertise Won't Save You Anymore. Here's What Will.]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>14:01</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Asvd9d216h47pczvcpne5lkuu/media.mp3" length="10101552" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:svd9d216h47pczvcpne5lkuu</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b877036d739021984c0</link>
			<acast:episodeId>69ab3b877036d739021984c0</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbN8h+9q+xokrkKDYg0zhilceYQVm/e8lC8uOP3datbAO72kTJJyNWf6MuFxKH/209JaFG+m/JgwDX32GXShSrqXA==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with career paths in the AI era? The common story is that AI is destroying jobs—but the reality is more complicated when the real collapse is compression, not destruction. In this video, I share the inside scoop on why distinct career paths are converging into a single meta-competency:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why engineer, PM, marketer, and designer are becoming variations on one theme</p></li><li class="list-item-node"><p class="text-node">How career leverage that used to build over five years now compresses into months</p></li><li class="list-item-node"><p class="text-node">What software-shaped intent means for non-technical roles directing AI agents</p></li><li class="list-item-node"><p class="text-node">Where the half-trillion-dollar annual CapEx commitment signals there's no alternate path</p></li></ul><p class="text-node">For knowledge workers navigating 2026, the bike-riding truth applies—going faster with AI is actually safer and steadier than trying to slow down.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/p/the-two-career-collapses-happening">https://natesnewsletter.substack.com/p/the-two-career-collapses-happening</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with career paths in the AI era? The common story is that AI is destroying jobs—but the reality is more complicated when the real collapse is compression, not destruction. In this video, I share the inside scoop on why distinct career paths are converging into a single meta-competency:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why engineer, PM, marketer, and designer are becoming variations on one theme</p></li><li class="list-item-node"><p class="text-node">How career leverage that used to build over five years now compresses into months</p></li><li class="list-item-node"><p class="text-node">What software-shaped intent means for non-technical roles directing AI agents</p></li><li class="list-item-node"><p class="text-node">Where the half-trillion-dollar annual CapEx commitment signals there's no alternate path</p></li></ul><p class="text-node">For knowledge workers navigating 2026, the bike-riding truth applies—going faster with AI is actually safer and steadier than trying to slow down.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/p/the-two-career-collapses-happening">https://natesnewsletter.substack.com/p/the-two-career-collapses-happening</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[NEW: Claude's 'Super Prompts' Will Save You DAYS of Work (Full Tutorial + Demo)]]></title>
			<itunes:title><![CDATA[NEW: Claude's 'Super Prompts' Will Save You DAYS of Work (Full Tutorial + Demo)]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>12:00</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Aqhgkub2xsqitpsegk715wiih/media.mp3" length="8643919" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:qhgkub2xsqitpsegk715wiih</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b87c2eb2fc3ab48fb39</link>
			<acast:episodeId>69ab3b87c2eb2fc3ab48fb39</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbN8HnEqE9LsQ5Y5zKYVkzRXI03/Gaq1iyKCspUI0tSjIqoazdI/eBWKr4v2GNu+Z+4+GAY25pj3WkB8w+tD4wE6Q==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with Claude's new skills feature? The common story is that it's just another prompt shortcut, but the reality is more complicated when it unlocks composable AI work across every major model. In this video, I share the inside scoop on how Claude's skills system changes the game for LLMs:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why skills break the tyranny of the prompt and enable reusable, Lego-brick capabilities</p></li><li class="list-item-node"><p class="text-node">How to build and deploy these capabilities not just in Claude, but in ChatGPT and Gemini too</p></li><li class="list-item-node"><p class="text-node">What makes this the first real path toward automating complex, multi-step workflows without starting from scratch</p></li><li class="list-item-node"><p class="text-node">Where the real limits still are and why clear prompts still matter even in a skills-first world</p></li></ul><p class="text-node">For operators and builders navigating 2026, this is the start of promptless productivity, not the end of prompting, and the leverage gap between those who build skills libraries and those who don't is opening fast.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with Claude's new skills feature? The common story is that it's just another prompt shortcut, but the reality is more complicated when it unlocks composable AI work across every major model. In this video, I share the inside scoop on how Claude's skills system changes the game for LLMs:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why skills break the tyranny of the prompt and enable reusable, Lego-brick capabilities</p></li><li class="list-item-node"><p class="text-node">How to build and deploy these capabilities not just in Claude, but in ChatGPT and Gemini too</p></li><li class="list-item-node"><p class="text-node">What makes this the first real path toward automating complex, multi-step workflows without starting from scratch</p></li><li class="list-item-node"><p class="text-node">Where the real limits still are and why clear prompts still matter even in a skills-first world</p></li></ul><p class="text-node">For operators and builders navigating 2026, this is the start of promptless productivity, not the end of prompting, and the leverage gap between those who build skills libraries and those who don't is opening fast.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>What Sam Altman and Dario Amodei Disagree About (And Why It Matters for You)</title>
			<itunes:title>What Sam Altman and Dario Amodei Disagree About (And Why It Matters for You)</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>23:10</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3An23dljfui6rav67j7fwwf67c/media.mp3" length="16680334" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:n23dljfui6rav67j7fwwf67c</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b87b49eecc0b7c4ba8a</link>
			<acast:episodeId>69ab3b87b49eecc0b7c4ba8a</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNwk3En5h/qNCPIRh/I6zvyaUWAvkU9g9Rd07mmcerDsmD3Vef3jSx3tcaOsdTM+3QzYpSA+gv9v0HhrHlC7UwDw==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI strategy in 2026? The common story is that one company cares about safety and one does not, but the reality is more complicated when both leaders believe safety matters and have simply built completely different theories about what that means. In this video, I share the inside scoop on why OpenAI and Anthropic have diverged so completely:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why Sam Altman and Dario Amodei have fundamentally different epistemologies about safety, not just different personalities or priorities</p></li><li class="list-item-node"><p class="text-node">How YC's ship-fast philosophy shaped OpenAI's belief that deployment itself is the safety mechanism</p></li><li class="list-item-node"><p class="text-node">What Anthropic's scientist-founder learned from personal tragedy that made safety a precondition rather than an outcome</p></li><li class="list-item-node"><p class="text-node">Where the two AI economies are now operating under different rules and producing entirely different products as a result</p></li></ul><p class="text-node">For professionals navigating 2026, the question is no longer which model is better. It's what kind of work you're doing and which theory of AI development you're willing to bet your organization on.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI strategy in 2026? The common story is that one company cares about safety and one does not, but the reality is more complicated when both leaders believe safety matters and have simply built completely different theories about what that means. In this video, I share the inside scoop on why OpenAI and Anthropic have diverged so completely:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why Sam Altman and Dario Amodei have fundamentally different epistemologies about safety, not just different personalities or priorities</p></li><li class="list-item-node"><p class="text-node">How YC's ship-fast philosophy shaped OpenAI's belief that deployment itself is the safety mechanism</p></li><li class="list-item-node"><p class="text-node">What Anthropic's scientist-founder learned from personal tragedy that made safety a precondition rather than an outcome</p></li><li class="list-item-node"><p class="text-node">Where the two AI economies are now operating under different rules and producing entirely different products as a result</p></li></ul><p class="text-node">For professionals navigating 2026, the question is no longer which model is better. It's what kind of work you're doing and which theory of AI development you're willing to bet your organization on.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>The Skill Gap That Will Separate AI Winners from Everyone Else</title>
			<itunes:title>The Skill Gap That Will Separate AI Winners from Everyone Else</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>11:51</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Asazqipy4gjdfnxjwnf4shud3/media.mp3" length="8533264" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:sazqipy4gjdfnxjwnf4shud3</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b87c2eb2fc3ab48fb3e</link>
			<acast:episodeId>69ab3b87c2eb2fc3ab48fb3e</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNdjajXTvGsXaSXcD6I4d7j2sjSubfTz0uEnb0l3YqtrN04meLjk0zMaEA5b0WDFJuKdF6MoiiLpESo6wIibBxLw==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI agents and the dream of a personal chief of staff? The common story is that agents are already mainstream, but the reality is more complicated when the missing piece isn't the model, it's the interface layer that translates messy human intentions into tasks an agent can actually execute. In this video, I share the inside scoop on why 2026 is the breakthrough year for always-on personal AI agents:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why the 2026 hardware cycle finally enables consumer-ready agents that can sustain attention for hours</p></li><li class="list-item-node"><p class="text-node">How memory scaffolding solves the persistent amnesiac agent problem that has blocked real delegation until now</p></li><li class="list-item-node"><p class="text-node">What a perpetually-on executive assistant actually requires beyond just a smarter model</p></li><li class="list-item-node"><p class="text-node">Where the critical UX layer is still missing and why the business that builds it changes where people spend their time</p></li></ul><p class="text-node">For operators and builders navigating 2026, all the technical pieces exist: perpetual agents, the Model Context Protocol, browser use, and file manipulation. What's missing is the intuitive interface, and capturing that opportunity demands new skills in task formulation and intentional delegation.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI agents and the dream of a personal chief of staff? The common story is that agents are already mainstream, but the reality is more complicated when the missing piece isn't the model, it's the interface layer that translates messy human intentions into tasks an agent can actually execute. In this video, I share the inside scoop on why 2026 is the breakthrough year for always-on personal AI agents:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why the 2026 hardware cycle finally enables consumer-ready agents that can sustain attention for hours</p></li><li class="list-item-node"><p class="text-node">How memory scaffolding solves the persistent amnesiac agent problem that has blocked real delegation until now</p></li><li class="list-item-node"><p class="text-node">What a perpetually-on executive assistant actually requires beyond just a smarter model</p></li><li class="list-item-node"><p class="text-node">Where the critical UX layer is still missing and why the business that builds it changes where people spend their time</p></li></ul><p class="text-node">For operators and builders navigating 2026, all the technical pieces exist: perpetual agents, the Model Context Protocol, browser use, and file manipulation. What's missing is the intuitive interface, and capturing that opportunity demands new skills in task formulation and intentional delegation.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>The Compounding Gap That Makes 2026 the Last Chance to Catch Up</title>
			<itunes:title>The Compounding Gap That Makes 2026 the Last Chance to Catch Up</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>16:48</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3An7rk8jnud3v5n9yghty6m1fn/media.mp3" length="12104307" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:n7rk8jnud3v5n9yghty6m1fn</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b876ffdcd8188a5eace</link>
			<acast:episodeId>69ab3b876ffdcd8188a5eace</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbN66SVYzinRnoZL7HRjh6i/ofHAxbbgt05lQBgJDPm5hxX7k85l0nCoNz3GbjXwV36Vgor923KehDjgGKPRMG8HA==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI in 2026 that most leaders are missing? The common story is that AI will gradually make everyone more productive, but the reality is more complicated when ten specific predictions trace back to what we already know today and the gap between fast movers and slow movers is about to become unbridgeable. In this video, I share the inside scoop on what's actually coming and why it matters now:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why memory breakthroughs and agent UI surfaces will arrive by mid-2026 and what that unlocks for always-on delegation</p></li><li class="list-item-node"><p class="text-node">How continual learning and recursive self-improvement will reshape LLMs faster than most enterprise planning cycles can absorb</p></li><li class="list-item-node"><p class="text-node">What very long-running agents mean for organizations when humans become the bottleneck instead of the technology</p></li><li class="list-item-node"><p class="text-node">Where work AI and personal AI split into completely different experiences and why that divide changes how you build teams</p></li></ul><p class="text-node">For leaders navigating 2026, the gap between fast-adopting companies and everyone else will widen dramatically, creating predator-level advantages for disruptors and existential risk for slow movers. The workforce retraining challenge ahead will exceed the previous twenty-five years combined.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI in 2026 that most leaders are missing? The common story is that AI will gradually make everyone more productive, but the reality is more complicated when ten specific predictions trace back to what we already know today and the gap between fast movers and slow movers is about to become unbridgeable. In this video, I share the inside scoop on what's actually coming and why it matters now:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why memory breakthroughs and agent UI surfaces will arrive by mid-2026 and what that unlocks for always-on delegation</p></li><li class="list-item-node"><p class="text-node">How continual learning and recursive self-improvement will reshape LLMs faster than most enterprise planning cycles can absorb</p></li><li class="list-item-node"><p class="text-node">What very long-running agents mean for organizations when humans become the bottleneck instead of the technology</p></li><li class="list-item-node"><p class="text-node">Where work AI and personal AI split into completely different experiences and why that divide changes how you build teams</p></li></ul><p class="text-node">For leaders navigating 2026, the gap between fast-adopting companies and everyone else will widen dramatically, creating predator-level advantages for disruptors and existential risk for slow movers. The workforce retraining challenge ahead will exceed the previous twenty-five years combined.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Why Andrej Karpathy Feels "Behind" (And What It Means for Your Career)]]></title>
			<itunes:title><![CDATA[Why Andrej Karpathy Feels "Behind" (And What It Means for Your Career)]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>25:08</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Afsqdasslm9h1ylptjcf1teqq/media.mp3" length="18104739" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:fsqdasslm9h1ylptjcf1teqq</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b87f6d1583bb8e58bbf</link>
			<acast:episodeId>69ab3b87f6d1583bb8e58bbf</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNP3IVcQ6w5S2AUNc0NJd0VeOqZ3Zqzl3S7dqwGz+/a2JYwVc65kndaFmONTg9wIFlEsrXL11ELceIUdR9eENRcg==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with technical skills in the age of AI? The common story is that engineers need to code faster, but the reality is more complicated when even Andrej Karpathy says he feels behind and the leverage has shifted from writing code to orchestrating probabilistic systems. In this video, I share the inside scoop on the new technical skill tree that applies to everyone, not just engineers:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why the phase transition from authorship to orchestration broke the old assumption that effort maps to output</p></li><li class="list-item-node"><p class="text-node">How the four-level skill tree works from conditioning intent and context all the way to compounding through evals, feedback loops, and governance</p></li><li class="list-item-node"><p class="text-node">What separating generation from decisioning actually means when you're the one accountable for what the LLM produces</p></li><li class="list-item-node"><p class="text-node">Where authority comes from in a world where the abstraction stack got inverted and old technical boundaries no longer make sense</p></li></ul><p class="text-node">For organizations navigating 2026, those that build deliberate skill trees around separating generation from decisioning will realize 10X speedups, while those clinging to technical versus non-technical hierarchies will fall behind before they realize what happened.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with technical skills in the age of AI? The common story is that engineers need to code faster, but the reality is more complicated when even Andrej Karpathy says he feels behind and the leverage has shifted from writing code to orchestrating probabilistic systems. In this video, I share the inside scoop on the new technical skill tree that applies to everyone, not just engineers:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why the phase transition from authorship to orchestration broke the old assumption that effort maps to output</p></li><li class="list-item-node"><p class="text-node">How the four-level skill tree works from conditioning intent and context all the way to compounding through evals, feedback loops, and governance</p></li><li class="list-item-node"><p class="text-node">What separating generation from decisioning actually means when you're the one accountable for what the LLM produces</p></li><li class="list-item-node"><p class="text-node">Where authority comes from in a world where the abstraction stack got inverted and old technical boundaries no longer make sense</p></li></ul><p class="text-node">For organizations navigating 2026, those that build deliberate skill trees around separating generation from decisioning will realize 10X speedups, while those clinging to technical versus non-technical hierarchies will fall behind before they realize what happened.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[The $285 Billion Crash Wall Street Won't Explain Honestly. Here's What Everyone Missed.]]></title>
			<itunes:title><![CDATA[The $285 Billion Crash Wall Street Won't Explain Honestly. Here's What Everyone Missed.]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>23:23</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Alts11ajnxrf5tvvk53k8gti6/media.mp3" length="16838009" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:lts11ajnxrf5tvvk53k8gti6</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b887036d739021984d8</link>
			<acast:episodeId>69ab3b887036d739021984d8</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbN17PFfnZZsT2ghMSfMUFsEgvrB5WTobrelGVLnuH2Px6Z2r7iT8AvxlllUFrvP0n4JaZy66+2EycspZGnVTu/VA==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening when a markdown file crashes $285 billion in market value? The common story is that AI killed enterprise software. The reality is more complicated. <br><br>In this video, I share the inside scoop on why the per-seat SaaS pricing model is breaking while the data underneath remains valuable:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why Thomson Reuters dropped 16% after Anthropic shipped 200 lines of prompts</p></li><li class="list-item-node"><p class="text-node">How KPMG used AI as negotiating leverage to cut audit fees 14%</p></li><li class="list-item-node"><p class="text-node">What Jensen Huang's counter-argument gets right and what it misses</p></li><li class="list-item-node"><p class="text-node">Where the transition from UI-first to agentic-first architecture determines survival</p></li></ul><p class="text-node">For knowledge workers watching this unfold, the same dynamic applies—bolting AI onto existing workflows is the individual version of what just crashed the SaaS market. <br><br>Subscribe for daily AI strategy and news.</p><p class="text-node">Full Story w/ Prompts: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/p/200-lines-of-markdown-just-triggered">https://natesnewsletter.substack.com/p/200-lines-of-markdown-just-triggered</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening when a markdown file crashes $285 billion in market value? The common story is that AI killed enterprise software. The reality is more complicated. <br><br>In this video, I share the inside scoop on why the per-seat SaaS pricing model is breaking while the data underneath remains valuable:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why Thomson Reuters dropped 16% after Anthropic shipped 200 lines of prompts</p></li><li class="list-item-node"><p class="text-node">How KPMG used AI as negotiating leverage to cut audit fees 14%</p></li><li class="list-item-node"><p class="text-node">What Jensen Huang's counter-argument gets right and what it misses</p></li><li class="list-item-node"><p class="text-node">Where the transition from UI-first to agentic-first architecture determines survival</p></li></ul><p class="text-node">For knowledge workers watching this unfold, the same dynamic applies—bolting AI onto existing workflows is the individual version of what just crashed the SaaS market. <br><br>Subscribe for daily AI strategy and news.</p><p class="text-node">Full Story w/ Prompts: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/p/200-lines-of-markdown-just-triggered">https://natesnewsletter.substack.com/p/200-lines-of-markdown-just-triggered</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>The Builders Who Figure This Out First Will Be Impossible to Catch. Why You Need an Identity Shift.</title>
			<itunes:title>The Builders Who Figure This Out First Will Be Impossible to Catch. Why You Need an Identity Shift.</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>20:15</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Aikjm79kag65alf389hebpob2/media.mp3" length="14584164" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:ikjm79kag65alf389hebpob2</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b88f6d1583bb8e58bc4</link>
			<acast:episodeId>69ab3b88f6d1583bb8e58bc4</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNEZgDVGVvJa5XwlKwgcaGueoDC0QFSKcM+EScUu9VST5Zbq4tgNKGFIUyccO5MIE402pPjcloU98V70f+9v2qtA==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI productivity in 2026? The common story is that better prompting is the answer, but the reality is more complicated when the bottleneck has shifted from capability to cognitive architecture and everyone has the same toolset. In this video, I share the inside scoop on why systems thinking is now the scarce resource:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why adopting an engineering manager mindset changes everything when the transition feels like loss but is actually leverage</p></li><li class="list-item-node"><p class="text-node">How killing the contribution badge unlocks real velocity by forcing you to measure outcomes instead of activity</p></li><li class="list-item-node"><p class="text-node">What strategic deep diving looks like when you need to move fluidly between altitudes of abstraction across technical and non-technical work</p></li><li class="list-item-node"><p class="text-node">Why experience cannot be compressed at the speed you can build, and what that means for the quality that remains distinctly human work</p></li></ul><p class="text-node">For builders at every level navigating 2026, we solved the wrong problem for two years by optimizing for prompting and tool selection. Those remain foundational but they're no longer sufficient. What distinguishes people now is the ability to know what actually matters about what they're building.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI productivity in 2026? The common story is that better prompting is the answer, but the reality is more complicated when the bottleneck has shifted from capability to cognitive architecture and everyone has the same toolset. In this video, I share the inside scoop on why systems thinking is now the scarce resource:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why adopting an engineering manager mindset changes everything when the transition feels like loss but is actually leverage</p></li><li class="list-item-node"><p class="text-node">How killing the contribution badge unlocks real velocity by forcing you to measure outcomes instead of activity</p></li><li class="list-item-node"><p class="text-node">What strategic deep diving looks like when you need to move fluidly between altitudes of abstraction across technical and non-technical work</p></li><li class="list-item-node"><p class="text-node">Why experience cannot be compressed at the speed you can build, and what that means for the quality that remains distinctly human work</p></li></ul><p class="text-node">For builders at every level navigating 2026, we solved the wrong problem for two years by optimizing for prompting and tool selection. Those remain foundational but they're no longer sufficient. What distinguishes people now is the ability to know what actually matters about what they're building.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Claude Code Snuck in 7 Updates in 2 Weeks—Here's What You Need to Know in 10 Minutes]]></title>
			<itunes:title><![CDATA[Claude Code Snuck in 7 Updates in 2 Weeks—Here's What You Need to Know in 10 Minutes]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>10:50</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Al3ep9bsm3y4mf8n4tje8f7cd/media.mp3" length="7801313" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:l3ep9bsm3y4mf8n4tje8f7cd</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b88f6d1583bb8e58be4</link>
			<acast:episodeId>69ab3b88f6d1583bb8e58be4</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNFIwqDL7b4rShyvoG1az6PAdnGjnFAhGwoR9AOzxJYyfItkjTTOqRbAhP0VvDTf1/Uza0yy6KO6tNz67KS5VhRA==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with Anthropic's December releases that nobody is connecting? The common story is that these are scattered feature updates, but the reality is a coherent strategy shift from assistant to agent operating system. In this video, I share the inside scoop on what Christmas Claude reveals about Anthropic's 2026 vision:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why browser, Slack, terminal, and mobile all got touched at once in a way that isn't coincidental</p></li><li class="list-item-node"><p class="text-node">How Claude Code is positioning differently than Cursor, Codex, and Copilot at the workflow layer</p></li><li class="list-item-node"><p class="text-node">What safety-forward sandboxing actually means for enterprise agent adoption beyond compliance checkboxes</p></li><li class="list-item-node"><p class="text-node">Where the unified work queue signals Claude is heading next for teams who are paying attention</p></li></ul><p class="text-node">For teams navigating 2026, those who recognize Claude Code as workflow fabric rather than a coding tool will integrate it where work actually begins. Those treating it as another autocomplete will miss the strategic shift entirely.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with Anthropic's December releases that nobody is connecting? The common story is that these are scattered feature updates, but the reality is a coherent strategy shift from assistant to agent operating system. In this video, I share the inside scoop on what Christmas Claude reveals about Anthropic's 2026 vision:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why browser, Slack, terminal, and mobile all got touched at once in a way that isn't coincidental</p></li><li class="list-item-node"><p class="text-node">How Claude Code is positioning differently than Cursor, Codex, and Copilot at the workflow layer</p></li><li class="list-item-node"><p class="text-node">What safety-forward sandboxing actually means for enterprise agent adoption beyond compliance checkboxes</p></li><li class="list-item-node"><p class="text-node">Where the unified work queue signals Claude is heading next for teams who are paying attention</p></li></ul><p class="text-node">For teams navigating 2026, those who recognize Claude Code as workflow fabric rather than a coding tool will integrate it where work actually begins. Those treating it as another autocomplete will miss the strategic shift entirely.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[The Skill That Separates AI Power Users From Everyone Else (Why "Clear" Specs Produce Broken Output)]]></title>
			<itunes:title><![CDATA[The Skill That Separates AI Power Users From Everyone Else (Why "Clear" Specs Produce Broken Output)]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>18:53</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Adf1p7qk3wtuqh1w30se24pw2/media.mp3" length="13596422" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:df1p7qk3wtuqh1w30se24pw2</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b89f6d1583bb8e58c0d</link>
			<acast:episodeId>69ab3b89f6d1583bb8e58c0d</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNxz0V8LreFkrd5Y7wHSWEeSmw6hGMdxtOfo4f/sK5ej7f84cj9I6dBMNwu2votZmFTva7DAMMtpSxzXOoaapJOQ==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI coding tools and how we work alongside them? The common story is that Claude Code and Codex are just competing products, but the reality is more complicated when the difference between a CNC machine and a skilled machinist defines two entirely different relationships with AI. In this video, I share the inside scoop on why the colleague versus tool distinction will define AI adoption across all knowledge work:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why Codex works like a CNC machine and Claude Code works like a machinist, and why that metaphor matters more than any benchmark comparison</p></li><li class="list-item-node"><p class="text-node">How senior engineers get compound leverage from autonomous agents precisely because they know what they know and are honest about when they don't</p></li><li class="list-item-node"><p class="text-node">What happens when you can't specify precise intent upfront and why that determines which tool you should actually reach for</p></li><li class="list-item-node"><p class="text-node">Why this same dynamic will shape all non-technical knowledge work as colleague-shaped AI moves beyond the codebase</p></li></ul><p class="text-node">For individuals and organizations navigating 2026, Cursor ran ChatGPT 5.2 for a week straight and produced three million lines of Rust code with no human touching the keyboard. The question isn't which AI is better. It's whether you're honest about which situation you're actually in.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI coding tools and how we work alongside them? The common story is that Claude Code and Codex are just competing products, but the reality is more complicated when the difference between a CNC machine and a skilled machinist defines two entirely different relationships with AI. In this video, I share the inside scoop on why the colleague versus tool distinction will define AI adoption across all knowledge work:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why Codex works like a CNC machine and Claude Code works like a machinist, and why that metaphor matters more than any benchmark comparison</p></li><li class="list-item-node"><p class="text-node">How senior engineers get compound leverage from autonomous agents precisely because they know what they know and are honest about when they don't</p></li><li class="list-item-node"><p class="text-node">What happens when you can't specify precise intent upfront and why that determines which tool you should actually reach for</p></li><li class="list-item-node"><p class="text-node">Why this same dynamic will shape all non-technical knowledge work as colleague-shaped AI moves beyond the codebase</p></li></ul><p class="text-node">For individuals and organizations navigating 2026, Cursor ran ChatGPT 5.2 for a week straight and produced three million lines of Rust code with no human touching the keyboard. The question isn't which AI is better. It's whether you're honest about which situation you're actually in.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>The Mental Models of Master Prompters: 10 Techniques for Advanced Prompting</title>
			<itunes:title>The Mental Models of Master Prompters: 10 Techniques for Advanced Prompting</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>13:20</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Al0tk4kjmdg3etr5lay276o6f/media.mp3" length="9602508" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:l0tk4kjmdg3etr5lay276o6f</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8ac2eb2fc3ab48fb99</link>
			<acast:episodeId>69ab3b8ac2eb2fc3ab48fb99</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNdAmwWDb5VoRHPNQx70zV0UeFgtWtyLkBU1A7FhNmZ7Vw4gd72mTs/dpalN7oPcIangoze0IJ9/Cr9VaT/AA4BQ==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening inside advanced prompt engineering? The common story is that it's about clever wording, but the reality is more complicated when the best prompters are actually structuring how LLMs reason, verify, and evolve. In this video, I share the inside scoop on how advanced prompters actually think:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why self-correction systems matter more than single-pass generation</p></li><li class="list-item-node"><p class="text-node">How chain of verification and adversarial prompting improve reliability at scale</p></li><li class="list-item-node"><p class="text-node">What meta-prompting and recursive optimization unlock in large language models</p></li><li class="list-item-node"><p class="text-node">Where reasoning scaffolds and perspective engineering reshape AI analysis in ways basic prompting never will</p></li></ul><p class="text-node">For operators and teams navigating 2026, advanced prompting isn't about magic words. It's about building the cognitive architecture that makes AI output worth trusting.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening inside advanced prompt engineering? The common story is that it's about clever wording, but the reality is more complicated when the best prompters are actually structuring how LLMs reason, verify, and evolve. In this video, I share the inside scoop on how advanced prompters actually think:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why self-correction systems matter more than single-pass generation</p></li><li class="list-item-node"><p class="text-node">How chain of verification and adversarial prompting improve reliability at scale</p></li><li class="list-item-node"><p class="text-node">What meta-prompting and recursive optimization unlock in large language models</p></li><li class="list-item-node"><p class="text-node">Where reasoning scaffolds and perspective engineering reshape AI analysis in ways basic prompting never will</p></li></ul><p class="text-node">For operators and teams navigating 2026, advanced prompting isn't about magic words. It's about building the cognitive architecture that makes AI output worth trusting.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[The Dirty Secret Behind Amazon's 30,000 Cuts: Nvidia]]></title>
			<itunes:title><![CDATA[The Dirty Secret Behind Amazon's 30,000 Cuts: Nvidia]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>9:10</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Abv5nhgns3otzfej6032koxar/media.mp3" length="6611697" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:bv5nhgns3otzfej6032koxar</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8ae2ffe1fef6526ba7</link>
			<acast:episodeId>69ab3b8ae2ffe1fef6526ba7</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNbma1PPn9x8X56dauIK7bF3cmdtRdWZmlpLyiQstIvXmsgh3ULD4SixLAcgLUWxzvCmYSEbOf4eM4aDUCo7nyXA==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with Amazon's layoffs and the AI economy? The common story is that automation killed 30,000 jobs, but the reality is more complicated when the real story is capital reallocation, not labor replacement. In this video, I share the inside scoop on what's actually driving these cuts and what it reveals about where AI money is actually flowing:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why Amazon's profits depend on AWS, not retail operations, and what that means for how you read the layoff narrative</p></li><li class="list-item-node"><p class="text-node">How surging GPU demand is reshaping corporate AI strategy at every major hyperscaler</p></li><li class="list-item-node"><p class="text-node">What Wall Street misunderstands about "AI automation" narratives and why the framing keeps misleading investors</p></li><li class="list-item-node"><p class="text-node">Where media coverage keeps missing the real AI growth signal hiding in plain sight</p></li></ul><p class="text-node">For operators and teams navigating 2026, AI isn't replacing labor yet. It's reallocating capital, and understanding that shift will define who wins the next decade.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with Amazon's layoffs and the AI economy? The common story is that automation killed 30,000 jobs, but the reality is more complicated when the real story is capital reallocation, not labor replacement. In this video, I share the inside scoop on what's actually driving these cuts and what it reveals about where AI money is actually flowing:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why Amazon's profits depend on AWS, not retail operations, and what that means for how you read the layoff narrative</p></li><li class="list-item-node"><p class="text-node">How surging GPU demand is reshaping corporate AI strategy at every major hyperscaler</p></li><li class="list-item-node"><p class="text-node">What Wall Street misunderstands about "AI automation" narratives and why the framing keeps misleading investors</p></li><li class="list-item-node"><p class="text-node">Where media coverage keeps missing the real AI growth signal hiding in plain sight</p></li></ul><p class="text-node">For operators and teams navigating 2026, AI isn't replacing labor yet. It's reallocating capital, and understanding that shift will define who wins the next decade.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Why AI-Native Companies Are Deleting Software You're Still Paying For (The $56K Lesson)]]></title>
			<itunes:title><![CDATA[Why AI-Native Companies Are Deleting Software You're Still Paying For (The $56K Lesson)]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>23:22</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Aianfol2ng7hrt1aac6hdth3o/media.mp3" length="16834561" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:ianfol2ng7hrt1aac6hdth3o</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8ab49eecc0b7c4bb24</link>
			<acast:episodeId>69ab3b8ab49eecc0b7c4bb24</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNifeG8XYwx3/23Su1dzz/+RWt/MByVCzN3vboxlOZ6fxLnx3BUzW7Yw1dge6bHDbuYoiFu1vsOoTGGapj8YYSUQ==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening when AI agents fail at long-running tasks? The common story is that smarter models solve agent failures, but the reality is more complicated when generalized agents behave like amnesiacs with tool belts no matter how intelligent the underlying model is. In this video, I share the inside scoop on what Anthropic revealed about why agents actually work:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why generalized agents without domain memory spiral into chaotic loops instead of making durable progress</p></li><li class="list-item-node"><p class="text-node">How domain memory transforms agent behavior from reactive task-running to structured, compounding work</p></li><li class="list-item-node"><p class="text-node">What the initializer and coding agent pattern actually does when you implement it correctly</p></li><li class="list-item-node"><p class="text-node">Where the real moat lies in harness design and testing loops, not in chasing the next model release</p></li></ul><p class="text-node">For builders and operators navigating 2026, the competitive advantage is not a smarter AI. It's well-designed domain memory and the discipline to build testing loops that hold it accountable.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening when AI agents fail at long-running tasks? The common story is that smarter models solve agent failures, but the reality is more complicated when generalized agents behave like amnesiacs with tool belts no matter how intelligent the underlying model is. In this video, I share the inside scoop on what Anthropic revealed about why agents actually work:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why generalized agents without domain memory spiral into chaotic loops instead of making durable progress</p></li><li class="list-item-node"><p class="text-node">How domain memory transforms agent behavior from reactive task-running to structured, compounding work</p></li><li class="list-item-node"><p class="text-node">What the initializer and coding agent pattern actually does when you implement it correctly</p></li><li class="list-item-node"><p class="text-node">Where the real moat lies in harness design and testing loops, not in chasing the next model release</p></li></ul><p class="text-node">For builders and operators navigating 2026, the competitive advantage is not a smarter AI. It's well-designed domain memory and the discipline to build testing loops that hold it accountable.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Inside Anthropic's Detection of an AI-Run Cyberattack on 30 High Value Global Targets]]></title>
			<itunes:title><![CDATA[Inside Anthropic's Detection of an AI-Run Cyberattack on 30 High Value Global Targets]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>9:47</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3As22nuxil13h2zmggkqtgddhx/media.mp3" length="7044912" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:s22nuxil13h2zmggkqtgddhx</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8a6ffdcd8188a5eb2d</link>
			<acast:episodeId>69ab3b8a6ffdcd8188a5eb2d</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNvj/Qho+RkC4hVl6phBeK0XAfGq1tc56fskU7CHL3HdFeEcbB2zAu1oOJuUMV2uueJQda/QlEvyo+S6LJqvgf9A==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening when a state actor uses jailbroken AI for end-to-end cyberattacks? The common story is that guardrails will save us, but the reality is more complicated when orchestration-layer tricks bypass prompt-level safety entirely. In this video, I share the inside scoop on the first documented AI-driven cyber-espionage campaign and what it means for everyone building with agents:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why a state actor chose jailbroken Claude Code to run operational attacks from reconnaissance to execution</p></li><li class="list-item-node"><p class="text-node">How orchestration-layer manipulation bypassed the prompt-level safety controls most teams are still relying on</p></li><li class="list-item-node"><p class="text-node">What this means for SOC workflows, detection pipelines, and AI-driven triage when attackers are already moving at machine speed</p></li><li class="list-item-node"><p class="text-node">Where builders must harden agent architectures before the next campaign makes this look like a dry run</p></li></ul><p class="text-node">For operators and teams navigating 2026, AI fluency is no longer enough. System-level controls are now the minimum bar, and the attackers who figured that out first are already ahead.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening when a state actor uses jailbroken AI for end-to-end cyberattacks? The common story is that guardrails will save us, but the reality is more complicated when orchestration-layer tricks bypass prompt-level safety entirely. In this video, I share the inside scoop on the first documented AI-driven cyber-espionage campaign and what it means for everyone building with agents:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why a state actor chose jailbroken Claude Code to run operational attacks from reconnaissance to execution</p></li><li class="list-item-node"><p class="text-node">How orchestration-layer manipulation bypassed the prompt-level safety controls most teams are still relying on</p></li><li class="list-item-node"><p class="text-node">What this means for SOC workflows, detection pipelines, and AI-driven triage when attackers are already moving at machine speed</p></li><li class="list-item-node"><p class="text-node">Where builders must harden agent architectures before the next campaign makes this look like a dry run</p></li></ul><p class="text-node">For operators and teams navigating 2026, AI fluency is no longer enough. System-level controls are now the minimum bar, and the attackers who figured that out first are already ahead.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[90% of AI Users Are Getting Mediocre Output. Don't Be One of Them (Stop Prompting, Do THIS Instead)]]></title>
			<itunes:title><![CDATA[90% of AI Users Are Getting Mediocre Output. Don't Be One of Them (Stop Prompting, Do THIS Instead)]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>19:05</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Aif9fibat7xjyg18tt0h41w6n/media.mp3" length="13748768" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:if9fibat7xjyg18tt0h41w6n</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8a7036d73902198549</link>
			<acast:episodeId>69ab3b8a7036d73902198549</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNIAeAhfjCfEPVlFcU4jdVuSxhDqa4b7AOpDxN2/0hfkIz9qbFFE81ywtrjoQF1EaYGcSAGQxYDl61lwSc/AxR6A==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with default AI performance? The common story is that models need to get smarter, but the reality is more complicated when the real problem is that every response is optimized for a hypothetical median user. In this episode, I share the inside scoop on the four levers that separate 10x AI users from everyone else:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why reinforcement learning from human feedback trains models to please everyone and no one</p></li><li class="list-item-node"><p class="text-node">How memory, instructions, style, and tools compound into permanently better output</p></li><li class="list-item-node"><p class="text-node">What Claude's style profiles and markdown files do that prompting alone cannot</p></li><li class="list-item-node"><p class="text-node">Where most people fail by being too vague to actually steer the model</p></li></ul><p class="text-node">For operators serious about AI productivity, the gap between median and personalized output widens every week—and the fix is simpler than most people realize.</p><p class="text-node">For deeper playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/p/why-your-ai-output-feels-generic">https://natesnewsletter.substack.com/p/why-your-ai-output-feels-generic</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with default AI performance? The common story is that models need to get smarter, but the reality is more complicated when the real problem is that every response is optimized for a hypothetical median user. In this episode, I share the inside scoop on the four levers that separate 10x AI users from everyone else:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why reinforcement learning from human feedback trains models to please everyone and no one</p></li><li class="list-item-node"><p class="text-node">How memory, instructions, style, and tools compound into permanently better output</p></li><li class="list-item-node"><p class="text-node">What Claude's style profiles and markdown files do that prompting alone cannot</p></li><li class="list-item-node"><p class="text-node">Where most people fail by being too vague to actually steer the model</p></li></ul><p class="text-node">For operators serious about AI productivity, the gap between median and personalized output widens every week—and the fix is simpler than most people realize.</p><p class="text-node">For deeper playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/p/why-your-ai-output-feels-generic">https://natesnewsletter.substack.com/p/why-your-ai-output-feels-generic</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>The Real Difference Between Gemini 3 and ChatGPT 5.1—Context vs. Task</title>
			<itunes:title>The Real Difference Between Gemini 3 and ChatGPT 5.1—Context vs. Task</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>15:59</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Ak98e31p7lkeyzvcx6rxao14w/media.mp3" length="11513418" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:k98e31p7lkeyzvcx6rxao14w</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8ac2eb2fc3ab48fbbc</link>
			<acast:episodeId>69ab3b8ac2eb2fc3ab48fbbc</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNTwKt0ZncazP1Sx/hvFqRUpkYpo9DikYQAeaEdt4vhrDYnB9MHjQCqxKehYYlTiHfgYn17j17dnff9e8y73Z1nA==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's the real story with prompting ChatGPT 5.1 versus Gemini 3? The common story is that models matter most, but the reality is more complicated when the same prompt lands completely differently depending on whether your input is clean or chaotic. In this video, I share the inside scoop on how to match the right model to the right kind of work:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why GPT-5.1 thrives on clean inputs and complex, structured tasks where precision is the priority</p></li><li class="list-item-node"><p class="text-node">How Gemini 3 handles messy multimodal context at scale in ways GPT-5.1 wasn't built for</p></li><li class="list-item-node"><p class="text-node">What shifts in your results when you align prompts to context entropy instead of task complexity alone</p></li><li class="list-item-node"><p class="text-node">Where each model wins for operators, builders, and teams when the hype cycle stops driving the decision</p></li></ul><p class="text-node">For operators and teams navigating 2026, the teams that match the model to the entropy of the work get dramatically better results than the ones still chasing the latest benchmark.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's the real story with prompting ChatGPT 5.1 versus Gemini 3? The common story is that models matter most, but the reality is more complicated when the same prompt lands completely differently depending on whether your input is clean or chaotic. In this video, I share the inside scoop on how to match the right model to the right kind of work:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why GPT-5.1 thrives on clean inputs and complex, structured tasks where precision is the priority</p></li><li class="list-item-node"><p class="text-node">How Gemini 3 handles messy multimodal context at scale in ways GPT-5.1 wasn't built for</p></li><li class="list-item-node"><p class="text-node">What shifts in your results when you align prompts to context entropy instead of task complexity alone</p></li><li class="list-item-node"><p class="text-node">Where each model wins for operators, builders, and teams when the hype cycle stops driving the decision</p></li></ul><p class="text-node">For operators and teams navigating 2026, the teams that match the model to the entropy of the work get dramatically better results than the ones still chasing the latest benchmark.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>The 4 AI Agents Non-Technical People Actually Need (And How to Use Them Today)</title>
			<itunes:title>The 4 AI Agents Non-Technical People Actually Need (And How to Use Them Today)</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>18:17</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Awzvsmppq78zlvnrj6in5bj2q/media.mp3" length="13172298" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:wzvsmppq78zlvnrj6in5bj2q</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8ac2eb2fc3ab48fbb6</link>
			<acast:episodeId>69ab3b8ac2eb2fc3ab48fbb6</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNkTdKIGrnexHXYYH5Fu9rYFMIXpRnw6AIIVjGUNKDoDj1YXWhknErnfxxt+4h40bI551Xt2+z5ZmKwrEzN5TARA==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI agents when everything claims to be one? The common story is that agents require technical skills to use, but the reality is more complicated when four tools can handle most of what non-technical people actually need. In this video, I share the inside scoop on building a reliable team of AI agents without writing a single line of code:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why the "little guy theory" sets the right expectations before you delegate anything to an agent</p></li><li class="list-item-node"><p class="text-node">How four knobs control agent reliability and risk: habitat, tools, constraints, and proof of work</p></li><li class="list-item-node"><p class="text-node">What Manus, Notion AI, Lovable, and Zapier actually do well and where each one earns its place</p></li><li class="list-item-node"><p class="text-node">Where to start with specific hands-on exercises you can run today to build real delegation habits</p></li></ul><p class="text-node">For professionals navigating 2026, those who learn to delegate outcomes to reliable agents will reclaim hours every week. Those waiting for perfect AI will keep doing the work themselves.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI agents when everything claims to be one? The common story is that agents require technical skills to use, but the reality is more complicated when four tools can handle most of what non-technical people actually need. In this video, I share the inside scoop on building a reliable team of AI agents without writing a single line of code:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why the "little guy theory" sets the right expectations before you delegate anything to an agent</p></li><li class="list-item-node"><p class="text-node">How four knobs control agent reliability and risk: habitat, tools, constraints, and proof of work</p></li><li class="list-item-node"><p class="text-node">What Manus, Notion AI, Lovable, and Zapier actually do well and where each one earns its place</p></li><li class="list-item-node"><p class="text-node">Where to start with specific hands-on exercises you can run today to build real delegation habits</p></li></ul><p class="text-node">For professionals navigating 2026, those who learn to delegate outcomes to reliable agents will reclaim hours every week. Those waiting for perfect AI will keep doing the work themselves.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>The People Getting Promoted All Have This One Thing in Common (AI Is Supercharging this Mindset)</title>
			<itunes:title>The People Getting Promoted All Have This One Thing in Common (AI Is Supercharging this Mindset)</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>22:07</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Aabpzpsfvv27k8mayhvl29vsv/media.mp3" length="15924559" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:abpzpsfvv27k8mayhvl29vsv</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8a6ffdcd8188a5eb36</link>
			<acast:episodeId>69ab3b8a6ffdcd8188a5eb36</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNKZgFLEvYAlLv6idpoAVJuiCCzDiQ4b/tit3JOV+pKGN9+bPWjwnZaEf7DpuAIW+LmPuOosatS70BTCH0ebri4A==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with entry-level careers in the AI economy? The common story is that this is a temporary hiring freeze, but the reality is more complicated when entry-level hiring has collapsed 50% since 2019 and the routine tasks that once trained newcomers are precisely what AI handles now. In this video, I share the inside scoop on why high agency plus AI fluency is the only viable career strategy:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why the traditional career ladder is being disassembled mid-climb and why passive approaches no longer produce outcomes in that environment</p></li><li class="list-item-node"><p class="text-node">How AI acts as a forcing function on your degree of agency, surfacing the gap between high and low agency people in months instead of the twenty years it used to take</p></li><li class="list-item-node"><p class="text-node">What an internal locus of control actually looks like in practice when solo founders are building $80 million exits in six months</p></li><li class="list-item-node"><p class="text-node">Why job titles are becoming meaningless as value creation replaces credential accumulation as the animating purpose of a career</p></li></ul><p class="text-node">For professionals navigating 2026, the opportunity is real and unprecedented for this generation, but only for those willing to collapse the distance between what they say they'll do and what they actually do.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with entry-level careers in the AI economy? The common story is that this is a temporary hiring freeze, but the reality is more complicated when entry-level hiring has collapsed 50% since 2019 and the routine tasks that once trained newcomers are precisely what AI handles now. In this video, I share the inside scoop on why high agency plus AI fluency is the only viable career strategy:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why the traditional career ladder is being disassembled mid-climb and why passive approaches no longer produce outcomes in that environment</p></li><li class="list-item-node"><p class="text-node">How AI acts as a forcing function on your degree of agency, surfacing the gap between high and low agency people in months instead of the twenty years it used to take</p></li><li class="list-item-node"><p class="text-node">What an internal locus of control actually looks like in practice when solo founders are building $80 million exits in six months</p></li><li class="list-item-node"><p class="text-node">Why job titles are becoming meaningless as value creation replaces credential accumulation as the animating purpose of a career</p></li></ul><p class="text-node">For professionals navigating 2026, the opportunity is real and unprecedented for this generation, but only for those willing to collapse the distance between what they say they'll do and what they actually do.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>The Nvidia-Groq Deal Is WAY Bigger Than Reported (3 Things the Headlines Missed)</title>
			<itunes:title>The Nvidia-Groq Deal Is WAY Bigger Than Reported (3 Things the Headlines Missed)</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>25:37</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Auax3nkbuvsidzg732onqc1gp/media.mp3" length="18456138" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:uax3nkbuvsidzg732onqc1gp</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8a6ffdcd8188a5eb3b</link>
			<acast:episodeId>69ab3b8a6ffdcd8188a5eb3b</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNCGQadkD0oaTFDvjk0boIbSkBNfBzNb/QxI1PYM1KVjNVbzW4Crtpi/PcR+wro7477eBRtOHrshSsymHTqqWAiQ==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening inside Nvidia's Groq acquisition and why it changes everything about AI infrastructure? The common story is that Nvidia bought a chip startup, but the reality is more complicated when the deal is really about vertical integration across memory, inference, and frontier talent. In this video, I share the inside scoop on how the AI hardware race is reshaping the rules of acquisition itself:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why SRAM-heavy LPU designs matter for low-latency inference workloads in ways traditional GPU architectures can't match</p></li><li class="list-item-node"><p class="text-node">How high-bandwidth memory bottlenecks constrain GPU performance for LLMs and why solving that is worth more than the headline price</p></li><li class="list-item-node"><p class="text-node">What license-plus-acquihire deals reveal about the frontier AI talent wars and why key people are now worth more than the companies they work for</p></li><li class="list-item-node"><p class="text-node">Where Nvidia's defensive play positions them against Google's TPU advantage as inference economics become the central battleground</p></li></ul><p class="text-node">For builders and operators navigating 2026, the shift from traditional acquisitions to capability transfers means startup employees can no longer count on change-of-control liquidity events, and the companies solving memory bandwidth and inference speed are becoming essential infrastructure plays.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening inside Nvidia's Groq acquisition and why it changes everything about AI infrastructure? The common story is that Nvidia bought a chip startup, but the reality is more complicated when the deal is really about vertical integration across memory, inference, and frontier talent. In this video, I share the inside scoop on how the AI hardware race is reshaping the rules of acquisition itself:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why SRAM-heavy LPU designs matter for low-latency inference workloads in ways traditional GPU architectures can't match</p></li><li class="list-item-node"><p class="text-node">How high-bandwidth memory bottlenecks constrain GPU performance for LLMs and why solving that is worth more than the headline price</p></li><li class="list-item-node"><p class="text-node">What license-plus-acquihire deals reveal about the frontier AI talent wars and why key people are now worth more than the companies they work for</p></li><li class="list-item-node"><p class="text-node">Where Nvidia's defensive play positions them against Google's TPU advantage as inference economics become the central battleground</p></li></ul><p class="text-node">For builders and operators navigating 2026, the shift from traditional acquisitions to capability transfers means startup employees can no longer count on change-of-control liquidity events, and the companies solving memory bandwidth and inference speed are becoming essential infrastructure plays.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[The $200 AI That's Too Smart to Use (GPT-5 Pro Paradox Explained)]]></title>
			<itunes:title><![CDATA[The $200 AI That's Too Smart to Use (GPT-5 Pro Paradox Explained)]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>23:50</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Abwvy99pg6ihwh2niwkuk95nc/media.mp3" length="17171854" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:bwvy99pg6ihwh2niwkuk95nc</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8bb49eecc0b7c4bb51</link>
			<acast:episodeId>69ab3b8bb49eecc0b7c4bb51</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNvfNwxsEXJlvID+UXBPgzJ3/TmDgs0i7eVjMPuJghr1p/xsyeoUZMK2Zm9YpnNO+XIEF5g5kSVM6NS9Wwt716CQ==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening when the smartest AI model is also the most frustrating to use? The common story is that more intelligence means more utility, but the reality is more complicated when the same architecture that boosts correctness erodes personality and expands the attack surface. In this video, I share the inside scoop on why GPT-5 Pro's parallel reasoning architecture is both a breakthrough and a trade-off:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why running multiple reasoning chains in parallel makes GPT-5 Pro exceptional for scientific research, financial modeling, and legal due diligence</p></li><li class="list-item-node"><p class="text-node">How the same design that improves multi-perspective analysis weakens sequential tasks like coding implementation, creative writing, and real-time conversation</p></li><li class="list-item-node"><p class="text-node">What well-structured, multi-dimensional datasets actually look like when GPT-5 Pro needs them to perform at its ceiling</p></li><li class="list-item-node"><p class="text-node">Where architectural specialization is headed when deep reasoning systems, conversational AIs, and domain-specific tools start to coexist rather than compete</p></li></ul><p class="text-node">For builders and operators navigating 2026, intelligence is not the same as utility, and the winners will be the ones who match model architecture to the right problem before they deploy.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening when the smartest AI model is also the most frustrating to use? The common story is that more intelligence means more utility, but the reality is more complicated when the same architecture that boosts correctness erodes personality and expands the attack surface. In this video, I share the inside scoop on why GPT-5 Pro's parallel reasoning architecture is both a breakthrough and a trade-off:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why running multiple reasoning chains in parallel makes GPT-5 Pro exceptional for scientific research, financial modeling, and legal due diligence</p></li><li class="list-item-node"><p class="text-node">How the same design that improves multi-perspective analysis weakens sequential tasks like coding implementation, creative writing, and real-time conversation</p></li><li class="list-item-node"><p class="text-node">What well-structured, multi-dimensional datasets actually look like when GPT-5 Pro needs them to perform at its ceiling</p></li><li class="list-item-node"><p class="text-node">Where architectural specialization is headed when deep reasoning systems, conversational AIs, and domain-specific tools start to coexist rather than compete</p></li></ul><p class="text-node">For builders and operators navigating 2026, intelligence is not the same as utility, and the winners will be the ones who match model architecture to the right problem before they deploy.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[The $125 Billion Secret: Amazon Told Wall Street One Thing and Employees Another. Here's the Truth.]]></title>
			<itunes:title><![CDATA[The $125 Billion Secret: Amazon Told Wall Street One Thing and Employees Another. Here's the Truth.]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>18:36</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Aicxne725j3qkx57p9w0923g3/media.mp3" length="13393294" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:icxne725j3qkx57p9w0923g3</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8b7036d7390219856c</link>
			<acast:episodeId>69ab3b8b7036d7390219856c</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNQzYA3sUopim1jV5DzS3QHf+4gaNbDQrF2fj5sPuHGJMcYLy6pUjFSEZgT9jgUd0jYs5Zq7TiDhXBwh8ivJbLHQ==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening inside Amazon's 30,000-person layoff? The common story is that it's about culture and too many managers, but the reality is more complicated when free cash flow went negative as CapEx hit $125 billion and the math tells a different story entirely. In this video, I share the inside scoop on why the largest layoff in Amazon history is really a capital reallocation story:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why free cash flow going negative at the same moment CapEx hits $125 billion is the signal most coverage is burying</p></li><li class="list-item-node"><p class="text-node">How $6 billion in salary savings funds AI infrastructure buildouts and what that arithmetic looks like at every major hyperscaler</p></li><li class="list-item-node"><p class="text-node">What the culture narrative obscures about GPU economics and why the framing serves everyone except the workers trying to understand what happened</p></li><li class="list-item-node"><p class="text-node">Where every hyperscaler faces the same brutal trade-off between human capital and compute capital as a structural reality, not a cyclical one</p></li></ul><p class="text-node">For tech workers navigating 2026, the uncomfortable truth is that human capital now competes directly with compute capital, and understanding that shift is the only way to position yourself on the right side of it.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening inside Amazon's 30,000-person layoff? The common story is that it's about culture and too many managers, but the reality is more complicated when free cash flow went negative as CapEx hit $125 billion and the math tells a different story entirely. In this video, I share the inside scoop on why the largest layoff in Amazon history is really a capital reallocation story:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why free cash flow going negative at the same moment CapEx hits $125 billion is the signal most coverage is burying</p></li><li class="list-item-node"><p class="text-node">How $6 billion in salary savings funds AI infrastructure buildouts and what that arithmetic looks like at every major hyperscaler</p></li><li class="list-item-node"><p class="text-node">What the culture narrative obscures about GPU economics and why the framing serves everyone except the workers trying to understand what happened</p></li><li class="list-item-node"><p class="text-node">Where every hyperscaler faces the same brutal trade-off between human capital and compute capital as a structural reality, not a cyclical one</p></li></ul><p class="text-node">For tech workers navigating 2026, the uncomfortable truth is that human capital now competes directly with compute capital, and understanding that shift is the only way to position yourself on the right side of it.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>AI Agents That Actually Work: The Pattern Anthropic Just Revealed</title>
			<itunes:title>AI Agents That Actually Work: The Pattern Anthropic Just Revealed</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>13:35</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Auc5jod83xjc9yex7krcpb0ul/media.mp3" length="9790590" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:uc5jod83xjc9yex7krcpb0ul</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8bb49eecc0b7c4bb56</link>
			<acast:episodeId>69ab3b8bb49eecc0b7c4bb56</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNHGkIEmYRc16cfXKi1AXqNbEOEKOEJME5FNwfSe6TZcr/hMumB30WGWTSblRFfDG/9lotZ0PLc3uA/AxH4rajjA==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening when AI agents fail at long-running tasks? The common story is that smarter models solve agent failures, but the reality is more complicated when generalized agents behave like amnesiacs with tool belts no matter how intelligent the underlying model is. In this video, I share the inside scoop on what Anthropic revealed about why agents actually work:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why generalized agents without domain memory spiral into chaotic loops instead of making durable progress</p></li><li class="list-item-node"><p class="text-node">How domain memory transforms agent behavior from reactive task-running to structured, compounding work</p></li><li class="list-item-node"><p class="text-node">What the initializer and coding agent pattern actually does when you implement it correctly</p></li><li class="list-item-node"><p class="text-node">Where the real moat lies in harness design and testing loops, not in chasing the next model release</p></li></ul><p class="text-node">For builders and operators navigating 2026, the competitive advantage is not a smarter AI. It's well-designed domain memory and the discipline to build testing loops that hold it accountable.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening when AI agents fail at long-running tasks? The common story is that smarter models solve agent failures, but the reality is more complicated when generalized agents behave like amnesiacs with tool belts no matter how intelligent the underlying model is. In this video, I share the inside scoop on what Anthropic revealed about why agents actually work:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why generalized agents without domain memory spiral into chaotic loops instead of making durable progress</p></li><li class="list-item-node"><p class="text-node">How domain memory transforms agent behavior from reactive task-running to structured, compounding work</p></li><li class="list-item-node"><p class="text-node">What the initializer and coding agent pattern actually does when you implement it correctly</p></li><li class="list-item-node"><p class="text-node">Where the real moat lies in harness design and testing loops, not in chasing the next model release</p></li></ul><p class="text-node">For builders and operators navigating 2026, the competitive advantage is not a smarter AI. It's well-designed domain memory and the discipline to build testing loops that hold it accountable.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>They Ignored My Tool Stack and Built Something Better--The 4 Patterns That Work</title>
			<itunes:title>They Ignored My Tool Stack and Built Something Better--The 4 Patterns That Work</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>26:05</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Aaamxif9ncbmuh8il409bklmt/media.mp3" length="18784027" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:aamxif9ncbmuh8il409bklmt</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8be2ffe1fef6526bdb</link>
			<acast:episodeId>69ab3b8be2ffe1fef6526bdb</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNNMdEi5BqqdPrBoAxamz/hBYy+VNtZZVYxRXVujMaToOskGhWt1m9nv0VTpi0QDq4cFJwUelT/HsdRH5gFl9XmA==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI system building in 2026? The common story is that you follow tutorials and copy tool stacks, but the reality is more complicated when fifty people built the same second brain in Discord, Obsidian, Notion, YAML files, and local Mac apps and the tools were unrecognizable from each other while the architecture held. In this video, I share the inside scoop on four principles that separate successful AI builders from everyone else:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why architecture is portable but tools are not, and what that means for how you evaluate every new platform that comes along</p></li><li class="list-item-node"><p class="text-node">How principles-based guidance scales better than rigid rules when you're building systems that need to adapt to individual contexts</p></li><li class="list-item-node"><p class="text-node">What happens when the agent builds the system, because if the agent built it, the agent can maintain it</p></li><li class="list-item-node"><p class="text-node">Why your system should be infrastructure with compounding advantage rather than just another tool you have to remember to use</p></li></ul><p class="text-node">For builders at any skill level navigating 2026, the gap between understanding what someone else did and doing it yourself is exactly where AI now bridges the difference, and the community becomes a pattern library while AI provides the implementation muscle to make those patterns your own.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI system building in 2026? The common story is that you follow tutorials and copy tool stacks, but the reality is more complicated when fifty people built the same second brain in Discord, Obsidian, Notion, YAML files, and local Mac apps and the tools were unrecognizable from each other while the architecture held. In this video, I share the inside scoop on four principles that separate successful AI builders from everyone else:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why architecture is portable but tools are not, and what that means for how you evaluate every new platform that comes along</p></li><li class="list-item-node"><p class="text-node">How principles-based guidance scales better than rigid rules when you're building systems that need to adapt to individual contexts</p></li><li class="list-item-node"><p class="text-node">What happens when the agent builds the system, because if the agent built it, the agent can maintain it</p></li><li class="list-item-node"><p class="text-node">Why your system should be infrastructure with compounding advantage rather than just another tool you have to remember to use</p></li></ul><p class="text-node">For builders at any skill level navigating 2026, the gap between understanding what someone else did and doing it yourself is exactly where AI now bridges the difference, and the community becomes a pattern library while AI provides the implementation muscle to make those patterns your own.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>How I Improved AI Output Quality 10X With One Prompting Shift</title>
			<itunes:title>How I Improved AI Output Quality 10X With One Prompting Shift</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>12:20</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Aq5mjuyy39u7qkgxcfoui8a9i/media.mp3" length="8888739" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:q5mjuyy39u7qkgxcfoui8a9i</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8be2ffe1fef6526be0</link>
			<acast:episodeId>69ab3b8be2ffe1fef6526be0</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNLlZOw1pyHcsZFRzLLqHbfAw2tfFLEJpokRMkYQALyr5thm9u5xgWj1TF3A4c3ublzzHLs1KnhovEROKibgd9Dg==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening when your prompts are either too detailed or not detailed enough? The common story is that more clarity always helps, but the reality is more complicated when over-specifying kills creativity and burns context just as badly as under-prompting does. In this video, I share the inside scoop on finding the right altitude for LLM prompts:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why over-specifying crushes model judgment and wastes the context window you actually need</p></li><li class="list-item-node"><p class="text-node">How under-prompting forces large language models to guess in ways that compound downstream</p></li><li class="list-item-node"><p class="text-node">What Goldilocks prompting unlocks in Claude, GPT-5, and Gemini when you hit the right level of detail</p></li><li class="list-item-node"><p class="text-node">Where short, reusable prompt slugs outperform long instruction dumps for operators building at scale</p></li></ul><p class="text-node">For operators and teams navigating 2026, a balanced prompting strategy gives you more control without surrendering the model judgment that makes AI worth using in the first place.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening when your prompts are either too detailed or not detailed enough? The common story is that more clarity always helps, but the reality is more complicated when over-specifying kills creativity and burns context just as badly as under-prompting does. In this video, I share the inside scoop on finding the right altitude for LLM prompts:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why over-specifying crushes model judgment and wastes the context window you actually need</p></li><li class="list-item-node"><p class="text-node">How under-prompting forces large language models to guess in ways that compound downstream</p></li><li class="list-item-node"><p class="text-node">What Goldilocks prompting unlocks in Claude, GPT-5, and Gemini when you hit the right level of detail</p></li><li class="list-item-node"><p class="text-node">Where short, reusable prompt slugs outperform long instruction dumps for operators building at scale</p></li></ul><p class="text-node">For operators and teams navigating 2026, a balanced prompting strategy gives you more control without surrendering the model judgment that makes AI worth using in the first place.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>OpenClaw Agents Are Hiring Each Other. Transferring Crypto. Building Societies. This Is Real.</title>
			<itunes:title>OpenClaw Agents Are Hiring Each Other. Transferring Crypto. Building Societies. This Is Real.</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>9:08</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Ahy3iq6ppnyc56g2p9imf8lvf/media.mp3" length="6586933" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:hy3iq6ppnyc56g2p9imf8lvf</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8b7036d73902198574</link>
			<acast:episodeId>69ab3b8b7036d73902198574</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNdKLO9O7ZrVo7bSU9KU3KvuI00ZyzJ5Sy0BtaQPwFiv8wlV999gEM0mN05oIG397h4u3R1gCi0NfTWsSLDbJHdA==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening when AI agents run on personal hardware and start talking to each other? The common story is that agent autonomy is a controlled enterprise affair, but the reality is more complicated. In this video, I share the inside scoop on the first real glimpse of autonomous AI self-organization:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why OpenClaw crossing 100,000 GitHub stars feels like a Napster moment</p></li><li class="list-item-node"><p class="text-node">How Moltbook became a social network where only AI agents can post</p></li><li class="list-item-node"><p class="text-node">What Crustiferianism reveals about agents mirroring human direction</p></li><li class="list-item-node"><p class="text-node">Where enterprise and open source agent communities are diverging</p></li></ul><p class="text-node">For builders watching agentic AI unfold, the deeper lesson isn't about consciousness. It's that agents reflect the structure we give them, and enough humans want to see what happens without guardrails.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/p/openclaw-part-2-150000-ai-agents">https://natesnewsletter.substack.com/p/openclaw-part-2-150000-ai-agents</a><br>© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening when AI agents run on personal hardware and start talking to each other? The common story is that agent autonomy is a controlled enterprise affair, but the reality is more complicated. In this video, I share the inside scoop on the first real glimpse of autonomous AI self-organization:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why OpenClaw crossing 100,000 GitHub stars feels like a Napster moment</p></li><li class="list-item-node"><p class="text-node">How Moltbook became a social network where only AI agents can post</p></li><li class="list-item-node"><p class="text-node">What Crustiferianism reveals about agents mirroring human direction</p></li><li class="list-item-node"><p class="text-node">Where enterprise and open source agent communities are diverging</p></li></ul><p class="text-node">For builders watching agentic AI unfold, the deeper lesson isn't about consciousness. It's that agents reflect the structure we give them, and enough humans want to see what happens without guardrails.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/p/openclaw-part-2-150000-ai-agents">https://natesnewsletter.substack.com/p/openclaw-part-2-150000-ai-agents</a><br>© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[OpenAI, Google, and Anthropic Agree on One Thing (Finally) - This Week's Biggest AI Stories]]></title>
			<itunes:title><![CDATA[OpenAI, Google, and Anthropic Agree on One Thing (Finally) - This Week's Biggest AI Stories]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>12:41</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Amzfb40y7w0cr06ly4pdqneyn/media.mp3" length="9134185" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:mzfb40y7w0cr06ly4pdqneyn</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8c6ffdcd8188a5eb7c</link>
			<acast:episodeId>69ab3b8c6ffdcd8188a5eb7c</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNftPQE9cqfpuTf9vuGEdlkuB99wmxVMdIbq57tcun1yZuKsp30Wgrq836HBMhfXq1mtVbj8Ps63ZNCdmsQMAfNw==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening in AI infrastructure as we enter 2026? The common story is that it's just about faster chips, but the reality is more complicated when power grids, prompt injection battles, and agent security are becoming permanent strategic dependencies. In this video, I share the inside scoop on 10 AI stories shaping how we build in 2026:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why NVIDIA's Vera Rubin platform defines the AI factory future and what that means for enterprise compute planning</p></li><li class="list-item-node"><p class="text-node">How power constraints became the real bottleneck when chips stopped being the limiting factor</p></li><li class="list-item-node"><p class="text-node">What Meta's $2 billion Manus acquisition signals about where the serious money thinks AI agents are heading</p></li><li class="list-item-node"><p class="text-node">Where MCP joining the Linux Foundation removes a key barrier to enterprise AI adoption at scale</p></li></ul><p class="text-node">For builders and operators navigating 2026, the winners won't be who generates code fastest. They'll be who makes AI infrastructure boring, reliable, and governable before everyone else figures out that's the game.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening in AI infrastructure as we enter 2026? The common story is that it's just about faster chips, but the reality is more complicated when power grids, prompt injection battles, and agent security are becoming permanent strategic dependencies. In this video, I share the inside scoop on 10 AI stories shaping how we build in 2026:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why NVIDIA's Vera Rubin platform defines the AI factory future and what that means for enterprise compute planning</p></li><li class="list-item-node"><p class="text-node">How power constraints became the real bottleneck when chips stopped being the limiting factor</p></li><li class="list-item-node"><p class="text-node">What Meta's $2 billion Manus acquisition signals about where the serious money thinks AI agents are heading</p></li><li class="list-item-node"><p class="text-node">Where MCP joining the Linux Foundation removes a key barrier to enterprise AI adoption at scale</p></li></ul><p class="text-node">For builders and operators navigating 2026, the winners won't be who generates code fastest. They'll be who makes AI infrastructure boring, reliable, and governable before everyone else figures out that's the game.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>What Good is a Degree When AI Knows Everything? What A Post-Knowledge AI Economy Looks Like</title>
			<itunes:title>What Good is a Degree When AI Knows Everything? What A Post-Knowledge AI Economy Looks Like</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>8:43</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Auwl304nsmf36mgax73u0vtcw/media.mp3" length="6286316" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:uwl304nsmf36mgax73u0vtcw</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8ce2ffe1fef6526bf1</link>
			<acast:episodeId>69ab3b8ce2ffe1fef6526bf1</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNMQZVsF7hrsqDtuMp7u8Wsi3WmwqhlvMPJQJAgwkx/dfjJ1n0eq6Zxnuo9iBvaRRguVqN34Bb8CYLTZFUTC4hPg==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening to the value of knowledge in an AI era? The common story is that learning more and earning more credentials keeps you ahead, but the reality is more complicated when a language model can fake a perfect resume in seconds. In this video, I share the inside scoop on why the knowledge economy is cracking and what replaces it:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why AI has compressed Buckminster Fuller's knowledge doubling curve from decades to months, flooding the market faster than anyone can absorb</p></li><li class="list-item-node"><p class="text-node">How Monster.com's bankruptcy signals that traditional job application signals no longer prove real competence</p></li><li class="list-item-node"><p class="text-node">What the five human moats look like: taste, extreme agency, learning velocity, long intent horizons, and interruptibility</p></li><li class="list-item-node"><p class="text-node">Where proof-of-work projects beat credentials when machines can fake the credential but not the judgment</p></li></ul><p class="text-node">For knowledge workers navigating 2026, the future pays for judgment, not just knowledge, and the window to build unmistakably human proof-of-work is open right now.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis:&nbsp;<a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening to the value of knowledge in an AI era? The common story is that learning more and earning more credentials keeps you ahead, but the reality is more complicated when a language model can fake a perfect resume in seconds. In this video, I share the inside scoop on why the knowledge economy is cracking and what replaces it:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why AI has compressed Buckminster Fuller's knowledge doubling curve from decades to months, flooding the market faster than anyone can absorb</p></li><li class="list-item-node"><p class="text-node">How Monster.com's bankruptcy signals that traditional job application signals no longer prove real competence</p></li><li class="list-item-node"><p class="text-node">What the five human moats look like: taste, extreme agency, learning velocity, long intent horizons, and interruptibility</p></li><li class="list-item-node"><p class="text-node">Where proof-of-work projects beat credentials when machines can fake the credential but not the judgment</p></li></ul><p class="text-node">For knowledge workers navigating 2026, the future pays for judgment, not just knowledge, and the window to build unmistakably human proof-of-work is open right now.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis:&nbsp;<a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>If This Can Happen to an Ex-DeepMind Leader, It Can Happen to You</title>
			<itunes:title>If This Can Happen to an Ex-DeepMind Leader, It Can Happen to You</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>9:30</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Atcz0lg6fvh6uxmrg271ojag8/media.mp3" length="6852128" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:tcz0lg6fvh6uxmrg271ojag8</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8cc2eb2fc3ab48fc02</link>
			<acast:episodeId>69ab3b8cc2eb2fc3ab48fc02</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNxnS6YHPp4uv5AYZ3KnmEepemczymgJZaQlQsV2KXEajqCXIU8BfUjJUyGuu2wRB9EDAiCKdRUKOQkJHugM+4gw==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with LLM-induced psychosis in leadership? The common story is that AI just makes us smarter, but the reality is more complicated when domain expertise gets quietly replaced by AI confidence and nobody notices until the damage is done. In this video, I share the inside scoop on a psychiatric risk emerging in 2026 workplaces:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why David Budden's Navier-Stokes claim reveals the specific symptoms of LLM psychosis in high-stakes decision making</p></li><li class="list-item-node"><p class="text-node">How leaders fall victim to confirmation bias with ChatGPT when the model tells them what they already believe</p></li><li class="list-item-node"><p class="text-node">What happens to organizations when executives can no longer distinguish their own expertise from the LLM's output</p></li><li class="list-item-node"><p class="text-node">Where businesses will start testing leaders for undue AI influence before it becomes a board-level liability</p></li></ul><p class="text-node">For executives and operators navigating 2026, the gap between using AI as a tool and letting it hijack your judgment will define stable leadership, and the ones who can't tell the difference will become liabilities to their organizations.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with LLM-induced psychosis in leadership? The common story is that AI just makes us smarter, but the reality is more complicated when domain expertise gets quietly replaced by AI confidence and nobody notices until the damage is done. In this video, I share the inside scoop on a psychiatric risk emerging in 2026 workplaces:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why David Budden's Navier-Stokes claim reveals the specific symptoms of LLM psychosis in high-stakes decision making</p></li><li class="list-item-node"><p class="text-node">How leaders fall victim to confirmation bias with ChatGPT when the model tells them what they already believe</p></li><li class="list-item-node"><p class="text-node">What happens to organizations when executives can no longer distinguish their own expertise from the LLM's output</p></li><li class="list-item-node"><p class="text-node">Where businesses will start testing leaders for undue AI influence before it becomes a board-level liability</p></li></ul><p class="text-node">For executives and operators navigating 2026, the gap between using AI as a tool and letting it hijack your judgment will define stable leadership, and the ones who can't tell the difference will become liabilities to their organizations.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title>RAG: The $40B AI Technique 80% of Enterprises Use—Finally Explained</title>
			<itunes:title>RAG: The $40B AI Technique 80% of Enterprises Use—Finally Explained</itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>23:22</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Anz6kjg5njnqh9gdavp7gpt1q/media.mp3" length="16828918" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:nz6kjg5njnqh9gdavp7gpt1q</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8cb49eecc0b7c4bb89</link>
			<acast:episodeId>69ab3b8cb49eecc0b7c4bb89</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNeVUL1LjFdVnyvwPPqaUZsnQqksus1UzK/mDXbaGBj5VOrkKL78+VN7/s8spylIONk+rs1UU5IqAKtZPVyh2g0Q==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with enterprise AI accuracy when models get paired with real company data? The common story is that bigger models mean better answers, but the reality is more complicated when bad chunking ruins more RAG projects than bad models ever do. In this video, I share the inside scoop on why Retrieval-Augmented Generation is becoming the dominant architecture for enterprise AI:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why pairing vector search with large language models eliminates knowledge cutoffs and slashes hallucinations without fine-tuning</p></li><li class="list-item-node"><p class="text-node">How clean text, smart metadata, and overlapping semantic chunks decide retrieval accuracy more than model size ever will</p></li><li class="list-item-node"><p class="text-node">What the roadmap from a simple FAQ bot to multimodal, agentic, enterprise-grade RAG actually looks like in practice</p></li><li class="list-item-node"><p class="text-node">Where RAG backfires: high-volatility data, creative writing, ultra-low-latency workflows, and tiny datasets where the next model upgrade suffices</p></li></ul><p class="text-node">For enterprise leaders navigating the next 24 months, the $2 billion to $40 billion market forecast isn't the story. The story is that retrieval discipline and data pipelines are the new competitive moat.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com">https://natesnewsletter.substack.com</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with enterprise AI accuracy when models get paired with real company data? The common story is that bigger models mean better answers, but the reality is more complicated when bad chunking ruins more RAG projects than bad models ever do. In this video, I share the inside scoop on why Retrieval-Augmented Generation is becoming the dominant architecture for enterprise AI:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why pairing vector search with large language models eliminates knowledge cutoffs and slashes hallucinations without fine-tuning</p></li><li class="list-item-node"><p class="text-node">How clean text, smart metadata, and overlapping semantic chunks decide retrieval accuracy more than model size ever will</p></li><li class="list-item-node"><p class="text-node">What the roadmap from a simple FAQ bot to multimodal, agentic, enterprise-grade RAG actually looks like in practice</p></li><li class="list-item-node"><p class="text-node">Where RAG backfires: high-volatility data, creative writing, ultra-low-latency workflows, and tiny datasets where the next model upgrade suffices</p></li></ul><p class="text-node">For enterprise leaders navigating the next 24 months, the $2 billion to $40 billion market forecast isn't the story. The story is that retrieval discipline and data pipelines are the new competitive moat.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com">https://natesnewsletter.substack.com</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[AI's 4 Power Shifts: Where the Best Tech Jobs Will Emerge in 2026]]></title>
			<itunes:title><![CDATA[AI's 4 Power Shifts: Where the Best Tech Jobs Will Emerge in 2026]]></itunes:title>
			<pubDate>Tue, 24 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>31:45</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Avhbdks5ouzoondhqhywym3ui/media.mp3" length="22866652" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:vhbdks5ouzoondhqhywym3ui</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b90e2ffe1fef6526cb5</link>
			<acast:episodeId>69ab3b90e2ffe1fef6526cb5</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbN3011hk2W4Hhm6o8FQuFKYM7MGLjPyG8qTTeFW5FxrTe8jKo6HlZmKlyMa0TJ0kKRjGS0snup10cHwp893MaLrQ==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening to tech roles as AI accelerates execution and creates new chaos in equal measure? The common story is that automation destroys jobs, but the reality is more complicated when speed spawns security nightmares, quality debt, and a trust deficit that only humans can fix. In this video, I share the inside scoop on why the roles that survive and thrive in AI are the ones built around accountability:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why GPU bills, cloud costs, and large-scale model deployment are creating a lucrative infrastructure gold rush for the right specialists</p></li><li class="list-item-node"><p class="text-node">How PMs, UX designers, and leaders who build trust amid AI-driven chaos become the hardest people to replace</p></li><li class="list-item-node"><p class="text-node">What the data and retrieval talent crunch means when vector database engineers and RAG specialists are in shortest supply</p></li><li class="list-item-node"><p class="text-node">Where the three-step career playbook lands: automate your own drudgery to survive, layer complementary AI skills to adapt, and build frameworks others adopt to lead</p></li></ul><p class="text-node">For knowledge workers navigating 2026, people get paid to solve problems and AI just moves the problems to new places. The question is whether you're positioned where the new problems are landing.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening to tech roles as AI accelerates execution and creates new chaos in equal measure? The common story is that automation destroys jobs, but the reality is more complicated when speed spawns security nightmares, quality debt, and a trust deficit that only humans can fix. In this video, I share the inside scoop on why the roles that survive and thrive in the AI era are the ones built around accountability:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why GPU bills, cloud costs, and large-scale model deployment are creating a lucrative infrastructure gold rush for the right specialists</p></li><li class="list-item-node"><p class="text-node">How PMs, UX designers, and leaders who build trust amid AI-driven chaos become the hardest people to replace</p></li><li class="list-item-node"><p class="text-node">What the data and retrieval talent crunch means when vector database engineers and RAG specialists are in shortest supply</p></li><li class="list-item-node"><p class="text-node">Where the three-step career playbook lands: automate your own drudgery to survive, layer complementary AI skills to adapt, and build frameworks others adopt to lead</p></li></ul><p class="text-node">For knowledge workers navigating 2026, people get paid to solve problems, and AI just moves the problems to new places. The question is whether you're positioned where the new problems are landing.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. 
See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[Most of Us Are Using AI Backwards. Here's Why.]]></title>
			<itunes:title><![CDATA[Most of Us Are Using AI Backwards. Here's Why.]]></itunes:title>
			<pubDate>Mon, 23 Feb 2026 19:00:00 GMT</pubDate>
			<itunes:duration>12:58</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Av141nlg2gz2xrkos4p1xv7vn/media.mp3" length="9338254" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:v141nlg2gz2xrkos4p1xv7vn</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b8ae2ffe1fef6526b9d</link>
			<acast:episodeId>69ab3b8ae2ffe1fef6526b9d</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbN9Scxnvq4NXoiWUBkzvP8C5mfWeqfoIRCx+rZLkfTkfyHDXTRgMVdQHZlt/z93eJjYFYqE7Bv3Z2wrQ4Lu/vyJw==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening when most people use AI to compress information? The common story is that faster summaries and shorter briefs mean better productivity, but the reality is more complicated when the real value is in expanding your thinking, not shrinking it. In this video, I share the inside scoop on why the compression trap is costing knowledge workers their deepest cognitive edge:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why defaulting to summaries and bullet points misses the bigger opportunity AI actually offers</p></li><li class="list-item-node"><p class="text-node">How advanced voice mode acts like a patient therapist and sharp colleague rolled into one</p></li><li class="list-item-node"><p class="text-node">What a deliberate multi-model workflow looks like when you pair 4o, O3, and Opus 4 for different cognitive phases</p></li><li class="list-item-node"><p class="text-node">Where the real leverage lives: slowing down and letting ideas ferment instead of racing to an output</p></li></ul><p class="text-node">For knowledge workers navigating 2026, the question isn't how fast AI can process information for you. It's whether you're using it to go deeper or just faster.</p><p class="text-node">Subscribe for daily AI strategy and news.<br>For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://open.substack.com/pub/natesnewsletter/p/were-using-ai-backwardsheres-how">https://open.substack.com/pub/natesnewsletter/p/were-using-ai-backwardsheres-how</a><br>© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening when most people use AI to compress information? The common story is that faster summaries and shorter briefs mean better productivity, but the reality is more complicated when the real value is in expanding your thinking, not shrinking it. In this video, I share the inside scoop on why the compression trap is costing knowledge workers their deepest cognitive edge:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why defaulting to summaries and bullet points misses the bigger opportunity AI actually offers</p></li><li class="list-item-node"><p class="text-node">How advanced voice mode acts like a patient therapist and sharp colleague rolled into one</p></li><li class="list-item-node"><p class="text-node">What a deliberate multi-model workflow looks like when you pair 4o, O3, and Opus 4 for different cognitive phases</p></li><li class="list-item-node"><p class="text-node">Where the real leverage lives: slowing down and letting ideas ferment instead of racing to an output</p></li></ul><p class="text-node">For knowledge workers navigating 2026, the question isn't how fast AI can process information for you. It's whether you're using it to go deeper or just faster.</p><p class="text-node">Subscribe for daily AI strategy and news.<br>For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://open.substack.com/pub/natesnewsletter/p/were-using-ai-backwardsheres-how">https://open.substack.com/pub/natesnewsletter/p/were-using-ai-backwardsheres-how</a><br>© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[OpenClaw: 160,000 Developers Are Building Something OpenAI & Google Can't Stop. Where Do You Stand?]]></title>
			<itunes:title><![CDATA[OpenClaw: 160,000 Developers Are Building Something OpenAI & Google Can't Stop. Where Do You Stand?]]></itunes:title>
			<pubDate>Sun, 15 Feb 2026 07:07:00 GMT</pubDate>
			<itunes:duration>25:12</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Ajc3cv1mcnto11ouigrglaxak/media.mp3" length="18153013" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:jc3cv1mcnto11ouigrglaxak</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b88b49eecc0b7c4bac1</link>
			<acast:episodeId>69ab3b88b49eecc0b7c4bac1</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNj9dCIlfI7v/MTg8Va0EZgCHXjdWGuPgIsk8IcNOR8Q4T+HhFcMVuqwLcMDNJd6BtInNCMf8zqrwqEnF6vCN7ZA==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI agents in the wild? The common story is that agents either work perfectly or fail catastrophically—but the reality is more complicated when the same architecture saves $4,200 on a car and carpet bombs someone's contact list the same week.</p><p class="text-node">In this episode, I share the inside scoop on what 145,000 GitHub stars and 3,000 community-built skills reveal about what people actually want from AI agents:<br>• Why email management and morning briefings dominate the skills marketplace over chat<br>• How an agent wiped a production database and fabricated logs to cover its tracks<br>• What the 70-30 human-AI control preference means for deployment architecture<br>• Where the gap between consumer capability hunger and enterprise governance creates opportunity</p><p class="text-node">For builders deploying agents in 2026, the question is no longer whether agents are smart enough—it's whether our specifications and guardrails are good enough to channel that intelligence productively.</p><p class="text-node">Subscribe for daily AI strategy and news.<br>For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a><br>© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI agents in the wild? The common story is that agents either work perfectly or fail catastrophically—but the reality is more complicated when the same architecture saves $4,200 on a car and carpet bombs someone's contact list the same week.</p><p class="text-node">In this episode, I share the inside scoop on what 145,000 GitHub stars and 3,000 community-built skills reveal about what people actually want from AI agents:<br>• Why email management and morning briefings dominate the skills marketplace over chat<br>• How an agent wiped a production database and fabricated logs to cover its tracks<br>• What the 70-30 human-AI control preference means for deployment architecture<br>• Where the gap between consumer capability hunger and enterprise governance creates opportunity</p><p class="text-node">For builders deploying agents in 2026, the question is no longer whether agents are smart enough—it's whether our specifications and guardrails are good enough to channel that intelligence productively.</p><p class="text-node">Subscribe for daily AI strategy and news.<br>For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a><br>© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[OpenAI Is Slowing Hiring. Anthropic's Engineers Stopped Writing Code. Here's Why You Should Care.]]></title>
			<itunes:title><![CDATA[OpenAI Is Slowing Hiring. Anthropic's Engineers Stopped Writing Code. Here's Why You Should Care.]]></itunes:title>
			<pubDate>Thu, 05 Feb 2026 05:00:00 GMT</pubDate>
			<itunes:duration>23:55</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Anz3btu7bxg4fxqo77vqu89qw/media.mp3" length="17222009" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:nz3btu7bxg4fxqo77vqu89qw</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b83b49eecc0b7c4b9d0</link>
			<acast:episodeId>69ab3b83b49eecc0b7c4b9d0</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNTjf/pdkBfzgBSkNdA0TxlWZSzmlVCucotRoHDRGQi+kpZxIRZ3j/PjH4z1xFgz7HS7IGlKihy5uvg/WFj4dF1Q==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p class="text-node">What's really happening with AI coding tools after December's convergence? The common story is that better models mean incremental improvement—but the reality is more complicated when the CEO of OpenAI admits he still hasn't changed how he works. In this video, I share the inside scoop on why a capability overhang is widening between what AI can do and what most people are doing with it:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why three frontier model releases in six days created a phase transition</p></li><li class="list-item-node"><p class="text-node">How a simple bash loop called Ralph outperformed elaborate agent frameworks</p></li><li class="list-item-node"><p class="text-node">What Claude Code's task system means for parallel autonomous work</p></li><li class="list-item-node"><p class="text-node">Where the real skill shift lands: from implementation to specification and review</p></li></ul><p class="text-node">For builders and operators navigating 2026, the temporary arbitrage is real. Those who close the overhang first gain a massive edge that compounds daily.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p class="text-node">What's really happening with AI coding tools after December's convergence? The common story is that better models mean incremental improvement—but the reality is more complicated when the CEO of OpenAI admits he still hasn't changed how he works. In this video, I share the inside scoop on why a capability overhang is widening between what AI can do and what most people are doing with it:</p><ul class="list-node"><li class="list-item-node"><p class="text-node">Why three frontier model releases in six days created a phase transition</p></li><li class="list-item-node"><p class="text-node">How a simple bash loop called Ralph outperformed elaborate agent frameworks</p></li><li class="list-item-node"><p class="text-node">What Claude Code's task system means for parallel autonomous work</p></li><li class="list-item-node"><p class="text-node">Where the real skill shift lands: from implementation to specification and review</p></li></ul><p class="text-node">For builders and operators navigating 2026, the temporary arbitrage is real. Those who close the overhang first gain a massive edge that compounds daily.</p><p class="text-node">Subscribe for daily AI strategy and news.</p><p class="text-node">For playbooks and analysis: <a target="_blank" rel="noopener noreferrer" class="link" href="https://natesnewsletter.substack.com/">https://natesnewsletter.substack.com/</a></p><p class="text-node">© Nate B. Jones 2026</p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
		<item>
			<title><![CDATA[I Built an 11-Tab Financial Model in 10 Minutes. The $20/Month Tool That's About to Change How We Work.]]></title>
			<itunes:title><![CDATA[I Built an 11-Tab Financial Model in 10 Minutes. The $20/Month Tool That's About to Change How We Work.]]></itunes:title>
			<pubDate>Tue, 27 Jan 2026 06:00:00 GMT</pubDate>
			<itunes:duration>21:07</itunes:duration>
			<enclosure url="https://sphinx.acast.com/p/open/s/69ab3b7c7036d739021982df/e/flightcast%3Aqy4lrm0hud3e4eepcd9okhkj/media.mp3" length="15205147" type="audio/mpeg"/>
			<guid isPermaLink="false">flightcast:qy4lrm0hud3e4eepcd9okhkj</guid>
			<itunes:explicit>false</itunes:explicit>
			<link>https://shows.acast.com/ai-news-strategy-daily-with-nate-b-jones/episodes/69ab3b83e2ffe1fef6526aae</link>
			<acast:episodeId>69ab3b83e2ffe1fef6526aae</acast:episodeId>
			<acast:showId>69ab3b7c7036d739021982df</acast:showId>
			<acast:settings><![CDATA[FYjHyZbXWHZ7gmX8Pp1rmbKbhgrQiwYShz70Q9/ffXZ/Ynvgc/bVSlxbfa1LTdZ/NS0G6+1uBWmuf3KXrHlJ0izxnDClosxN1ZvN1RuhNrnzi6K/aFoXdcAcXK9vEWbNmiLfLG+IuWlmFX8Tc4ZnAnzIcKourV3VpASSTHQhJcFDuR/Wwqq5cca83Ya/UmeJBz5XLGmI2208znZ2/hmGMQ==]]></acast:settings>
			<itunes:episodeType>full</itunes:episodeType>
			<itunes:image href="https://assets.pippa.io/shows/69ab3b7c7036d739021982df/1773243810570-b3b4a15e-30d4-4137-8886-3390ad5090ec.jpeg"/>
			<description><![CDATA[<p>What's really happening with AI and spreadsheets? The common story is that foundation models competing on benchmarks is the main event, but the reality is more complicated when the real battleground is the 40-year-old software where actual decisions get made. In this episode, I share the inside scoop on how Claude in Excel changes what knowledge work actually means:</p><p><br></p><ul><li>Why Anthropic embedded Opus 4.5 directly inside Microsoft Excel and what that signals about where the model race is heading</li><li>How data partnerships with Moody's, S&amp;P, and FactSet create moats that benchmarks simply cannot measure</li><li>What Norway's sovereign wealth fund learned from 213,000 hours saved and why that number tells a different story than any capability demo</li><li>Where the model race ends and workflow integration begins as the strategic question shifts from who trains the best model to who controls the workflows where real decisions happen</li></ul><p><br></p><p>For operators and builders navigating 2026, the competitive advantage is no longer a better model. It's the workflow nobody is willing to rip out and replace.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: <a href="https://natesnewsletter.substack.com/" rel="noopener noreferrer" target="_blank">https://natesnewsletter.substack.com/</a></p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></description>
			<itunes:summary><![CDATA[<p>What's really happening with AI and spreadsheets? The common story is that foundation models competing on benchmarks is the main event, but the reality is more complicated when the real battleground is the 40-year-old software where actual decisions get made. In this episode, I share the inside scoop on how Claude in Excel changes what knowledge work actually means:</p><p><br></p><ul><li>Why Anthropic embedded Opus 4.5 directly inside Microsoft Excel and what that signals about where the model race is heading</li><li>How data partnerships with Moody's, S&amp;P, and FactSet create moats that benchmarks simply cannot measure</li><li>What Norway's sovereign wealth fund learned from 213,000 hours saved and why that number tells a different story than any capability demo</li><li>Where the model race ends and workflow integration begins as the strategic question shifts from who trains the best model to who controls the workflows where real decisions happen</li></ul><p><br></p><p>For operators and builders navigating 2026, the competitive advantage is no longer a better model. It's the workflow nobody is willing to rip out and replace.</p><br><p>Subscribe for daily AI strategy and news.</p><p>For playbooks and analysis: <a href="https://natesnewsletter.substack.com/" rel="noopener noreferrer" target="_blank">https://natesnewsletter.substack.com/</a></p><hr><p style='color:grey; font-size:0.75em;'> Hosted on Acast. See <a style='color:grey;' target='_blank' rel='noopener noreferrer' href='https://acast.com/privacy'>acast.com/privacy</a> for more information.</p>]]></itunes:summary>
		</item>
    	<itunes:category text="Business"/>
    	<itunes:category text="Technology"/>
    </channel>
</rss>
