Behind the Scenes with an early OpenClaw contributor! | E2252

This Week in Startups · Feb 26, 2026 · 1h 22min

Tyler Yust, an early OpenClaw contributor, breaks down the technical architecture behind AI agents and reveals how sub-agents are revolutionizing automation workflows by working in parallel while maintaining chat interfaces. The conversation takes a sharp turn into market dynamics, exploring why AI companies are achieving unprecedented valuations and growth rates that would have seemed impossible just five years ago.

Key takeaways

  • AI models have doubled in speed from 30 to 60 tokens per second in just three weeks, dramatically improving agent performance despite remaining bottlenecks.
  • Sub-agents can execute complex tasks in the background without interrupting your primary conversation, though this parallel processing comes at the cost of higher token usage.
  • AI companies are growing at rates that make traditional revenue-to-valuation multiples completely obsolete compared to historical benchmarks.
  • The lowest-end knowledge work jobs will be eliminated first as AI capabilities increasingly exceed human performance at basic intellectual tasks.
  • OpenClaw's architecture allows agents to leverage a shared database of learned APIs, enabling faster execution by avoiding redundant API discovery.
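The parallel sub-agent pattern from the takeaways can be sketched in a few lines. This is a minimal illustration, not OpenClaw's actual API: the function names and the simulated "long-running work" are stand-ins, assuming only that a sub-agent runs as a background task while the main conversation loop keeps responding, then hands its results back when done.

```python
import asyncio

async def sub_agent(task: str) -> str:
    # Stand-in for the long-running background work
    # (the fifteen-to-twenty-minute job described in the episode).
    await asyncio.sleep(0.1)
    return f"results for: {task}"

async def main_agent() -> list[str]:
    # Spawn the sub-agent as a background task; the main loop is not blocked.
    background = asyncio.create_task(sub_agent("audit the API logs"))
    replies = []
    while not background.done():
        # The primary conversation keeps going while the sub-agent works.
        replies.append("still chatting...")
        await asyncio.sleep(0.05)
    # Sub-agent reports back to the main agent with just the results.
    replies.append(await background)
    return replies

results = asyncio.run(main_agent())
```

The design choice mirrored here is that the main agent only ever sees the sub-agent's final report, never its intermediate tool calls, which is what keeps the chat interface responsive.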

Listen to full episode


Best moment

27:55 · 43s · Tactical advice

AI sub-agents work in background while you chat, then report results


…more than two or three tool calls to spawn a sub-agent. So what that sub-agent does is it works in the background, so you can still chat with it on Slack, and it doesn't interfere with anything. And while that sub-agent is going, it's working. It's doing all that hard work that takes fifteen, twenty minutes, and then reports back to the main agent saying, hey, here's your results, and you just get the results. You don't have to wait for it to, like…

Is that gonna be automated? Like, shouldn't it know to just…

…every request on a sub-agent? Yeah. Some people don't really want it because, once again, you're using more tokens when you create a sub-agent because, I mean, it has to recreate the prompt and memory and stuff, so it was 20,000…
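The token-cost tradeoff described in this exchange is easy to quantify. A small sketch, assuming the episode's figure that each sub-agent re-sends roughly 20,000 tokens of prompt and memory before doing any useful work; the function name and numbers are illustrative, not from OpenClaw:

```python
# Assumed figure from the episode: each sub-agent recreates
# the system prompt and memory, roughly 20,000 tokens.
CONTEXT_TOKENS = 20_000

def sub_agent_overhead(num_sub_agents: int) -> int:
    """Extra input tokens spent just recreating context for each sub-agent,
    before any actual task work happens."""
    return num_sub_agents * CONTEXT_TOKENS

# Spawning a sub-agent for every request adds up quickly:
print(sub_agent_overhead(5))  # prints 100000
```

This overhead is why, as the guest notes, some users prefer not to spawn a sub-agent for every request.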

We've just never seen companies grow this fast.

at 4:28

8 clips


