Why founders are ditching ChatGPT for local AI models

the energy that's needed, right, rather than concentrating it in these big data centers, right, at specific

I don't know. That's much like, that would assume there's gonna be a lot more compute on the edge. Right? I mean, some of these models are so tiny. Yeah. I mean, Gemini's latest is operating on the iPhone. Right? Well, that's a very

forward to Optima I mean, actually thinking back to the, you know, the robots and things like that. I don't know. I mean, yeah, probably would be big enough to dent what's going on in the cloud, of course. Doug, the beauty is you have models of different sizes and different

memory and hardware requirements. The reason people are running things locally is because they keep running out of Claude tokens or OpenAI tokens.

About this clip

The discussion explores how AI models are becoming small enough to run locally on devices like iPhones, potentially shifting compute from centralized data centers to edge devices. The speakers note that developers are increasingly running models locally not just for performance, but because they keep hitting token limits on services like Claude and OpenAI.

Why this clip

Reveals a practical shift in how developers are actually using AI tools, driven by cost and access limitations rather than just technical capabilities.

15:47 - 16:26 · 39s · market insight

What they said next

US government vs Anthropic: VCs debate AI regulation priorities

21:24 - 44s · contrarian take

Want clips like this for your podcast?

We find your top 5-8 clips, write the hooks, and deliver ready-to-post content. Your first 2 episodes are free.

Get 2 Episodes Clipped Free