There’s been a lot of talk this week about MCP, but MCP itself isn’t the real issue; it’s a symptom of a larger problem in AI. You already know what MCP is, and if you don’t, your LLM of choice can explain. The real question isn’t whether MCP is the answer (it isn’t). In fact, the entire paradigm of ‘agents’ is fundamentally flawed.
Agentic AI is too inefficient to work at scale because it is fundamentally bottlenecked by compute and energy. Just look at OpenAI’s continued financial losses or the enormous resource demands of Grok. Beyond scalability, it is too inefficient to be practical in real life: at current costs, centralized, cloud-based solutions are prohibitively expensive, even for commercial applications like quality control.
For consumers, a service dependent on a large, centralized API will be priced out of relevance, no matter how impressive it is. With GPT-4.5’s API coming in at a staggering $150 per million output tokens, it’s clear that the era of AI prices halving every six months is behind us. The teams that once promised cheap, accessible AI won’t be able to deliver.
But this isn’t how nature solves intelligence. Nature didn’t cover the earth in one giant, all-encompassing cell. Physics resists singularity. Intelligence in nature is always distributed, always collaborative. Ants build complex structures by breaking tasks into ant-sized pieces. Elephants into elephant-sized pieces. Humans into human-sized pieces. The future of AI should follow the same principle.
Research into large-scale AI is commendable—it’s an ambitious effort to reach true artificial general intelligence. But in the marketplace, these solutions won’t work. Despite advancements in nuclear fusion and renewable energy, we’re still years away from making large models cost-effective at scale—especially as global energy costs continue to rise.
Instead of asking, ‘How big does a model need to be to handle every task?’ we should be asking, ‘How small can a model be while still completing every task?’ This shift in thinking is why I’ve rebranded ‘agents’ in Pulse as ‘workers’, to better reflect this new paradigm. Rather than clumsily wiring APIs together through protocols like MCP, or leaning on one centralized model for everything, Pulse coordinates groups of specialized workers to get the job done. A single, streamlined API handles the process efficiently, allowing for dynamic collaboration. You can still call on a larger model when necessary, but simpler tasks remain lightweight and cost-effective.
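To make the contrast concrete, here is a minimal sketch of the worker-routing idea in Python. Everything in it (the `Worker` type, `route_task`, the capability checks, the relative costs) is hypothetical illustration, not Pulse’s actual API: each task goes to the cheapest specialized worker that can handle it, and the large generalist model is a last resort rather than the default.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Worker:
    name: str
    cost_per_task: float                # relative cost of invoking this worker
    can_handle: Callable[[str], bool]   # cheap capability check

def route_task(task: str, workers: list[Worker]) -> str:
    """Pick the cheapest worker whose capability check passes."""
    candidates = [w for w in workers if w.can_handle(task)]
    if not candidates:
        raise ValueError(f"no worker can handle: {task}")
    return min(candidates, key=lambda w: w.cost_per_task).name

workers = [
    Worker("ocr-small", 1.0, lambda t: "read" in t),
    Worker("summarizer-small", 1.5, lambda t: "summarize" in t),
    # The large generalist accepts anything, but costs orders of magnitude more.
    Worker("generalist-large", 100.0, lambda t: True),
]

print(route_task("read this invoice", workers))    # ocr-small
print(route_task("plan a supply chain", workers))  # generalist-large
```

The design choice this toy captures: the expensive model never disappears from the pool, it just stops being the first answer to every question.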
Instead of haphazardly integrating with the bloated ecosystem of B2B-SaaS companies that burned through venture capital for a decade, Pulse provides a general framework for commerce and interaction. No more juggling 15 apps or clicking through 10 websites—users simply search their network and interact with relevant nodes. This can be how you order food, shop, consult with a doctor or technician, submit homework—essentially, everything we currently rely on fragmented web services for. The framework is already functional, battle-tested, and ready for beta deployment—running on compute we own outright, powered by a solar farm.
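As an illustration of that one-search-many-nodes idea, here is a toy sketch. The `Node` type and `search` function are invented for this example and say nothing about Pulse’s real interface; the point is only that one query surfaces every node offering a service, replacing a separate app per provider.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    services: set[str]   # what this node offers on the network

def search(nodes: list[Node], service: str) -> list[str]:
    """Return every node on the network offering the requested service."""
    return [n.name for n in nodes if service in n.services]

network = [
    Node("corner-bistro", {"food-order", "delivery"}),
    Node("dr-alvarez", {"consultation"}),
    Node("hardware-shop", {"shop", "delivery"}),
]

print(search(network, "delivery"))      # ['corner-bistro', 'hardware-shop']
print(search(network, "consultation"))  # ['dr-alvarez']
```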
There is nothing theoretical about this: Pulse is already self-sustaining. We don’t need venture capital; our growth model is simple: as users join, we scale compute and solar to match. From beta, we’ll continue expanding both, supporting more and more nodes. The only question now is how many nodes it will take before the industry wakes up to reality.