OpenClaw's Viral Success Risks Exhausting Computing Power. What Opportunities Exist for US Stocks?
The OpenClaw craze has, for the first time, given the public a tangible view of a new form of Agent: AI capable of long-term task execution and cross-system action, gradually approaching a digital employee. This is the emergence of the Long-Horizon Agent. Over the past two years, capital has been most concentrated in Coding Agents: code is a closed, highly deterministic environment, and the easiest starting point for Agents to succeed. Now the real opportunities are migrating from the code world to enterprise processes and real business.
What changes does the Long-Horizon Agent bring?
1. Long-Horizon Agents run longer action chains: they can break a vague goal into multiple subtasks, maintain state for hours or even days, and run continuously across different systems. This makes state management essential. In many cases an Agent fails at a task not because the model isn't smart enough, but because it cannot maintain state across a long-running task. Durable Execution and State Management are becoming a new infrastructure layer.
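The core idea behind Durable Execution is simple: checkpoint the Agent's state after every completed step, so a crash, restart, or multi-day pause loses no work. A minimal sketch in Python (the file name and step names are illustrative, not from any specific framework):

```python
import json
from pathlib import Path

STATE_FILE = Path("agent_state.json")  # hypothetical checkpoint location

def load_state() -> dict:
    """Resume from the last checkpoint, or start a fresh task plan."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"completed": [], "pending": ["fetch_data", "summarize", "send_report"]}

def checkpoint(state: dict) -> None:
    """Persist state after every step so progress survives process death."""
    STATE_FILE.write_text(json.dumps(state))

def run_agent() -> dict:
    state = load_state()
    while state["pending"]:
        step = state["pending"].pop(0)
        # ... execute the step here (call a model, an external API, etc.) ...
        state["completed"].append(step)
        checkpoint(state)  # durable execution: each step is committed as it finishes
    return state

final = run_agent()
print(final["completed"])  # all steps complete, in order
```

Production systems layer retries, idempotency, and workflow engines on top of this pattern, but the principle is the same: the Agent's progress lives outside the process running it.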
2. By 2026, interaction will shift from passive response to proactive intervention. Future Agents will not just respond to users; they will continuously observe the environment, make suggestions, and execute automatically once authorized. For example, salespeople today operate a CRM manually; a future Agent will automatically analyze two years of emails, mine potential customers, and draft follow-ups, with the user only needing to click Approve. This greatly shortens the sales cycle.

These changes drive a dramatic increase in computing power demand. At the same time, tiering models by accuracy and cost-effectiveness becomes increasingly important.
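The observe-suggest-approve-execute loop described above can be sketched as a human-in-the-loop gate: the Agent drafts actions proactively, but only approved drafts are executed. All names here are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    """A proactive action drafted by the Agent, pending human approval."""
    description: str
    approved: bool = False

@dataclass
class ProactiveAgent:
    outbox: list = field(default_factory=list)
    executed: list = field(default_factory=list)

    def observe(self, senders: list[str]) -> None:
        # Mine the environment (e.g., an email archive) and draft follow-ups.
        for sender in senders:
            self.outbox.append(Suggestion(f"draft follow-up to {sender}"))

    def approve(self, index: int) -> None:
        self.outbox[index].approved = True

    def run(self) -> None:
        # Only approved suggestions are executed automatically.
        for s in self.outbox:
            if s.approved:
                self.executed.append(s.description)

agent = ProactiveAgent()
agent.observe(["alice@example.com", "bob@example.com"])
agent.approve(0)  # the user clicks "Approve" on the first draft
agent.run()
print(agent.executed)  # ['draft follow-up to alice@example.com']
```

The approval gate is what separates "proactive" from "autonomous": the Agent does the analysis and drafting, while the human retains the final decision.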
How much can revenue increase with the surge in Gemini and Claude usage?
In the last week of February, with Google's price optimization for Gemini 3 Flash and the official-version iterations of OpenClaw, token usage surged. Data shows that Google, Anthropic, and OpenAI currently hold 17.6%, 16.2%, and 14.3% of large model usage, respectively, with Chinese models close behind.
Factors considered in estimating increased revenue include:
1. Claude charges more than Gemini: the most heavily used model, Gemini 3 Flash Preview, charges $3 per million tokens; Google 3.1 Preview charges $12 per million tokens; Claude Opus 4.5 charges as much as $25.32 per million tokens.
2. OpenClaw accounts for 44% of OpenRouter's computing power consumption; the remainder is mostly various AI coding applications, whose share is shrinking. We conservatively assume this ratio remains unchanged.
3. Industry estimates suggest that the computing power OpenClaw consumes through OpenRouter is only one-third or less of OpenClaw's total incremental consumption, since much of its traffic connects directly to the official APIs and bypasses the OpenRouter platform. Google's and Anthropic's actual computing power revenue growth is therefore higher; we assume a multiplier of 3.
On this basis, OpenRouter brings Google annualized revenue of $375 million and Anthropic $2.031 billion. Note that these figures assume 0% month-over-month growth; in reality, OpenRouter data shows overall usage is still surging month over month.
Including non-OpenRouter channels, OpenClaw might bring incremental revenues of $495 million to Gemini and $2.682 billion to Claude.
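These figures can be reproduced from the stated assumptions (OpenClaw's 44% share of OpenRouter consumption, and a 3x multiplier for traffic that bypasses OpenRouter). A quick check:

```python
# Annualized OpenRouter revenue attributed to each vendor (from the estimate above).
openrouter_revenue = {"Google": 375_000_000, "Anthropic": 2_031_000_000}

OPENCLAW_SHARE = 0.44        # OpenClaw's share of OpenRouter computing power use
OFF_PLATFORM_MULTIPLIER = 3  # OpenRouter carries ~1/3 of OpenClaw's total traffic

def openclaw_increment(total_openrouter_revenue: float) -> float:
    """Incremental revenue attributable to OpenClaw across all channels."""
    return total_openrouter_revenue * OPENCLAW_SHARE * OFF_PLATFORM_MULTIPLIER

for vendor, revenue in openrouter_revenue.items():
    print(f"{vendor}: ${openclaw_increment(revenue) / 1e6:,.0f}M")
```

This yields $495 million for Google and roughly $2.68 billion for Anthropic; the small gap from the article's $2.682 billion comes from rounding in the intermediate values.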
The growth in computing power demand will also lift the Gemini and Claude supply chains. Broadcom, which supplies TPU chips to Google, as well as Amazon (Anthropic's partner) and its chip suppliers NVIDIA and Marvell Technology, stand to benefit from this increased demand.
Google's new Gemini Embedding 2 model will help OpenClaw truly "see" the world.
Google released its first native multimodal embedding model, Gemini Embedding 2, this week and opened a public beta through the Gemini API and Vertex AI.
Traditional embedding models focused mainly on text. With Gemini Embedding 2, text, images, video, audio, and documents are all compressed into the same vector space. The model achieves "cross-modal semantic alignment": the textual concept of a cat and the visual concept of a cat photo sit extremely close together, by vector distance, in a unified embedding space. In simple terms, when a user searches for "cat," the system can find not only relevant text but also cat images, video, and even sounds.
This provides a crucial foundation for AI Agents like OpenClaw to truly understand the world. For an Agent that must operate a computer and read the screen, it is no longer limited to recognizing text: it can directly understand which pixel region is a settings icon, which button is most relevant to the current task, and how screenshots relate to text instructions.
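"Close in vector distance" concretely means high cosine similarity between embedding vectors. The toy vectors below are hand-made stand-ins, not real model output, but they illustrate why a single shared space makes cross-modal retrieval a simple nearest-neighbor lookup:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy vectors standing in for real embeddings: in a well-aligned shared space,
# the text "cat" and a cat photo land close together; a dog photo lands farther away.
text_cat = [0.9, 0.1, 0.05]
image_cat = [0.85, 0.15, 0.1]
image_dog = [0.1, 0.9, 0.05]

assert cosine_similarity(text_cat, image_cat) > cosine_similarity(text_cat, image_dog)
```

A search for "cat" then reduces to embedding the query once and ranking every item, regardless of modality, by similarity against one shared index.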
Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation to acquire or dispose of any financial products, and any associated discussions, comments, or posts by the author or other users should not be considered as such either. It is for general information purposes only and does not consider your own investment objectives, financial situation, or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information; investors should do their own research and may seek professional advice before investing.

