Today, China’s National Development and Reform Commission disclosed a foreign investment security review decision: the office of the foreign investment security review mechanism decided to prohibit the foreign acquisition of the Manus project and required the parties to unwind the transaction.
Strictly speaking, this is not a “penalty” imposed on Manus. A more accurate description is that a foreign acquisition involving Manus was blocked under China’s foreign investment security review regime. The issue was not a wrong model answer, nor a specific operational violation. The issue was the transfer of control over an AI agent project.
What matters is not only that Manus was a breakout AI product, or that the reported buyer was Meta. The important part is the path behind it: born out of Wuhan’s Optics Valley, Manus caught the AI application window as a general agent, moved toward Singapore and a more global capital structure, and then ran into a national security boundary just as it was entering a US tech giant’s acquisition pipeline.
Manus being blocked is not just one failed AI transaction. It is a case study in how AI agent companies are priced, how they exit, how they globalize, how they face security review, and how they end up on the desks of finance and compliance teams.
1. Manus Is an Application Company, Not a Foundation-Model Idealist
Manus first became widely visible in March 2025. DeepSeek’s moment was still fresh, and Manus suddenly broke out as a “general AI agent.” Unlike a conventional chatbot, it did not just answer questions. It tried to complete tasks: research, reports, websites, stock analysis, travel planning, breaking a goal into steps and calling tools to execute them.
It looked like a product miracle, but it did not come from nowhere. Public reports connect the Butterfly Effect team behind Manus closely with Wuhan’s Optics Valley. Founder Xiao Hong graduated from Huazhong University of Science and Technology, and early Manus R&D was also based in Wuhan. Universities, engineers, incubators, affordable offices, and local startup policy formed the soil that made the early company possible.
To understand Manus, it also helps to look at Xiao Hong’s own playbook. In Zhang Xiaojun’s three-hour Business Interview podcast, he comes across less as a foundation-model idealist and more as a very pragmatic application founder. He worked on WeChat ecosystem tools, then Monica, then Manus. The common thread is not owning the deepest layer of technology, but reading platform shifts, model shifts, and user demand, then productizing quickly when a new capability window opens.
That is what makes Manus interesting. It showed that Chinese AI startups do not have to train a frontier model to create a globally watched product. The application layer can matter. But that strength later became part of the sensitivity.
Once an AI agent is genuinely useful, it is no longer just an app. It touches files, webpages, accounts, browsers, enterprise systems, and business workflows. Traditional software sells functionality. Agents sell execution. Once execution becomes embedded in workflows, it starts to look like infrastructure.
That is exactly where security review starts paying attention.
After Manus broke out, the story moved quickly toward Singapore. Public reporting says Manus confirmed in mid-2025 that its headquarters had moved from China to Singapore, and later reports said its Wuhan office had largely emptied. Commercially, this is understandable: Singapore offers easier access to dollar capital, global customers, a more international legal and data environment, and a clearer path into Western tech M&A.
But in a security review frame, going global can also mean reassigning control, technology trajectory, and data boundaries. That is the core tension in the Manus case.
2. Finance: Valuation Is Not Price Until the Deal Closes
If Meta had completed the acquisition of Manus, it would have looked like a clean startup exit. Public reporting put the deal value at around US$2 billion. The parties did not publicly disclose an official final amount, but the reported scale itself suggests this was not an ordinary product acquisition. It was strategic pricing for agent execution capability.
For early employees, investors, and option holders, that would mean a major liquidity event. For Meta, it would mean acquiring a bundle of agent capability, global distribution, compute, and product surface area.
The security review decision changes the valuation formula.
In the past, AI application companies might be valued on revenue growth, retention, agent success rates, enterprise penetration, and strategic acquisition premium. Now another item has to be added: regulatory completion risk.
If a deal cannot pass foreign investment security review, technology export review, or data compliance review, the valuation remains a paper number. In M&A, the real price is not the number in the headline. It is the price that closes.
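The arithmetic behind "the price that closes" is simple but worth making explicit. A toy sketch, with entirely hypothetical numbers (nothing here reflects the actual deal terms): the headline price only counts weighted by the probability that regulatory review clears, against a fallback standalone value if it does not.

```python
# Toy illustration with made-up numbers: a headline price matters only
# in proportion to the probability the transaction actually completes.
def risk_adjusted_value(headline_price, p_close, standalone_value):
    """Expected deal value given regulatory completion risk."""
    return p_close * headline_price + (1 - p_close) * standalone_value

# Hypothetical: a $2.0B headline price with a 60% chance of clearing
# review, versus a $0.8B standalone valuation if the deal is blocked.
expected = risk_adjusted_value(2.0e9, 0.60, 0.8e9)
print(f"${expected / 1e9:.2f}B")  # 0.6*2.0 + 0.4*0.8 = 1.52
```

Under those assumed inputs, a $2 billion headline is really worth about $1.52 billion in expectation, which is why regulatory completion risk now belongs in the valuation formula rather than in a footnote.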
This directly affects deal structure. Regulatory approval becomes a heavier condition precedent. Transaction documents will need tighter representations and warranties, indemnities, divestiture obligations, and termination mechanics. Sellers also have to rethink exit certainty. A high-valuation acquisition that cannot close can drag on governance, employee morale, and customer confidence.
Employee options are affected too. AI company valuations can rise quickly, but option value ultimately depends on whether a liquidity event completes. Cases like Manus remind founders and CFOs that compensation design, financing terms, and M&A planning cannot rely only on growth narratives. They have to account for deal completion probability.
For investors, “sell to a US tech giant” used to be a valuation-supporting exit path. For AI agent companies with Chinese R&D roots, enterprise data scenarios, automation capability, and offshore control structures, that option now needs a discount.
This does not mean going global or being acquired is impossible. It means the transaction structure itself has become a risk asset.
3. Compliance: An Agent Is a High-Privilege Digital Worker
Manus-type products are sensitive not merely because they “know AI,” but because they can execute tasks.
Once an agent connects to email, browsers, cloud drives, CRM, ERP, finance systems, code repositories, or approval workflows, it starts touching the core questions of internal control:
- Who can read the data?
- Who can initiate actions?
- Who can modify files?
- Who can generate payment, contract, quotation, or approval language?
- Are logs auditable?
- Do high-risk actions require human review?
- After a change of control, can offshore entities access data or permissions?
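The questions above can be encoded as an action gate. A minimal sketch of a hypothetical design (not any real product's API): high-risk actions are held for human approval, low-risk actions pass through, and every attempt lands in an audit log.

```python
import datetime

# Hypothetical agent action gate: high-risk actions require a human
# approval point, and every request is recorded for audit.
HIGH_RISK = {"initiate_payment", "modify_contract", "approve_invoice"}

audit_log = []

def request_action(agent_id, action, target):
    entry = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "target": target,
    }
    if action in HIGH_RISK:
        entry["status"] = "pending_human_review"  # human approval point
    else:
        entry["status"] = "allowed"               # low-risk, auto-approved
    audit_log.append(entry)                       # logs stay auditable
    return entry["status"]

print(request_action("agent-01", "read_file", "q3_invoices.pdf"))      # allowed
print(request_action("agent-01", "initiate_payment", "vendor-1142"))   # pending_human_review
```

The point of the sketch is the shape, not the details: execution requests flow through a single gate that classifies, blocks, and logs, which is what turns an agent from an opaque tool into something an internal control framework can reason about.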
This is no longer just an IT procurement issue. It is a finance control, data compliance, third-party risk, and business continuity issue.
Traditional SaaS provides functionality. Agents provide execution. An agent may read invoices, fill reimbursement forms, classify accounts, and generate payment explanations. It may read contracts, suggest pricing, write customer follow-up messages, change code, run tests, open pull requests, or touch deployment environments.
Finance and compliance teams can no longer ask only whether a tool improves efficiency. They have to ask whether it can initiate actions with financial consequences. If the answer is yes, it belongs inside the internal control perimeter.
Enterprise procurement of AI agents should include a new review checklist: permission matrix, data flow map, log retention, human approval points, high-risk action blocking, supplier change-of-control notice, training data terms, offshore access arrangements, exit rights, and data deletion mechanics.
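The first item on that checklist, the permission matrix, is concrete enough to sketch. An illustrative (hypothetical) deny-by-default matrix: each enterprise system lists what the agent may do, and anything not explicitly granted is refused.

```python
# Hypothetical permission matrix for an enterprise agent deployment.
# Deny-by-default: any system or action not listed here is refused.
PERMISSIONS = {
    "crm":     {"read"},
    "email":   {"read", "draft"},  # drafting allowed, sending is not
    "finance": set(),              # no agent access to finance systems
}

def is_allowed(system, action):
    """Check an agent action against the permission matrix."""
    return action in PERMISSIONS.get(system, set())

print(is_allowed("email", "draft"))   # True
print(is_allowed("email", "send"))    # False: sending needs a human
print(is_allowed("finance", "read"))  # False: denied by default
```

A table like this is also what a change-of-control review would examine: if the supplier's ownership moves offshore, the question becomes who can now read or rewrite these grants.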
The biggest lesson for enterprise customers is simple: an agent is not ordinary software. It is a digital worker acting on your behalf. If it can execute for you, it becomes part of your control environment.
4. Timing: The Agent Race Is Accelerating, Not Slowing
April 2026 is an interesting moment.
On one side, China’s security review blocked the Meta acquisition of Manus, showing that cross-border M&A has hit a national security boundary.
On the other side, agent products are rapidly moving from web apps into the desktop execution layer.
On April 16, OpenAI announced a major Codex update, saying Codex can operate a computer, connect to more everyday tools, generate images, remember preferences, and support features like an in-app browser, remote devbox, and PR review. Claude’s official release notes also show Claude Cowork reaching GA through Claude Desktop on macOS and Windows in April, alongside enterprise controls such as RBAC, Analytics API, and OpenTelemetry.
OpenClaw and similar open-source or local agent projects point in the same direction: agents are moving toward the local or desktop environment, connecting messaging tools, browsers, file systems, and external APIs as execution surfaces for individual and enterprise workflows.
The center of competition is shifting.
The first phase was model capability: whose model is smarter.
The second phase was application packaging: who can turn models into usable products.
The third phase is execution access: who can safely enter the user’s computer, browser, files, email, code repository, and enterprise systems.
Manus sits between the second and third phases. Codex, Claude Desktop, OpenClaw, and the broader ecosystem show that the third phase has already begun.
So Manus being blocked does not make the AI agent opportunity smaller. It shows the opposite: the category is now too important to be treated as ordinary software.
The old question for AI application companies was whether users would pay.
The new questions are whether users will grant permissions, whether enterprises will let agents into systems, and whether regulators will allow control to move across borders.
Those are the real battlegrounds for the next stage of agents.
Closing
In that podcast interview, Xiao Hong made an interesting point: the world is not a linear extrapolation, and founders need to become important variables in the game.
Manus did become a variable. It showed that Chinese AI application founders can build globally watched products without training the foundation model themselves. It also showed regulators that once an application-layer agent becomes powerful enough, it may no longer be just an application.
Manus is unlikely to be the last case. It has simply placed the question on the table earlier than expected: can an AI agent grown out of the Chinese ecosystem ultimately be sold to a US tech giant?
At least in the Manus case, today’s answer is clear.
Sources:
- NDRC: Foreign investment security review decision on the proposed acquisition of Manus
- NDRC: Measures for the Security Review of Foreign Investment
- Business Interview: three-hour interview with Manus founder Xiao Hong
- Wuhan Municipal Government: visit to the Manus R&D company
- ITHome: Manus Wuhan team relocation and Singapore operations
- OpenAI: Codex for almost everything
- Claude Help Center: Release notes