🔔China Bans Foreign AI Chips!
The global AI race heats up as Google, Microsoft, Apple, and Moonshot push the limits of intelligence, from trillion-parameter models to open-source breakthroughs.

Hey AI Enthusiasts!
This week was packed with power moves, from China cutting off foreign AI chips and Google unveiling its record-breaking Ironwood TPU, to Microsoft breaking free from OpenAI’s limits, Apple betting big on Gemini, and Moonshot revealing an open-source model that rivals GPT-5.
Let’s dive in!
In today’s insights:
🇨🇳 China bans Nvidia, AMD, and Intel chips in state data centers
⚙️ Google launches Ironwood, its fastest AI chip ever
🧩 Moonshot’s $4.6M open-source model Kimi K2 rivals GPT-5
🧠 Microsoft finally gets to build its own AI models
🍏 Apple partners with Google Gemini to power the new Siri
Read time: 8 minutes.
🗞️ Recent Updates
AI Regulation
🚫 China Slams Door on Foreign AI Chips

The AI Field: China has issued guidance requiring state-funded data centers to use only domestically made AI chips, forcing projects less than 30% complete to remove or replace foreign chips, including those from Nvidia, AMD, and Intel.
Details:
Projects more advanced than 30% completion will be evaluated case-by-case, while some facilities planning to deploy Nvidia chips have already been suspended before breaking ground.
AI data center projects in China have drawn over $100 billion in state funding since 2021, though it's unclear exactly how many will be affected by the new directive.
The move comes amid a pause in trade hostilities between Washington and Beijing, representing one of China's most aggressive steps yet to eliminate foreign technology from critical infrastructure.
Nvidia's share of the Chinese AI chip market has plummeted from 95% in 2022 to zero currently, according to the company.
Why This Matters: This directive accelerates China's push for self-sufficiency in critical technologies amid escalating tech tensions with the U.S. While it strengthens domestic chipmakers like Huawei and Cambricon, it also risks widening the technology gap with the U.S. as China's access to cutting-edge foreign AI chips becomes increasingly restricted. For American chip giants, this effectively closes off a major revenue stream even as diplomatic relations show signs of thawing.
Want to get the most out of ChatGPT?
ChatGPT is a superpower if you know how to use it correctly.
Discover how HubSpot's guide to AI can elevate your productivity and creativity so you can get more done.
Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.

AI Hardware
⚙️ Google Launches Ironwood, Its Fastest AI Chip Ever

The AI Field: Google announced that Ironwood, its seventh-generation Tensor Processing Unit, will become generally available in the coming weeks, offering more than four times the performance of its predecessor and scaling up to 9,216 chips in a single pod.
Details:
Ironwood delivers up to 10 times the peak performance of Google's TPU v5p and more than four times the performance per chip compared to the previous generation.
Each Ironwood TPU boasts 4.6 petaFLOPS of dense FP8 performance, slightly exceeding Nvidia's B200 at 4.5 petaFLOPS and approaching the GB200's 5 petaFLOPS.
Connected through a 9.6 Tb/s Inter-Chip Interconnect network, Ironwood pods deliver 42.5 FP8 ExaFLOPS of compute power for training and inference—far exceeding Nvidia's GB300 NVL72 system at 0.36 ExaFLOPS.
Anthropic, one of Google's key AI partners, plans to deploy up to one million TPUs to train and serve its Claude models, citing Ironwood's gains in speed and scalability.
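As a quick sanity check, the per-chip and pod-scale figures above are consistent with each other: 9,216 chips at 4.6 dense FP8 petaFLOPS each works out to roughly the quoted 42.5 exaFLOPS (the small gap comes from rounding in the per-chip number).

```python
# Back-of-the-envelope check of the published Ironwood pod figures.
CHIPS_PER_POD = 9_216     # max Ironwood chips in a single pod
PFLOPS_PER_CHIP = 4.6     # dense FP8 petaFLOPS per chip (quoted)

pod_exaflops = CHIPS_PER_POD * PFLOPS_PER_CHIP / 1_000  # peta -> exa
print(f"Pod peak: {pod_exaflops:.1f} FP8 ExaFLOPS")     # ~42.4, matching the quoted 42.5

# Ratio versus the Nvidia GB300 NVL72 figure quoted above (0.36 ExaFLOPS)
print(f"~{pod_exaflops / 0.36:.0f}x the quoted GB300 NVL72 number")
```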
Why This Matters: Google is making a serious play to challenge Nvidia's dominance in the AI chip market with custom silicon that rivals—and in some configurations surpasses—the performance of Nvidia's latest offerings. With 192 GB of HBM3E memory per chip (six times that of its predecessor), Ironwood can process larger models and datasets while reducing data transfer bottlenecks. This could reshape the competitive landscape for AI infrastructure and give Google Cloud a powerful differentiator against AWS and Azure.
🧩 Moonshot's $4.6M Open-Source Model Rivals GPT-5

The AI Field: Chinese startup Moonshot AI released Kimi K2 Thinking, an open-source AI model with 1 trillion parameters that reportedly cost just $4.6 million to train—a fraction of what U.S. companies spend—while outperforming rivals on complex reasoning benchmarks.
Details:
The model can execute 200 to 300 sequential tool calls without human intervention, using software tools like search, calculations, and data retrieval while reasoning through each step.
Kimi K2 achieved a 43% score on Humanity's Last Exam, a benchmark of 3,000 graduate-level reasoning questions, reportedly exceeding OpenAI's GPT-5 and Anthropic's Claude Sonnet 4.5.
The model supports INT4 inference through Quantization-Aware Training, roughly doubling generation speed while maintaining state-of-the-art accuracy.
At $4.6 million in training costs, it undercuts even DeepSeek's V3 model ($5.6 million) and stands in stark contrast to the hundreds of millions or billions spent by OpenAI and Anthropic.
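For readers new to INT4 inference: the core idea is to store weights as 4-bit integers plus a scale factor instead of 16-bit floats, shrinking memory traffic and speeding up generation. A minimal illustrative sketch of a symmetric quantize/dequantize round trip (not Moonshot's actual scheme, which bakes quantization into training):

```python
def quantize_int4(weights):
    """Symmetric INT4 quantization: map floats to integers in [-7, 7]."""
    scale = max(abs(w) for w in weights) / 7 or 1.0  # avoid div-by-zero on all-zero input
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from integers and the scale."""
    return [v * scale for v in q]

w = [0.81, -0.35, 0.02, -0.70]
q, s = quantize_int4(w)
w_hat = dequantize(q, s)
print(q)      # small integers in [-7, 7]
print(w_hat)  # close to the original weights
```

Quantization-Aware Training goes further by simulating this rounding during training itself, so the model learns weights that survive the precision loss.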
Why This Matters: Moonshot's release highlights a fundamental shift in the AI race—China is discovering dramatically more cost-efficient paths to frontier AI development. While U.S. companies pour billions into training, Chinese startups are achieving comparable or superior results at a tiny fraction of the cost through algorithmic innovation. This efficiency advantage, combined with China's unified regulatory approach and energy subsidies, validates Nvidia CEO Jensen Huang's recent warning that China is positioned to beat the U.S. in the AI race.
🗞️ More Updates

🧠 Microsoft Finally Gets to Build Its Own AI Models

The AI Field: Microsoft announced the formation of its MAI Superintelligence Team, marking a major shift after the company was previously barred from pursuing its own AGI research and restricted from training large models beyond a certain computing threshold due to its OpenAI partnership.
Details:
The new team, led by AI chief Mustafa Suleyman and chief scientist Karén Simonyan, will work toward "humanist superintelligence" while Microsoft extends its OpenAI partnership through 2030 to maintain early access to OpenAI's models.
Microsoft joins Meta (which rebranded its AI efforts as Meta Superintelligence Labs in June 2025) and other tech giants in publicly positioning their work as a drive toward superintelligence, though no such systems currently exist.
The team includes researchers Microsoft poached from Google, DeepMind, Meta, OpenAI, and Anthropic, signaling the company's serious investment in building frontier AI capabilities in-house.
Suleyman acknowledged it will take several years before Microsoft is fully on the path to frontier AI but called it a "key priority," with the company investing heavily in GPU infrastructure for model training.
Why This Matters: Microsoft is hedging its bets in the AI race by no longer putting all its eggs in the OpenAI basket. After being contractually limited in what it could build, the company now has the freedom to develop its own advanced models while still maintaining access to OpenAI's technology. This dual strategy positions Microsoft to compete more directly with Google, Meta, and other tech giants who've been training massive models independently. The focus on "humanist superintelligence" also signals Microsoft's attempt to differentiate its approach from competitors by emphasizing safety and human benefit over pure technological capability.
👉 Read more here

🍏 Apple Partners with Google Gemini to Power the New Siri

The AI Field: Apple is finalizing an agreement to pay Google approximately $1 billion annually for access to a custom 1.2 trillion parameter Gemini AI model that will power the long-awaited overhaul of Siri, expected to launch as early as spring 2026.
Details:
The Gemini model's 1.2 trillion parameters dwarf Apple's current cloud-based model (150 billion parameters) and will handle Siri's summarizer and planner functions to help the assistant synthesize information and execute complex tasks.
Apple evaluated models from OpenAI and Anthropic before choosing Google, with Anthropic's fees reportedly exceeding $1.5 billion annually—making Google's offer more financially sustainable.
Despite using Google's AI, Apple will maintain strict privacy controls with Gemini operating on Apple's Private Cloud Compute servers, ensuring user data remains isolated from Google's infrastructure with no data stored between sessions.
Apple isn't giving up on independence—the company is still working on its own 1 trillion parameter cloud-based model that could be ready as soon as 2026 to eventually replace Gemini.
Why This Matters: This marks a significant admission from Apple that it's fallen too far behind in AI to go it alone right now. The company that prides itself on owning its entire stack is outsourcing Siri's brain to its biggest rival in search. It's a temporary fix to buy time while Apple scrambles to develop competitive in-house models, following the same playbook it used with maps, weather data, and chips—lean on partners until your own solution is ready. For Google, it's a massive validation that Gemini can power experiences across platforms, and a $1 billion annual payday to boot. The real question is whether Apple can actually catch up, or if this "temporary" solution becomes permanent as Google continues advancing its models.
👉 Read more here
🗞️ More AI Hits
Snap inks $400M deal with Perplexity: Snap Inc. will integrate Perplexity’s conversational AI search engine into its app beginning in early 2026, in a deal worth $400 million in cash and equity.
Anthropic projects $70B revenue by 2028: Anthropic expects to generate up to $70 billion in revenue and about $17 billion in cash flow by 2028, driven by rapid B2B adoption of its AI models.
Golden Nuggets
🇨🇳 China bans Nvidia, AMD, and Intel chips in state data centers
⚙️ Google launches Ironwood, its fastest AI chip ever
🧩 Moonshot’s $4.6M open-source model Kimi K2 rivals GPT-5
🧠 Microsoft finally gets to build its own AI models
🍏 Apple partners with Google Gemini to power the new Siri
What did you think of today’s edition?
Until next time!
Olle | Founder of The AI Field

