Key Points
- The Chinese A-share PCB sector saw a historic 2.62% surge on March 18, with several companies hitting daily price limits, driven by Nvidia’s GTC 2026 announcement.
- Nvidia (Yingweida 英伟达) unveiled the Groq 3 Language Processing Unit (LPU), a new chip designed specifically for AI inference, offering a 35x improvement in inference throughput-to-power ratio.
- The LPU, built on Groq technology, is optimized for the latency-sensitive “decoding” stage of AI, complementing GPUs which excel at “pre-fill” training data processing.
- Analysts project the AI inference chip market to reach ¥1,015 billion RMB ($145 billion USD) by 2026, accounting for 52% of the global AI chip market and growing at over 50% annually.
- The rally in PCB stocks reflects investor conviction that complex LPU architectures will drive massive demand for sophisticated, high-density PCB designs, signaling that the “inference era has arrived.”
- Compared with GPUs, LPUs offer three advantages: dramatic power reduction per operation, lower operational costs for high-volume AI agents, and latency optimized for real-time interaction.

On March 18, something remarkable happened in China’s stock market.
The A-share Printed Circuit Board (PCB) sector didn’t just gain—it dominated.
We’re talking about a 2.62% sector-wide surge that outpaced every other conceptual stock category trading that day.
But here’s what’s interesting: this wasn’t random market enthusiasm.
This was a direct response to Nvidia’s (Yingweida 英伟达) landmark announcement at GTC 2026.
The reason?
A completely new type of AI chip just hit the scene, and it’s about to reshape the entire inference market.
The PCB Stock Explosion: A Sector-Wide Win
The gains were anything but subtle.
Jinlu Electronics (Jinlu Dianzi 金禄电子) surged over 10% in a single trading session.
But the real story is how broad the rally was.
Three companies hit the daily price limit (“limit up”):
- Aohong Electronics (Aohong Dianzi 澳弘电子)
- Aoshikang Technology (Aoshikang 奥士康)
- Guanghe Technology (Guanghe Keji 广合科技)
Meanwhile, the rest of the sector’s heavy hitters posted impressive gains:
- Jin’an Guoji (Jin’an Guoji 金安国纪) and Hongxin Electronics (Hongxin Dianzi 弘信电子)—both up more than 7%
- Pengding Holdings (Pengding Konggu 鹏鼎控股), WUS Printed Circuit (Hudian Gufen 沪电股份), Mankun Technology (Mankun Keji 满坤科技), Yanmai Technology (Yanmai Keji 燕麦科技), and Eastwin (Dongwei Keji 东威科技)—all gaining over 6%
- Han’s CNC (Dazu Shukong 大族数控), Aiko Optoelectronic (Aike Guangdian 埃科光电), Ultrasonic Electronics (Chaosheng Dianzi 超声电子), Sihui Fuji (Sihui Fushi 四会富仕), Tianshan Electronics (Tianshan Dianzi 天山电子), and Honghe Technology (Honghe Keji 宏和科技)—climbing more than 5%
This kind of coordinated sector movement doesn’t happen by accident.
Something fundamental shifted in the market’s understanding of where AI infrastructure is heading.

The LPU: Nvidia’s Quiet Revolution in AI Inference
The catalyst?
Nvidia’s GTC 2026 conference in San Jose, California.
Jensen Huang (Huang Renxun 黄仁勋), Nvidia’s founder and CEO, took the stage in his signature black leather jacket and unveiled something unexpected.
Not a massive GPU with more cores.
Not an incremental performance upgrade.
Instead: the Groq 3 Language Processing Unit (LPU)—a compact, purpose-built chip specifically designed for one job.
AI inference.
What Is an LPU, and Why Should You Care?
Let’s break down what makes this chip different.
The LPU (Language Processing Unit) is a dedicated AI inference acceleration chip built on technology from Groq—a company Nvidia acquired last year.
Here’s the key insight: GPUs and LPUs handle different phases of AI workloads.
- GPUs are generalists—they excel at the computationally intensive “pre-fill” stage where you’re processing training data and building foundational models
- LPUs are specialists—they’re optimized for the latency-sensitive “decoding” stage where the model actually generates responses in real-time
Think of it like assembly line efficiency.
GPUs are like having massive machines that can process huge batches of raw materials at once.
LPUs are like having precision tools that specialize in speed and responsiveness.
Together, they create a complete AI pipeline.
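To make the division of labor concrete, here is a minimal toy sketch in Python (not Nvidia’s actual API, and the “model” is a stand-in) of how an inference pipeline splits into a batch-oriented pre-fill stage and a token-by-token decode stage:

```python
# Toy illustration of the two-phase inference pipeline described above.
# The token values are placeholders; only the control flow matters.

def prefill(prompt_tokens):
    """Process the whole prompt in one batch -- compute-bound work
    suited to a massively parallel GPU."""
    # A single pass over all prompt tokens builds the model's working
    # state (the KV cache, in a real transformer).
    return {"context": list(prompt_tokens)}

def decode(state, max_new_tokens):
    """Generate output one token at a time -- latency-bound work
    suited to a specialized low-latency chip like an LPU."""
    output = []
    for _ in range(max_new_tokens):
        # Each step depends on the token before it, so the end user
        # feels per-token latency directly.
        next_token = len(state["context"]) % 100  # stand-in for sampling
        state["context"].append(next_token)
        output.append(next_token)
    return output

state = prefill([1, 2, 3])
tokens = decode(state, max_new_tokens=4)
print(tokens)  # → [3, 4, 5, 6]
```

The pre-fill step is one big parallel operation; the decode loop is inherently sequential, which is exactly why a latency-optimized chip can dominate that phase.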
The Vera Rubin + Groq Architecture: A Game-Changing Hybrid System
Jensen Huang described a future where these two chip types work in perfect tandem.
The Vera Rubin system integrates:
- 72 Rubin GPUs for compute-heavy pre-fill operations
- 36 Vera CPUs for supporting infrastructure
This GPU cluster handles all the heavy computational lifting.
Then the Groq LPU takes over for the decoding phase—the part that actually matters to end users, because that’s where latency determines the quality of experience.
The result?
A 35x improvement in inference throughput-to-power ratio.
Let that sink in.
Not 35% better.
Not 3.5x better.
35 times more efficient at getting the same inference results while using dramatically less power.
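A quick back-of-envelope calculation shows why a 35x throughput-to-power gain matters at data-center scale. The baseline figure below is an assumed placeholder, not a published benchmark; only the 35x multiplier comes from the article:

```python
# What a 35x throughput-per-watt gain means for a fixed inference workload.
# baseline_tokens_per_sec_per_watt is a hypothetical illustrative number.

baseline_tokens_per_sec_per_watt = 10.0
improved = baseline_tokens_per_sec_per_watt * 35  # the claimed 35x gain

target_tokens_per_sec = 1_000_000  # fixed workload to serve

power_baseline_kw = target_tokens_per_sec / baseline_tokens_per_sec_per_watt / 1000
power_improved_kw = target_tokens_per_sec / improved / 1000

print(f"baseline: {power_baseline_kw:.1f} kW, hybrid: {power_improved_kw:.2f} kW")
# → baseline: 100.0 kW, hybrid: 2.86 kW
```

Whatever the true baseline, the ratio is what counts: the same workload drops to roughly 1/35th of the power draw, which is the difference between inference being a cost center and a profit center.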

The AI Inference Market Is About to Explode
Here’s where the PCB stocks’ reaction makes perfect sense.
Brokerage research indicates that inference demand is about to skyrocket as the world enters the AI Agent era.
And the numbers are staggering.
The Numbers: Inference Is the New Gold Rush
By 2026, the global AI chip market is projected to reach:
¥1,960 billion RMB ($280 billion USD)
That’s a massive market.
But here’s what matters: inference chips alone are expected to capture ¥1,015 billion RMB ($145 billion USD) of that total—accounting for 52% of the entire market.
Translation: inference is a bigger opportunity than training.
And the growth rate?
An annual compound growth rate exceeding 50%.
For context, that’s a venture-scale growth rate in an established, multi-hundred-billion-dollar market.
Why LPU Technology Will Dominate Inference
The market isn’t speculating here—there are concrete technical reasons why LPUs are the superior choice for inference workloads.
Compared to GPUs, LPUs offer three critical advantages:
- Energy Efficiency: LPUs consume dramatically less power per inference operation, which matters enormously at scale
- Cost-Effectiveness: Lower power consumption means lower operational costs, making inference economically viable at the volumes needed for AI agents
- Latency Performance: LPUs are optimized for speed, which directly impacts user experience in interactive AI applications
In other words, LPUs aren’t just incrementally better—they’re architecturally suited to what inference actually requires.
Analysts are bullish because they believe LPU technology will gradually become the dominant inference solution.

Why This Matters for PCB Companies
Now, the obvious question: why did PCB stocks rally on this news?
Because complex chip architectures require sophisticated printed circuit boards.
LPU systems won’t ship as single-chip solutions.
They’ll ship as complex architectural systems with multiple components, interconnects, and advanced PCB designs.
That means:
- More PCB units per system
- Higher technical complexity (premium pricing)
- Multi-year growth as LPU adoption accelerates
PCB manufacturers supply the physical infrastructure that makes advanced chips work in production systems.
As the inference chip market explodes, PCB demand follows directly behind.

The Bigger Picture: We’re Entering the Inference Era
What happened on March 18 in Chinese stock markets reflects a broader shift happening globally.
For years, the narrative was all about training.
Bigger models, more data, breakthrough architectures—all focused on creating AI capabilities.
But that story is maturing.
Now the focus is on deploying those capabilities at scale through inference.
And inference has different requirements entirely.
It needs:
- Low latency for responsive user experiences
- Efficient power consumption for cost-effective deployment
- Specialized hardware that’s different from general-purpose GPUs
Enter the LPU.
Jensen Huang’s announcement wasn’t just a product launch.
It was Nvidia signaling that the company is betting big on a future where specialized inference chips dominate deployment.
And the PCB sector’s rally suggests that investors in China understood the implications immediately.

What’s Next for AI Infrastructure Investors
The LPU adoption cycle is just beginning.
If the market thesis holds—and brokerage forecasts suggest it will—we’re looking at:
- Massive growth in LPU shipments through 2026 and beyond
- Corresponding demand for advanced PCB manufacturing capacity
- Consolidation and specialization in the PCB supply chain
- Premium pricing for companies supplying complex, high-density PCBs
The 35x improvement in throughput-to-power efficiency isn’t hyperbole—it’s a fundamental shift in what’s economically viable for AI deployment.
And that shift cascades through the entire hardware supply chain, starting with the PCB manufacturers who build the physical systems.
The rally on March 18 wasn’t just about a new chip.
It was about investors recognizing that the inference era has arrived, and the infrastructure to support it is about to become incredibly valuable.
