Breakthrough: China Develops High‑Precision Memristor-Based Analog Matrix‑Computing Chip

Key Points

  • Full‑analog memristor compute‑in‑memory: A memristor‑based analog matrix solver from Peking University (Běijīng Dàxué 北京大学) led by Sun Zhong (Sūn Zhòng 孙仲) approaches digital precision while delivering much higher energy efficiency and throughput for linear‑algebra kernels.
  • Notable precision milestone: Demonstrated 16×16 matrix inversion at 24‑bit fixed‑point precision, using iterative refinement to drive down relative error; for 32×32 inversion, the reported performance already exceeds the single‑core performance of high‑end GPUs.
  • Massive throughput gains reported: For larger problems (128×128), the paper reports compute throughput more than 1,000× that of top digital processors; tasks that might take a modern GPU a day could be completed in about a minute under the reported conditions.
  • Targeted accelerator role and caveats: Positioned as a complementary accelerator for high‑precision matrix solvers (e.g., second‑order training, MIMO), with commercialization challenges including manufacturing yield, device variability, packaging, and system integration.

The headline is accurate: a high‑precision memristor‑based analog matrix‑computing chip represents a real step change in how we might accelerate linear algebra and matrix solvers at massive scale.

Summary — what happened

Researchers at Peking University (Běijīng Dàxué 北京大学) led by Sun Zhong (Sūn Zhòng 孙仲) released a memristor‑based analog matrix solver that approaches digital precision while delivering big gains in energy and throughput for specific matrix tasks.

The work is a collaboration between Peking University’s Artificial Intelligence Research Institute and its School of Integrated Circuits.

The paper was published on October 13 in Nature Electronics.


What they built — the tech in plain terms

The team built a compute‑in‑memory analog chip that uses memristor arrays to store and compute in the same physical devices.

Analog computation represents numbers directly as continuous physical quantities — for example, voltage or current — instead of translating values into binary 0/1 streams as conventional digital (von Neumann) systems do.

That removes many repeated encoding/decoding steps and unlocks higher energy efficiency and throughput for linear algebra primitives.
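The core idea can be sketched in a few lines of NumPy. This is an illustrative toy model only, not the team's design (the actual chip is a full matrix solver, not just a multiplier); all values here are made up:

```python
import numpy as np

# Toy model of an analog crossbar matrix-vector multiply. Matrix entries
# are stored as memristor conductances G, inputs are applied as voltages V,
# and Ohm's law plus Kirchhoff's current law yield the output currents
# I = G @ V in one physical step, with no digital encode/decode in the loop.

rng = np.random.default_rng(0)
G = rng.uniform(0.1, 1.0, size=(4, 4))   # conductances (siemens), illustrative
V = rng.uniform(0.0, 0.5, size=4)        # input voltages (volts), illustrative

I = G @ V                                # ideal row output currents

# Device non-idealities (e.g. conductance variation) bound the raw analog
# precision, which is why closing the precision gap is the hard part.
G_noisy = G * (1 + rng.normal(0.0, 0.01, size=G.shape))
I_noisy = G_noisy @ V
rel_err = np.max(np.abs(I_noisy - I) / np.abs(I))
```

The multiply happens "for free" in the physics; the engineering problem is that `rel_err` never reaches digital precision on its own, which motivates the iterative-refinement results reported below.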


Why this matters — big picture for investors and builders

Historically, analog computing (lèibǐ jìsuàn 类比计算) lost out to digital because of precision limits.

This team’s achievement focuses on removing that precision bottleneck while keeping analog’s efficiency and parallelism advantages.

That makes memristor‑based analog computing an attractive complementary accelerator for targeted workloads rather than a wholesale CPU/GPU replacement.


Key technical achievements — what the paper reports

  • Full‑analog matrix solver demonstrated with high precision.
  • 16×16 matrix inversion at 24‑bit fixed‑point precision — a notable precision milestone for analog hardware.
  • Iterative refinement was used to drive down relative error experimentally.
  • Performance comparisons: for a 32×32 matrix inversion, the chip’s effective compute performance already exceeds the single‑core performance of high‑end GPUs.
  • For larger problems (128×128), the paper reports compute throughput more than 1,000× that of top digital processors on the same tasks; workloads that might take a modern GPU a day could be completed by this chip in about a minute under the reported conditions.
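Iterative refinement, mentioned in the list above, is a standard numerical technique, and a minimal sketch shows why it recovers digital-grade precision from a low-precision solver. Here float16 rounding stands in for the analog array; the matrix, tolerances, and iteration count are illustrative, not the paper's:

```python
import numpy as np

# Iterative refinement sketch: a low-precision inverse (float16 rounding
# stands in for the analog solver) plus a few high-precision residual
# corrections recovers near-float64 accuracy. Illustrative only; the
# paper's circuit implementation differs.

rng = np.random.default_rng(1)
n = 16
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

# "Analog-grade" inverse: A rounded to float16 before inversion
inv_lp = np.linalg.inv(A.astype(np.float16).astype(np.float64))

x = inv_lp @ b                    # initial low-precision solution
for _ in range(5):
    r = b - A @ x                 # residual computed at high precision
    x = x + inv_lp @ r            # cheap low-precision correction step

x_ref = np.linalg.solve(A, b)
rel_err = np.linalg.norm(x - x_ref) / np.linalg.norm(x_ref)
```

Each pass shrinks the error by roughly the low-precision solver's relative accuracy, so a handful of cheap analog solves plus exact residuals can reach digital precision.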


Where this fits in the compute stack — practical framing

Analog memristor chips are framed as complementary accelerators rather than replacements for CPUs or GPUs.

CPUs remain the flexible orchestrators across control flow and system tasks.

GPUs will continue to dominate first‑order AI workloads and dense matrix multiply acceleration.

Memristor analog chips can accelerate the most time‑ and energy‑consuming operations — notably high‑precision matrix equation solvers used in certain second‑order training algorithms and large‑scale signal processing (for example, MIMO detection).
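One concrete instance of such a kernel is zero‑forcing MIMO detection, which reduces symbol recovery to exactly the kind of matrix solve an analog accelerator could offload. The following is the textbook formulation, not the paper's implementation; antenna counts and the noise level are invented for illustration:

```python
import numpy as np

# Zero-forcing MIMO detection as a linear-algebra kernel (generic textbook
# example). The receiver observes y = H s + noise and recovers the symbol
# vector s by solving the normal equations (H^H H) s = H^H y -- a matrix
# solve that dominates detection cost as antenna counts grow.

rng = np.random.default_rng(2)
tx, rx = 4, 8                                    # 4 transmit, 8 receive antennas
H = rng.standard_normal((rx, tx)) + 1j * rng.standard_normal((rx, tx))
s = rng.choice([-1, 1], size=tx) + 1j * rng.choice([-1, 1], size=tx)  # QPSK symbols
noise = 0.01 * (rng.standard_normal(rx) + 1j * rng.standard_normal(rx))
y = H @ s + noise

# Zero-forcing estimate via the normal equations
s_hat = np.linalg.solve(H.conj().T @ H, H.conj().T @ y)

# Hard decision back onto the QPSK constellation
s_dec = np.sign(s_hat.real) + 1j * np.sign(s_hat.imag)
```

At realistic base-station scales this solve must run continuously and at low latency, which is why the paper's combination of parallelism and restored precision is relevant here.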


Applications and outlook — who should care now

  • AI model training — especially second‑order methods or adaptation routines that need high‑precision linear solves.
  • Robotics and intelligent control — algorithms that require fast, high‑precision matrix solutions for model predictive control and related tasks.
  • Large‑scale signal processing — for example, MIMO detection in communications where analog parallelism and low latency matter.
  • Edge and data‑center acceleration — where throughput per watt and latency for specialized kernels are critical metrics.


Technical caveats — what still needs work

The results are experimental but represent an important proof of concept.

Key challenges on the commercialization path include manufacturing yield, device variability, packaging, error‑correction, and co‑integration with existing heterogeneous compute systems.

The team positions this as a technology to relieve digital accelerators on specific kernels rather than to replace the broader software and system ecosystem.


Actionable takeaways for investors, founders, and engineers

  • Investors: watch for companies that can integrate memristor arrays into modular accelerator boards, and for IP around error‑correction and calibration stacks.
  • Founders: prioritize software stacks and APIs that make analog matrix solvers accessible as drop‑in kernels for existing ML frameworks.
  • Engineers: focus on system integration, hybrid workflows (CPU/GPU/analog), and tooling for iterative refinement and error management.
  • Researchers: follow reproducibility and benchmarks on real AI workloads beyond synthetic matrix inversion tests.


Why this is interesting for the market and ecosystem

This development signals a credible move toward closing the precision gap for analog hardware.

That opens the door for specialized accelerators that can dramatically cut energy and time for targeted linear algebra kernels used across AI, signal processing, and control systems.

Strategic partnerships between memristor hardware teams and software or cloud providers could unlock near‑term adoption in niche high‑value workloads.


Final thoughts

This work from Peking University (Běijīng Dàxué 北京大学) shows memristor‑based analog matrix computing can approach digital precision while offering major energy and throughput advantages for specific problems.

Investors, founders, and engineers who focus on integration, error management, and targeted algorithmic kernels will find the most promising pathways to commercialization.

A high‑precision memristor‑based analog matrix‑computing chip could be a game changer for specialized acceleration of matrix‑heavy workloads once it is productized and integrated into heterogeneous compute stacks.

