Key Points
- Problem Addressed: Hangzhou Tongxing Technology (杭州瞳行科技) has launched AI-powered glasses to tackle the “last ten meters” navigation challenge, a problem that affects more than 17 million visually impaired people in China alone.
- Technology and Features: The system combines smart glasses, a smartphone interface, a remote-control ring, and a specialized white cane. The glasses pair proprietary vision models and Qwen-VL with 121-degree ultra-wide-angle dual cameras, delivering 300-millisecond latency for real-time obstacle avoidance.
- Dual Operating Modes: The AI runs in two modes: Obstacle Avoidance & Immediate Navigation for real-time awareness while walking, and Detailed Environmental Understanding for in-depth descriptions when the user is stationary.
- Cost-Effectiveness & Scaling: The product was made viable by the steep drop in the cost of computing power, which let the startup fine-tune an existing foundation model, Tongyi Qianwen (通义千问), instead of building one from scratch, drastically cutting development time and expense.
- Global Impact: The technology could address mobility challenges for the estimated 2 billion-plus people worldwide living with blindness or vision impairment, offering a blueprint for venture-scale AI accessibility products.
Core hardware components:
- Smart Glasses: Equipped with 121-degree dual cameras and local processing.
- Smartphone Interface: Provides an application layer for settings and relaying more detailed information.
- Remote-Control Ring: Enables discreet, hands-free interaction with the AI.
- Integrated White Cane: Combines traditional haptic feedback with digital system integration.
Over 17 million visually impaired individuals in China face a daily challenge most of us never think about: getting from point A to point B.
Independent mobility isn’t a luxury for them—it’s freedom.
And right now, the technology to genuinely help them exists.
Hangzhou Tongxing Technology (杭州瞳行科技) just launched AI-powered glasses designed specifically to tackle this problem. Built on vision-language models and equipped with real-time obstacle detection, these glasses aren’t just another accessibility gadget. They’re the kind of product that actually changes how someone moves through the world.
Let’s break down what’s happening here—and why it matters.
—
The Real Problem: “Last Ten Meters” Navigation
Here’s what most navigation apps get wrong: they can guide you to a location, but they can’t tell you what you’re actually looking at when you arrive.
For the visually impaired, this gap is massive.
Current challenges include:
- Navigation software failures — especially in the final stretch of a journey (the infamous “last ten meters”)
- Delayed service responses — waiting for human assistance that may not come quickly
- Limited independence — many visually impaired individuals avoid leaving their homes due to these barriers
- Over-reliance on human help and manual tools — white canes and guide dogs help, but they’re not scalable solutions for everyone
The result? Reduced mobility, reduced independence, reduced quality of life.
This is the problem Hangzhou Tongxing Technology decided to solve.
—

How These AI Glasses Actually Work
The glasses themselves are built on four integrated components that work together as a complete system:
- Smart glasses with built-in cameras and processing
- Smartphone interface for additional control and information
- Remote-control ring for hands-free navigation
- Specialized white cane that integrates with the system
The real magic happens in the tech stack.
Hangzhou Tongxing Technology combined its proprietary vision models with Qwen-VL (a family of large vision-language models from Tongyi Qianwen 通义千问) and mounted 121-degree ultra-wide-angle dual cameras on the device.
This combination gives the glasses an incredibly wide field of vision—essential when you can’t rely on your eyes.
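To make that architecture concrete, here is a minimal sketch of how the pieces might hang together in code, assuming a fast on-device obstacle detector plus a heavier vision-language model such as Qwen-VL. Every class and method name here (Frame, PerceptionPipeline, quick_scan, describe) is hypothetical, not Tongxing’s SDK.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """A single capture from the 121-degree dual cameras (hypothetical type)."""
    left: bytes
    right: bytes
    timestamp_ms: int

class PerceptionPipeline:
    """Hypothetical glue between a fast local detector and a heavier VLM."""

    def __init__(self, obstacle_detector, vision_language_model):
        # The fast detector runs on every frame; the vision-language model
        # (e.g. Qwen-VL) is reserved for richer, slower queries.
        self.detector = obstacle_detector
        self.vlm = vision_language_model

    def quick_scan(self, frame: Frame) -> list[str]:
        """Low-latency pass: obstacles, signs, and hazards as short labels."""
        return self.detector.detect(frame)

    def describe(self, frame: Frame, question: str) -> str:
        """Slower pass: free-form description from the vision-language model."""
        return self.vlm.ask(frame, question)
```

That split maps onto the two modes described below: the local detector handles the low-latency pass, while the vision-language model answers richer questions.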
—
Two Distinct Modes: Real-Time Navigation vs. Information Gathering
The AI operates in two different ways depending on what the user needs:
Mode 1: Obstacle Avoidance & Immediate Navigation
When someone is walking, the glasses provide concise audio summaries of their surroundings.
The system identifies:
- Bus stop signs
- Street markers
- Physical obstacles in real-time
- Environmental hazards
Critically, this all happens with an ultra-low latency of just 300 milliseconds.
That means real-time navigation cues with every single step.
No delays. No guessing.
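As a rough illustration, here is a minimal loop that keeps each perception-and-speech cycle inside a 300 ms budget, reusing the hypothetical PerceptionPipeline from the sketch above. The camera and tts objects are assumed stand-ins, not a real device API.

```python
import time

CYCLE_BUDGET_S = 0.3  # the 300 ms latency target mentioned above

def navigation_loop(camera, pipeline, tts):
    """Walking mode: terse audio cues, one cycle per ~300 ms (illustrative)."""
    while True:
        start = time.monotonic()
        frame = camera.capture()
        labels = pipeline.quick_scan(frame)   # e.g. ["bus stop sign", "curb ahead"]
        if labels:
            tts.say(", ".join(labels[:3]))    # keep summaries short while moving
        # Sleep off any remaining budget so cues arrive at a steady cadence.
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, CYCLE_BUDGET_S - elapsed))
```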
Mode 2: Detailed Environmental Understanding
When the user is stationary (at a restaurant, shop, or landmark), the glasses switch gears.
Instead of quick summaries, the vision-language model provides:
- Detailed audio descriptions of text and signage
- Physical layout information about the space
- Specific business details (restaurant menus, shop hours, etc.)
- Navigation guidance to reach specific destinations
This is how users actually accomplish tasks—reading a menu, finding an address, understanding their exact location.
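And a matching sketch of the stationary mode, under the same assumptions: a single richer, free-form query to the vision-language model instead of a stream of terse labels. The prompt wording is purely illustrative.

```python
def describe_surroundings(camera, pipeline, tts, user_request=None):
    """Stationary mode: one detailed, free-form query (illustrative)."""
    frame = camera.capture()
    question = user_request or (
        "Describe this scene for a blind pedestrian: read any signs or menus "
        "aloud, explain the layout of the space, and say how to reach the entrance."
    )
    description = pipeline.describe(frame, question)
    tts.say(description)
    return description
```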
—
The Technology Stack: Why This Matters Now
Here’s the interesting part: this technology didn’t exist a few years ago.
Not because the problem wasn’t known.
But because the infrastructure to solve it was prohibitively expensive.
Chen Gang (陈刚), Marketing and Technical Director at Hangzhou Tongxing Technology, explained it plainly:
“Before the emergence of large language models, developing effective AI products for the blind was extremely difficult. The significant reduction in the cost of computing power has allowed AI startups to scale rapidly. By utilizing Tongyi Qianwen (通义千问) through a ‘foundation model reuse plus fine-tuning’ approach, we were able to quickly implement these essential features.”
Translation: affordable compute changed the game.
Instead of building vision-language models from scratch (which would cost millions and take years), Hangzhou Tongxing Technology took an existing foundation model and fine-tuned it for their specific use case.
This approach dramatically reduced:
- Development time
- Capital requirements
- Time-to-market
- Technical risk
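For readers curious what “foundation model reuse plus fine-tuning” can look like in practice, here is a minimal sketch using an open Qwen-VL checkpoint with LoRA adapters via Hugging Face transformers and peft. The checkpoint ID, target modules, and hyperparameters are illustrative assumptions, not Tongxing’s actual training recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative sketch only: model ID, target modules, and hyperparameters
# are assumptions, not Tongxing's actual configuration.
base_id = "Qwen/Qwen-VL-Chat"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)

lora_config = LoraConfig(
    r=16,                       # small adapter rank keeps training cheap
    lora_alpha=32,
    target_modules=["c_attn"],  # attention projection in this architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters train; the base model stays frozen

# From here: train the adapters on task-specific data, e.g. annotated street
# scenes, signage, and storefront photos, then ship them with the application.
```

Because only the small adapter weights are trained while the base model stays frozen, this kind of setup runs on modest hardware, which is exactly the cost dynamic Chen Gang describes.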
The result is a product that solves a real problem for real people—and proves that the accessibility tech space is finally getting the attention (and funding) it deserves.
—
Why This Matters Beyond China
While Hangzhou Tongxing Technology is based in China, this technology addresses a global problem.
The global population living with blindness or vision impairment is estimated at over 2 billion people.
Most of them face the same mobility challenges.
Most of them don’t have access to solutions like this.
If this startup can prove the model works—and eventually scale it—we’re looking at a blueprint that other companies will replicate worldwide.
This is what happens when AI accessibility products meet venture-scale economics.
—

The Bottom Line
AI glasses for the visually impaired aren’t science fiction anymore.
They’re being built right now by startups leveraging cheaper compute and foundation models.
The accessibility tech space is heating up—and for once, it’s actually being built by people who understand the problem deeply.
That’s worth paying attention to.