How Silicon Photonics and FMCW Transform LiDAR Technology

By integrating optical components on silicon chips and utilizing FMCW technology, LiDAR systems are becoming smaller, cheaper, and more robust, paving the way for widespread adoption in environments requiring high-performance sensing under challenging conditions.
Feb. 10, 2026

Key Highlights

  • Silicon photonics allows for the integration of lasers, detectors, and beam steering on a single chip, reducing size, cost, and manufacturing complexity.
  • Frequency modulated continuous wave (FMCW) sensing provides simultaneous distance and velocity measurement, improving accuracy and performance in challenging environments like fog, dust, and sunlight.
  • Monolithic, wafer-scale LiDAR systems eliminate the need for complex optical alignments, enabling reliable, high-volume production.
  • The combination of these technologies breaks the traditional performance-reliability-cost trade-off, expanding LiDAR's applicability across various industries.
  • As LiDAR becomes more accessible and scalable, it will play a critical role in advancing physical AI, enabling machines to perceive and interact with the environment safely and effectively.

After spending years in the automotive LiDAR world, I’ve seen the industry chase the same goals over and over: longer range, tighter resolution, lower cost, and systems that don’t fall apart after a few thousand miles of vibration testing. For a long time, these competing priorities kept LiDAR limited to a handful of applications, most notably the automotive sector.

But autonomy is no longer something reserved for cars. Factories, warehouses, and even consumer devices now expect machines to sense and react to the world around them in real time. Robotic arms working beside people, drones threading their way through busy airspace, and infrastructure monitoring systems all require reliable spatial awareness. Developers recognize that detecting distance, motion, shape, and velocity is a critical requirement, particularly in the Physical AI world.

LiDAR is recognized as the only sensing modality that can deliver dense, high-quality 3D perception under almost any condition. The problem is that traditional designs are big, expensive, alignment-sensitive systems that are difficult to scale to the millions of units many applications require. To expand its use, LiDAR needs to be cheaper, smaller, and much easier to manufacture than anything we built in the early automotive days.

That’s where silicon photonics and frequency modulated continuous wave (FMCW) sensing come in. Together, they offer a path toward LiDAR that can be manufactured like semiconductor chips, rather than using handcrafted optical assemblies.

Why Silicon Photonics Changes the Game

Silicon photonics does for optics what integration did for electronics decades ago. Instead of stitching together lasers, detectors, steering mechanisms, and amplifiers as separate components, we can now fabricate them on the same silicon substrate using standard CMOS processes.

This isn’t like the attempts at integration we’ve seen before, where the transmitter and receiver live on different dies and still require painstaking alignment. A truly integrated photonic IC unifies emission, detection, and beam steering into one chip and one aperture.

Automotive LiDAR development has pursued this level of integration for years. Moving from boutique assembly to wafer-level production lets manufacturers scale while reducing unit-to-unit variation. Other benefits include:

  • Miniaturization: Optical generation, steering, and detection in a compact footprint.
  • Cost reduction: Eliminating sensitive alignment steps slashes manufacturing overhead.
  • Reliability: Fewer moving parts and fewer calibration steps mean improved tolerance to heat, vibration, and aging.
  • Consistency: More consistent units off the line, something traditional optical assemblies could never guarantee.

The same economic forces that drove the mass-market adoption of CPUs later scaled cameras and radars, and they can now do the same for LiDAR.

Measuring Distance and Motion with the Same Beam

While silicon photonics gives us the scalable platform, FMCW gives LiDAR capabilities we’ve wanted for years.

Traditional time-of-flight (ToF) LiDAR sends a pulse and waits for it to return. FMCW works more like modern radar. It emits a continuously “chirped” laser signal and mixes the reflected light with a reference copy inside the receiver. The frequency difference—the beat frequency—contains everything we need: distance, encoded in the round-trip time delay, and velocity, encoded in the Doppler shift.
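For a triangular (up/down) chirp, distance and velocity fall out of the two beat frequencies in closed form. A minimal sketch of that math follows; the chirp parameters (wavelength, bandwidth, chirp time) are illustrative assumptions, not any particular sensor's values:

```python
# Minimal FMCW beat-frequency math for a triangular (up/down) chirp.
# All chirp parameters below are illustrative assumptions.

C = 3.0e8             # speed of light, m/s
WAVELENGTH = 1.55e-6  # laser wavelength, m (assumed)
BANDWIDTH = 1.0e9     # chirp bandwidth, Hz (assumed)
T_CHIRP = 10e-6       # duration of one chirp ramp, s (assumed)

def range_and_velocity(f_beat_up, f_beat_down):
    """Recover distance (m) and radial velocity (m/s) from the beat
    frequencies of an up-chirp/down-chirp pair.

    Convention: an approaching target raises the down-chirp beat and
    lowers the up-chirp beat, so positive velocity means approaching.
    """
    f_range = (f_beat_up + f_beat_down) / 2.0    # range-induced component
    f_doppler = (f_beat_down - f_beat_up) / 2.0  # Doppler component
    distance = C * f_range * T_CHIRP / (2.0 * BANDWIDTH)
    velocity = f_doppler * WAVELENGTH / 2.0      # Doppler shift to m/s
    return distance, velocity
```

The averaging and differencing steps are what let a single chirp pair report both quantities at once, which is why every FMCW pixel carries its own velocity with no frame-to-frame comparison.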

With FMCW, every pixel inherently knows both where an object is and how it’s moving. That single feature is the key.

Instant per‑pixel velocity removes frame stitching. Coherent detection also keeps SNR high in sunlight and fog and rejects cross‑talk from other sensors, letting systems separate moving and static objects immediately.

This is exactly what robots, drones, and autonomous systems need for safe, high-speed operation. For industrial, robotics, and infrastructure applications, where lighting, vibration, and dust can’t be controlled, these advantages are critical.
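To make the "separate moving and static objects immediately" point concrete, here is a sketch of that classification on a single FMCW frame. The point layout and velocity threshold are illustrative assumptions, not any vendor's interface:

```python
# Sketch: splitting one FMCW frame into moving and static points using
# the per-pixel Doppler velocity. Data layout and threshold are assumed.

from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    z: float
    radial_velocity: float  # m/s, measured directly by FMCW per pixel

def split_moving_static(points, threshold=0.2):
    """Partition points by |radial velocity|; no second frame needed."""
    moving = [p for p in points if abs(p.radial_velocity) > threshold]
    static = [p for p in points if abs(p.radial_velocity) <= threshold]
    return moving, static
```

A ToF sensor would need at least two frames plus point association to produce the same split; here it falls out of a single measurement.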

From Complex Optics to Chip-Scale Systems

Anyone who has ever built or tested mechanical or hybrid LiDAR systems knows the pain points: multiple optical paths, mirrors or MEMS steering, and alignment steps that feel more like surgery than assembly.

Even more modern designs often rely on bi-static layouts—separate transmitter and receiver modules that have to be matched with microscopic precision. It works, but it doesn’t scale.

A monostatic silicon-photonics architecture solves this by integrating everything behind a single aperture, eliminating stacked modules, precision alignments, and moving beam‑steering hardware.

With digital photonic beam steering built directly onto the chip, pointing becomes software-defined: manual alignment disappears, and wafer-scale uniformity replaces hand-tuned optics. And because these designs piggyback on the same manufacturing ecosystem used for optical networking, production capacity and supplier networks are already available.

Breaking the Traditional Trade-Offs

For decades, the LiDAR industry has been stuck in a triangle of compromise: performance, reliability, and cost. You could pick two, but rarely all three.

Silicon photonics and FMCW finally allow LiDAR to hit all corners of that triangle. That opens the door to markets far beyond automotive: drones, AMRs, industrial automation, safety systems, smart infrastructure, consumer devices, and environments where LiDAR was previously “too big, too expensive, or too fragile.” In these markets, compute is already abundant; the real gap is reliable 3D sensing at range and in varied lighting.

LiDAR’s Role in the Coming Era of Physical AI

As AI increasingly moves into the physical world, perception becomes as important as reasoning. The bottleneck isn’t computing power; it’s environmental awareness.

For machines to act autonomously, sensors need to be as accessible, as easy to integrate, and as affordable as the processors running the AI models. That means semiconductor-style economics: high-volume, low-cost production, year after year.

Silicon photonics combined with FMCW is the first technology approach for LiDAR that gets us there.

FMCW Platforms Are Emerging

Fully integrated, chip-scale FMCW LiDAR platforms are already emerging. These systems combine beam steering, signal generation, and coherent detection onto a single photonic stack, with the level of integration these markets demand.

They’re being developed for industrial robotics and automotive markets, with:

  • Multi-line, high-resolution FMCW imaging
  • Ranges up to ~200 m
  • Minimal calibration
  • Low cost per unit
  • Compatibility with automated assembly lines

Scaling Perception for the Next Frontier of Automation

LiDAR is now going through the same transformation computing went through: miniaturization, integration, and accessibility at scale. By collapsing racks of complex optics into a single silicon photonics platform, LiDAR becomes affordable while remaining reliable and high‑performing. Machines that can see, understand, and move safely through the real world will define the next era of automation. Smarter AI alone won’t get us there; better sensing will.

About the Author

Clément Nouvel

Clément Nouvel is CEO of Voyant Photonics. Prior to that, he served as CTO of Valeo’s LiDAR organization, where he spent nearly a decade advancing automotive sensing. At Valeo, he led initiatives that included business development, R&D, and production. He brings deep technical and operational insight to help reshape the future of active sensing.
