
Oculii looks to supercharge radar for autonomy with $55M round B


Autonomous vehicles rely on many sensors to perceive the world around them, and while cameras and lidar get a lot of the attention, good old radar is an important piece of the puzzle — though it has some fundamental limitations. Oculii, which just raised a $55 million round, aims to minimize those limitations and make radar more capable with a smart software layer for existing devices — and sell its own as well.

Radar’s advantages lie in its superior range, and in the fact that its radio frequency beams can pass through things like raindrops, snow and fog — making it crucial for perceiving the environment during inclement weather. Lidar and ordinary visible light cameras can be totally flummoxed by these common events, so it’s necessary to have a backup.

But radar’s major disadvantage is that, due to the wavelengths and how the antennas work, it can’t image things in detail the way lidar can. You tend to get very precisely located blobs rather than detailed shapes. It still provides invaluable capabilities in a suite of sensors, but if anyone could add a bit of extra fidelity to its scans, it would be that much better.

That’s exactly what Oculii does — takes an ordinary radar and supercharges it. The company claims a 100x improvement to spatial resolution, accomplished by handing over control of the system to its software. Co-founder and CEO Steven Hong explained in an email that a standard radar might have, for a 120-degree field of view, a 10-degree spatial resolution — meaning it can tell where something is with a precision of a few degrees on either side, and has little or no ability to tell the object’s elevation.

Some radars are better, some worse, but for the purposes of this example that amounts to an effective 12×1 resolution. Not great!

Handing over control to the Oculii system, however, which intelligently adjusts the transmissions based on what it’s already perceiving, could raise that to a 0.5° horizontal x 1° vertical resolution, giving it an effective resolution of perhaps 120×10. (Again, these numbers are purely for explanatory purposes and aren’t inherent to the system.)
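The cell-counting arithmetic behind those example figures can be sketched in a few lines. This is purely illustrative, using the article’s own hypothetical numbers rather than any actual Oculii specification:

```python
# Illustrative only: back-of-the-envelope cell counting for angular
# resolution, using the hypothetical figures quoted in the article.

def angular_cells(fov_deg: float, resolution_deg: float) -> int:
    """Number of distinguishable angular cells across a field of view."""
    return int(fov_deg / resolution_deg)

# Baseline radar: 120° field of view at 10° resolution, with essentially
# a single elevation cell -- the "12x1" example above.
baseline = angular_cells(120, 10) * 1          # 12 cells

# The article's illustrative enhanced figure of roughly 120x10 cells:
enhanced = 120 * 10                            # 1200 cells

improvement = enhanced / baseline              # 100x, matching the claim
```

Multiplying the horizontal and vertical cell counts is what turns a modest-sounding per-axis gain into the headline 100x figure.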

That’s a huge improvement and results in the ability to see that something is, for example, two objects near each other and not one large one, or that an object is smaller than another near it, or — with additional computation — that it is moving one way or the other at such and such a speed relative to the radar unit.
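The relative-speed measurement mentioned above comes from standard radar physics rather than anything proprietary: a moving target shifts the frequency of the returned signal, and the radial velocity falls out of that Doppler shift. A minimal sketch, assuming a typical 77 GHz automotive radar carrier (an assumption, not a figure from the article):

```python
# Textbook Doppler relation for radar -- not Oculii-specific.
# Radial velocity v = f_doppler * wavelength / 2 (the factor of 2 accounts
# for the two-way trip of the radar signal).

C = 3.0e8           # speed of light, m/s
F_CARRIER = 77e9    # assumed automotive radar carrier frequency, Hz

def radial_velocity(doppler_shift_hz: float) -> float:
    """Radial speed in m/s toward the radar for a given Doppler shift."""
    wavelength = C / F_CARRIER
    return doppler_shift_hz * wavelength / 2

# A 1 kHz Doppler shift at 77 GHz corresponds to roughly 1.9 m/s.
```

Only the component of motion along the radar’s line of sight shows up this way, which is part of why combining radar with other sensors remains important.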

Here’s a video demonstration of one of their own devices, showing considerably more detail than one would expect: