Traditional imaging is typically associated with conventional cameras that capture a simple, straightforward representation of a particular subject or scene.
However, the lensless technology pioneered by Rambus scientists is roughly analogous to the way a human, animal or insect brain perceives the world: a scene or object is interpreted in real time, aided by inherent pattern-recognition capabilities. To be sure, the data leaving a human retina looks nothing like a conventional bitmap, yet it contains all the information required to interpret an image.
As inventor Dr. Patrick R. Gill notes, lensless technology allows sensors to capture information-rich images using a low-cost phase grating. Although the raw ‘snap’ is indecipherable to the naked eye, the sensor, which is approximately the size of a pinhead, captures all of the information in the visual world up to a certain resolution.
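To illustrate why an indecipherable raw capture can still be useful, here is a minimal sketch of computational reconstruction, assuming the grating’s response can be calibrated as a known linear operator. The dimensions, noise level and Tikhonov-style solver are illustrative assumptions, not a description of Rambus’s actual algorithm.

```python
# Illustrative sketch only: model the raw capture as y = A @ x + noise, where A
# is a calibrated (hypothetical) grating response, then recover the scene x
# with regularized least squares.
import numpy as np

rng = np.random.default_rng(0)

n = 64                                   # hypothetical number of scene pixels (flattened)
m = 128                                  # hypothetical number of sensor pixels
A = rng.standard_normal((m, n))          # stand-in for the calibrated grating response

x_true = np.zeros(n)
x_true[20:30] = 1.0                      # a simple synthetic scene
y = A @ x_true + 0.01 * rng.standard_normal(m)   # the "indecipherable" raw capture

lam = 1e-2                               # regularization strength (assumed)
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print(np.max(np.abs(x_hat - x_true)))    # small reconstruction error
```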
Gill says he envisions the use of lensless smart sensors (LSS) in a wide range of applications, including helping tiny satellites orient themselves.
“Space is a hostile environment where every gram must be taken into consideration when designing stations and satellites. In terms of the latter, satellites are frequently tasked with maintaining a specific attitude and locating the sun,” Gill told Rambus Press.
“Current sun-finding platforms typically require several focusing cameras, with each weighing in at several grams. In contrast, Rambus lensless sensors only total a few tens of milligrams apiece, allowing designers to further reduce the overall size of satellites.”
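As a purely illustrative aside, one simple way a handful of tiny light sensors could estimate sun direction is to weight each sensor’s known facing direction by its brightness reading. The sensor layout and cosine-response assumption below are hypothetical and are not drawn from the article.

```python
# Hedged sketch: with sensors on different faces of a small satellite, the
# brightness-weighted sum of the face normals points roughly toward the sun.
import numpy as np

def estimate_sun_direction(normals: np.ndarray, readings: np.ndarray) -> np.ndarray:
    """normals: (k, 3) unit vectors for each sensor face; readings: (k,) intensities."""
    weighted = (readings[:, None] * normals).sum(axis=0)
    return weighted / np.linalg.norm(weighted)

# Example: three sensors on orthogonal faces, brightest toward +x.
normals = np.eye(3)
readings = np.array([0.9, 0.4, 0.1])
print(estimate_sun_direction(normals, readings))
```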
Similarly, small autonomous quad-copters are often limited to a 30-gram payload – with little available weight for obstacle-avoiding cameras. However, a constellation of Rambus lensless sensors would total less than a gram, making optical crash avoidance possible even with the smallest of flyers.
Additional potential applications for lensless technology include the pairing of sensors with mini radios, such as the prototype unit recently designed by Stanford researchers for Internet of Things (IoT) applications.
“Both devices are approximately the same size. Plus, there are certain LSS modes (such as image change detection), which require very low power – down to about 2 µW. Where the radio scavenges power from 24 GHz input radio waves, the LSS could also scavenge power from a small solar cell,” Gill explained.
“More broadly, coupling just enough sensing and computation with a low-power radio into a thin, light package that doesn’t need to be wired will enable all types of new applications. Some of my personal (potential) favorite LSS-based devices include easily-deployed sensing patches, solar panels, tiny processors and very low-power radios, all on a sticker-backed package you can attach anywhere.”
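The image change detection mode Gill mentions can be thought of as simple frame differencing against a threshold. The sketch below is an assumed illustration of that idea, not the actual LSS mode or its parameters.

```python
# Illustrative only: flag a change when the mean absolute pixel difference
# between consecutive low-resolution captures exceeds a threshold.
import numpy as np

def scene_changed(prev_frame: np.ndarray, new_frame: np.ndarray,
                  threshold: float = 0.05) -> bool:
    """Return True if the mean absolute pixel difference exceeds the threshold."""
    return float(np.mean(np.abs(new_frame - prev_frame))) > threshold

# Usage with two synthetic captures:
rng = np.random.default_rng(1)
frame_a = rng.random((32, 32))
frame_b = frame_a.copy()
frame_b[10:20, 10:20] += 0.5             # simulate an object entering the scene
print(scene_changed(frame_a, frame_b))   # True
```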
Gill also envisions the integration of lensless sensor technology in heavy industrial equipment such as robotic arms.
“A central, high-quality camera designed to triangulate the location of every possible obstacle and object of interest presents somewhat of a computational challenge for engineers,” he said.
“In contrast, packing a robotic arm with LSSs creates an effective proximity sensor that facilitates adherence to very simple rules such as basic obstacle avoidance, reaching behavior and goal-seeking.”
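To make the “very simple rules” idea concrete, here is a hedged, potential-field-style sketch that combines goal-seeking with proximity-based repulsion. The sensor readings, gains and geometry are hypothetical, and this is not a description of Rambus’s approach.

```python
# Illustrative sketch: each hypothetical LSS reports an obstacle direction and
# distance; the controller steps toward the goal while being pushed away from
# nearby obstacles.
import numpy as np

def motion_step(position: np.ndarray, goal: np.ndarray,
                obstacle_dirs: list, obstacle_dists: list,
                attract_gain: float = 1.0, repel_gain: float = 0.2,
                influence_radius: float = 0.5) -> np.ndarray:
    """Combine goal-seeking with proximity-based obstacle repulsion."""
    step = attract_gain * (goal - position)          # reaching / goal-seeking term
    for direction, dist in zip(obstacle_dirs, obstacle_dists):
        if dist < influence_radius:                  # only nearby obstacles repel
            step -= repel_gain * direction / max(dist, 1e-3)
    norm = np.linalg.norm(step)
    return step / norm if norm > 0 else step         # unit-length step direction

# Example: one obstacle detected close by, slightly off the path to the goal.
pos = np.array([0.0, 0.0])
goal = np.array([1.0, 0.0])
print(motion_step(pos, goal, [np.array([1.0, 0.1])], [0.3]))
```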
Overall, says Gill, Rambus views LSS as a disruptive, visible-light sensing technology due to its minuscule form factor and inexpensive price tag.
Interested in learning more? You can read the paper titled “Lensless Ultra-Miniature CMOS Computational Imagers and Sensors” by David G. Stork and Patrick R. Gill, and check out our recent article on the subject, “From lensless sensors to artificial intelligence.”