Small autonomous quad-copters and hex-copters typically carry a limited payload, leaving little weight available for obstacle-avoiding cameras. Meanwhile, robots that rely on a central, high-quality camera to triangulate the location of every possible obstacle and object of interest present a formidable computational challenge for engineers.
Patrick Gill, a Principal Research Scientist at Rambus, says lensless smart sensors (LSS) can potentially help address weight and computational limitations associated with both ‘copters and robots.
“Robots are frequently tasked with reaching a specific destination or interacting with a target object in dynamic environments. This applies to a wide range of robots, including vacuum-cleaner ‘bots attempting to maneuver around the legs of a chair and outdoor house-painting ‘bots plotting a course that bypasses a tree in a backyard,” he explained.
“One popular method of avoiding multiple obstacles is to employ single, stereo or 3D cameras that estimate the location of each and every obstacle within a robot’s field of view. This is traditionally accomplished by drawing a bounding box around each object, then searching for a planned trajectory that allows the robot (or its arms) to navigate the hazards and reach its destination or interact precisely with a moving object.”
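To make the bounding-box approach concrete, here is a minimal sketch of the kind of geometric check such a planner runs. All names and parameters are illustrative (this is not Rambus code or any particular vendor’s API): each candidate trajectory is kept only if every leg clears every obstacle box.

```python
# A minimal sketch of bounding-box obstacle checking; illustrative names only.
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class BoundingBox:
    """Axis-aligned 3D box drawn around a detected obstacle."""
    min_corner: Vec3
    max_corner: Vec3

def segment_hits_box(start: Vec3, end: Vec3, box: BoundingBox) -> bool:
    """Slab test: does the straight segment start->end enter the box?"""
    t_enter, t_exit = 0.0, 1.0
    for axis in range(3):
        d = end[axis] - start[axis]
        lo, hi = box.min_corner[axis], box.max_corner[axis]
        if abs(d) < 1e-12:                      # segment parallel to this slab
            if not (lo <= start[axis] <= hi):
                return False                    # outside the slab entirely
        else:
            t0, t1 = (lo - start[axis]) / d, (hi - start[axis]) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_enter, t_exit = max(t_enter, t0), min(t_exit, t1)
            if t_enter > t_exit:
                return False
    return True

def path_is_clear(waypoints: List[Vec3], obstacles: List[BoundingBox]) -> bool:
    """A planned trajectory is usable only if every leg misses every box."""
    legs = zip(waypoints, waypoints[1:])
    return all(not segment_hits_box(a, b, box)
               for a, b in legs for box in obstacles)
```

A planner would call something like path_is_clear() on successive candidate routes until one clears every box, which is exactly where the disadvantages below start to bite.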
However, this approach suffers from a number of disadvantages.
“Firstly, the farther away an object is from the camera system, the less certain an estimate of its location becomes. Secondly, household objects may move rather unexpectedly, especially if there are children around. Thirdly, even the most advanced vision systems on the market are incapable of reliably predicting what is waiting for a ‘bot on the far side of an object. Simply put, drawing a bounding box with the right depth can be tricky,” said Gill.
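Gill’s first point can be quantified with the standard stereo-triangulation error model: depth is Z = fB/d for focal length f, baseline B and disparity d, so a fixed disparity-matching error inflates the depth estimate roughly as Z²/(fB). The snippet below runs the numbers for a hypothetical rig (all parameter values are assumptions, not measurements of any product):

```python
# Depth uncertainty of a hypothetical stereo rig: Z = f*B/d, so a fixed
# +/-0.5-pixel disparity error grows roughly as Z**2 / (f*B).
f_px = 700.0        # focal length in pixels (assumed)
baseline_m = 0.10   # 10 cm baseline (assumed)
disp_err_px = 0.5   # stereo matching error in pixels (assumed)

for z_m in (0.5, 1.0, 2.0, 4.0, 8.0):
    depth_err_m = (z_m ** 2) * disp_err_px / (f_px * baseline_m)
    print(f"object at {z_m:4.1f} m -> depth uncertainty ~ {depth_err_m*100:5.1f} cm")
```

Under these assumptions an object half a meter away is localized to within a few millimeters, while one eight meters away carries tens of centimeters of uncertainty.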
“Instead, let us consider a scenario where multiple sensors are deployed along the parts of a robot likely to come into close proximity with obstacles. These sensors would help the ‘bot ‘understand’ whether it has a clear path or whether detours are necessary. So rather than installing a costly (central) camera system to pre-determine roadblocks, every potential point of contact between the robot and a hazard would be monitored in real time by clusters of tiny sensors.”
According to Gill, lensless smart sensors would also facilitate the use of relatively basic, streamlined obstacle-avoidance software. Indeed, such an application could easily be configured to alter the robot’s course in real time whenever any obstacle comes within a few inches.
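The sketch below shows how simple that control logic could be. Everything here is hypothetical: read_distance_m() stands in for whatever interface a real sensor cluster would expose, and the clearance threshold is our rendering of “a few inches.”

```python
# A minimal sketch of threshold-based avoidance with distributed sensors.
# Hypothetical names throughout; not a Rambus API.
from dataclasses import dataclass
from typing import Callable, List, Optional

CLEARANCE_M = 0.08  # "a few inches" expressed as ~8 cm (assumed threshold)

@dataclass
class ProximitySensor:
    """One tiny sensor mounted at a likely point of contact."""
    location: str                        # e.g. "left bumper", "arm elbow"
    read_distance_m: Callable[[], float]

def nearest_hazard(sensors: List[ProximitySensor]) -> Optional[ProximitySensor]:
    """Return the sensor reporting the closest obstacle, if one is too close."""
    readings = [(s.read_distance_m(), s) for s in sensors]
    dist, sensor = min(readings, key=lambda r: r[0])
    return sensor if dist < CLEARANCE_M else None

def control_step(sensors, steer_away, continue_course):
    """One iteration of the real-time loop: detour only when necessary."""
    hazard = nearest_hazard(sensors)
    if hazard is not None:
        steer_away(hazard.location)      # local detour around the blocked side
    else:
        continue_course()                # clear path: keep the planned route
```

Note that no global obstacle map is ever built; each sensor cluster only answers the local question “is anything within a few inches of me right now?”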
“This is certainly an easier and far more effective approach than relying on complex algorithms to determine where all obstacles are (or potentially will be) in 3D space, especially given depth measurements from only one or two locations,” he continued.
Similarly, says Gill, LSS technology can be used to improve the obstacle-avoiding capabilities of quad-copters and hex-copters, allowing them to more effectively fulfill civilian missions such as delivering medicine and other aid to hard-hit disaster areas or inaccessible quarantine zones. To be sure, a constellation of Rambus lensless sensors would weigh less than a gram in total, making optical crash avoidance possible on even the smallest of flyers.
“Rescue teams could also use a combination of airborne ‘copters and ground-based bots to search rubble for survivors in locations that are difficult to reach after a hurricane, earthquake, tornado or tsunami,” he concluded. “To put it succinctly, we view LSS as a disruptive, visible-light sensing technology due to its minuscule form factor and inexpensive price tag.”
So, how does lensless sensor technology differ from conventional cameras? As we’ve previously discussed on Rambus Press, traditional imaging is typically associated with conventional cameras that capture a simple, straightforward representation of a particular subject or scene. However, lensless technology pioneered by Rambus scientists is roughly analogous to the way a human, animal or insect brain perceives the world: the real-time interpretation of a scene or object facilitated by inherent pattern recognition capabilities.
Indeed, data leaving a human retina looks nothing like a conventional bitmap, although it contains all the information required to interpret an image. Similarly, LSS allows sensors to capture information-rich images using a low-cost phase grating. Although the raw ‘snap’ is indecipherable to the naked human eye, the sensor, which is approximately the size of a pinhead, captures all of the information in the visual world up to a certain resolution.
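One way to see how an indecipherable raw capture can still hold a full image: if the phase grating’s point-spread function (PSF) is known, the scene can be recovered computationally. The sketch below uses textbook Wiener deconvolution with a made-up PSF; it illustrates the principle only and is not Rambus’s actual reconstruction pipeline.

```python
# Illustrative only: recovering a scene from a lensless capture modeled as a
# convolution with a known point-spread function (PSF), via Wiener filtering.
# The PSF here is random stand-in data, not a real Rambus phase grating.
import numpy as np

def wiener_deconvolve(raw: np.ndarray, psf: np.ndarray, snr: float = 100.0):
    """Estimate the scene given raw = scene (*) psf, in the Fourier domain."""
    H = np.fft.fft2(psf, s=raw.shape)          # transfer function of the grating
    G = np.fft.fft2(raw)
    wiener = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(wiener * G))

# Toy round trip: "capture" a synthetic scene through the PSF, then recover it.
rng = np.random.default_rng(0)
scene = np.zeros((64, 64)); scene[20:40, 20:40] = 1.0   # a bright square
psf = rng.random((64, 64)); psf /= psf.sum()            # stand-in grating PSF
raw = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf, s=scene.shape)))
recovered = wiener_deconvolve(raw, psf)
```

The intermediate `raw` array looks like noise to the eye, yet `recovered` closely approximates the original square, which is the essential trick behind lensless imaging.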
Interested in learning more? You can read the paper “Lensless Ultra-Miniature CMOS Computational Imagers and Sensors” by David G. Stork and Patrick R. Gill here; check out our recent article on the subject, “From lensless sensors to artificial intelligence,” here; and peruse “Lensless smart sensors eye the final frontier” here.