Ed Sperling of Semiconductor Engineering observes that sensor technology is beginning to change on a fundamental level. Indeed, companies are now looking beyond the five senses – on which early sensors were modeled – and tailoring the versatile technology for specific applications.
“In some cases, sensors don’t have to be as accurate as the sight, smell, touch, taste and hearing of a person. In others, they can be augmented to far exceed human limitations,” he explains. “And while the human brain remains more efficient and effective at certain operations, such as adding context around sensory data, sensors connected to digital logic can react more quickly and predictably to known stimuli.”
Perhaps not surprisingly, the majority of early vision research was conducted for medical purposes, with scientists working to cure blindness and compensate for impaired vision.
“[However], machine vision has a different purpose,” says Sperling. “Rather than striving for visual acuity that is as good or better than a person’s eyesight, current efforts add the ability to sense objects in the non-visible spectra, such as infrared imaging, or radar to detect objects around corners or other objects that are not visible to people.”
According to Steve Woo, VP of Enterprise Solutions Technology at Rambus, the proliferation of next-gen sensors means the growth rate of data will be enormous.
“[Nevertheless], this is [far] more data than can be moved back to the data center. It will require more edge computing, where there will be filters or pre-processing. So you basically can have simple processing to get to more meaningful data,” he says. “You may also start to see more machine learning in the end points, where you scan information and learn the important events about that data and send along consolidated information. There are ways you can do that with reasonable security back and forth over the air.”
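The filtering Woo describes can be illustrated with a minimal sketch. The function name, threshold, and data below are hypothetical, chosen only to show the idea of an endpoint forwarding consolidated events rather than every raw sample:

```python
# Hypothetical edge pre-processing sketch: keep a running baseline and only
# emit a consolidated "event" when a reading deviates meaningfully, instead
# of streaming every raw sample back to the data center.
def filter_events(samples, threshold=5.0):
    """Return (event_list, reduction_ratio) for a stream of sensor readings."""
    events = []
    baseline = samples[0]
    for i, value in enumerate(samples[1:], start=1):
        if abs(value - baseline) >= threshold:
            events.append({"index": i, "value": value, "delta": value - baseline})
            baseline = value  # re-anchor on each significant change
    reduction = 1 - len(events) / len(samples)
    return events, reduction

readings = [20.0, 20.1, 19.9, 26.2, 26.1, 26.0, 31.5, 31.4]
events, saved = filter_events(readings)
# Only two significant changes are forwarded instead of eight raw samples.
```

In this toy run, eight readings collapse to two events, a 75% reduction in transmitted data, which is the kind of "more meaningful data" Woo refers to.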
As Woo previously told Rambus Press, the rapidly evolving Internet of Things (IoT) has prompted the semiconductor industry to place an emphasis on more efficiently capturing, securing, moving and analyzing an increasing volume of digital data.
“We share the industry’s vision of 50 billion connected devices by 2020, which will also include always-on, always-connected smart sensor endpoints tasked with capturing and delivering a wide range of data.”
Moore’s Law, says Woo, remains a critical factor in making this vision a reality, as the size of refractive imagers is currently limited by optics. Then again, as Sperling points out in the Semiconductor Engineering article referenced above, many applications don’t actually require the extensive high-resolution imaging capabilities provided by a standard lens-based configuration.
“How can we build even smaller imagers? By replacing the traditional camera lens with a diffraction grating, while leveraging advanced algorithms and chip processing capabilities,” Woo explains. “Now this is where Moore’s Law comes into play, because it continues to help enable the technology necessary for Rambus scientists to create and refine miniature, lensless smart sensors (LSS). While Moore’s Law has been a driving force in the computing industry for decades, we’re seeing a growing number of benefits in computation imaging and sensing applications such as LSS.”
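Conceptually, replacing the lens with a diffraction grating turns image formation into a computational reconstruction problem. The sketch below is illustrative only, not Rambus’ actual algorithm: it models the grating as an assumed linear transfer matrix `A`, so the sensor records a multiplexed measurement `y = A @ x`, and the scene `x` is recovered by solving the linear inverse problem:

```python
# Illustrative sketch (not Rambus' actual LSS algorithm): a grating-based
# lensless sensor records a multiplexed measurement y = A @ x, where x is
# the scene and A models how the grating spreads light across the pixels.
# The "image" is then recovered computationally.
import numpy as np

rng = np.random.default_rng(0)
scene = np.array([0.0, 1.0, 0.0, 0.5])     # toy 4-pixel "scene"
A = rng.uniform(0.0, 1.0, size=(8, 4))     # assumed grating transfer matrix
measurement = A @ scene                    # what the bare sensor records

# Recover the scene by least squares; real systems add regularization
# to cope with noise and ill-conditioning.
recovered, *_ = np.linalg.lstsq(A, measurement, rcond=None)
```

In this noiseless toy case the least-squares solution recovers the scene exactly; the point is that the optical hardware shrinks because the chip’s processing capability, tracking Moore’s Law, does the work a lens used to do.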
According to Woo, Rambus’ power-sipping sensor technology is capable of performing a wide range of functions, including image change detection, point tracking, range finding, sophisticated gesture recognition, object recognition and image capturing.
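To make one of those capabilities concrete, image change detection can be as simple as frame differencing. The function, thresholds, and frames below are hypothetical, sketching the idea rather than the LSS implementation:

```python
# Hedged sketch of image change detection via frame differencing: count
# pixels whose value moved by more than pixel_tol and flag the frame if
# enough of them changed. Thresholds here are illustrative.
def frame_changed(prev, curr, pixel_tol=10, min_changed=2):
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > pixel_tol)
    return changed >= min_changed

frame_a = [12, 12, 13, 200, 201, 12]
frame_b = [12, 13, 13, 40, 60, 12]   # two pixels changed sharply
```

A low-power endpoint running a check like this can stay quiet until something in the scene actually moves, then wake heavier processing or report an event upstream.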
“These versatile capabilities make LSS technology suitable for at least five key ‘smart’ verticals, including consumer, cities, transportation, manufacturing and medical,” he adds.
Interested in learning more about the technology behind Rambus lensless smart sensors? You can check out our LSS article archive here.