A recent study conducted by researchers at the Georgia Institute of Technology hypothesizes that hawkmoths manage complex sensing and control challenges by slowing their brains to improve vision under low-light conditions.
As Futurity’s John Toon notes, scientists have long known that moths use specialized eye structures to maximize the amount of light they can capture. The new study, however, focuses on how hawkmoths slow their nervous systems to make the best use of the limited light available.
According to Toon, the findings could help the next generation of small flying robots operate more efficiently under a broad range of lighting conditions.
“There has been a lot of interest in understanding how animals deal with challenging sensing environments, especially when they are also doing difficult tasks like hovering in mid-air,” says lead author Simon Sponberg, assistant professor in the School of Physics and School of Applied Physiology at the Georgia Institute of Technology.
“If we want to have robots or machine vision systems that are working under [a] broad range of conditions, understanding how these moths function under these varying light conditions would be very useful.”
Commenting on the report, Dr. Patrick Gill, Principal Research Scientist at Rambus, says the hawkmoth’s behavior highlights how a visual system can be successfully optimized for a specific task.
“In dim light, a hovering moth’s visual system slows to collect more scarce photons. However, it slows only to the point where its flight system still gets best-guess updates in time to direct every wingbeat,” he explained. “This is an example of a real-time constraint on a control system, where understanding the sensory system’s function is easiest when seen as part of the system as a whole, including the downstream motor task.”
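The trade-off Gill describes can be sketched in a few lines of code: lengthen the sensor's integration time to gather enough photons for a reliable estimate, but never past the deadline set by the motor task. This is purely an illustrative model, not code from the study; the function name and all numbers below are hypothetical.

```python
def choose_integration_time(photon_rate_hz, target_photons, wingbeat_hz):
    """Illustrative model of the moth's trade-off (not from the study).

    Slow the 'eye' enough to collect target_photons for a good visual
    estimate, but never slower than one update per wingbeat, which is
    the real-time constraint imposed by the flight control system.
    """
    desired = target_photons / photon_rate_hz  # time needed for a confident estimate
    deadline = 1.0 / wingbeat_hz               # must update once per wingbeat
    return min(desired, deadline)

# Hypothetical numbers: in bright light, photons arrive fast, so the
# estimate is ready long before the wingbeat deadline.
bright = choose_integration_time(photon_rate_hz=1e6, target_photons=100, wingbeat_hz=25)

# In dim light, the ideal exposure would overshoot the deadline, so the
# system slows only as far as the control loop allows.
dim = choose_integration_time(photon_rate_hz=1000, target_photons=100, wingbeat_hz=25)
```

In the bright case the integration time is photon-limited; in the dim case it is clamped at the wingbeat deadline, mirroring Gill's point that the sensory system is best understood as part of the whole control loop.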
In many cases, says Gill, the Lensless Smart Sensor (LSS) technology Rambus is developing can likewise be viewed as part of a broader feedback system.
“The final objective may include optical sensing rather than image formation, or physical constraints limiting the size and complexity of the optically active components that can be deployed,” he added.