Human eyes are a marvel of evolution, and trying to artificially replicate that incredible biological machine is a massive challenge. But that's not necessarily the goal behind making eyes for machines — we want robots to do things humans cannot. To that end, engineers have now managed to create something that could push robotic eyes past the confines of the human eye itself: a camera capable of taking images in four dimensions.

The “fourth dimension” in this instance isn’t time, but rather an ability to capture an image in greater physical depth than before: the camera records not just where light arrives but also the direction it comes from, two dimensions of position plus two of direction. Developed by engineers at Stanford University and the University of California, San Diego, this 4D camera is able to generate information-rich images and video frames that could give robots and automated cars a considerable boost in navigating environments and identifying specific objects and details, as well as augment the graphics rendered in virtual reality scenes.

The research team presented their findings at the computer vision conference CVPR 2017 in July, detailing the camera’s ability to capture a 138-degree field of view in a single image. The researchers say the camera’s design doesn’t actually emulate what the human eye can do, but rather caters to the specific tasks a robotic system will be asked to perform.

“We want to consider what would be the right camera for a robot that drives or delivers packages by air,” said Donald Dansereau, an electrical engineer at Stanford and the first author of the paper, in a news release. “We’re great at making cameras for humans but do robots need to see the way humans do? Probably not.”

To get a 4D capture, the researchers used a novel spherical lens, originally developed in a previous UC San Diego project for a 360-degree camera, which gives the device a very wide field of view. Combined with new Stanford technology, the camera can collect a layer of information most cameras cannot. In addition, its light field photographic capabilities allow a system to refocus images after they’re taken, which could help correct for obscuring effects of rain or other environmental impediments.
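The post-capture refocusing described above is the classic light field trick: because the camera records both the position and direction of incoming rays, you can synthetically shift each angular view and average them, which moves the focal plane after the fact. The paper's actual pipeline isn't spelled out here, so the sketch below is a minimal, generic shift-and-add refocus over a hypothetical 4D array of sub-aperture views, not the authors' implementation.

```python
import numpy as np

def refocus(light_field, shift):
    """Synthetically refocus a 4D light field by shift-and-add.

    light_field: array of shape (U, V, Y, X) — a U x V grid of
    angular samples, each a Y x X sub-aperture image.
    shift: pixels of translation per unit of angular offset;
    varying this value moves the synthetic focal plane.
    """
    U, V, Y, X = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((Y, X))
    for u in range(U):
        for v in range(V):
            # Translate each view in proportion to its angular
            # offset from the central view, then average them all.
            dy = int(round((u - cu) * shift))
            dx = int(round((v - cv) * shift))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Objects lying at the depth matched by `shift` line up across the views and come out sharp, while everything at other depths is averaged into a blur — which is why a droplet on the lens can be "focused past" after capture.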

A.I. systems could take enormous advantage of this technology, modifying images on the fly and ascertaining what is in view without needing a human operator to double-check.

Though the camera is still at the proof-of-concept stage, the findings are encouraging for robot development, for the efficiency and safety of automated vehicles, and for making VR simulations feel more real to the user. Who would’ve thought the key to getting robots to work on their own was giving them another dimension of vision?

Photos via Stanford Computational Imaging Lab and Photonic Systems Integration Laboratory at UC San Diego