New year, new blog — With a new year comes a new Google Pixel, and with a new Pixel comes a post on Google’s blog explaining the improvements to its computational camera system. This year the Pixel 4 added a second camera and, Google being Google, it squeezes every drop of data it can out of that second lens.
More cameras, more data — In the post, Google lays out a handful of techniques for using the two cameras in tandem to improve the depth maps it creates to separate the foreground from the background, which in turn lets it selectively blur the background. The company was already using its dual-pixel autofocus system to generate parallax (the foundation of the depth maps on the Pixel 2 and 3); now the system gets even greater parallax by combining dual pixels with dual cameras.
Basically, more cameras and more pixels make for cleaner depth maps.
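To see why a wider baseline helps, here is a textbook stereo-triangulation sketch, not Google’s actual pipeline: depth is inversely proportional to disparity (the parallax shift between two views), and all the focal length, baseline, and error values below are illustrative assumptions, not Pixel hardware specs.

```python
import numpy as np

# Textbook relation: depth (mm) = focal_length * baseline / disparity.
# A wider baseline (dual cameras) produces a larger disparity for the
# same depth than a tiny one (dual pixels), so the same matching error
# corrupts the depth estimate far less.
def depth_from_disparity(disparity_px, focal_length_px, baseline_mm):
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_length_px * baseline_mm / np.maximum(disparity_px, 1e-6)

f = 3000.0           # focal length in pixels (assumed)
true_depth = 2000.0  # a subject 2 m away

# Compare a ~1 mm dual-pixel baseline with a ~13 mm dual-camera
# baseline (both hypothetical), each with a 0.25 px matching error.
for baseline in (1.0, 13.0):
    ideal = f * baseline / true_depth            # ideal disparity
    noisy = depth_from_disparity(ideal + 0.25, f, baseline)
    print(f"baseline {baseline:5.1f} mm: disparity {ideal:5.2f} px, "
          f"depth with 0.25 px error: {noisy:.0f} mm")
```

Under these assumptions the narrow baseline misjudges the 2 m subject by roughly 15%, while the wide baseline is off by only about 1% — which is the intuition behind cleaner depth maps from dual cameras.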
Better bokeh — The second big improvement Google packed into the Pixel 4 is a subtle but important change to how it blurs the background. Before, the camera tone-mapped the image and then blurred it; now it blurs first and tone-maps after. There are also some interesting details in the post about how the camera creates its SLR-like bokeh, so if you’re into state-of-the-art blurry blobs, give it a look.
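A toy sketch of why the order matters, assuming a simple gamma curve as the tone map (Google’s real HDR+ tone mapping is far more sophisticated): blurring after tone mapping averages already-compressed values, which dims bright highlights, while blurring the linear data first keeps bokeh discs bright, closer to what an SLR produces.

```python
import numpy as np

# A simple gamma curve standing in for the tone map (an assumption,
# not Google's actual operator).
def tone_map(linear):
    return np.clip(linear, 0.0, 1.0) ** (1 / 2.2)

# A 5-tap box blur standing in for the bokeh blur.
def blur(img):
    return np.convolve(img, np.ones(5) / 5.0, mode="same")

# A tiny specular highlight on a dark background, in linear light.
scene = np.full(21, 0.02)
scene[10] = 1.0

old = blur(tone_map(scene))   # tone map, then blur (old order)
new = tone_map(blur(scene))   # blur, then tone map (new order)

# The new order leaves the blurred highlight noticeably brighter.
print(f"highlight, old order: {old[10]:.3f}")
print(f"highlight, new order: {new[10]:.3f}")
```

With these made-up values, the blurred highlight comes out roughly 50% brighter under the new ordering, which is the effect that makes the Pixel 4’s bokeh discs pop.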