New AI from Google Creates Beautiful Scenery from a Photo
With millions of 2D photos capturing natural scenery around the world, imagine if a single picture of a landscape could become an entire fly-through video exploring the area. This is precisely what Google AI has accomplished. In this work, Google AI showed that by simply observing 2D internet photo collections, artificial intelligence can learn 3D generative models, allowing you to step into a picture and fly through beautiful scenery like a bird exploring nature.
Perpetual View Generation
Google AI research presents a method for learning to generate unbounded fly-through videos of natural scenes starting from just a single RGB image, which they call perpetual view generation. This method is trained only from unstructured photographs without requiring camera poses or multiple views of each scene.
To achieve this, the Google AI researchers proposed a new self-supervised view generation training paradigm, which enables training high-quality perpetual view generation from virtual camera trajectories alone. First, they introduced a self-supervised view synthesis loss via cyclic virtual camera trajectories, providing the network signals for a single step of view synthesis without multi-view data. Second, for generating a long sequence of novel views, they employed an adversarial perceptual view generation strategy, encouraging view sequences rendered from long camera trajectories to be realistic and stable.
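The cyclic self-supervised signal can be sketched as follows. This is a toy illustration, not the paper's implementation: `render_step` is a hypothetical stand-in for the learned render-refine network, here mimicked by a simple pixel shift so the example runs on its own. The key idea it shows is that flying a virtual camera out and then back along a mirrored trajectory lets the original image itself serve as supervision, with no multi-view data.

```python
import numpy as np

def render_step(image, pose_delta):
    # Hypothetical stand-in for the learned render-refine step;
    # a pixel roll crudely mimics a sideways camera translation.
    return np.roll(image, shift=pose_delta, axis=1)

def cycle_consistency_loss(image, pose_delta):
    """Self-supervised signal from a cyclic virtual camera trajectory:
    step out along a virtual pose, step back along the mirrored pose,
    and compare the returned view against the original input image."""
    forward = render_step(image, pose_delta)       # virtual step out
    returned = render_step(forward, -pose_delta)   # mirrored step back
    return float(np.mean((returned - image) ** 2)) # L2 reconstruction

image = np.random.rand(64, 64, 3)
loss = cycle_consistency_loss(image, pose_delta=5)
print(loss)  # 0.0 here, since the toy roll is exactly invertible
```

With a real learned renderer the returned view would not match exactly, and minimizing this reconstruction error is what trains a single step of view synthesis without camera poses or multiple views per scene.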
Comparison with Recent Video Prediction View Synthesis and Future Generation Methods
Google AI compared their approach with recent video prediction, view synthesis, and future generation methods that require multi-view data during training. The self-supervised approach demonstrates significant improvement over prior supervised learning methods in terms of realism, diversity, and consistency. For example, existing view synthesis and video prediction methods fail to produce reasonable views over long traversal distances. Infinite Nature produces plausible views but often quickly drifts into unrealistic content drawn from a single mode.
Explicit Camera Viewpoint Control
This new approach has explicit camera viewpoint control, allowing it to render different camera motion starting from the same input image. It can generate sequences of hundreds of high-quality views at resolutions of 512 by 512 pixels.
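The autoregressive generation loop with explicit camera control can be sketched like this. Again, `warp_and_refine` is a hypothetical placeholder for the learned network (a pixel roll here so the sketch is self-contained); what the example shows is the control flow, where each new frame is rendered from the previous one under a user-chosen camera motion.

```python
import numpy as np

def generate_flythrough(start_image, camera_deltas):
    """Autoregressive view generation sketch: each frame is produced
    from the previous frame under an explicit camera motion, so the
    same input image can yield different fly-throughs by changing
    the trajectory."""
    def warp_and_refine(image, delta):
        # Placeholder for the learned render-refine network.
        return np.roll(image, shift=delta, axis=1)

    frames = [start_image]
    for delta in camera_deltas:  # explicit viewpoint control
        frames.append(warp_and_refine(frames[-1], delta))
    return frames

start = np.zeros((512, 512, 3))  # 512x512 resolution, as in the article
frames = generate_flythrough(start, camera_deltas=[2] * 10)
print(len(frames))  # 11: the input view plus ten generated views
```

Changing `camera_deltas` while keeping `start` fixed corresponds to rendering different camera motions from the same input image.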
Results on the Landscapes HQ Dataset and the Aerial Coastline Imagery Dataset
Here are the results on the Landscapes HQ dataset, showing that view generation remains stable over very long camera trajectories while the generated views cover realistic landscapes such as mountains, lakes, and trees. Finally, here are the results from the aerial coastline imagery dataset. The work demonstrates the potential of using 2D photo collections for tackling 3D generative modeling of unbounded natural scenes, and the researchers hope to inspire more work in this direction in the future.
Physicists Create Intelligent Quantum Sensor
Scientists at the University of Texas at Dallas and Yale have developed an atomically thin intelligent quantum sensing device that simultaneously detects all fundamental properties of an incoming wave of light. This new concept, which is based on quantum geometry and published in the journal Nature, can be used in deep space exploration, healthcare, and remote sensing.
The scientists are quite excited about the research because usually, when trying to determine the intensity, wavelength, and polarization of light waves, different instruments are required, which can be bulky and take up a lot of space on an optical table. Now, the researchers have one device in an extremely small chip that can determine all of these properties simultaneously in a very short time.
This device takes advantage of the unique physical properties of moiré metamaterials, a new family of two-dimensional materials. These 2D materials are atomically thin and have periodic structures. When two layers of these materials are overlaid with a small rotational twist, a moiré pattern can emerge with a periodicity orders of magnitude larger than that of the original lattice. As a result, the moiré metamaterial has electronic properties that are significantly different from those of a single layer or two naturally aligned layers.
Moiré Metamaterial Displays Bulk Photovoltaic Effect
The researchers chose to demonstrate their idea using a sensing device with two layers of naturally occurring bilayer graphene twisted slightly relative to each other, making four layers in total. This moiré metamaterial displays what’s known as a bulk photovoltaic effect, which is quite unusual.
To produce a current in a material, there usually needs to be a voltage bias. But here, the researchers simply shine light onto the moiré metamaterial, and it generates a current through the bulk photovoltaic effect. The light's intensity, wavelength, and polarization state all factor into the magnitude and phase of the resulting photovoltage. By tuning the moiré metamaterial, the photovoltage generated from a given incoming wave of light forms a 2D map that is unique to that wave, similar to a fingerprint.
From this map, one can infer the wave’s properties. By placing two metal plates, or gates, on top of and beneath the moiré metamaterial, the researchers were able to adjust the quantum geometric properties of the material and encode the properties of infrared light waves into these fingerprints.
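The fingerprint idea can be illustrated with a toy model. The function below is entirely made up for illustration: it stands in for the measured photovoltage, which in the real device depends jointly on the two gate voltages and on the light's intensity, wavelength, and polarization. Sweeping both gates then traces out a 2D map unique to that light wave.

```python
import numpy as np

def photovoltage(v_top, v_bottom, intensity=1.0, wavelength=5.0, pol=0.3):
    """Hypothetical toy response: the photovoltage depends jointly on
    the two gate voltages and the light's intensity, wavelength, and
    polarization. Any change to the light changes the whole map."""
    return intensity * np.sin(v_top * wavelength) * np.cos(v_bottom + pol)

# Sweep both gate voltages to build the fingerprint map for one wave.
v = np.linspace(-1.0, 1.0, 32)
fingerprint = photovoltage(v[:, None], v[None, :])
print(fingerprint.shape)  # (32, 32): a 2D map unique to this light wave
```

A wave with a different wavelength or polarization would produce a visibly different map over the same gate sweep, which is what makes the map usable as a fingerprint.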
Convolutional Neural Network
To decode the fingerprints, the team used a convolutional neural network, an artificial intelligence algorithm widely used in computer vision. The process starts with light whose intensity, wavelength, and polarization the researchers already know. They shine this light onto the device, which is tuned in various ways to generate different fingerprints. The neural network can then recognize these patterns after being trained on 10,000 examples of such data. Once it has learned enough, it can characterize an unknown light wave.
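A minimal sketch of the decoding step, under stated assumptions: the article gives no architecture details, so the convolution kernel, pooling, and output weights below are arbitrary placeholders rather than the team's trained network. The example only shows the shape of the task, mapping a 2D fingerprint to three regressed light properties.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernel):
    """Minimal valid 2D cross-correlation, the core CNN operation."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def decode_fingerprint(fingerprint, kernel, weights):
    """Toy CNN forward pass: convolve the 2D photovoltage fingerprint,
    apply a ReLU, globally pool, and regress the three light
    properties (intensity, wavelength, polarization). The weights here
    are random; in the article they are learned from 10,000 examples."""
    features = np.maximum(conv2d(fingerprint, kernel), 0.0)  # ReLU
    pooled = features.mean()                                 # global pool
    return pooled * weights                                  # 3 outputs

fingerprint = rng.random((32, 32))  # one measured fingerprint map
kernel = rng.random((3, 3))
weights = rng.random(3)             # one weight per predicted property
props = decode_fingerprint(fingerprint, kernel, weights)
print(props.shape)  # (3,): intensity, wavelength, polarization estimates
```

Training would adjust `kernel` and `weights` so that the three outputs match the known properties of the calibration light waves.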
New AI-Powered X-Ray Technique to Detect Explosives Can Also Identify Tumors
According to a report published by MIT Technology Review, a new X-ray method that works in conjunction with a deep learning algorithm to detect explosives inside airport luggage could also detect early-stage tumors in humans.
This convolutional neural network uses computer vision to identify objects even when they are hidden. Because the algorithm can pick out the subtle texture of a material even when it sits among many other objects, the technique could also be used for medical purposes, especially cancer screening.
Researchers are not yet able to determine if the technique can distinguish between the texture of a breast tumor and surrounding healthy tissue. But they are optimistic about the possibility that the technique could detect very small tumors that were previously hidden behind a patient’s ribcage.
100% Accuracy Rate in Detecting Explosives
When explosive materials are hidden within electronics, it can be difficult to detect them using traditional X-ray techniques. But the researchers found that this new AI method had a 100% accuracy rate in detecting explosives under test conditions.
The UCL team concealed small amounts of explosives inside electrical appliances, including hair dryers and cell phones, packed so that each bag looked much like a typical traveler’s backpack. To scan the bags, the researchers used a machine fitted with masks, sheets of metal with holes punched into them, as an alternative to ordinary X-ray machines that hit objects with a uniform beam of X-rays as it passes through the bag and its contents. The resulting beamlets scattered at angles as small as a microradian, roughly one twenty-thousandth of a degree.
Promising Work Combining Novel Imaging with Artificial Intelligence
The artificial intelligence was trained to detect the texture of certain materials from specific scattering angles. Although the scientists acknowledged that it would be unrealistic to expect such high accuracy in larger studies that more closely mirror real-world situations, the algorithm correctly identified the explosives in every experiment conducted under test conditions.
This is very promising work from the UCL team, combining novel imaging with artificial intelligence, and it has great potential for the extremely difficult task of threat detection in hand baggage. The work was published in the scientific journal Nature Communications. While cancer detection has its own set of difficulties, the researchers look forward to seeing progress in this area as they continue their work.