Unveiling a Fascinating Breakthrough: Google DeepMind and NVIDIA AI Uncover Alternative Physics with New Algorithm

Artificial Intelligence Discovers Alternative Physics: A Breakthrough in Physics Theory Development


Artificial intelligence (AI) is changing the field of physics. A team of researchers from Columbia University has demonstrated this by developing an AI program that observes physical events and identifies the pertinent variables, a crucial first step in developing any physical theory. Unexpectedly, however, the variables the AI identified did not match the ones physicists would have chosen. This article delves into the specifics of that research, as well as other recent AI developments that are reshaping fields such as natural language processing and protein science.

The AI Program and the Swinging Double Pendulum

To understand how the Columbia University program works, consider one of the team's experiments. They fed the network video of a swinging double pendulum, a chaotic physical system of two hinged arms whose state is fully described by four known variables: two arm angles and two angular velocities. After hours of processing, the AI returned a result, but the researchers still had to work out which variables the program had actually identified.

One challenge was that the program could not provide human-readable descriptions of the variables it found. Two of them appeared to correspond to the arm angles, while the other two remained unclear. Because the program's predictions were accurate, the researchers believed it had discovered a valid set of four variables; they simply could not decode the mathematical language it was using.
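The researchers' approach relies on neural autoencoders, but the core idea, inferring how many variables a high-dimensional observation really depends on, can be sketched with a much simpler linear stand-in. Everything below (the synthetic "frames", the mixing matrix, the variance threshold) is an illustrative assumption, not the team's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the video data: the true state of a double pendulum has
# 4 variables (two arm angles and two angular velocities).
state = rng.standard_normal((500, 4))    # 500 "frames" of a 4-variable system
mixing = rng.standard_normal((4, 20))    # embed the state in 20-D observations
frames = state @ mixing

# Estimate the intrinsic dimension: count the singular values that carry a
# non-negligible share of the variance.
s = np.linalg.svd(frames - frames.mean(axis=0), compute_uv=False)
n_variables = int(np.sum(s / s[0] > 1e-6))
print(n_variables)  # -> 4
```

A linear method like this only recovers the variable count when the embedding is linear; the published work uses nonlinear autoencoders precisely because video frames depend on the underlying state in a highly nonlinear way.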

Validation of Known Answers and Exploration of Unknown Systems

To test the program further, the researchers first validated it against a variety of other physical systems whose answers were known. They then supplied videos of systems for which the correct number of variables was not known in advance. For a video of an air dancer flapping in front of a car park, the program returned eight variables; a lava lamp likewise yielded eight. A video of flames from a holiday fire produced 24 variables.

The researchers also asked whether the set of variables was unique for every system. In their experiments, the number of variables was the same each time the AI restarted, but the variables themselves differed on each run. This suggests there are multiple valid ways to describe a given system, and that the variables physics currently uses may not be the only possible choice. The model could, in principle, help researchers uncover complex phenomena in areas ranging from cosmology to biology.

NVIDIA’s NeMo Megatron: Advancements in Large Language Model Training

Apart from physics, AI is also transforming natural language processing (NLP). Customer service chatbots, voice-controlled assistants such as Amazon’s Alexa and Apple’s Siri, and auto-correction on smartphones are just a few of the applications driving the global NLP market, which Fortune Business Insights predicts will grow from $20.98 billion in 2021 to $127.26 billion by 2028.

To serve this market, NVIDIA offers NeMo Megatron, a framework for training large language models whose trained models can be deployed through the Triton Inference Server. The company recently released updates for optimizing and scaling training across as many GPUs as desired, claiming considerably faster training through two new techniques: sequence parallelism and selective activation recomputation.
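A toy illustration of the activation-recomputation idea (not NVIDIA's implementation, and with trivial arithmetic standing in for transformer layers): instead of caching every layer's activation for the backward pass, cache only every k-th activation and recompute the rest from the nearest checkpoint when it is needed.

```python
def forward_full(x, layers):
    """Baseline: run all layers and cache every activation."""
    acts = [x]
    for f in layers:
        acts.append(f(acts[-1]))
    return acts

def forward_checkpointed(x, layers, every=4):
    """Cache only every `every`-th activation; the rest are recomputed later."""
    ckpts = {0: x}
    h = x
    for i, f in enumerate(layers, start=1):
        h = f(h)
        if i % every == 0:
            ckpts[i] = h
    return ckpts

layers = [lambda v, k=k: v + k for k in range(16)]  # 16 stand-in "layers"
full = forward_full(0, layers)
ckpt = forward_checkpointed(0, layers, every=4)

print(len(full))   # 17 cached activations
print(len(ckpt))   # 5 cached: roughly a quarter of the memory

# During the backward pass, activation 6 is rebuilt from checkpoint 4:
h = ckpt[4]
for f in layers[4:6]:
    h = f(h)
assert h == full[6]
```

The tradeoff is memory for compute: the checkpointed version stores far fewer activations but must replay a few layers on demand, which is why "selective" recomputation, applying the trick only where activations are cheap to rebuild, helps overall training speed.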

The new NeMo Megatron updates enable up to 30% faster training for models ranging from 22 billion to 1 trillion parameters. A 175-billion-parameter model, for example, can be trained on 1,024 NVIDIA A100 GPUs in just 24 days, a reduction of 10 days, or about 250,000 GPU hours, for researchers training large language models.
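The arithmetic behind the cited saving checks out: cutting 10 days of training on 1,024 GPUs is

```python
# 10 days saved across 1,024 GPUs, 24 hours per day
gpus = 1024
days_saved = 10
gpu_hours_saved = gpus * days_saved * 24
print(gpu_hours_saved)  # 245760 -- roughly the 250,000 GPU hours cited
```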

Google DeepMind’s Breakthrough in Protein Science with AlphaFold

Apart from physics and NLP, AI is also making significant contributions to protein science. One of the field’s grand challenges is understanding how chains of amino acids twist and fold into three-dimensional shapes, and DeepMind’s AlphaFold program has now predicted the structure of nearly every protein known to science. The breakthrough is paving the way for new discoveries and technologies in areas such as healthcare, food security, and climate change research.

DeepMind has made a database of more than 200 million predicted protein structures freely available to the public. At a news conference, DeepMind’s CEO described the database as covering the entire protein universe, spanning essentially every organism whose genome has been sequenced, including plants, bacteria, animals, and other organisms. Researchers can use the expanded database to further their work on major issues such as environmental sustainability, neglected diseases, and food insecurity.
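The database is browsable and downloadable per protein. As a hypothetical sketch of how a researcher might locate an entry programmatically, the helper below builds a download URL from a UniProt accession; the file-name pattern and model-version suffix are assumptions based on the database's published download scheme and may change between releases:

```python
def alphafold_pdb_url(uniprot_id, version=4):
    """Build the assumed download URL for an AlphaFold-predicted structure."""
    return (f"https://alphafold.ebi.ac.uk/files/"
            f"AF-{uniprot_id}-F1-model_v{version}.pdb")

print(alphafold_pdb_url("P69905"))  # P69905: human hemoglobin subunit alpha
```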


AI is advancing rapidly across fields from physics to NLP to protein science, transforming how we understand and analyze complex phenomena and enabling breakthroughs that were once thought impossible. As these technologies continue to evolve, we can expect more such developments in the near future.
