Don’t Overlook Graph Neural Networks (GNN) in 2023!


Introduction

Graph Neural Networks (GNNs) — a dark horse among neural network architectures. They are widely applicable across fields including recommendation systems, traffic prediction in Google Maps, drug discovery, protein design, and more.

To explore the development and real-world applications of graph neural networks in algorithmic neural solving, the Intelligence Club, in collaboration with Associate Professor Fan Changjun of the School of Systems Engineering at the National University of Defense Technology and Assistant Professor Huang Wenbing of the Gaoling School of Artificial Intelligence at Renmin University, has launched a reading club on “Graph Neural Networks and Combinatorial Optimization”. The club focuses on topics related to graph neural networks and algorithmic neural solving, including neural algorithm reasoning, combinatorial optimization, geometric graph neural networks, and applications of algorithmic neural solving in AI for Science, aiming to provide participants with a platform for academic exchange, stimulate their academic interest, and further advance research and applications in these fields. The reading club began on June 14, 2023, and meets every Wednesday evening from 19:00 to 21:00 for an expected duration of 8 weeks. Anyone interested is welcome to sign up.
Keywords: Graph Neural Networks, Recommendation Systems, Traffic Prediction, Drug Discovery, Protein Discovery

ShuYini | Author

AINLPer | Source

Introduction

Although generative AI such as “ChatGPT and diffusion models” has been the focus of attention in recent months, we should not overlook the rapid development of Graph Neural Networks (GNN). After several years of development, research on GNN has transitioned from purely academic studies to large-scale practical applications, quietly becoming a dark horse among various neural networks.
Many large companies, including Alibaba, Google, Uber, and Twitter, have already applied GNN-related technologies in some of their core products, mainly because GNN-based methods have shown stronger performance than previous state-of-the-art AI architectures. Despite the diversity of problems these core products face and the differences in their underlying datasets, they all share a unified framework with GNNs at its core. This indicates that “GNNs can provide a universal and flexible framework for describing and analyzing any possible set of entities and their relationships.”
So, what are the actual advantages of Graph Neural Networks (GNN)? Why are Graph Neural Networks (GNN) important in 2023? This article will discuss these two questions and share some practical applications of GNN networks.

Introduction to GNN

Firstly, graph data is ubiquitous in the world: any system composed of entities and the relationships between them can be represented as a graph. In the past decade, deep learning algorithms have made significant progress in fields such as natural language processing, computer vision, and speech recognition, primarily because they can extract high-level features from data through nonlinear layers. However, “most deep learning frameworks are designed for data with Euclidean structures” (e.g., tabular data, images, text, and audio), neglecting graph-structured data.
Specifically, traditional AI methods mainly extract information from objects encoded with some “fixed” structure. For example, images are typically encoded as fixed-size 2D pixel grids, while text is encoded as 1D sequences of words (or “tokens”). Representing data as a graph instead makes it possible to extract more valuable information from entities and their relationships.
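As a concrete (and deliberately oversimplified) illustration, the sketch below represents a tiny graph as plain adjacency lists and runs one round of neighborhood aggregation, the basic operation underlying most GNN layers. The node names and features are invented, and a real GNN would use learned weight matrices and nonlinearities rather than a plain mean.

```python
# Hypothetical toy graph: each node lists the neighbors it receives messages from.
# Node names and scalar features are made up for illustration.
graph = {
    "A": ["B", "C"],
    "B": ["A"],
    "C": ["A", "B"],
}
features = {"A": 1.0, "B": 2.0, "C": 3.0}

def message_passing_step(graph, features):
    """One round of neighborhood aggregation: each node's new feature is the
    mean of its neighbors' current features (no learned weights here)."""
    updated = {}
    for node, neighbors in graph.items():
        if neighbors:
            updated[node] = sum(features[n] for n in neighbors) / len(neighbors)
        else:
            updated[node] = features[node]  # an isolated node keeps its feature
    return updated

print(message_passing_step(graph, features))
# A -> mean(B, C) = 2.5 ; B -> mean(A) = 1.0 ; C -> mean(A, B) = 1.5
```

Stacking several such rounds lets information flow between nodes that are several hops apart, which is how GNNs capture relational structure that a fixed grid or sequence encoding would miss.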


However, “the high flexibility of graph structures means the same piece of data can be represented in many different ways, which complicates the design of model architectures that can learn from such data and generalize across domains.” Over the past two decades, various methods have been proposed for AI systems that process graph data at scale, but these advances often remained tied to the specific cases and settings for which they were developed.
In some ways, this mirrors what happened during the deep learning revolution a decade ago: speech recognition systems built from hidden Markov models and Gaussian mixture models, and computer vision systems that relied heavily on traditional signal processing, gradually converged into end-to-end deep learning systems that often share the same basic architecture, today frequently the attention-based Transformer, which originated in natural language processing.
In recent years, a community of advanced deep learning researchers has made significant progress in transforming various data problems from different fields into graph problems. “Among them, Graph Neural Networks and some of their variants have consistently outperformed mainstream methods in various deep learning tasks.” GNN has indeed become an important tool for solving real-world problems across many completely different and seemingly unrelated fields, such as drug discovery, recommendation systems, traffic prediction, and more. So, what is the role of GNN in the broader AI research landscape today? Let’s look at some numbers that reveal the astonishing progress made in GNN-related research fields.

GNN in AI Research

An undeniable development is that GNNs have quickly established themselves as a universal paradigm that can learn from any graph structure, so that improvements to them generalize across fields. Unsurprisingly, attention to this topic from academic and industry researchers worldwide has grown explosively in recent years. “If we look at the papers accepted over the past three years at two top international conferences, ICLR and NeurIPS, it is easy to see that the number of GNN-related papers has increased significantly.” Additionally, the term Graph Neural Network has consistently ranked among the top three keywords.


A recent bibliometric study systematically analyzed this research trend, revealing that published research on GNN has grown exponentially, with an average annual growth rate of 447% during the period from 2017 to 2019. The State of AI Report 2021 further confirms that Graph Neural Networks are the keyword in AI research publications with the largest increase in usage from 2019 to 2020.


We can also examine the versatility of Graph Neural Networks by looking at their impact across different application areas. The following figure illustrates the distribution of GNN papers across 22 categories, showing that they predominantly occupy the field of computer science but also extend to various application domains.


GNN Application Cases

As seen above, GNN has a wide range of applications across various fields. Next, let’s look at some examples and results of applying GNN in large-scale models in production.

Recommendation Systems

The Uber Eats team, which builds Uber’s food delivery application (similar to China’s Dianping), recently began integrating graph learning technology into its recommendation system, “aiming to surface the food most likely to appeal to each user.” Given the scale of the graphs involved (Uber Eats serves as a portal for over 320,000 restaurants across more than 500 cities worldwide), graph neural networks are a very attractive option.
When the model for recommending dishes and restaurants was first tested, the team reported a performance improvement of over 20% compared to existing production models based on key metrics such as Mean Reciprocal Rank, Precision@K, and NDCG. After integrating the GNN model into the Uber Eats recommendation system (which includes other non-graph-based features), developers observed an increase in AUC from 78% to 87% compared to the existing production baseline model, and subsequent impact analysis revealed that GNN-based features were the most influential features in the entire recommendation model to date.
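For readers unfamiliar with the ranking metrics cited above, the minimal sketch below computes Mean Reciprocal Rank, Precision@K, and NDCG from binary relevance lists. The example rankings are invented and have nothing to do with Uber’s actual data; a 1 marks an item the user actually engaged with.

```python
import math

# Hypothetical ranked recommendation lists with binary relevance labels.
rankings = [
    [0, 1, 0, 0],  # first relevant item at rank 2 -> reciprocal rank 1/2
    [1, 0, 0, 0],  # first relevant item at rank 1 -> reciprocal rank 1
]

def mean_reciprocal_rank(rankings):
    """Average of 1/rank of the first relevant item in each list."""
    total = 0.0
    for rels in rankings:
        rr = 0.0
        for i, rel in enumerate(rels, start=1):
            if rel:
                rr = 1.0 / i
                break
        total += rr
    return total / len(rankings)

def precision_at_k(rels, k):
    """Fraction of the top-k recommendations that are relevant."""
    return sum(rels[:k]) / k

def ndcg(rels):
    """Normalized Discounted Cumulative Gain: log-discounted relevance,
    normalized by the score of the ideal (sorted) ranking."""
    dcg = sum(rel / math.log2(i + 1) for i, rel in enumerate(rels, start=1))
    ideal = sorted(rels, reverse=True)
    idcg = sum(rel / math.log2(i + 1) for i, rel in enumerate(ideal, start=1))
    return dcg / idcg if idcg > 0 else 0.0

print(mean_reciprocal_rank(rankings))  # (0.5 + 1.0) / 2 = 0.75
```

A 20% lift on metrics like these means relevant restaurants and dishes appear noticeably closer to the top of each user’s feed.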


Google Maps Traffic Prediction

Another highly impactful application of Graph Neural Networks comes from a group of researchers at DeepMind, who demonstrated how to “apply GNN to traffic maps to improve the accuracy of estimated time of arrival (ETA).” The idea is to use GNN to learn representations of the traffic network to capture the underlying structure and dynamics of the network.
Google Maps has actively deployed this system at scale in several major cities around the world, significantly reducing the proportion of negative ETA outcomes users encounter, with accuracy improvements of up to 50% in some cities compared with previous methods.
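A heavily simplified sketch of the underlying idea, with invented segment names and travel times: model road segments as graph nodes, let congestion information propagate between neighboring segments, and estimate a route’s ETA by summing the predicted per-segment times. DeepMind’s actual system learns over “supersegments” with trained networks; this only illustrates the propagate-then-sum structure.

```python
# Hypothetical road network: each segment has a current travel time (seconds),
# and edges connect segments that share an intersection.
segment_time = {"s1": 30.0, "s2": 60.0, "s3": 45.0}
neighbors = {"s1": ["s2"], "s2": ["s1", "s3"], "s3": ["s2"]}

def smooth_times(segment_time, neighbors, alpha=0.5):
    """One propagation step: blend each segment's time with the mean of its
    neighbors' times, so congestion information spreads along the network."""
    out = {}
    for seg, t in segment_time.items():
        nbr_mean = sum(segment_time[n] for n in neighbors[seg]) / len(neighbors[seg])
        out[seg] = (1 - alpha) * t + alpha * nbr_mean
    return out

def route_eta(route, segment_time):
    """ETA for a route = sum of predicted times over its segments."""
    return sum(segment_time[s] for s in route)

smoothed = smooth_times(segment_time, neighbors)
eta = route_eta(["s1", "s2", "s3"], smoothed)
```

The design point is that a segment’s travel time depends on its neighbors (a jam upstream slows everything downstream), which is exactly the kind of relational dependency a GNN captures and a per-segment model misses.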


MIT Drug Discovery

One of the most notable recent applications of AI methods in the pharmaceutical field comes from a research project at MIT, which was later published in the prestigious scientific journal Cell.
The goal was to use AI models to predict the antibiotic activity of molecules by learning graph representations of them. Graphs are a natural way to encode this information, because a small molecule can be represented as a graph whose nodes are atoms and whose edges are chemical bonds.
The AI model learns from this data to predict the most promising molecules under certain ideal conditions, and these predictions are subsequently tested and validated in the laboratory, helping biologists prioritize which molecules to analyze from billions of possible candidates.
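To make the molecule-as-graph encoding concrete, here is a toy sketch using water (H2O): atoms become nodes (with atomic number as the feature), bonds become edges, one untrained message-passing step mixes neighbor information, and a sum readout produces a molecule-level value that a classifier head would map to a property such as predicted activity. All numbers are illustrative; this is not the MIT model.

```python
# Hypothetical encoding of water (H2O): nodes are atoms, with atomic number
# as the feature; edges are chemical bonds.
atoms = {"O": 8.0, "H1": 1.0, "H2": 1.0}
bonds = {"O": ["H1", "H2"], "H1": ["O"], "H2": ["O"]}

def update_atoms(atoms, bonds):
    """One message-passing step: each atom adds the sum of its bonded
    neighbors' features to its own (an untrained stand-in for a learned update)."""
    return {a: f + sum(atoms[n] for n in bonds[a]) for a, f in atoms.items()}

def readout(atoms):
    """Sum-pool node features into a single molecule-level value, which a
    classifier head would map to e.g. a predicted activity score."""
    return sum(atoms.values())

h = update_atoms(atoms, bonds)   # O: 8 + 1 + 1 = 10 ; H1: 1 + 8 = 9 ; H2: 9
fingerprint = readout(h)         # 10 + 9 + 9 = 28
```

Because the same update rule applies to any molecule regardless of size or shape, one trained model can score billions of candidate structures, which is what made the large-scale screening described above feasible.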


This led to the identification of a previously unknown compound, Halicin, which proved to be an effective antibiotic against antibiotic-resistant bacteria. Field experts regarded the result as a significant breakthrough in antibiotic discovery. The news generated a buzz in the media, with articles in the BBC and the Financial Times, yet almost everyone overlooked the fact that the compound was discovered with a GNN-based AI model. The researchers, for their part, reported that the directed message-passing neural network approach (a core GNN mechanism) was crucial to the discovery: unlike the graph-based model, the other state-of-the-art models they tested failed to rank Halicin highly.

Protein Discovery

The goal of protein design is to create proteins with desired characteristics, which can be achieved through (often costly) experimental methods that allow researchers to design new proteins by directly manipulating the amino acid sequences of proteins. The design of new proteins has enormous potential applications, such as developing new drugs, enzymes, or materials.
The Baker Lab recently combined graph neural networks with diffusion techniques to create an AI system called RoseTTAFold Diffusion (RFDiffusion), which has proven capable of designing protein structures that satisfy custom constraints. The model is built on E(n)-equivariant graph neural networks, a special type of GNN designed to handle data with rigid-motion symmetries (translations, rotations, and reflections in space), and is fine-tuned to act as the denoiser of a diffusion model.
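The symmetry property at the heart of such models can be illustrated numerically: pairwise distances between 3D points do not change under translations or rotations, so a network whose messages depend only on distances is automatically invariant to these rigid motions. The points below are arbitrary, chosen just to demonstrate the check.

```python
import math

# Three arbitrary 3D points, standing in for atom coordinates.
points = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 3.0)]

def pairwise_distances(pts):
    """All pairwise Euclidean distances, in a fixed order."""
    return [
        math.dist(p, q)
        for i, p in enumerate(pts)
        for q in pts[i + 1:]
    ]

def translate(pts, dx, dy, dz):
    return [(x + dx, y + dy, z + dz) for x, y, z in pts]

def rotate_z(pts, theta):
    """Rotate all points by angle theta around the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in pts]

d0 = pairwise_distances(points)
d1 = pairwise_distances(translate(points, 5.0, -2.0, 1.0))
d2 = pairwise_distances(rotate_z(points, math.pi / 3))
# d0, d1, and d2 agree up to floating-point error
```

Building this symmetry into the architecture, rather than hoping the model learns it from data, is what makes E(n)-equivariant GNNs sample-efficient on 3D structures such as proteins.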


RFDiffusion was released in November 2022 and is a highly complex system capable of solving a large number of specific tasks in protein design; it has been tested against a variety of metrics and benchmarks. Experimental results show that RFDiffusion solved 23 of 25 benchmark problems in designing protein structural motifs, more than the previous best deep neural network models, and achieved an 18% success rate in designing protein binders. Depending on the target protein, its experimental success rate was 5 to 214 times higher than that of prior methods.
Some experts in the field believe that RFDiffusion may represent “one of the biggest advances in structural biology in the past decade, alongside AlphaFold,” a breakthrough that heavily relies on recent advances in graph neural networks.

Conclusion

Graph Neural Networks are a rapidly developing field with many exciting developments, and the AI research community has significantly increased its attention to this area in recent years. In the industrial sector, applications of graph machine learning across different fields have only recently begun to emerge, with GNN establishing its position as a game changer in some large-scale deployed production models. Recent application cases have brought opportunities for a range of new applications; let’s see what surprises this field will bring us this year.

Reading Club on Graph Neural Networks and Combinatorial Optimization

The solution of numerous real-world problems relies on the design and execution of algorithms. Traditionally, algorithms are designed by human experts, but as AI technology evolves there are more and more cases of algorithms being learned rather than hand-crafted, for example by neural networks; this is what algorithmic neural solving refers to. In this direction, graph neural networks are a powerful tool: they can fully exploit the characteristics of graph structures to achieve efficient approximations of high-complexity algorithms. Optimization and control of complex systems based on graph neural networks will be an important direction following the wave of large models.
The Intelligence Club, in collaboration with Associate Professor Fan Changjun of the School of Systems Engineering at the National University of Defense Technology and Assistant Professor Huang Wenbing of the Gaoling School of Artificial Intelligence at Renmin University, has launched a reading club on “Graph Neural Networks and Combinatorial Optimization”. The club focuses on topics related to graph neural networks and algorithmic neural solving, including neural algorithm reasoning, combinatorial optimization, geometric graph neural networks, and applications of algorithmic neural solving in AI for Science. The reading club began on June 14, 2023, and meets every Wednesday evening from 19:00 to 21:00 for an expected duration of 8 weeks. Anyone interested is welcome to sign up!
For more details, please see:
Accelerating the Efficiency of Classical Algorithms, Breaking Through Real-World Technical Bottlenecks: The Launch of the Reading Club on Graph Neural Networks and Combinatorial Optimization

Recommended Reading

1. Research Dispatch: Graph Neural Networks Predict Propagation Phenomena in Complex Networks
2. Barabási’s Latest Research: Accelerating Network Layout Using Graph Neural Networks
3. Nat. Mach. Intell. Dispatch: Graph Neural Networks Achieve Particle Tracking in Three-Dimensional Fluid Motion
4. “Zhangjiang: Frontiers of Complex Science – 27 Lectures” is Fully Online!
5. Become an Intelligence Club VIP, Unlock All Courses/Reading Clubs
6. Join Intelligence Club, Let’s Explore Complexity Together!
Click “Read Original” to sign up for the reading club
