Article Title: Overview of Robustness Verification for Feedforward and Recurrent Neural Networks
All Authors: Liu Ying, Yang Pengfei, Zhang Lijun, Wu Zhilin, Feng Yuan
First Affiliation: National Key Laboratory of Computer Science (Institute of Software, Chinese Academy of Sciences)
Publication: 2023, 34(7): 3134-3166
Abstract
With the advent of the intelligent era, intelligent systems powered by deep neural networks have permeated many aspects of human life. However, owing to the black-box nature and large scale of neural networks, their predictions are difficult to fully trust, and when they are deployed in safety-critical domains such as autonomous driving, guaranteeing their safety remains a major challenge for both academia and industry. In response, academia has studied one specific safety property of neural networks—robustness—and proposed numerous methods for analyzing and verifying it. To date, verification methods for feedforward neural networks (FNN) comprise exact verification methods and approximate verification methods, and this line of work has flourished; by contrast, research on robustness verification for other kinds of networks, such as recurrent neural networks (RNN), is still in its infancy. This survey outlines the development of deep neural networks and the challenges they face in daily life; comprehensively surveys the robustness verification methods for FNN and RNN, and analyzes and compares the intrinsic connections among these verification methods; surveys safety verification methods for RNN in realistic application scenarios; and identifies future research directions in the field of neural network robustness verification.
Content
This article mainly introduces robustness verification methods for the two most commonly used kinds of neural networks—feedforward neural networks (FNN) and recurrent neural networks (RNN)—and systematically analyzes the connections among these verification methods.
(1) Current Research Status of FNN Verification
Current verification methods for FNN fall into two main categories: exact verification methods based on satisfiability modulo theories (SMT), mixed integer linear programming (MILP), and other optimization techniques; and approximate verification methods based on convex optimization, abstract interpretation, and the like. In the figure below, representative verification methods are selected from each category for comparison. Different colors denote different categories of verification methods; an arrow indicates that the tool it points to improves upon or borrows from the tool it originates from; verification tools inside the same colored rectangle have the same precision.
Figure 1 Comparison of Main FNN Verification Methods
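To make the approximate, abstract-interpretation-based approach concrete, the following is a minimal sketch of verification with the box (interval) abstract domain, often called interval bound propagation. The network, weights, and function names are illustrative assumptions, not taken from any tool in the figure; the method is sound but incomplete, so a return value of False means "not certified" rather than "not robust".

```python
import numpy as np

def interval_affine(lb, ub, W, b):
    """Propagate an interval [lb, ub] through an affine layer x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lb + W_neg @ ub + b, W_pos @ ub + W_neg @ lb + b

def interval_relu(lb, ub):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return np.maximum(lb, 0), np.maximum(ub, 0)

def certify_linf_robustness(layers, x, eps, label):
    """Return True if every point in the L-infinity eps-ball around x
    is certified to keep `label` as the top class (sound, incomplete)."""
    lb, ub = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lb, ub = interval_affine(lb, ub, W, b)
        if i < len(layers) - 1:          # ReLU on all hidden layers
            lb, ub = interval_relu(lb, ub)
    # Certified iff the label's lower bound beats every other upper bound.
    return all(lb[label] > ub[j] for j in range(len(ub)) if j != label)

# Hypothetical 2-4-2 network and input, for illustration only.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 2)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]
x = np.array([0.5, -0.3])
print(certify_linf_robustness(layers, x, eps=0.01, label=0))
```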
A table in the full survey compares the advantages and disadvantages of the various FNN verification methods. The survey also traces the development of FNN and RNN verification methods over time, with a deeper analysis of RNN verification methods and of the connections between FNN and RNN verification methods.
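As a contrast with the interval sketch above, the following is a minimal sketch of how exact, MILP-based methods encode a single ReLU neuron with the standard big-M trick, assuming known pre-activation bounds l ≤ x ≤ u. The bounds and the use of the PuLP library are illustrative choices, not drawn from any specific tool in the comparison.

```python
import pulp

# Exact big-M MILP encoding of one ReLU neuron y = max(x, 0),
# assuming known pre-activation bounds l <= x <= u with l < 0 < u.
l, u = -1.0, 2.0
prob = pulp.LpProblem("relu_big_m", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=l, upBound=u)
y = pulp.LpVariable("y", lowBound=0.0)
a = pulp.LpVariable("a", cat="Binary")  # a = 1 iff the neuron is active

prob += y                       # objective: exact maximum of the output
prob += y >= x                  # with a = 1, these force y = x
prob += y <= x - l * (1 - a)
prob += y <= u * a              # with a = 0, these force y = 0 (and x <= 0)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(y))            # 2.0, attained at x = u
```

Encoding every neuron this way yields exact answers but requires one binary variable per unstable ReLU, which is the source of the precision/efficiency trade-off between the two categories of methods.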
(2) RNN Verification Methods
Existing work on RNN robustness verification falls into three main categories. The first is based on abstract interpretation: the input range of the RNN and its propagation through the network are abstracted into specific geometric shapes—such as box abstract domains, zonotope abstract domains, polytopes, or sets of linear inequalities—and the abstract domain obtained at the output layer is then checked against the desired property. The second unfolds the RNN into an FNN and then applies FNN verification methods to the result. The third extracts an approximating deterministic finite automaton (DFA) or finite state machine from the RNN and verifies its properties with the more mature verification methods for automata models. The connections and comparisons among the various RNN verification methods are shown in the figure below.
Figure 2 RNN Verification Methods; Different Colors Represent Different Categories of Verification Techniques
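The unfolding approach in the second category can be combined with the interval sketch shown earlier: once the recurrence is unrolled over a fixed number of time steps, each step is just another affine-plus-ReLU layer. Below is a minimal illustration for a vanilla ReLU RNN cell; the cell, weights, and perturbation model (an independent L-infinity ball around each input frame) are simplifying assumptions made for illustration, not the formulation of any surveyed tool.

```python
import numpy as np

def unroll_rnn_interval(Wx, Wh, b, x_seq, eps):
    """Unfold a vanilla ReLU RNN h_t = relu(Wx x_t + Wh h_{t-1} + b) over time
    and propagate interval bounds, treating each step as one FNN layer.
    Each input frame x_t is perturbed within its own L-infinity eps-ball."""
    h_lb = h_ub = np.zeros(Wh.shape[0])       # h_0 = 0
    for x_t in x_seq:
        x_lb, x_ub = x_t - eps, x_t + eps
        # Affine part: split weights by sign to get sound interval bounds.
        Wx_p, Wx_n = np.maximum(Wx, 0), np.minimum(Wx, 0)
        Wh_p, Wh_n = np.maximum(Wh, 0), np.minimum(Wh, 0)
        lb = Wx_p @ x_lb + Wx_n @ x_ub + Wh_p @ h_lb + Wh_n @ h_ub + b
        ub = Wx_p @ x_ub + Wx_n @ x_lb + Wh_p @ h_ub + Wh_n @ h_lb + b
        h_lb, h_ub = np.maximum(lb, 0), np.maximum(ub, 0)   # ReLU
    return h_lb, h_ub   # bounds on the final hidden state

# Hypothetical RNN: 3 hidden units, 2-dimensional inputs, 4 time steps.
rng = np.random.default_rng(1)
Wx, Wh, b = rng.normal(size=(3, 2)), rng.normal(size=(3, 3)), np.zeros(3)
x_seq = rng.normal(size=(4, 2))
print(unroll_rnn_interval(Wx, Wh, b, x_seq, eps=0.05))
```

Because the over-approximation accumulates across time steps, box bounds loosen quickly on long sequences; this is one motivation for the tighter abstract domains, such as zonotopes and polytopes, used by the first category of methods.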
(3) Future Research Directions
This article mainly introduces research on the robustness verification of FNN and RNN and explores the intrinsic connections between the two. Our analysis shows that, while academia has produced numerous local robustness verification methods for FNN, research on the robustness properties of RNN and on other kinds of robustness properties is still in its infancy, with relatively few verification methods available. Overall, these formal verification methods can now scale to large networks such as VGG and ResNet, but bottlenecks remain when verifying the large, complex systems found in industry. To further advance the field of neural network robustness verification, we summarize several promising directions for future research.
① Study properties beyond local robustness, such as probabilistic robustness, geometric robustness, fairness, safety, and interpretability.
② Verify neural networks from a different perspective. Both the robustness verification of neural networks and the correctness analysis of programs essentially belong to the domain of model checking. Taking this novel perspective, Liu et al. proposed a ReLU temporal logic (ReTL) for ReLU-activated neural networks, defined its semantics, and verified properties such as reachability of neural networks—a highly innovative approach to verifying model robustness.
③ Study the robustness of other types of neural networks.
④ Research the safety of large intelligent systems.
(4) Conclusion
Research on the robustness of neural networks is developing rapidly within the field of formal verification. As more and more intelligent systems are deployed in real life, the robustness verification of neural networks will only grow in importance. Although robustness research on FNN is extensive, current work is still caught in the trade-off between precision and efficiency when verifying medium-scale networks, and is difficult to extend to the large neural networks used in practical applications. Moreover, verification of other robustness properties, such as global robustness and probabilistic robustness, remains scarce. At the same time, our understanding of the prediction mechanisms and principles of neural networks is still limited, which makes it hard to apply these formal verification methods to industrial intelligent systems that contain intelligent components such as neural networks. Because of their more complex structure, RNNs have received comparatively little attention with respect to local robustness, and the networks that can be verified contain only a few hundred to a thousand neurons. This article analyzes a large body of related work on FNN and RNN verification, aiming to outline the state of development of FNN and RNN verification methods and to reveal the intrinsic connections between the robustness verification methods for the two kinds of networks. We conclude by discussing some other definitions of robustness for neural networks and their verification methods, as well as potential research directions in the field of neural network robustness verification, in the hope of further promoting research in this area.
Author Biography
Liu Ying, Master’s student, main research area is model verification based on learning algorithms.
Yang Pengfei, PhD, CCF professional member; main research areas are artificial intelligence safety and probabilistic model checking.
Zhang Lijun, PhD, researcher, doctoral supervisor, CCF senior member; main research areas are probabilistic model checking, protocol verification, and learning algorithms.
Wu Zhilin, PhD, researcher, CCF professional member; main research areas are automata and logic, formal analysis and verification of software, and database theory.
Feng Yuan, PhD, professor, doctoral supervisor; main research areas are quantum computing, program theory, and quantum program verification.