As we all know, programmers often have to search online for a great deal of information while coding, most of it code snippets, and this can be exhausting and time-consuming. So today we are sharing a reprint of a Zhihu article that collects commonly used PyTorch code snippets, in the hope of helping the many programmers hard at work at their desks!
This code is based on PyTorch version 1.x and requires the following packages:
import collections
import os
import shutil
import tqdm
import numpy as np
import PIL.Image
import torch
import torchvision
Check PyTorch Version
torch.__version__ # PyTorch version
torch.version.cuda # Corresponding CUDA version
torch.backends.cudnn.version() # Corresponding cuDNN version
torch.cuda.get_device_name(0) # GPU type
Update PyTorch
PyTorch is installed in the anaconda3/lib/python3.7/site-packages/torch/ directory.
conda update pytorch torchvision -c pytorch
Set Random Seed
torch.manual_seed(0)
torch.cuda.manual_seed_all(0)
Specify Program to Run on Specific GPU
Specify environment variable in command line
CUDA_VISIBLE_DEVICES=0,1 python train.py
Or specify in code
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
Check if CUDA is supported
torch.cuda.is_available()
Set to cuDNN Benchmark Mode
Benchmark mode improves computation speed, but because the algorithm selection involves nondeterminism, the results of each forward pass may vary slightly.
torch.backends.cudnn.benchmark = True
If you want to avoid such result fluctuations, set
torch.backends.cudnn.deterministic = True
Clear GPU Memory
Sometimes after a run is interrupted with Control-C, GPU memory is not released in time and needs to be cleared manually. In PyTorch you can call
torch.cuda.empty_cache()
Or in the command line, first use ps to find the program’s PID, then use kill to terminate that process
ps aux | grep python
kill -9 [pid]
Or directly reset the GPU that has not been cleared
nvidia-smi --gpu-reset -i [gpu_id]
Basic Information of Tensor
tensor.type() # Data type
tensor.size() # Shape of the tensor. It is a subclass of Python tuple
tensor.dim() # Number of dimensions.
Data Type Conversion
# Set default tensor type. Float in PyTorch is much faster than double.
torch.set_default_tensor_type(torch.FloatTensor)
# Type conversions.
tensor = tensor.cuda()
tensor = tensor.cpu()
tensor = tensor.float()
tensor = tensor.long()
torch.Tensor and np.ndarray Conversion
# torch.Tensor -> np.ndarray.
ndarray = tensor.cpu().numpy()
# np.ndarray -> torch.Tensor.
tensor = torch.from_numpy(ndarray).float()
tensor = torch.from_numpy(ndarray.copy()).float() # If ndarray has negative stride
torch.Tensor and PIL.Image Conversion
In PyTorch, image tensors use the channel-first D×H×W layout (N×D×H×W with a batch dimension) and values in [0, 1], so converting to or from PIL.Image requires permuting dimensions and rescaling.
# torch.Tensor -> PIL.Image.
image = PIL.Image.fromarray(torch.clamp(tensor * 255, min=0, max=255).byte().permute(1, 2, 0).cpu().numpy())
image = torchvision.transforms.functional.to_pil_image(tensor)  # Equivalent way
# PIL.Image -> torch.Tensor.
tensor = torch.from_numpy(np.asarray(PIL.Image.open(path))).permute(2, 0, 1).float() / 255
tensor = torchvision.transforms.functional.to_tensor(PIL.Image.open(path))  # Equivalent way
np.ndarray and PIL.Image Conversion
# np.ndarray -> PIL.Image.
image = PIL.Image.fromarray(ndarray.astype(np.uint8))
# PIL.Image -> np.ndarray.
ndarray = np.asarray(PIL.Image.open(path))
Extract Value from Tensor Containing Only One Element
This is particularly useful for tracking the loss during training. Without .item(), the loss tensor keeps its computation graph alive, causing GPU memory usage to grow steadily.
value = tensor.item()
Tensor Reshaping
Tensor reshaping is often needed to feed convolutional features into a fully connected layer. Compared with Tensor.view, torch.reshape automatically handles the case where the input tensor is not contiguous.
tensor = torch.reshape(tensor, shape)
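For example, a tensor is often non-contiguous after a transpose; a minimal sketch showing where view fails but reshape works:
x = torch.rand(2, 3, 4).transpose(1, 2)  # Shape 2*4*3, non-contiguous
# x.view(2, 12) would raise a RuntimeError here.
y = torch.reshape(x, (2, 12))            # Works: copies the data if necessary
z = x.contiguous().view(2, 12)           # The equivalent manual fix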
Shuffle Order
tensor = tensor[torch.randperm(tensor.size(0))] # Shuffle the first dimension
Horizontal Flip
PyTorch does not support negative-step slicing such as tensor[::-1], but horizontal flipping can be achieved with index selection.
# Assume tensor has shape N*D*H*W.
tensor = tensor[:, :, :, torch.arange(tensor.size(3) - 1, -1, -1).long()]
Copying Tensors
There are three ways to copy, corresponding to different needs.
# Operation | New/Shared memory | Still in computation graph |
tensor.clone() # | New | Yes |
tensor.detach() # | Shared | No |
tensor.detach().clone()  # | New | No |
Concatenate Tensors
Note that the difference between torch.cat and torch.stack is that torch.cat concatenates along the specified dimension, while torch.stack adds a dimension. For example, if the parameters are 3 tensors of size 10×5, the result of torch.cat is a tensor of size 30×5, while the result of torch.stack is a tensor of size 3×10×5.
tensor = torch.cat(list_of_tensors, dim=0)
tensor = torch.stack(list_of_tensors, dim=0)
Convert Integer Labels to One-Hot Encoding
Note that class labels in PyTorch start from 0 by default.
N = tensor.size(0)
one_hot = torch.zeros(N, num_classes).long()
one_hot.scatter_(dim=1, index=torch.unsqueeze(tensor, dim=1), src=torch.ones(N, num_classes).long())
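In PyTorch 1.1 and later, torch.nn.functional.one_hot offers a one-line equivalent:
one_hot = torch.nn.functional.one_hot(tensor, num_classes)  # N -> N*num_classes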
Get Non-Zero/Zero Elements
torch.nonzero(tensor) # Index of non-zero elements
torch.nonzero(tensor == 0) # Index of zero elements
torch.nonzero(tensor).size(0) # Number of non-zero elements
torch.nonzero(tensor == 0).size(0) # Number of zero elements
Tensor Expansion
# Expand tensor of shape 64*512 to shape 64*512*7*7.
torch.reshape(tensor, (64, 512, 1, 1)).expand(64, 512, 7, 7)
Matrix Multiplication
# Matrix multiplication: (m*n) * (n*p) -> (m*p).
result = torch.mm(tensor1, tensor2)
# Batch matrix multiplication: (b*m*n) * (b*n*p) -> (b*m*p).
result = torch.bmm(tensor1, tensor2)
# Element-wise multiplication.
result = tensor1 * tensor2
Calculate Pairwise Euclidean Distance Between Two Sets of Data
# X1 is of shape m*d.
X1 = torch.unsqueeze(X1, dim=1).expand(m, n, d)
# X2 is of shape n*d.
X2 = torch.unsqueeze(X2, dim=0).expand(m, n, d)
# dist is of shape m*n, where dist[i, j] = ||X1[i, :] - X2[j, :]||
dist = torch.sqrt(torch.sum((X1 - X2) ** 2, dim=2))
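In recent PyTorch versions, torch.cdist computes the same pairwise distances directly from the original (unexpanded) m*d and n*d matrices:
dist = torch.cdist(X1, X2)  # X1: m*d, X2: n*d -> dist: m*n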
Convolution Layer
The most commonly used convolution layer configurations are
conv = torch.nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1, bias=True)
conv = torch.nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=True)
If the convolution layer configuration is complex and the output size is inconvenient to compute by hand, the following visualization tool can help
Link: https://ezyang.github.io/convolution-visualizer/index.html
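For reference, the spatial output size of torch.nn.Conv2d follows the formula from the PyTorch documentation:
# H_out = floor((H_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)
# E.g. kernel_size=3, stride=1, padding=1 (as above) keeps H_out == H_in.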
Global Average Pooling (GAP) Layer
gap = torch.nn.AdaptiveAvgPool2d(output_size=1)
Bilinear Pooling
X = torch.reshape(X, (N, D, H * W))  # Assume X has shape N*D*H*W
X = torch.bmm(X, torch.transpose(X, 1, 2)) / (H * W) # Bilinear pooling
assert X.size() == (N, D, D)
X = torch.reshape(X, (N, D * D))
X = torch.sign(X) * torch.sqrt(torch.abs(X) + 1e-5) # Signed-sqrt normalization
X = torch.nn.functional.normalize(X) # L2 normalization
Multi-GPU Synchronized Batch Normalization
When using torch.nn.DataParallel to run code on multiple GPUs, the default operation of PyTorch’s BN layer is to calculate the mean and standard deviation of each card independently. Synchronized BN uses data from all cards to calculate the mean and standard deviation of the BN layer, alleviating the issue of inaccurate mean and standard deviation estimates when the batch size is small, which is an effective performance enhancement technique in tasks such as object detection.
Link: https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
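Since PyTorch 1.1, a built-in torch.nn.SyncBatchNorm is also available; it works with torch.nn.parallel.DistributedDataParallel rather than DataParallel. A minimal sketch (local_rank is assumed to come from the distributed launcher):
# Convert every BN layer in an existing model to synchronized BN.
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])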
Similar to BN Sliding Average
To implement an operation similar to BN's sliding average, register the running statistic as a buffer and update it with in-place operations in the forward function.
class BN(torch.nn.Module):
    def __init__(self):
        ...
        self.register_buffer('running_mean', torch.zeros(num_features))

    def forward(self, X):
        ...
        self.running_mean += momentum * (current - self.running_mean)
Calculate Total Number of Model Parameters
num_parameters = sum(torch.numel(parameter) for parameter in model.parameters())
Similar to Keras model.summary() Output Model Information
Link: https://github.com/sksq96/pytorch-summary
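A minimal usage sketch, assuming the linked package is installed (pip install torchsummary) and the model expects 3×224×224 input:
from torchsummary import summary
summary(model, input_size=(3, 224, 224))  # Prints per-layer output shapes and parameter counts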
Model Weight Initialization
Note the difference between model.modules() and model.children(): model.modules() iterates over all submodules of the model recursively, while model.children() only iterates over the model's direct children.
# Common practice for initialization.
for layer in model.modules():
    if isinstance(layer, torch.nn.Conv2d):
        torch.nn.init.kaiming_normal_(layer.weight, mode='fan_out',
                                      nonlinearity='relu')
        if layer.bias is not None:
            torch.nn.init.constant_(layer.bias, val=0.0)
    elif isinstance(layer, torch.nn.BatchNorm2d):
        torch.nn.init.constant_(layer.weight, val=1.0)
        torch.nn.init.constant_(layer.bias, val=0.0)
    elif isinstance(layer, torch.nn.Linear):
        torch.nn.init.xavier_normal_(layer.weight)
        if layer.bias is not None:
            torch.nn.init.constant_(layer.bias, val=0.0)
# Initialization with given tensor.
layer.weight = torch.nn.Parameter(tensor)
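To see the modules()/children() difference described above, compare the two iterators on a small, hypothetical nested model:
model = torch.nn.Sequential(torch.nn.Linear(2, 2),
                            torch.nn.Sequential(torch.nn.ReLU()))
print(list(model.children()))  # 2 entries: the Linear and the inner Sequential
print(list(model.modules()))   # 4 entries: the model itself, then it recurses into the ReLU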
Use a Pre-trained Model for Some Layers
Note that if the saved model was wrapped in torch.nn.DataParallel, the current model also needs to be wrapped in it before loading.
model.load_state_dict(torch.load('model.pth'), strict=False)
Load Model Saved on GPU to CPU
model.load_state_dict(torch.load('model.pth', map_location='cpu'))
Get Basic Information of Video Data
import cv2
video = cv2.VideoCapture(mp4_path)
height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
num_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
fps = int(video.get(cv2.CAP_PROP_FPS))
video.release()
TSN: Sample One Video Frame per Segment
K = self._num_segments
if is_train:
    if num_frames > K:
        # Random index for each segment.
        frame_indices = torch.randint(
            high=num_frames // K, size=(K,), dtype=torch.long)
        frame_indices += num_frames // K * torch.arange(K)
    else:
        frame_indices = torch.randint(
            high=num_frames, size=(K - num_frames,), dtype=torch.long)
        frame_indices = torch.sort(torch.cat((
            torch.arange(num_frames), frame_indices)))[0]
else:
    if num_frames > K:
        # Middle index for each segment.
        frame_indices = num_frames // K // 2
        frame_indices += num_frames // K * torch.arange(K)
    else:
        frame_indices = torch.sort(torch.cat((
            torch.arange(num_frames), torch.arange(K - num_frames))))[0]
assert frame_indices.size() == (K,)
return [frame_indices[i] for i in range(K)]
Extract Convolution Features from a Certain Layer of ImageNet Pre-trained Model
# VGG-16 relu5-3 feature.
model = torchvision.models.vgg16(pretrained=True).features[:-1]
# VGG-16 pool5 feature.
model = torchvision.models.vgg16(pretrained=True).features
# VGG-16 fc7 feature.
model = torchvision.models.vgg16(pretrained=True)
model.classifier = torch.nn.Sequential(*list(model.classifier.children())[:-3])
# ResNet GAP feature.
model = torchvision.models.resnet18(pretrained=True)
model = torch.nn.Sequential(collections.OrderedDict(
    list(model.named_children())[:-1]))
with torch.no_grad():
    model.eval()
    conv_representation = model(image)
Extract Convolution Features from Multiple Layers of ImageNet Pre-trained Model
class FeatureExtractor(torch.nn.Module):
    """Helper class to extract several convolution features from the given
    pre-trained model.

    Attributes:
        _model, torch.nn.Module.
        _layers_to_extract, list<str> or set<str>

    Example:
        >>> model = torchvision.models.resnet152(pretrained=True)
        >>> model = torch.nn.Sequential(collections.OrderedDict(
                list(model.named_children())[:-1]))
        >>> conv_representation = FeatureExtractor(
                pretrained_model=model,
                layers_to_extract={'layer1', 'layer2', 'layer3', 'layer4'})(image)
    """
    def __init__(self, pretrained_model, layers_to_extract):
        torch.nn.Module.__init__(self)
        self._model = pretrained_model
        self._model.eval()
        self._layers_to_extract = set(layers_to_extract)

    def forward(self, x):
        with torch.no_grad():
            conv_representation = []
            for name, layer in self._model.named_children():
                x = layer(x)
                if name in self._layers_to_extract:
                    conv_representation.append(x)
            return conv_representation
Other Pre-trained Models
Link: https://github.com/Cadene/pretrained-models.pytorch
Fine-tune Fully Connected Layer
model = torchvision.models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False
model.fc = torch.nn.Linear(512, 100)  # Replace the last fc layer
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9, weight_decay=1e-4)
Use Larger Learning Rate to Fine-tune Fully Connected Layer, Smaller Learning Rate to Fine-tune Convolution Layer
model = torchvision.models.resnet18(pretrained=True)
finetuned_parameters = list(map(id, model.fc.parameters()))
conv_parameters = (p for p in model.parameters() if id(p) not in finetuned_parameters)
parameters = [{'params': conv_parameters, 'lr': 1e-3},
              {'params': model.fc.parameters()}]
optimizer = torch.optim.SGD(parameters, lr=1e-2, momentum=0.9, weight_decay=1e-4)
Common Training and Validation Data Preprocessing
Note that the ToTensor operation converts a PIL.Image or an np.ndarray of shape H×W×D with values in [0, 255] into a torch.Tensor of shape D×H×W with values in [0.0, 1.0].
train_transform = torchvision.transforms.Compose([
    torchvision.transforms.RandomResizedCrop(size=224, scale=(0.08, 1.0)),
    torchvision.transforms.RandomHorizontalFlip(),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize(mean=(0.485, 0.456, 0.406),
                                     std=(0.229, 0.224, 0.225)),
])
val_transform = torchvision.transforms.Compose([
    torchvision.transforms.Resize(224),
    torchvision.transforms.CenterCrop(224),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize(mean=(0.485, 0.456, 0.406),
                                     std=(0.229, 0.224, 0.225)),
])
Basic Code Framework for Training
for t in range(80):
    for images, labels in tqdm.tqdm(train_loader, desc='Epoch %3d' % (t + 1)):
        images, labels = images.cuda(), labels.cuda()
        scores = model(images)
        loss = loss_function(scores, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
Label Smoothing
for images, labels in train_loader:
    images, labels = images.cuda(), labels.cuda()
    N = labels.size(0)
    # C is the number of classes.
    smoothed_labels = torch.full(size=(N, C), fill_value=0.1 / (C - 1)).cuda()
    smoothed_labels.scatter_(dim=1, index=torch.unsqueeze(labels, dim=1), value=0.9)
    score = model(images)
    log_prob = torch.nn.functional.log_softmax(score, dim=1)
    loss = -torch.sum(log_prob * smoothed_labels) / N
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
Mixup
beta_distribution = torch.distributions.beta.Beta(alpha, alpha)
for images, labels in train_loader:
    images, labels = images.cuda(), labels.cuda()
    # Mixup images.
    lambda_ = beta_distribution.sample([]).item()
    index = torch.randperm(images.size(0)).cuda()
    mixed_images = lambda_ * images + (1 - lambda_) * images[index, :]
    # Mixup loss.
    scores = model(mixed_images)
    loss = (lambda_ * loss_function(scores, labels)
            + (1 - lambda_) * loss_function(scores, labels[index]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
L1 Regularization
l1_regularization = torch.nn.L1Loss(reduction='sum')
loss = ...  # Standard cross-entropy loss
for param in model.parameters():
    loss += l1_regularization(param, torch.zeros_like(param))
loss.backward()
Do not apply L2 regularization/weight decay to bias terms
bias_list = (param for name, param in model.named_parameters() if name[-4:] == 'bias')
others_list = (param for name, param in model.named_parameters() if name[-4:] != 'bias')
parameters = [{'params': bias_list, 'weight_decay': 0},
              {'params': others_list}]
optimizer = torch.optim.SGD(parameters, lr=1e-2, momentum=0.9, weight_decay=1e-4)
Gradient Clipping
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=20)
Calculate Accuracy of Softmax Output
score = model(images)
prediction = torch.argmax(score, dim=1)
num_correct = torch.sum(prediction == labels).item()
accuracy = num_correct / labels.size(0)
Visualize the Computation Graph of Model Feedforward
Link: https://github.com/szagoruyko/pytorchviz
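A minimal usage sketch, assuming the linked package is installed (pip install torchviz) and x is a sample input batch:
from torchviz import make_dot
y = model(x)
make_dot(y, params=dict(model.named_parameters())).render('graph')  # Writes graph.pdf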
Visualize Learning Curves
There are two options: Visdom, developed by Facebook, and TensorBoard (e.g. via tensorboardX).
https://github.com/facebookresearch/visdom
https://github.com/lanpa/tensorboardX
# Example using Visdom.
import visdom
vis = visdom.Visdom(env='Learning curve', use_incoming_socket=False)
assert vis.check_connection()
vis.close()
options = collections.namedtuple('Options', ['loss', 'acc', 'lr'])(
    loss={'xlabel': 'Epoch', 'ylabel': 'Loss', 'showlegend': True},
    acc={'xlabel': 'Epoch', 'ylabel': 'Accuracy', 'showlegend': True},
    lr={'xlabel': 'Epoch', 'ylabel': 'Learning rate', 'showlegend': True})

for t in range(80):
    train(...)
    val(...)
    vis.line(X=torch.Tensor([t + 1]), Y=torch.Tensor([train_loss]),
             name='train', win='Loss', update='append', opts=options.loss)
    vis.line(X=torch.Tensor([t + 1]), Y=torch.Tensor([val_loss]),
             name='val', win='Loss', update='append', opts=options.loss)
    vis.line(X=torch.Tensor([t + 1]), Y=torch.Tensor([train_acc]),
             name='train', win='Accuracy', update='append', opts=options.acc)
    vis.line(X=torch.Tensor([t + 1]), Y=torch.Tensor([val_acc]),
             name='val', win='Accuracy', update='append', opts=options.acc)
    vis.line(X=torch.Tensor([t + 1]), Y=torch.Tensor([lr]),
             win='Learning rate', update='append', opts=options.lr)
Get Current Learning Rate
# If there is one global learning rate (which is the common case).
lr = next(iter(optimizer.param_groups))['lr']
# If there are multiple learning rates for different layers.
all_lr = []
for param_group in optimizer.param_groups:
    all_lr.append(param_group['lr'])
Learning Rate Decay
# Reduce learning rate when validation accuracy plateaus.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='max', patience=5, verbose=True)
for t in range(0, 80):
    train(...); val(...)
    scheduler.step(val_acc)

# Cosine annealing learning rate.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=80)

# Divide the learning rate by 10 at the given epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50, 70], gamma=0.1)
for t in range(0, 80):
    scheduler.step()
    train(...); val(...)

# Learning rate warmup over the first 10 epochs.
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda t: t / 10)
for t in range(0, 10):
    scheduler.step()
    train(...); val(...)
Save and Load Checkpoints
Note that in order to resume training, we need to save both the model and optimizer states, as well as the current training epoch.
# Save checkpoint.
is_best = current_acc > best_acc
best_acc = max(best_acc, current_acc)
checkpoint = {
    'best_acc': best_acc,
    'epoch': t + 1,
    'model': model.state_dict(),
    'optimizer': optimizer.state_dict(),
}
model_path = os.path.join('model', 'checkpoint.pth.tar')
best_model_path = os.path.join('model', 'best_model.pth.tar')
torch.save(checkpoint, model_path)
if is_best:
    shutil.copy(model_path, best_model_path)

# Load checkpoint.
if resume:
    model_path = os.path.join('model', 'checkpoint.pth.tar')
    assert os.path.isfile(model_path)
    checkpoint = torch.load(model_path)
    best_acc = checkpoint['best_acc']
    start_epoch = checkpoint['epoch']
    model.load_state_dict(checkpoint['model'])
    optimizer.load_state_dict(checkpoint['optimizer'])
    print('Load checkpoint at epoch %d.' % start_epoch)
Calculate Accuracy, Precision, and Recall
# data['label'] and data['prediction'] are groundtruth label and prediction
# for each image, respectively.
accuracy = np.mean(data['label'] == data['prediction']) * 100
# Compute precision and recall for each class.
for c in range(num_classes):
    tp = np.dot((data['label'] == c).astype(int),
                (data['prediction'] == c).astype(int))
    tp_fp = np.sum(data['prediction'] == c)
    tp_fn = np.sum(data['label'] == c)
    precision = tp / tp_fp * 100
    recall = tp / tp_fn * 100
Model Definition
- It is recommended to define layers with parameters and pooling layers via torch.nn, and to call activation functions directly from torch.nn.functional. The difference between the two: torch.nn modules call torch.nn.functional under the hood, but they also hold the layer parameters and handle the training/testing network states for you. When using torch.nn.functional yourself, be mindful of the network state, for example
def forward(self, x):
    ...
    x = torch.nn.functional.dropout(x, p=0.5, training=self.training)
- Switch the network state with model.train() and model.eval() before calling model(x).
- Wrap code blocks that do not require gradient computation in with torch.no_grad(). The difference between model.eval() and torch.no_grad(): model.eval() switches the network to its testing state (for example, BN and dropout compute differently in the training and testing phases), while torch.no_grad() turns off PyTorch's automatic gradient tracking to reduce memory usage and speed up computation; results produced under it cannot be used for loss.backward().
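A minimal inference sketch combining the two (model and val_loader are assumed to exist):
model.eval()  # Switch BN/dropout to their testing behavior.
with torch.no_grad():  # Turn off gradient tracking.
    for images, labels in val_loader:
        scores = model(images.cuda())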
- Input to torch.nn.CrossEntropyLoss does not need to go through softmax first: torch.nn.CrossEntropyLoss is equivalent to torch.nn.functional.log_softmax followed by torch.nn.NLLLoss.
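The equivalence can be checked directly; a small sketch with random scores:
scores = torch.rand(4, 10)  # N=4 samples, C=10 classes, unnormalized
labels = torch.randint(high=10, size=(4,))
loss1 = torch.nn.functional.cross_entropy(scores, labels)
loss2 = torch.nn.functional.nll_loss(
    torch.nn.functional.log_softmax(scores, dim=1), labels)
assert torch.allclose(loss1, loss2)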
- Use optimizer.zero_grad() to clear accumulated gradients before loss.backward(). optimizer.zero_grad() and model.zero_grad() have the same effect.
PyTorch Performance and Debugging
- In torch.utils.data.DataLoader, set pin_memory=True where possible; for particularly small datasets such as MNIST, pin_memory=False can actually be faster. The fastest num_workers value needs to be found experimentally.
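A typical starting configuration, assuming a train_dataset object; the batch size and worker count are placeholders to tune:
train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=256, shuffle=True,
    num_workers=4, pin_memory=True)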
- Use del to delete unused intermediate variables promptly to save GPU memory.
- Using in-place operations can save GPU memory, for example
x = torch.nn.functional.relu(x, inplace=True)
- Reduce data transfer between the CPU and GPU. For example, to track the loss and accuracy of every mini-batch in an epoch, accumulating them on the GPU and transferring them back to the CPU once at the end of the epoch is faster than transferring after each mini-batch.
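A sketch of the idea, reusing the training-loop variables from earlier:
epoch_loss = torch.zeros(1).cuda()  # Accumulator lives on the GPU.
for images, labels in train_loader:
    ...
    epoch_loss += loss.detach()  # No .item() here, so no transfer per batch.
print(epoch_loss.item() / len(train_loader))  # One transfer per epoch.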
- Using half-precision floating point via half() can yield some speedup, but the actual efficiency depends on the GPU model; be cautious of stability issues caused by the lower numerical precision.
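A minimal sketch of half-precision inference (training in fp16 generally needs extra care, such as loss scaling):
model = model.cuda().half()
images = images.cuda().half()
with torch.no_grad():
    scores = model(images)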
- Frequently use assert tensor.size() == (N, D, H, W) as a debugging tool to ensure tensor dimensions match your expectations.
- Except for labels y, avoid using one-dimensional tensors; use n*1 two-dimensional tensors instead to avoid unexpected results from one-dimensional tensor calculations.
- Profile the time spent in each part of the code
with torch.autograd.profiler.profile(enabled=True, use_cuda=False) as profile:
    ...
print(profile)
Or run in the command line
python -m torch.utils.bottleneck main.py
Thanks to @Somewhat Flowing Years and @El tnoto for their corrections. Due to the author’s limited knowledge, and the constraints of time and energy, there may inevitably be errors in the code; readers are encouraged to criticize and correct them.
- PyTorch Official Code: pytorch/examples (https://link.zhihu.com/?target=https%3A//github.com/pytorch/examples)
- PyTorch Forum: PyTorch Forums (https://link.zhihu.com/?target=https%3A//discuss.pytorch.org/latest%3Forder%3Dviews)
- PyTorch Documentation: http://pytorch.org/docs/stable/index.html (https://link.zhihu.com/?target=http%3A//pytorch.org/docs/stable/index.html)
- Other publicly available PyTorch implementations are too numerous to list one by one.
Original Zhihu link: https://zhuanlan.zhihu.com/p/59205847?