Current Research Status of Transformer-Based Object Detection Algorithms
Inspired by these studies, Shilong Liu et al. conducted an in-depth study of the cross-attention module in the Transformer decoder and proposed DAB-DETR, which uses 4D box coordinates (x, y, w, h), i.e., anchor boxes, as the queries in DETR and updates them layer by layer. This query formulation introduces a better spatial prior into the cross-attention module, simplifies the implementation, and clarifies the role of queries in DETR.
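To make the idea concrete, below is a minimal PyTorch sketch of one decoder layer that derives its positional query from a 4D anchor box and refines that box for the next layer. This is an illustrative simplification rather than the authors' implementation: the names (AnchorBoxDecoderLayer, anchor_to_pos, box_head, inverse_sigmoid) are assumptions, and the small MLP that maps the anchor to a positional query stands in for the sinusoidal, width/height-modulated encoding used in the paper.

```python
import torch
import torch.nn as nn


def inverse_sigmoid(x, eps=1e-5):
    # Map a normalized coordinate from (0, 1) back to logit space so the
    # predicted delta can be added before re-applying the sigmoid.
    x = x.clamp(eps, 1 - eps)
    return torch.log(x / (1 - x))


class AnchorBoxDecoderLayer(nn.Module):
    """One decoder layer whose positional query comes from a 4D anchor box
    (x, y, w, h); the box is refined from the layer's output."""

    def __init__(self, d_model=256, nhead=8):
        super().__init__()
        # Small MLP standing in for the sinusoidal anchor encoding (assumption).
        self.anchor_to_pos = nn.Sequential(
            nn.Linear(4, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        self.cross_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.box_head = nn.Linear(d_model, 4)  # predicts a refinement delta

    def forward(self, content, anchors, memory):
        # content: (B, Q, d_model) content queries
        # anchors: (B, Q, 4) normalized (x, y, w, h) boxes in [0, 1]
        # memory:  (B, HW, d_model) flattened encoder features
        pos_query = self.anchor_to_pos(anchors)   # spatial prior from the box
        q = content + pos_query                   # query = content + positional part
        attn_out, _ = self.cross_attn(q, memory, memory)
        content = self.norm1(content + attn_out)
        content = self.norm2(content + self.ffn(content))
        # Layer-by-layer update: add the delta in logit space, then squash
        # back to [0, 1] to obtain the anchors used by the next layer.
        new_anchors = torch.sigmoid(self.box_head(content) + inverse_sigmoid(anchors))
        return content, new_anchors.detach()


# Usage sketch: stack layers and feed each layer's refined anchors to the next.
layers = nn.ModuleList([AnchorBoxDecoderLayer() for _ in range(6)])
content = torch.zeros(2, 100, 256)   # content queries (learned in practice)
anchors = torch.rand(2, 100, 4)      # initial anchor boxes
memory = torch.rand(2, 1024, 256)    # encoder output
for layer in layers:
    content, anchors = layer(content, anchors, memory)
```

Stacking several such layers and passing the refined anchors from one layer to the next reproduces the layer-by-layer update described above; in practice the initial anchors and content queries are learned parameters rather than the random tensors used in this sketch.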