Without the markings, can you spot the repeated parts in these images?
Source: Papers accused of image misconduct
The above images are from a paper under investigation for academic misconduct.[1] Problematic images of this kind have become increasingly common. Elisabeth Bik and her colleagues quantified and analyzed images from over 20,000 papers published between 1995 and 2014, finding that nearly 4% of the papers contained suspicious images. Bik noted that as the number of published papers continues to rise, the number of images in papers is also increasing, and the number of suspicious images is likely to grow as well.[2]
In recent years, following the attention paid to plagiarism and text copying in papers, the integrity issues surrounding images in papers have also begun to receive scrutiny. It is equally important to clarify the boundaries of acceptable image manipulation in scientific papers and to identify what constitutes image misconduct.
Is Image Reuse Academic Misconduct?
Not all image reuse constitutes academic misconduct. In some cases, images are legitimately reused within the same paper for comparative purposes; in others, because a paper contains a large number of images, duplication may occur unintentionally.
Image reuse involving academic misconduct[3]:
Repeated use of images within a single article: This means that identical images appear in the same article while being presented as different results, or that an image is cut and spliced in different ways within the same article to represent different results. Images identified as academic misconduct are often those that have been altered and then reused.
Forms of image alteration:
Image revision: Modifying the original image through scaling, cropping, or editing without declaring the image’s source.
Image merging: Combining several images into one without proper explanation or declaration, which can mislead readers regarding the authenticity of the image.
Deliberate alteration of images: Intentionally modifying or changing images to indicate experimental results, which constitutes image manipulation and is a serious form of image fraud.
Image recombination: Combining different images or parts of images to present as new experimental images, which also represents a serious distortion of image information.
(Source: Public Account Pubpeer)
Repeated use of images across multiple articles: Using the same image in different articles is currently one of the more commonly discovered forms of improper image reuse and is generally recognized as academic misconduct. This behavior may also imply issues related to paper mills or the sale of experimental images.
Image Enhancement = Fabrication and Alteration?
Besides academic misconduct arising from the partial or complete repetition of images in papers, excessive enhancement of images can also distort image information and ultimately be identified as image alteration. However, not all “photoshopping” equates to fabrication or alteration.
China Science Daily interviewed relevant experts on this issue. Yan Jun, director of the Academy of Aerospace and the Journal Department, believes that some degree of image processing is unavoidable in certain fields or experiments, and that authors are permitted to carry out reasonable, standardized processing of images. For instance, removing noise from images or enhancing specific effects in fluorescence microscopy is acceptable. “As long as the information in the original image is not hidden, eliminated, or distorted, (image processing) does not fall under academic misconduct.”[4]
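To make this distinction concrete, the sketch below shows the kind of uniform, whole-image processing described above, performed only on a copy of the original file. It is a minimal illustration, not a recommended workflow; the file names and parameter values are hypothetical.

```python
# A minimal sketch of "reasonable, uniform" image processing: the same operation is
# applied to the entire image, the original file is left untouched, and each step
# could be reported in the methods section. File names and parameters are hypothetical.
import numpy as np
from PIL import Image, ImageFilter

original = Image.open("micrograph_original.png")   # keep the original file unchanged
work = original.copy().convert("L")                # process only a copy

# Uniform adjustments applied to the whole image (generally acceptable):
work = work.filter(ImageFilter.GaussianBlur(radius=1))        # mild noise reduction
arr = np.asarray(work, dtype=np.float32)
arr = np.clip(arr * 1.2, 0, 255).astype(np.uint8)             # linear contrast stretch
Image.fromarray(arr).save("micrograph_adjusted.png")

# By contrast, editing only a selected region (brightening one band, erasing a spot)
# is exactly the kind of local manipulation that is treated as suspect.
```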
A 2010 article in Science and Engineering Ethics, Avoiding Twisted Pixels: Ethical Guidelines for the Appropriate Use and Manipulation of Scientific Digital Images,[5] listed twelve guidelines for image processing that remain a useful reference today.

Ethical Guidelines for Appropriate Use and Manipulation of Scientific Digital Images
▣ Scientific digital image data may be compromised due to improper handling.
▣ Only copies of the original image data files may be processed (the original data files must always be kept secure and remain unchanged!).
▣ Simple adjustments can be made to the entire image.
▣ Cropping an image is usually acceptable.
▣ Images that need to be compared should be obtained under the same conditions, and the post-processing methods should also be unified.
▣ Post-processing of specific areas of an image is considered suspicious.
▣ Filters to enhance image quality are generally not recommended for biological images.
▣ Cloning or copying objects into an image, whether from other parts of the same image or from a different image, is highly questionable.
▣ Intensity measurements should be conducted on uniformly processed image data and should be calibrated based on known standards.
▣ Avoid using lossy compression.
▣ Magnification and resolution are important.
▣ Be cautious when adjusting pixels to change image size.
For more information about image processing in papers and its principles, refer to the Principles of Scientific Imaging page on the ImageJ software website.[6]
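Two of the guidelines above, that images to be compared should receive identical post-processing and that lossy compression should be avoided, can be illustrated with a short sketch. The file names, background value, and gain below are hypothetical and are not drawn from the cited article.

```python
# A minimal sketch: every image in a comparison goes through exactly the same
# whole-image pipeline with the same parameters, and results are saved in a
# lossless format (PNG) rather than JPEG. File names and parameters are hypothetical.
import numpy as np
from PIL import Image

def uniform_pipeline(path, background=10.0, gain=1.3):
    """Apply identical whole-image steps and parameters to every compared image."""
    arr = np.asarray(Image.open(path), dtype=np.float32)
    arr = np.clip((arr - background) * gain, 0, 255)   # same subtraction and gain for all
    return arr.astype(np.uint8)

for name in ["control.tif", "treated.tif"]:            # acquired under the same conditions
    out = uniform_pipeline(name)
    Image.fromarray(out).save(name.replace(".tif", "_processed.png"))  # lossless output
```

Using a fixed gain rather than normalizing each image to its own maximum keeps relative intensities comparable, which matters if any intensity measurement is reported.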
Technical Support: Automated Image Detection
In the early days, “academic detectives” such as Elisabeth Bik relied primarily on visual inspection to identify image reuse and misconduct across multiple papers. As the image examples at the beginning of this article show, spotting duplicated images by eye alone is clearly a challenging task. Relying on manual checks and whistleblower reports to uncover image misconduct has become inadequate in an environment where integrity problems are frequent, leaving images in papers a “blind spot” of academic misconduct.
With advancements in technologies such as image recognition, the detection of images in papers has gradually received technical support. In February 2018, an article in Nature stated that researchers had finally developed a tool capable of detecting duplicate images across thousands of papers.[7] Meanwhile, international scientific publishing institutions quickly responded: in 2020, publishing giants Elsevier, Wiley, SN, and T&F jointly established a cross-publisher working group to develop a set of strict standards for verifying images in papers and for the software used for image verification.
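The Nature report does not describe that tool’s internals here. Purely to illustrate the general idea behind automated duplicate screening, the sketch below reduces each figure to a compact perceptual hash and flags near-identical pairs; it uses the third-party imagehash package, and the file names and distance threshold are hypothetical.

```python
# Not the tool described in Nature, only an illustration of the idea: each figure is
# reduced to a perceptual hash, and pairs with near-identical hashes are flagged for
# human review. File names and the distance threshold are hypothetical.
from itertools import combinations
from PIL import Image
import imagehash

figures = ["paperA_fig2.png", "paperB_fig3.png", "paperC_fig1.png"]
hashes = {f: imagehash.phash(Image.open(f)) for f in figures}

for a, b in combinations(figures, 2):
    distance = hashes[a] - hashes[b]        # Hamming distance between the two hashes
    if distance <= 6:                       # small distance -> possibly the same image
        print(f"possible duplicate: {a} vs {b} (distance {distance})")
```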

Catriona Fennell, head of publishing services at Elsevier, pointed out that publishers ultimately need a shared image database similar to CrossCheck to check for duplicate image usage between papers. “We need to collaborate similarly in the field of images.”[8]
Image similarity detection built on image databases has now been shown to be feasible and is being applied more widely. For example, Wanfang Wenzha has built an image library in three steps: extracting images from academic paper PDFs, cropping them, and vectorizing them. The library behind the online service has so far collected papers from PubMed, Elsevier, Springer, Wiley, and other academic publishers and open access platforms, covering fields including but not limited to medicine, health, chemistry, and biology, and now holds more than 50 million images.
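The internals of the Wanfang Wenzha library are not public; the sketch below only illustrates the three named steps in a simplified form, assuming the PyMuPDF (fitz) package for PDF image extraction and a deliberately crude grayscale fingerprint in place of the production vectorization. The file name is hypothetical.

```python
# A simplified sketch of the three steps named above: extract images from a paper PDF,
# normalize them, and turn them into vectors that can be searched for similar images.
# This is not Wanfang Wenzha's actual pipeline; the vectorization here is a stand-in.
import io
import fitz                     # PyMuPDF
import numpy as np
from PIL import Image

def extract_images(pdf_path):
    """Step 1: pull every embedded image out of a paper PDF."""
    doc = fitz.open(pdf_path)
    for page in doc:
        for img in page.get_images(full=True):
            data = doc.extract_image(img[0])["image"]
            yield Image.open(io.BytesIO(data))

def vectorize(image, size=32):
    """Steps 2-3: crop/normalize to a fixed size and flatten into a unit vector."""
    arr = np.asarray(image.convert("L").resize((size, size)), dtype=np.float32).ravel()
    return arr / (np.linalg.norm(arr) + 1e-9)

# Build a tiny in-memory "image library" from one hypothetical paper.
library = [vectorize(im) for im in extract_images("example_paper.pdf")]

def most_similar(query_image):
    """Cosine similarity against the library; high scores suggest possible reuse."""
    q = vectorize(query_image)
    scores = [float(q @ v) for v in library]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]
```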
The detection functionality itself uses feature point-based detection and matching algorithms to find reuse in photographs, band (strip) images, contrast images, stained images, and other figure types, covering manipulations such as copying, cropping, scaling, and rotation. Detected reuse is reported either as “in-article similar images” or as “library similar images.”
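As an illustration of feature point-based matching of this general kind (not Wanfang Wenzha’s actual algorithm), the sketch below uses OpenCV’s ORB keypoints, which can link two figures even when one copy has been cropped, scaled, or rotated. The match threshold and file names are hypothetical.

```python
# A minimal feature-point matching sketch with OpenCV's ORB detector: keypoints are
# matched between two figures, so shared content can be flagged even after cropping,
# scaling, or rotation. Threshold and file names are hypothetical.
import cv2

img1 = cv2.imread("figure_from_paper_A.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("figure_from_paper_B.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming-distance matching with cross-check keeps only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
good = [m for m in matches if m.distance < 40]

# Many consistent matches suggest the two figures share (possibly transformed) content
# and should be passed to a human reviewer.
print(f"{len(good)} strong keypoint matches out of {len(matches)}")
```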

Wanfang Wenzha Verification Report – Similar Image Pairs
Since image detection technology for papers was put into use a few years ago, major journal publishers have gradually begun to introduce image screening services. Elisabeth Bik has said that it would be “great” progress if software could identify image issues in manuscripts during peer review: “I hope to do less work.” Doing less, however, does not mean doing nothing; judging misconduct is work that demands rigor. Although such systems can greatly reduce the pressure on reviewers, no algorithm can yet fully replace manual review, and reviewers with subject experience are still needed to make the final judgment in the context of the paper. In the long run, “paying more attention to rigor and to the reproducibility of data, and maintaining a sense of awe about the potential impacts and consequences of academic misconduct” is the best way to address image misuse.
(Source: Research and Integrity)