Plot
In episode 7 of Qing Yu Nian 2, beyond the question of the queen, there is a second mystery: who exactly is the woman in the painting? Fan Xian discovers a faceless woman in the prince's album and wants to find out whom the prince is colluding with.
Fan Xian therefore faces a tricky problem: identifying this woman. To do so, he decides to use machine learning, image recognition, and feature matching to compare the paintings against photos of all the women in the palace, and thereby discover which woman the prince is colluding with.
1. Data Collection and Preprocessing
First, Fan Xian needs to collect enough data, including the headshots of the woman in the prince’s painting and frontal photos of all the palace maids. This data can be collected and preprocessed through the following steps:
Data Collection
Headshot of the woman in the prince’s painting: Extract all faceless women’s headshots from the prince’s album.
Frontal photos of all women: Collect photos of all the women in the show.
Data Preprocessing
Grayscale Conversion: Convert all images to grayscale to reduce computational complexity.
Resize: Resize all images to a uniform size (e.g., 128×128 pixels).
Normalization: Normalize pixel values to the range [0, 1].
import cv2
import numpy as np

def preprocess_image(image_path):
    # Read image as grayscale
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Resize image
    image = cv2.resize(image, (128, 128))
    # Normalize pixel values to [0, 1]
    image = image / 255.0
    # Add a channel dimension to fit the model input
    image = np.expand_dims(image, axis=-1)  # Shape becomes (128, 128, 1)
    return image

# Example usage
image_path = 'path_to_image.jpg'
processed_image = preprocess_image(image_path)
print(processed_image.shape)  # Output: (128, 128, 1)
print(processed_image)        # Output: preprocessed image data
2. Feature Extraction
Feature extraction runs the input image through several rounds of convolutional and pooling layers. Because the features extracted from an input image (for example, a photo of Li Yunrui) become more abstract at each deeper layer, the visualized feature maps look progressively blurrier. It also helps to run photos taken from several different angles. A sketch showing how to inspect these intermediate feature maps follows the CNN definition below.
To identify features in the image, Fan Xian uses Convolutional Neural Networks (CNN) for feature extraction. CNN is a deep learning model that can effectively extract high-level features from images. Below is a simple CNN example:
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape):
    model = models.Sequential()
    model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    model.add(layers.Flatten())
    model.add(layers.Dense(64, activation='relu'))
    model.add(layers.Dense(11, activation='softmax'))  # One output per candidate; assume 11 women here
    return model

input_shape = (128, 128, 1)  # Assume the input is a 128x128 grayscale image
model = build_cnn(input_shape)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
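To see the behaviour described earlier (feature maps turning blurrier and more abstract at deeper layers), the intermediate activations of the convolutional layers can be pulled out and inspected. This is a minimal sketch for illustration only, assuming the processed_image from step 1 is available; it is not part of the matching pipeline itself:

# Build a helper model that returns the output of every convolutional layer
conv_outputs = [layer.output for layer in model.layers if isinstance(layer, layers.Conv2D)]
activation_model = models.Model(inputs=model.input, outputs=conv_outputs)

# Run one preprocessed image through the helper model
sample = np.expand_dims(processed_image, axis=0)  # add batch dimension -> (1, 128, 128, 1)
activations = activation_model.predict(sample)

# Inspect the feature maps; deeper layers are smaller and more abstract
for i, feature_maps in enumerate(activations):
    print(f'Conv layer {i + 1}: feature maps of shape {feature_maps.shape}')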
3. Model Training
Train the model on the frontal photos of the palace women, using each woman's identity as the label. Assume the training data and labels have been prepared; one way to assemble them is sketched after the training call below.
# Assume training data and labels are prepared
# X_train, y_train are training data and labels
history = model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.2)
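The snippet above assumes X_train and y_train already exist. As a rough sketch of how they might be assembled (reusing preprocess_image from step 1 and assuming a hypothetical directory layout with one folder of photos per woman), something like the following would work:

import os

def load_dataset(root_dir):
    # Build (X, y) from a directory laid out as root_dir/<woman_name>/<photo>.jpg
    X, y, class_names = [], [], []
    for label, person in enumerate(sorted(os.listdir(root_dir))):
        person_dir = os.path.join(root_dir, person)
        if not os.path.isdir(person_dir):
            continue
        class_names.append(person)
        for filename in os.listdir(person_dir):
            X.append(preprocess_image(os.path.join(person_dir, filename)))
            y.append(label)
    return np.array(X), np.array(y), class_names

# Hypothetical folder of labelled palace photos
X_train, y_train, class_names = load_dataset('palace_women_photos/')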
4. Feature Matching
Using feature matching, compare the headshot of the woman in the prince's painting with the known female images. A pre-trained VGG16 model can be used to extract features, and the similarity between the resulting feature vectors can then be calculated.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

# Load the pre-trained VGG16 model (without the classification head)
vgg_model = VGG16(weights='imagenet', include_top=False, input_shape=(128, 128, 3))

def extract_features(img_path, model):
    # Load the image as RGB and turn it into a batch of one
    img = image.load_img(img_path, target_size=(128, 128))
    img_data = image.img_to_array(img)
    img_data = np.expand_dims(img_data, axis=0)
    img_data = preprocess_input(img_data)
    # Run the image through VGG16 to obtain its feature maps
    features = model.predict(img_data)
    return features

# Extract features
known_image_path = 'path_to_known_image.jpg'
unknown_image_path = 'path_to_unknown_image.jpg'
known_features = extract_features(known_image_path, vgg_model)
unknown_features = extract_features(unknown_image_path, vgg_model)
(Figure: visualized feature maps; each deeper layer becomes progressively blurrier, and only the first layer is shown here.)
5. Calculating Cosine Similarity
Cosine similarity measures how similar two feature vectors are by the cosine of the angle between them: similarity = (A · B) / (‖A‖ ‖B‖), where A and B are the flattened feature vectors.
# Calculate cosine similarity
similarity = np.dot(known_features.flatten(), unknown_features.flatten()) / (np.linalg.norm(known_features) * np.linalg.norm(unknown_features))
print('Similarity:', similarity)
6. Comparing Similarity
Calculate the similarity between all known female images and the headshot of the woman in the prince’s painting, selecting the highest similarity as the matching result.
known_images_paths = ['path_to_known_image1.jpg', 'path_to_known_image2.jpg', ...]
similarities = []

for known_image_path in known_images_paths:
    known_features = extract_features(known_image_path, vgg_model)
    similarity = np.dot(known_features.flatten(), unknown_features.flatten()) / (
        np.linalg.norm(known_features) * np.linalg.norm(unknown_features))
    similarities.append(similarity)

# Find the image with the highest similarity
max_similarity_index = np.argmax(similarities)
best_match_image_path = known_images_paths[max_similarity_index]
print('Best matching image path:', best_match_image_path)
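To turn these raw scores into the kind of report shown below, each candidate's name can be paired with her cosine similarity and printed as a percentage, highest first. This is just one presentational choice; the names come from the story, and the list order is assumed to match known_images_paths:

# Candidate names, assumed to be in the same order as known_images_paths
candidate_names = ['Li Yunrui', 'Yuan Meng']  # ... one entry per known image

# Rank candidates by similarity and report each score as a percentage
ranked = sorted(zip(candidate_names, similarities), key=lambda item: item[1], reverse=True)
for name, score in ranked:
    print(f'{name}: Match Rate {score * 100:.2f}%')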
Finally, output the matching results:
Li Yunrui: Match Rate 85.60%
Yuan Meng: Match Rate 14.30%
Other candidates: match rates below 10%
Conclusion
Through the steps above, Fan Xian used machine learning and feature matching to match the faceless woman's headshot in the prince's painting to Li Yunrui's photo, revealing the true identity of the woman in the painting. The process covered data collection, preprocessing, feature extraction, model training, and similarity calculation, illustrating how machine learning can be applied to image recognition.
In doing so, Fan Xian not only uncovered the identity of the woman in the prince's painting but also learned the story behind the prince and Princess Li Yunrui: the princess and the second prince were merely playing along, preparing to dismantle each prince's plans in turn. It also shows what modern techniques can contribute to solving a tricky problem.