Please respect the original work. Reprints must cite the link to this article and the author: Machine Learning Heart.
Click "Read the original" or copy the following link into your browser to obtain the article's complete source code and data:
https://mbd.pub/o/bread/mbd-Z56Ul5hy
Summary of this article: SCI-quality figures + multi-objective optimization! Transformer-GRU + NSGA-II for process parameter optimization and engineering design optimization!
Previous content:
Innovative unpublished! SCI illustrations! Transformer-BiLSTM + NSGA-II process parameter optimization, engineering design optimization!
Strongly recommended unpublished! 3D illustrations! Transformer-LSTM + NSGA-II process parameter optimization, engineering design optimization!
New process parameter optimization, engineering design optimization! Elman Recurrent Neural Network + NSGA-II multi-objective optimization algorithm (Matlab)
Process parameter optimization, engineering design optimization added! GRNN neural network + NSGA-II multi-objective optimization algorithm (Matlab)
Process parameter optimization, engineering design optimization coming! BP neural network + NSGA-II multi-objective optimization algorithm (Matlab)
Process parameter optimization, engineering design optimization to accompany you across the year! RBF neural network + NSGA-II multi-objective optimization algorithm (Matlab)
Peking University core process parameter optimization! SAO-BP snow melting algorithm optimizing BP neural network + NSGA-II multi-objective optimization algorithm (Matlab)
1
Basic Introduction
1.Transformer-GRU + NSGA-II multi-objective optimization algorithm, process parameter optimization, engineering design optimization! (Complete Matlab source code and data)
Architecture of the Transformer-GRU model:
Input layer: multiple variables form a multi-dimensional input tensor.
Transformer encoder: a stack of Transformer encoder layers, each containing a multi-head attention mechanism and a feedforward network; the encoder layers learn the relationships between the variables.
GRU layer: the encoder's output sequence is fed into a GRU layer, which processes the sequence, remembers previous states, and produces a sequence of hidden states.
Output layer: the GRU hidden states are passed to a fully connected output layer that makes the final predictions; the number of output neurons usually matches the dimensionality of the prediction targets.
During training, known input and target sequences are used to compute prediction errors and update the model parameters via backpropagation; a gradient-based optimizer such as Adam is commonly used.
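The layer stack described above can be sketched in a few lines of PyTorch. The source code sold with this article is Matlab; this Python sketch is purely illustrative, and all layer sizes (d_model, number of heads, GRU width) are assumed values, not the ones used in the article:

```python
import torch
import torch.nn as nn

class TransformerGRU(nn.Module):
    # Illustrative sketch: 5 input features and 3 output targets, matching
    # the dataset described below; hidden sizes are assumptions.
    def __init__(self, n_features=5, n_targets=3, d_model=32, n_heads=4,
                 n_encoder_layers=2, gru_hidden=32):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)   # lift inputs to d_model
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer,
                                             num_layers=n_encoder_layers)
        self.gru = nn.GRU(d_model, gru_hidden, batch_first=True)
        self.head = nn.Linear(gru_hidden, n_targets)  # fully connected output

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        h = self.encoder(self.embed(x))   # learn relations between variables
        h, _ = self.gru(h)                # sequence modelling with memory
        return self.head(h[:, -1, :])     # predict y1, y2, y3 from last state

model = TransformerGRU()
y = model(torch.randn(8, 10, 5))          # 8 samples, sequence length 10
print(y.shape)                            # torch.Size([8, 3])
```

Training would then proceed exactly as the paragraph describes: compute a loss between predictions and targets and update the parameters with an optimizer such as `torch.optim.Adam`.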
Multi-objective optimization refers to the optimization process that considers multiple objectives simultaneously in optimization problems. In multi-objective optimization, there are often multiple conflicting objectives, meaning that improving one objective may lead to the deterioration of another. Therefore, the goal of multi-objective optimization is to find a set of solutions that are optimal under multiple objectives, rather than just optimizing a single objective.
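"Optimal under multiple objectives" here means Pareto optimality: solution a dominates solution b if a is no worse on every objective and strictly better on at least one; the Pareto set contains the solutions no other solution dominates. A minimal pure-Python dominance check (assuming, as in this article, y1 is maximized and y2, y3 are minimized, so y1 is negated to turn everything into minimization):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Objectives stored as (-y1, y2, y3): maximizing y1 becomes minimizing -y1.
p = (-5.0, 1.0, 2.0)    # higher y1, lower y2, equal y3
q = (-4.0, 1.5, 2.0)
print(dominates(p, q))  # True: p is at least as good everywhere, better on two
print(dominates(q, p))  # False: the two objectives where p wins rule this out
```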
2. First, a Transformer-GRU surrogate model is fitted between the independent variables (x1, x2, x3, x4, x5) and the dependent variables (y1, y2, y3); NSGA-II is then used to find the extrema of y (max y1; min y2, y3) and return the corresponding Pareto solution set of x1, x2, x3, x4, x5.
3. data is the dataset, with 5 input features and 3 output variables; the NSGA-II algorithm searches for the extrema, returning the independent variables x1, x2, x3, x4, x5 at (max y1; min y2; min y3).
4. main1.m is the Transformer-GRU main program file, and main2.m is the NSGA-II multi-objective optimization algorithm main program file, which can be run sequentially; the rest are function files and do not need to be run.

5. The command window prints evaluation metrics such as R2, MAE, MBE, MAPE, and RMSE; prediction comparison plots, error analysis plots, and the Pareto solution set plot of the multi-objective optimization are also produced, all available in the download area.
6. Suitable for process parameter optimization, engineering design optimization, and other optimal feature combination fields.
After purchase, you can add the blogger's QQ 1153460737 for consultation and communication. Note: pirated code bought through unofficial channels does not include model consultation and communication services, so please check carefully before buying. The official account is Machine Learning Heart (HML); the CSDN, Zhihu, and Bilibili accounts share the same name, Machine Learning Heart. Other similar accounts are not mine; beware of scams. I am not responsible for any actions taken under my name. My QQ is 1153460737.
Basic ideas and technical routes of the NSGA-II algorithm
1) Randomly generate an initial population Pt of size N; after non-dominated sorting, selection, crossover, and mutation produce an offspring population Qt, and the two populations are combined into a population Rt of size 2N;
2) Perform fast non-dominated sorting, and simultaneously calculate the crowding degree for individuals in each non-dominated layer, selecting suitable individuals based on non-dominance relationships and individual crowding degrees to form a new parent population Pt+1;
3) Generate a new child population Qt+1 through basic operations of genetic algorithms, merge Pt+1 and Qt+1 to form a new population Rt, and repeat the above operations until the program’s end conditions are met.
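Steps 1)–3) hinge on two routines: fast non-dominated sorting and the crowding distance. A schematic pure-Python sketch of just those two pieces (selection, crossover, and mutation are omitted; this is not the Matlab implementation that accompanies the article, and all objectives are taken as minimized):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(costs):
    """Split indices 0..n-1 into fronts F1, F2, ... by Pareto rank."""
    n = len(costs)
    dominated_by = [[] for _ in range(n)]   # solutions that p dominates
    dom_count = [0] * n                     # how many solutions dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if dominates(costs[p], costs[q]):
                dominated_by[p].append(q)
            elif dominates(costs[q], costs[p]):
                dom_count[p] += 1
        if dom_count[p] == 0:               # nothing dominates p: first front
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                dom_count[q] -= 1           # peel off the current front
                if dom_count[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]                      # drop the trailing empty front

def crowding_distance(costs, front):
    """Per-individual crowding distance within one front (boundaries get inf)."""
    dist = {i: 0.0 for i in front}
    for k in range(len(costs[0])):          # accumulate over each objective
        order = sorted(front, key=lambda i: costs[i][k])
        dist[order[0]] = dist[order[-1]] = float('inf')
        span = costs[order[-1]][k] - costs[order[0]][k] or 1.0
        for j in range(1, len(order) - 1):
            dist[order[j]] += (costs[order[j + 1]][k]
                               - costs[order[j - 1]][k]) / span
    return dist

costs = [(1, 5), (2, 3), (3, 1), (4, 4), (2.5, 2.5)]  # two minimized objectives
print(fast_nondominated_sort(costs))       # [[0, 1, 2, 4], [3]]
```

Truncating the combined 2N population to the new parent population Pt+1 then keeps whole fronts in rank order, breaking ties within the last admitted front by descending crowding distance.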

2
Dataset

Partial Code
%% Simulation test
t_sim1 = sim(net, p_train);
t_sim2 = sim(net, p_test);
%% Reverse data normalization
T_sim1 = mapminmax('reverse', t_sim1, ps_output);
T_sim2 = mapminmax('reverse', t_sim2, ps_output);
%% Define result storage template
empty.position = [];              % Input variables (decision vector)
empty.cost = [];                  % Objective function values
empty.rank = [];                  % Non-dominated sorting rank
empty.domination = [];            % Set of individuals this one dominates
empty.dominated = 0;              % Number of individuals dominating this one
empty.crowdingdistance = [];      % Individual crowding distance
pop = repmat(empty, npop, 1);
%% 1. Initialize population
for i = 1 : npop
    pop(i).position = create_x(var);                % Generate input variables (individual)
    pop(i).cost = costfunction(pop(i).position);    % Evaluate objective functions
end
%% 2. Construct non-dominated set
[pop, F] = nondominatedsort(pop);
%% Calculate crowding distance
pop = calcrowdingdistance(pop, F);
%% Main loop (selection, crossover, mutation)
Complete code link: https://mbd.pub/o/bread/mbd-Z56Ul5hy
Also scan the QR code:

3
Effect Analysis



4
Other Codes
That's it. The attentive reader will find more here: https://mbd.pub/o/slowtrain/work
Certified blog expert, creator in the machine learning field, and 2023 Top 50 Blog Star, mainly engaged in program design and case analysis for machine learning and deep learning time series, regression, classification, clustering, and dimensionality reduction. For research project model customization, horizontal project model simulation, guidance on academic papers for professional titles, or model program walkthroughs, contact me at QQ 1153460737 (others are pirated; please verify carefully).
Technical communication group: After purchasing the blogger’s desired code or sharing the blogger’s article to any third-party platform, you can add the blogger QQ to join the group

