The Shrike Optimization Algorithm (SHOA) is a swarm intelligence optimization algorithm. Many creatures that live in groups search for food randomly and follow the best individual in the swarm, a phenomenon known as swarm intelligence; although swarm-based algorithms mimic these behaviors, they often struggle to find optimal solutions in multi-modal problems. The swarming behavior of shrike birds in nature is the main inspiration for the proposed algorithm. Shrikes migrate from their territory to survive, and SHOA replicates the survival strategies they use to live, adapt, and breed. The two optimization phases, exploration and exploitation, are designed by modeling shrike breeding and the search for food to feed nestlings until they are ready to fly and live independently. SHOA was benchmarked on 19 well-known mathematical test functions, 10 functions from CEC-2019, and 12 from the most recent CEC-2022 suite, for a total of 41 competitive mathematical test functions, as well as four real-world engineering problems under both constrained and unconstrained conditions. Statistical results from the Wilcoxon rank-sum and Friedman tests show that SHOA has significant statistical superiority over competitor algorithms on the multi-modal benchmarks, and the results on the engineering optimization problems show that SHOA outperforms other nature-inspired algorithms in many cases.
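To make the swarm-intelligence pattern behind SHOA concrete, the following is a minimal sketch of a generic "follow the swarm's best" search loop. The step rule, bounds, and the sphere objective are illustrative assumptions; SHOA's actual exploration and exploitation equations, modeled on shrike breeding and food search, are not reproduced here.

```python
import numpy as np

def sphere(x):
    """Example objective: the Sphere benchmark, minimized at the origin."""
    return np.sum(x ** 2)

def swarm_search(obj, dim=10, pop=30, iters=200, step=0.3, seed=0):
    """Generic swarm loop: agents drift toward the best agent found so far
    (exploitation) plus random perturbations (exploration). Not SHOA itself."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, size=(pop, dim))   # random initial swarm
    fitness = np.apply_along_axis(obj, 1, X)
    best = X[fitness.argmin()].copy()             # best individual so far
    for _ in range(iters):
        X += step * rng.random((pop, 1)) * (best - X) \
             + 0.1 * rng.standard_normal((pop, dim))
        fitness = np.apply_along_axis(obj, 1, X)
        if fitness.min() < obj(best):
            best = X[fitness.argmin()].copy()
    return best, obj(best)

best, val = swarm_search(sphere)
print(f"best fitness: {val:.6f}")
```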
This study proposes a modified version of IFDO, called M-IFDO. The enhancement updates the scout-bee location rule of IFDO so that the scout bees move toward better performance and optimal solutions. More specifically, two parameters of IFDO, alignment and cohesion, are removed and replaced by a single Lambda parameter. To verify the performance of the newly introduced algorithm, M-IFDO is tested on 19 basic benchmark functions, 10 IEEE Congress on Evolutionary Computation (CEC-C06 2019) functions, and five real-world problems. M-IFDO is compared against five state-of-the-art algorithms: the Improved Fitness Dependent Optimizer (IFDO), the Improved Multi-operator Differential Evolution algorithm (IMODE), the Hybrid Sampling Evolution Strategy (HSES), Linear Success-History-based Parameter Adaptation for Differential Evolution (LSHADE), and CMA-ES integrated with an occasional restart strategy, an increasing population size, and an iterative local search (NBIPOP-aCMAES). The verification criteria are convergence behavior, memory usage, and statistical results. The results show that M-IFDO surpasses its competitors in several cases on the benchmark functions and the five real-world problems.
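The scout-bee modification can be illustrated with a hedged sketch of an FDO-style position update in which a single Lambda factor takes the place of the removed alignment and cohesion terms. The fitness-weight formula follows the general FDO scheme, but the exact form of Lambda and of the pace term below are assumptions for illustration only.

```python
import numpy as np

def scout_update(x, f_x, x_best, f_best, rng, lam=None):
    """One illustrative M-IFDO-style update for a scout bee at position x.
    `lam` (Lambda) is assumed to be a random scalar standing in for IFDO's
    removed alignment and cohesion parameters; the paper's exact rule may differ."""
    if lam is None:
        lam = rng.random()                        # assumed Lambda in [0, 1)
    r = rng.uniform(-1.0, 1.0, size=x.shape)      # random walk direction
    fw = abs(f_best / f_x) if f_x != 0 else rng.random()  # fitness weight
    pace = lam * fw * (x - x_best) * np.sign(r)   # Lambda-scaled pace
    return x + pace

rng = np.random.default_rng(1)
x = rng.uniform(-5, 5, size=4)
x_new = scout_update(x, f_x=float(np.sum(x**2)),
                     x_best=np.zeros(4), f_best=1e-3, rng=rng)
print(x_new)
```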
Fox Artificial Neural Network (FOXANN) is a novel classification model that combines the recently developed FOX optimizer with an artificial neural network (ANN) to solve machine learning (ML) problems. The FOX optimizer replaces the backpropagation algorithm in the ANN: it optimizes the synaptic weights and achieves high classification accuracy with minimal loss, improved model generalization, and interpretability. The performance of FOXANN is evaluated on three standard datasets: Iris Flower, Breast Cancer Wisconsin, and Wine. The results show that FOXANN outperforms traditional ANN and logistic regression methods, as well as other models proposed in the literature such as ABC-ANN, ABC-MNN, CROANN, and PSO-DNN, achieving a higher accuracy of 0.9969 and a lower validation loss of 0.0028. These results demonstrate that FOXANN is more effective than traditional methods and other proposed models across standard datasets; it thus effectively addresses challenges in ML algorithms and improves classification performance.
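The core pattern of FOXANN, replacing backpropagation with a population-based search over the network's flattened weight vector, can be sketched as follows. A simple perturbation-based search stands in for the FOX optimizer, whose actual update rules differ; the network size, loss, and data are assumptions.

```python
import numpy as np

def forward(w, X, n_in, n_hid, n_out):
    """Tiny one-hidden-layer ANN; `w` packs all weights and biases."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = w[i:i + n_out]
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)

def fitness(w, X, y, shape):
    """Cross-entropy loss used as the fitness the optimizer minimizes."""
    p = forward(w, X, *shape)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def train(X, y, shape, pop=40, iters=300, sigma=0.1, seed=0):
    """Gradient-free training: perturb the best weight vector and keep
    improvements (a stand-in for the FOX optimizer's search)."""
    rng = np.random.default_rng(seed)
    n_in, n_hid, n_out = shape
    dim = n_in * n_hid + n_hid + n_hid * n_out + n_out
    best = rng.standard_normal(dim) * 0.5
    best_f = fitness(best, X, y, shape)
    for _ in range(iters):
        for c in best + sigma * rng.standard_normal((pop, dim)):
            f = fitness(c, X, y, shape)
            if f < best_f:
                best, best_f = c, f
    return best, best_f

# Smoke test on synthetic data (4 features, 3 learnable classes).
rng = np.random.default_rng(1)
X = rng.standard_normal((150, 4))
y = np.argmax(X[:, :3], axis=1)
w, loss = train(X, y, shape=(4, 8, 3))
print(f"final loss: {loss:.4f}")
```

Because the optimizer only evaluates the loss, this pattern works even when the objective is non-differentiable, which is one motivation for swapping backpropagation for a metaheuristic.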
The Fitness Dependent Optimizer (FDO) algorithm has also been applied to training a Multilayer Perceptron Neural Network (MLP).
The Child Drawing Development Optimization algorithm is a metaheuristic based on a child's cognitive development as reflected in drawing behavior.
Another study trains an LSTM with two optimization algorithms: biogeography-based optimization (BBO) and the genetic algorithm (GA).
A further study uses an accuracy measure to improve the training of LSTMs with metaheuristic algorithms, namely the Harmony Search, Grey Wolf Optimizer, Sine Cosine, and Ant Lion Optimization algorithms.
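The idea shared by these last two studies, training an LSTM's weights with a metaheuristic and scoring candidates by classification accuracy rather than loss, can be sketched as follows. A plain hill climber stands in for Harmony Search, Grey Wolf, Sine Cosine, and Ant Lion; the single-cell LSTM, toy task, and sizes are assumptions.

```python
import numpy as np

def lstm_predict(w, X, n_in, n_hid):
    """Single-cell LSTM over sequences X of shape (batch, time, n_in),
    followed by a linear readout to a binary prediction."""
    k = 4 * n_hid
    i = 0
    Wx = w[i:i + n_in * k].reshape(n_in, k); i += n_in * k
    Wh = w[i:i + n_hid * k].reshape(n_hid, k); i += n_hid * k
    b = w[i:i + k]; i += k
    wo = w[i:i + n_hid]                       # readout weights
    sig = lambda z: 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))
    h = np.zeros((X.shape[0], n_hid)); c = np.zeros_like(h)
    for t in range(X.shape[1]):
        g = X[:, t] @ Wx + h @ Wh + b         # all four gates at once
        f, i_g, o = sig(g[:, :n_hid]), sig(g[:, n_hid:2*n_hid]), sig(g[:, 2*n_hid:3*n_hid])
        c = f * c + i_g * np.tanh(g[:, 3*n_hid:])
        h = o * np.tanh(c)
    return (h @ wo > 0).astype(int)

def accuracy_fitness(w, X, y, n_in, n_hid):
    """Fitness to maximize: plain classification accuracy."""
    return (lstm_predict(w, X, n_in, n_hid) == y).mean()

# Toy task: classify whether a sequence's mean is positive.
rng = np.random.default_rng(0)
n_in, n_hid = 1, 8
X = rng.standard_normal((200, 20, n_in))
y = (X.mean(axis=(1, 2)) > 0).astype(int)
dim = n_in * 4 * n_hid + n_hid * 4 * n_hid + 4 * n_hid + n_hid
best = rng.standard_normal(dim) * 0.3
best_acc = accuracy_fitness(best, X, y, n_in, n_hid)
for _ in range(300):                          # hill climbing on accuracy
    cand = best + 0.1 * rng.standard_normal(dim)
    acc = accuracy_fitness(cand, X, y, n_in, n_hid)
    if acc >= best_acc:
        best, best_acc = cand, acc
print(f"training accuracy: {best_acc:.3f}")
```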