
Publication of Scientific Research by Mr. Dosti Khder


In his latest academic work, Mr. Dosti Khder Abbas, a lecturer in the Petroleum Engineering Department of our faculty, has published a scientific research paper titled "Using Fitness Dependent Optimizer for Training Multi-layer perceptron" in the Journal of Internet Technology, which has an impact factor of 1.005.


This study presents a novel training algorithm based on the recently proposed Fitness Dependent Optimizer (FDO). The stability of this algorithm has been verified, and its performance proven, in both the exploration and exploitation stages using standard measurements; this motivated us to gauge its performance in training multilayer perceptron neural networks (MLP). The study combines FDO with MLP (codenamed FDO-MLP) to optimize weights and biases for predicting student outcomes, which can improve the learning system with respect to students' educational backgrounds as well as increase their achievements. The experimental results of this approach are affirmed by comparison with the Back-Propagation algorithm (BP) and several evolutionary models: FDO with cascade MLP (FDO-CMLP), Grey Wolf Optimizer (GWO) combined with MLP (GWO-MLP), modified GWO combined with MLP (MGWO-MLP), GWO with cascade MLP (GWO-CMLP), and modified GWO with cascade MLP (MGWO-CMLP). The qualitative and quantitative results show that the proposed approach using FDO as a trainer outperforms the other approaches in terms of convergence speed and local optima avoidance, and the proposed FDO-MLP approach achieves a classification rate of 0.97.
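
To give a rough sense of the FDO-MLP idea described in the abstract: the MLP's weights and biases are flattened into a single parameter vector, and a population of candidate vectors is refined with a fitness-weighted, FDO-style pace update instead of gradient descent. The Python sketch below is a simplified reconstruction under stated assumptions, not the authors' implementation; the layer sizes, synthetic dataset, and hyperparameters are illustrative choices only.

import numpy as np

# Minimal sketch: train a one-hidden-layer MLP by searching its flattened
# weight/bias vector with a simplified FDO-style swarm update. All sizes,
# data, and hyperparameters here are assumptions for illustration.

rng = np.random.default_rng(0)

def unpack(theta, n_in, n_hid, n_out):
    """Slice the flat vector theta into MLP weight matrices and biases."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = theta[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = theta[i:i + n_out]
    return W1, b1, W2, b2

def mse_loss(theta, X, y, n_in, n_hid, n_out):
    """Forward pass of the MLP; fitness = mean squared error on the data."""
    W1, b1, W2, b2 = unpack(theta, n_in, n_hid, n_out)
    h = np.tanh(X @ W1 + b1)                 # hidden layer
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    return np.mean((out - y) ** 2)

def fdo_train(X, y, n_in, n_hid, n_out, n_agents=30, iters=200, wf=0.0):
    dim = n_in * n_hid + n_hid + n_hid * n_out + n_out
    pop = rng.uniform(-1, 1, (n_agents, dim))   # scout bees = candidate MLPs
    fit = np.array([mse_loss(p, X, y, n_in, n_hid, n_out) for p in pop])
    best, best_fit = pop[fit.argmin()].copy(), fit.min()
    pace = np.zeros_like(pop)
    for _ in range(iters):
        for i in range(n_agents):
            r = rng.uniform(-1, 1)
            # Fitness weight: ratio of global best fitness to this agent's
            # fitness drives the step size (simplified FDO-style rule).
            fw = 0.0 if fit[i] == 0 else abs(best_fit / fit[i]) - wf
            if fw == 1 or fw == 0:
                step = pop[i] * r                # pure random walk
            else:
                step = (pop[i] - best) * fw * (-1 if r < 0 else 1)
            cand = pop[i] + step
            f = mse_loss(cand, X, y, n_in, n_hid, n_out)
            if f < fit[i]:                       # greedy acceptance, keep pace
                pop[i], fit[i], pace[i] = cand, f, step
            else:                                # otherwise retry previous pace
                cand2 = pop[i] + pace[i]
                f2 = mse_loss(cand2, X, y, n_in, n_hid, n_out)
                if f2 < fit[i]:
                    pop[i], fit[i] = cand2, f2
        if fit.min() < best_fit:
            best, best_fit = pop[fit.argmin()].copy(), fit.min()
    return best, best_fit

# Toy usage on synthetic binary data (assumed shapes, illustration only).
X = rng.normal(size=(100, 4))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)
theta, loss = fdo_train(X, y, n_in=4, n_hid=6, n_out=1)
print(f"best training MSE: {loss:.4f}")

Because the optimizer only ever calls the loss function as a black box, no gradients are needed, which is what lets FDO-style trainers sidestep the local optima that gradient-based Back-Propagation can get trapped in.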


For more information, click on the following link:

DOI: 10.53106/160792642021122207011

https://jit.ndhu.edu.tw/article/download/2628/2648