## What is/are Bayesian Regularization?

Bayesian regularization is a neural-network training technique that penalizes weight magnitude with a strength inferred from the data, employed for better input–output correlation and system performance.^{[1]}Bayesian regularization was employed for training the models to improve their generalization capacity.
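
The idea running through these excerpts can be made concrete with a small sketch. Bayesian regularization minimizes a combined objective F = β·E_D + α·E_W (data misfit plus weight penalty); the numpy toy below uses fixed, illustrative values of α and β, whereas Bayesian regularization proper re-estimates both from the data during training:

```python
import numpy as np

# Toy 1-D regression: y = 2x + noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (50, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(50)

def objective(w, alpha, beta):
    """Bayesian-regularized cost F = beta*E_D + alpha*E_W."""
    resid = X @ w - y
    E_D = 0.5 * np.sum(resid ** 2)   # data misfit
    E_W = 0.5 * np.sum(w ** 2)       # weight penalty
    return beta * E_D + alpha * E_W

def grad(w, alpha, beta):
    resid = X @ w - y
    return beta * (X.T @ resid) + alpha * w

# Gradient descent on F: the alpha*w term shrinks the weight
# below the unregularized least-squares estimate.
w = np.zeros(1)
for _ in range(500):
    w -= 0.01 * grad(w, alpha=5.0, beta=1.0)
print(w[0])  # noticeably below the true slope of 2.0
```

The α/β trade-off is what the excerpts call "tuning the regularized parameter automatically": a large α favors small weights (smooth models), a large β favors fitting the data closely.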

^{[2]}In this investigation, AI-based intelligent backpropagation networks of Bayesian regularization (IBNs-BR) were exploited for the numerical treatment of mathematical models representing environmental economic systems (EESs).

^{[3]}Four different training methods are adopted for CFNN: 1) PSO, 2) Levenberg-Marquardt (LM), 3) LM in combination with Bayesian Regularization (BR), and 4) a hybrid of LM-BR and PSO, which is used for the first time in training the CFNN to predict the net recharge rates and is found to be the best one.

^{[4]}The multilayer feedforward training algorithms consisted of four variants of the gradient descent method, four variants of the conjugate gradient method, Quasi-Newton, One Step Secant, Resilient backpropagation, the Levenberg-Marquardt method and the Levenberg-Marquardt method using Bayesian regularization.

^{[5]}99472 was observed for modelling all zones of microstructure in the same ANN using Bayesian Regularization with 17 and 15 neurons in the first and second hidden layers, respectively, with 4 training runs (which was the lowest R value among all tested configurations).

^{[6]}The Bayesian regularization with the Levenberg–Marquardt optimized network (BNN) is found suitable to develop the geophysical model to retrieve CW speed using Cyclone Global Navigation Satellite System (CYGNSS) measurements.

^{[7]}Levenberg-Marquardt algorithm is used to train the proposed model, and the Bayesian regularization is used to tune the regularized parameter automatically.

^{[8]}Based on the results obtained, Bayesian regularization with 12 hidden neurons is the optimized network structure, with mean absolute percentage error in testing dataset of O3 and PM10 at 51.

^{[9]}The numerically generated S-parameters of various dielectric samples are used here as a training dataset for the ANN, which is trained using the Levenberg-Marquardt backpropagation algorithm in combination with the Bayesian regularization.

^{[10]}In this study, Bayesian regularization and Levenberg-Marquardt trained multilayer perceptron neural networks were employed in predictive modeling of hydrogen production by thermo-catalytic methane decomposition.

^{[11]}Bayesian regularization is proposed to factorize the flow profile data.

^{[12]}30% accuracy on general plant leaf disease and 100% accuracy on specific plant leaf disease based on Bayesian regularization, automation of cluster, and without overfitting on considered plant diseases over various other implemented methods.

^{[13]}This supervised committee machine with training algorithms (SCMTA) combines Levenberg–Marquardt (LM), Bayesian regularization (BR), gradient descent (GD), one-step secant (OSS) and resilient back-propagation (RP) algorithms using a supervised combiner to estimate non-leaky confined aquifer parameters using pumping test data set.

^{[14]}Four ANN training algorithms such as back propagation neural network (BPNN) with gradient descent momentum (GDM), BPNN with Levenberg Marquardt (LM) algorithm, BPNN with Bayesian regularization (BR), and radial basis function networks (RBFN) method have been used for prediction modelling.

^{[15]}A backpropagation ANN model called a multilayer perceptron (MLP), trained with Bayesian regularization, was used in this study.

^{[16]}Firstly, the particle swarm algorithm is optimized through the chaotic sequence, and the back-propagation (BP) neural network is optimized using Chaos Particle Swarm Optimization (CPSO) and Bayesian Regularization (BR) algorithm.

^{[17]}The best model was obtained with the structure 5-9-2, trained using the Levenberg-Marquardt algorithm with Bayesian Regularization and having the softmax and linear transfer functions in the hidden and output layers, respectively.

^{[18]}Levenberg Marquardt (LM) and Bayesian Regularization (BR) were used as the learning algorithm.

^{[19]}At the same time, to eliminate the overfitting of training, Bayesian regularization is used to optimize the neural network.

^{[20]}To increase the reliability of linear regression and the neural network, the results of back propagation, including gradient descent, Levenberg–Marquardt (LM), and Bayesian regularization (BR) methods, are compared.

^{[21]}Three training algorithms were tested during the model development, including the Levenberg-Marquardt (LM), Bayesian Regularization (BR), and Scaled Conjugate Gradient (SCG).

^{[22]}Furthermore, the analysis of the results showed that the ensemble of 50 ANN trained by Bayesian regularization and Levenberg–Marquardt algorithms slightly outperforms RF.

^{[23]}In this letter, we propose an indoor visible light positioning technique that combines deep neural network based on the Bayesian Regularization (BR-DNN) with sparse diagonal training data set.

^{[24]}The neural network observer is trained using Levenberg-Marquardt backpropagation with Bayesian regularization (LMBR) to improve the generalization capability.

^{[25]}To optimize the efficiency of a predictive model, two optimization algorithms, Levenberg-Marquardt (LM) and Bayesian regularization, were utilized to find the optimal models’ parameters during prediction analysis.

^{[26]}The ANN was trained using Levenberg-Marquardt algorithm with Bayesian regularization, using 86 sensogram profiles, in the ratio of 80:10:10 for internal training, validation, and testing.

^{[27]}In case of NN, Bayesian Regularization and Resilient Back-Propagation algorithms show best and worst performance with an accuracy of 99.

^{[28]}We provide a link between Bayesian regularization and proximal updating, which provides an equivalence between finding a posterior mode and a posterior mean with a different regularization prior.

^{[29]}Two backpropagation‐based methods, namely Levenberg–Marquardt (LM) and Bayesian Regularization (BR), were applied to optimize the MLP algorithm.

^{[30]}The CNR values achieved by 3DK-BSS were compared to those produced by normalized cross-correlation (NCC), Bayesian regularization, and BSS implemented using a two-dimensional (2D) kernel at ARF power levels of 5 to 45% of the full system power.

^{[31]}Two training algorithms, Bayesian regularization (BR) and Levenberg–Marquardt (LM), were employed for training ANN.

^{[32]}This study assessed the predictive ability of genomic best linear unbiased prediction (GBLUP) and Bayesian regularization for feed-forward neural networks (BRNN-s1-s3-neuron) with one to three neurons using genomic relationship based on single nucleotide polymorphisms markers.

^{[33]}Bayesian regularization is a central tool in modern-day statistical and machine learning methods.

^{[34]}The optimum functions were chosen: Bayesian Regularization as the training function, gradient descent with momentum as the learning function, the hyperbolic tangent as the activation function, and Mean Square Error as the performance function.

^{[35]}We trained a single two-layer feed-forward ANN on a random majority (70%) partitioning of data from all centers using Bayesian Regularization and minimizing mean squared error.

^{[36]}To avoid over-fitting in the selection of weights, Bayesian regularization of the solution is applied.

^{[37]}5 estimation, selection of appropriate forward features from the input variables is carried out using the FFS technique and Bayesian regularization is incorporated into the neural network system to avoid the overfitting problem.

^{[38]}Finally, Bayesian regularization (BR) is proposed to train the optimized NN model.

^{[39]}Levenberg–Marquardt minimization with Bayesian regularization is also implemented, providing an optimal regularized solution and insight into parametrization efficiency.

^{[40]}The dynamic ANN configuration with Bayesian regularization is proposed to predict one-step ahead of system performance behaviour.

^{[41]}The results obtained from the Levenberg-Marquardt algorithm and Bayesian regularization show good accuracy compared with the applied dataset and with the measurement data from a 230 kV transmission line, a practical case that was analyzed.

^{[42]}Here, three NNARx models were trained using different methods to avoid overfitting: early stopping, Bayesian regularization and a combination of both.

^{[43]}The machine learning system was pre-trained and optimised based on the Bayesian Regularization (BR) algorithm, as described in our previous research, and used to predict the solar power PV production for the next 24 hours using weather data of the last five consecutive days.

^{[44]}In this paper, the ANN rainfall-runoff models are trained by the Levenberg Marquardt (LM), Bayesian Regularization (BR) and Particle Swarm Optimization (PSO).

^{[45]}

## scaled conjugate gradient

Houston, 1998)’; statistical methods are ‘multiple variable regression,’ ‘fine tree, medium tree, coarse tree-based regression tree,’ and ‘bagged tree, boosted tree-based tree ensembles’; and soft computing techniques are ‘support vector machine (SVM)’ and ‘Levenberg–Marquardt (LM), Bayesian regularization (BR), and scaled conjugate gradient (SCG)-based artificial neural network (ANN).^{[1]}Finally, on the basis of the credit risk assessment indicators system constructed in this paper, BP neural network is built based on the BP algorithm, which is trained by the LM algorithm (Levenberg-Marquardt), Scaled Conjugate Gradient, and Bayesian Regularization respectively, to complete the credit risk assessment model.

^{[2]}Also, based on the computational fluid dynamics results, different ANN architectures with different number of neurons in the hidden layers and several training algorithms (Levenberg–Marquardt, Bayesian regularization, scaled conjugate gradient) are tested to find the best ANN architecture.

^{[3]}This NAR-NNTS model is trained with Scaled Conjugate Gradient (SCG), Levenberg Marquardt (LM) and Bayesian Regularization (BR) training algorithms.

^{[4]}The data were trained using Bayesian Regularization (BR), Scaled Conjugate Gradient (SCG) and Levenberg-Marquardt (LM) training algorithms.

^{[5]}In this paper, the ANN model is trained and tested using three different algorithms, Levenberg–Marquardt (LM), Bayesian regularization (BR) and the scaled conjugate gradient (SCG) algorithm, available in the NN toolbox of MATLAB-2015.

^{[6]}Moreover, six optimization algorithms; including Bayesian regularization (BR), scaled conjugate gradient (SCG), Levenberg-Marquardt (LM), conjugate gradient backpropagation with Fletcher-Reeves updates (CGF), resilient backpropagation (RB), and conjugate gradient backpropagation with Polak-Ribiere updates (CGP) are used to improve the performance and prediction ability of the MLP and CFNN neural networks.

^{[7]}The MLP approach was optimized with four diverse algorithms including Resilient Backpropagation (RB), Bayesian Regularization (BR), Levenberg-Marquardt (LM), and Scaled Conjugate Gradient (SCG).

^{[8]}Levenberg–Marquardt (LM), Bayesian Regularization (BR), and Scaled Conjugate Gradient (SCG) prediction algorithms are used to develop a Shallow Neural Network (SNN) for time series prediction.

^{[9]}The ANN model was trained using three well-known training algorithms, namely, Levenberg–Marquardt (LM), Bayesian regularization (BR), and scaled conjugate gradient (SCG).

^{[10]}The ANN modelling study exhibited that the Bayesian regularization backpropagation, scaled conjugate gradient backpropagation and Levenberg-Marquardt backpropagation algorithms modelled the experimental data better, compared with the 11 backpropagation algorithms studied.

^{[11]}The neural network (NN) is trained and optimized with three algorithms, the Levenberg–Marquardt Algorithm (NARX-LMA), the Bayesian Regularization Algorithm (NARX-BRA) and the Scaled Conjugate Gradient Algorithm (NARX-SCGA), to attain the best performance.

^{[12]}To get the best model, the performance of FFNN was tested with three variations of the algorithm (Levenberg-Marquardt, Bayesian Regularization, and Scaled Conjugate Gradient) and ten variations of the number of neurons (10, 15, 20, 25, 30, 35, 40, 45 and 50).

^{[13]}Three algorithms—including Levenberg–Marquardt (LM), Bayesian Regularization (BR), and a Scaled Conjugate Gradient (SCG)—were selected for the simulation.

^{[14]}Five training algorithms were used to develop the ANN models (Bayesian regularization, Levenberg Marquardt, conjugate gradient, scaled conjugate gradient, and resilient backpropagation), two fuzzy inference systems to develop fuzzy models (Mamdani and Sugeno), and two training algorithms to develop the ANFIS models (hybrid and backpropagation).

^{[15]}This work explores the efficiency of three deep learning (DL) techniques, namely Bayesian regularization (BR), Levenberg–Marquardt (LM), and scaled conjugate gradient (SCG), for training nonlinear autoregressive artificial neural networks (NARX) for predicting specifically the closing price of the Egyptian Stock Exchange indices (EGX-30, EGX-30-Capped, EGX-50-EWI, EGX-70, EGX-100, and NILE).

^{[16]}The best MLP network configurations of 5–15–2, 5–4–2, and 5–7–2 were obtained for the Levenberg-Marquardt-, the Bayesian Regularization-, and the Scaled conjugate gradient-trained MLP, respectively.

^{[17]}The work employed three training algorithms to train a neural network, namely the Levenberg-Marquardt, Bayesian Regularization, and Scaled Conjugate Gradient algorithms.

^{[18]}Levenberg Marquardt (LM), Bayesian Regularization (BR) and Scaled Conjugate Gradient (SCG) were used as the learning algorithm.

^{[19]}Three different training methods, Levenberg-Marquardt, Bayesian Regularization and Scaled Conjugate Gradient, are used.

^{[20]}, Levenberg–Marquardt, Bayesian regularization, and scaled conjugate gradient, are exploited to predict the motion scenarios.

^{[21]}Four multilayer neural network training algorithms, namely Levenberg-Marquardt, Bayesian regularization backpropagation, Scaled Conjugate Gradient, and one-step secant backpropagation, have been used to classify the voices of seven different singers.

^{[22]}, Levenberg-Marquardt, Scaled Conjugate Gradient and Bayesian Regularization for stock market prediction based on tick data as well as 15-min data of an Indian company and their results compared.

^{[23]}Simulation results are shown in the form of MSE and regression on a job dataset, comparing three algorithms: Scaled Conjugate Gradient (SCG), Levenberg-Marquardt (LM) and Bayesian Regularization (BR).

^{[24]}The dependencies of the neural network's mean square error and training time (number of epochs) on the number of hidden neurons were constructed for different learning algorithms (Levenberg-Marquardt, Bayesian Regularization, Scaled Conjugate Gradient) on samples of different lengths.

^{[25]}The extracted features are fed into training algorithms such as Levenberg-Marquardt, Scaled Conjugate Gradient, Gradient Descent with Adaptive Learning Rate, Bayesian Regularization and Resilient Backpropagation.

^{[26]}The same operation has been performed applying Bayesian Regularization algorithm and Scaled Conjugate Gradient algorithm.

^{[27]}, Scaled Conjugate Gradient (SCG) and Bayesian Regularization (BR) has been done.

^{[28]}Of the Levenberg–Marquardt Algorithm (LMA), Bayesian Regularization Algorithm (BRA) and Scaled Conjugate Gradient Algorithm (SCGA), LMA was found to have the best fit with the experiments, as compared with SCGA and BRA.

^{[29]}Performance measurement of Bayesian Regularization (BR) algorithm, Levenberg–Marquardt (LM) algorithm, and Scaled Conjugate Gradient (SCG) algorithm has been analyzed.

^{[30]}Three training algorithms are explored, including Levenberg-Marquardt, Scaled Conjugate gradient back-propagation and Bayesian Regularization (BR).

^{[31]}Three algorithms were used and tested: LM (Levenberg-Marquardt), SCG (Scaled Conjugate Gradient) and BayR (Bayesian Regularization).

^{[32]}The methodologies used were descriptive statistics, factor analysis, neural network and hybrid model techniques, using the following learning algorithms in the artificial neural network model: Levenberg-Marquardt (LM), Bayesian Regularization (BR), BFGS Quasi-Newton (BFG), Scaled Conjugate Gradient (SCG) and Gradient Descent (GD); for the second, hybrid, model only the best two algorithms were used: Levenberg-Marquardt (LM) and Bayesian Regularization (BR).

^{[33]}These algorithms are (1) Scaled Conjugate Gradient (2) Bayesian Regularization and (3) Gradient Descent.

^{[34]}Two intelligent techniques, namely Radial Basis Function (RBF) and Multilayer Perceptron (MLP) neural networks, were developed, and various optimization techniques, including Genetic Algorithm (GA), Gravitational Search Algorithm (GSA), Imperialist Competitive Algorithm (ICA), Particle Swarm Optimization (PSO), Differential Evolution (DE), Ant Colony Optimization (ACO), Scaled Conjugate Gradient (SCG), Levenberg-Marquardt (LM), Resilient Back Propagation (RB), and Bayesian Regularization (BR), were applied.

^{[35]}In our proposed models, the Bayesian Regularization and Scaled Conjugate Gradient training functions are used to train the Artificial Neural Networks.

^{[36]}Performance of different learning algorithms of CFFNN including gradient descent (GD), gradient descent with momentum (GDM), scaled conjugate gradient (SCG), Levenberg-Marquardt (LM), and Bayesian regularization (BR) are compared.

^{[37]}This study investigates the applicability of the Levenberg–Marquardt algorithm, Bayesian regularization, and a scaled conjugate gradient algorithm as training algorithms for an artificial neural network (ANN) predictively modeling the rate of CO and H2 production by methane dry reforming over a Co/Pr2O3 catalyst.

^{[38]}Bayesian Regularization Neural Network (BRNN), Scaled Conjugate Gradient Neural Network (SCGNN), and Levenberg Marquardt Neural Network (LMNN).

^{[39]}Comparative analysis of three types of training algorithms (Bayesian regularization, Levenberg-Marquardt and scaled conjugate gradient) against different numbers of hidden-layer neurons is performed to obtain a minimized Mean Square Error (MSE).

^{[40]}Levenberg–Marquardt, Bayesian regularization, and scaled conjugate gradient back-propagation algorithms were used for the analysis.

^{[41]}Different learning algorithms, including gradient descent (GD), gradient descent with momentum (GDM), scaled conjugate gradient (SCG), Levenberg-Marquardt (LM), and Bayesian regularization (BR) are used in CFFNN method.

^{[42]}
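
Many of the excerpts above benchmark the same network under several trainers. As a rough sketch of that workflow, the snippet below fits one tiny regularized network with two generic scipy optimizers; scipy's 'CG' is a conjugate-gradient method but not MATLAB's trainscg, 'BFGS' stands in for the quasi-Newton family, Levenberg-Marquardt is omitted, and the penalty weight alpha is an assumed constant:

```python
import numpy as np
from scipy.optimize import minimize

# One tiny 1-hidden-layer tanh network fit to y = sin(x).
rng = np.random.default_rng(1)
X = np.linspace(-np.pi, np.pi, 40)[:, None]
y = np.sin(X[:, 0])
H = 5  # hidden units

def cost(p, alpha=1e-3):
    W1, b1 = p[:H].reshape(1, H), p[H:2 * H]
    W2, b2 = p[2 * H:3 * H].reshape(H, 1), p[3 * H]
    out = np.tanh(X @ W1 + b1) @ W2 + b2      # forward pass
    E_D = 0.5 * np.sum((out[:, 0] - y) ** 2)  # data misfit
    E_W = 0.5 * np.sum(p ** 2)                # weight penalty
    return E_D + alpha * E_W

# Same starting weights, two optimizers, final regularized costs.
p0 = 0.1 * rng.standard_normal(3 * H + 1)
results = {m: minimize(cost, p0, method=m).fun for m in ("CG", "BFGS")}
print(results)  # final regularized cost per method
```

Holding the initialization, architecture and penalty fixed while swapping the optimizer is what makes the LM/BR/SCG comparisons in the excerpts meaningful.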

## artificial neural network

This study aims at predicting the inelastic displacement demand of structures through utilizing artificial neural networks and Bayesian regularization algorithm.^{[1]}This study used two algorithms, namely, Levenberg–Marquardt and the Bayesian regularization, to estimate the parameters of an artificial neural network model fitted to predict the average monthly waste generated and critically assess the factors that influence solid waste generation in some selected districts of the Greater Accra region.

^{[2]}In this paper, artificial neural network (ANN) based Levenberg-Marquardt (LM), Bayesian Regularization (BR) and Scaled Conjugate Gradient (SCG) algorithms are deployed in maximum power point tracking (MPPT) energy harvesting in solar photovoltaic (PV) system to forge a comparative performance analysis of the three different algorithms.

^{[3]}Moreover, an Artificial Neural Network (ANN) model is developed based on feedforward back-propagation and the Bayesian regularization learning algorithm.

^{[4]}In addition, the classified data is categorized by implementing a Bayesian Regularization Artificial Neural Network (BRANN) classifier.

^{[5]}This research work explores the Levenberg-Marquardt training algorithm used for Artificial Neural Network (ANN) optimization during training, and the Bayesian Regularization algorithm for the enhanced generalized trained network, in training a designed non-linear vector median filter built on a Multi-Layer Perceptron (MLP) ANN, called model-1, and a conventional MLP ANN, called model-2.

^{[6]}They enabled the creation of a robust Bayesian Regularization (BR)-based Artificial Neural Network (ANN) to accurately predict the material's best corrosion properties.

^{[7]}This paper uses an artificial neural network (ANN) machine learning toolbox in a MATLAB programming environment together with a Bayesian regularization algorithm, the Levenberg-Marquardt algorithm and a scaled conjugate gradient algorithm to attain a specified target compressive strength at 28 days.

^{[8]}This paper proposes an Artificial Neural Network based weather prediction model that uses hourly weather data for training of neural networks; this model provides accurate short-term weather information using back-propagation algorithms such as Levenberg-Marquardt and Bayesian regularization, with a standard NARX feed-forward network built in MATLAB.

^{[9]}INNMF is based on the artificial neural network (ANN) algorithm through improved particle swarm optimization (PSO) algorithm and Bayesian Regularization (BR) optimization.

^{[10]}In this work, for modeling of gas-lift operation, the potential application of an Artificial Neural Network (ANN) using Bayesian Regularization (BR) is investigated and the results are compared with Levenberg–Marquardt (LM) back-propagation training algorithm.

^{[11]}Therefore, a more robust Bayesian Regularization Artificial Neural Network (BRANN) is introduced in this study.

^{[12]}Abbreviations: ANN - Artificial Neural Network; ARE - Absolute Relative Error; BR - Bayesian Regularization; D - Day; GA - Genetic Algorithm; GLPC - Gas Lift Performance Curve; GOR - Gas Oil Ratio; IPR - Inflow Performance Relationship; LM - Levenberg–Marquardt; MD - Measured Depth; MINLP - Mixed Integer Nonlinear Programming; MMSCF - Million standard Cubic Feet; MSCF - Thousand Standard Cubic Feet; P.

^{[13]}The Artificial Neural Network was trained using the Bayesian Regularization algorithm, and used.

^{[14]}Recently, the Bayesian Regularization Artificial Neural Network (BRANN) approach has been used for flammable cloud estimation in a congested offshore setting.

^{[15]}This study presents a more accurate algorithm, namely Bayesian Regularization Artificial Neural Network (BRANN) and accordingly proposes the frameworks regarding BRANN-based models for the CFD-based ERA procedure.

^{[16]}A deep learning algorithm called Bayesian Regularization Artificial Neural Network (BRANN) is used to synthesize the data from Depth Sensing Camera and IMU due to its capabilities in complex and non-linear problems with considerable time.

^{[17]}Linear model was compared to Artificial Neural Networks (ANN) models with Levenberg–Marquardt (L-M), Bayesian Regularization (BR) and Scaled Conjugate Gradient (SCG) learning algorithms, to evaluate the relative accuracy in predicting antler beam diameter and length using age and dressed body weight in white-tailed deer.

^{[18]}The artificial neural network is trained using the Levenberg–Marquardt algorithm with Bayesian regularization.

^{[19]}This paper proposes Bayesian Regularization (BR) along with artificial neural network (ANN) and random forest (RF) based machine learning to model power converters and analyze their performance.

^{[20]}This study focuses on classification of fundus image that contains with or without signs of DR and utilizes artificial neural network (NN) namely Multi-layered Perceptron (MLP) trained by Levenberg-Marquardt (LM) and Bayesian Regularization (BR) to classify the data.

^{[21]}A predictive surrogate model, mimicking the full scale model along with reservoir heterogeneity is then developed using a feed forward back-propagation artificial neural network trained using the Levenberg-Marquardt algorithm coupled with Bayesian regularization.

^{[22]}Accordingly, an Artificial Neural Network (ANN) model was trained using the Bayesian regularization function.

^{[23]}

## neural network model

For this purpose, four multilayer perceptron (MLP) neural network models and four cascade forward (CF) neural network models optimized with Bayesian Regularization (BR), Levenberg-Marquardt (LM), Resilient Backpropagation (RB), and Scaled Conjugate Gradient (SCG), as well as a radial basis function (RBF) neural network model and a generalized regression (GR) neural network model, were developed to predict groundwater level using 1377 data points.^{[1]}The results show that the best performing algorithm is a hidden-layer neural network model containing eight neurons with a Bayesian regularization algorithm as the training algorithm and tan-sigmoid and linear transfer functions.

^{[2]}Thereafter, two Bayesian regularization-based backpropagation multilayer perceptron neural network models were designed to predict the joint angles in the stance and swing phase.

^{[3]}In addition, the Bayesian Regularization training algorithm is selected in the BP neural network model to generate the code for predicting traffic congestion time.

^{[4]}This paper presents an intelligent design methodology of microstrip filters in which a dynamic neural network model based on Bayesian Regularization Back-Propagation (BRBP) learning algorithm is used.

^{[5]}The data obtained as a result of experiments are processed using a neural network model with Bayesian regularization, which has high smoothness and works well in conditions of small training samples.

^{[6]}
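
For the linear special case, scikit-learn's BayesianRidge implements the same evidence-style idea: the weight-prior precision (lambda_) and noise precision (alpha_) are re-estimated from the data, so the regularization strength needs no manual tuning. A small usage sketch (the toy data are an assumption):

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

# Toy data: three features, one of which is irrelevant.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
true_w = np.array([1.5, 0.0, -2.0])
y = X @ true_w + 0.1 * rng.standard_normal(100)

# BayesianRidge iteratively re-estimates the noise precision
# (alpha_) and the weight-prior precision (lambda_), i.e. the
# regularization strength, from the data itself.
model = BayesianRidge().fit(X, y)
print(model.coef_)                  # close to true_w
print(model.alpha_, model.lambda_)  # learned precisions, both > 0
```

This mirrors what the neural-network excerpts describe, with the advantage that the linear case needs no Hessian approximation.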

## Namely Bayesian Regularization

This work explores the efficiency of three deep learning (DL) techniques, namely Bayesian regularization (BR), Levenberg–Marquardt (LM), and scaled conjugate gradient (SCG), for training nonlinear autoregressive artificial neural networks (NARX) for predicting specifically the closing price of the Egyptian Stock Exchange indices (EGX-30, EGX-30-Capped, EGX-50-EWI, EGX-70, EGX-100, and NILE).^{[1]}The main objectives of this paper are: (a) evaluate the performance of the ANN considering two back-propagation learning algorithms, namely Bayesian regularization (BR) and Levenberg-Marquardt (LM); (b) analyse the relative performance of the model for hour-ahead and day-ahead load forecasting for different types of buildings; (c) investigate how network design parameters such as the number of hidden layers, hidden neurons, number of inputs and training data affect the model’s ability to accurately forecast loads.

^{[2]}This study presents a more accurate algorithm, namely Bayesian Regularization Artificial Neural Network (BRANN) and accordingly proposes the frameworks regarding BRANN-based models for the CFD-based ERA procedure.

^{[3]}

## Robust Bayesian Regularization

They enabled the creation of a robust Bayesian Regularization (BR)-based Artificial Neural Network (ANN) to accurately predict the material's best corrosion properties.^{[1]}Therefore, a more robust Bayesian Regularization Artificial Neural Network (BRANN) is introduced in this study.

^{[2]}

## bayesian regularization algorithm

It uses a Bayesian regularization algorithm as a learning function.^{[1]}The Bayesian regularization algorithm (which updates weights and biases during network training following the Levenberg–Marquardt optimization algorithm) was used as the training algorithm.

^{[2]}The backpropagation neural network method based on particle swarm optimization and Bayesian regularization algorithms (called BMPB) is proposed to solve this problem.

^{[3]}This study aims at predicting the inelastic displacement demand of structures through utilizing artificial neural networks and Bayesian regularization algorithm.

^{[4]}Results showed that the Bayesian regularization algorithm was the most adequate for classification rendering precisions of M1 = 95.

^{[6]}The Bayesian regularization algorithm is used to learn the weights of the neural network, which solves the back-propagation learning algorithm's problems of slow convergence and easily falling into local extrema, and improves the generalization ability of the neural network.

^{[7]}To do so, a multilayer perceptron (MLP) optimized with the Levenberg-Marquardt algorithm (MLP-LMA) and one optimized with the Bayesian Regularization algorithm (MLP-BR) were trained using 88 experimental measurements.

^{[8]}After assessing different numbers of hidden neurons and different training functions using the Bayesian Regularization algorithm, the ANN model with 5 hidden neurons was found to give the most satisfying results.

^{[8]}Using the training dataset with Levenberg-Marquardt optimization and Bayesian regularization algorithms, the predicted model has the best performance with the least mean square error and the best $R^{2}$ values.

^{[9]}The results show that the best performing algorithm is a hidden layer neural network model containing eight neurons with a Bayesian regularization algorithm as a training algorithm and tan-sigmoid and linear transfer functions.

^{[10]}Feedforward neural networks with a 2-15-1 structure were developed and trained using the Bayesian regularization algorithm.

^{[11]}This study adopts the Bayesian regularization algorithm to predict the flexural parameters.

^{[12]}The Bayesian regularization algorithm was used for training of the feedforward backpropagation SNN, and a k-fold cross-validation procedure was implemented for a fair performance evaluation.

^{[14]}This research work explores the Levenberg-Marquardt training algorithm used for Artificial Neural Network (ANN) optimization during training, and the Bayesian Regularization algorithm for the enhanced generalized trained network, in training a designed non-linear vector median filter built on a Multi-Layer Perceptron (MLP) ANN, called model-1, and a conventional MLP ANN, called model-2.

^{[14]}The neural network (NN) is trained and optimized with three algorithms, the Levenberg–Marquardt Algorithm (NARX-LMA), the Bayesian Regularization Algorithm (NARX-BRA) and the Scaled Conjugate Gradient Algorithm (NARX-SCGA), to attain the best performance.

^{[15]}Therefore, this research will predict the power output PV one day ahead using Recurrent Neural Network (RNN) method with Bayesian Regularization Algorithm because it can solve problems regarding prediction, classification, and energy management.

^{[16]}This paper uses an artificial neural network (ANN) machine learning toolbox in a MATLAB programming environment together with a Bayesian regularization algorithm, the Levenberg-Marquardt algorithm and a scaled conjugate gradient algorithm to attain a specified target compressive strength at 28 days.

^{[18]}Two back-propagation training algorithms, the Levenberg–Marquardt and Bayesian Regularization algorithms, were employed for network training; their prediction abilities were compared using the same learning rates with the early stopping method applied, as well as without a learning rate and without early stopping.

^{[19]}According to prediction data with the selected ANN model, which had a hidden layer size of 50 and was trained with the Bayesian Regularization algorithm, the maximum cumulative specific methane yield of the co-digestion was simulated as 468.

^{[19]}Maps of BD values produced by stepwise regression estimation deviated significantly from maps generated with real values whereas ANN-II (Bayesian regularization algorithm) values were closest to the real values and that was reflected in the increased accuracy of mapping.

^{[21]}The Artificial Neural Network was trained using the Bayesian Regularization algorithm, and used.

^{[21]}This research proposes the application of the Bayesian regularization algorithm for evaluating the performance of software complexity prediction model based on requirement.

^{[22]}The same operation has been performed applying Bayesian Regularization algorithm and Scaled Conjugate Gradient algorithm.

^{[23]}The Bayesian regularization algorithm was used to avoid converging on local minimum value in the neural network.

^{[24]}Levenberg–Marquardt Algorithm (LMA), Bayesian Regularization Algorithm (BRA) and Scaled Conjugate Gradient Algorithm (SCGA); among them, LMA was found to give the best fit to the experiments compared to SCGA and BRA.

^{[25]}The results show that the dynamic response of the exergy destruction and room temperature can be predicted accurately by the optimized ANN model using three neurons, a Bayesian regularization algorithm, five delayed inputs for the compressor speed and room temperature, and six delayed inputs for the cooling load and ambient temperature.

^{[26]}Bayesian Regularization algorithm with seven neurons presented the best correlation (R = 0.

^{[27]}Feedforward neural networks were trained using the Levenberg-Marquardt and Bayesian regularization algorithms, along with the radial basis and generalized regression neural network architectures, in order to perform parameter adaptation based on the forward propagation of the prior probabilities of the target state (expected: target range, velocity and signal-to-noise ratio, and the target range and velocity variance).

^{[28]}Bayesian regularization algorithm as a method of back-propagation technique is used for selecting the optimal ANN size.

^{[29]}
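
The common thread in these studies is that Bayesian regularization treats the regularization strength as a quantity to be re-estimated from the data rather than hand-tuned. A minimal NumPy sketch of MacKay-style evidence re-estimation for a linear-in-parameters model (an illustrative simplification with made-up data, not the setup of any cited work):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 50, 6
X = rng.standard_normal((N, D))
w_true = np.array([2.0, -1.0, 0.0, 0.0, 0.5, 0.0])
y = X @ w_true + 0.1 * rng.standard_normal(N)

alpha, beta = 1.0, 1.0                       # prior / noise precisions, re-estimated below
for _ in range(30):
    # Hessian of the regularized objective beta*E_D + alpha*E_W
    A = alpha * np.eye(D) + beta * X.T @ X
    m = beta * np.linalg.solve(A, X.T @ y)   # MAP weight estimate
    # effective number of well-determined parameters
    gamma = D - alpha * np.trace(np.linalg.inv(A))
    # MacKay evidence updates
    alpha = gamma / (m @ m)
    beta = (N - gamma) / np.sum((y - X @ m) ** 2)

print(m.round(2), round(float(gamma), 1))
```

For neural networks the same updates are applied with a Gauss–Newton approximation to the Hessian, which is why Bayesian regularization is so often paired with the Levenberg–Marquardt optimizer in the snippets above.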

## bayesian regularization backpropagation

The Bayesian regularization backpropagation has been adopted as the learning algorithm.^{[1]}The network was trained using the Bayesian regularization backpropagation.

^{[2]}An ANN modelling study showed that the Bayesian regularization backpropagation, scaled conjugate gradient backpropagation and Levenberg-Marquardt backpropagation algorithms modelled the experimental data better than the remaining backpropagation algorithms among the 11 studied.

^{[3]}Real drone flight-tests were performed in order to realize an adequate database needed to train the adopted neural network as a classifier, employing the Bayesian regularization backpropagation algorithm as training function.

^{[4]}Intelligent computing is exploited through Levenberg–Marquardt backpropagation networks (LMBNs) and Bayesian regularization backpropagation networks (BRBNs) to provide the solutions to nonlinear second-order LE–PDDEs.

^{[5]}The resulting NARX net consisted of an open-loop, 12-node hidden layer, 100 iterations, using Bayesian regularization backpropagation.

^{[6]}Four multilayer neural network training algorithms, Levenberg-Marquardt, Bayesian regularization backpropagation, scaled conjugate gradient, and one-step secant backpropagation, have been used to classify the voices of seven different singers.

^{[7]}The optimization process illustrated that the best training function was Bayesian regularization backpropagation (trainbr), and the best transferring function was Elliot symmetric sigmoid (elliotsig).

^{[8]}The simulation data are analyzed with the neural network trained by the Bayesian Regularization backpropagation algorithm to guide the parameter settings for the sea trials.

^{[9]}Then multiple feedforward neural networks are trained offline by a Bayesian regularization backpropagation algorithm.

^{[10]}Different training algorithms and range of learning rate values have been investigated, and the Bayesian regularization backpropagation training algorithm and 0.

^{[11]}A Bayesian regularization backpropagation was conducted to train the model.

^{[12]}Eleven different training algorithms are tested in the ANN, and Bayesian Regularization backpropagation with 9 hidden neurons is found to be the optimum ANN structure.

^{[13]}Qualitative comparisons were made to ascertain how the amount of training data and number of hidden neurons affects bucket filling performance, for NNs trained using both the Levenberg-Marquardt and Bayesian Regularization backpropagation algorithms.

^{[14]}The resulting NARX is an open-loop net, consisting of a 12-node hidden layer, 100-iterations, using the Bayesian regularization backpropagation algorithm.

^{[15]}
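
Concretely, "Bayesian regularization backpropagation" means backpropagating the gradient of the penalized performance index βE_D + αE_W instead of the raw squared error, so every weight gradient picks up an extra α·w decay term. A minimal NumPy sketch with α and β held fixed on a hypothetical toy problem (a full implementation such as MATLAB's trainbr also re-estimates α and β during training):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 40).reshape(-1, 1)
y = np.sin(np.pi * X)

H = 6                                           # one small hidden layer
W1 = 0.5 * rng.standard_normal((1, H)); b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal((H, 1)); b2 = np.zeros(1)
alpha, beta, lr = 1e-3, 1.0, 1e-3               # fixed here for brevity

def objective():
    h = np.tanh(X @ W1 + b1)
    E_D = 0.5 * np.sum((h @ W2 + b2 - y) ** 2)  # data misfit
    E_W = 0.5 * (np.sum(W1**2) + np.sum(W2**2)) # weight penalty
    return beta * E_D + alpha * E_W

start = objective()
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)
    g_out = beta * (h @ W2 + b2 - y)            # dF/d(output)
    gW2 = h.T @ g_out + alpha * W2              # decay term from alpha*E_W
    gb2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (1 - h**2)           # backprop through tanh
    gW1 = X.T @ g_h + alpha * W1
    gb1 = g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
end = objective()                               # penalized objective after training
```

Only the weights are penalized in this sketch; implementations differ on whether biases are included in E_W.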

## bayesian regularization neural

3), Bayesian Regularization Neural Network (BRNN) (U95=2.^{[1]}In this study, machine learning techniques, Random Forest Boruta algorithms, Random Forest Recursive Feature Elimination, and Bayesian Regularization Neural Networks (BRNN) were utilized.

^{[2]}Bayesian regularization neural network is used as an intelli.

^{[3]}Therefore, taking cogging torque as the goal of motor performance analysis, a cogging torque prediction and analysis method based on the AdaBoost integrated Bayesian Regularization neural network (BRNN) learning algorithm is proposed.

^{[4]}The approach applies database preprocessing and Bayesian regularization neural networks (BRNN) in Stage I.

^{[5]}Out of these, Bayesian Regularization Neural Network consisting of 13 neurons in the hidden layer with ‘hyperbolic tangent-sigmoid’ activation function is found to be the best-fit model.

^{[6]}Furthermore, the data-mining algorithms are compared and validated with the previous study, and the MAPE of Bayesian regularization neural networks is calculated 2.

^{[7]}In this paper, a Bayesian Regularization Neural Network (BRNN) is utilized to avoid overfitting due to the small amount of data.

^{[8]}Then, the Bayesian regularization neural network is used for fault diagnosis and good test results are obtained.

^{[9]}Bayesian Regularization Neural Network (BRNN), Scaled Conjugate Gradient Neural Network (SCGNN), and Levenberg-Marquardt Neural Network (LMNN).

^{[10]}As a case study, a tensile strength prediction model for X70 pipeline steels was established, and comparisons were made between different data-driven models, including the two new techniques and the already extensively used stepwise regression (SR), Bayesian regularization neural network (BRNN), radial-basis function neural network (RBFNN) and support vector machine (SVM).

^{[11]}
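
The overfitting claim in [7] can be illustrated on a toy problem: with a degree-9 polynomial and only 12 noisy samples, an unregularized least-squares fit is free to chase the noise, while evidence-tuned (Bayesian-regularized) coefficients are shrunk along the poorly determined directions. A hypothetical NumPy sketch, not taken from any cited work:

```python
import numpy as np

rng = np.random.default_rng(1)
deg, N = 9, 12                                   # 10 coefficients, 12 noisy points
x_tr = np.linspace(0, 1, N)
y_tr = np.sin(2 * np.pi * x_tr) + 0.1 * rng.standard_normal(N)
Phi = np.vander(x_tr, deg + 1, increasing=True)

# plain least squares: free to chase the noise
w_ls, *_ = np.linalg.lstsq(Phi, y_tr, rcond=None)

# Bayesian regularization: alpha and beta re-estimated from the evidence
alpha, beta, D = 1.0, 1.0, deg + 1
for _ in range(100):
    A = alpha * np.eye(D) + beta * Phi.T @ Phi   # Hessian of beta*E_D + alpha*E_W
    m = beta * np.linalg.solve(A, Phi.T @ y_tr)  # regularized coefficients
    gamma = D - alpha * np.trace(np.linalg.inv(A))
    alpha = gamma / (m @ m)
    beta = (N - gamma) / np.sum((y_tr - Phi @ m) ** 2)

# held-out error against the noise-free target
x_te = np.linspace(0, 1, 200)
Phi_te = np.vander(x_te, deg + 1, increasing=True)
y_te = np.sin(2 * np.pi * x_te)
mse_ls = float(np.mean((Phi_te @ w_ls - y_te) ** 2))
mse_br = float(np.mean((Phi_te @ m - y_te) ** 2))
print(mse_ls, mse_br)
```

On runs like this the regularized fit typically shows the lower held-out error, and γ reports how many of the 10 coefficients the data actually pin down.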

## bayesian regularization artificial

In addition, the classified data is categorized by implementing a Bayesian Regularization Artificial Neural Network (BRANN) classifier.^{[1]}In order to overcome the disadvantage of RSM, a Bayesian Regularization Artificial Neural Network (BRANN)-based model has been recently developed, and its robustness and efficiency have been widely verified.

^{[2]}Therefore, a more robust Bayesian Regularization Artificial Neural Network (BRANN) is introduced in this study.

^{[3]}Recently, the Bayesian Regularization Artificial Neural Network (BRANN) approach has been used for flammable cloud estimation in a congested offshore setting.

^{[4]}This study presents a more accurate algorithm, namely Bayesian Regularization Artificial Neural Network (BRANN) and accordingly proposes the frameworks regarding BRANN-based models for the CFD-based ERA procedure.

^{[5]}A deep learning algorithm called Bayesian Regularization Artificial Neural Network (BRANN) is used to synthesize the data from Depth Sensing Camera and IMU due to its capabilities in complex and non-linear problems with considerable time.

^{[6]}A Bayesian Regularization Artificial Neural Network (BRANN) model presenting the non-linear relationship between the turbulent flame enhancement factor X and its affecting factors is subsequently developed.

^{[7]}

## bayesian regularization training

First, a multilayered feed-forward ANN model trained with the Bayesian Regularization training algorithm is developed and compared against the proposed model based on the concrete capacity design (CCD) method.^{[1]}In addition, the Bayesian Regularization training algorithm is selected in the BP neural network model to generate the code for predicting traffic congestion time.

^{[2]}Feed Forward Neural Network (FFNN) Water Quality Model was developed using the Levenberg–Marquardt Training Algorithm and Bayesian Regularization Training Algorithm.

^{[3]}In this paper, a hysteresis operator is introduced, and the Bayesian regularization training algorithm is used to train a BP neural network that constructs a hysteresis model of a piezoelectric ceramic actuator; an experimental study was conducted on a piezoelectric actuator developed by the Institute of Optics and Electronics, Chinese Academy of Sciences.

^{[4]}Leave-One-Out cross validation within various combinations of orthogonal arrays determines 7 nodes in the hidden layer, a minimum ratio of 16 between dataset size and number of input nodes, and a Bayesian regularization training algorithm as the optimal definitions for the BP-ANN model.

^{[5]}

## bayesian regularization method

The training function is the Bayesian Regularization method.^{[1]}To improve the generalization ability of the neural network, the Bayesian regularization method was adopted.

^{[2]}The best results were shown by the feedforward backpropagation network architecture with nonlinear autoregression and the learning algorithms: Levenberg-Marquardt nonlinear optimization, the Bayesian Regularization method and the conjugate gradient method.

^{[3]}The Bayesian regularization method (BR) is used for training.

^{[4]}
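
Written out, the method referenced above minimizes a penalized performance index whose weights α and β are re-estimated under MacKay's evidence framework. A standard statement (common notation, not tied to any one cited paper; k is the number of network weights and γ the effective number of parameters):

```latex
F(\mathbf{w}) = \beta E_D + \alpha E_W,
\qquad
E_D = \tfrac{1}{2}\sum_{n=1}^{N}\bigl(t_n - y_n(\mathbf{w})\bigr)^{2},
\qquad
E_W = \tfrac{1}{2}\,\mathbf{w}^{\top}\mathbf{w}

% Evidence (MacKay) re-estimation, with H = \nabla^{2} F the Hessian:
\gamma = k - 2\alpha\,\operatorname{tr}\!\bigl(H^{-1}\bigr),
\qquad
\alpha \leftarrow \frac{\gamma}{2E_W},
\qquad
\beta \leftarrow \frac{N-\gamma}{2E_D}
```

When γ settles well below k, the network has more weights than the data support, which is why Bayesian regularization also doubles as an architecture-sizing diagnostic.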

## bayesian regularization back

We have used the Bayesian regularization back propagation algorithm to fine-tune the results.^{[1]}This study mapped flood susceptibility in the northeast region of Bangladesh using Bayesian regularization back propagation (BRBP) neural network, classification and regression trees (CART), a statistical model (STM) using the evidence belief function (EBF), and their ensemble models (EMs) for three time periods (2000, 2014, and 2017).

^{[2]}This paper presents an intelligent design methodology of microstrip filters in which a dynamic neural network model based on Bayesian Regularization Back-Propagation (BRBP) learning algorithm is used.

^{[3]}

## bayesian regularization function

99998 was obtained for the Bayesian Regularization function, considering the hyperbolic tangent function in the first layer and pure linear in the second layer.^{[1]}It was found out that ANN optimized with Bayesian regularization function performed best with the highest correlation coefficient, and lowest MAE, MSE and RMSE.

^{[2]}Accordingly, an Artificial Neural Network (ANN) model was trained using the Bayesian regularization function.

^{[3]}

## bayesian regularization approach

Risk factors were identified using a Bayesian regularization approach.^{[1]}The early stopping and Bayesian regularization approaches are implemented for better generalization of the neural networks and to avoid overfitting.

^{[2]}
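
Since [1] pairs Bayesian regularization with early stopping as the two standard generalization controls, a generic early-stopping loop is worth sketching. A hypothetical synthetic example (not the cited study's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical synthetic regression task, split into train/validation
X = rng.standard_normal((60, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.3 * rng.standard_normal(60)
Xtr, ytr, Xva, yva = X[:40], y[:40], X[40:], y[40:]

w = np.zeros(3)
lr, patience = 0.01, 10
best_w, best_val, bad = w.copy(), np.inf, 0
for epoch in range(500):
    grad = Xtr.T @ (Xtr @ w - ytr) / len(ytr)    # gradient on the training split
    w -= lr * grad
    val = np.mean((Xva @ w - yva) ** 2)          # monitor the validation split
    if val < best_val - 1e-6:
        best_val, best_w, bad = val, w.copy(), 0
    else:
        bad += 1
        if bad >= patience:                      # stop once validation stalls
            break
# best_w holds the weights from the best validation epoch
```

Early stopping limits fitting capacity by halting optimization, whereas Bayesian regularization limits it through the α-weighted penalty; the snippet above reflects that studies often apply both and compare.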