## What is Naive Bayes?

Keywords: Boosted Tree Model, Breast Cancer, Classification and Regression Model, Naive Bayes, Random Forest.^{[1]}The Naive Bayesian algorithm was applied to the Sepsis data set and the accuracy rate was calculated as 93.

^{[2]}To get a more precise classification method, this experiment compared the Naive Bayes method and the Naive Bayes method with PSO.

^{[3]}In this paper, we propose the use of the Naive Bayes classifier as a fitness function within a semi-wrapper feature selection approach.

^{[4]}This paper proposes a Hybrid Lexicon-Naive Bayesian Classifier (HL-NBC) method for sentimental analysis.

^{[5]}Specifically, we built three change prediction models based on different predictors, ie, product‐, process‐ metrics‐, and developer‐related factors, comparing the performances of four ensemble techniques (ie, Boosting, Random Forest, Bagging, and Voting) with those of standard machine learning classifiers (ie, Logistic Regression, Naive Bayes, Simple Logistic, and Multilayer Perceptron).

^{[6]}We conduct an empirical study in which we create fake reviews, merge them with verified reviews and then employ four methods (Naive Bayes, SVMs, human computation and hybrid human-machine approaches) to discriminate the genuine reviews from the false ones.

^{[7]}One of the most widely used algorithms of SDP models is Naive Bayes (NB) because of its simplicity, effectiveness and robustness.

^{[8]}Keywords: Sample Bootstrapping; Naive Bayes; Pap Smear.

^{[9]}To demonstrate the accuracy of the dimensionality reduction result, classification is performed on the data using KNN and Naive Bayes.

^{[10]}The results of the proposed algorithm are compared with Naive Bayes and AdaBoost.

^{[11]}By looking at the condition of a person's body based on sex, blood pressure, age, smoking status, and other indicators characteristic of heart disease, this study uses the Neural Network and Naive Bayes algorithms to compare the accuracy of attributes influential in predicting heart disease; the results can be used as a reference for predicting whether or not a person has heart disease.

^{[12]}Among them 9 were able to reach their therapeutic goals (defined here as HbA1c Results: Coefficient of variation (CV) computed from 14 days of CGM data at baseline MDI therapy and HbA1c have been found to be suitable features for an a priori classification via Naive Bayes or Classification Tree (CT) algorithm which results in a perfect separation of classes.

^{[13]}In order to find the best predictive modeling technique, different experiments were conducted using Random Forest, Decision Tree, Naive Bayes and ID3 predictive models.

^{[14]}In another experiment, the baseline features (10 in total) utilised on Random Forest outperform the set of all features (48 in total) used on SVM, Naive Bayes, C4.5.

^{[15]}By utilizing the Naive Bayes classifier, the proposed system could detect human presence based on the Doppler spectrum.

^{[16]}We performed solar power forecasting with the support vector regression (SVR) model, the naive Bayes classifier (NBC), and the hourly regression model.

^{[17]}Three predictive models; the Naive Bayes, Decision Tree and the Probabilistic Neural Network (PNN) Predictors were deployed for comparative analysis.

^{[18]}99% efficiency, Naive Bayes with 97.

^{[19]}The lung cancer prediction was analysed using classification algorithms such as Naive Bayes, SVM, Decision tree and Logistic Regression.

^{[20]}99%) compared to Linear SVM, Naive Bayesian (NB), Random Forest(RF), and Decision Tree (DT) Spark’s classifiers.

^{[21]}We verify that the following algorithms have been used for the classification of cardiac diseases: Apriori, decision tree, and naive Bayes.

^{[22]}The classification error and root mean square error are used as the evaluation criteria for the imputation methods' performance, and the Naive Bayes algorithm is used as the classifier.

^{[23]}The existing problems will be overcome by the data mining techniques used in this research, the Naive Bayes algorithm and a genetic algorithm, which aim to predict telemarketing customer sources from public UCI Repository data so that the bank offers the right product to the right customer.

^{[24]}A nomogram was then constructed using the naive Bayesian classifier model in order to visualize risk factors of COPD.

^{[25]}After the training process carried out with the support vector machine, Naive Bayes, random forest, K-nearest neighbor, and logistic regression methods, confusion matrices and ROC curves were generated.

^{[26]}This problem is transformed into a binary form and five different methods, Logistic Regression, Logistic Regression with LASSO, Naive Bayes, Linear Discriminant Analysis, and Quadratic Discriminant Analysis, are implemented.

^{[27]}The Naive Bayes rule was first employed to select a supreme color feature from ten color models.

^{[28]}Secondly, the stock was classified into high yield stocks and other stocks by the stock characteristic information using naive Bayesian classification method.

^{[29]}In T2W MR Images, the best performance was for naive Bayesian network classifier (AUC = 85.

^{[30]}Binary classification models performed consistently better for all tested classifiers (k-NN, Naive Bayes, Decision Tree, Multilayer Perceptron, Random Forests, HMM).

^{[31]}To analyze sentiments, the Naive Bayes (NB) model is one of the more popular methods due to low computational time and understandability.

^{[32]}To predict stock prices, the authors have proposed a technique that first calculates sentiment scores through a Naive Bayes classifier and then applies a neural network to both the sentiment scores and the historical stock dataset.

^{[33]}Sensor data obtained from calving events on three farms were used as one training dataset and two independent validation datasets to evaluate the predictive performance of a Naive Bayes classifier model for calving prediction at 1 h before the start of calving.

^{[34]}The data samples contain 1800 emotional marks from Imprecity and 2450 geolocated comments from Google Places marked by experts and then processed with Naive Bayes Classifier.

^{[35]}Additionally, SVM, Random Forest, and Naive Bayes algorithms have been applied as the classification algorithms, and their combinations with two vectorization approaches have been tested and analyzed.

^{[36]}Due to the simplicity and competitive classification performance of the naive Bayes (NB), researchers have proposed many approaches to improve NB by weakening its attribute independence assumption.

^{[37]}7%), Naive Bayes (accuracy = 88.

^{[38]}To overcome this problem, a sentiment analysis classification model using the naive Bayes (NB) algorithm was applied to obtain this information.

^{[39]}A sentiment analysis model was constructed using the Naive Bayes classification technique.

^{[40]}CONCLUSIONS AC was associated with improved survival in locally advanced (pT3-4, pN0) and regionally advanced (pT2-4, pN1) chemotherapy-naive BCa.

^{[41]}After that, the training examples were passed through a set of machine learning frameworks which consist of ‘Nearest Neighbors', ‘Linear SVM’, ‘RBF SVM’, ‘Gaussian Process', ‘Decision Tree’, ‘Random Forest’, ‘Neural Net’, ‘Ada-Boost’, ‘Naive Bayes' and ‘QDA’ algorithms.

^{[42]}For this region the Overlap Muon Track Finder (OMTF) uses a novel algorithm based on a naive Bayes classifier.

^{[43]}After using the naive Bayes algorithm to mark, train, and classify the web log, it does not perform well at detecting web penetration behavior.

^{[44]}Then the naive Bayes classifier was used to predict the learning style of a student in real time.

^{[45]}
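The snippets above use naive Bayes as an off-the-shelf classifier without stating the model itself. As a reference point, here is a minimal from-scratch sketch of the Gaussian naive Bayes rule (the class structure and toy data are our own illustration, not taken from any cited paper):

```python
import math
from collections import defaultdict

class GaussianNB:
    """Gaussian naive Bayes: assumes features are conditionally
    independent given the class and normally distributed."""

    def fit(self, X, y):
        groups = defaultdict(list)
        for row, label in zip(X, y):
            groups[label].append(row)
        # Class priors and per-class, per-feature (mean, variance).
        self.priors = {c: len(rows) / len(X) for c, rows in groups.items()}
        self.stats = {}
        for c, rows in groups.items():
            stats = []
            for col in zip(*rows):
                mean = sum(col) / len(col)
                var = max(sum((v - mean) ** 2 for v in col) / len(col), 1e-9)
                stats.append((mean, var))
            self.stats[c] = stats
        return self

    def predict(self, x):
        # argmax over classes of log P(c) + sum_i log N(x_i | mean, var)
        def log_gauss(v, mean, var):
            return -0.5 * math.log(2 * math.pi * var) - (v - mean) ** 2 / (2 * var)

        scores = {
            c: math.log(self.priors[c])
               + sum(log_gauss(v, m, s2) for v, (m, s2) in zip(x, self.stats[c]))
            for c in self.priors
        }
        return max(scores, key=scores.get)
```

The cited studies would add smoothing, feature engineering, and proper evaluation on top of this core rule; library implementations (e.g. in scikit-learn) are the practical choice.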

## support vector machine

First, a state-of-the-art diagram for prediction is designed and data mining classifiers like naive Bayes, support vector machine, decision tree, and k-nearest neighbour are compared; then a methodology with new techniques is proposed.^{[1]}Our experiments demonstrate that the Decision Tree method of KerTSDroid can achieve the best precision rate (96%-99%) with a lower overhead on average, while the other three methods (Naive Bayes, Logistic Regression, and Support Vector Machine) lead to lower precision rates while discriminating abnormal behaviors with the in-memory parallel data.

^{[2]}, Support Vector Machine (SVM), Naive Bayes (NB) and K-nearest neighbor (KNN) for prediction.

^{[3]}For predicting tweets selling drugs, Support Vector Machine yielded the highest accuracy rate (96%), whereas for predicting the legality of the advertised drugs, the Naive Bayes classifier yielded the highest accuracy rate (85%).

^{[4]}Machine learning technologies that include neural networks, fuzzy sets, rough sets, support vector machines, Naive Bayesian, swarm optimization, and deep learning are also presented.

^{[5]}On the pretext of classification models, Support vector machine (SVM), k-NN, artificial neural networks (ANN), logistic regression (LR), random forest (RF) and Naive Bayes (NB) were employed to identify the type of tricks performed.

^{[6]}Support Vector Machine (SVM), Random Forest (RF), Naive Bayes (NB), Decision Tree (DT), K-Nearest Neighbor (KNN), Artificial Neural Network (ANN), Fuzzy Neural Network (FNN), Radial Basis Function Network (RBFN), Shuffled Frog Leaping with Levy Flight, Particle Swarm Optimization, Back Propagation Neural Network, Multilayered Perceptron, SVM Recursive Feature Elimination etc.

^{[7]}The experiments then investigated the performance of the proposed classification model based on eight supervised classification algorithms, which are ZeroR, Rule Induction, Support Vector Machine, Naive Bayes, Decision Tree, Decision Stump, k-Nearest Neighbour, and Classification via Regression.

^{[8]}Empirical work is performed on different algorithms like Support Vector Machine, Random Forest, XGBoost, Logistic Regression, Neural networks, Naive Bayes Classifier.

^{[9]}In our algorithm, initially we classify the reviews of each domain using naive Bayes and Support Vector Machine (SVM) algorithms which are in machine learning approach and then find the polarity at document level using HARN’s algorithm which comes under lexicon-based approach.

^{[10]}Naive Bayes, Support Vector Machines, Artificial Neural Networks, Decision Trees and Logistic Regression classification algorithms are used at the classification phase and the obtained results are shared.

^{[11]}We show that the algorithm can outperform eight other classification methods, namely naive Bayes, support vector machines, linear discriminant analysis, multilayer perceptrons, decision trees, and $k$ -nearest neighbors, and two recently proposed classification methods, in 12 standard classification data sets.

^{[12]}Finally all the features extracted from these methods were fed into the Support Vector Machine (SVM), Random Forest (RF), K-Nearest Neighbor (KNN) and Naive Bayes (NB) classifiers to diagnose the skin image which is either melanoma or benign lesions.

^{[13]}Then feature vectors were formed to classify these two types of patients using the support vector machine (SVM) and the naive Bayes (NB) classifier.

^{[14]}Subsequently, the proposed method was applied in labeling instances of the input data (Quranic verses) using standard classifiers: naive bayes (NB), support vector machine (SVM), decision trees (J48).

^{[15]}Our reported performance comparison results focus on four ML models: Deep Learning (DL), Random Forest (RF), Naive Bayes (NB) and Support Vector Machines (SVM).

^{[16]}In this study, we used k-Nearest Neighbour (k-NN), Logistic Regression (LR), Naive Bayes (NB), Decision Tree (DT), support vector machine (SVM), random forest (RF), and boosting as the base classifiers of ensemble model.

^{[17]}This paper performs a comparative study on human activity recognition process in terms of employment of two different data preprocessing methods accompanied by five fashionable classifiers entitled as Random Forest (RF), Support Vector Machine (SVM), Naive Bayes (NB), Multilayer Perceptron (MLP) and Deep Convolutional Neural Network (CNN).

^{[18]}Machine learning techniques such as J48, Support Vector Machine (SVM), Logistic Regression (LR), Naive Bayes (NB) and Artificial Neural Network (ANN) were widely used to detect phishing attacks.

^{[19]}The effectiveness of the designed scheme is evaluated with existing machine learning schemes such as Naive Bayes, AABC, APSO, and support vector machine, which outperform the HOS.

^{[20]}This study uses two algorithms: Naive Bayes and Support Vector Machine (SVM).

^{[21]}Online and offline classification performance of two classifiers (Gaussian Naive Bayes classifier, GNB, and support vector machine, SVM) were investigated.

^{[22]}The k-nearest neighbour (k-NN), Naive Bayes (NB), support vector machine (SVM) classifiers were represented by PBIA, the Decision Tree (DT) classifier was examined as OBIA and the Dempster–Shafer (DS) fusion classifier was manifested for the first time as FBIA.

^{[23]}In this study, the applicability and efficiency of four ML models: support vector machine (SVM), random forest (RF), naive Bayes (NB) and generalized additive model (GAM), for snow avalanche hazard mapping, were evaluated.

^{[24]}The classifiers evaluated were: support vector machines (SVM), K-nearest neighbors (KNN), J48, Random Forest (RF), Naive Bayes and linear discriminant analysis (LDA).

^{[25]}, CatBoost, Logistic Regression, Naive Bayes, Random Forest, and Support Vector Machine, were evaluated using the Python programming language.

^{[26]}In order to test our model’s performance against other methods, we implemented Local Binary Patterns (LBP) for feature extraction, and Support Vector Machines (SVM), Gaussian Naive Bayes (GNB) and k-Nearest Neighbor (kNN) classifiers.

^{[27]}The emotion analysis has been performed by Support Vector Machines (SVM) and multinomial Naive Bayes (NB) using test and train sets derived from Twitter corpus.

^{[28]}This paper proposes using Decision Tree (DT), Support Vector Machine (SVM), Naive Bayesian (NB), K-nearest neighbour (KNN) and Artificial Neural Network (ANN) to study and analyse delays among aircrafts.

^{[29]}Molecules activity is predicted using support vector machine (SVM), Naive Bayesian (NB), K-Nearest Neighbor (KNN), Decision Tree (DT) and Neural Network (NN) Classifiers.

^{[30]}The Support Vector Machine, Nearest Neighbor, Naive Bayes, Neural Networks and Random Forest algorithms are applied to model an intelligent system which will evaluate the attention level of the students.

^{[31]}85, using a bag of support vector machines, naive Bayes, logistic regression, and random forest algorithms.

^{[32]}In processing ML, we chose Support Vector Machine (SVM) and Naive Bayes (NB) to form three models: Word2Vec-SVM, TFIDF-SVM, and TFIDF-NB.

^{[33]}The First stage consists of four models: Random Forest, Support Vector Machine, Naive Bayes and Decision Trees.

^{[34]}Support Vector Machines (SVM) and Naive Bayes (NB) are used as supervised machine learning classification tools.

^{[35]}Several data mining techniques such as Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Decision tree, Naive Bayes and Artificial Neural Network (ANN) are introduced for the prediction of health disease.

^{[36]}Experiments were conducted on two widely used supervised spam classifier algorithms: Naive Bayes and Support Vector Machine.

^{[37]}Support vector machine with naive Bayes features (NB-SVM) is selected as the enhanced model after comparison with several baseline models.

^{[38]}The tested algorithm will be Support Vector Machine, Logistic Regression, Naive Bayes, Random Forest, and K-Nearest Neighbor.

^{[39]}We used Support Vector Machine classifier to find English word for a Sanskrit word in case of sentences with more than 5 words and it resulted in 10% more accurate translations as compared to that by using Naive Bayes Classifier.

^{[40]}These generic features are then used with Support Vector Machines, Logistic Regression, Naive Bayes and Decision trees to predict the data into on-time or delayed processes.

^{[41]}We evaluated its efficacy on the SNP genotyping data collected by the Southeastern University of China and compared it with naive Bayes, support vector machine, and random forest.

^{[42]}We built faultiness estimation models---by using Binary Logistic Regression, Naive Bayes, Support Vector Machines, and Decision Trees---for 54 datasets from the PROMISE repository.

^{[43]}We investigate the relative performance of various classifiers such as Naive Bayes, SMO-Support Vector Machine (SVM), Decision Tree, and also Neural Network (multilayer perceptron) for our purpose.

^{[44]}This study used Naive Bayes, Support Vector Machines, and Linear Support Vector for the classification processes.

^{[45]}For diagnosis of a disease, Naive Bayesian [NB], Support Vector Machine [SVM] and Artificial Neural Network [ANN] Classification systems are investigated and Fuzzy C-Means Clustering are analyzed to make clusters.

^{[46]}They are Decision trees, Naive Bayes algorithm and Support Vector Machines.

^{[47]}For sentiments classification, the authors used different classifiers such as support vector machines (SVM), Naive Bayes (NB) and logistic regression (LR).

^{[48]}The proposed approach includes an ensemble combining MIWrapper and SimpleMI learners with Naive Bayes, Support Vector Machines (SVM), Neural Networks (Multilayer Perceptron (MLP)), and Decision Tree (C4.5).

^{[49]}While Support Vector Machine, Naive Bayes, and k-Nearest Neighbor are among the frequently used standard classifiers, Fuzzy classifiers and ensemble learning also have been attempted.

^{[50]}
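None of the excerpts above spell out how a support vector machine is actually trained. As an illustrative sketch (hyperparameters, function names, and data are our own, not from any cited paper), a linear SVM can be fit with Pegasos-style stochastic sub-gradient descent on the regularised hinge loss:

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=100, seed=0):
    """Pegasos-style training of a linear SVM: stochastic sub-gradient
    descent on the hinge loss plus an L2 regulariser.
    Labels must be +1/-1. No bias term, so data should be roughly
    centered around the origin. Returns the weight vector."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    w = [0.0] * d
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(n), n):  # one shuffled pass per epoch
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1.0 - eta * lam) * wj for wj in w]  # shrink (regulariser)
            if margin < 1:  # hinge loss violated: step toward the sample
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def svm_predict(w, x):
    """Classify by the sign of the linear score."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
```

Kernelised and soft-margin variants (as used in most of the studies above) are usually obtained from library solvers rather than written by hand.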

## k nearest neighbor

Supervised ML algorithms, which included decision tree, naive Bayes with Laplace correction, k-nearest neighbors, and artificial neural networks, were trained and tested as binary classifiers (infection or no infection).^{[1]}The supervised learning models involve in the study are Naive Bayes, K-Nearest Neighbor (KNN), Decision Tree, Support Vector Machine (SVM) and Random Forest.

^{[2]}Machine-learning classifiers like Naive Bayes, Iterative Dichotomiser-3 (ID3), K-Nearest Neighbor (KNN), Decision Tree and Random Forest are used for the classification of legitimate and illegitimate websites.

^{[3]}The teak classification process uses digital image processing with the Naive Bayes and k-Nearest Neighbor (k-NN) methods.

^{[4]}Various classifiers, such as k-nearest neighbor (k-NN), rule-based classifier, decision tree, random forest, naive Bayes, and support vector machine were benchmarked against each other.

^{[5]}Classification techniques used are K-Nearest Neighbor (KNN), Decision Trees (DT), Naive Bayes (NB), Support Vector Machines (SVM) and ensemble techniques Bagging, Voting and Random Forest (RF).

^{[6]}The weather forecast is made by applying data mining using the algorithms Naive Bayes, K-nearest Neighbor (K-NN), and C.

^{[7]}Then, we used K-nearest neighbor (KNN), support vector machine (SVM), and naive Bayesian (NB) classifiers to evaluate the performance of the proposed method in the selection of relevant genes.

^{[8]}The features that are extracted from the human brain can be classified using different algorithms such as k-nearest neighbor (KNN), support vector machine (SVM), Naive Bayes (NB), and Artificial neural network (ANN) in order to finalize the process of decoding human brain.

^{[9]}C4.5), Naive Bayes (NB) and k-Nearest Neighbors (k-NN) applied to the Wisconsin Breast Cancer (WBC original) datasets.

^{[10]}The classifiers evaluated are K Nearest Neighbor, Support Vector Machine, Gaussian Process, Decision Tree, Random Forest, Multilayer Perceptron, AdaBoost, Gaussian Naive Bayes, and Quadratic Discriminant Analysis.

^{[11]}At the classification step, the k nearest neighbor (kNN) algorithm or the naive Bayes classifier (NBC) is used to determine whether each pixel of the image belongs to the neuron’s cell body.

^{[12]}Naive Bayes, k-nearest neighbor and j48 algorithm are used in this paper for predicting cancer disease.

^{[13]}Currently, common text classification algorithms include KNN (k-Nearest Neighbor), SVM (Support Vector Machine) and Naive Bayes.

^{[14]}25% classification accuracy compared to the six classifiers, which are K-Nearest Neighbors Classifier, Multi Class Classifier, Tree-Random, Multilayer Perceptron, Naive Bayes, and Support Vector Machine.

^{[15]}K Nearest Neighbor (KNN), Linear Discriminant Analysis (LDA), Multinomial Logistic Regression (MLR), Naive Bayes (NB), Decision Trees and Support Vector Machines (SVM) are six classifiers that are evaluated in this study.

^{[16]}This paper presents the Naive Bayes improved K-Nearest Neighbor method (NBKNN) for breast cancer prediction and compares the results with traditional classifiers like traditional K-nearest Neighbor and naive Bayes.

^{[17]}Extensive experiments were performed to test and validate the model features to train random forest (RF), K-nearest neighbor (KNN), naive Bayes (NB), and decision tree (DT).

^{[18]}K-nearest neighbor (K-NN), linear discriminant analysis, naive Bayes, error-correcting output classifier and decision tree classifiers are used for image recognition process.

^{[19]}Main Outcomes and Measures Through an iterative process, performance metrics associated with instrument movement and force, resection of tissues, and bleeding generated from the raw simulator data output were selected by K-nearest neighbor, naive Bayes, discriminant analysis, and support vector machine algorithms to most accurately determine group membership.

^{[20]}This paper presents the results of popular classification methods, k-Nearest Neighbor, Centroid Classifier, and Naive Bayes, in handling the outlier detection task.

^{[21]}C4.5, Bayesian Network (BN), K-Nearest Neighbors (KNN), Naive Bayes (NB), Neural Network (NN) and SVM (Support Vector Machine).

^{[22]}In this paper, it was majorly discussed all the research work being carried out using the data mining techniques to enhance heart disease diagnosis and prediction including decision trees, Naive Bayes classifiers, K-nearest neighbor classification (KNN), support vector machine (SVM), decision tree and PCA.

^{[23]}We perform experiments on human activity dataset using four classifiers, Naive Bayes, Random Forest, K-Nearest Neighbor and Support Vector Machine(NB, RF, KNN and SVM).

^{[24]}To automate the process of categorizing internet traffic, machine learning based supervised classification techniques namely Naive Bayes and K Nearest Neighbors are implemented.

^{[25]}C4.5), Naive Bayes (NB) and k-Nearest Neighbors (k-NN) on the Wisconsin Breast Cancer (original) datasets is conducted.

^{[26]}The hepatitis dataset and the data set formed by the attributes determined by the correlation-based and the fuzzy-based rough-attribute selection methods were classified using the k-nearest neighbor, Random Forest, Naive Bayes, and Logistic Regression algorithms and the results were compared.

^{[27]}Classification algorithms investigated in this study were Naive Bayes, K-Nearest Neighbor (K-NN), Support Vector Machine (SVM), and Random Forest (RF).

^{[28]}In this study, a k-nearest neighbor (kNN) classifier and a Naive Bayes (NB) classifier are used; training was performed by applying the k-fold cross validation method.

^{[29]}Here, five powerful classification algorithms including k-Nearest Neighbors (k-NN), Gaussian Naive Bayes (GNB), Support Vector Machine (SVM), Multilayer Perceptron (MLP) Neural Network, and Random Forest (RF) are tested.

^{[30]}With the proposed algorithm, different classifiers such as k-nearest neighbor (KNN), decision tree, naive Bayes, and multi-support vector machines (SVM) are tested.

^{[31]}Also, it was measured the accuracy for face recognition and execution time for the Smart Event Faces Database using ResNet 34 for feature extraction and the next classifiers: K-Nearest Neighbors, Naive Bayes, Random Forest, Multi-Layer Perceptron, Decision Tree, Adaboost and Support Vector Machine.

^{[32]}Currently, common text categorization algorithms include k-Nearest Neighbor, Support Vector Machine (SVM) and Naive Bayes.

^{[33]}The performances of Decision Tree, Naive Bayes, and k-Nearest Neighbor (kNN) methods have been tested on these data sets, and the kNN method has given the best result on two data sets.

^{[34]}After that, to predict mortality, many classifiers were tested which are Linear Discriminant Analysis (LDA), K-Nearest Neighbors (KNN), Classification and Regression Trees (CART), Logistic Regression (LR), Support Vector Machine (SVM), Naive Bayes (NB) and Random Forest (RF).

^{[35]}Finally, we use k-Nearest Neighbor (k-NN) algorithm and Naive Bayes algorithm combined with voting mechanism to successfully identify the driver's identity.

^{[36]}The methods used in the comparative study are K-Nearest Neighbors (KNN) from the similarity basis, Naive Bayes (NB) from the probability basis, and C4.5.

^{[37]}The used classifiers are: K-Nearest Neighbors, Gaussian Naive Bayes and Support Vector Machines.

^{[38]}As a method K-Nearest Neighbor (KNN), Case Based Reasoning (CBR) and Naive Bayes methods are applied to Zolkepli's English Text Emotion (ETE) dataset.

^{[39]}This paper proposes a novel framework that aims at enhancing the aforementioned advantages in terms of scalability by increasing the number of nodes in the Hadoop cluster and analyzing the performance of classification algorithms like K-Nearest Neighbor, Naive Bayes and Decision Tree.

^{[40]}The models presented are K-Nearest Neighbors, Naive Bayes, Decision Trees, Random Forests, and Extra Trees.

^{[41]}In this work, our focus is on the diagnosis of thyroid diseases using three classification models, K-Nearest Neighbor (K-NN), Decision Tree and Naive Bayes, based on certain clinical thyroid attributes such as Age, Gender, TSH, T3 and T4.

^{[42]}Here, the lung image classification is done by four different classifiers such as K-Nearest Neighbor (KNN), Naive Bayes (NB), Neural Network (NN) and Random Forest (RF).

^{[43]}Experimental work has been carried out using classification algorithms such as K Nearest Neighbor (KNN), Decision Tree(DT), Naive Bayes (NB), Support Vector Machine (SVM), Logistic Regression (LR) and Random Forest(RF) on Pima Indians Diabetes dataset using nine attributes which is available online on UCI Repository.

^{[44]}Conventional machine learning techniques including Decision Tree, Naive Bayes, K Nearest Neighbors, and Support Vector Machine (SVM), ensemble methods including Random Forest, Gradient Boosting, and Adaboosting, and the deep learning approach Siamese Network are tested and compared.

^{[45]}To predict whether individual person is a donor or not from the data given by the person, Naive Bayes technique and K-nearest neighbors (KNN) algorithm are used.

^{[46]}Finally, K-Nearest Neighbor (K-NN), Support Vector Machine (SVM), Logistic Regression (LR) and Gaussian Naive Bayes (GNB) classifiers are employed for emotion recognition.

^{[47]}The research focuses on comparing the effect of the K-nearest Neighbor (KNN), Naive Bayes (NB), and Support Vector Machine (SVM) classifiers on spam classification (without feature selection), and also enhances the reliability of feature selection by proposing an optimized feature selection that reduces the number of unimportant features.

^{[48]}For that end, we rebuilt the training and test datasets and selected the following six classifiers to our analyses: classification and regression trees, random forest, k-nearest neighbors, naive Bayes, neural network and support vector machines.

^{[49]}The following classification methods were tested during the development process: k-nearest neighbors, support vector machine, random forest classifier, logistic regression, naive Bayes.

^{[50]}
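k-nearest neighbor, the other classifier recurring throughout this section, is simple enough to state in a few lines. A minimal sketch (the distance metric, `k` value, and toy labels are chosen for illustration only):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training
    points under Euclidean distance."""
    dists = sorted(
        (math.dist(x, row), label) for row, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

Because kNN stores the whole training set and computes all distances at query time, the studies above typically pair it with feature selection or dimensionality reduction to keep it tractable.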

## machine learning algorithm

The authors have compared the performance of tweet credibility with state-of-the-art machine learning algorithms such as the Naive Bayes classifier, SVM rank algorithm, and Random Forest classifier.^{[1]}Fraud prevention in e-commerce shall be developed using machine learning; this work analyzes suitable machine learning algorithms, namely Decision Tree, Naive Bayes, Random Forest, and Neural Network.

^{[2]}In this study, we applied a lexicon-based method and machine learning algorithms, namely support vector machine, naive Bayes, logistic regression and decision tree methods, to Turkish datasets of various sizes.

^{[3]}We used Machine Learning Algorithms like CART, K-NN, Gaussian Naive Bayes, and Multilayer Perceptron (MLP).

^{[4]}A Hybrid approach combining the lexicon approach Sentiment VADER and machine learning algorithm Naive Bayes is applied on the comments to predict the sentiment.

^{[5]}We have used the machine learning algorithms Stochastic Gradient Descent (SGD), Decision Tree (DT), Support Vector Machine (SVM) and Naive Bayes (NB) for the question classification process, where Support Vector Machine (SVM) with a linear kernel performs best, providing an accuracy of 90.

^{[6]}This paper shows how performance metrics of the same machine learning algorithms change when using the ADASYN method while analyzing an imbalanced text corpus using the K-nearest neighbors’ method and Naive Bayes.

^{[7]}This paper aims to empirically analyze various statistical machine learning algorithms like Naive Bayes, Support Vector Machine, Random Forest and deep learning algorithms like Convolutional Neural Network, Long Short Term Memory over emodb dataset which is publicly available for emotion classification into angry, sad, happy, neutral, other classes.

^{[8]}The datasets are processed in Python using two main machine learning algorithms, namely the Decision Tree algorithm and the Naive Bayes algorithm, to show which of the two is best in terms of accuracy for heart disease.

^{[9]}The fall and other human activities are classified by using machine-learning algorithms like Support Vector Machine (SVM), Naive Bayes and Decision Tree.

^{[10]}This paper is focused on comparison and study of hybrid model of classification and machine learning algorithms based on decision tree, clustering, artificial neural network, Naive Bayes, etc.

^{[11]}Next, eight machine learning algorithms, namely Bayes Net, Naive Bayes, SMO, J48, Random Forest, AdaBoost, AdaBag and logistic regression, were trained by the virtually screened data, and used to predict the activity or inactivity of a drug through bioassays.

^{[12]}Four machine learning algorithms (naive Bayes, k-Nearest Neighbour, Support Vector Machine, and Decision Trees J48) were used during training and classification.

^{[13]}Results are analyzed using Machine Learning Algorithms like Support vector Machine and Naive Bayes.

^{[14]}This paper presents a new hybrid machine learning algorithm which combines two existing algorithms, Naive Bayes and C4.5.

^{[15]}Moreover, we find that deep learning displays comparable performance to other machine learning algorithms such as support vector machines, k-nearest neighbors, naive Bayes classifier, and logistic regression.

^{[16]}Further, we compared the performances of RNNs to 5 machine learning algorithms including Naive Bayes, K-nearest Neighbor, Support Vector Machine for classification, Random forest, and Logistic Regression.

^{[17]}The machine learning algorithms (Naive Bayes, maximum entropy) are utilized to choose the outcomes.

^{[18]}Based on various performance measures, this paper compares the results of machine learning algorithms like Multinomial Naive Bayes algorithm, Logistic Regression, SVM Classifier, Decision Tree and Random Forest.

^{[19]}Machine learning algorithms like support vector machines, decision tree classifier, naive Bayes classifier, and artificial neural networks have been effectively used for such kind of problems.

^{[20]}The data set is mined using machine learning algorithms namely Logistic Regression, Random Forest, Support Vector Machine, Naive Bayes and k-Nearest Neighbors.

^{[21]}Multinomial Naive Bayes (MNB), Support Vector Regression (SVR), Decision Tree (DT), Random Forest (RF), and Decision Integration Strategy (DIS) are evaluated as conventional machine learning algorithms.

^{[22]}Many types of machine learning algorithms are widely adopted and implemented for the early detection of various diseases; these algorithms include Decision Tree, Naive Bayes, Support Vector Machine, and Logistic Regression.

^{[23]}In this research, the focus is on the popularly used machine learning algorithms like K-Nearest Neighbors (KNN), Naive Bayes (NB), Support Vector Machine (SVM) and Decision Trees (DT), along with their suitability, advantages and disadvantages with performance accuracy.

^{[24]}The proposed experiment is based on a combination of classic machine learning algorithms such as Naive Bayes and Random Forest with various ensemble methods such as Stochastic, Linear Discriminant Analysis (LDA), and the Tree model (C5.0).

^{[25]}This paper studies the forecasting mechanism of the most widely used machine learning algorithms, namely linear discriminant analysis, logistic regression, k-nearest neighbors, random forests, artificial neural network, naive Bayes, classification and regression trees, support vector machines, adaptive boosting, and stacking ensemble model, in forecasting first-generation college students’ six-year graduation using the first college year’s data.

^{[26]}Next, we analyze the generated dataset to select the best feature set to detect different attacks as well as evaluate our dataset through the execution of 4 common machine learning algorithms, namely decision tree, Naive Bayesian, Support Vector Machine and Multi-Layer Perceptron.

^{[27]}The RAndom k-labELsets (RAkEL) multi-label ensemble learning algorithm in combination with machine learning algorithms, like J48, support vector machine (SVM) and Naive Bayes (NB), are utilized to build up the proposed IDS by classifying different network intrusions with higher detection rate and lower false-positive rate.

^{[28]}Multiple text classification algorithms will be available, as listed below: Linear SVM, Random Forest, Multinomial Naive Bayes, Bernoulli Naive Bayes, Ridge Regression, Perceptron, Passive Aggressive Classifier, and a deep machine learning algorithm.
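Several of the excerpts above apply Multinomial Naive Bayes to text. As a minimal, self-contained illustration (not taken from any cited paper, with invented toy documents and labels), a Laplace-smoothed Multinomial Naive Bayes text classifier can be sketched in pure Python:

```python
import math
from collections import Counter, defaultdict

def train_multinomial_nb(docs, labels):
    """Multinomial Naive Bayes over bag-of-words counts with Laplace smoothing."""
    vocab = {w for d in docs for w in d.split()}
    word_counts = defaultdict(Counter)   # label -> word -> count
    label_counts = Counter(labels)
    for doc, label in zip(docs, labels):
        word_counts[label].update(doc.split())
    model = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        log_prior = math.log(label_counts[label] / len(docs))
        # add-one smoothing so unseen-in-class words get nonzero probability
        log_likelihood = {w: math.log((word_counts[label][w] + 1) / (total + len(vocab)))
                          for w in vocab}
        model[label] = (log_prior, log_likelihood)
    return model

def classify(model, doc):
    # pick the label maximizing log prior + sum of per-word log likelihoods
    def score(label):
        log_prior, log_likelihood = model[label]
        return log_prior + sum(log_likelihood.get(w, 0.0) for w in doc.split())
    return max(model, key=score)

docs = ["great movie loved it", "terrible plot bad acting",
        "loved the acting great fun", "bad terrible waste"]
labels = ["pos", "neg", "pos", "neg"]
model = train_multinomial_nb(docs, labels)
print(classify(model, "great acting"))  # → pos
```

Words absent from the training vocabulary are simply ignored at prediction time; production libraries such as scikit-learn handle this (and the smoothing strength) via configurable parameters.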

^{[29]}We tested the performance of four machine learning algorithms (Naive Bayes, Generalized Linear Model, Logistic Regression, Decision Tree) on our data, including the 254 items in the baseline forms.

^{[30]}For the detection of these accounts, machine learning algorithms like Naive Bayes, logistic regression, support vector machines and neural networks are applied.

^{[31]}The proposed methodology uses dimensional reduction techniques for visualization (PCA, t-SNE) and machine learning algorithms (SVM, Naive Bayes, random forest, logistic regression) to perform classification of URLs contained in HTTP headers.

^{[32]}Among them, machine learning algorithms include Logistic Regression, Naive Bayes and Support Vector Machine (SVM).

^{[33]}We predicted COHSI score and RFTN using random Bootstrap samples with manually introduced Gaussian noise together with machine learning algorithms, such as Extreme Gradient Boosting and Naive Bayesian algorithms (using R).

^{[34]}The objectives of this study are to (1) develop a supervised naive Bayes machine-learning algorithm using preoperative patient data to predict length of stay and cost after hip fracture and (2) propose a patient-specific payment model to project reimbursements based on patient comorbidities.

^{[35]}The basic machine learning algorithms used in this work are Logistic Regression, Decision Tree, and Naive Bayes, in addition to the Random Forest ensemble learning method.

^{[36]}Then, the Machine Learning algorithms such as Decision Tree (DT) and Naive Bayes (NB) are used for classification.

^{[37]}Machine learning algorithms such as J48, Naive Bayes, Random Forest and REP tree are compared using Kddcup99 dataset.

^{[38]}The machine learning algorithms used in this work were Logistic Regression, Naive Bayes Classifier, Random Forest Classifier and K-Nearest Neighbors.

^{[39]}The recurrence risk classification performances of machine learning algorithms (random forest, neural network, naive Bayes, logistic regression, and support vector machine) using the 20 best-ranked features were compared using the areas under the receiver operating characteristic curve (AUC) and validated by the random sampling method.

^{[40]}This research will explore the use of some machine learning algorithms like support vector machine, multinomial naive Bayes, and decision tree to classify news article in the Indonesian Language.

^{[41]}This prediction is implemented by using machine learning algorithms such as Gaussian Naive Bayes, Support Vector Machine, K-Nearest Neighbor and Random Forest.

^{[42]}Five different types of machine learning algorithms were tested: support vector machines (SVM), decision trees using the algorithm J48 and random forest, k-nearest neighbors (k-NN), and Naive Bayes.

^{[43]}A detailed survey over variety of machine learning algorithms like SVM, Naive Bayes, Dession Tree, Random Forecast, K-Means Clustering, Partition Algorithm, Bayesian Algorithm, Hierarchical Algorithm, Missing Values, Low Variance, Principal Component Analysis, Rough Set Theory, etc, over the seven medical data sets scenario which is taken to study and the results will be made on the aspect, which algorithms are good for what kind of medical records.

^{[44]}Then, the classifier was constructed by using a machine learning algorithm called Naive Bayes.

^{[45]}Using simulations, we compare the performance of PSM and PSW based on logistic regression and machine learning algorithms (CART; Bagging; Boosting; Random Forest; Neural Networks; naive Bayes).

^{[46]}Four machine learning algorithms were selected in this study: logistic regression, naive Bayes, AdaBoost, and random forest.

^{[47]}Six typical machine learning algorithms (Naive Bayes Classifier, Multilayer Perceptron, LogitBoost, Bagging, Random Forest, and Decision Tree) have been used to predict change proneness using code smell from a set of 8200 Java classes spanning 14 software systems.

^{[48]}Many statistical methods and machine learning algorithms such as support vector machine, principal component analysis (PCA), logistic regression, simple linear regression, naive Bayes classifier, generalized linear regression, and random forest have been used for drug target prediction.

^{[49]}We used several cutting-edge machine learning algorithms including Random Forest (RF), Support Vector Machine (SVM), Decision Tree (DT), Naive Bayes (NB) on diabetes data.
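Gaussian Naive Bayes, named in several excerpts above, models each feature within a class as normally distributed and predicts the class maximizing the log prior plus the summed log likelihoods. As a minimal sketch in pure Python (toy data and class structure are invented for illustration, not drawn from any cited study):

```python
import math
from collections import defaultdict

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means/variances,
    prediction by maximum log posterior under a feature-independence assumption."""

    def fit(self, X, y):
        groups = defaultdict(list)
        for row, label in zip(X, y):
            groups[label].append(row)
        self.stats = {}
        for label, rows in groups.items():
            cols = list(zip(*rows))
            means = [sum(c) / len(c) for c in cols]
            # small epsilon keeps the variance nonzero for constant features
            variances = [sum((v - m) ** 2 for v in c) / len(c) + 1e-9
                         for c, m in zip(cols, means)]
            prior = math.log(len(rows) / len(X))
            self.stats[label] = (prior, means, variances)
        return self

    def predict(self, X):
        return [self._predict_one(row) for row in X]

    def _predict_one(self, row):
        def log_posterior(label):
            prior, means, variances = self.stats[label]
            return prior + sum(
                -0.5 * math.log(2 * math.pi * var) - (x - m) ** 2 / (2 * var)
                for x, m, var in zip(row, means, variances))
        return max(self.stats, key=log_posterior)

# toy data: two well-separated 2-D clusters
X = [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2], [5.0, 6.0], [5.2, 5.8], [4.8, 6.2]]
y = [0, 0, 0, 1, 1, 1]
model = GaussianNB().fit(X, y)
print(model.predict([[1.1, 2.1], [5.1, 6.1]]))  # → [0, 1]
```

The simplicity, speed, and robustness that the excerpts repeatedly credit to Naive Bayes follow directly from this structure: training reduces to counting and per-feature moment estimation, with no iterative optimization.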

## different machine learning

The selected features are trained using different machine learning classifiers: Logistic Regression (LR), Support Vector Machines (SVM), Decision Tree (DT), and Naive Bayes (NB).^{[1]}Six different machine learning algorithms - Naive Bayes, Decision Tree, Random Forest, Tree Ensemble, Logistic Regression, and Support Vector Machines - were used in our cyberbullying detection models.

^{[2]}The performance will be compared between different machine learning algorithms: Random Forest classifier (RF), Logistic Regression (LR), Decision Tree (DT), Support Vector Machine (SVM) and Naive Bayes (NB) on AML dataset National Cancer Institute (NCI), Cairo University.

^{[3]}Specifically, different machine learning algorithms including K-Nearest-Neighbor (KNN),