Room Occupancy Detection Using Machine Learning Algorithms


This article was published as a part of the Data Science Blogathon

In this article, we will see how to detect room occupancy from environmental sensor data using machine learning algorithms. For this purpose, I am using the Occupancy Detection Dataset from the UCI ML Repository. Ground-truth occupancy was obtained from time-stamped pictures, while environmental variables such as temperature, humidity, light, and CO2 were recorded every minute. Replacing a physical PIR sensor with an ML algorithm reduces hardware cost and maintenance, which could be useful in the field of HVAC (Heating, Ventilation, and Air Conditioning).

Data Understanding and EDA

Here we are using R for ML programming. The dataset zip has three text files: one for training the model and two for testing it. We read these files in R with read.csv(), and we use the "summarytools" package to explore the data structure, dimensions, and five-point summary statistics of the dataset. The images included here are captured from the R console while executing the code.

data= read.csv("datatrain.txt",header = T, sep = ",", row.names = 1) View(data) library(summarytools) summarytools::view(dfSummary(data))

Data Summary 

Observations

All environmental variables are read correctly as numeric by R; however, the "date" variable needs to be converted to a date/time class, for which we use the "lubridate" package. The date column also contains time-of-day information, which we can extract and use for modelling, since occupancy of spaces such as offices depends on the time of day. Another important observation is that there are no missing values in the entire dataset. Finally, the Occupancy variable needs to be defined as a factor for further analysis.

library("readr") library("lubridate") data$date1 = as_date(data$date) data$date= as.POSIXct(data$date, format = "%Y-%m-%d %H:%M:%S") data$time = format(data$date, format = "%H:%M:%S") data1= data[,-1] data1$Occupancy = as.factor(data1$Occupancy)

Now our processed data looks like this:

Processed Data Preview

Next, we check two important aspects of the data: the correlation plot of the variables, to understand multicollinearity in the dataset, and the distribution of the target variable.

library(corrplot) numeric.list <- sapply(data1, is.numeric) numeric.list sum(numeric.list) numeric.df <- data1[, numeric.list] cor.mat <- cor(numeric.df) corrplot(cor.mat, type = "lower", method = "number")

Correlation Plot

library(plotrix) pie3D(prop.table((table(data1$Occupancy))), main = "Occupied Vs Unoccupied", labels=c("Unoccupied","Occupied"), col = c("Blue", "Dark Blue"))

Pie Chart for Occupancy

From the correlation plot, we observe that temperature and light are positively correlated, while temperature and humidity are negatively correlated. Humidity and humidity ratio are highly correlated, which is expected because the humidity ratio is derived from humidity and temperature. Hence, while building the various models we will keep the Humidity variable and omit HumidityRatio.

Now we are all set for model building. As our response variable Occupancy is binary, we need classification types of models. Here we implement CART, RF, and ANN.

Model Building- Classification and Regression Trees (CART): Now we define training and test datasets in the required format for model building.

p_train = data1 p_test = read.csv("datatest2.txt",header = T, sep = ",", row.names = 1) p_test$date1 = as_date(p_test$date) p_test$date= as.POSIXct(p_test$date, format = "%Y-%m-%d %H:%M:%S") p_test$time = format(p_test$date, format = "%H:%M:%S") p_test= p_test[,-1]

Note that the R implementation of the CART algorithm is called RPART (Recursive Partitioning And Regression Trees) available in a package of the same name. The algorithm of decision tree models works by repeatedly partitioning/splitting the data into multiple sub-spaces so that the outcomes in each final sub-space are as homogeneous as possible.

The model uses different splitting rules to predict the outcome effectively. These rules are produced by repeatedly splitting on the predictor variables, starting with the variable that has the strongest association with the response variable. The process continues until some predetermined stopping criteria are met. We define these stopping criteria with control parameters, such as the minimum number of observations that must exist in a node before a split is attempted (minsplit) and the factor by which a split must decrease the overall lack of fit to be attempted (cp).

library(rpart) library(rpart.plot) library(rattle) #Setting the control parameters r.ctrl = rpart.control(minsplit=100, minbucket = 10, cp = 0, xval = 10) #Building the CART model set.seed(123) m1 <- rpart(formula = Occupancy~ Temperature+Humidity+Light+CO2+date1+time, data = p_train, method = "class", control = r.ctrl) #Displaying the decision tree fancyRpartPlot(m1)

Decision Tree

Now we predict the occupancy variable for the test dataset using predict function.

p_test$predict.class1 <- predict(m1, p_test[,-6], type="class") p_test$predict.score1 <- predict(m1, p_test[,-6], type="prob") View(p_test)

Test Data Preview with Actual and Predicted Occupancy

Now evaluate the performance of our model by plotting the ROC curve and building a confusion matrix.

library(ROCR) pred <- prediction(p_test$predict.score1[,2], p_test$Occupancy) perf <- performance(pred, "tpr", "fpr") plot(perf,main = "ROC curve") auc1=as.numeric(performance(pred, "auc")@y.values) library(caret) m1=confusionMatrix(table(p_test$predict.class1,p_test$Occupancy), positive="1",mode="everything")

Here we get an AUC of 82% and a model accuracy of 98.1%.

Next in the list is Random Forest. In the random forest approach, a large number of decision trees are built and every observation is passed through every tree. For classification, a new observation is fed into all the trees and the final output is the majority vote across them. The R package "randomForest" is used to create random forests.

RFmodel = randomForest(Occupancy~Temperature+Humidity+Light+CO2+date1+time, data = p_train1, mtry = 5, nodesize = 10, ntree = 501, importance = TRUE) print(RFmodel) plot(RFmodel)

Error Vs No of trees plot

Here we observe that the error remains roughly constant after about 150 trees, so we can tune the model with ntree = 150.

Also, we can have a look at important variables in the model which are contributing to occupancy detection.

importance(RFmodel)

Variable Importance table

From the above output, we observe that Light is the most important predictor, followed by CO2, Temperature, Time, Humidity, and date, when importance is measured by the mean decrease in accuracy. Now, let's tune the RF model with new control parameters.

set.seed(123) tRF=tuneRF(x=p_train1[,-c(5,6)],y=as.factor(p_train1$Occupancy), mtryStart = 5, ntreeTry = 150, stepFactor = 1.15, improve = 0.0001, trace = TRUE, plot = TRUE, doBest = TRUE, nodesize=10, importance= TRUE)

With this tuned model we again check variable importance, as follows. Note that importance is now measured by the mean decrease in the Gini index, whereas earlier it was the mean decrease in model accuracy.

varImpPlot(tRF,type=2, main = "Important predictors in the analysis")

Variable Importance Plot

Next, we predict occupancy for the test dataset using the predict function and the tuned RF model.

p_test1$predict.class= predict(tRF, p_test1, type= "class") p_test1$predict.score= predict(tRF, p_test1, type= "prob")

We check the performance of this model using the ROC curve and confusion-matrix measures. The AUC turns out to be 99.13% with a very steep curve, as shown below, and the prediction accuracy is 98.12%.

ROC Curve

It seems like this model is doing better than the CART model. Time to check ANN!

Now we build an artificial neural network for the classification. A neural network (or artificial neural network, ANN) has the ability to learn from examples. ANN is an information-processing model inspired by the biological neuron system, composed of a large number of highly interconnected processing elements known as neurons.

Here I am using the 'neuralnet' package in R. When building the ANN for our case, we find that the model does not accept date-class variables, so we omit them. An alternative would be to create a factor variable from the time-of-day information, with levels such as Morning, Afternoon, Evening, and Night, and then create dummy variables from that factor.

Before building the ANN, we need to scale our data because the variables have values in very different ranges. Since ANN is a weight-based algorithm, it may produce biased results if the data is not scaled.

p_train2=p_train2[,-c(6,7,8)] p_train_sc=scale(p_train2) p_train_sc = as.data.frame(p_train_sc) p_train_sc$Occupancy=data1$Occupancy p_test3=p_test2[,-c(6,7,8)] p_test_sc=scale(p_test3) p_test_sc = as.data.frame(p_test_sc) p_test_sc$Occupancy=p_test2$Occupancy

After scaling all the variables (except Occupancy) our data will look like this.

Data Preview After Scaling

Now we are all set for model building.

nn1 = neuralnet(formula = Occupancy~Temperature+Humidity+Light+CO2+HumidityRatio, data = p_train_sc, hidden = 3, err.fct = "sse", linear.output = FALSE, lifesign = "full", lifesign.step = 10, threshold = 0.03, stepmax = 10000) plot(nn1)

We calculate results for the test dataset using the compute function.

compute.output = compute(nn1, p_test_sc[,-6]) p_test_sc$Predict.score <- compute.output$net.result

Models Performance Comparison: We again evaluate this model using the confusion matrix and ROC curve. I have tabulated results obtained from all three models as follows:

| Performance measure | CART on a test dataset | RF on a test dataset | ANN on a test dataset |
|---|---|---|---|
| AUC | 0.8253 | 0.9913057 | 0.996836 |
| Accuracy | 0.981 | 0.9812 | 0.9942 |
| Kappa | 0.9429 | 0.9437 | 0.9825 |
| Sensitivity | 0.9514 | 0.9526 | 0.9951 |
| Specificity | 0.9889 | 0.9889 | 0.9939 |
| Precision | 0.9586 | 0.9587 | 0.9775 |
| Recall | 0.9514 | 0.9526 | 0.9951 |
| F1 | 0.9550 | 0.9556 | 0.9862 |
| Balanced Accuracy | 0.9702 | 0.9708 | 0.9945 |

From the comparison of performance measures, we observe that ANN outperforms the other models, followed by RF and CART. With machine learning algorithms, we can replace occupancy-sensor functionality efficiently and with good accuracy.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.



Machine Learning Unlocks Insights For Stress Detection

Introduction

Stress is a natural response of the body and mind to a demanding or challenging situation. It is the body’s way of reacting to external pressures or internal thoughts and feelings. Stress can be triggered by a variety of factors, such as work-related pressure, financial difficulties, relationship problems, health issues, or major life events. Stress detection insights, driven by data science and machine learning, aims to forecast stress levels in individuals or populations. By analyzing a variety of data sources, such as physiological measurements, behavioral data, and environmental factors, predictive models can identify patterns and risk factors associated with stress.

This proactive approach enables timely intervention and tailored support. Stress prediction holds potential in health care for early detection and personalized intervention as well as in occupational settings to optimize work environments. It can also inform public health initiatives and policy decisions. With the ability to predict stress, these models provide valuable insights for improving well-being and increasing resilience in individuals and communities.

This article was published as a part of the Data Science Blogathon.

Overview of Stress Detection Using Machine Learning

Stress detection using machine learning involves collecting, cleaning, and preprocessing data. Feature engineering techniques are applied to extract meaningful information or create new features that can capture patterns related to stress. This may involve extracting statistical measures, frequency domain analysis, or time-series analysis to capture physiological or behavioral indicators of stress. Relevant features are extracted or engineered to enhance performance.
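As a hedged illustration of this kind of feature engineering, the sketch below derives a few statistical and frequency-domain features from a hypothetical heart-rate signal. The signal, sampling rate, and feature names are assumptions for demonstration only and are not part of the text dataset used later in this article.

```python
import numpy as np
import pandas as pd

# Hypothetical physiological signal: 5 minutes of heart-rate samples at 1 Hz (assumed)
rng = np.random.default_rng(42)
heart_rate = 70 + 5 * np.sin(np.linspace(0, 20, 300)) + rng.normal(0, 2, 300)
fs = 1.0  # sampling frequency in Hz (assumed)

# Statistical (time-domain) features
features = {
    "hr_mean": np.mean(heart_rate),
    "hr_std": np.std(heart_rate),
    "hr_min": np.min(heart_rate),
    "hr_max": np.max(heart_rate),
}

# Frequency-domain feature: dominant frequency of the detrended signal
spectrum = np.abs(np.fft.rfft(heart_rate - heart_rate.mean()))
freqs = np.fft.rfftfreq(len(heart_rate), d=1 / fs)
features["hr_dominant_freq"] = freqs[np.argmax(spectrum)]

print(pd.Series(features))
```

Features built this way can then be fed to any of the classifiers discussed below, alongside or instead of text-derived features.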

Researchers train machine learning models like logistic regression, SVM, decision trees, random forests, or neural networks by utilizing labeled data to classify stress levels. They evaluate the performance of the models using metrics such as accuracy, precision, recall, and F1-score. Integration of the trained model into real-world applications enables real-time stress monitoring. Continuous monitoring, updates, and user feedback are crucial for improving accuracy.

It is crucial to consider ethical issues and privacy concerns when dealing with sensitive personal data related to stress. Proper informed consent, data anonymization, and secure data storage procedures should be followed to protect individuals’ privacy and rights. Ethical considerations, privacy, and data security are important during the entire process. Machine learning-based stress detection enables early intervention, personalized stress management, and improved well-being.

Data Description

The "stress" dataset contains information related to stress levels. Without the specific structure and columns in front of us, I can provide a general overview of what a data description for such a dataset might look like.

The dataset may contain numerical variables that represent quantitative measurements, such as age, blood pressure, heart rate, or stress levels measured on a scale. It may also include categorical variables that represent qualitative characteristics, such as gender, occupation categories, or stress levels classified into different categories (low, medium, high).

# Array import numpy as np # Dataframe import pandas as pd #Visualization import matplotlib.pyplot as plt import seaborn as sns # warnings import warnings warnings.filterwarnings('ignore') #Data Reading stress_c= pd.read_csv('/human-stress-prediction/Stress.csv') # Copy stress=stress_c.copy() # Data stress.head()

The function below lets you quickly assess the data types and find missing or null values. This summary is useful when working with large datasets or performing data cleaning and preprocessing tasks.

# Info stress.info()

Use the code stress.isnull().sum() to check for null values in the “stress” dataset and calculate the sum of null values in each column.

# Checking null values stress.isnull().sum()

To generate statistical information about the "stress" dataset, run the code below; it returns a summary of descriptive statistics for each numerical column.

# Statistical Information stress.describe()

Exploratory Data Analysis (EDA)

Exploratory Data Analysis (EDA) is a crucial step in understanding and analyzing a dataset. It involves visually exploring and summarizing the main characteristics, patterns, and relationships within the data

lst=['subreddit','label'] plt.figure(figsize=(15,12)) for i in range(len(lst)): plt.subplot(1,2,i+1) a=stress[lst[i]].value_counts() lbl=a.index plt.title(lst[i]+'_Distribution') plt.pie(x=a,labels=lbl,autopct="%.1f %%") plt.show()

The Matplotlib and Seaborn libraries create a count plot for the “stress” dataset. It visualizes the count of stress instances across different subreddits, with the stress labels differentiated by different colors.

plt.figure(figsize=(20,12)) plt.title('Subreddit wise stress count') plt.xlabel('Subreddit') sns.countplot(data=stress,x='subreddit',hue='label',palette='gist_heat') plt.show()

Text Preprocessing

Text preprocessing refers to converting raw text data into a cleaner, more structured format that is suitable for analysis or modelling. It typically involves a series of steps to remove noise, normalize text, and extract relevant features. All the libraries related to this text processing are imported below.

# Regular Expression import re # Handling string import string # NLP tool import spacy nlp=spacy.load('en_core_web_sm') from spacy.lang.en.stop_words import STOP_WORDS # Importing Natural Language Tool Kit for NLP operations import nltk nltk.download('stopwords') nltk.download('wordnet') nltk.download('punkt') nltk.download('omw-1.4') from nltk.stem import WordNetLemmatizer from wordcloud import WordCloud, STOPWORDS from nltk.corpus import stopwords from collections import Counter

Some common techniques used in text preprocessing include:

Text Cleaning

Removing special characters: Remove punctuation, symbols, or non-alphanumeric characters that do not contribute to the meaning of the text.

Removing numbers: Remove numerical digits if they are not relevant to the analysis.

Lowercasing: Convert all text to lowercase to ensure consistency in text matching and analysis.

Removing stop words: Remove common words that do not carry much information, such as “a”, “the”, “is”, etc.

Tokenization: Split the text into individual words or tokens so that each unit can be processed separately.

Normalization

Lemmatization: Reduce words to their base or dictionary form (lemmas). For example, converting “running” and “ran” to “run”.

Stemming: Reduce words to their base form by stripping prefixes or suffixes. For example, a stemmer reduces "running" to "run", but may leave irregular forms such as "ran" unchanged (see the short sketch after this list).

Removing diacritics: Remove accents or other diacritical marks from characters.
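To make these normalization steps concrete, here is a small hedged sketch that compares NLTK lemmatization and stemming and strips diacritics with the standard library. The sample words are arbitrary, and this is illustrative rather than part of the preprocessing pipeline built below.

```python
import unicodedata
import nltk
from nltk.stem import WordNetLemmatizer, PorterStemmer

nltk.download('wordnet', quiet=True)
nltk.download('omw-1.4', quiet=True)

lemmatizer = WordNetLemmatizer()
stemmer = PorterStemmer()

for word in ["running", "ran", "studies"]:
    # Lemmatization is dictionary/POS-aware; stemming just strips suffixes
    print(word,
          "-> lemma:", lemmatizer.lemmatize(word, pos='v'),
          "| stem:", stemmer.stem(word))

def strip_diacritics(text):
    """Remove accents/diacritical marks, e.g. 'café' -> 'cafe'."""
    normalized = unicodedata.normalize('NFKD', text)
    return ''.join(ch for ch in normalized if not unicodedata.combining(ch))

print(strip_diacritics("café résumé naïve"))
```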

#defining function for preprocessing def preprocess(text,remove_digits=True): text = re.sub(r'\W+',' ', text) text = re.sub(r'\s+',' ', text) text = re.sub(r"(?<!\w)\d+", "", text) text=text.lower() nopunc=[char for char in text if char not in string.punctuation] nopunc=''.join(nopunc) nopunc=' '.join([word for word in nopunc.split() if word.lower() not in stopwords.words('english')]) return nopunc # Defining a function for lemmatization def lemmatize(words): words=nlp(words) lemmas = [] for word in words: lemmas.append(word.lemma_) return lemmas #converting them into string def listtostring(s): str1=' ' return (str1.join(s)) def clean_text(input): word=preprocess(input) lemmas=lemmatize(word) return listtostring(lemmas) # Creating a feature to store clean texts stress['clean_text']=stress['text'].apply(clean_text) stress.head()

Machine Learning Model Building

Machine learning model building is the process of creating a mathematical representation or model that can learn patterns and make predictions or decisions from data. It involves training a model using a labeled dataset and then using that model to make predictions on new, unseen data.

The next step is selecting or creating relevant features from the available data. Feature engineering aims to extract meaningful information from the raw data that helps the model learn patterns effectively.

# Vectorization from sklearn.feature_extraction.text import TfidfVectorizer # Model Building from sklearn.model_selection import GridSearchCV,StratifiedKFold, KFold,train_test_split,cross_val_score,cross_val_predict from sklearn.linear_model import LogisticRegression,SGDClassifier from sklearn import preprocessing from sklearn.naive_bayes import MultinomialNB from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import StackingClassifier,RandomForestClassifier, AdaBoostClassifier from sklearn.neighbors import KNeighborsClassifier #Model Evaluation from sklearn.metrics import confusion_matrix,classification_report, accuracy_score,f1_score,precision_score from sklearn.pipeline import Pipeline # Time from time import time # Defining target & feature for ML model building x=stress['clean_text'] y=stress['label'] x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.3,random_state=1)

Choosing an appropriate machine learning algorithm or model architecture based on the nature of the problem and the characteristics of the data. Different models, such as decision trees, support vector machines, or neural networks, have different strengths and weaknesses.

Training the selected model using the labeled data. This step involves feeding the training data to the model and allowing it to learn the patterns and relationships between the features and the target variable.

# Self-defining function to convert the data into vector form by tf idf #vectorizer and classify and create model by Logistic regression def model_lr_tf(x_train, x_test, y_train, y_test): global acc_lr_tf,f1_lr_tf # Text to vector transformation vector = TfidfVectorizer() x_train = vector.fit_transform(x_train) x_test = vector.transform(x_test) ovr = LogisticRegression() #fitting training data into the model & predicting t0 = time() ovr.fit(x_train, y_train) y_pred = ovr.predict(x_test) # Model Evaluation conf=confusion_matrix(y_test,y_pred) acc_lr_tf=accuracy_score(y_test,y_pred) f1_lr_tf=f1_score(y_test,y_pred,average='weighted') print('Time :',time()-t0) print('Accuracy: ',acc_lr_tf) print(10*'===========') print('Confusion Matrix: n',conf) print(10*'===========') print('Classification Report: n',classification_report(y_test,y_pred)) return y_test,y_pred,acc_lr_tf # Self defining function to convert the data into vector form by tf idf #vectorizer and classify and create model by MultinomialNB def model_nb_tf(x_train, x_test, y_train, y_test): global acc_nb_tf,f1_nb_tf # Text to vector transformation vector = TfidfVectorizer() x_train = vector.fit_transform(x_train) x_test = vector.transform(x_test) ovr = MultinomialNB() #fitting training data into the model & predicting t0 = time() ovr.fit(x_train, y_train) y_pred = ovr.predict(x_test) # Model Evaluation conf=confusion_matrix(y_test,y_pred) acc_nb_tf=accuracy_score(y_test,y_pred) f1_nb_tf=f1_score(y_test,y_pred,average='weighted') print('Time : ',time()-t0) print('Accuracy: ',acc_nb_tf) print(10*'===========') print('Confusion Matrix: n',conf) print(10*'===========') print('Classification Report: n',classification_report(y_test,y_pred)) return y_test,y_pred,acc_nb_tf # Self defining function to convert the data into vector form by tf idf # vectorizer and classify and create model by Decision Tree def model_dt_tf(x_train, x_test, y_train, y_test): global acc_dt_tf,f1_dt_tf # Text to vector transformation vector = TfidfVectorizer() x_train = vector.fit_transform(x_train) x_test = vector.transform(x_test) ovr = DecisionTreeClassifier(random_state=1) #fitting training data into the model & predicting t0 = time() ovr.fit(x_train, y_train) y_pred = ovr.predict(x_test) # Model Evaluation conf=confusion_matrix(y_test,y_pred) acc_dt_tf=accuracy_score(y_test,y_pred) f1_dt_tf=f1_score(y_test,y_pred,average='weighted') print('Time : ',time()-t0) print('Accuracy: ',acc_dt_tf) print(10*'===========') print('Confusion Matrix: n',conf) print(10*'===========') print('Classification Report: n',classification_report(y_test,y_pred)) return y_test,y_pred,acc_dt_tf # Self defining function to convert the data into vector form by tf idf #vectorizer and classify and create model by KNN def model_knn_tf(x_train, x_test, y_train, y_test): global acc_knn_tf,f1_knn_tf # Text to vector transformation vector = TfidfVectorizer() x_train = vector.fit_transform(x_train) x_test = vector.transform(x_test) ovr = KNeighborsClassifier() #fitting training data into the model & predicting t0 = time() ovr.fit(x_train, y_train) y_pred = ovr.predict(x_test) # Model Evaluation conf=confusion_matrix(y_test,y_pred) acc_knn_tf=accuracy_score(y_test,y_pred) f1_knn_tf=f1_score(y_test,y_pred,average='weighted') print('Time : ',time()-t0) print('Accuracy: ',acc_knn_tf) print(10*'===========') print('Confusion Matrix: n',conf) print(10*'===========') print('Classification Report: n',classification_report(y_test,y_pred)) # Self defining function to convert the data into vector form by tf 
idf #vectorizer and classify and create model by Random Forest def model_rf_tf(x_train, x_test, y_train, y_test): global acc_rf_tf,f1_rf_tf # Text to vector transformation vector = TfidfVectorizer() x_train = vector.fit_transform(x_train) x_test = vector.transform(x_test) ovr = RandomForestClassifier(random_state=1) #fitting training data into the model & predicting t0 = time() ovr.fit(x_train, y_train) y_pred = ovr.predict(x_test) # Model Evaluation conf=confusion_matrix(y_test,y_pred) acc_rf_tf=accuracy_score(y_test,y_pred) f1_rf_tf=f1_score(y_test,y_pred,average='weighted') print('Time : ',time()-t0) print('Accuracy: ',acc_rf_tf) print(10*'===========') print('Confusion Matrix: n',conf) print(10*'===========') print('Classification Report: n',classification_report(y_test,y_pred)) # Self defining function to convert the data into vector form by tf idf # vectorizer and classify and create model by Adaptive Boosting def model_ab_tf(x_train, x_test, y_train, y_test): global acc_ab_tf,f1_ab_tf # Text to vector transformation vector = TfidfVectorizer() x_train = vector.fit_transform(x_train) x_test = vector.transform(x_test) ovr = AdaBoostClassifier(random_state=1) #fitting training data into the model & predicting t0 = time() ovr.fit(x_train, y_train) y_pred = ovr.predict(x_test) # Model Evaluation conf=confusion_matrix(y_test,y_pred) acc_ab_tf=accuracy_score(y_test,y_pred) f1_ab_tf=f1_score(y_test,y_pred,average='weighted') print('Time : ',time()-t0) print('Accuracy: ',acc_ab_tf) print(10*'===========') print('Confusion Matrix: n',conf) print(10*'===========') print('Classification Report: n',classification_report(y_test,y_pred)) Model Evaluation

Model evaluation is a crucial step in machine learning to assess the performance and effectiveness of a trained model. It involves measuring how well each model generalizes to unseen data and whether it meets the desired objectives. We evaluate the trained models on the testing data, calculating metrics such as accuracy, precision, recall, and F1-score to assess their effectiveness in stress detection. Model evaluation provides insight into each model's strengths, weaknesses, and suitability for the intended task.

# Evaluating Models print('********************Logistic Regression*********************') print('n') model_lr_tf(x_train, x_test, y_train, y_test) print('n') print(30*'==========') print('n') print('********************Multinomial NB*********************') print('n') model_nb_tf(x_train, x_test, y_train, y_test) print('n') print(30*'==========') print('n') print('********************Decision Tree*********************') print('n') model_dt_tf(x_train, x_test, y_train, y_test) print('n') print(30*'==========') print('n') print('********************KNN*********************') print('n') model_knn_tf(x_train, x_test, y_train, y_test) print('n') print(30*'==========') print('n') print('********************Random Forest Bagging*********************') print('n') model_rf_tf(x_train, x_test, y_train, y_test) print('n') print(30*'==========') print('n') print('********************Adaptive Boosting*********************') print('n') model_ab_tf(x_train, x_test, y_train, y_test) print('n') print(30*'==========') print('n') Model Performance Comparison

This is a crucial step in machine learning to identify the best-performing model for a given task. When comparing models, it is important to have a clear objective in mind. Whether it is maximizing accuracy, optimizing for speed, or prioritizing interpretability, the evaluation metrics and techniques should align with the specific objective.

Consistency is key in model performance comparison. Using consistent evaluation metrics across all models ensures a fair and meaningful comparison. It is also important to split the data into training, validation, and test sets consistently across all models. By ensuring that the models evaluate on the same data subsets, researchers enable a fair comparison of their performance.

Considering these above factors, researchers can conduct a comprehensive and fair model performance comparison, which will lead to informed decisions regarding model selection for the specific problem at hand.

# Creating tabular format for better comparison tbl=pd.DataFrame() tbl['Model']=pd.Series(['Logistic Regression','Multinomial NB', 'Decision Tree','KNN','Random Forest','Adaptive Boosting']) tbl['Accuracy']=pd.Series([acc_lr_tf,acc_nb_tf,acc_dt_tf,acc_knn_tf, acc_rf_tf,acc_ab_tf]) tbl['F1_Score']=pd.Series([f1_lr_tf,f1_nb_tf,f1_dt_tf,f1_knn_tf, f1_rf_tf,f1_ab_tf]) tbl.set_index('Model') # Best model on the basis of F1 Score tbl.sort_values('F1_Score',ascending=False)

Cross Validation to Avoid Overfitting

Cross-validation is indeed a valuable technique to help avoid overfitting when training machine learning models. It provides a robust evaluation of the model’s performance by using multiple subsets of the data for training and testing. It helps assess the model’s generalization capability by estimating its performance on unseen data.

# Using cross validation method to avoid overfitting import statistics as st vector = TfidfVectorizer() x_train_v = vector.fit_transform(x_train) x_test_v = vector.transform(x_test) # Model building lr =LogisticRegression() mnb=MultinomialNB() dct=DecisionTreeClassifier(random_state=1) knn=KNeighborsClassifier() rf=RandomForestClassifier(random_state=1) ab=AdaBoostClassifier(random_state=1) m =[lr,mnb,dct,knn,rf,ab] model_name=['Logistic R','MultiNB','DecTRee','KNN','R forest','Ada Boost'] results, mean_results, p, f1_test=list(),list(),list(),list() #Model fitting,cross-validating and evaluating performance def algor(model): print('n',i) pipe=Pipeline([('model',model)]) pipe.fit(x_train_v,y_train) cv=StratifiedKFold(n_splits=5) n_scores=cross_val_score(pipe,x_train_v,y_train,scoring='f1_weighted', cv=cv,n_jobs=-1,error_score='raise') results.append(n_scores) mean_results.append(st.mean(n_scores)) print('f1-Score(train): mean= (%.3f), min=(%.3f)) ,max= (%.3f), stdev= (%.3f)'%(st.mean(n_scores), min(n_scores), max(n_scores),np.std(n_scores))) y_pred=cross_val_predict(model,x_train_v,y_train,cv=cv) p.append(y_pred) f1=f1_score(y_train,y_pred, average = 'weighted') f1_test.append(f1) print('f1-Score(test): %.4f'%(f1)) for i in m: algor(i) # Model comparison By Visualizing fig=plt.subplots(figsize=(20,15)) plt.title('MODEL EVALUATION BY CROSS VALIDATION METHOD') plt.xlabel('MODELS') plt.ylabel('F1 Score') plt.boxplot(results,labels=model_name,showmeans=True) plt.show() As F1 scores of the models are coming quite similar in both methods. So now we are applying the Leave One Out method to build the best-performed model. x=stress['clean_text'] y=stress['label'] x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.3,random_state=1) vector = TfidfVectorizer() x_train = vector.fit_transform(x_train) x_test = vector.transform(x_test) model_lr_tf=LogisticRegression() model_lr_tf.fit(x_train,y_train) y_pred=model_lr_tf.predict(x_test) # Model Evaluation conf=confusion_matrix(y_test,y_pred) acc_lr=accuracy_score(y_test,y_pred) f1_lr=f1_score(y_test,y_pred,average='weighted') print('Accuracy: ',acc_lr) print('F1 Score: ',f1_lr) print(10*'===========') print('Confusion Matrix: n',conf) print(10*'===========') print('Classification Report: n',classification_report(y_test,y_pred)) Word Clouds of Stressed & Non-stressed Words

The dataset contains text messages or documents that are labeled as either stressed or non-stressed. The code loops through the two labels to create a word cloud for each label using the WordCloud library and display the word cloud visualization. Each word cloud represents the most commonly used words in the respective category, with larger words indicating higher frequency. The choice of the color map (‘winter’, ‘autumn’, ‘magma’, ‘Viridis’, ‘plasma’) determines the color scheme of the word clouds. The resulting visualizations provide a concise representation of the most frequent words associated with stressed and non-stressed messages or documents.

Here are word clouds representing stressed and non-stressed words commonly associated with stress detection:

for label, cmap in zip([0,1], ['winter', 'autumn', 'magma', 'viridis', 'plasma']): text = stress.query('label == @label')['text'].str.cat(sep=' ') plt.figure(figsize=(12, 9)) wc = WordCloud(width=1000, height=600, background_color="#f8f8f8", colormap=cmap) wc.generate_from_text(text) plt.imshow(wc) plt.axis("off") plt.title(f"Words Commonly Used in ${label}$ Messages", size=20) plt.show()

Prediction

The new input data is preprocessed and features are extracted to match the model’s expectations. The predict function is then used to generate predictions based on the extracted features. Finally, the predictions are printed or utilized as required for further analysis or decision-making.

data=["""I don't have the ability to cope with it anymore. I'm trying, but a lot of things are triggering me, and I'm shutting down at work, just finding the place I feel safest, and staying there for an hour or two until I feel like I can do something again. I'm tired of watching my back, tired of traveling to places I don't feel safe, tired of reliving that moment, tired of being triggered, tired of the stress, tired of anxiety and knots in my stomach, tired of irrational thought when triggered, tired of irrational paranoia. I'm exhausted and need a break, but know it won't be enough until I journey the long road through therapy. I'm not suicidal at all, just wishing this pain and misery would end, to have my life back again."""] data=vector.transform(data) model_lr_tf.predict(data) data=["""In case this is the first time you're reading this post... We are looking for people who are willing to complete some online questionnaires about employment and well-being which we hope will help us to improve services for assisting people with mental health difficulties to obtain and retain employment. We are developing an employment questionnaire for people with personality disorders; however we are looking for people from all backgrounds to complete it. That means you do not need to have a diagnosis of personality disorder – you just need to have an interest in completing the online questionnaires. The questionnaires will only take about 10 minutes to complete online. For your participation, we’ll donate £1 on your behalf to a mental health charity (Young Minds: Child & Adolescent Mental Health, Mental Health Foundation, or Rethink)"""] data=vector.transform(data) model_lr_tf.predict(data) Conclusion

The application of machine learning techniques to predicting stress levels provides personalized insights for mental well-being. By analyzing a variety of factors, such as numerical measurements (e.g., blood pressure, heart rate) and categorical characteristics (e.g., gender, occupation), machine learning models can learn patterns and make predictions about an individual's stress level. With the ability to accurately detect and monitor stress levels, machine learning contributes to the development of proactive strategies and interventions to manage and enhance mental well-being.

We explored the insights from using machine learning in stress prediction and its potential to revolutionize our approach to addressing this critical issue.

Accurate Predictions: Machine learning algorithms analyze vast amounts of historical data to accurately predict stress occurrences, providing valuable insights and forecasts.

Early Detection: Machine learning can detect warning signs early on, allowing for proactive measures and timely support in vulnerable areas.

Enhanced Planning and Resource Allocation: Machine learning enables forecasting of stress hotspots and intensities, optimizing the allocation of resources such as emergency services and medical facilities.

Improved Public Safety: Timely alerts and warnings issued through machine learning predictions empower individuals to take necessary precautions, reducing the impact of stress and enhancing public safety.

In conclusion, this stress prediction analysis provides valuable insights into stress levels and their prediction using machine learning. Use the findings to develop tools and interventions for stress management, promoting overall well-being and improved quality of life.


Bank Customer Churn Prediction Using Machine Learning

This article was published as a part of the Data Science Blogathon.

Introduction

Customer Churn prediction means knowing which customers are likely to leave or unsubscribe from your service. For many companies, this is an important prediction. This is because acquiring new customers often costs more than retaining existing ones. Once you’ve identified customers at risk of churn, you need to know exactly what marketing efforts you should make with each customer to maximize their likelihood of staying.

Customers have different behaviors and preferences, and reasons for cancelling their subscriptions. Therefore, it is important to actively communicate with each of them to keep them on your customer list. You need to know which marketing activities are most effective for individual customers and when they are most effective.

Impact of customer churn on businesses

A company with a high churn rate loses many subscribers, resulting in lower growth rates and a greater impact on sales and profits. Companies with low churn rates, by contrast, retain more of their customers and the recurring revenue that comes with them.

Why is Analyzing Customer Churn Prediction Important?

Customer churn is important because it costs more to acquire new customers than to sell to existing customers. This is the metric that determines the success or failure of a business. Successful customer retention increases the customer’s average lifetime value, making all future sales more valuable and improving unit margins.

The way to maximize a company’s resources is often by increasing revenue from recurring subscriptions and trusted repeat business rather than investing in acquiring new customers. Retaining loyal customers for years makes it much easier to grow and weather financial hardship than spending money to acquire new customers to replace those who have left.

Benefits of Analyzing Customer Churn Prediction

Increase profits

Improve the customer experience

One of the worst ways to lose a customer is through an easy-to-avoid mistake, such as shipping the wrong item. By understanding why customers churn, you can better understand their priorities, identify your weaknesses, and improve the overall customer experience.

Customer experience, also known as “CX”, is the customer’s perception or opinion of their interactions with your business. The perception of your brand is shaped throughout the buyer journey, from the first interaction to after-sales support, and has a lasting impact on your business, including your bottom line.

Optimize your products and services

Customer retention

The opposite of customer churn is customer retention. A company can retain customers and continue to generate revenue from them. High customer loyalty enables companies to increase the profitability of their existing customers and maximize their lifetime value (LTV).

If you sell a service for $1,000 per month and keep a customer for another 3 months, you will earn an additional $3,000 per customer without spending anything on customer acquisition. The scope and amounts vary by business, but the concept of "repeat business = profitable business" is universal.

How does Customer Churn Prediction Work?

We first perform some exploratory data analysis on the dataset, then fit several machine learning classification algorithms and choose the one that performs best on the Bank Customer Churn dataset.

Algorithms for Churn Prediction Models

XGBoost, short for Extreme Gradient Boosting, is a scalable machine learning library with Distributed Gradient Boosted Decision Trees (GBDT). It provides Parallel Tree Boosting and is the leading machine learning library for regression, classification and ranking problems. To understand XGBoost, it’s important first to understand the machine learning concepts and algorithms that XGBoost is built on: supervised machine learning, decision trees, ensemble learning, and gradient boosting. Supervised machine learning uses an algorithm to train a model to find patterns in a dataset containing labels and features and then uses the trained model to predict the labels of the features in a new dataset.

Decision trees are models that predict labels by evaluating a tree of if-then-else true/false questions and estimating the minimum number of questions needed to assess the likelihood of a correct decision. Decision trees can be used for classification to predict categories and for regression to predict continuous numbers. The following simple example uses a decision tree to estimate a house's price (the label) based on its size and number of bedrooms (the features).
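The figure for this example is not reproduced here; as a stand-in, the hedged sketch below fits a small scikit-learn regression tree on made-up house data. The sizes, bedroom counts, and prices are invented toy values, not figures from the source.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Toy data: [size_sqft, bedrooms] -> price (all values are made up)
X = np.array([[ 750, 1], [ 900, 2], [1200, 2], [1500, 3],
              [1800, 3], [2200, 4], [2600, 4], [3000, 5]])
y = np.array([150_000, 180_000, 220_000, 260_000,
              300_000, 360_000, 420_000, 500_000])

# A shallow tree keeps the if-then-else rules easy to read
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["size_sqft", "bedrooms"]))

# Predict the price of an unseen 2,000 sqft, 3-bedroom house
print(tree.predict([[2000, 3]]))
```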

Gradient Boosted Decision Trees (GBDT) is a random forest-like decision tree ensemble learning algorithm for classification and regression. Ensemble learning algorithms combine multiple machine learning algorithms to get a better model. Both Random Forest and GBDT create models that consist of multiple decision trees. The difference is in how the trees are constructed and combined.

Source: researchgate.net

Decision Tree

Decision trees are a nonparametric supervised learning method used for classification and regression. The goal is to build a model that predicts the value of a target variable by learning simple decision rules derived from the properties of the data. A tree can be viewed as a piecewise constant approximation.

In the following example, a decision tree learns from the data to approximate a sine wave using a series of if-then-else decision rules. The deeper the tree, the more complex the decision rules and the closer the fit.

Easy to understand and easy to interpret. You can visualize trees.

Little or no data preparation is required. Other techniques often require normalizing the data, creating dummy variables, and removing empty values. However, please note that this module does not support missing values.

The cost of using a tree (predicting data) is the logarithm of the number of data points used to train the tree.

It can handle both numeric and categorical data; however, scikit-learn's implementation does not currently support categorical variables. Other techniques tend to specialize in analyzing datasets containing only one variable type. Decision trees can also handle multi-output problems.

Decision trees use a white-box model: if a given situation is observable in the model, the explanation for the prediction can easily be expressed in Boolean logic. In contrast, results from black-box models (such as artificial neural networks) can be more difficult to interpret.

Possibility to validate the model with statistical tests. This can explain the reliability of the model.

It works well even when the assumptions are somewhat violated by the true model from which the data were generated.

Decision tree learners can create overly complex trees that fail to generalize the data well. This is called overfitting. Mechanisms such as pruning, setting a minimum number of samples required at a leaf node, or setting a maximum tree depth are required to avoid this problem.
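As a hedged illustration of these controls (not the exact setup used later in this article), the sketch below compares an unconstrained scikit-learn tree with one limited by max_depth, min_samples_leaf, and cost-complexity pruning (ccp_alpha). The synthetic dataset and parameter values are arbitrary choices for demonstration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fully grown tree: typically near-perfect on training data, weaker on test data
full = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# Constrained tree: depth limit, minimum leaf size, and cost-complexity pruning
pruned = DecisionTreeClassifier(max_depth=5, min_samples_leaf=20,
                                ccp_alpha=0.001, random_state=0).fit(X_tr, y_tr)

for name, model in [("full", full), ("pruned", pruned)]:
    print(name,
          "train acc:", round(model.score(X_tr, y_tr), 3),
          "test acc:", round(model.score(X_te, y_te), 3))
```

Comparing train and test accuracy for the two trees makes the overfitting gap, and the effect of the constraints, directly visible.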

Decision trees can be unstable. This is because small deviations in the data can produce completely different trees. This problem is mitigated by using decision trees within the ensemble. The figure above shows that the decision tree prediction is neither smooth nor continuous but a piecewise constant approximation. Therefore, they are bad at extrapolation.

The problem of learning an optimal decision tree is known to be NP-complete under several aspects of optimality, even for simple concepts. Practical decision tree learning algorithms are therefore based on heuristics, such as the greedy algorithm, where a locally optimal decision is made at each node. Such algorithms cannot guarantee a globally optimal decision tree. This can be mitigated by training multiple trees in an ensemble learner, where features and samples are randomly sampled.

Some concepts, such as XOR, parity, and multiplexer problems, are difficult to master because they cannot be easily represented in decision trees.

Decision tree learners create skewed trees when some classes are dominant. Therefore, it is recommended to balance the data set before fitting the decision tree.

Random Forest

Random forest is a machine learning technique to solve regression and classification problems. It uses ensemble learning, a technique that combines many classifiers to provide solutions to complex problems.

A random forest algorithm consists of many decision trees. The "forest" created by the random forest algorithm is trained by bagging, or bootstrap aggregation, an ensemble meta-algorithm that improves the accuracy of machine learning algorithms. The random forest determines its outcome from the predictions of the individual decision trees, by averaging or majority-voting their outputs. Increasing the number of trees improves the accuracy of the results.

Random forest addresses several limitations of decision tree algorithms: it reduces overfitting of the dataset, increases accuracy, and generates predictions without requiring extensive configuration.

Source: trivusi.web.id

Support Vector Machines

Source: researchgate.net

Effective in high-dimensional space.

It works even if the number of dimensions exceeds the number of samples.

It is also memory efficient because it uses a subset of the training points in the decision function (called support vectors).

Versatility: You can specify different kernel functions for the decision function. A generic kernel is provided, but it is possible to specify a custom kernel.

When the number of features is much greater than the number of samples, avoiding overfitting requires care in choosing the kernel function, and the regularization term becomes crucial.

SVM does not provide direct probability estimates. These are computed using an expensive 5-fold cross-validation.
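For reference, the hedged sketch below shows how probability estimates are typically requested from scikit-learn's SVC: with probability=True the library fits an internal cross-validated calibration step, which is why it is noticeably slower than a plain fit. The synthetic data and parameters are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

# probability=True enables predict_proba via internal (costly) calibration
clf = SVC(kernel='rbf', probability=True, random_state=1).fit(X_tr, y_tr)

print(clf.predict(X_te[:5]))        # hard class labels
print(clf.predict_proba(X_te[:5]))  # calibrated class probabilities
```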

Coding to Predict Bank Customer Churn Prediction

Now we have to import some libraries :

After that, we have to read the dataset using pandas
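The exact import and loading code is not reproduced above, so here is a hedged reconstruction based on the libraries and column names used later in this article. The file name below is an assumption; replace it with the path to your own copy of the bank churn dataset.

```python
# Reconstructed setup (illustrative sketch, not the author's exact code)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.read_csv("Bank Customer Churn Prediction.csv")  # assumed file name
df.head()
```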



Exploratory Data Analysis

The first thing we have to do in exploratory data analysis is check whether there are null values in the dataset.

df.isnull().head() df.isnull().sum() #Checking Data types df.dtypes #Counting 1 and 0 Value in Churn column color_wheel = {1: "#0392cf", 2: "#7bc043"} colors = df["churn"].map(lambda x: color_wheel.get(x + 1)) print(df.churn.value_counts()) p=df.churn.value_counts().plot(kind="bar") #Change value in country column df['country'] = df['country'].replace(['Germany'],'0') df['country'] = df['country'].replace(['France'],'1') df['country'] = df['country'].replace(['Spain'],'2') #Change value in gender column df['gender'] = df['gender'].replace(['Female'],'0') df['gender'] = df['gender'].replace(['Male'],'1') df.head() #convert object data types column to integer df['country'] = pd.to_numeric(df['country']) df['gender'] = pd.to_numeric(df['gender']) df.dtypes #Remove customer_id column df2 = df.drop('customer_id', axis=1) df2.head() sns.heatmap(df2.corr(), fmt='.2g')

Build Machine Learning Model

X = df2.drop('churn', axis=1) y = df2['churn'] #test size 20% and train size 80% from sklearn.model_selection import train_test_split, cross_val_score, cross_val_predict from sklearn.metrics import accuracy_score X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.2,random_state=7)

Decision Tree

from sklearn.tree import DecisionTreeClassifier dtree = DecisionTreeClassifier() dtree.fit(X_train, y_train) y_pred = dtree.predict(X_test) print("Accuracy Score :", accuracy_score(y_test, y_pred)*100, "%")

Random Forest

from sklearn.ensemble import RandomForestClassifier rfc = RandomForestClassifier() rfc.fit(X_train, y_train) y_pred = rfc.predict(X_test) print("Accuracy Score :", accuracy_score(y_test, y_pred)*100, "%")

Support Vector Machine

from sklearn import svm svm = svm.SVC() svm.fit(X_train, y_train) y_pred = svm.predict(X_test) print("Accuracy Score :", accuracy_score(y_test, y_pred)*100, "%")

XGBoost

from xgboost import XGBClassifier xgb_model = XGBClassifier() xgb_model.fit(X_train, y_train) y_pred = xgb_model.predict(X_test) print("Accuracy Score :", accuracy_score(y_test, y_pred)*100, "%")

Visualize Random Forest and XGBoost Algorithm because Random Forest and XGBoost Algorithm have the Best Accuracy

#importing classification report and confusion matrix from sklearn from sklearn.metrics import classification_report, confusion_matrix

Random Forest

y_pred = rfc.predict(X_test) print("Classification report - \n", classification_report(y_test,y_pred)) cm = confusion_matrix(y_test, y_pred) plt.figure(figsize=(5,5)) sns.heatmap(data=cm,linewidths=.5, annot=True,square = True, cmap = 'Blues') plt.ylabel('Actual label') plt.xlabel('Predicted label') all_sample_title = 'Accuracy Score: {0}'.format(rfc.score(X_test, y_test)) plt.title(all_sample_title, size = 15) from sklearn.metrics import roc_curve, roc_auc_score y_pred_proba = rfc.predict_proba(X_test)[:][:,1] df_actual_predicted = pd.concat([pd.DataFrame(np.array(y_test), columns=['y_actual']), pd.DataFrame(y_pred_proba, columns=['y_pred_proba'])], axis=1) df_actual_predicted.index = y_test.index fpr, tpr, tr = roc_curve(df_actual_predicted['y_actual'], df_actual_predicted['y_pred_proba']) auc = roc_auc_score(df_actual_predicted['y_actual'], df_actual_predicted['y_pred_proba']) plt.plot(fpr, tpr, label='AUC = %0.4f' %auc) plt.plot(fpr, fpr, linestyle = '--', color='k') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('ROC Curve', size = 15) plt.legend()

XGBoost

y_pred = xgb_model.predict(X_test) print("Classification report - \n", classification_report(y_test,y_pred)) cm = confusion_matrix(y_test, y_pred) plt.figure(figsize=(5,5)) sns.heatmap(data=cm,linewidths=.5, annot=True,square = True, cmap = 'Blues') plt.ylabel('Actual label') plt.xlabel('Predicted label') all_sample_title = 'Accuracy Score: {0}'.format(xgb_model.score(X_test, y_test)) plt.title(all_sample_title, size = 15) from sklearn.metrics import roc_curve, roc_auc_score y_pred_proba = xgb_model.predict_proba(X_test)[:][:,1] df_actual_predicted = pd.concat([pd.DataFrame(np.array(y_test), columns=['y_actual']), pd.DataFrame(y_pred_proba, columns=['y_pred_proba'])], axis=1) df_actual_predicted.index = y_test.index fpr, tpr, tr = roc_curve(df_actual_predicted['y_actual'], df_actual_predicted['y_pred_proba']) auc = roc_auc_score(df_actual_predicted['y_actual'], df_actual_predicted['y_pred_proba']) plt.plot(fpr, tpr, label='AUC = %0.4f' %auc) plt.plot(fpr, fpr, linestyle = '--', color='k') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('ROC Curve', size = 15) plt.legend()

Conclusion

The churn variable has imbalanced data, so possible ways to handle the imbalance include the following (a hedged resampling sketch follows the list):

Resample Training set

Use K-fold Cross-Validation in the Right Way

Ensemble Different Resampled Datasets
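As a hedged sketch of the first option, the code below upsamples the minority class of the training set with sklearn.utils.resample. It assumes the X_train/y_train split created in the model-building code above, and the upsampling strategy shown is one illustrative choice among several (downsampling or SMOTE-style synthesis are common alternatives).

```python
import pandas as pd
from sklearn.utils import resample

# Rebuild a training frame so we can resample whole rows (X_train/y_train from the split above)
train = pd.concat([X_train, y_train], axis=1)
majority = train[train['churn'] == 0]
minority = train[train['churn'] == 1]

# Upsample the minority class to the size of the majority class
minority_upsampled = resample(minority, replace=True,
                              n_samples=len(majority), random_state=7)
train_balanced = pd.concat([majority, minority_upsampled])

print(train_balanced['churn'].value_counts())  # classes are now balanced
```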

We have to change the values in the Country and Gender columns so the machine learning models can read and predict from the dataset; after changing the values, we also have to convert the data types of the Country and Gender columns from string to integer, because the XGBoost model cannot read string data types even when the values in the column are numbers.

Lastly, Random Forest and XGBoost give the best accuracy scores (86.85% and 86.45%). They also have the best AUC scores, at 0.8731 and 0.8600.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.


How Deepmind Strengthens Ecological Research Using Machine Learning?

In the past few years, machine learning has highlighted the strength of information technology in discovering the fundamentals of life and the environment. With the volume of data generated every day, it has become necessary to acknowledge smart analysis solutions to know more about the world we live in. In particular, machine learning aims to develop classifying languages simple enough to be understood easily by the human. Moreover, to get into layers of detailed study of ecology, Google’s DeepMind has collaborated with ecologists and conservationists to develop machine learning methods to help study the behavioral dynamics of an entire African animal community in the Serengeti National Park and Grumeti Reserve in Tanzania. According to a DeepMind’s blog, the Serengeti-Mara ecosystem is globally unparalleled in its biodiversity, hosting an estimated 70 large mammal species and 500 bird species, thanks in part to its unique geology and varied habitat types. Around 10 years ago, the Serengeti Lion Research program installed hundreds of motion-sensitive cameras within the core of the protected area which is triggered by passing wildlife, capturing animal images frequently, across vast spatial scales, allowing researchers to study animal behavior, distribution, and demography with great spatial and temporal resolution. This has allowed the team to collect and store millions of photos. To date, volunteers from across the world have helped to identify and count the species in the photos by hand using the Zooniverse web-based platform, which hosts many similar projects for citizen-scientists. This comprehensive study has resulted in a rich dataset, Snapshot Serengeti, featuring labels and counts for around 50 different species. Moreover, to help researchers unlock this data with greater efficiency, DeepMind has used the Snapshot Serengeti dataset to train machine learning models to automatically detect, identify, and count animals. DeepMind says, “Camera trap data can be hard to work with–animals may appear out of focus, and can be at many different distances and positions with respect to the camera. With expert input from leading ecologist and conservationist Dr. Meredith Palmer, our project quickly took shape, and we now have a model that can perform on par with, or better than, human annotators for most of the species in the region.” Most importantly, this method shortens the data processing pipeline by up to 9 months, which has immense potential to help researchers in the field. In a more obvious manner the field work is quite challenging, and it is fraught with unexpected hazards such as failing power lines and limited or no internet access. DeepMind is preparing the software for deployment in the field and looking at ways to safely run its pre-trained model with modest hardware requirements and little Internet access. The company has worked closely with its collaborators in the field to be sure that its technology is used responsibly. Once in place, researchers in the Serengeti will be able to make direct use of this tool, helping provide them with up-to-date species information to better support their conservation efforts.

The DeepMind Science Team works to leverage AI to tackle key scientific challenges that impact the world.
The company has developed a robust model for detecting and analyzing animal populations from in-field data and has helped to consolidate data to enable the growing machine learning community in Africa to build AI systems for conservation which, it hopes, will scale to other parks. DeepMind says, “We’ll next be validating our models by deploying them in the field and tracking their progress. Our hope is to contribute towards making AI research more inclusive–both in terms of the kinds of domains we apply it to, and the people developing it. Hence, participating in meetings like Indaba is key for helping build a global team of AI practitioners who can deploy machine learning for diverse projects.”

Outliers Detection Using IQR, Z-Score, LOF, and DBSCAN

This article was published as a part of the Data Science Blogathon.

Introduction

Which outlier detection technique should you apply to your data? Which methods will work well if the data density is not the same throughout the dispersion? How do you identify outliers in the first place?

This article will help you get many such questions answered, along with practical applications, whether you are cleaning data before conducting EDA, passing data to a machine learning model, or running a statistical test.

What are Inliers and Outliers?

Outliers are values that seem excessively different from most of the rest of the values in the given dataset. Outliers could normally exist due to new inventions (true outliers), the development of new patterns/phenomena, experimental errors, rarely occurring incidents, anomalies, incorrectly fed data due to typographical mistakes, failure of data recording systems/components, etc. However, not all outliers are bad; some reveal new information. Inliers are all the data points that are part of the distribution except the outliers.

Outlier Identification

Point/Global Outliers: A single data point that lies far away from the rest of the distribution is called a point (or global) outlier.

Collective Outliers: When a group of data points deviates from the distribution, it is called a collective outlier. It is completely subjective to interpret their relevance according to the specific domain. Collective outliers often indicate the formation of new phenomena or developments.

Contextual Outliers: These are values that are outliers only within a specific context (e.g., the usual temperature in Leh during winter going near 9°C would be the rarest phenomenon in Ahmedabad, Gujarat; punctuation symbols when attempting text analysis; background noise signals while doing speech recognition, etc.).

Fig: 1 (Point/Global or collective Outliers)

For ease of understanding, I have considered a real case study on steel scrap sales over three years.

Real Case Example of Outliers

Consider a real-case scenario: the Steel Sheet Scrap Rate (Rs/Kg) for scrap sold across India over a three-year period has been captured to understand the statistics and predict the price in the future. Before doing that, as part of the data-cleaning process, we want to understand the presence of outliers and their weightage accordingly.

Importing important libraries to load the dataset and conduct further analysis:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats as st
%matplotlib inline

import warnings
warnings.filterwarnings('ignore')

# Load the scrap-sales data, skipping the sheet's two header rows
df = pd.read_excel("scrap_data.xlsx", skiprows=2)
df.head()
print('shape of data:', df.shape)

To understand the trend, I have plotted line plots of the two main independent variables (‘Scrap Rate’ and ‘Scrap Weight’) against their date of sale.

plt.figure(figsize=(15, 5))

plt.subplot(1, 2, 1)
sns.lineplot(x=df['Job Start Date'], y=df['Rate in Rs./Kg.'], color='r')
plt.title("Steel Scrap Rate (Rs/Kg)", fontsize=20)
plt.xlabel('Date')

plt.subplot(1, 2, 2)
sns.lineplot(x=df['Job Start Date'], y=df['Scrape Sale Qty.'], color='b')
plt.title("Steel Scrap Weight (Kg)", fontsize=20)
plt.xlabel('Date')

Looking at the trend in the Scrap Rate feature, we can see sudden spikes where the rate crosses 120 Rs/kg, indicating anomalies, since the scrap rate should stay roughly stable and increase or decrease only gradually. In the case of Scrap Weight, however, the scrap generated at the end of a construction project may be high or low in volume at any time, depending on the size of the project.

Let’s try applying the different methods of detecting and treating outliers:

Inter Quartile Range (IQR)

IQR measures variability by splitting the sorted dataset into four equal parts. First, the entire data is sorted in ascending order and then split into quarters using the cut points Q1, Q2, and Q3. The inter-quartile range is then IQR = Q3 - Q1. The IQR method is best suited when the data forms a skewed distribution.

The first Quartile (Q1) divides the smallest 25% of the values from the other 75% that are larger.

The third quartile (Q3) divides the smallest 75% of the values from the largest 25%.

Lower Bound Limit = Q1 – 1.5 x IQR

Upper Bound Limit = Q3 + 1.5 x IQR 

So any value greater than the Upper Bound Limit (Q3 + 1.5 * IQR) or less than the Lower Bound Limit (Q1 - 1.5 * IQR) in the given dataset can be considered an outlier.
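To make the arithmetic concrete, here is a minimal numeric sketch (the values below are made up, not taken from the scrap dataset) showing how the fences flag an extreme value:

import numpy as np

# Hypothetical scrap rates (Rs/kg); the last value is a suspicious spike
rates = np.array([28, 30, 31, 32, 33, 34, 35, 36, 38, 125])

q1, q3 = np.percentile(rates, [25, 75])   # first and third quartiles
iqr = q3 - q1                             # inter-quartile range
lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr

outliers = rates[(rates < lower_fence) | (rates > upper_fence)]
print('Q1:', q1, 'Q3:', q3, 'IQR:', iqr)
print('Fences:', lower_fence, upper_fence, 'Outliers:', outliers)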

Let’s plot a boxplot to check for the presence of outliers:

plt.figure(figsize=(15, 5))

plt.subplot(1, 2, 1)
sns.boxplot(df['Scrape Sale Qty.'])
plt.xticks(fontsize=12)
plt.xlabel('Steel-Scrap Weight (in Kgs)')
plt.legend(title="Steel Scrap Weight", fontsize=10, title_fontsize=15)

plt.subplot(1, 2, 2)
sns.boxplot(df['Rate in Rs./Kg.'])
plt.xlabel('Steel Scrap Rate Rs/kg')
plt.xticks(fontsize=12)
plt.legend(title="Steel Scrap Rate", fontsize=10, title_fontsize=15)

To make the calculation faster, I have created a function that derives the Inter-Quartile Range (IQR), Lower Fence, and Upper Fence, and adds conditions to either drop the outliers or fill them with the upper or lower fence values, respectively.

def identifying_treating_outliers(df, col, remove_or_fill_with_quartile):
    q1 = df[col].quantile(0.25)
    q3 = df[col].quantile(0.75)
    iqr = q3 - q1
    lower_fence = q1 - 1.5 * iqr
    upper_fence = q3 + 1.5 * iqr
    print('Lower Fence:', lower_fence)
    print('Upper Fence:', upper_fence)
    print('Total number of outliers:',
          df[(df[col] < lower_fence) | (df[col] > upper_fence)].shape[0])
    if remove_or_fill_with_quartile == "drop":
        # Drop rows falling outside either fence
        df.drop(df.loc[df[col] < lower_fence].index, inplace=True)
        df.drop(df.loc[df[col] > upper_fence].index, inplace=True)
    elif remove_or_fill_with_quartile == "fill":
        # Cap values at the fences instead of dropping them
        df[col] = np.where(df[col] < lower_fence, lower_fence, df[col])
        df[col] = np.where(df[col] > upper_fence, upper_fence, df[col])

Applying the Function to the Scrap Rate and Scrap Weight column:

identifying_treating_outliers(df, 'Scrape Sale Qty.', 'drop')
identifying_treating_outliers(df, 'Rate in Rs./Kg.', 'drop')

DF shape before Application of Function : (1001, 5)

DF Shape after Application of Function : (925, 5)

Plotting the boxplots again to check the status of outliers after applying the ‘identifying_treating_outliers’ function:

plt.figure(figsize=(15, 5))

plt.subplot(1, 2, 1)
sns.boxplot(df['Scrape Sale Qty.'])
plt.xticks(fontsize=12)
plt.xlabel('Steel-Scrap Weight (in Kgs)')
plt.legend(title="Steel Scrap Weight", fontsize=10, title_fontsize=15)

plt.subplot(1, 2, 2)
sns.boxplot(df['Rate in Rs./Kg.'])
plt.xlabel('Steel Scrap Rate Rs/kg')
plt.xticks(fontsize=12)
plt.legend(title="Steel Scrap Rate", fontsize=10, title_fontsize=15)

Z-score Method

The Z-score of a value is the difference between that value and the mean, divided by the standard deviation. Z-scores help identify outliers: a data point is flagged if its Z-score is either less than -3 or greater than +3. The Z-score can be mathematically expressed as:

z = (x - μ) / σ, where x = particular value, μ = mean, σ = standard deviation
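As a quick sanity check on this formula, the short sketch below (made-up numbers) computes the Z-score by hand and confirms it matches scipy.stats.zscore:

import numpy as np
import scipy.stats as st

x = np.array([30.0, 32.0, 33.0, 35.0, 120.0])   # one extreme value

z_manual = (x - x.mean()) / x.std()   # z = (x - mu) / sigma
z_scipy = st.zscore(x)

print(np.allclose(z_manual, z_scipy))   # True
print(z_manual)                         # the extreme value gets the largest z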

The picture below shows the transformation of data from a normal distribution to a standard normal distribution using the Z-score, given here for reference.

In our dataset, we will treat as outliers any points with a Z-score of more than +3 or less than -3. Just a few lines of code give us the Z-scores, and we can see the difference using the distribution plots (before and after).

# Applying the Z-score on the Scrap Rate column; keep rows with |z| < 3 in dataframe dfn
zr = st.zscore(df['Rate in Rs./Kg.'])
dfn = df[(zr > -3) & (zr < 3)]

# Applying the Z-score on the Scrap Weight column; keep rows with |z| < 3 in dataframe dfnf
zw = st.zscore(dfn['Scrape Sale Qty.'])
dfnf = dfn[(zw > -3) & (zw < 3)]

plt.figure(figsize=(12, 5))

plt.subplot(1, 2, 1)
sns.distplot(df['Rate in Rs./Kg.'])
plt.title('Z Score Plot Before Removing Outlier', fontsize=15)

plt.subplot(1, 2, 2)
sns.distplot(st.zscore(dfn['Rate in Rs./Kg.']))
plt.title('Z Score Plot After Removing Outlier', fontsize=15)

Our data forms a positively skewed distribution (skewness value 0.874), which cannot be considered approximately normal, as the plot above shows. A significant improvement can be seen when comparing the plots before and after applying the Z-score.

print('before df shape', df.shape)
print('After df shape for observations dropped in Scrap Rate', dfn.shape)
print('After df shape for observations dropped in Scrap Weight', dfnf.shape)

Using the Z-score method on the Scrap Rate and Scrap Weight columns, we have dropped 21 data points (3 from Scrap Rate and 18 from Scrap Weight) whose Z-scores fell outside the ±3 range.

Local Outlier Factor (LOF)

Local Outlier Factor is an unsupervised machine learning technique that detects outliers based on the density of a data point’s closest neighborhood, and it works well when the spread (density) of the dataset is not uniform. LOF basically considers the K-distance (distance between points) and the K-neighbors (the set of points that lie within the circle of radius K-distance). Refer to the detailed documentation in the SK-Learn library.

LOF takes two major parameters into consideration: (1) n_neighbors: the number of neighbors, with a default value of 20; (2) contamination: the proportion of outliers in the given dataset, which can be set to ‘auto’ or to a float value (e.g., 0.02, 0.005).

Importing important libraries and defining model

from sklearn.neighbors import LocalOutlierFactor

d2 = df.values   # converting the df into a numpy array

lof = LocalOutlierFactor(n_neighbors=20, contamination='auto')
good = lof.fit_predict(d2) == 1   # True for inliers, False for outliers

plt.figure(figsize=(10, 5))
plt.scatter(d2[good, 1], d2[good, 0], s=2, label="Inliers", color="#4CAF50")
plt.scatter(d2[~good, 1], d2[~good, 0], s=8, label="Outliers", color="#F44336")
plt.title('Outlier Detection using Local Outlier Factor', fontsize=20)
plt.legend(fontsize=15, title_fontsize=15)

In our case, I set contamination to ‘auto’ (see the plot above) and found that LOF does not perform that well here, since the spread (density) of the data does not deviate much. I also tried contamination values of 0.005, 0.01, 0.02, 0.05, and 0.09, but the performance was still not great.
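To compare contamination settings quickly, a minimal sketch along these lines (reusing the d2 array prepared above) prints how many points each setting flags:

from sklearn.neighbors import LocalOutlierFactor

for contamination in ['auto', 0.005, 0.01, 0.02, 0.05, 0.09]:
    lof = LocalOutlierFactor(n_neighbors=20, contamination=contamination)
    labels = lof.fit_predict(d2)   # -1 = outlier, 1 = inlier
    print(contamination, '-> outliers flagged:', (labels == -1).sum())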

Density-Based Spatial Clustering for Application with Noise (DBSCAN)

When our dataset is large and has multiple numeric features (multivariate), it becomes difficult to handle outliers using IQR, the Z-score, or LOF. Here the SK-Learn library’s DBSCAN comes to the rescue, allowing us to handle outliers in multivariate datasets.

DBSCAN considers two main parameters (as mentioned below) to form clusters with the nearest data points and, based on whether a point lies in a high- or low-density region, it detects inliers or outliers.

(1) Epsilon (eps): the radius around a data point, which we can estimate from the k-distance graph.

(2) min_samples: the minimum number of points required within the epsilon radius of a data point for it to be treated as part of a dense region (cluster core).

However, in our case, we have no more than 5 features, and we have selected just two important numeric features to apply our learnings and visualize the results. Since technology and the human brain limit our ability to visualize multi-dimensional data all at once, we apply DBSCAN to these two features of our dataset.

Importing the libraries and fitting the model. To nullify the effect of differing scales in the dataset, we have normalized the data using the Min-Max Scaler.

from sklearn.cluster import DBSCAN
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import NearestNeighbors

# Scale both numeric features to the [0, 1] range
mms = MinMaxScaler()
df[['Scrape Sale Qty.', 'Rate in Rs./Kg.']] = mms.fit_transform(df[['Scrape Sale Qty.', 'Rate in Rs./Kg.']])
df.head()

# K-distance graph to choose a suitable epsilon
neigh = NearestNeighbors(n_neighbors=2)
nbrs = neigh.fit(df[['Scrape Sale Qty.', 'Rate in Rs./Kg.']])
distances, indices = nbrs.kneighbors(df[['Scrape Sale Qty.', 'Rate in Rs./Kg.']])

# Plotting K-distance Graph
distances = np.sort(distances, axis=0)
distances = distances[:, 1]
plt.figure(figsize=(8, 5))
plt.plot(distances)
plt.title('K-distance Graph', fontsize=20)
plt.xlabel('Data Points sorted by distance', fontsize=14)
plt.ylabel('Epsilon', fontsize=14)
plt.show()

The above plot shows the epsilon value rising sharply close to 0.08, and for min_samples (the number of points we want within the epsilon radius of each data point) we select 10.

model = DBSCAN(eps=0.08, min_samples=10).fit(df[['Scrape Sale Qty.', 'Rate in Rs./Kg.']])
colors = model.labels_   # label -1 marks noise points (outliers)

plt.figure(figsize=(10, 7))
plt.scatter(df['Rate in Rs./Kg.'], df['Scrape Sale Qty.'], c=colors)
plt.title('Outliers Detection using DBSCAN', fontsize=20)

DBSCAN technique has efficiently detected the significant outliers using Density-Based Spatial Clustering and can be seen in the below plots.

Conclusion

IQR is the simplest and most mathematically explained technique. It is good for univariate and bivariate data: it detects extreme values using the quartiles, a robust measure of spread around the median, but it becomes limited for multivariate datasets with a large number of numeric features. In our case, we applied it by defining a function to detect and treat outliers, and 76 data points were dropped as outliers.

DBSCAN does not require the number of clusters to be defined and is able to detect anomalies where the data spread is arbitrarily distributed and not linearly separable. It has its own limitations when working with data of varying density. In our case, it detected 16 data points as potential outliers.

Happy learning !!

For further details, connect with me:

[email protected]

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.


Calibration Of Machine Learning Models

This article was published as a part of the Data Science Blogathon.

Introduction

source: iPhone Weather App

A screen image of a weather forecast must be a familiar picture to most of us. The AI model predicting the expected weather forecasts a 40% chance of rain today, a 50% chance on Wednesday, and 50% on Thursday. Here the AI/ML model is talking about the probability of occurrence, which is the interesting part. Now, the question is: is this AI/ML model trustworthy?

As learners of Data Science/Machine Learning, we have walked through stages where we build various supervised ML models (both classification and regression). We also look at different model parameters that tell us how well the model performs. One important but probably not so well-understood model reliability parameter is model calibration. Calibration tells us how much we can trust a model’s prediction. This article explores the basics of model calibration and its relevance in the MLOps cycle. Even though model calibration applies to regression models as well, we will look exclusively at classification examples to get a grasp of the basics.

The Need for Model Calibration

Wikipedia describes calibration as: “In measurement technology and metrology, calibration is the comparison of measurement values delivered by a device under test with those of a calibration standard of known accuracy.”

A typical classification ML model outputs two important pieces of information. One is the predicted class label (for example, classifying emails as spam or not spam), and the other is the predicted probability. In binary classification, the scikit-learn library gives us model.predict_proba(test_data), which returns the probability of the target being 0 and 1 in array form. A model predicting rain can give us a 40% probability of rain and a 60% probability of no rain. We are interested in the uncertainty in the estimate of a classifier. There are typical use cases where the predicted probability of the model is of great interest to us, such as weather models, fraud detection models, customer churn models, etc. For example, we may be interested in answering the question: what is the probability of this customer repaying the loan?
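To illustrate these two outputs, here is a minimal sketch on a toy dataset (not a weather or loan dataset) contrasting predict() with predict_proba():

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print(clf.predict(X_test[:3]))         # hard class labels, e.g. [0 1 0]
print(clf.predict_proba(X_test[:3]))   # per-class probabilities, each row sums to 1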

Let’s say we have an ML model that predicts whether a patient has cancer based on certain features. The model predicts that a particular patient does not have cancer (good, a happy scenario!). But if the predicted probability is 40%, the doctor may want to conduct some more tests to reach a firm conclusion. This is a typical scenario where the prediction probability is critical and of immense interest to us. Model calibration helps us improve the model’s predicted probabilities so that the model’s reliability improves. It also helps us interpret the predicted probabilities observed in the model’s output. We can’t take for granted that the model is twice as confident when it gives a predicted probability of 0.8 versus a figure of 0.4.

We also must understand that calibration differs from the model’s accuracy. The model accuracy is defined as the number of correct predictions divided by the total number of predictions made by the model. It is to be clearly understood that we can have an accurate but not calibrated model and vice versa.

If we have a model predicting rain with 80% predicted probability at all times, then if we take data for 100 days and find 80 days are rainy, we can say that model is well calibrated. In other words, calibration attempts to remove bias in the predicted probability.

Consider a scenario where the ML model predicts whether a user making a purchase on an e-commerce website will buy another associated item or not. The model predicts a 68% probability that the user will buy item A and a 73% probability for item B. Here we will present item B to the user (the higher predicted probability), and we are not interested in the actual figures. In this scenario, we may not insist on strict calibration, as it is not so critical to the application.

The following shows details of 3 classifiers (assume the models predict whether an image is a dog image or not). Which of the following models is calibrated and hence reliable?

(a) Model 1 : 90% Accuracy, 0.85 confidence in each prediction

(b) Model 2 : 90% Accuracy, 0.98 confidence in each prediction

(c) Model 3 : 90% Accuracy, 0.91 confidence in each prediction

If we look at the first model, it is underconfident in its predictions, whereas Model 2 seems overconfident. Model 3 seems well calibrated, giving us confidence in the model’s ability: it thinks it is correct 91% of the time and is actually correct 90% of the time, which shows good calibration.

Reliability Curves

The model’s calibration can be checked by creating a calibration plot or Reliability Plot. The calibration plot reveals the disparity between the probability predicted by the model and the true class probabilities in the data. If the model is well calibrated, we expect to see a straight line at 45 degrees from the origin (indicative that estimated probability is always the same as empirical probability ).

We will attempt to understand the calibration plot using a toy dataset to concretize our understanding of the subject.

Source: own-document

The resulting probability is divided into multiple bins representing possible ranges of outcomes. For example,  [0-0.1), [0.1-0.2), etc., can be created with 10 bins. For each bin, we calculate the percentage of positive samples. For a well-calibrated model, we expect the percentage to correspond to the bin center. If we take the bin with the interval [0.9-1.0), the bin center is 0.95, and for a well-calibrated model, we expect the percentage of positive samples ( samples with label 1) to be 95%.

Source: self-document

We can plot the mean predicted value (midpoint of the bin) vs. the fraction of positives in each bin as a line plot to check the calibration of the model.
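A minimal sketch of such a reliability plot, using toy data and scikit-learn’s calibration_curve rather than the dataset discussed here, could look like this:

import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
prob_pos = model.predict_proba(X_test)[:, 1]   # predicted probability of the positive class

# Fraction of positives and mean predicted probability per bin
frac_pos, mean_pred = calibration_curve(y_test, prob_pos, n_bins=10)

plt.plot(mean_pred, frac_pos, marker='o', label='Model')
plt.plot([0, 1], [0, 1], linestyle='--', label='Perfectly calibrated')
plt.xlabel('Mean predicted probability')
plt.ylabel('Fraction of positives')
plt.legend()
plt.show()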

We can see the difference between the ideal curve and the actual curve, indicating the need for our model to be calibrated. Suppose the points obtained are below the diagonal. In that case, it indicates that the model has overestimated (model predicted probabilities are too high). If the points are above the diagonal, it can be estimated that model has been underconfident in its predictions(the probability is too small). Let’s also look at a real-life Random Forest Model curve in the image below.

If we look at the above plot, the S curve ( remember the sigmoid curve seen in Logistic Regression !) is observed commonly for some models. The Model is seen to be underconfident at high probabilities and overconfident when predicting low probabilities. For the above curve, for the samples for which the model predicted probability is 30%, the actual value is only 10%. So the Model was overestimating at low probabilities.

The toy dataset we have shown above is for understanding, and in reality, the choice of bin size is dependent on the amount of data we have, and we would like to have enough points in each bin such that the standard error on the mean of each bin is small.

Brier Score

We do not need to rely on visual information alone to estimate model calibration. Calibration can be measured using the Brier Score. The Brier score is similar to the mean squared error but is used in a slightly different context. It takes values from 0 to 1, with 0 meaning perfect calibration; the lower the Brier Score, the better the model calibration.

The Brier score is a statistical metric used to measure probabilistic forecasts’ accuracy. It is mostly used for binary classification.

Let’s say a probabilistic model predicts a 90% chance of rain on a particular day, and it indeed rains on that day. The Brier score can be calculated using the following formula,

Brier Score = (forecast - outcome)²

The Brier Score in the above case is calculated to be (0.90 - 1)² = 0.01.

The Brier Score for a set of observations is the average of the individual Brier Scores.

On the other hand, if a model predicts with a 97% probability that it will rain but it does not rain, then the calculated Brier Score in this case will be:

Brier Score = (0.97 - 0)² = 0.9409. A lower Brier Score is preferable.
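The same arithmetic takes only a couple of lines; the sketch below (made-up observations) also shows scikit-learn’s brier_score_loss for a set of forecasts:

import numpy as np
from sklearn.metrics import brier_score_loss

print((0.90 - 1) ** 2)   # 0.01   -> forecast 90% rain, and it rained
print((0.97 - 0) ** 2)   # 0.9409 -> forecast 97% rain, and it stayed dry

# For a set of observations, the Brier Score is the mean of the squared differences
y_true = np.array([1, 0, 1, 1])
y_prob = np.array([0.90, 0.97, 0.80, 0.30])
print(brier_score_loss(y_true, y_prob))   # equals np.mean((y_prob - y_true) ** 2)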

Calibration Process

Now, let’s try and get a glimpse of how the calibration process works without getting into too many details.

Some algorithms, like Logistic Regression, show good inherent calibration standards, and these models may not require calibration. On the other hand, models like SVM, Decision Trees, etc., may benefit from calibration.  The calibration is a rescaling process after a model has made the predictions.

 There are two popular methods for calibrating probabilities of ML models, viz,

(a) Platt Scaling

(b) Isotonic Regression

It is not the intention of this article to get into details of the mathematics behind the implementation of the above approaches. However, let’s look at both methods from a ringside perspective.

The Platt Scaling is used for small datasets with a reliability curve in the sigmoid shape. It can be loosely understood as putting a sigmoid curve on top of the calibration plot to modify the predictive probabilities of the model.

The above images show how imposing a Platt calibrator curve on the reliability curve of the model modifies the curve. It is seen that the points in the calibration curve are pulled toward the ideal line (dotted line) during the calibration process.

It is noted that for practical implementation during model development, standard libraries like sklearn support easy model calibration (sklearn.calibration.CalibratedClassifierCV).
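As a starting point, here is a minimal sketch (toy data, not tied to any example in this article) that calibrates an SVM with CalibratedClassifierCV, trying both Platt scaling (‘sigmoid’) and isotonic regression:

from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=5000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base = LinearSVC(random_state=0)   # SVMs often benefit from calibration

for method in ('sigmoid', 'isotonic'):   # 'sigmoid' corresponds to Platt scaling
    calibrated = CalibratedClassifierCV(base, method=method, cv=5).fit(X_train, y_train)
    prob_pos = calibrated.predict_proba(X_test)[:, 1]
    print(method, 'Brier score:', round(brier_score_loss(y_test, prob_pos), 4))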

Impact on Performance

It is pertinent to note that calibration modifies the outputs of trained ML models. It is possible that calibration also affects the model’s accuracy. Post calibration, some values close to the decision boundary (say 50% for binary classification) may be modified in such a way as to produce an output label different from the one assigned before calibration. The impact on accuracy is rarely large, and it is important to note that calibration improves the reliability of the ML model.

Conclusion

In this article, we have looked at the theoretical background of Model Calibration. Calibration of Machine Learning models is an important but often overlooked aspect of developing a reliable model. The following are key takeaways from our learnings:-

(a) Model calibration gives insight into the uncertainty in the model’s predictions and, in turn, helps the end user understand the model’s reliability, especially in critical applications.

(b) Model calibration is extremely valuable to us in cases where predicted probability is of interest.

(c) Reliability curves and the Brier Score give us an estimate of the calibration level of the model.

(d) Platt scaling and isotonic regression are popular methods to rescale the predicted probabilities and improve calibration.

Where do we go from here? This article aims to give you a basic understanding of model calibration. We can build on this further by exploring actual implementations for specific use cases using standard Python libraries like scikit-learn.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

