Video Machine Learning: A Content Marketing Revolution?


Video marketing is being revolutionized by fast data, machine learning, and artificial intelligence.  The dawn of data-driven video is upon us. Video takes the lion’s share of marketing spend and fast-growing mobile video is surpassing all other marketing methods.

Understanding behavior and content consumption is key to optimizing mobile video. Brands have an insatiable appetite for consumer engagement, as is evident in their adoption of video, as YouTube, Facebook, and InMobi report.

The industry is moving away from the video interruption ad model and premium video is taking a key spot. A major battle is brewing between video networks, publishers, and content creators. Those who have intelligent data will win the video marketing revolution.

With few exceptions, old-school person-to-person media buying is fading fast. Machine learning is being used to ensure the optimal deal is always reached in programmatic video placement. We are seeing a torrent of data coming in from ad platforms, beacons, wearables, IoT, and so forth. This data tsunami is compounding daily, creating what the industry calls “fast data”. Analyzing video, and human action on video, is a big challenge due to the sheer volume of consumption. Speed and agility are now the competitive weapons when building an intelligent video arsenal.

In July, I attended the launch of Miip by InMobi, an intelligent video and ad unit experience. These units are like Facebook’s left and right slider units, but Miip has also implemented discovery. Check out the video to see more of what I’m talking about:

With all this technology, the one thing that remains true is content still must resonate with the consumer – and machine learning is creating a huge opportunity to match the right content with the right consumer.

We see big players like TubeMogul seeing massive growth, as described in Mobile Programmatic Buying Is Taking Off. Programmatic spend in mobile now surpasses desktop by 56.2%, eMarketer points out.

Video Creation & Growth

Low-cost broadcast-quality video is here with iMovie HD and Camtasia Studio 8. Full commercials are now edited entirely on iPhones. There is an explosion of professional content, and what was once cost-prohibitive is now the industry norm. With all this video technology unleashed, hundreds of YouTube stars were born. The acceleration of cable cord-cutting is upon the cable networks. As more high-quality digital video hits the scene, this will fuel greater choice on the consumer’s terms.

In both cases, content engineering is a must-have (see 5 Hypnotic Mobile Native Video Content Marketing Methods).

Secrets To A Successful Video Strategy from Social@Ogilvy


Data Driven Video Storytelling

This year, Cannes Lions was all about VIDEO storytelling with a big focus on data. Visual and mobile content experiences are personal. I am seeing a massive shift to data-driven journalism. Google News Lab, Facebook’s Publishing Garage, and Truffle Pig (a content-creation agency launched by Snapchat, Daily Mail, and WPP) are all powering scaled content creation.

“The power of digital allows content, platform, and companies to test and learn in real time before scaling.” – Max Kalehoff

Hear more on this movement from David Leonhardt from New York Times’ The Upshot, Mona Chalabi from Facebook Garage, and Ezra Klein and Melissa Bell from Vox:

Video is Not Spandex

Consumers are not one-size-fits-all when it comes to how they consume content. Content creation is a natural progression for applying artificial intelligence (AI) technology. Machine learning has the ability to connect many data elements and test many hypotheses in real time. Using humans to adjust the algorithms is “supervised learning”. “Unsupervised learning,” a self-learning and constantly improving system, is the holy grail in AI.

The exciting part is when machines can create by themselves. We are witnessing this at Google: see Inceptionism: Going Deeper into Neural Networks.

Getting the right message to the right person is critical to obtaining a positive response. The delivery process and decision will impact responsiveness. Each platform requires a different strategy. Companies like TubeMogul, Tremor Video, and Hulu all offer programmatic video management.

The following are three examples of machine learning techniques being used to enhance video engagement levels:

Identify what visual objects induce habitual responses: What visual objects allow for higher consumer engagement? Visual content can then be grouped and that knowledge can be used over and over in later videos.

Machine learning predicts video consumption habits: What people watch tells you a great deal about their preferences. Measuring audience behavior across video types creates a consumption map. Consumption maps predict things like video placement and cycle times.

Machine learning tracks visual preferences by segment: The type of visual content affects the reaction of a targeted segment, and machine learning can track the visual preferences of each audience segment. Each brand and content creator can reach a new level of understanding. What does the audience find most appealing? Is there a large-scale pattern you can identify?

Visual Programmatic

The next frontier of mobile video is intelligence – the ability to predict, as well as adapt, content based on all the data available. We are seeing companies indexing video libraries to recommend content. Netflix and Amazon can “predict” using supervised learning alongside human curators. All this video metadata is providing a treasure trove of information, and connecting it with the social graph changes the game.

Finding content that viewers will enjoy is the ultimate goal, and extended, deep video engagement is a big opportunity. Achieving this level of nirvana has its challenges: see Why Websites Still Can’t Predict Exactly What You Want. We are just scratching the surface of artificial intelligence’s learning algorithms.

In the age of intelligent data, audience insight is always a winning strategy. Those who tune their video content with intelligence will achieve higher levels of revenue.



Calibration Of Machine Learning Models

This article was published as a part of the Data Science Blogathon.

Introduction

source: iPhone Weather App

A screen image of a weather forecast must be a familiar picture to most of us. The AI model predicting the weather gives a 40% chance of rain today, a 50% chance on Wednesday, and a 50% chance on Thursday. Here the AI/ML model is talking about the probability of occurrence, which is the interesting part. Now, the question is: is this AI/ML model trustworthy?

As learners of Data Science/Machine Learning, we have walked through stages where we build various supervised ML models (both classification and regression). We also look at different model parameters that tell us how well the model performs. One important but probably not so well-understood model reliability parameter is model calibration. Calibration tells us how much we can trust a model prediction. This article explores the basics of model calibration and its relevance in the MLOps cycle. Even though model calibration applies to regression models as well, we will look exclusively at classification examples to get a grasp of the basics.

The Need for Model Calibration

Wikipedia defines calibration as: “In measurement technology and metrology, calibration is the comparison of measurement values delivered by a device under test with those of a calibration standard of known accuracy.”

A typical classification ML model outputs two important pieces of information. One is the predicted class label (for example, classifying emails as spam or not spam), and the other is the predicted probability. In binary classification, the scikit-learn library provides the method model.predict_proba(test_data), which gives us the probabilities of the target being 0 and 1 in array form. A model predicting rain can give us a 40% probability of rain and a 60% probability of no rain. We are interested in the uncertainty in the estimate of a classifier. There are typical use cases where the predicted probability of the model is very much of interest to us, such as weather models, fraud detection models, customer churn models, etc. For example, we may be interested in answering the question: what is the probability of this customer repaying the loan?
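
To make this concrete, here is a minimal sketch (using synthetic, illustrative data and a logistic regression chosen only for demonstration) of how predict_proba exposes these probabilities in scikit-learn:

# Hypothetical example: inspect predicted probabilities from a binary classifier.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression().fit(X_train, y_train)

# Each row holds [P(class 0), P(class 1)] for one test sample.
proba = model.predict_proba(X_test)
print(proba[:5])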

Let’s say we have an ML model which predicts whether a patient has cancer based on certain features. The model predicts that a particular patient does not have cancer (good, a happy scenario!). But if the predicted probability is 40%, the doctor may want to conduct some more tests for a certain conclusion. This is a typical scenario where the prediction probability is critical and of immense interest to us. Model calibration helps us improve the model’s predicted probabilities so that the model’s reliability improves. It also helps us interpret the predicted probabilities observed from the model. We can’t take for granted that the model is twice as confident when giving a predicted probability of 0.8 versus a figure of 0.4.

We also must understand that calibration differs from the model’s accuracy. The model accuracy is defined as the number of correct predictions divided by the total number of predictions made by the model. It is to be clearly understood that we can have an accurate but not calibrated model and vice versa.

If we have a model predicting rain with 80% predicted probability at all times, then if we take data for 100 days and find 80 days are rainy, we can say that model is well calibrated. In other words, calibration attempts to remove bias in the predicted probability.

Consider a scenario where an ML model predicts whether a user making a purchase on an e-commerce website will buy another associated item. The model predicts a probability of 68% that the user buys Item A and 73% for Item B. Here we will present Item B to the user (the higher predicted probability), and we are not interested in the actual figures. In this scenario, we may not insist on strict calibration, as it is not so critical to the application.

The following shows details of 3 classifiers (assume that models predict whether an image is a dog image or not). Which of the following model is calibrated and hence reliable?

(a) Model 1 : 90% Accuracy, 0.85 confidence in each prediction

(b) Model 2 : 90% Accuracy, 0.98 confidence in each prediction

(c) Model 3 : 90% Accuracy, 0.91 confidence in each prediction

If we look at the first model, it is underconfident in its prediction, whereas Model 2 seems overconfident. Model 3 seems well calibrated, giving us confidence in the model’s ability. Model 3 thinks it is correct 91% of the time and is actually correct 90% of the time, which shows good calibration.

Reliability Curves

The model’s calibration can be checked by creating a calibration plot or Reliability Plot. The calibration plot reveals the disparity between the probability predicted by the model and the true class probabilities in the data. If the model is well calibrated, we expect to see a straight line at 45 degrees from the origin (indicative that estimated probability is always the same as empirical probability ).

We will attempt to understand the calibration plot using a toy dataset to concretize our understanding of the subject.


The resulting probabilities are divided into multiple bins representing possible ranges of outcomes. For example, 10 bins such as [0-0.1), [0.1-0.2), etc., can be created. For each bin, we calculate the percentage of positive samples. For a well-calibrated model, we expect the percentage to correspond to the bin center. If we take the bin with the interval [0.9-1.0), the bin center is 0.95, and for a well-calibrated model, we expect the percentage of positive samples (samples with label 1) to be 95%.


We can plot the mean predicted value (midpoint of the bin) vs. the fraction of true positives in each bin as a line plot to check the calibration of the model.
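
As a concrete illustration, the short sketch below draws such a reliability plot with scikit-learn's calibration_curve on synthetic data; the random forest classifier and the 10-bin choice are assumptions made only for demonstration:

# Illustrative reliability plot: fraction of positives vs. mean predicted probability.
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
prob_pos = clf.predict_proba(X_test)[:, 1]

# Bin the predictions into 10 bins and compute the two quantities to plot.
frac_pos, mean_pred = calibration_curve(y_test, prob_pos, n_bins=10)

plt.plot(mean_pred, frac_pos, "s-", label="Random Forest")
plt.plot([0, 1], [0, 1], "k--", label="Perfectly calibrated")
plt.xlabel("Mean predicted probability")
plt.ylabel("Fraction of positives")
plt.legend()
plt.show()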

We can see the difference between the ideal curve and the actual curve, indicating the need for our model to be calibrated. If the points obtained are below the diagonal, it indicates that the model has overestimated (the predicted probabilities are too high). If the points are above the diagonal, the model has been underconfident in its predictions (the predicted probabilities are too small). Let’s also look at a real-life Random Forest model curve in the image below.

If we look at the above plot, the S curve (remember the sigmoid curve seen in Logistic Regression!) is commonly observed for some models. The model is underconfident at high probabilities and overconfident when predicting low probabilities. For the above curve, for the samples where the model predicted a probability of 30%, the actual value is only 10%. So the model was overestimating at low probabilities.

The toy dataset we have shown above is for understanding, and in reality, the choice of bin size is dependent on the amount of data we have, and we would like to have enough points in each bin such that the standard error on the mean of each bin is small.

Brier Score

We do not need to rely on visual information alone to estimate model calibration. Calibration can be measured using the Brier score. The Brier score is similar to the Mean Squared Error but is used in a slightly different context. It takes values from 0 to 1, with 0 meaning perfect calibration; the lower the Brier score, the better the model calibration.

The Brier score is a statistical metric used to measure probabilistic forecasts’ accuracy. It is mostly used for binary classification.

Let’s say a probabilistic model predicts a 90% chance of rain on a particular day, and it indeed rains on that day. The Brier score can be calculated using the following formula,

Brier Score = (forecast − outcome)²

The Brier Score in the above case is calculated to be (0.90 − 1)² = 0.01.

The Brier Score for a set of observations is the average of individual Brier Scores.

On the other hand, if a model predicts with a 97%  probability that it will rain but does not rain, then the calculated Brier Score, in this case, will be,

Brier Score = (0.97 − 0)² = 0.9409. A lower Brier Score is preferable.
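
The toy forecasts above can be verified in a few lines; the sketch below computes the Brier score both by hand and with scikit-learn's brier_score_loss (the numbers are simply the two examples from the text):

import numpy as np
from sklearn.metrics import brier_score_loss

forecasts = np.array([0.90, 0.97])  # predicted probability of rain
outcomes = np.array([1, 0])         # 1 = it rained, 0 = it did not

# Mean squared difference between forecast and outcome.
manual = np.mean((forecasts - outcomes) ** 2)
print(manual)  # (0.01 + 0.9409) / 2 = 0.47545

print(brier_score_loss(outcomes, forecasts))  # same value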

Calibration Process

Now, let’s try and get a glimpse of how the calibration process works without getting into too many details.

Some algorithms, like Logistic Regression, show good inherent calibration and may not require calibration. On the other hand, models like SVMs, Decision Trees, etc., may benefit from calibration. Calibration is a rescaling process applied after a model has made its predictions.

 There are two popular methods for calibrating probabilities of ML models, viz,

(a) Platt Scaling

(b) Isotonic Regression

It is not the intention of this article to get into details of the mathematics behind the implementation of the above approaches. However, let’s look at both methods from a ringside perspective.

Platt scaling is used for small datasets with a reliability curve in the sigmoid shape. It can be loosely understood as fitting a sigmoid curve on top of the calibration plot to modify the model’s predicted probabilities.

The above images show how imposing a Platt calibrator curve on the reliability curve of the model modifies the curve. It is seen that the points in the calibration curve are pulled toward the ideal line (dotted line) during the calibration process.

It is noted that for practical implementation during model development, standard libraries like scikit-learn support easy model calibration (sklearn.calibration.CalibratedClassifierCV).
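
As a rough sketch of how that looks in practice, the snippet below wraps an SVM with CalibratedClassifierCV using Platt scaling (method="sigmoid"); the synthetic data and the LinearSVC base estimator are assumptions chosen only for demonstration:

from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SVMs do not produce well-calibrated probabilities out of the box,
# so the base estimator is wrapped with a sigmoid (Platt) calibrator.
base = LinearSVC(random_state=0)
calibrated = CalibratedClassifierCV(base, method="sigmoid", cv=5)
calibrated.fit(X_train, y_train)

print(calibrated.predict_proba(X_test)[:5])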

Impact on Performance

It is pertinent to note that calibration modifies the outputs of trained ML models. It could be possible that calibration also affects the model’s accuracy. Post calibration, some values close to the decision boundary (say 50% for binary classification) may be modified in such a way as to produce an output label different from prior calibration. The impact on accuracy is rarely huge, and it is important to note that calibration improves the reliability of the ML model.

Conclusion

In this article, we have looked at the theoretical background of Model Calibration. Calibration of Machine Learning models is an important but often overlooked aspect of developing a reliable model. The following are key takeaways from our learnings:-

(a) Model calibration gives insight into the uncertainty in a model’s predictions and, in turn, helps the end user understand the model’s reliability, especially in critical applications.

(b) Model calibration is extremely valuable to us in cases where predicted probability is of interest.

(c) Reliability curves and the Brier score give us an estimate of the calibration levels of the model.

(d) Platt scaling and isotonic regression are popular methods for adjusting the calibration levels and improving the predicted probability.

Where do we go from here? This article aims to give you a basic understanding of model calibration. We can further build on this by exploring actual implementations using standard Python libraries like scikit-learn for use cases.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.


Hyperparameters In Machine Learning Explained

Machine learning offers various concepts for improving a learning model, and hyperparameters are one of the most important. They are generally classified as model hyperparameters because they are not learned while fitting the machine to the training set; they relate instead to the model selection task. In deep learning and machine learning, hyperparameters are the variables that you need to set before applying a learning algorithm to a dataset.

What are Hyperparameters?

Hyperparameters are those parameters that are specifically defined by the user to improve the learning model and control the process of training the machine. Their values are set before the learning process of the model is applied, which means they cannot be changed during training. Hyperparameters make it easier to control overfitting of the training set and provide the best, or optimal, way to control the learning process.

Hyperparameters are externally applied to the training process and their values cannot be changed during the process. Most of the time, people get confused between parameters and hyperparameters used in the learning process. But parameters and hyperparameters are different in various aspects. Let us have a brief look over the differences between parameters and hyperparameters in the below section.

Parameters Vs Hyperparameters

These terms are often confused by users, but hyperparameters and parameters are very different from each other. The differences are as follows −

Model parameters are the variables that are learned from the training data by the model itself. On the other hand, hyperparameters are set by the user before training the model.

The values of model parameters are learned during the training process, whereas the values of hyperparameters cannot be learned or changed during the learning process.

Model parameters are saved as part of the trained model, whereas hyperparameters are not part of the trained model, so their values are not saved.
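
A small, illustrative scikit-learn sketch of this distinction is shown below; the specific model and settings are assumptions chosen only to make the contrast visible:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=1)

# Hyperparameters: chosen by the user up front, not changed during fitting.
model = LogisticRegression(C=0.5, max_iter=200)
model.fit(X, y)

# Parameters: learned from the training data and stored in the trained model.
print(model.coef_, model.intercept_)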

Classification of Hyperparameters

Hyperparameters are broadly classified into two categories. They are explained below −

Hyperparameter for Optimization

The hyperparameters that are used for the enhancement of the learning model are known as hyperparameters for optimization. The most important optimization hyperparameters are given below −

Learning Rate − The learning rate decides how much the model’s weights are adjusted at each update step, i.e., how strongly new information overrides what the model has already learned from the data. If the learning rate is set too high, the learning model will be unable to optimize properly and may skip over minima. Alternatively, if the learning rate is very low, convergence will be very slow, which may raise problems when cross-checking the learning model.

Batch Size − The optimization of a learning model depends on several hyperparameters, and batch size is one of them. The speed of the learning process can be tuned using the batch method, in which the training data is divided into small batches: the model is trained on each batch and evaluated, and the weights are adjusted accordingly. Batch size affects factors like memory and time. If you increase the size of the batch, more memory will be required to process the calculation, while a very small batch size lowers performance and leads to more noise in the error calculation.

Number of Epochs − An epoch in machine learning is a hyperparameter that specifies one complete pass of the training data through the algorithm. The number of epochs is a major hyperparameter for training and is always an integer value, counted after every full cycle. Epochs play a major role in learning processes where a repeated trial-and-error procedure is required; validation error can be controlled by adjusting the number of epochs, which is why this hyperparameter is also associated with early stopping.
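
The following minimal NumPy sketch shows where these three hyperparameters enter a training loop; the data, the linear model, and the chosen values are illustrative assumptions, not a recommended configuration:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=1000)

learning_rate = 0.05   # step size of each weight update
batch_size = 32        # samples used per gradient estimate
n_epochs = 20          # complete passes over the training data

w = np.zeros(3)
for epoch in range(n_epochs):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        # Gradient of the mean squared error on this mini-batch.
        grad = 2 * X[batch].T @ (X[batch] @ w - y[batch]) / len(batch)
        w -= learning_rate * grad

print(w)  # should be close to true_w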

Hyperparameter for Specific Models

Number of Hidden Units − Deep learning models contain hidden layers of neurons (hidden units). These must be defined to set the learning capacity of the model, and the hyperparameter that controls this capacity is the number of hidden units. The number of hidden units should be large enough for the critical functions being modeled, but not so large that the model overfits.

Number of Layers − A model with more layers can give better performance than one with fewer layers. The extra depth can enhance performance by making the trained model more reliable and less error-prone.

Conclusion

Hyperparameters are those parameters that are externally defined by machine learning engineers to improve the learning model.

Hyperparameters control the process of training the machine.

Parameters and hyperparameters are terms that sound similar but they differ in nature and performance completely.

Parameters are the variables that can be changed during the learning process but hyperparameters are externally applied to the training process and their values cannot be changed during the process.

Various types of hyperparameters, applied through different methods, enhance the performance of the learning model and help produce more error-free learning models.

How Google Uses Machine Learning?

In the last five years, data scientists have created data-crunching machines using cutting-edge methodologies. Various machine learning models have been developed that help resolve challenging situations in the real world. With the growth of technology, various public- and government-sector services are moving onto the internet. This makes processes faster and rapidly increases the reach of services among citizens.

Google is making our lives easier in every aspect. From booking a taxi to finding a nearby dentist, all these tasks can be done using Google’s various services. Have you ever wondered what is behind these services, and how Google makes such personalized suggestions and recommendations? It uses various machine learning algorithms with the data collected from the user to make these things possible. But before looking at how Google uses machine learning algorithms, let us have a brief look at what machine learning is.

What is Machine Learning?

Machine learning algorithms are methods used by artificial intelligence models to provide an output on the basis of some given input data. Machine learning is a subset of artificial intelligence. Machine learning algorithms are basically classified into four types −

Supervised learning

Unsupervised learning

Semi-supervised learning

Reinforcement learning

How Google uses Machine Learning?

Now, let us see the various ways in which Google uses machine learning algorithms to perform tasks across its different services.

Google Translate

The world is developing very fast. People from different countries and communities travel around the globe for different purposes, and one of the major markers of one’s identity is language. There are around 6,500 languages worldwide. In India alone, there are 24 different languages identified by the Indian government. The Google Translate feature uses Statistical Machine Translation (SMT) to help a large number of people translate texts into their preferred language. This helps people in various ways, such as translating a website into their own language; they can also understand other written texts in their own language without any manual help. Tourists traveling in other countries use Google Translate to communicate with locals. The company does not claim to translate 100% correctly in all languages, but it is able to provide users with a clear general understanding.

Google Photos

In today’s time, media on any device, whether mobile, laptop, etc., is a very essential part of daily life. People use various social media applications to show their status and images related to their lifestyle, and they also store many pictures for future use. All of this needs a good media manager. Google Photos helps users store their pictures in the cloud, and they can access them anytime they want. There is also a backup option in Google Photos which keeps your data safe and secure. Apart from these basic features, Google Photos uses machine learning algorithms to suggest the best pictures from your travel album. It also sets reminder notifications for various timelines of your pictures. The pictures can be organized on the basis of face recognition, location, etc.

Gmail

As we know, there is a separate account for each individual using the Gmail service by Google. The inbox, social, and promotions sections contain different mail for each Gmail user. This is because Google uses machine learning algorithms to filter these emails and deliver them to each user according to their search history, browser history, and interests. The Gmail service uses labeled data to drive these suggestions.

Apart from these promotional emails, the Gmail service also offers a smart reply feature. Gmail uses machine learning algorithms to suggest quick replies according to the text received in a particular mail. Users can send instant replies using these suggestions, which saves time. The quick reply feature is not limited to English; it is available in various languages such as Spanish, French, Portuguese, etc.

Google Assistant

Google Assistant is now provided on almost every Android device. It helps a user get results from all over the internet simply by using voice commands. From finding the best restaurant to booking tickets for a movie, everything can be done using your voice. This lets users get their work done without interrupting their current task. You can simply keep writing something on your laptop and use Google Assistant to get today’s news headlines with a voice command. Google’s algorithms capture your spoken words, convert them into text, and then return the output according to that voice input.

Natural Language Processing

Natural Language Processing is used by Google to extract the meaning and the structure of the provided text. This helps to get some meaningful information out of the texts. NLP is used by an organization to extract some data about a person, place, etc to better understand the trends over the internet. This will help the organization to make better suggestions for every service to a specific user. The various uses of Natural Language Processing are Content classification, document analysis, trend spotting, understanding the sentiments of a customer, etc. The Natural Language API developed by Google is used to perform natural language processing.

Map and Navigation

Using Google Maps has become very common these days. Travelers use Google Maps to travel to unknown places. Delivery personnel, cab drivers, etc. use Google Maps extensively to reach their destinations. Google Maps uses machine learning algorithms to suggest the best route to the searched destination. Google Maps also shows the various levels of traffic on the route to that specific location. It shows an estimated time of arrival (ETA) on the basis of calculations based on traffic, distance, and mode of transport.

Ad Suggestion Summary

Machine Learning algorithms are defined as a method that is used by Artificial Intelligence models to provide an output on the basis of some given data as an input.

The various uses of machine learning algorithms in Google services are Gmail, Google Assistant, Maps and Navigations, Natural Language Processing, etc.

Gmail service by Google also uses a quick reply feature. The Gmail service uses machine learning algorithms to suggest quick replies according to the received text in the particular mail.

The Google translate feature by Google uses Statistical Machine Translation (SMT) to help a large number of people to translate the language into their own preferred language.

Google map uses a machine learning algorithm to show an estimated time of arrival (ETA) on the basis of calculations based on traffic, distance, and mode of transport.

What Machine Learning Is Rocket Science?

If you have been interested in machine learning, this guide is a fantastic place to begin exploring it. Aside from introducing readers to the fundamentals, it also motivates you to learn more by pointing you toward various online libraries and courses. Rapid improvements in this field have certainly led people to believe it will drive innovation for years to come.

There have been some extraordinary improvements in AI which have led many to think it is going to be the technology that will shape our future.

1. AI defeated humans at Go: As many have noted, Go is regarded as the most complicated professional game due to the massive number of possible moves that can be made.

2. AI predicted US election outcomes: Many people were amazed by the results of the US presidential election, but a Mumbai-based startup named MogIA managed to forecast it a month before the results were announced. The organization analysed social media sentiment through countless social media data points. This was the company’s fourth successful forecast in a row.

3. AI enhances cancer diagnosis: There have been some path-breaking innovations in the field of healthcare. It is thought that the healthcare sector will benefit the most from AI.

There are artificial intelligence applications that can now predict the incidence of cancer with 90 percent accuracy simply by analysing a patient’s symptoms, which can help a physician begin treatment early. People often use the terms AI and machine learning interchangeably; however, they aren’t the same thing. It has been shown that computers can be programmed to execute quite complex tasks that were previously performed only by people. Machine learning is regarded as one of the most successful approaches to AI, but it is only one approach. For instance, there are many chatbots that are rule-based, i.e., they can reply only to certain queries, depending on how they were programmed.

However, they won’t be able to learn anything new from those queries. So this can be categorized as AI, since the chat bots replicate human behaviour, but it cannot be termed machine learning. The question is: can machines actually ‘learn’? How is it possible for a system to learn if it does not have a mind and an intricate nervous system like humans? According to Arthur Samuel, “Machine learning can be described as a field of study that gives computers the ability to learn without being explicitly programmed.”


We can also define it as a computer’s capacity to learn from experience to perform a particular task, where performance improves with experience. This is comparable to a computer program playing chess, which can be classed as machine learning if it learns from prior experience and then makes better moves to win a match. Deep learning, in turn, utilizes neural networks to mimic human decision-making abilities. A neural network is made up of neurons and hence resembles a human nervous system. Have you ever wondered how Facebook finds your face among many in a picture? Image detection is one example of deep learning, which is quite a bit more complex since it requires lots of data to train. For instance, a deep learning algorithm may learn to recognise a car but will need to be trained on a massive data set consisting of cars as well as other objects. If that isn’t done, it may make a wrong decision, like identifying a bus as a car. Hence, compared with other machine learning algorithms, a deep learning algorithm requires more data in order to detect and understand every minute detail and make the right decisions.

Now that you have recognized the differences between artificial intelligence, machine learning, and deep learning, let us dig deeper into machine learning.

There are three main kinds of machine learning algorithms.

1. Supervised learning: The data set in supervised learning consists of input data along with the expected output. The model is a function that maps this input to the expected result. The model can then be applied to fresh sets of data, for which the expected outcome isn’t available but has to be predicted from the given data.

For example, a car company may use a data set of automobile models from different manufacturers and their prices. This would assist the organization in establishing a competitive price.

In machine learning, the best results are often attained not with a fancier algorithm but with more data.
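
A tiny sketch of this supervised-learning idea is shown below, using made-up car data; the features, prices, and choice of a linear model are purely illustrative assumptions:

import numpy as np
from sklearn.linear_model import LinearRegression

# Features: [engine_size_litres, age_years]; label: price in thousands.
X_train = np.array([[1.2, 5], [1.6, 3], [2.0, 1], [1.4, 4], [1.8, 2]])
y_train = np.array([6.0, 9.5, 15.0, 7.5, 12.0])

model = LinearRegression().fit(X_train, y_train)

# Predict the price of a previously unseen car.
print(model.predict(np.array([[1.6, 2]])))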

2. Unsupervised learning: The main difference from supervised learning is that the data set does not contain the expected outcome, as it does in the supervised learning setup. The data set will only have inputs (or features), and the algorithm has to infer the results. For example, if a shirt manufacturing firm wants to produce three different sizes of shirts (small, medium and large), its data includes the shoulder, waist and chest measurements of its customers. Based on this massive data set, the business needs to group the measurements into three classes so that there is a best fit for everyone. Here an unsupervised learning technique can be used to group the data points into three distinct sizes and predict a suitable shirt size for every customer.

As per the chart given in Figure 2, let us consider a business that has only the shoulder and waist measurements as the inputs of its data set. It will then have to categorize this data set into three classes, which will enable the business to predict the shirt size for every customer. This technique is referred to as clustering, where the data set is partitioned into the desired number of clusters. Most of the time, real data sets are not as clean as the one shown in this example; data points that are very close to each other make clustering tricky to apply. Additionally, clustering is just one of many techniques used in unsupervised learning to predict results.
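
The shirt-sizing example can be sketched with k-means clustering as below; the measurements are synthetic and the use of k-means (via scikit-learn) is an illustrative assumption:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Synthetic (shoulder_cm, waist_cm) measurements for 300 customers.
measurements = np.vstack([
    rng.normal([40, 75], 2, size=(100, 2)),   # roughly "small"
    rng.normal([45, 85], 2, size=(100, 2)),   # roughly "medium"
    rng.normal([50, 95], 2, size=(100, 2)),   # roughly "large"
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(measurements)
print(kmeans.cluster_centers_)        # one centre per shirt size
print(kmeans.predict([[46, 84]]))     # suggested size cluster for a new customer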


3. Reinforcement learning: In reinforcement learning, a system or an agent trains itself when exposed to a particular environment, through a process of trial and error. Consider a child who wants to learn how to ride a bike. To begin with, she will try to learn from someone who already knows how to ride one. Afterwards, she will try riding on her own and may fall down many times. Learning from her previous mistakes, she will try to ride without falling.

When she eventually rides the bicycle without falling, it can be regarded as a reward for her efforts. Now think of this child as a machine or an agent that gets punished (falling) for committing an error and earns a reward (not falling) for not committing any error.

A chess-playing program is a good illustration of this, where one wrong move will penalize the agent and may cost it the match, whereas a combination of one or more correct moves will earn it a reward by making it win. Depending on the requirement, these approaches may also be used in combination to yield a new model; for example, supervised learning can sometimes be used alongside unsupervised learning, depending on the data set as well as the expected result.
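
As a toy illustration of this reward-and-penalty idea, the sketch below runs tabular Q-learning on a tiny made-up environment (an agent on a five-cell line that is rewarded only for reaching the right end); everything in it is an assumption chosen for demonstration:

import numpy as np

n_states, n_actions = 5, 2           # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))  # learned value of each action in each state
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:     # episode ends at the rewarding rightmost cell
        # Trial and error: explore sometimes, otherwise act greedily.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update: learn from the reward (or the penalty of no reward).
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(Q)  # "move right" should end up with the higher value in every state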

People frequently believe machine learning is only for somebody who is great with math or numbers, and impossible for anyone else to learn. Machine learning isn’t rocket science after all. The only things required to learn it are eagerness and curiosity. The number of libraries and tools available now makes it easier to learn. Google’s TensorFlow library, which is now open source, and the many Python libraries such as NumPy and scikit-learn, are only a couple of them. Everyone can make use of these libraries, and also contribute to them to solve problems, since they are open source. You do not need to be concerned about the intricacies involved in your algorithm, such as complicated mathematical computations (like gradients, matrix multiplication, etc.), because this work can be left to these libraries. Libraries make it a lot easier for everybody: rather than getting involved in executing complicated computations, the user can now concentrate on applying the algorithm.

In addition, there are many APIs available that can be used to build artificial intelligence applications. IBM’s Watson, for example, is capable of performing many tasks such as answering a user’s questions, helping physicians diagnose diseases, and far more.

If you’re excited about the prospects that machine learning offers, our e-learning era has made things simpler for you. There are lots of massive open online courses (MOOCs) provided by many companies. One such course is Coursera’s Machine Learning, taught by Andrew Ng, one of the co-founders of Coursera. This course gives you a basic understanding of the algorithms employed in machine learning, and it covers both supervised learning and unsupervised learning. It is a self-paced course, but designed to be completed within 12 weeks. If you would like to dig deeper and explore deep learning, which is a subset of machine learning, you can learn it via a different course provided by fast.ai. That course is divided into two parts: Practical deep learning for coders (Part 1) and Cutting edge deep learning for coders (Part 2). Both are designed for seven weeks each and give you a good insight into deep learning. If you want to specialise in deep learning, you can opt for the Deep Learning Specialisation offered on Coursera by deeplearning.ai. Then, for you to practice, there are lots of resources that can supply you with large data sets to test your skills and apply what you have learned. One such site is Kaggle, which offers varied data sets and can help you overcome your biggest obstacle, i.e., obtaining data to test your learning model.

If you sometimes feel lost on this learning journey, when your algorithm doesn’t work as expected or you don’t understand an intricate equation, remember the famous dialogue from the film The Pursuit of Happyness: “Don’t ever let somebody tell you you can’t do something. Not even me. You got a dream, you gotta protect it. When people can’t do something themselves, they wanna tell you you can’t do it.”

Room Occupancy Detection Using Machine Learning Algorithms

This article was published as a part of the Data Science Blogathon

In this article, we will see how we can detect room occupancy from environmental variable data with machine learning algorithms. For this purpose, I am using the Occupancy Detection Dataset from the UCI ML Repository. Here, ground-truth occupancy was obtained from time-stamped pictures, while environmental variables like temperature, humidity, light, and CO2 were recorded every minute. Implementing an ML algorithm instead of a physical PIR sensor would be cost- and maintenance-free. This might be useful in the field of HVAC (Heating, Ventilation, and Air Conditioning).

Data Understanding and EDA

Here we are using R for ML programming. The dataset zip has 3 text files, one for training the model and two for testing the model. For reading these files in R we use read.csv(), and to explore the data structure, data dimensions, and 5-point statistics of the dataset we use the “summarytools” package. The images included here are captured from the R console while executing the code.

data = read.csv("datatrain.txt", header = T, sep = ",", row.names = 1)
View(data)
library(summarytools)
summarytools::view(dfSummary(data))

Data Summary 

Observations

All environmental variables are read correctly as numeric type by R; however, we need to define the variable “date” as a date class type. For the ‘Date’ variable treatment, we use the “lubridate” package. Also, we have time-of-day information available in the date column which we can extract and use for modeling, as occupancy for spaces like offices will depend on the time of day. Another important observation is that we don’t have missing values in the entire dataset. The Occupancy variable needs to be defined as a factor type for further analysis.

library("readr") library("lubridate") data$date1 = as_date(data$date) data$date= as.POSIXct(data$date, format = "%Y-%m-%d %H:%M:%S") data$time = format(data$date, format = "%H:%M:%S") data1= data[,-1] data1$Occupancy = as.factor(data1$Occupancy)

Now our processed data looks like this:

Processed Data Preview

Next, we check two important aspects of the data: the correlation plot of the variables, to understand multicollinearity issues in the dataset, and the proportion of the target variable distribution.

library(corrplot)
numeric.list <- sapply(data1, is.numeric)
numeric.list
sum(numeric.list)
numeric.df <- data1[, numeric.list]
cor.mat <- cor(numeric.df)
corrplot(cor.mat, type = "lower", method = "number")

Correlation Plot

library(plotrix)
pie3D(prop.table(table(data1$Occupancy)),
      main = "Occupied Vs Unoccupied",
      labels = c("Unoccupied", "Occupied"),
      col = c("Blue", "Dark Blue"))

Pie Chart for Occupancy

From the correlation plot, we observe that temperature and light are positively correlated while temperature and humidity are negatively correlated; humidity and humidity ratio are highly correlated, which is obvious, as the humidity ratio is derived from humidity and temperature. Hence, while building the various models we will consider the variable Humidity and omit HumidityRatio.

Now we are all set for model building. As our response variable Occupancy is binary, we need classification types of models. Here we implement CART, RF, and ANN.

Model Building- Classification and Regression Trees (CART): Now we define training and test datasets in the required format for model building.

p_train = data1
p_test = read.csv("datatest2.txt", header = T, sep = ",", row.names = 1)
p_test$date1 = as_date(p_test$date)
p_test$date = as.POSIXct(p_test$date, format = "%Y-%m-%d %H:%M:%S")
p_test$time = format(p_test$date, format = "%H:%M:%S")
p_test = p_test[,-1]

Note that the R implementation of the CART algorithm is called RPART (Recursive Partitioning And Regression Trees) available in a package of the same name. The algorithm of decision tree models works by repeatedly partitioning/splitting the data into multiple sub-spaces so that the outcomes in each final sub-space are as homogeneous as possible.

The model uses different splitting rules that can be used to effectively predict the type of outcome. These rules are produced by repeatedly splitting the predictor variables, starting with the variable that has the highest association with the response variable. The process continues until some predetermined stopping criteria are met. We define these stopping criteria using control parameters, such as the minimum number of observations in a node of the tree before attempting a split, or the factor by which a split must decrease the overall lack of fit before being attempted.

library(rpart)
library(rpart.plot)
library(rattle)

# Setting the control parameters
r.ctrl = rpart.control(minsplit = 100, minbucket = 10, cp = 0, xval = 10)

# Building the CART model
set.seed(123)
m1 <- rpart(formula = Occupancy ~ Temperature + Humidity + Light + CO2 + date1 + time,
            data = p_train, method = "class", control = r.ctrl)

# Displaying the decision tree
fancyRpartPlot(m1)

Decision Tree

Now we predict the occupancy variable for the test dataset using predict function.

p_test$predict.class1 <- predict(ptree, p_test[,-6], type = "class")
p_test$predict.score1 <- predict(ptree, p_test[,-6], type = "prob")
View(p_test)

Test Data Preview with Actual and Predicted Occupancy

Now evaluate the performance of our model by plotting the ROC curve and building a confusion matrix.

library(ROCR)
pred <- prediction(p_test$predict.score1[,2], p_test$Occupancy)
perf <- performance(pred, "tpr", "fpr")
plot(perf, main = "ROC curve")
auc1 = as.numeric(performance(pred, "auc")@y.values)

library(caret)
m1 = confusionMatrix(table(p_test$predict.class1, p_test$Occupancy),
                     positive = "1", mode = "everything")

Here we get an AUC of 82% and a model accuracy of 98.1%.

Next on the list is Random Forest. In the random forest approach, a large number of decision trees are created. Every observation is fed into every decision tree, and the most common outcome for each observation is used as the final output. A new observation is fed into all the trees, and a majority vote across them gives the classification. The R package “randomForest” is used to create random forests.

library(randomForest)
RFmodel = randomForest(Occupancy ~ Temperature + Humidity + Light + CO2 + date1 + time,
                       data = p_train1, mtry = 5, nodesize = 10,
                       ntree = 501, importance = TRUE)
print(RFmodel)
plot(RFmodel)

Error Vs No of trees plot

Here we observe that the error remains constant after about 150 trees, so we can tune the model with ntree = 150.

Also, we can have a look at important variables in the model which are contributing to occupancy detection.

importance(RFmodel)

Variable Importance table

We observe from the above output that light is the most important predictor, followed by CO2, temperature, time, humidity, and date, when we consider accuracy. Now, let’s tune the RF model with new control parameters.

set.seed(123)
tRF = tuneRF(x = p_train1[,-c(5,6)], y = as.factor(p_train1$Occupancy),
             mtryStart = 5, ntreeTry = 150, stepFactor = 1.15,
             improve = 0.0001, trace = TRUE, plot = TRUE,
             doBest = TRUE, nodesize = 10, importance = TRUE)

With this tuned model we again check variable importance, as follows. Please note that here variable importance is measured by the mean decrease in the Gini index, whereas earlier it was the mean decrease in model accuracy.

varImpPlot(tRF,type=2, main = "Important predictors in the analysis")

Variable Importance Plot

Next, we predict occupancy for the test dataset using the predict function and the tuned RF model.

p_test1$predict.class = predict(tRF, p_test1, type = "class")
p_test1$predict.score = predict(tRF, p_test1, type = "prob")

We check the performance of this model using the ROC curve and confusion matrix parameters. The AUC turns out to be 99.13% with a very steep curve, as shown below, and the accuracy of prediction is 98.12%.

ROC Curve

It seems like this model is doing better than the CART model. Time to check ANN!

Now we build an artificial neural network for the classification. A neural network (or artificial neural network) has the ability to learn by example. ANN is an information processing model inspired by the biological neuron system. It is composed of a large number of highly interconnected processing elements, known as neurons, working together to solve problems.

Here I am using the ‘neuralnet’ package in R. When we try to build the ANN for our case, we observe that the model does not accept date-class variables, so we will omit them. Another way could be to create a separate factor variable from the time-of-day variable, with levels like Morning, Afternoon, Evening, and Night, and then create dummy variables for that factor.

Before actually building the ANN, we need to scale our data, as the variables have values in different ranges. ANN being a weight-based algorithm, it may give biased results if the data is not scaled.

p_train2 = p_train2[,-c(6,7,8)]
p_train_sc = scale(p_train2)
p_train_sc = as.data.frame(p_train_sc)
p_train_sc$Occupancy = data1$Occupancy

p_test3 = p_test2[,-c(6,7,8)]
p_test_sc = scale(p_test3)
p_test_sc = as.data.frame(p_test_sc)
p_test_sc$Occupancy = p_test2$Occupancy

After scaling all the variables (except Occupancy) our data will look like this.

Data Preview After Scaling

Now we are all set for model building.

library(neuralnet)
nn1 = neuralnet(formula = Occupancy ~ Temperature + Humidity + Light + CO2 + HumidityRatio,
                data = p_train_sc, hidden = 3, err.fct = "sse",
                linear.output = FALSE, lifesign = "full", lifesign.step = 10,
                threshold = 0.03, stepmax = 10000)
plot(nn1)

We calculate results for the test dataset using Compute function.

compute.output = compute(nn1, p_test_sc[,-6])
p_test_sc$Predict.score <- compute.output$net.result

Models Performance Comparison: We again evaluate this model using the confusion matrix and ROC curve. I have tabulated results obtained from all three models as follows:

Performance measure    CART on test dataset    RF on test dataset    ANN on test dataset
AUC                    0.8253                  0.9913057             0.996836
Accuracy               0.981                   0.9812                0.9942
Kappa                  0.9429                  0.9437                0.9825
Sensitivity            0.9514                  0.9526                0.9951
Specificity            0.9889                  0.9889                0.9939
Precision              0.9586                  0.9587                0.9775
Recall                 0.9514                  0.9526                0.9951
F1                     0.9550                  0.9556                0.9862
Balanced Accuracy      0.9702                  0.9708                0.9945

From the performance measures comparison, we observe that ANN outperforms the other models, followed by RF and CART. With machine learning algorithms we can efficiently replace occupancy sensor functionality with good accuracy.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.

