This dataset has 721 rows, one for each of the 721 unique Pokemon; for further details, visit this link.
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report
from sklearn.ensemble import RandomForestClassifier

Reading the dataset
pokemon_data = pd.read_csv('Pokemon Data.csv')

Now, let's see what our dataset has in it!
poke = pd.DataFrame(pokemon_data)
poke.head()

Output:
Checking out the null values
poke.isnull().sum()

Output:
Number 0
Name 0
Type_1 0
Type_2 371
Total 0
HP 0
Attack 0
Defense 0
Sp_Atk 0
Sp_Def 0
Speed 0
Generation 0
isLegendary 0
Color 0
hasGender 0
Pr_Male 77
Egg_Group_1 0
Egg_Group_2 530
hasMegaEvolution 0
Height_m 0
Weight_kg 0
Catch_Rate 0
Body_Style 0
dtype: int64

We have seen which columns contain null values; let's visualize them using a heatmap.
plt.figure(figsize=(10,7))
sns.heatmap(poke.isnull(), cbar=False)

Output:
Here it's visible that Type_2, Pr_Male, and Egg_Group_2 have a relatively large number of null values.
We have visualized the null values using the heatmap, but that kind of visualization doesn't give us the count of null values, so we also use a distribution plot.
plt.figure(figsize=(20,20))
sns.displot(
    data=poke.isna().melt(value_name="missing"),
    y="variable",
    hue="missing",
    multiple="fill",
    aspect=2
)

Output:
Let's check the dimensions of our dataset.
poke.shape

Output:
(721, 23)

From the shape, it is clear that the dataset is small, so we can remove the columns with many null values, as filling them could make the dataset a little biased.
We have seen that Type_2, Egg_Group_2, and Pr_Male have null values.
poke['Pr_Male'].value_counts()

Output:
0.500 458
0.875 101
0.000 23
0.250 22
0.750 19
1.000 19
0.125 2
Name: Pr_Male, dtype: int64

Since the Type_2 and Egg_Group_2 columns have so many null values, we will remove those columns. One could impute them with other methods instead, but for simplicity we won't do that here. We only impute the Pr_Male column, since it has just 77 missing values.
poke['Pr_Male'].fillna(0.500, inplace=True)
poke['Pr_Male'].isnull().sum()

Output:
0  # as we can see, there are no null values now

Dropping unnecessary columns
new_poke = poke.drop(['Type_2', 'Egg_Group_2'], axis=1)
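As an aside, not part of the original walkthrough: if you preferred to keep those two columns, a minimal imputation sketch could look like the following, treating a missing secondary type as its own category and filling the missing egg group with the most frequent value.

imputed = poke.copy()
imputed['Type_2'] = imputed['Type_2'].fillna('None')  # treat "no secondary type" as a category of its own
imputed['Egg_Group_2'] = imputed['Egg_Group_2'].fillna(imputed['Egg_Group_2'].mode()[0])  # most frequent egg group
imputed[['Type_2', 'Egg_Group_2']].isnull().sum()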
Now let's understand the type of each column and its values.

new_poke.describe()

Output:
plt.figure(figsize=(10,10))
sns.heatmap(new_poke.corr(), annot=True, cmap='viridis', linewidths=.5)

Output:
The above is a correlation heatmap that shows how strongly each feature is correlated with the others; when two features are highly correlated, one of them adds little extra information to the model when predicting.
Usually, you decide for yourself what counts as a high correlation value and remove one feature from each such pair.
From the above table, it is clear that different features have very different ranges of values, which can create problems for some models, so we usually tone them down using the StandardScaler class; a quick sketch is shown below.
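The scaling step itself doesn't appear later in this walkthrough, so here is only a hedged sketch of what it could look like, applied to the numeric stat columns; note that the tree-based model used below is largely insensitive to feature scaling, so this step is optional here.

from sklearn.preprocessing import StandardScaler

stat_cols = ['HP', 'Attack', 'Defense', 'Sp_Atk', 'Sp_Def', 'Speed']  # numeric stat columns
scaler = StandardScaler()
scaled = new_poke.copy()
scaled[stat_cols] = scaler.fit_transform(scaled[stat_cols])  # each column now has mean ~0 and std ~1
scaled[stat_cols].describe()

Now, the value counts of all the primary types: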
new_poke['Type_1'].value_counts()

Output:
Water 105
Normal 93
Grass 66
Bug 63
Psychic 47
Fire 47
Rock 41
Electric 36
Ground 30
Poison 28
Dark 28
Fighting 25
Dragon 24
Ice 23
Ghost 23
Steel 22
Fairy 17
Flying 3
Name: Type_1, dtype: int64

Value counts of all the generations
new_poke['Generation'].value_counts()

Output:
5 156
1 151
3 135
4 107
2 100
6 72
Name: Generation, dtype: int64

Visualizing the categorical values

Here, for visualizing the categorical data, I'm using seaborn's catplot() function. One can use a line plot, scatter plot, or box plot separately, but catplot brings a unified interface to all of these plots, hence I preferred catplot over calling each plot separately.
Here, for counting the Pokemon in each of the six generations, I'm using the count kind in catplot to get the number of entries for each generation.
sns.catplot(x="Generation", kind="count", palette="ch:.25", data=poke)

Output:
Inference: In the above graph, the 5th generation has the most Pokemon.
Here we are using the default kind of catplot, i.e. a scatter plot, to plot Generation vs Defense, where we can figure out the relationship between each generation and the defence power of its Pokemon.
sns.catplot(x="Generation", y="Defense", data=poke)

Output:
Inference: Here, we can see that only two Pokemon in generation 2 have the highest defence capability. Still, we can't conclude that generation 2 has the strongest defence overall, as those points are outliers. From the graph, it is evident that generations 6 and 4 have the highest defence capabilities.
Here we are using a box plot because it helps us understand the variation in a large dataset better; it also shows the outliers more clearly.
sns.catplot(x="Generation", y="Attack", kind="boxen", data=poke)

Output:
Here in the above boxplot, we can see that there are a lot of outliers in generation 4 and generation 1 when it comes to attacking capabilities.
Also, generation 4 has the highest median attack value of all the generations.
Now we are using the bar kind via catplot, which shows the attacking capabilities of the different generations, with bars split by the hasGender flag (whether a Pokemon has a gender at all). For example, in generation 1 the gendered Pokemon show higher attack than the genderless ones, yet that generation also has the lowest attacking power of all the generations.
sns.catplot(x="Generation", y="Attack", kind='bar', hue='hasGender', data=poke)

Output:
From the above graph, we can conclude that:

In generation 1, the Pokemon that have a gender show more attacking power than the genderless ones, which contradicts the pattern in the other generations.

Generation 6 has the highest attacking power, while generation 1 has the lowest.

Let's also look at the value counts of the Color and Egg_Group_1 columns.
new_poke['Color'].value_counts()

Output:
Blue 134
Brown 110
Green 79
Red 75
Grey 69
Purple 65
Yellow 64
White 52
Pink 41
Black 32
Name: Color, dtype: int64

new_poke['Egg_Group_1'].value_counts()

Output:
Field 169
Monster 74
Water_1 74
Undiscovered 73
Bug 66
Mineral 46
Flying 44
Amorphous 41
Human-Like 37
Fairy 30
Grass 27
Water_2 15
Water_3 14
Dragon 10
Ditto 1
Name: Egg_Group_1, dtype: int64

Let's also consider the number of values in our target column.
new_poke['isLegendary'].value_counts()

Output:
False 675
True 46
Name: isLegendary, dtype: int64

Feature Engineering

This may seem uncomfortable to some, but you will get why I did it like that.
poke_type1 = new_poke.replace(['Water', 'Ice'], 'Water')
poke_type1 = poke_type1.replace(['Grass', 'Bug'], 'Grass')
poke_type1 = poke_type1.replace(['Ground', 'Rock'], 'Rock')
poke_type1 = poke_type1.replace(['Psychic', 'Dark', 'Ghost', 'Fairy'], 'Dark')
poke_type1 = poke_type1.replace(['Electric', 'Steel'], 'Electric')
poke_type1['Type_1'].value_counts()

Output:
Grass 129
Water 128
Dark 115
Normal 93
Rock 71
Electric 58
Fire 47
Poison 28
Fighting 25
Dragon 24
Flying 3
Name: Type_1, dtype: int64

ref1 = dict(poke_type1['Body_Style'].value_counts())
poke_type1['Body_Style_new'] = poke_type1['Body_Style'].map(ref1)

You may be wondering what I did: I took the value counts of each body type and replaced the body type with those counts; see below.
poke_type1['Body_Style_new'].head()

Output:
0 135
1 135
2 135
3 158
4 158
Name: Body_Style_new, dtype: int64

Let's look at the original Body_Style column.
poke_type1['Body_Style'].head()

Output:
0 quadruped
1 quadruped
2 quadruped
3 bipedal_tailed
4 bipedal_tailed
Name: Body_Style, dtype: object

Encoding data – features like Type_1 and Color

types_poke = pd.get_dummies(poke_type1['Type_1'])
color_poke = pd.get_dummies(poke_type1['Color'])
X = pd.concat([poke_type1, types_poke], axis=1)
X = pd.concat([X, color_poke], axis=1)
X.head()

Output:
Now that we have built some features and encoded others, what's left is to remove the redundant features.
X.columns

Output:
Index(['Number', 'Name', 'Type_1', 'Total', 'HP', 'Attack', 'Defense', 'Sp_Atk', 'Sp_Def', 'Speed', 'Generation', 'isLegendary', 'Color', 'hasGender', 'Pr_Male', 'Egg_Group_1', 'hasMegaEvolution', 'Height_m', 'Weight_kg', 'Catch_Rate', 'Body_Style', 'Body_Style_new', 'Dark', 'Dragon', 'Electric', 'Fighting', 'Fire', 'Flying', 'Grass', 'Normal', 'Poison', 'Rock', 'Water', 'Black', 'Blue', 'Brown', 'Green', 'Grey', 'Pink', 'Purple', 'Red', 'White', 'Yellow'], dtype='object')

The identifier columns (Number, Name) and the raw categorical columns we have already encoded (Type_1, Color, Egg_Group_1) are now redundant. The drop statement itself did not survive in this copy of the article, but judging from the 38 columns that remain, it was equivalent to:

X_ = X.drop(['Number', 'Name', 'Type_1', 'Color', 'Egg_Group_1'], axis=1)

Now, let's see the shape of our updated feature columns.

X_.shape

Output:

(721, 38)
Lastly, we define our target variable and set it into a variable called y
y = X_['isLegendary']
X_final = X_.drop(['isLegendary', 'Body_Style'], axis=1)
X_final.columns

Output:
Index(['Total', 'HP', 'Attack', 'Defense', 'Sp_Atk', 'Sp_Def', 'Speed', 'Generation', 'hasGender', 'Pr_Male', 'hasMegaEvolution', 'Height_m', 'Weight_kg', 'Catch_Rate', 'Body_Style_new', 'Dark', 'Dragon', 'Electric', 'Fighting', 'Fire', 'Flying', 'Grass', 'Normal', 'Poison', 'Rock', 'Water', 'Black', 'Blue', 'Brown', 'Green', 'Grey', 'Pink', 'Purple', 'Red', 'White', 'Yellow'], dtype='object')

X_final.head()

Output:
Creating and training our model

Splitting the dataset into training and testing datasets
Xtrain, Xtest, ytrain, ytest = train_test_split(X_final, y, test_size=0.2)
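Since only 46 of the 721 Pokemon are legendary, a stratified split keeps that ratio the same in the training and test sets. This variant is my addition rather than part of the original article:

Xtrain, Xtest, ytrain, ytest = train_test_split(
    X_final, y, test_size=0.2, stratify=y, random_state=42  # stratify on the imbalanced target
)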
Using a random forest classifier for training our model

random_model = RandomForestClassifier(n_estimators=500, random_state=42)

Fitting the model
model_final = random_model.fit(Xtrain, ytrain)
y_pred = model_final.predict(Xtest)

Checking the accuracy on the training set
random_model_accuracy = round(model_final.score(Xtrain, ytrain)*100, 2)
print(round(random_model_accuracy, 2), '%')

Output:
100.0 %

Getting the accuracy of the model on the test set
random_model_accuracy1 = round(random_model.score(Xtest, ytest)*100, 2)
print(round(random_model_accuracy1, 2), '%')

Output:
99.31 %
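With only 46 legendary Pokemon in the data, overall accuracy can hide poor recall on the rare class, so it is worth printing per-class metrics as well. classification_report was already imported at the top; a minimal check (my addition) would be:

print(classification_report(ytest, y_pred))  # per-class precision, recall and F1 on the test set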
Saving the model to disk

import pickle
filename = 'pokemon_model.pickle'
pickle.dump(model_final, open(filename, 'wb'))

Load the model from the disk
filename = 'pokemon_model.pickle'
loaded_model = pickle.load(open(filename, 'rb'))
result = loaded_model.score(Xtest, ytest)
result*100

Output:
99.3103448275862

Conclusion

Here I conclude the legendary Pokemon prediction with 99% accuracy. This might be an overfit model; having said that, the dataset was not so complex that it would necessarily lead to such a situation. All suggestions and improvements are always welcome.
Here’s the repo link to this article.
Here you can access my other articles, which are published on Analytics Vidhya as a part of the Blogathon (link)
If you have any queries, you can connect with me on LinkedIn; refer to this link.
About me

Greetings to everyone. I'm currently working at TCS, and previously I worked as a Data Science Associate Analyst at Zorba Consulting India. Along with full-time work, I've got an immense interest in the same field, i.e. Data Science, along with its other subsets of Artificial Intelligence such as Computer Vision, Machine Learning, and Deep Learning; feel free to collaborate with me on any project in the domains mentioned above (LinkedIn).
The media shown in this article is not owned by Analytics Vidhya and is used at the Author's discretion.
Bank Customer Churn Prediction Using Machine Learning
This article was published as a part of the Data Science Blogathon.
Introduction

Customer churn prediction means knowing which customers are likely to leave or unsubscribe from your service. For many companies, this is an important prediction. This is because acquiring new customers often costs more than retaining existing ones. Once you’ve identified customers at risk of churn, you need to know exactly what marketing efforts you should make with each customer to maximize their likelihood of staying.
Customers have different behaviors and preferences, and reasons for cancelling their subscriptions. Therefore, it is important to actively communicate with each of them to keep them on your customer list. You need to know which marketing activities are most effective for individual customers and when they are most effective.
Impact of customer churn on businesses
A company with a high churn rate loses many subscribers, resulting in lower growth rates and a greater impact on sales and profits. Companies with low churn rates can retain customers.
Why is Analyzing Customer Churn Prediction Important?

Customer churn is important because it costs more to acquire new customers than to sell to existing customers. This is the metric that determines the success or failure of a business. Successful customer retention increases the customer’s average lifetime value, making all future sales more valuable and improving unit margins.
The way to maximize a company’s resources is often by increasing revenue from recurring subscriptions and trusted repeat business rather than investing in acquiring new customers. Retaining loyal customers for years makes it much easier to grow and weather financial hardship than spending money to acquire new customers to replace those who have left.
Benefits of Analyzing Customer Churn Prediction

Increase profits
Improve the customer experience
One of the worst ways to lose a customer is through an easy-to-avoid mistake like shipping the wrong item. By understanding why customers churn, you can better understand their priorities, identify your weaknesses, and improve the overall customer experience.
Customer experience, also known as “CX”, is the customer’s perception or opinion of their interactions with your business. The perception of your brand is shaped throughout the buyer journey, from the first interaction to after-sales support, and has a lasting impact on your business, including your bottom line.
Optimize your products and services
Customer retention
The opposite of customer churn is customer retention. A company can retain customers and continue to generate revenue from them. High customer loyalty enables companies to increase the profitability of their existing customers and maximize their lifetime value (LTV).
If you sell a service for $1,000 per month and keep the customer for another 3 months, you will earn an additional $3,000 from each customer without spending on customer acquisition. The scope and amount vary depending on the business, but the concept of “repeat business = profitable business” is universal.
How does Customer Churn Prediction Work?

We first do some exploratory data analysis on the dataset, then fit the dataset with several machine learning classification algorithms and choose the best algorithm for the bank customer churn dataset.
Algorithms for Churn Prediction Models

XGBoost, short for Extreme Gradient Boosting, is a scalable machine learning library with Distributed Gradient Boosted Decision Trees (GBDT). It provides Parallel Tree Boosting and is the leading machine learning library for regression, classification and ranking problems. To understand XGBoost, it’s important first to understand the machine learning concepts and algorithms that XGBoost is built on: supervised machine learning, decision trees, ensemble learning, and gradient boosting. Supervised machine learning uses an algorithm to train a model to find patterns in a dataset containing labels and features and then uses the trained model to predict the labels of the features in a new dataset.
Decision trees are models that predict labels by evaluating a tree of if-then-else true/false functional questions and estimating the minimum number of questions needed to evaluate the likelihood of a correct decision. Decision trees can be used for classification to predict categories and regression to predict continuous numbers. The following simple example uses a decision tree to estimate a house’s price (tag) based on the size and number of bedrooms (features).
Gradient Boosted Decision Trees (GBDT) is a random forest-like decision tree ensemble learning algorithm for classification and regression. Ensemble learning algorithms combine multiple machine learning algorithms to get a better model. Both Random Forest and GBDT create models that consist of multiple decision trees. The difference is in how the trees are constructed and combined.
Source: researchgate.net
Decision Tree
Decision trees are a nonparametric supervised learning method used for classification and regression. The goal is to build a model that predicts the value of a target variable by learning simple decision rules derived from the properties of the data. A tree can be viewed as a piecewise constant approximation.
For example, in the following example, a decision tree learns from data to approximate a sine wave using a series of if-then-else decision rules. The deeper the tree, the more complex the decision rules and the better the model.
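As a hedged illustration of that sine-wave example (a sketch along the lines of the scikit-learn documentation example, not code from this article), deeper trees produce finer piecewise-constant steps:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor

# A decision tree regressor approximating a noisy sine wave with
# piecewise-constant predictions; deeper trees give finer steps.
rng = np.random.RandomState(0)
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel()
y[::5] += 0.5 * (0.5 - rng.rand(16))  # add noise to every 5th point

shallow = DecisionTreeRegressor(max_depth=2).fit(X, y)
deep = DecisionTreeRegressor(max_depth=5).fit(X, y)

X_test = np.arange(0.0, 5.0, 0.01).reshape(-1, 1)
plt.scatter(X, y, s=15, label="data")
plt.plot(X_test, shallow.predict(X_test), label="max_depth=2")
plt.plot(X_test, deep.predict(X_test), label="max_depth=5")
plt.legend()
plt.show()

Some advantages of decision trees: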
Easy to understand and easy to interpret. You can visualize trees.
Little or no data preparation is required. Other techniques often require normalizing the data, creating dummy variables, and removing empty values. However, please note that this module does not support missing values.
The cost of using a tree (predicting data) is the logarithm of the number of data points used to train the tree.
It can handle both numeric and categorical data. However, scikit-learn’s implementation does not currently support categorical variables. Other techniques tend to specialize in analyzing datasets containing only one variable type. Decision trees can also handle multi-output problems.
Adopted the white box model. If a given situation is observable in the model, the description of that state can be easily explained by Boolean logic. In contrast, results from black-box models (such as artificial neural networks) can be more difficult to interpret.
Possibility to validate the model with statistical tests. This can explain the reliability of the model.
It works well even when the assumptions are somewhat violated by the true model from which the data were generated.
Decision tree learners can create overly complex trees that fail to generalize the data well. This is called overfitting. Mechanisms such as pruning, setting a minimum number of samples required at a leaf node, or setting a maximum tree depth are required to avoid this problem.
Decision trees can be unstable. This is because small deviations in the data can produce completely different trees. This problem is mitigated by using decision trees within the ensemble. The figure above shows that the decision tree prediction is neither smooth nor continuous but a piecewise constant approximation. Therefore, they are bad at extrapolation.
The problem of learning optimal decision trees is known to be NP-complete under several aspects of optimality, even for simple concepts. Therefore, practical decision tree learning algorithms are based on heuristics, such as the greedy algorithm, where a locally optimal decision is made at each node. Such algorithms cannot guarantee a globally optimal decision tree. This can be mitigated by training multiple trees in an ensemble learner, where the features and samples are randomly sampled with replacement.
Some concepts, such as XOR, parity, and multiplexer problems, are difficult to master because they cannot be easily represented in decision trees.
Decision tree learners create skewed trees when some classes are dominant. Therefore, it is recommended to balance the data set before fitting the decision tree.
Random Forest
Random forest is a machine learning technique to solve regression and classification problems. It uses ensemble learning, a technique that combines many classifiers to provide solutions to complex problems.
A random forest algorithm consists of many decision trees. The “forest” created by the random forest algorithm is trained by bagging or bootstrap aggregation. Bagging is an ensemble meta-algorithm that improves the accuracy of machine learning algorithms. A (random forest) algorithm determines an outcome based on the predictions of a decision tree. Predict by averaging outputs from different trees. Increasing the number of trees improves the accuracy of the results.
Random forest removes some of the limitations of decision tree algorithms: it reduces overfitting to the dataset, increases accuracy, and generates predictions without requiring much configuration.
Source: trivusi.web.id

Support Vector Machines
Source: researchgate.net
Effective in high-dimensional space.
It works even if the number of dimensions exceeds the number of samples.
It is also memory efficient because it uses a subset of the training points in the decision function (called support vectors).
Versatility: You can specify different kernel functions for the decision function. A generic kernel is provided, but it is possible to specify a custom kernel.
When the number of features is much larger than the number of samples, avoiding overfitting through the choice of kernel function and regularization term becomes important.
SVM does not provide direct probability estimates. These are computed using an expensive 5-fold cross-validation.
Coding to Predict Bank Customer Churn

Now we have to import some libraries:
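The import cell did not survive in this copy of the article; judging from the code that follows, it presumably looked something like this:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns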
After that, we have to read the dataset using pandas
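The read call is also missing here; with a hypothetical file name (replace it with wherever your copy of the bank churn CSV lives), it would be along these lines:

df = pd.read_csv('Bank Customer Churn Prediction.csv')  # hypothetical file name
df.head()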
Exploratory Data Analysis

The first thing we have to do in exploratory data analysis is check whether there are null values in the dataset.
df.isnull().head()
df.isnull().sum()

# Checking data types
df.dtypes

# Counting 1 and 0 values in the churn column
color_wheel = {1: "#0392cf", 2: "#7bc043"}
colors = df["churn"].map(lambda x: color_wheel.get(x + 1))
print(df.churn.value_counts())
p = df.churn.value_counts().plot(kind="bar")

# Change values in the country column
df['country'] = df['country'].replace(['Germany'],'0')
df['country'] = df['country'].replace(['France'],'1')
df['country'] = df['country'].replace(['Spain'],'2')

# Change values in the gender column
df['gender'] = df['gender'].replace(['Female'],'0')
df['gender'] = df['gender'].replace(['Male'],'1')
df.head()

# Convert object data type columns to integer
df['country'] = pd.to_numeric(df['country'])
df['gender'] = pd.to_numeric(df['gender'])
df.dtypes

# Remove the customer_id column
df2 = df.drop('customer_id', axis=1)
df2.head()

sns.heatmap(df2.corr(), fmt='.2g')

Build Machine Learning Model

X = df2.drop('churn', axis=1)
y = df2['churn']

# test size 20% and train size 80%
from sklearn.model_selection import train_test_split, cross_val_score, cross_val_predict
from sklearn.metrics import accuracy_score
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)

Decision Tree

from sklearn.tree import DecisionTreeClassifier
dtree = DecisionTreeClassifier()
dtree.fit(X_train, y_train)
y_pred = dtree.predict(X_test)
print("Accuracy Score :", accuracy_score(y_test, y_pred)*100, "%")

Random Forest

from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier()
rfc.fit(X_train, y_train)
y_pred = rfc.predict(X_test)
print("Accuracy Score :", accuracy_score(y_test, y_pred)*100, "%")

Support Vector Machine

from sklearn import svm
svm = svm.SVC()
svm.fit(X_train, y_train)
y_pred = svm.predict(X_test)
print("Accuracy Score :", accuracy_score(y_test, y_pred)*100, "%")

XGBoost

from xgboost import XGBClassifier
xgb_model = XGBClassifier()
xgb_model.fit(X_train, y_train)
y_pred = xgb_model.predict(X_test)
print("Accuracy Score :", accuracy_score(y_test, y_pred)*100, "%")

Visualize Random Forest and XGBoost, because they have the best accuracy

# importing classification report and confusion matrix from sklearn
from sklearn.metrics import classification_report, confusion_matrix

Random Forest

y_pred = rfc.predict(X_test)
print("Classification report - \n", classification_report(y_test, y_pred))
cm = confusion_matrix(y_test, y_pred)
plt.figure(figsize=(5,5))
sns.heatmap(data=cm, linewidths=.5, annot=True, square=True, cmap='Blues')
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
all_sample_title = 'Accuracy Score: {0}'.format(rfc.score(X_test, y_test))
plt.title(all_sample_title, size=15)

from sklearn.metrics import roc_curve, roc_auc_score
y_pred_proba = rfc.predict_proba(X_test)[:][:,1]
df_actual_predicted = pd.concat([pd.DataFrame(np.array(y_test), columns=['y_actual']), pd.DataFrame(y_pred_proba, columns=['y_pred_proba'])], axis=1)
df_actual_predicted.index = y_test.index
fpr, tpr, tr = roc_curve(df_actual_predicted['y_actual'], df_actual_predicted['y_pred_proba'])
auc = roc_auc_score(df_actual_predicted['y_actual'], df_actual_predicted['y_pred_proba'])
plt.plot(fpr, tpr, label='AUC = %0.4f' % auc)
plt.plot(fpr, fpr, linestyle='--', color='k')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve', size=15)
plt.legend()

XGBoost

y_pred = xgb_model.predict(X_test)
print("Classification report - \n", classification_report(y_test, y_pred))
cm = confusion_matrix(y_test, y_pred)
plt.figure(figsize=(5,5))
sns.heatmap(data=cm, linewidths=.5, annot=True, square=True, cmap='Blues')
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
all_sample_title = 'Accuracy Score: {0}'.format(xgb_model.score(X_test, y_test))
plt.title(all_sample_title, size=15)

from sklearn.metrics import roc_curve, roc_auc_score
y_pred_proba = xgb_model.predict_proba(X_test)[:][:,1]
df_actual_predicted = pd.concat([pd.DataFrame(np.array(y_test), columns=['y_actual']), pd.DataFrame(y_pred_proba, columns=['y_pred_proba'])], axis=1)
df_actual_predicted.index = y_test.index
fpr, tpr, tr = roc_curve(df_actual_predicted['y_actual'], df_actual_predicted['y_pred_proba'])
auc = roc_auc_score(df_actual_predicted['y_actual'], df_actual_predicted['y_pred_proba'])
plt.plot(fpr, tpr, label='AUC = %0.4f' % auc)
plt.plot(fpr, fpr, linestyle='--', color='k')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve', size=15)
plt.legend()

Conclusion

The churn variable has imbalanced data, so possible ways to handle the imbalance are listed below (a short sketch of the first option follows the list):
Resample Training set
Use K-fold Cross-Validation in the Right Way
Ensemble Different Resampled Datasets
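As a hedged sketch of the first option (my illustration, not code from the article), the minority churn class can be upsampled in the training data only, so the test set still reflects the real class balance:

from sklearn.utils import resample

train = pd.concat([X_train, y_train], axis=1)
majority = train[train['churn'] == 0]
minority = train[train['churn'] == 1]
minority_upsampled = resample(minority, replace=True, n_samples=len(majority), random_state=7)
balanced = pd.concat([majority, minority_upsampled])
X_train_bal = balanced.drop('churn', axis=1)  # rebalanced features
y_train_bal = balanced['churn']               # rebalanced target
print(y_train_bal.value_counts())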
We have to change the values in the Country and Gender columns so the machine learning model can read and predict on the dataset; after changing the values, we have to change the data types of the Country and Gender columns from string to integer, because the XGBoost model cannot read string data types even though the values in those columns are numbers.
Lastly, Random Forest and XGBoost have the best accuracy scores (86.85% and 86.45%). They also have strong AUC scores of 0.8731 and 0.8600, respectively.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
Related
What Is DRAM (Dynamic Random Access Memory)? Explained!
While you may be familiar with RAM, the vital PC component that helps your computer run faster and not crash after opening more than 4 Chrome tabs, you must be wondering what is DRAM. Is it vastly different from RAM? The world of computers is full of jargon, and keeping up with the latest technologies (and their naming schemes) can be overwhelming. Fret not, for we are here to help! In this guide, let’s start by understanding what DRAM means and then look at the various types of DRAM.
What is DRAM?

DRAM is the most common type of RAM we use today. The RAM DIMMs (dual in-line memory modules) or sticks that we install in our computers are, in fact, DRAM sticks. But what exactly makes DRAM dynamic? Let’s find out!
How Does DRAM Work?

By design, DRAM is volatile memory, which means it can only store data for a short period. Each DRAM cell is constructed using a transistor and a capacitor, with data stored in the latter. Transistors tend to leak small amounts of electricity over time, due to which the capacitors get discharged, losing the information stored within them in the process. Hence, the DRAM must be refreshed with a new electric charge every few milliseconds to help it hold onto stored data. When DRAM loses access to power (such as when you turn off your PC), all data stored within it is lost too. The need for constant refreshing of data is what makes DRAM dynamic. Static memory, like SRAM (Static Random Access Memory), does not need to be refreshed.
DRAM vs SRAM

SRAM uses a six-transistor memory cell to store data, as opposed to the transistor and capacitor pair approach taken by DRAM. SRAM is an on-chip memory typically used as cache memory for CPUs. It’s considerably faster and more power efficient than most other types of RAM, including DRAM. However, it is also significantly more expensive to produce and isn’t user-replaceable/ upgradeable. DRAM, on the other hand, is often user replaceable. Here are the key differences between DRAM and SRAM:
DRAM: uses capacitors to store data. SRAM: uses transistors to store data.
DRAM: capacitors need constant refreshing to retain data. SRAM: doesn't need refreshing as it doesn't use capacitors to store data.
DRAM: has slower speeds than SRAM. SRAM: significantly faster than DRAM.
DRAM: cheaper to manufacture. SRAM: very expensive.
DRAM: devices are high-density. SRAM: low-density.
DRAM: used as main memory. SRAM: used as cache memory for CPUs.
DRAM: relatively lower heat output and power consumption than SRAM. SRAM: high heat output and power consumption.
Types of DRAM

Now that you know how dynamic RAM works, let’s look at the five different types of DRAM:
ADRAM

Traditional DRAM modules operated asynchronously or independently. These were known as ADRAM (Asynchronous DRAM). Here, the memory would receive a request from the CPU to access certain information, then process that request and provide users access. Thus, the memory would only be able to handle requests one at a time, leading to delays.
SDRAM

SDRAM, or Synchronous DRAM, works by synchronizing its memory access with your CPU’s clock speeds. Here, your CPU can communicate with the RAM, letting it know which data it would require and when, so the RAM can have it ready beforehand. The RAM and the CPU thus work in tandem, resulting in faster data transfer rates.
DDR SDRAM

As you may have guessed from the name, Double-Data-Rate SDRAM is a faster version of SDRAM with almost twice the bandwidth. It performs functions on both edges of the CPU clock signal (once when it rises and once when it falls), while standard SDRAM only does it at the rising edge of the CPU clock signal.
DDR memory had a 2-bit prefetch buffer (a memory cache that stores data before it’s needed), which resulted in significantly faster data transfer rates. As the years progressed, we got newer generations of DDR SDRAM.
DDR2 SDRAM

DDR2 memory was introduced in 2003 and was twice as fast as DDR, thanks to its improved bus signal. While it has the same internal clock speed as DDR memory, it has a 4-bit prefetch and can reach data transfer rates of 533 to 800MT/s. Also, DDR2 RAM can be installed in pairs for dual-channel configuration (that we gamers all know and love) for increased memory throughput.
DDR3 SDRAM

DDR3 first came around in 2007 and carried forward the trend of doubling the prefetch buffer (8-bit) and improving transfer speeds (800 to 2133MT/s). However, it had another trick up its sleeve — an approximately 40% reduction in power consumption. While DDR2 ran at 1.8 volts, DDR3 ran at anywhere between 1.35 to 1.5 volts. With better transfer speeds and lower power consumption, DDR3 became a terrific option for laptop memory.
DDR4 SDRAM

DDR5 SDRAM

DDR5 is the most recent generation of DDR memory and was introduced in 2021. While the power consumption has not dropped drastically (at 1.1 volts), the performance has improved dramatically — DDR5 offers almost double the performance of DDR4.
DDR5 memory modules, on the other hand, come equipped with two independent 32-bit channels — meaning that a single stick of DDR5 RAM already runs in dual-channel.
DDR5 also changes how voltage regulation is handled. For previous generations of DRAM, the motherboard was responsible for voltage regulation. However, DDR5 modules have an onboard power management IC.
SDRAM: 1-bit prefetch buffer, 0.8 – 1.3 GB/s transfer rate, 100 – 166 MT/s data rate, 3.3 V
DDR: 2-bit prefetch buffer, 2.1 – 3.2 GB/s transfer rate, 266 – 400 MT/s data rate, 2.5 – 2.6 V
DDR2: 4-bit prefetch buffer, 4.2 – 6.4 GB/s transfer rate, 533 – 800 MT/s data rate, 1.8 V
DDR3: 8-bit prefetch buffer, 8.5 – 14.9 GB/s transfer rate, 1066 – 1600 MT/s data rate, 1.35 – 1.5 V
DDR4: 8-bit prefetch buffer, 17 – 25.6 GB/s transfer rate, 2133 – 5100+ MT/s data rate, 1.2 V
DDR5: 16-bit prefetch buffer, 38.4 – 51.2 GB/s transfer rate, 3200 – 6400 MT/s data rate, 1.1 V
ECC Memory

While errors don’t usually occur on their own, they can be caused by interference. Electrical, magnetic, or even cosmic interference naturally present as background radiation in the atmosphere can cause single bits of DRAM to spontaneously flip to the opposite state.
Each byte is made of 8 bits. Let’s take 00100100, for instance. If interference causes one of these bits to change spontaneously, we might end up with — 00100101. Now, if these bits represent letters, the change in values will result in garbled or corrupted data. ECC constantly scans for such errors and corrects them.
The extra bits on the ECC RAM module store an error-correcting code when data is written to memory. When the same data is read, a new ECC is generated. The two are compared to determine if any bits have flipped. If they have, the ECC quickly corrects it, thus preventing data loss or corruption.
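As a toy illustration of the idea (a Hamming(7,4) code sketched in Python, not how DRAM ECC hardware is actually implemented; real modules typically protect each 64-bit word with extra check bits):

def hamming74_encode(data_bits):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword.

    Bit positions are 1-indexed; positions 1, 2, 4 hold parity bits,
    positions 3, 5, 6, 7 hold the data bits.
    """
    code = [0] * 8  # index 0 unused so indices match positions 1..7
    for pos, bit in zip((3, 5, 6, 7), data_bits):
        code[pos] = bit
    # Each parity bit makes the group of positions it covers have even parity.
    for p in (1, 2, 4):
        covered = [i for i in range(1, 8) if (i // p) % 2 == 1 and i != p]
        code[p] = sum(code[i] for i in covered) % 2
    return code[1:]

def hamming74_correct(codeword):
    """Return (corrected_codeword, error_position); position 0 means no error."""
    code = [0] + list(codeword)
    # The syndrome is the XOR of the positions of all set bits; for a single
    # flipped bit it equals exactly that bit's position.
    syndrome = 0
    for i in range(1, 8):
        if code[i]:
            syndrome ^= i
    if syndrome:
        code[syndrome] ^= 1  # flip the offending bit back
    return code[1:], syndrome

# Simulate a cosmic-ray style single-bit flip and recover from it.
data = [1, 0, 1, 1]
stored = hamming74_encode(data)
corrupted = stored.copy()
corrupted[4] ^= 1  # flip one bit "in memory"
fixed, pos = hamming74_correct(corrupted)
print(stored, corrupted, fixed, "error at position", pos)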
ECC memory is super valuable to businesses handling massive amounts of data, such as cloud service providers and financial institutions. Think about it — if a cloud service like iCloud or Google Drive fell victim to data corruption in their servers, all your precious photos and documents would be lost forever. We can’t have that now, can we? ECC memory is the way to go for servers and workstations.
Rambus DRAM

RDRAM was introduced back in the mid-1990s by Rambus, Inc., as an alternative to DDR SDRAM. It featured a synchronous memory interface like SDRAM and faster data transfer rates (266 to 800 MT/s). RDRAM was mainly used for video games and GPUs, and even Intel jumped on board the RDRAM train for a short period until they phased it out in 2001. It was succeeded by XDR (Extreme Data Rate) memory by Rambus, which was used in various consumer devices, including Sony’s PlayStation 3 console. XDR was then superseded by XDR2, but it failed to take off as the DDR standard became more widely adopted.
DRAM in SSDs: What’s the Use?

Unlike mechanical hard drives, SSDs do not store data on a spinning platter. Instead, in SSDs, data is written directly to their flash memory cells, known as NAND flash. Any data stored in an SSD is constantly moved around from one cell to another to ensure that no single memory cell gets worn out due to excessive reading and writing of data. While that’s essential for increasing the longevity and reliability of the drive, how do you know where any data is stored if it keeps moving around?
SSDs keep a virtual map of all your data, tracking where each file is stored. On a DRAM SSD, this data map is stored on the DRAM chip, which works like a super-fast cache. If you want to open a file, your PC can directly access the DRAM on the SSD to find it quickly.
However, on DRAM-less SSDs, the data map is stored on the NAND flash, which is much slower than DRAM. It will still be faster than a mechanical hard drive any day but slightly slower than a DRAM SSD.
DRAM in a Nutshell

We have discussed DRAM at length in this article, explaining not only how it works but also how it has evolved over the past 30+ years. To recap what we have learned, DRAM (Dynamic Random Access Memory) is a type of RAM that’s volatile, which means it will lose all stored data once power is cut. There have been five types of DRAM, with DDR5 being the latest one to pick up the pace. We recommend having at least 8GB of DRAM in your PC to keep it running smoothly and stutter-free. However, if you are a heavy gamer or power user, 16 gigabytes of RAM would suit you better. If you want to upgrade your RAM but aren’t sure if your PC has an available RAM slot, go through our article on how to check available RAM slots in Windows 11.
Indigenous Languages Hold The Keys To Medicinal Forest Libraries
Of the world’s 7,400 languages, over 30 percent are expected to be lost by the end of the century. With those languages, unique Indigenous plant medicinal insights are likely to be erased as well, according to a new study in the journal Proceedings of the National Academy of Sciences.
An analysis of 236 Indigenous languages in three of the world’s most biodiverse regions found that over 75 percent of 12,495 plant medicinal attributes documented in these areas are exclusive to a specific language.
“If these languages disappear, we’ll lose this index to the forest library,” says study co-author Rodrigo Cámara-Leret, a researcher studying biological and cultural diversity at the University of Zurich in Switzerland. “We can read the landscape thanks to the information compiled by native peoples,” he says.
The study authors mapped the links between the loss of languages and the loss of ecological knowledge. To do so, they identified medicinal plant species and their uses documented in three of the most biodiverse regions in the world—Amazonia, New Guinea, and North America. The researchers then grouped each recorded medicinal plant service by language into one of 20 broad categories of cures, from digestion problems to infections to poisoning. Unique knowledge was defined as a medicinal service cited exclusively by a specific Indigenous language.
They found that it wasn’t the species in these cures that are under threat—but the vernacular of the unique knowledge themselves. Since languages with unique knowledge are scattered throughout the linguistic phylogenetic tree, “It’s not enough to protect a family of languages [in one major branch], we need to look across the entire diversity of the linguistic tree,” says study co-author Jordi Bascompte, an ecologist at the University of Zurich in Switzerland.
Interestingly, high biodiversity regions, which cover 25 percent of Earth’s terrestrial surface, also contain roughly 70 percent of the world’s known languages, according to a 2012 study. Researchers debate whether this pattern occurs because competition for a bounty of resources generates greater linguistic diversity or if those diverse resources reduce the need to communicate and share with other groups.
[Related: Local languages are dying out and taking invaluable knowledge with them.]
However, only six percent of land-based plants have so far been evaluated for their medically relevant traits, such as anti-cancer or anti-microbial activity. At the same time, the growing global herbal medicine market—expected to reach a valuation of $411.2 billion by the year 2026—offers an economic incentive to preserve this knowledge.
Nokwanda Makunga, a medicinal plant biologist at Stellenbosch University in Cape Town, South Africa, says there are around 5,000-6,000 species utilized as ethnobotanicals, or plants used as medicine by Indigenous cultures, in Africa. At least 60-70 percent of the South African population uses plants as a primary source of healthcare. “We haven’t gone deep enough to characterize the medicinal properties of plants,” she says. At the same time, she has witnessed the loss of traditional ecological information as regional dialects disappear. Exacerbating the loss, the South African government doesn’t even recognize the languages of some aboriginal people in the area.
Makunga says medicinal plant knowledge isn’t always shared with non-native speakers. “For a long time, the practice of traditional medicines in South Africa was totally outlawed. It was illegal to carry herbs. It was witchcraft,” she says. Further, she adds, the subtle details that maximize a plant’s medicinal qualities—such as preparation, when to harvest, which plant part is most efficacious—can easily be lost.
Unfortunately, linguistic studies don’t typically focus on botanical information. Zach O’Hagan, a postdoctoral scholar in linguistics at University of California at Berkeley, recently inherited a treasure trove of Amazonian audio recordings, field diaries, and notes of former Florida Atlantic University anthropologist Gerald Weiss.
O’Hagan says the high level of ethnobiological information captured in Weiss’s collection is quite rare in the documentation of Indigenous languages. For example, efforts were made to document common and scientific names for species and compare the information to other dialects of the Ashaninka language, the largest language family in the Amazon.
O’Hagan cautions, however, that the loss of ethnobiological knowledge can long precede the loss of language. “We can have language vitality with knowledge gaps,” agrees Carolyn O’Meara, who studies Indigenous languages at the National Autonomous University of Mexico in Mexico City. “There’s a lot more subtlety at work, especially in areas where kids are still acquiring the language to some extent, but maybe no one’s using plants for medicinal purposes because they have a clinic in their village.”
Cámara-Leret hopes this study will trigger more in-depth, interdisciplinary research focused on endangered knowledge that simultaneously gives a voice to local communities. This sentiment is shared—the United Nations declared 2022-2032 the International Decade of Indigenous Languages to draw attention to the urgent need to preserve and revitalize these languages as a way to empower their speakers. “If [more research] could help to identify the most at-risk cultures, that would be really beautiful,” he says.
This Is The Best Pokemon Go Toy Ever
Pokemon GO and Pokemon Sun and Moon have brought about a new age of Poke-Popularity – and with it, one very awesome toy. This toy captures the joy of tossing a real Pokeball, ejecting the pocket monster therein. This Pokeball doesn’t just bounce away, like most Pokemon toys would – this one unlatches and sends its contents (a Pokemon figure) flying. This is the Pokemon Throw ‘n’ Pop Poke Ball, and it’s something we’ve been waiting for for years.
We’ve got a bundle of Pokemon toys here sent over by TOMY, the lot of which are enjoying extra popularity this summer thanks to the oncoming release of Pokemon Sun, Pokemon Moon, and Pokemon GO. The first thing we’re going to show here is the best Pokemon toy ever made. Bar none.
This contraption is one of four versions of the Throw n’ Pop Poke Ball. This is Pikachu with an original Pokeball. Three other characters from Pokemon X and Y are also available – only the cutest of the bunch – including Chespin with a Premier Ball, Fennekin with a Great Ball, and Froakie with an Ultra Ball.
Below you’ll see how this oddity works. Each of the sides of the Pokeball are held open with springs, and latched with the side of the Pokeball with the button. When the Pokeball is tossed and the button is hit, the latch unlatches and the entire Pokeball springs inside out, launching the Pokemon inside outward.
It’s crazy. Each of these Throw ‘n’ Pop Poke Balls costs around $13 USD, and they should be in stores soon, if they’re not already in a store near you now. This is the first wave of these toys – we’re rooting for a sandshrew next generation.
2016 is the 20th anniversary of Pokemon – which was originally released in Japan as Pocket Monsters Red and Pocket Monsters Green. One of the most massive celebrations of this anniversary was the Pokemon Super Bowl Commercial – as you’ll see below.
This is also the year in which Pokemon GO launched. This genuine phenomenon of a game has seen massive success, and extreme interest has spanned multiple forms of media and disparate companies of all sorts.
The toymakers at TOMY have combined forces with The Pokemon Company to create toys that are both screen-accurate and very high quality.
Above you’ll see some of the newest in Pokemon stuffed animals by TOMY. Stuffed animals like these are a rare mix of top quality manufacturing and relative low cost.
The Eevee here, for example, costs $11 USD. The stitching on this stuffed animal is amongst the best we’ve ever seen on a stuffed animal – ever.
Below you’ll find one of the several-inches-tall fully articulated action figures from TOMY, Mewtwo. Action figures like these cost around $13 USD.
The quality of these larger action figures from TOMY is good. This isn’t something that’s going to bust apart the first time a child tosses it across the room.
Perhaps the best option for catching ’em all in Pokemon toys for adults is in the 3 and 4-inch toys that come in 4-packs.
Generally these come with a set of one evolution – Charmander, Charmeleon, Charizard, and either Mega Charizard X or Y. We’ve got Y here.
These toys are a little bit more fragile. Some come with parts that aren’t made to be tossed, that is to say.
These miniature Pokemon are far better than any of the low-end toys that came out in the 1990s in the USA, that’s for certain. These seem very much to be made as much for kids as they have been made for adults that were kids when Pokemon was first becoming popular.
You can find most of the toys above wherever fine Pokemon products are sold – and most are the type that will only be sold in stores once.
This Pokemon toy collecting hobby is the least forgiving monster in the Pokemon universe. It’s brutal! That is, unless you compare it to Pokemon GO.
Polygon (Matic) Price Prediction For 2023
The recent developments surrounding Polygon (Matic) are remarkable. In the first two weeks of 2023, Polygon announced significant partnerships with names like Disney and Reddit. There is also the exciting launch of a web3-focused incubator with fintech giant Mastercard. On top of all that, Polygon has disclosed that it will be going ahead with a plan to hard-fork the network.
All these developments, combined with the network’s impressive analytics insight, unequivocally put Polygon in a position of strength. But what does the future hold for this Ethereum scaling solution? Let’s take a closer look.
The MATIC price is currently at USD 0.9671 after reaching a high of USD 1.05, and is up 21.2% in the last seven days. The current global ranking is #10, with a market cap of USD 8.4B. The MATIC price was stuck in a sideways trend between $0.75 and $0.90 for some weeks before gaining 10% in 24 hours on 14th Jan.
Polygon (MATIC) Price Predictions for 2023

The current market sentiment favors MATIC, and MATIC is likely to see more gains in the future. According to analysts’ predictions for 2023, MATIC could be looking at $1.2 in the short term and $1.8 in the mid-term. In the long term, the altcoin could be looking at prices as high as $2.5 by the end of the year.
Trade top currencies such as BTC, MATIC, ETH, and more directly from your crypto wallet with Covo Finance, a 100% decentralized trading platform on Polygon. Enjoy up to 50X leverage and trade with confidence. Or, Earn up to 20-50% APR on your crypto by depositing ETH and MATIC to COVO Pools and earning a share of the platform’s trading fees.
Polygon has secured many partnerships that have bolstered its popularity and increased its usability. In the newest collaboration, Polygon has announced a partnership with fintech giant Mastercard to launch a web3-focused incubator. Disney and Reddit have also forged collaborations with the platform, while crypto payment firm Stripe has established large crypto payouts using its technology. In 8 months, institutional deposits worth more than $11 billion were recorded by Polygon, thanks to the platform’s PoS and Plasma bridges to Ethereum.
Major Growth in Polygon’s NFT Activity

As per the 34th edition of the PolygonInsights report, overall key metrics show a promising future for the network. With 817,000 weekly active users, NFT volumes grew by 400%, and the daily active address metric rose virtually every day.
NFTs activity on Polygon has seen explosive growth in the past year, with multiple industry giants like Adidas Originals, Prada, and Alan Howard releasing collections and investments. Big-name projects like Yoots and The Sandbox have also transitioned from Solana to Polygon, further solidifying its place in Web3. This shift is further evident from a report by Alchemy, which suggests that Polygon’s Web3 hosting capabilities make it the best-positioned protocol to drive the booming economy. Moreover, Citigroup has described Polygon as the AWS of Web3 and estimated that the Metaverse economy will be worth $13 trillion by 2030.
Whale data from blockchain analytics firm Santiment showed that following the market-wide sell-off triggered by the collapse of Terra, most MATIC supply held by whale addresses was taken off of exchanges. The event marked an outflow of over 240 million MATIC from CEXs. Later in July, observations showed another sharp decline of 120 million MATIC supply held by top exchange addresses, while non-exchange addresses had a whopping 6.6 billion MATIC.