This article was published as a part of the Data Science Blogathon.
Introduction
In order to build machine learning models that are highly robust to a wide range of test conditions, training the models with high-quality data is essential. Unfortunately, a large part of the data collected is not readily suitable for training machine learning models, which increases the need for pre-processing steps so that the data is pipelined in the form a machine learning model expects.
In the process of data cleaning, one of the most crucial steps is to deal with missing values (imputing missing data, or simply data imputation) accurately, so that the machine learning model learns the patterns in the data as expected. Some of the most commonly practiced methods of dealing with missing values are:
1. Delete the rows containing missing values. This is one of the most commonly used techniques to sidestep the inconvenience of dealing with missing data in the training phase, provided the available training data is huge.
2. Delete the columns containing missing values. This method is rarely used in the machine learning community because of its less promising results; however, it can yield good results in certain situations.
3. Fill in the NULL values with statistically determined values based on the statistics of the training data, such as the mean or median of the training distribution. This is the most commonly used way of filling missing values (a minimal sketch follows this list).
4. Predict the NULL values with Machine Learning Algorithms using the entire training data (without NULL values).
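As a quick, hedged illustration of method 3, here is a minimal sketch (not taken from this article) that fills numeric NULLs with the column mean using scikit-learn's SimpleImputer; the toy DataFrame and its column names are hypothetical and only for illustration.

# a minimal sketch of statistical (mean) imputation on a toy DataFrame
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

toy = pd.DataFrame({'horsepower': [130, np.nan, 150, 95],      # hypothetical numeric columns
                    'weight': [3504, 3693, np.nan, 2372]})

imputer = SimpleImputer(strategy='mean')                        # 'median' or 'most_frequent' also work
toy_filled = pd.DataFrame(imputer.fit_transform(toy), columns=toy.columns)
print(toy_filled)

The statistic is learned from the observed values only, so in practice the imputer should be fit on the training split and then applied to both the training and test data.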
Article Focus
Throughout this article, we dive deep into how to deal with/fill missing data using ML algorithms.
Resources for the Statistical Methods
Since this article focuses on predicting the missing values rather than inferring them from the distribution of the dataset, references for the three methods mentioned above are gathered below. Consider checking them out to better understand where and when to use each of these methods.
“Understanding and Tackling Missing Values” starts from scratch and works all the way up to the most complex techniques, with examples.
“Dealing with Missing Values in Python” covers the fundamentals and the important ideas in handling missing values.
“Statistical Imputation for Missing Value” is another useful reference on the statistical approach.
Data Imputation Using ML Algorithms
Fundamentally, data imputation using ML algorithms splits into two types: imputation with a classification algorithm (when the column containing the missing values is categorical) and imputation with a regression algorithm (when it is numerical), so the type of the column to be imputed determines which kind of model to use. In order to solve the problem of missing values with ML algorithms, the data needs to undergo certain steps, including pre-processing and modelling; below are the five most commonly categorized steps to fill the missing values with accurate data. In this article we walk through the classification case; using a regression algorithm is almost identical (a minimal sketch follows, and you can also refer to an example of a regression algorithm here).
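Before walking through those steps for the classification case, here is a minimal, self-contained sketch of the regression variant mentioned above; it is an illustration on assumed toy data and column names, not code from the article, with RandomForestRegressor standing in for the classifier used later.

# hedged sketch: imputing a numeric column with a regression model on toy data
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

toy = pd.DataFrame({'feature_a': [1, 2, 3, 4, 5],
                    'feature_b': [10, 20, 30, 40, 50],
                    'target_numeric': [100, np.nan, 300, np.nan, 500]})   # column to impute

known = toy.dropna(subset=['target_numeric'])        # rows where the value is present (training set)
missing = toy[toy['target_numeric'].isna()]          # rows where the value is missing (to predict)

reg = RandomForestRegressor(random_state=0)
reg.fit(known[['feature_a', 'feature_b']], known['target_numeric'])

# write the predictions back into the original frame
toy.loc[toy['target_numeric'].isna(), 'target_numeric'] = reg.predict(missing[['feature_a', 'feature_b']])
print(toy)

As with the classifier below, this assumes the feature columns are already numeric or suitably encoded.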
0. Overview
At a high level, the dataset (after dropping the label, or y, column of the original problem) is divided into two sets, a training set and a test set. The division is based on which rows contain NULL values and which do not. Each of these sets is further divided into X and y, so that we have X_train, y_train, X_test, and y_test, where y_train and y_test correspond to the column containing the missing values (the one to be predicted). After training, we predict the missing values using the test data. More on how this works in the steps below.
Throughout this article, we use a popular Kaggle dataset for a regression problem statement. The Google Colab notebook with detailed code and documentation is available here.
1. Preparing the Data
After dropping the label or y column, the dataset is divided into training and testing sets. The division takes place with reference to the presence of NULL values in individual rows.
1. Import the data using pandas read_csv():
Reading the CSV file using pandas read_csv() function
# Importing all the required packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sklearn

# Reading the csv file using pandas read_csv
df = pd.read_csv('/content/drive/MyDrive/TrainAndValid.csv/TrainAndValid.csv')
I am importing the data from my Google Drive, but you can get access by downloading it from Kaggle or from the following link:

2. Investigate the dataset using methods such as info(), describe(), etc.:
EDA (Exploratory Data Analysis) is a crucial step in understanding the data and the very first few functions used to initiate this process are info(), describe(), isna(), etc.
# Let's learn about the dataset
df.info()
# Check out df.describe() in a new cell to learn more.

3. Split the dataset based on NULL values:
We drop the rows containing NULL values and use the result as the training set, then use the complete dataset as the test set. Since we don't have the true values for the missing data, predicting over the complete dataset lets us fill every missing entry in one pass.
from sklearn.model_selection import train_test_split
X_test = df.drop('UsageBand', axis=1)
y_test = df['UsageBand']
df_train = df.dropna()
X_train = df_train.drop('UsageBand', axis=1)
y_train = df_train['UsageBand']

2. Modelling
We use a Random Forest Classifier model to impute the data.
# Training the model based on the X_train and y_train data
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier()
# fit the model
rfc.fit(X_train, y_train)

3. Predict the Missing Values
Using the trained Random Forest Classifier, fill/predict the class values of the categorical column ('UsageBand').
# predict the values
y_filled = rfc.predict(X_test)

4. Substitute the Data & Use the Clean Data
Replace the column containing the missing data in the original dataset with the predicted values (y_filled) and continue with the modelling.
# now we have the missing values filled, so replace the column in the original dataset with the predicted one
df['UsageBand'] = y_filled
# Proceed and use the dataset for modelling the actual problem, i.e., in this case, predicting automobile prices 🙂

In this article, we have seen how to impute missing data using ML algorithms, and gathered some resources for learning the statistical methods. Importantly, we have seen how to use the existing training data to train a statistical machine learning model (a Random Forest Classifier in this case) and infer the missing values with it. We have also looked at some of the standard steps followed while predicting missing values with machine learning models, such as splitting the data based on NULL values in a row, choosing the model based on the type of the column to be imputed, and predicting the missing data for the rows where it is absent. We barely scratched the surface of EDA (Exploratory Data Analysis) in this article, through functions such as info(), describe(), dropna(), etcetera.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
How Big Data Is Shaping Healthcare To Make It Further Affordable, Accurate & Intelligent
This article was published as a part of the Data Science Blogathon
Image By fabio on Unsplash
Big data is a sub-domain of Data Science that applies specific tools and methods to learn from and extract detailed insights out of massive volumes of data.
Big data works on a similar principle: instead of having data from only a handful of colleagues, the file comprises thousands or even millions of reviews from people.
Health Care Is Especially Well Suited To Profit From Big Data
For many years, hospitals, researchers, and state agencies have diligently assembled an enormous variety of health data, from the completion rates of drug regimens, to the cost of an average medical plan, to patients' demographic data, to the expected waiting period in emergency rooms.
A report published recently by the research and consulting firm McKinsey & Company identified four categories in which data is present in the health care domain:
Pharmaceutical research data (e.g., clinical trial outcomes)
Clinical data (e.g., patient records)
Activity and cost data (e.g., expected procedure costs)
Patient behavior data (e.g., health spending history)
Big data's role is to bring all of this information together in one place, sometimes from multiple heterogeneous data stores, and to use it to obtain insights into how our health care system can be made more effective.
Which drugs are least likely to produce side effects?
Which doctors achieve the best outcomes for their patients?
Which procedures are the most effective and cost-efficient?
Big data can help answer these questions and many more.
Three Areas Where Big Data Is Remodelling Health Care
Image By Hush Naidoo on Unsplash
1. Healthcare Providers Frequently Use Big Data To Identify Patients At Significant Risk For Particular Medical Conditions Before Serious Complications Occur
A number of providers have been using patient record data to build predictive models of intervention success. The insight provided by the data has cut hospital readmissions by as much as 49%.
Google has also taken part in recognizing health hazards. Using data from user search histories, the tech giant was able to track the spread of the flu worldwide in near real time.
This is where the power of big data meets the power of expertise. By working with prominent data specialists such as Siemens Healthcare, providers can deploy solutions that automate healthcare data acquisition.
These experts use big data to aggregate and normalize data across an organization and then apply predictive analytics techniques to better identify at-risk populations, while tracking performance at all levels of the organization.
2. Big Data Is Helping To Enhance The Quality Of Care Experienced By Patients
One way this is happening is through the use of data to build clinical decision support software, a tool health care providers can use to evaluate their recommended practices, for instance by recognizing medical errors before they occur.
In another case, a health care company in Delhi (the capital city of India) applied clinical data on the work of staff doctors and discovered that one physician was prescribing a particular antibiotic considerably more frequently than the rest of the team, potentially raising the risk of drug-resistant bacteria.
3. Big Data Is Helping To Contain The Mounting Costs Of Health Care
Health care is one of the largest sectors of the Indian economy and consumes a significant share of the country's gross domestic product. At the same time, there is ample evidence that part of every dollar spent on health care is wasted, whether through inflated costs or redundant tests and treatments.
Big data has a significant part to play in bringing these costs down. In one example, big data specialists used clinical data to determine which doctors ordered the most expensive tests and procedures.
By examining their practices, the health care provider was able to recognize and reduce duplicate tests and unnecessary procedures. The change not only lowered costs but also improved patient outcomes.
The National Health Service of India uses data on hospitals and the cost-effectiveness of different drugs to negotiate better drug prices with pharmaceutical manufacturers.
In Bangalore, one healthcare system analyzes data on 40,000 patients and 6,000 employees to recognize people likely to need costly health care interventions in the future. Using that data, it identifies whom to target with preventative care before the expensive health issue appears.
Conclusion
McKinsey & Company predicts that the application of big data in health care could generate benefits of up to approximately a trillion dollars in 2023.
The private sector is not the only field taking note of the influence of big data on the future of health. In January 2023, the National Institutes of Health announced $50 million in yearly funding to create several "Data Centers of Excellence for Big Data In Health Care."
The hubs will help the health care research and clinical community better understand how big data can be applied to improve the health care system.
Summing Up
Health care is ultimately about people, not figures. Big data's growing role in health care does not change that. By harnessing the wealth of available health data and working with capable data experts, providers can identify areas where improvement is possible and work toward better outcomes, increased productivity, and a more sustainable healthcare environment.
About the Author
Mrinal Walia is a professional Python Developer with a computer science background, specializing in Machine Learning, Artificial Intelligence, and Computer Vision. In addition, Mrinal is an active blogger, author, and geek with over four years of experience in his work. Having worked across most areas of computer science, Mrinal currently works as a Testing and Automation Engineer at Versa Networks, India. He aims to reach his creative goals one step at a time and believes in doing everything with a smile.
The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion
Exploring Data Visualization In Altair: An Interesting Alternative To Seaborn
This article was published as a part of the Data Science Blogathon
Data Visualization is important to uncover the hidden trends and patterns in the data by converting them to visuals. For visualizing any form of data, we all might have used pivot tables and charts like bar charts, histograms, pie charts, scatter plots, line charts, map-based charts, etc., at some point in time. These are easy to understand and help us convey the exact information. Based on a detailed data analysis, we can decide how to best make use of the data at hand. This helps us to make informed decisions.
Now, if you are a Data Science or Machine Learning beginner, you surely must have tried Matplotlib and Seaborn for your data visualizations. Undoubtedly these are the two most commonly used powerful open-source Python data visualization libraries for Data Analysis.
Seaborn is based on Matplotlib and provides a high-level interface for building informative statistical visualizations. However, there is an alternative to Seaborn. This library is called 'Altair', an open-source Python library built for statistical data visualization. According to the official documentation, it is based on the Vega and Vega-Lite languages. Using Altair we can create interactive data visualizations through bar charts, histograms, scatter and bubble charts, grid plots, error charts, etc., similar to the Seaborn plots.
While the Matplotlib library is imperative in syntax, Altair follows a declarative approach: the user specifies what needs to be done, and the machine decides how it is done. This gives the user the freedom to focus on interpreting the data rather than getting caught up in writing the correct syntax. The only downside of this declarative approach could be that the user has less control over customizing the visualization, which is fine for most users unfamiliar with the coding part.
Installing Seaborn and Altair
To install these libraries from PyPI, use the following commands:
pip install altair
pip install seaborn

Importing Basic Libraries and Dataset
As always, we import the Pandas and NumPy libraries to handle the dataset, and Matplotlib and Seaborn along with the newly installed Altair library for building the visualizations.
#importing required libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import altair as alt

We will use the 'mpg' or 'miles per gallon' dataset from the seaborn dataset library to generate these different plots. This famous dataset contains 398 samples and 9 attributes for automotive models of various brands. Let us explore the dataset further.
#importing dataset
df = sns.load_dataset('mpg')
df.shape

#dataset column names
df.keys()

Output:
Index(['mpg', 'cylinders', 'displacement', 'horsepower', 'weight',
       'acceleration', 'model_year', 'origin', 'name'],
      dtype='object')
#checking datatypes
df.dtypes

#checking dataset
df.head()

This dataset is simple and has a nice blend of both categorical and numerical features. We can now plot our charts for comparison.
Scatter & Bubble Plots in Seaborn and Altair
We will start with simple scatter and bubble plots. We will use the 'mpg' and 'horsepower' variables for these.
For a Seaborn scatter plot, we can either use the relplot command and pass 'scatter' as the kind of plot,
sns.relplot(y='mpg', x='horsepower', data=df, kind='scatter', size='displacement', hue='origin', aspect=1.2);

or we can directly use the scatterplot command:
sns.scatterplot(data=df, x="horsepower", y="mpg", size="displacement", hue='origin', legend=True)

whereas for Altair, we use the following syntax:
alt.Chart(df).mark_point().encode(alt.Y('mpg'), alt.X('horsepower'), alt.Color('origin'), alt.OpacityValue(0.7), size='displacement')

We color the points using another attribute, 'origin', and control the size of the points using an additional variable, 'displacement', for both libraries. In Seaborn, we can control the aspect ratio of the plot using the 'aspect' setting. In Altair, we can additionally control the opacity of the points by passing a value between 0 and 1 (1 being fully opaque). To convert a scatter plot in Seaborn to a bubble plot, simply pass a value for 'sizes', which denotes the smallest and biggest bubble sizes in the chart. For Altair, we simply pass filled=True to generate the bubble plot.
sns.scatterplot(data=df, x="horsepower", y="mpg", size="displacement", hue='origin', legend=True, sizes=(10, 500))

alt.Chart(df).mark_point(filled=True).encode(
    x='horsepower',
    y='mpg',
    size='displacement',
    color='origin'
)

With the above scatter plots, we can understand the relationship between the 'horsepower' and 'mpg' variables, i.e., lower 'horsepower' vehicles seem to have a higher 'mpg'. The syntax for both plots is similar and can be customized to display the values.
Line Plots in Seaborn and Altair
Now, we plot line charts for the 'acceleration' vs 'horsepower' attributes. The syntax for the line plots is quite simple for both. We pass the DataFrame as data and the above two variables as x and y, with 'origin' as the legend color.
Seaborn-
sns.lineplot(data=df, x='horsepower', y='acceleration', hue='origin')

Altair-
alt.Chart(df).mark_line().encode(
    alt.X('horsepower'),
    alt.Y('acceleration'),
    alt.Color('origin')
)

Here we can understand that 'usa' vehicles have a higher range of 'horsepower', whereas the other two, 'japan' and 'europe', have a narrower range of 'horsepower'. Again, both graphs provide the same information nicely and look equally good. Let us move to the next one.
Bar Plots & Count Plots in Seaborn and Altair
In the next set of visualizations, we will plot a basic bar plot and a count plot. This time, we will add a chart title as well. We will use the 'cylinders' and 'mpg' attributes as x and y for the plot.
For the Seaborn plot, we pass the above two features along with the Dataframe. To customize the color, we choose a palette=’magma_r’ from Seaborn’s predefined color palette.
sns.catplot(x='cylinders', y='mpg', hue="origin", kind="bar", data=df, palette='magma_r')

In the Altair bar plot, we pass df, x, and y, and specify the color based on the 'origin' feature. Here we can customize the size of the bars by passing a value in the 'mark_bar' command as shown below.
plot = alt.Chart(df).mark_bar(size=40).encode(
    alt.X('cylinders'),
    alt.Y('mpg'),
    alt.Color('origin')
)
plot.properties(title='cylinders vs mpg')

From the above bar plots, we can see that vehicles with 4 cylinders seem to be the most efficient in terms of 'mpg' values.
Here is the syntax for count plots,
Seaborn- We use the FacetGrid command to display multiple plots on a grid based on the 'cylinders' variable, with 'origin' as the hue.
g = sns.FacetGrid(df, col="cylinders", height=4, aspect=.5, hue='origin', palette='magma_r')
g.map(sns.countplot, "origin", order=df['origin'].value_counts().index)

Altair- We use the 'mark_bar' command again, pass 'count()' as y, and facet by the 'cylinders' column to generate the count plot.
alt.Chart(df).mark_bar().encode(
    x='origin',
    y='count()',
    column='cylinders:Q',
    color=alt.Color('origin')
).properties(
    width=100,
    height=100
)

From these two count plots, we can easily understand that 'japan' has (3, 4, 6) cylinder vehicles, 'europe' has (4, 5, 6) cylinder vehicles, and 'usa' has (4, 6, 8) cylinder vehicles. From a syntax point of view, both libraries require inputs for the data source and the x and y variables to plot. The output looks equally pleasing for both libraries. Let us try a couple more plots and compare them.
Histogram
In this set of visualizations, we will plot basic histograms. In Seaborn, we use the displot command and pass the DataFrame and the name of the column to be plotted. We can also adjust the height and width of the plot using the 'aspect' setting, which is the ratio of width to height.
Seaborn-
sns.displot(df, x='model_year', aspect=1.2)

Altair-
alt.Chart(df).mark_bar().encode(
    alt.X("model_year:Q", bin=True),
    y='count()',
).configure_mark(
    opacity=0.7,
    color='cyan'
)

In this set of visualizations, the default bins selected are different for the two libraries, and hence the plots look slightly different. We can get the same plot in Seaborn by adjusting the bin sizes.
sns.displot(df, x='model_year', bins=[70, 72, 74, 76, 78, 80, 82], aspect=1.2)

Now the plots look similar. However, in both plots we can see that the maximum number of vehicles came after '76, and prominently in the year '82. Additionally, we used a configure command to modify the color and opacity of the bars, which acts somewhat like a theme in the case of the Altair plot.
Strip Plots Using Both Libraries
The next set of visualizations are the strip plots.
For Seaborn, we will use the stripplot command and pass the entire DataFrame and variables ‘cylinders’, ‘horsepower’ to x and y respectively.
ax = sns.stripplot(data=df, y='horsepower', x='cylinders')
alt.Chart(df).mark_tick(filled=True).encode(
    x='horsepower:Q',
    y='cylinders:O',
    color='origin'
)

From the above plots, we can clearly see the scatter of the categorical variable 'cylinders' for different 'origin' values. Both charts seem to be equally effective in conveying the relationship. For the Altair plot, you will notice that the x and y columns have been interchanged in the syntax to avoid a taller and narrower-looking plot.
Interactive Plots
We now come to the final set of visualizations in this comparison: the interactive plots. Altair scores when it comes to interactivity. Its syntax is simpler than that of the Bokeh, Plotly, and Dash libraries. Seaborn, on the other hand, does not provide interactivity for any of its charts. This might be a letdown if you want to filter data inside the plot itself and focus on a region or area of interest. To set up an interactive chart in Altair, we define a selection of the 'interval' kind, i.e., between two values on the chart. Then we define the active points for the columns using the earlier defined selection. Next, we specify the type of chart to be shown for the selection (plotted below the main chart) and pass the selection as the filter for the displayed values.
select = alt.selection(type='interval')

values = alt.Chart(df).mark_point().encode(
    x='horsepower:Q',
    y='mpg:Q',
    color=alt.condition(select, 'origin:N', alt.value('lightgray'))
).add_selection(
    select
)

bars = alt.Chart(df).mark_bar().encode(
    y='origin:N',
    color='origin:N',
    x='count(origin):Q'
).transform_filter(
    select
)

values & bars

For the interactive plot, we can easily visualize the count of samples for the selected area. This is useful when there are too many samples/points in one area of the chart and we want to inspect their details to understand the underlying data better.
Additional Points To Consider While Using Altair

Pie Chart & Donut Chart
Unfortunately, Altair does not support pie charts. This is where Seaborn gets an edge, i.e., you can utilize the underlying Matplotlib functionality to generate a pie chart alongside the Seaborn library.
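As a rough sketch of that workaround (not code from the original article), the snippet below draws a pie chart of the 'origin' counts from the same 'mpg' dataset with plain Matplotlib, while a Seaborn palette supplies the colors.

# hedged sketch: a pie chart via Matplotlib, colored with a Seaborn palette
import seaborn as sns
import matplotlib.pyplot as plt

df = sns.load_dataset('mpg')
counts = df['origin'].value_counts()                      # samples per origin

plt.pie(counts.values, labels=counts.index,
        colors=sns.color_palette('magma_r', len(counts)),
        autopct='%1.1f%%')                                # show the percentage on each wedge
plt.title('Share of vehicles by origin')
plt.show()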
Plotting Grids, Themes, and Customizing Plot Sizes
Both libraries also allow customizing the plots in terms of generating multiple plots and manipulating the aspect ratio or the size of the figure, and both support different themes for colors and backgrounds to modify the look and feel of the charts.
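For reference, here is a small sketch of how a theme and a figure size can be set in each library; the particular theme names are arbitrary examples, not recommendations from the article.

# hedged sketch: themes and plot sizes in Seaborn and Altair
import seaborn as sns
import matplotlib.pyplot as plt
import altair as alt

df = sns.load_dataset('mpg')

# Seaborn: a built-in style/palette plus an explicit figure size via Matplotlib
sns.set_theme(style='darkgrid', palette='magma_r')
plt.figure(figsize=(8, 5))
sns.scatterplot(data=df, x='horsepower', y='mpg', hue='origin')

# Altair: enable a registered theme and size the chart through properties()
alt.themes.enable('dark')
alt.Chart(df).mark_point().encode(x='horsepower', y='mpg', color='origin').properties(width=400, height=250)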
Advanced Plots

Conclusion
I hope you enjoyed reading this comparison. If you have not tried Altair before, do give it a try for building some beautiful plots in your next data visualization project!
Author Bio
Devashree holds a degree in Information Technology from Germany and has a Data Science background. As an engineer, she enjoys working with numbers and uncovering hidden insights in diverse datasets from different sectors to build beautiful visualizations and try to solve interesting real-world machine learning problems.
In her spare time, she loves to cook, read & write, discover new Python-Machine Learning libraries or participate in coding competitions.
You can follow her on LinkedIn, GitHub, Kaggle, Medium, Twitter.
The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.
BU Launches Data Science As An Undergraduate Major
The new Center for Computing & Data Sciences is on track to open in fall 2023. And starting this fall, undergraduates will be able to declare data science as a major, with a master’s degree program in data science expected to launch by next fall. Photo by Janice Checchio
There’s a new major at Boston University, and it’s designed to equip students with the analytical and computational skills that are now necessary for success in almost every field, from science and engineering to arts and humanities. Starting this fall, undergraduates will be able to declare data science (DS) as a major. Drawing on various topics in traditional STEM disciplines, the new Data Science BS Degree program, offered by the Faculty of Computing & Data Sciences (CDS), provides students with the foundational knowledge and practical training in algorithmic and statistical data analysis, machine learning, and software engineering—crucial competencies in a world increasingly defined by computation, big data, and artificial intelligence (AI).
To learn more about the new undergraduate program, BU Today spoke with Azer Bestavros, associate provost for computing and data sciences, a William Fairfield Warren Distinguished Professor, and a College of Arts & Sciences professor of computer science.
BU Today:
Can you tell us about some of the courses that will be available this upcoming academic year?

Azer Bestavros: Sure. There are a few courses planned for this fall which you can think of as “on ramps” to the major. They are intended to introduce students from varied backgrounds to the art, science, and practice of data science. One is DS-100, or Data Speaks Louder Than Words. This course is for students who are intrigued by data science as a potential major, and it is all about using the universal language of computation and data to think critically about, and derive solutions for, societally relevant problems. Two other on-ramps to the major are DS-110 and DS-120. Each of these courses is the “first” in a sequence of courses focused on the computational and mathematical foundations of data science. In addition to these on-ramps, which are meant to be courses that a freshman or a sophomore may take, CDS will be offering other elective courses. Two favorites of mine are our DS-457, Law for Algorithms, course and DS-563, Algorithmic Techniques for Taming Big Data, course. As you can tell from their titles, these courses are all about the brave new world we live in, where we have to think about the ethical and legal frameworks for algorithmic decision-making and AI, and about the ways in which we can tame the torrents of data that we need to analyze, monetize, or put to good use. In addition to these courses offered by CDS, I want to note that students will be able to satisfy various requirements for the data science major by taking courses offered by other departments, not only Computer Science, Electrical & Computer Engineering, and Mathematics & Statistics, but also those offered in disciplines that leverage data science, such as Earth & Environment in CAS and Business Analytics in Questrom. In fact, it is a feature of the Data Science program that many of the elective courses that count towards the major will not be offered by CDS.
Azer Bestavros: Yes, of course. At the undergraduate level, plans are underway to launch a minor in data science, which should be available to students at BU very soon. At the graduate level, we have been accepting students in the PhD program in CDS launched last fall, with an inaugural group of four students starting in September. We are also planning a master’s degree program in data science, which we hope to launch by next fall. In addition to programs offered through CDS, and in collaboration with other schools and colleges at BU, within a year or two we expect to offer joint undergraduate and graduate degree programs and certificates—most notably bachelor’s and master’s degree programs that combine data science with a disciplinary focus.
Azer Bestavros: We expect to see interest mostly from students who in the fall of 2023 will either be first-year students or sophomores. This doesn’t mean that juniors won’t be interested, but rather that there will be more interest from students, especially those early enough in their BU studies, who are yet to declare or commit to a major, and who will realize that this new major is now an option for them.
Azer Bestavros: Enrollment in the data science major in CDS is available to all students at BU, provided that they meet our transfer criteria. Basically, students must have been at BU for at least one semester and have an overall GPA of 2.0 or better, a requirement that is consistent across the University. Additionally, students must have a B- or higher for any non-CDS course that they wish to count toward the DS major, subject to approval by CDS.
Azer Bestavros: We will start taking transfer students this fall, and they will have to apply through the Intra-University Transfer (IUT) process. In September 2023, we will welcome our first class of incoming first-year students to be admitted directly to CDS.
BU Today:
Let’s talk about the new CDS building. Is it still scheduled to open in fall 2023?
Yes, as far as I can tell, it is on schedule and everything is moving along. A few weeks ago, we went through the exercise of narrowing down the options for the furniture that BU will be taking bids on. That’s a good sign, and we could not be more excited. The next time you walk along Comm Ave this summer be sure to look it up—I think we are up to the fifth floor so far.
BU Today:
Until that building is completed, where is CDS located? Where will the classes be taught?
Our current home base for CDS is a newly renovated suite of offices and collaboration spaces on the first floor of the Math & Computer Science (MCS) Building at 111 Cummington Mall. While we expect that some of our classes will be taught there, others will be taught in classrooms assigned by the registrar’s office around campus. Once the CDS building is completed, students can expect to take most, if not all, CDS courses in that building.
BU Today:
What about students who are intimidated by mathematics? Should they consider this program anyway?
They certainly should consider this program. The whole pedagogical framework for the undergraduate program in data science is meant to be accessible to students with minimum or no prior exposure to college-level mathematics, or students who did not connect with abstract mathematics in high school. This is not to say that mathematical foundations are not an important part of the data science major. Quite the contrary; they are important, but we have designed the curriculum in an integrative fashion so that students learn the math they need when they need it.
BU Today:
What do you mean by learning the math they need when they need it? How is that different from the current practices in STEM disciplines?
BU Today:
You’ve said that the data science degree is different from most degree majors. Could you talk about that?
BU Today:
It sounds like this degree is well suited for students looking for a second major.
Yes. The data science major is very friendly to a minor and those who are hoping to pursue a dual degree. Eventually, we hope that students will be able to complete double majors across colleges; the DS major will lend itself well to that in the future. It leaves a lot of room for students to explore different fields and interests. In fact, with only 64 required credits, students can do as many as half of the courses required for a BU degree in other departments. It’s also important to say that this is a highly flexible major. Once done with the foundational and core requirements for the major, students can get the same data science degree by following one of two different paths. One path is more technology-focused, getting students deeper into statistics, machine learning, software and data engineering, and AI. The other path is what we call Data Science in the Field, which is more about connecting data science with a particular discipline or set of disciplines.
BU Today:
How will students decide which path to take?
BU Today:
What kind of job titles and careers should students in this major expect to have when they graduate?
BU Today:
What can you tell students about why they should consider this new major?
Students should think about majoring in data science not only because they are excited about data science as a technical subject with amazing career opportunities, but also because they are passionate about a particular discipline or cause and want to explore the ways data-driven approaches can make a difference. The program puts students in front of real data and big questions. They will learn the tools of data science and the foundations of data science by being immersed in curricular and cocurricular experiential learning opportunities leading them to complete a practicum in collaboration with industry or with a nonprofit.
BU Today:
Bottom line: how would you describe the goal of this program?
The major is really about putting students in a position where they can make a bigger impact in whatever career they choose to pursue.
Why Chief Data Scientist Is Important For An Organization?
A chief data scientist bridges the gap between the organization and data scientists.
Organizations often engage in conversations where the role and output of data scientists are scrutinized with some uncertainty, and the necessity of a chief data scientist is frequently debated. What most organizations fail to comprehend is that the chief data scientist's role is not confined to working as just another employee; the role unburdens the Chief Technology Officer (CTO) by monitoring the data scientists.

Data scientists are among the most valuable assets of an organization. They are the modern-day, data-hungry miners of the tech world who can convert raw data into valuable insights. They are researchers who explore every option and look at every algorithm before giving a green flag to the insights. Truthfully, in traditional organizations, the chance of data scientists getting the required amount of guidance and monitoring from the CTO is low. Ira Cohen, CTO of Anodot, says, “The reason why you need a Chief Data Scientist in the first place is you need somebody who can bridge the gap between management and [the data scientists], and what machine learning can do and cannot do. You need somebody who understands what it is in a deeper form than a CTO, who might have a broader knowledge of a lot of things, but not necessarily machine learning.”

A machine learning algorithm is backed by a huge amount of data, but the journey from harnessing data to deploying algorithms and gaining valuable insights is not exactly smooth sailing. Different departments have silos, which erode trust across the organization, and outcomes often fall short of people's expectations. Part of a chief data scientist's job is to make sure that the machine learning models are working well and that data is transferred seamlessly.

Data scientists experiment with data. They go down the rabbit hole, search for the problem, fix it, and then provide insights. This often involves both successes and failures, and the success of an organization is often driven by the capabilities of its data scientists. While some organizations give data scientists room to excavate, experiment with data, and seek novel solutions, many organizations consider this unnecessary. It is then the job of the chief data scientist to support this aspect of the data scientists' work and help them manage the time spent digging into the data. Ira adds, “When you’re a researcher, it’s very easy for you to go down rabbit holes. You go down lots of rabbit holes. I’ve had to pull my people out of rabbit holes. This is part of what we do. ‘You’ve done enough, pull out. If we have time, we’ll go down that rabbit hole again. But let’s move to the next hole.’” This also boosts the confidence and enthusiasm of the data scientists and helps them build trust with the organization.

Thus, the job of a chief data scientist has more to do with monitoring the internal environment of a company than with observing the external infrastructure. To reap the maximum benefit from the capabilities of data scientists and to overcome the silos between departments, organizations should consider making a chief data scientist part of the organization.
76% Of Organizations Use A Strategic Content Management Approach
Chart of the Week: 76% of marketers say their companies follow a strategic approach to marketing but not all of these have a documented content strategy.
The majority of marketers say that their organization takes a strategic approach to managing their content. According to new research from the Content Marketing Institute, 76% of marketers state that their company follows a strategic approach when it comes to content marketing.
Of those who said that this is the case, 97% said that they were involved in the strategic content management used by their company. This covered a number of areas, with 89% being involved in the creation of content and 84% having a say in the organization’s content marketing strategy, including thought leadership, owned media management and distribution channels.
Other areas of involvement in business’ strategy included the content strategy itself (77%), which includes governance, content management, audits and taxonomies, general marketing (59%), communications (53%) and information technology (10%).
Why don’t organizations take a strategic approach?
Of those whose companies don’t take a strategic approach, the biggest reason given was leadership: according to 56% of respondents, a strategic approach to content management has not been made a priority by those in charge.
Another reason, given by just under half (49%) of respondents, is that their organization doesn’t have enough processes in place, with 47% placing the blame on the culture of their company.
In addition to these reasons, a lack of financial investment in resources (41%) and the fact that leadership doesn’t view content as something that needs to be strategically managed (39%) were also cited.
All of these reasons could mean that 24% of organizations not using a strategic approach to content management might be failing to get the most out of content marketing. A strategic approach means that companies can better scale and deliver content, ultimately improving the overall customer experience, which can help improve conversion and keep customers engaged over time.
Documented content management strategies
While 76% of respondents say their organizations take a strategic approach to content marketing management, only 59% have a documented content marketing strategy. This means that 41% of companies could face issues when handing their strategy over to new members of staff, ensuring that the strategy is followed, documenting any changes, or assessing which aspects of the strategy are no longer working.
Maintaining an up-to-date content marketing strategy document can ensure that everyone working on the organization’s content is essentially on the same page. This means that all content is being created in-line with the strategy and goals of the company and that all departments are working together.
Of those whose organizations do have a documented strategy, 94% say that it includes the business’ goals and objectives, while 79% state their strategy includes defined roles and responsibilities. Other elements included in strategy documents include measurements and KPIs (76%), desired outcomes (72%), defined workflows (71%), timeframes (62%), content governance specifications (59%) and training guidelines (21%).
Ideally, all content marketing strategy documents should cover these points in order to ensure they are as informative as possible. After all, there is not much point in including an organization’s goals if your strategy does not state how success is to be measured.
Content audits
Content audits are an important part of a content marketing strategy, as they enable a review of all content and ensure that a website is creating the best customer experience. Performing an audit means you are able to ascertain whether your content is effective and reaching your goals, whether it offers the best experience, whether it is engaging, and whether it is aiding conversion. You can also take a look at what content can be repurposed and updated, saving time as it reduces the need for brand new content to be created.
These benefits are likely why 66% of respondents say their organizations have undertaken a content audit to evaluate their existing content. A further 67% have also undertaken a content inventory to create a list of all current content assets.
Performing both of these tasks ensures you have a thorough understanding of the content on your website and helps with the creation of an effective content marketing strategy.
A large number of respondents also said that their organizations have undertaken a content gap analysis (56%) to see where they need additional content, research to understand potential audiences to improve a content strategy (55%) and research to better understand user experience (52%).
Only 6% of respondents who are involved with the strategic management of content marketing in their organizations said they haven’t undertaken any of these activities. This suggests that they may still not be getting the most out of their content strategy due to the lack of research around their audience and understanding of current content.
Development of content marketing
Those respondents whose organizations do take a strategic approach to the management of their content marketing also tend to have a number of content development aids in place compared to those who don’t manage content strategically.
One of the biggest aids used is search engine optimization (SEO) and keyword research, which 77% of those who strategically manage content make use of. In comparison, just 52% of those who don’t have a content management strategy in place use SEO and keyword research to aid in the development of their content marketing.
Other content development aids used by both sets of respondents include editorial calendars (72% versus 45%), editorial guidelines (72% versus 45%), content performance analytics (64% versus 45%) and customer personas (61% versus 32%).
One of the biggest differences when it comes to content aids between those who do and those who don’t manage content strategically is a formal workflow process. While 57% of strategic content managers have a formal workflow process in place, just 18% of those who don’t strategically manage content do. This can mean that the workflow varies from team to team, if not person to person, resulting in work being delivered in different ways and at different times.
Content delivery
Despite such a large proportion of survey respondents managing content strategically, only 10% of those involved in the management believe that their organization delivers the right content to the right person at the right time. Some 39% somewhat agree that their organization succeeds in this way, while 20% somewhat disagree with the statement.
While only 2% strongly disagree that their organization gets content delivery right, it is still a fairly small number who think they truly succeed. This raises the question of whether their content management strategies are up to date and whether they need revising in order to get a better result.
While content may not reach the right people at the right time, 26% of respondents strongly agree that their organization extracts meaningful insights from content marketing data and analytics. This makes sense, as just under half (44%) of organizations view content as a business asset, so they are likely to want to get the most out of it.
However, these insights still are not being used to ensure the right people are seeing the content at a time in the customer journey that will be most beneficial to them and the company. So while the information is useful, it may not be utilized to fully optimize a strategy.
Final thoughts
Although many companies are strategically managing their content marketing efforts, it doesn’t look like this always pays off. While organizations that do so use more content development aids than those that don’t, they are not always successfully timing their content or targeting it at the right audience.
Only by constant measurement and regular auditing can you ensure that your content marketing brings the right results.