# PCA (Principal Component Analysis) on the MNIST Dataset


This article was published as a part of the Data Science Blogathon.

Hello Learners, Welcome!

In this article, we are going to learn about PCA and implement it on the MNIST dataset from scratch. Before we apply the PCA technique to the MNIST dataset, we will first learn what PCA is, the geometric interpretation of PCA, the mathematical formulation of PCA, and then the implementation itself.

The dataset we are going to use in this article is called the MNIST dataset, which contains handwritten digits 0 to 9. In this dataset, the information of a single digit is stored as a 784×1 array, where each element of the 784×1 array represents a single pixel of a 28×28 image. The value of a single pixel varies from 0 to 1, where black is represented by 1, white by 0, and intermediate values represent shades of grey.

Geometric Interpretation of PCA:

Now let’s take an example. Suppose we have a d×n dimensional dataset called X, where d = 2 and n = 20, and the two features of the dataset are f1 and f2.

Now let’s say we make a scatter plot with this data and its distribution looks like the figure shown below.

After seeing the scatter plot, you can easily say that the variance of feature f1 is much greater than the variance of feature f2; the variability of f2 is unimportant compared to that of f1. If we have to choose one feature between f1 and f2, we can easily select f1. Now suppose that you cannot visualize 2-D data, and to visualize it you have to convert your 2-D data into 1-D data. What do you do? The simple answer is that you keep the features with the highest variance and remove those with less impact on the overall result, and that’s what PCA internally does.

So first of all we ensure that our data is standardized, because performing PCA on standardized data is much easier than on the original data.

Now again, let’s say we have a d×n dimensional dataset called X, where d = 2 and n = 20, the two features of the dataset are f1 and f2, and remember that we standardized the data. In this case, the scatter plot looks like this.

In this case, if we have to reduce the dimensions from 2-D to 1-D, then we can’t clearly select feature f1 or f2, because this time the variance of both features is almost the same and both features seem important. So how does PCA do it?

In this situation, PCA tries to draw a vector or line in the direction where the variance of the data is highest. Instead of projecting the data and measuring the variance along the f1 or f2 axis, we quantify the variance in the f1′ or f2′ direction, because measuring the variance in those directions makes much more sense.

So PCA tries to find the direction of the vector or line where the variance of the data is highest. The direction with the highest variance is called PC1 (Principal Component 1), the second-highest PC2, the third PC3, and so on.

Mathematical Formulation of PCA:

So we have seen the geometric intuition of PCA and how it reduces the dimensions of data: PCA simply finds the direction where the variance of the data is highest and draws a vector along it. But you might wonder how PCA finds the right direction, how it calculates the angle and gives us the accurate slope. PCA uses two equivalent formulations to find the direction of the vector: Variance Maximization and Distance Minimization. Let’s learn about them briefly.

1. Variance Maximization: In this method, we simply project all the data points on the unit vector u1 and find the variance of all projected data points. We select that direction where the variance of projected points is maximum.

Let’s assume that we have a two-dimensional dataset whose features are f1 and f2, xi is a data point, and u1 is our unit vector. If we project the data point xi onto u1, the projected point is xi’.

u1 = unit vector

f1 and f2 = features of dataset

xi = data point

xi’ = projection of xi on u1

Now assume that D = { xi } (i = 1 to n) is our dataset,

and D’ = { xi’ } (i = 1 to n) is the dataset of the projections of the xi onto u1.

The mean of the projected points is x̄’ = u1ᵀ · x̄ ……..(2) [ x̄ = mean of the xi ]

So we have to find u1 such that the variance of the projections, var{ u1ᵀ · xi } (i = 1 to n), is maximum.

If the data is column-standardized, then every feature has mean = 0 and variance = 1,

so x̄ = [0, 0, 0, … , 0] and the mean term vanishes.

The objective therefore reduces to maximizing (1/n) · Σᵢ (u1ᵀ · xi)² over all unit vectors u1.
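To make this concrete, here is a small sketch (the synthetic data and function names are my own, not from the article) that scans candidate unit vectors and keeps the direction with the maximum projected variance, which is exactly the quantity PCA maximizes:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-D data whose variance mostly lies along one oblique direction
x = rng.normal(size=200)
data = np.column_stack([x, 0.9 * x + 0.1 * rng.normal(size=200)])
data = data - data.mean(axis=0)          # column-center (mean 0)

def projected_variance(u, X):
    """Variance of the points projected onto the unit vector u."""
    u = u / np.linalg.norm(u)
    return np.var(X @ u)

# Scan candidate directions and keep the one with maximum projected variance
angles = np.linspace(0, np.pi, 1800)
variances = [projected_variance(np.array([np.cos(a), np.sin(a)]), data)
             for a in angles]
best = angles[int(np.argmax(variances))]
print("direction of max variance (degrees):", np.degrees(best))
# should come out near arctan(0.9) ≈ 42 degrees for this data
```

PC1 is the direction returned by this scan; a real implementation finds it via an eigen-decomposition instead of a brute-force search, as the article does later.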

2. Distance Minimization: In this technique, PCA tries to minimize the distance of each data point from the line through u1 (a unit vector of length 1).

We want to minimize the sum of all the squared distances.
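The two formulations agree because, by Pythagoras, the squared projection plus the squared distance to the line equals the (fixed) squared length of each point. A small sketch (synthetic data, names mine) checking this numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy 2-D dataset, mean-centered
X = rng.normal(size=(100, 2)) @ np.array([[2.0, 0.0], [1.0, 0.5]])
X = X - X.mean(axis=0)

def split_energy(u, X):
    """Sum of squared projections onto u, and sum of squared distances to the line through u."""
    u = u / np.linalg.norm(u)
    proj = X @ u                          # scalar projections onto u
    resid = X - np.outer(proj, u)         # component perpendicular to u
    return np.sum(proj ** 2), np.sum(resid ** 2)

angles = np.linspace(0, np.pi, 720)
energies = [split_energy(np.array([np.cos(a), np.sin(a)]), X) for a in angles]
sq_projs = np.array([e[0] for e in energies])
sq_dists = np.array([e[1] for e in energies])

# Pythagoras: projection energy + residual energy = total energy, for every direction
assert np.allclose(sq_projs + sq_dists, np.sum(X ** 2))
# Hence the direction that maximizes variance also minimizes squared distance
assert np.argmax(sq_projs) == np.argmin(sq_dists)
```

Because the two energies always sum to the same total, maximizing one is exactly minimizing the other: variance maximization and distance minimization find the same PC1.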

Implementing PCA on MNIST dataset:

We talked about the MNIST dataset earlier and have just completed our understanding of PCA, so it is the best time to perform the dimensionality reduction technique PCA on the MNIST dataset. The implementation will be from scratch, so without wasting any more time, let’s start.

So first of all we import the Python libraries required for the implementation of PCA.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.read_csv('mnist_train.csv', nrows=20000)
print("the shape of data is :", df.shape)
df.head()
```


Extracting label column from the dataset

```python
label = df['label']
df.drop('label', axis=1, inplace=True)

ind = np.random.randint(0, 20000)
plt.figure(figsize=(20, 5))
grid_data = np.array(df.iloc[ind]).reshape(28, 28)
plt.imshow(grid_data, interpolation=None, cmap='gray')
plt.show()
print(label[ind])
```

The code above plots a random sample data point from the dataset using matplotlib’s imshow() method.

Next we column-standardize our dataset using the StandardScaler class of the sklearn.preprocessing module. After column standardization, the mean of every feature becomes 0 (zero) and its variance becomes 1, so PCA operates from the origin.

```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
std_df = scaler.fit_transform(df)
std_df.shape
```

Now find the covariance matrix, which for column-standardized data is AᵀA (up to a constant factor), using NumPy’s matmul method. After the multiplication, the dimensions of our covariance matrix are 784 × 784, because Aᵀ(784 × 20000) · A(20000 × 784).

```python
covar_mat = np.matmul(std_df.T, std_df)
covar_mat.shape
```

Next we find the top two eigenvalues and corresponding eigenvectors for projecting onto a 2-D surface. The parameter ‘eigvals’ is specified as (low index, high index); the eigh function returns the eigenvalues in ascending order, and this code computes only the top 2 (indices 782 and 783).

We then convert the eigenvectors into (2, d) form for ease of further computation:

```python
from scipy.linalg import eigh

# Note: newer SciPy versions replace the 'eigvals' argument with 'subset_by_index'
values, vectors = eigh(covar_mat, eigvals=(782, 783))
print("Dimensions of Eigen vector:", vectors.shape)
vectors = vectors.T
print("Dimensions of Eigen vector:", vectors.shape)
```

Here, after the transpose, each row of vectors is an eigenvector: one row corresponds to the 1st principal eigenvector and the other to the 2nd (eigh returns them in ascending order of eigenvalue).

If we multiply these top two eigenvectors with the standardized data matrix, we get our two principal components, PC1 and PC2.

```python
final_df = np.matmul(vectors, std_df.T)
print("vectors:", vectors.shape, "\n",
      "std_df:", std_df.T.shape, "\n",
      "final_df:", final_df.shape)
```

Now we vertically stack final_df and label and transpose the result, which gives us a NumPy data table; with pd.DataFrame we then create a data frame of our two components together with the class labels.

```python
final_dfT = np.vstack((final_df, label)).T
dataFrame = pd.DataFrame(final_dfT, columns=['pca_1', 'pca_2', 'label'])
dataFrame
```

Now let’s visualize the final data with help of the seaborn FacetGrid method.

```python
# Note: the 'size' parameter was renamed to 'height' in newer seaborn versions
sns.FacetGrid(dataFrame, hue='label', height=8) \
    .map(sns.scatterplot, 'pca_1', 'pca_2') \
    .add_legend()
plt.show()
```

So you can see that we successfully converted our 20000 × 785 data to 20000 × 3 using PCA. This is how PCA reduces high-dimensional data to a much smaller number of dimensions.
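As a sanity check, the from-scratch route above can be compared against a library implementation. The sketch below uses synthetic stand-in data (since mnist_train.csv may not be at hand) and the newer subset_by_index argument of SciPy's eigh; the two routes should agree up to column order and sign flips, which are arbitrary for eigenvectors:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Stand-in for the MNIST frame: 500 samples, 20 correlated features
raw = rng.normal(size=(500, 20)) @ rng.normal(size=(20, 20))
std = StandardScaler().fit_transform(raw)

# From-scratch route, as in the article: covariance matrix -> top-2 eigenvectors
covar = std.T @ std
vals, vecs = eigh(covar, subset_by_index=(18, 19))   # top 2, ascending order
scratch = (vecs.T @ std.T).T                         # shape (500, 2)

# Library route
lib = PCA(n_components=2).fit_transform(std)

# The two agree up to column order (ascending vs descending) and sign flips
for col in range(2):
    diff = min(np.abs(scratch[:, col] - lib[:, 1 - col]).max(),
               np.abs(scratch[:, col] + lib[:, 1 - col]).max())
    assert diff < 1e-5
```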

What did we learn in this article? We took a brief intro to PCA and the mathematical intuition behind it. That was all from me; thank you for reading this article. I am currently pursuing a B.Tech in CSE and I love to write articles on data science. Hope you liked this article.

Thank you.


## 20 Questions To Test Your Skills On Dimensionality Reduction (Pca)


Introduction

Principal Component Analysis is one of the famous Dimensionality Reduction techniques which helps when we work with datasets having very large dimensions.

Therefore it becomes necessary for every aspiring Data Scientist and Machine Learning Engineer to have a good knowledge of Dimensionality Reduction.

In this article, we will discuss the most important questions on Dimensionality Reduction, which will help you get a clear understanding of the techniques and also prepare for Data Science interviews, covering everything from the fundamentals to complex concepts.

Let’s get started.

1. What is Dimensionality Reduction?

In Machine Learning, dimension refers to the number of features in a particular dataset.

In simple words, Dimensionality Reduction refers to reducing the number of dimensions or features so that we get a more interpretable model and improve the performance of the model.

2. Explain the significance of Dimensionality Reduction.

There are basically three reasons for Dimensionality reduction:

Visualization

Interpretability

Time and Space Complexity

Let’s understand this with an example:

Imagine we have worked on an MNIST dataset that contains 28 × 28 images and when we convert images to features we get 784 features.

If we try to think of each feature as one dimension, then how can we think of 784 dimensions in our mind?

We are not able to visualize the scattering of points of 784 dimensions.

That is the first reason why Dimensionality Reduction is Important!

Let’s say you are a data scientist and you have to explain your model to clients who do not understand Machine Learning, how will you make them understand the working of 784 features or dimensions.

In simple language, how we interpret the model to the clients.

That is the second reason why Dimensionality Reduction is Important!

Let’s say you are working for an internet-based company where the output of something must be in milliseconds or less than that, so “Time complexity” and “Space Complexity” matter a lot. More features need more Time which these types of companies can’t afford.

That is the third reason why Dimensionality Reduction is Important!

3. What is PCA? What does a PCA do?

PCA stands for Principal Component Analysis. It is a dimensionality reduction technique that summarizes a large set of correlated variables (basically high-dimensional data) into a smaller number of representative variables, called the Principal Components, that explain most of the variability of the original set, i.e., without losing much of the information.

PCA is a deterministic algorithm: there are no parameters to initialize, and it doesn’t have the problem of local minima that many machine learning algorithms have.

4. List down the steps of a PCA algorithm.

The major steps which are to be followed while using the PCA algorithm are as follows:

Step-1: Get the dataset.

Step-2: Compute the mean vector (µ).

Step-3: Subtract the means from the given data.

Step-4: Compute the covariance matrix.

Step-5: Determine the eigenvectors and eigenvalues of the covariance matrix.

Step-6: Choosing Principal Components and forming a feature vector.

Step-7: Deriving the new data set by taking the projection on the weight vector.
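The seven steps above can be sketched directly in NumPy (the function name and toy data are my own, used here only for illustration):

```python
import numpy as np

def pca_from_steps(X, k):
    """Follow the seven steps above on a data matrix X (n samples x d features)."""
    mu = X.mean(axis=0)                    # Step 2: compute the mean vector
    Xc = X - mu                            # Step 3: subtract the mean
    C = np.cov(Xc, rowvar=False)           # Step 4: covariance matrix
    vals, vecs = np.linalg.eigh(C)         # Step 5: eigenvalues/eigenvectors (ascending)
    W = vecs[:, ::-1][:, :k]               # Step 6: top-k components as the feature vector
    return Xc @ W                          # Step 7: project onto the weight vector

# Tiny usage example: 3-D data that really lives on a 2-D plane
rng = np.random.default_rng(0)
flat = rng.normal(size=(200, 2))
X = flat @ rng.normal(size=(2, 3))        # embed 2-D data into 3-D
reduced = pca_from_steps(X, 2)
print(reduced.shape)                      # (200, 2)
```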

5. Is it important to standardize the data before applying PCA?

Usually, the aim of standardization is to assign equal weight to all the variables. PCA finds new axes based on the covariance matrix of the original variables, and since the covariance matrix is sensitive to the scale of the variables, using features on different scales often yields misleading directions.

Conversely, if all the variables are already on the same scale, there is no need to standardize them.
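A quick illustration of why scale matters (synthetic data and names are mine): a feature measured in large units dominates the covariance matrix unless we standardize first.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two independent features on wildly different scales
small_units = rng.normal(scale=1.0, size=1000)
large_units = rng.normal(scale=1000.0, size=1000)
X = np.column_stack([small_units, large_units])

def first_pc(X):
    """Eigenvector of the covariance matrix with the largest eigenvalue."""
    C = np.cov(X - X.mean(axis=0), rowvar=False)
    vals, vecs = np.linalg.eigh(C)
    return vecs[:, -1]

# Unscaled: PC1 is pulled almost entirely toward the large-unit feature
print(np.round(np.abs(first_pc(X)), 3))   # close to [0, 1]

# Standardized: both features contribute comparably
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
print(np.round(np.abs(first_pc(Xs)), 3))
```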

6. Is rotation necessary in PCA? If yes, Why? Discuss the consequences if we do not rotate the components?

Yes, the idea behind rotation i.e, orthogonal Components is so that we are able to capture the maximum variance of the training set.

If we don’t rotate the components, the effect of PCA will diminish and we’ll have to select more Principal Components to explain the maximum variance of the training dataset.

7. What are the assumptions taken into consideration while applying PCA?

The assumptions needed for PCA are as follows:

1. PCA is based on Pearson correlation coefficients. As a result, there needs to be a linear relationship between the variables for applying the PCA algorithm.

2. For getting reliable results by using the PCA algorithm, we require a large enough sample size i.e, we should have sampling adequacy.

3. Your data should be suitable for data reduction i.e., we need to have adequate correlations between the variables to be reduced to a smaller number of components.

4. No significant noisy data or outliers are present in the dataset.

8. What will happen when eigenvalues are roughly equal while applying PCA?

While applying the PCA algorithm, if all the eigenvalues come out roughly equal, the algorithm cannot prefer one direction over another: every Principal Component explains a similar amount of variance, so no meaningful subset of components can be selected.

9. What are the properties of Principal Components in PCA?

The properties of principal components in PCA are as follows:

1. These Principal Components are linear combinations of original variables that result in an axis or a set of axes that explain/s most of the variability in the dataset.

2. All Principal Components are orthogonal to each other.

3. The first Principal Component accounts for most of the possible variability of the original data i.e, maximum possible variance.

4. The number of Principal Components for n-dimensional data is at most equal to n (the dimension). For example, there can be only two Principal Components for a two-dimensional data set.
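Property 2 (orthogonality) is easy to verify numerically; here is a small sketch, assuming scikit-learn is available (the toy data is my own):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4)) @ rng.normal(size=(4, 4))

# Rows of components_ are the principal directions
comps = PCA(n_components=4).fit(X).components_

# Orthogonality (in fact orthonormality): the Gram matrix is the identity
gram = comps @ comps.T
assert np.allclose(gram, np.eye(4), atol=1e-10)
```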

10. What does a Principal Component in a PCA signify? How can we represent them mathematically?

The Principal Component represents a line or an axis along which the data varies the most and it also is the line that is closest to all of the n observations in the dataset.

In mathematical terms, we can say that the first Principal Component is the eigenvector of the covariance matrix corresponding to the maximum eigenvalue.

Accordingly,

Sum of squared distances of the projected points from the origin = Eigenvalue for PC-1

Square root of the Eigenvalue = Singular value for PC-1
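Both identities can be checked numerically on mean-centered data, using the unnormalized scatter matrix XᵀX so that the "sum of squared distances" is literal (toy data and names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 3))
Xc = X - X.mean(axis=0)

# Eigen-decomposition of the (unnormalized) scatter matrix Xc^T Xc
vals, vecs = np.linalg.eigh(Xc.T @ Xc)
pc1 = vecs[:, -1]                          # direction of PC-1
lam1 = vals[-1]                            # its eigenvalue

# Identity 1: sum of squared projections onto PC-1 equals the eigenvalue
sq_proj = np.sum((Xc @ pc1) ** 2)
assert np.isclose(sq_proj, lam1)

# Identity 2: sqrt(eigenvalue) equals the largest singular value of Xc
s = np.linalg.svd(Xc, compute_uv=False)
assert np.isclose(np.sqrt(lam1), s[0])
```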

11. What does the coefficient of Principal Component signify?

The coefficients (loadings) of a Principal Component tell us the relative importance of each variable along that component: if the coefficient of independent variable 2 is N times that of independent variable 1, then variable 2 contributes N times as much to that component.

12. Can PCA be used for regression-based problem statements? If Yes, then explain the scenario where we can use it.

Yes, we can use Principal Components for regression problem statements.


PCA would perform well in cases when the first few Principal Components are sufficient to capture most of the variation in the independent variables as well as the relationship with the dependent variable.

The only problem with this approach is that the new reduced set of features would be modeled by ignoring the dependent variable Y when applying a PCA and while these features may do a good overall job of explaining the variation in X, the model will perform poorly if these variables don’t explain the variation in Y.

13. Can we use PCA for feature selection?

Feature selection refers to choosing a subset of the features from the complete set of features.

No, PCA is not used as a feature selection technique, because every Principal Component axis is a linear combination of all the original feature variables, defining a new set of axes that explain most of the variation in the data.

Therefore while it performs well in many practical settings, it does not result in the development of a model that relies upon a small set of the original features.

14. Comment whether PCA can be used to reduce the dimensionality of the non-linear dataset.

PCA does not take the nature of the data (linear or non-linear) into consideration during its run, but it focuses on reducing the dimensionality of most datasets significantly; at the very least, PCA can get rid of useless dimensions.

However, reducing dimensionality with PCA will lose too much information if there are no useless dimensions.

15. How can you evaluate the performance of a dimensionality reduction algorithm on your dataset?

A dimensionality reduction algorithm is said to work well if it eliminates a significant number of dimensions from the dataset without losing too much information. Moreover, if dimensionality reduction is used as a preprocessing step before training a model, the performance of that downstream model is itself a measure of how well the reduction worked.

We can therefore infer if an algorithm performed well if the dimensionality reduction does not lose too much information after applying the algorithm.

Comprehension Type Question: (16 – 18) Consider a set of 2D points {(-3,-3), (-1,-1),(1,1),(3,3)}. We want to reduce the dimensionality of these points by 1 using PCA algorithms. Assume sqrt(2)=1.414.

SOLUTION:

Here the original data resides in R², i.e., two-dimensional space, and our objective is to reduce the dimensionality of the data to 1, i.e., 1-dimensional data ⇒ K = 1.

We will solve this set of problems step by step so that you have a clear understanding of the steps involved in the PCA algorithm:

Step-1: Get the Dataset

Here data matrix X is given by [ [ -3, -1, 1 ,3 ], [ -3, -1, 1, 3 ] ]

Step-2:  Compute the mean vector (µ)

Mean Vector: [ {-3+(-1)+1+3}/4, {-3+(-1)+1+3}/4 ] = [ 0, 0 ]

Step-3: Subtract the means from the given data

Since here the mean vector is 0, 0 so while subtracting all the points from the mean we get the same data points.

Step-4: Compute the covariance matrix

Therefore, the covariance matrix becomes XXT since the mean is at the origin.

Therefore, XXT becomes [ [ -3, -1, 1 ,3 ], [ -3, -1, 1, 3 ] ] ( [ [ -3, -1, 1 ,3 ], [ -3, -1, 1, 3 ] ] )T

= [ [ 20, 20 ], [ 20, 20 ] ]

Step-5: Determine the eigenvectors and eigenvalues of the covariance matrix

det(C-λI)=0 gives the eigenvalues as 0 and 40.

Now, choose the maximum of the calculated eigenvalues and find the eigenvector corresponding to λ = 40 by using the equation CX = λX:

Accordingly, we get the eigenvector as (1/√ 2 ) [ 1, 1 ]

Therefore, the eigenvalues of matrix XXT are 0 and 40.

Step-6: Choosing Principal Components and forming a weight vector

Here, U ∈ R^(2×1) and is equal to the eigenvector of XXᵀ corresponding to the largest eigenvalue.

Now, the eigenvalue decomposition of C=XXT

And W (weight matrix) is the transpose of the U matrix and given as a row vector.

Therefore, the weight matrix is given by  [1 1]/1.414

Step-7: Deriving the new data set by taking the projection on the weight vector

Now, reduced dimensionality data is obtained as xi = UT Xi = WXi

x1 = WX1= (1/√ 2 ) [ 1, 1 ] [ -3, -3 ]T = – 3√ 2

x2 = WX2= (1/√ 2)  [ 1, 1 ] [ -1, -1 ]T = – √ 2

x3 = WX3 = (1/√ 2)  [ 1, 1 ] [ 1, 1 ]T = √ 2

x4 = WX4 = (1/√ 2 ) [ 1, 1 ] [ 3, 3 ]T = 3√ 2

Therefore, the reduced dimensionality will be equal to {-3*1.414, -1.414,1.414, 3*1.414}.

This completes our example!
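The worked example can be verified with a few lines of NumPy (note that the sign of an eigenvector is arbitrary, so the projections may come out negated as a whole):

```python
import numpy as np

# The four 2-D points from the example, one per row
X = np.array([[-3, -3], [-1, -1], [1, 1], [3, 3]], dtype=float)

C = X.T @ X                               # mean is already the origin
vals, vecs = np.linalg.eigh(C)
print(vals)                               # eigenvalues 0 and 40, as derived above

u = vecs[:, -1]                           # eigenvector for lambda = 40, (1/sqrt(2))[1, 1]
projected = X @ u                         # 1-D coordinates
print(projected)                          # ±[-4.243, -1.414, 1.414, 4.243], up to overall sign
```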

19. What are the Advantages of Dimensionality Reduction?

1. Less misleading data means model accuracy improves.

2. Fewer dimensions mean less computing. Less data means that algorithms train faster.

3. Less data means less storage space required.

4. Removes redundant features and noise.

5. Dimensionality Reduction helps us to visualize the data that is present in higher dimensions in 2D or 3D.

20. What are the Disadvantages of Dimensionality Reduction?

1. It can be computationally intensive.

2. Transformed features are often hard to interpret.

3. It makes the independent variables less interpretable.

End Notes

Currently, I am pursuing my Bachelor of Technology (B.Tech) in Computer Science and Engineering from the Indian Institute of Technology Jodhpur(IITJ). I am very enthusiastic about Machine learning, Deep Learning, and Artificial Intelligence.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.


## How To Call A Vuejs Component Method From Outside The Component

Generally, we cannot call a Vue component method from outside the component, but there is a way to do it: Vue’s ref directive. This method allows a component to be referenced from the parent for direct access.

To apply the ref directive, we first create a div element with id ‘app’. Once the div element is created, we can apply the ref to the component by initializing its data.

Syntax

Following is the syntax to call a component method from outside the component in Vue.js −

Here, the component is named “my-component” and its “ref” attribute is set to “foo”, which lets the parent access the component instance as vm.$refs.foo.

Example

Create two files, app.js and index.html, in your Vue project. The directory structure and code snippets are given below for both files. Copy and paste the code snippets into your Vue project and run it. You should see the output below in the browser window.

FileName – app.js

Directory Structure — $project/public — app.js

```javascript
var MyComponent = Vue.extend({
   data: function() {
      return { things: ['first thing'] };
   },
   methods: {
      addThing: function() {
         this.things.push('New Thing ' + this.things.length);
      }
   }
});
var vm = new Vue({
   el: '#app',
   components: { 'my-component': MyComponent }
});
vm.$refs.foo.addThing();
```

FileName – index.html

Directory Structure — $project/public — index.html

Welcome to Tutorials Point

Run the following command to get the below output −

Complete Code

Let’s define a complete working code by combining the above two files- chúng tôi and index.html. We can now run this code as an HTML program.

```javascript
var MyComponent = Vue.extend({
   data: function() {
      return { things: ['first thing'] };
   },
   methods: {
      addThing: function() {
         this.things.push('New Thing ' + this.things.length);
      }
   }
});
var vm = new Vue({
   el: '#app',
   components: { 'my-component': MyComponent }
});
vm.$refs.foo.addThing();
```

In this article, we demonstrated how to call a component method from outside the component in Vue.js. To do this, we created two files, named app.js and index.html.

## Component Testing Vs Unit Testing

Difference Between Component Testing vs Unit Testing

Start Your Free Software Development Course

Web development, programming languages, Software testing & others

Unit Testing is a software testing technique in which individual applications or modules are tested to check that the program executes according to the specification. It is a form of white-box testing in which individual software units are tested to determine whether or not they are fit for use. A software unit may be a group of procedures, functions, or modules. Error detection is easy in unit testing, as it is done after each development step. For the module being evaluated, a driver function is responsible for creating the method calls, and any component the module depends on is imitated as a stub; these stubs are temporary substitutes for methods that are not yet available.

Key Difference Between Component Testing vs Unit Testing

The main difference between component testing and unit testing is that the component testing is performed by testers on the other hand the Unit testing is executed by SDET professionals or developers. Component testing is performed at the application level whereas unit testing is done at a granular level.

It is examined in unit testing whether the piece of code or individual program is executed as defined. In component testing, each software object is independently evaluated with or without separation from other device objects or components.

In component testing, testing is done by validating use cases and test requirements whereas Unit testing is tested against design documents.

Component testing is a type of black box testing while unit testing is a type of white box testing.

Component testing is performed once the unit testing is complete, while unit testing is performed before component testing. In component testing, the tester does not need knowledge of the internal architecture of the software; in unit testing, developers know the internal architecture of the software.

Error detection is a bit more difficult in component testing than in unit testing, as it is performed only after the whole software is developed, whereas unit testing is done after each development step. Component testing is therefore important for finding errors and bugs at the application level, but to make sure each individual part works correctly on its own, unit testing is conducted first.

Component Testing vs Unit Testing Comparison Table

Let’s discuss the top comparison between Component Testing vs Unit Testing:

| Sr. No | Component Testing | Unit Testing |
|---|---|---|
| 1 | Each object or component of the software is tested separately. | Individual modules or programs are tested for correct execution. |
| 2 | It validates use cases and test requirements. | It is tested against design documents. |
| 3 | It is performed once the unit testing is complete. | It is performed before component testing. |
| 4 | The tester does not need knowledge of the internal architecture of the software. | Developers know the internal architecture of the software. |
| 5 | Error detection is a bit difficult compared to unit testing. | Error detection is easy. |
| 6 | It is performed only after the whole software is developed. | It is done after each development step. |
| 7 | It is done at the application level. | It is done at a granular level. |
| 8 | It is a type of black-box testing. | It is a type of white-box testing. |

Conclusion

In this article we have seen the key differences between Component Testing and Unit Testing. Component testing is much like unit testing, but it is conducted at a higher level, in the context of the application and its integration. If unit testing is done correctly, there are fewer bugs in the next stage; hence it is conducted before component testing, which then validates the assembled components.

Recommended Articles

This is a guide to Component Testing vs Unit Testing. Here we discuss the key differences with infographics and comparison table respectively. You may also have a look at the following articles to learn more –

## How Switch Component Works In React

Introduction to React-Native Switch

React-Native Switch is a component controlled by a Boolean, which sets its value to true or false. To update the value prop in response to user actions, the onValueChange callback method of the React-Native Switch is used. If the value prop is not updated, the component won’t reflect the user’s action and will instead keep rendering the supplied value. The props of the Switch are disabled, trackColor, ios_backgroundColor, onValueChange, testID, thumbColor, tintColor, and value. The most used props of the Switch are onValueChange (invoked when the switch value changes) and value (the switch value).

Start Your Free Software Development Course

Web development, programming languages, Software testing & others

```javascript
import { Switch } from 'react-native'

<Switch
   onValueChange={ (value) => this.setState({ toggled: value }) }
   value={ this.state.toggled }
/>
```

Syntax to use Render in the Switch:

How Switch Component works in React-Native?

The working of switch component in react native is defined in the following steps:

Step 1: For logic, HomeContainer component is used, and in the code below presentational component is created with the help of new file SwitchExample.js.

Step 2: To toggle switch items in the SwitchExample component, the value is passed from the state along with the toggle functions, which are used for updating the state. The Switch component takes two props: when a user presses the switch, the onValueChange prop triggers the toggle function, and the value prop is bound to the state of the HomeContainer component. If the Switch is pressed, the state is updated and you can check the values in the console; before that, the values are bound to their defaults.

Logic and Presentation of Switch in the Application

Given below is the coding for logic and presentation of switch in the application:

Code:

```javascript
import React, { Component } from 'react'
import { StyleSheet, Switch, View, Text } from 'react-native'

export default class SwitchExample extends Component {
   state = { switchValue: false };

   render() {
      return (
         <View style={styles.container}>
            <Text style={styles.textStyle}>
               {this.state.switchValue ? 'ON' : 'OFF'}
            </Text>
            {/* onValueChange handler reconstructed; the original was lost in extraction */}
            <Switch
               value={this.state.switchValue}
               onValueChange={(switchValue) => this.setState({ switchValue })}
            />
         </View>
      );
   }
}

const styles = StyleSheet.create({
   container: {
      flex: 1,
      alignItems: 'center',
      justifyContent: 'center',
      backgroundColor: '#96f2ca',
   },
   textStyle: {
      margin: 25,
      fontSize: 24,
      fontWeight: 'bold',
      textAlign: 'center',
      color: '#3a4a35'
   }
})
```

Output:

Examples of React Native Switch

Given below are the examples:

Example #1

React Native Switch.

In the example below, initially the Switch value is set to “FALSE” and the TEXT displays “OFF”. When the value of the Switch changes to “TRUE” via onValueChange, the TEXT component resets to “ON”.

```javascript
import React from 'react';
import { Switch, Text, View, StyleSheet } from 'react-native';

export default class App extends React.Component {
   state = { switchValue: false };

   // toggleSwitch and the Text element reconstructed from the surrounding description
   toggleSwitch = (switchValue) => {
      this.setState({ switchValue });
   };

   render() {
      return (
         <View style={styles.container}>
            <Text>{this.state.switchValue ? 'ON' : 'OFF'}</Text>
            <Switch
               style={{ marginTop: 31 }}
               onValueChange={this.toggleSwitch}
               value={this.state.switchValue}
            />
         </View>
      );
   }
}

const styles = StyleSheet.create({
   container: {
      flex: 1,
      justifyContent: 'center',
      alignItems: 'center',
      backgroundColor: '#edb5ff',
   },
});
```

Output:

Example #2

Using Switch Case Statement in React Native.

Code:

Output:

Example #3

Customisable Switch Component for React Native.

```javascript
import React, { Component } from 'react';
import { StyleSheet, Text, View, Switch, Alert } from 'react-native';

export default class App extends Component {
   constructor() {
      super();
      this.state = { SwitchOnValueHolder: false };
   }

   // Handler name reconstructed; the original method wrapper was lost in extraction
   onSwitchChange = (value) => {
      this.setState({ SwitchOnValueHolder: value });
      if (value == true) {
         Alert.alert("You have turned ON the Switch.");
      } else {
         Alert.alert("You have turned OFF the Switch.");
      }
   };

   render() {
      return (
         <View style={styles.container}>
            <Switch
               value={this.state.SwitchOnValueHolder}
               onValueChange={this.onSwitchChange}
            />
         </View>
      );
   }
}

const styles = StyleSheet.create({
   container: {
      flex: 1,
      justifyContent: 'center',
      alignItems: 'center',
      backgroundColor: '#afff63',
   },
   text: {
      fontSize: 19,
      color: '#000000',
   },
});
```

Output:

The images below show the window that appears when the Switch is turned ON and when it is turned OFF, respectively.

When the Switch is in the ON state:

When the Switch is in the OFF state:

Conclusion

Here we learned that the Switch is set to ON when the value prop is TRUE, and to OFF when the value prop is FALSE, which is also the default. We have also seen the working of the Switch in React Native, from creating a file, to the logic, to the presentation: a simple switch, a switch built with a switch-case statement, and a customizable switch. In React Native, a switch can be developed very easily and efficiently.

Recommended Articles

This is a guide to React-Native Switch. Here we discuss the introduction, how the Switch component works in React Native, and examples. You may also have a look at the following articles to learn more –

## Mesh Elite Skylake PCA Gaming PC Review

Pros

Cons

Our Verdict

At £999 without a monitor, the Mesh Elite Skylake PCA is one of the most expensive PCs in our group test, but its features and build quality most certainly earn that price tag. It feels like a higher-quality system, and the high-end internal components deliver useful additional capabilities and bags of upgrade potential.

It may not resemble a gaming PC at first glance, but the Mesh Elite Skylake PCA exudes quality. Its tower case comes with a matt black finish that's soft to the touch, giving it an expensive feel, while at the top an illuminated display shows the current CPU temperature in a variety of colours which can be altered at the push of a button.

Unlike most of the gaming PCs we review, the case has no transparent side panel. That's a real shame in this case, because the Mesh Skylake PCA is by far the most impressive-looking inside. The case is spacious, with plenty of available drive bays, and the Gigabyte GA-Z170X-Gaming 3 motherboard features attractive red and black details. Most impressive, though, is the Raijintek Triton 240mm high-performance all-in-one CPU cooler, its two fat transparent pipes filled with striking blue coolant. There's also a blue downlight which illuminates the desk surface from the bottom of the case. We'd say the build quality of this case is considerably higher than most, certainly a tier above those from Cyberpower and Chillblast.

Under that fancy cooler lurks an Intel Core i5-6600K Skylake processor, overclocked from 3.5GHz to 4.4GHz. This yields a decent boost in performance without pushing components to the absolute limit. It's coupled with 16GB of 2400MHz DDR4 RAM (a little faster than the base 2133MHz found in lower-end systems) and comes with a 250GB Samsung SSD backed by a 1TB Seagate hard drive. Although the SSD uses one of the two M.2 ports on the motherboard, it uses the SATA interface rather than PCI-E, so it can't match the raw performance of the 128GB Samsung SM951 used by Chillblast. However, it performs very well, and its extra capacity may well prove more beneficial than extra speed.

Mesh has opted for the ever-popular Nvidia GeForce GTX 970 graphics card in the Elite Skylake PCA, and in this case it’s a Palit-branded model running at standard clock speeds, rather than the boosted speeds found in some competitors’ systems.

Mesh's chosen motherboard doesn't just look good; it's also designed specifically for gaming and comes with a selection of features not found on lesser models. It supports USB 3.1 Gen 2, which allows for speeds of up to 10Gb/s, and up to a claimed 16Gb/s using Intel's USB 3.1 controller. It also supports both USB Type-C and Type-A connectors.

Audio quality has also been boosted, with a claimed 115dB signal-to-noise ratio and support for the Sound Blaster X-Fi MB3 audio suite. The OP-Amp chips have also been made user-upgradable, so if you want the very best sound quality, you can swap them out for higher-fidelity alternatives of your choice.

If you prefer to use an external USB audio device, you can use Gigabyte's “DAC-UP” USB ports, which feature isolated power supplies to ensure there's no interference from other components.

The board also includes a high-performance “Killer Ethernet” network interface, designed to reduce latency and improve overall system performance, and is one of the few boards reviewed here to offer 2-way Nvidia SLI certification, allowing the addition of a second GTX 970 as a future upgrade. The supplied 750W power supply also provides plenty of upgrade headroom.

The Mesh Elite Skylake PCA is a great performer, but not the fastest overall. Chillblast's Fusion Krypton, for example, beats it in the application performance tests, probably due to its faster SSD, and also beats it by a few fps in gaming thanks to its factory-overclocked card. The Mesh system does come with double the SSD storage, however, which means more games can be installed on it for much faster loading times.

Performance in our tests was as follows:

PCMark 8 2.0 Home: 5316

PCMark 8 2.0 Work: 5748

PCMark 8 2.0 Creative: 7282

PCMark 8 2.0 Storage: 4996

Alien vs Predator 1080/720: 89.6/169.6fps

Sniper Elite V2 Ultra/Medium/Low: 47.6/203.2/444.7fps

Final Fantasy XIV Creation Benchmark Maximum: 130.4fps

3DMark Fire Strike Ultra/Fire Strike Extreme/Fire Strike/Sky Diver/Cloud Gate/Ice Storm Unlimited/Ice Storm Extreme/Ice Storm: 2,588/4,919/9,494/24,308/23,574/207,151/183,687/194,602

Max CPU temp under load: 51ºC

Power consumption idle/load: 63/251W

Mesh Elite Skylake PCA: Specs

3.5GHz Intel Core i5-6600K @ 4.4GHz

Raijintek Triton 240mm High Performance AIO Water Cooling Solution – BLUE Coolant

16GB DDR4 2400MHz

250GB Samsung M.2, 1 TB Seagate SATA 3 HDD

750W FSP Quiet Power Supply – Silver 80 PLUS

GIGABYTE GA-Z170X-Gaming 3

Windows 10 Home

Palit Nvidia GeForce GTX 970 4GB 1051/1178MHz Core, 7000MHz RAM

onboard sound

Killer Lan 2200 Gigabit ethernet

3x USB 3, 2x USB 2, 1x USB 3.1 Type-A, 1 x USB 3.1 Type-C

1 x D-Sub, 1 x DVI-D, 1 x HDMI

24x DVD Writer (read/write CD & DVD)

Aero Cool DS 200

ROCCAT ISKU Keyboard (ROC-12-722), ROCCAT Lua Mouse (ROC-11-310)

Gold Warranty (Lifetime Labour, 2 Year Parts, 1 Year Free Collect & Return)
