Intel’s Jeff Klaus: Edge Computing And Data Center Management
Clearly, managing a data center is much harder than it used to be. Emerging technologies like artificial intelligence, Big Data, and IoT have increased the workload, boosting expense and complexity.
To provide guidance on these trends, I spoke with Jeff Klaus, General Manager, Data Center Software at Intel. We discussed:
Why it’s harder to run a data center than ever before.
The tools and solutions that can improve the efficiency of data center management, and the role of DCIM (data center infrastructure management) tools.
The mega trend toward collecting and processing data at the edge. While more than 90 percent of data is processed in data centers today, Gartner predicts that by 2023 about 75 percent will be handled at the edge.
Key issues and challenges. How can IT managers prepare for this move to the edge?
“I give [data center managers] a lot of credit for managing the evolving set of technologies. We had this Big Data phenomenon and now it’s AI, and then you’re moving to the edge. And at the same time, the level of interconnects and devices that are connecting to the data center and the speed requirements – it’s a significant challenge.”
“We asked [data center managers] how many remote environments and endpoints they’re managing. And it turned out that close to 60% are managing five or more unique environments.
“And it might be the traditional data center that we think of, but it can also be this phenomenon of edge and the continued movement of getting compute resources closer to the customer, with more inexpensive positioning with the customer. And that level of complexity is going to continue, but it certainly has contributed to the challenges of data center operators today.”
“So we [at Intel] are trying to feed additional data to the operators so that they can make more intelligent decisions with this disparate remote environment they’re managing.
And what we’ve seen is that they’re asking us for more analysis tools. We went through this period of time where just getting the data was a struggle, and even in the survey that we commissioned, there were still about 40% of data center operators that are struggling to just get the data they need to manage their environment.”
“Just from talking to customers and understanding some of the complexity, we have a lot of revisions that occur at the OEM space and Intel contributes to some of that complexity because we self-obsolete ourselves every two and a half years or so with a new chipset. And that goes out to all of the OEMs and then they release new servers and new technology to their customers.
And what happens is, the OEM takes a lot of the baseline components that Intel is providing, and they add their firmware layers on top of the chipset or on top of the architecture, and the customization in firmware is really causing some complexity.”
“The edge is faster and it’s cheaper. It’s closer to the customer. And depending on the type of services that are required, it’s a requirement to ensure that the customer’s information is processed that much more quickly in a customer type setting, rather than going back to a traditional data center.
“So the challenge is more remoteness. We talked before about five-plus remote environments, you’re going to see that significantly increase.”
“So there’s generally not going to be a human being there to be able to remediate, fix, turn off and on, or analyze a set of hardware. So you need a lighter tool to remotely establish a link to find out or look and discover what type of issues could be occurring there.
“But you also don’t want a network hog or an analysis tool that requires a lot of network bandwidth or requires a significant amount of people to manage.
“So I think the toolsets, the traditional toolsets, and DCIM, have evolved into a set of buckets that are underneath that larger umbrella that are really just defining customer problems and addressing them. And Edge … has similar issues, but it’s just on a smaller scale. And what we’re seeing is, we see a lot of requests for analysis tools ‘I got all this data, but I want to understand how to interpret that information and what to do with it.’”
“I think there are many tools out there. I think that’s one of the bigger challenges, that I think data center managers are being hit up to evaluate something almost weekly.
“There are customers or partners that are in this space that are doing a good job on the business development by getting into the market and trying to grow. And it’s pretty easy to set up a software tool that can collect some information and evaluate it.
“The IT department discovered [that] really understanding what it’s going take to implement and maintain a solution that really says it’s going to do everything for you, and that’s part of the promises that are made, [requires you to] set aside individuals and resources. Not only to implement, but also to have a much higher level of maintenance resources than you traditionally would believe.
“So, kicking the tires, doing some good diligence, getting some customer referrals, those types of really basic requirements are something that I would encourage all data center managers to do.”
“Then with newer generations, when you look at the evolution of IoT, sensors are getting smaller and smaller, small enough to put inside industrial equipment.
“Well, that’s essentially what’s happened within the IT devices: now there are more sensors that are within your IT devices to help monitor the health, monitor the temperature, and monitor the power utilization.
“So that has blossomed into a whole number of use cases from this information. And how we’ve packaged our sets of tools is, ‘Tell me your top three problems, I have a portfolio of roughly 10 use cases that I can apply to your issues. Let me prove one of those use cases out to you before you make an investment in people or an investment in capital towards the tool.’”
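The use cases Klaus describes all start from the same raw material: per-server sensor readings for health, temperature, and power. A toy sketch (not Intel’s actual tooling; all names and thresholds here are invented for illustration) of the kind of lightweight analysis a remote operator might run over that telemetry:

```python
# Toy sketch: aggregate per-server sensor readings and flag servers that
# exceed simple thresholds. Hostnames and limits are illustrative only.
TEMP_LIMIT_C = 80
POWER_LIMIT_W = 450

def flag_servers(readings):
    """readings: list of dicts with 'host', 'temp_c', 'power_w' keys."""
    flagged = []
    for r in readings:
        reasons = []
        if r["temp_c"] > TEMP_LIMIT_C:
            reasons.append("temperature")
        if r["power_w"] > POWER_LIMIT_W:
            reasons.append("power")
        if reasons:
            flagged.append((r["host"], reasons))
    return flagged

sample = [
    {"host": "edge-01", "temp_c": 71, "power_w": 310},
    {"host": "edge-02", "temp_c": 88, "power_w": 470},
]
print(flag_servers(sample))  # [('edge-02', ['temperature', 'power'])]
```

The point of the sketch is the shape of the tool, not the thresholds: a remote, low-bandwidth check that turns raw sensor data into a short actionable list, which matches the “lighter tool” requirement Klaus describes for edge sites with no staff on hand.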
The world of data center equipment has been turned on its head over the past two years due to issues surrounding the global supply chain.
It no longer is a case of calling up a vendor to order data center equipment and expecting it to arrive a short time later.
Here are some of the top trends in the data center equipment market:
Electrical switch gear and generators that traditionally have had lead times of up to half a year may now take 18 months to arrive, according to one contractor that builds major data centers.
This is due to a global supply chain slowdown. As well as a lack of materials, such as steel, cement, bolts, and electrical circuitry, there are also shortages of drivers, ships, and components.
All of this adds up to slower construction of new data centers and delays in construction or upgrades to power systems due to the waiting time for gear. Those wanting to expand existing data centers or build new ones must take into account the sluggishness of the supply chain.
Servers used to take days to arrive. These days, they can take several months.
Several companies have reported such lead times for servers, laptops, motherboards, processors, networking hardware, and other components. This is pushing up equipment costs and overall infrastructure project costs while causing major delays.
A survey by GetApp found more than three quarters of respondents have been dealing with significant delays in the supply chain for IT hardware. This included all types of top suppliers.
A big part of the problem is shipping. Most respondents noted that shipping was behind component and equipment delays, with more than half waiting anywhere from four to 13 months.
In response to worldwide supply chain disruption, organizations are initiating a variety of strategies.
“Supply constraints going back two years triggered many organizations to look for new approaches to IT infrastructure,” said Mario Blandini, VP, iXsystems.
Hyperscale cloud providers, such as Google Cloud and Facebook, as well as major data center construction firms, are buying up equipment and supplies for projects they may not need for another year or more.
Other approaches include placing orders for more than is needed to expand inventory, paying more for equipment or faster delivery, cooperative arrangements between multiple companies to place larger orders with better discounts and earn a higher priority for delivery, and finding more local suppliers instead of relying on overseas resources.
On storage equipment, Blandini noted that some are turning to open-source storage as a solution to delays in receiving gear from traditional suppliers.
“We expect more organizations to freely evaluate open-source storage software and have seen those who have deployed open storage expanding their use after proving how well it works in their environments,” Blandini said.
Another shift is a greater emphasis on refurbishment.
According to GetApp surveys, 58% are now refurbishing or upgrading older hardware to make it last longer and compensate for supply chain delays. Cost, too, factors into the equation.
Refurbishment is often done in response to the need to refresh data center equipment. If it is going to take six months to a year for servers, networking gear, and power equipment to arrive, some are delaying full-scale equipment refreshes for yet another year or two.
But to make this work, they have to ensure their aging equipment is really up to the task. That may mean a thorough maintenance check, replacing noisy fans and other faltering components, adding more memory, and upgrading CPUs, if available, to enable the equipment to cope with the pace of modern day applications.
Some have seen the writing on the wall in what can sometimes be a futile attempt to look for equipment and supplies.
This is sometimes causing them to look to outsource data center functions and leave the equipment hassles to bigger suppliers with deeper pockets.
According to Gartner, 40% of newly procured premises-based compute and storage will be consumed as a service by 2025, up from less than 10% in 2023.
“Virtual infrastructure offered as a service enables users to deploy their apps quickly and relies heavily on cloud operations not just storage or on-prem,” said Patrick Aleksanyan, enterprise sales executive, North America, CloudBlue.
“Infrastructure for IT services is no longer just in the data center but through cloud as well.”
Agile management and flexibility can help retailers use data to better capitalize on consumer trends. One European retailer is now using an efficient supply chain and real-time information about trends to bring consumers the latest fashions as soon as they hit the runway.
Latest Trends at Affordable Prices
Headquartered in Berlin, Lesara is an online fashion retailer offering 50,000 products annually to consumers in 23 European countries. Lesara is unique in its use of data to identify current fashion trends. It then works with factories to bring the latest products directly to consumers at a competitive price. Lesara founder and CEO Roman Kirsch said in an interview with Forbes that with agile management, robust analytics and a fast, transparent supply chain, Lesara’s turnaround time beats that of its competitors by almost an entire month.
“We don’t invest in expensive supermodel campaigns, but rather invest in great experiences that promote word-of-mouth through social media,” Kirsch said. “Our vision is to enable everyone, everywhere, to afford the newest trends in apparel at great prices.”
Kirsch explained that they support this model by investing in efficiency and using the supply chain as a main driver for cost structure. Lesara’s data-driven demand forecasts, elimination of overstock and online-only operations allow the company to deliver up to 25 percent cheaper price points without sacrificing quality.
Innovation Is at the Forefront of Retail
According to a report on agile innovation by Ernst & Young, leaders need to rally their organizations around new models of innovation and “cultivate an agile culture of experimentation” by encouraging ideas and embracing failure. They should also eliminate bureaucracy, think simply and act fast with an approach for identifying and pursuing new innovations.
Additionally, Kirsch noted the importance of stepping outside your comfort zone, learning from your mistakes and promoting values like speed over perfection. So far, Lesara has been able to prove that having management agile enough to respond to robust data can help bring the right products to market faster. “We believe that subjective decision making based on instincts can never be as powerful as understanding consumer demands as they develop and catering to them in real time,” said Kirsch.
Darrell Rigby, head of the global innovation and retail practices division at Bain & Company, stated in an article in Harvard Business Review that agile innovation methods often start at the top, and typically include Scrum (emphasizing creative and adaptive teamwork in solving complex problems), lean development (a focus on the continual elimination of waste) and Kanban (reducing lead times and the amount of work in process). According to Rigby, companies can bring valuable products to market and improve engagement by focusing on visibility and adapting to customers’ changing priorities.
In contrast to many other retailers in the fashion industry, Kirsch firmly believes that the main driver of fashion growth will be mobile, and that it’s only at the beginning of its innovation. “We’re constantly challenging the status quo and pushing widely accepted opinions and assumptions, with the best argument winning the conversation,” said Kirsch.
Microsoft moved away from Internet Explorer (IE), which first appeared in Windows 95 and remained in Windows for two decades, to bring us the cutting-edge Microsoft Edge. The Microsoft Edge browser was developed for Windows 10 and is a direct rival to the popular Google Chrome. Microsoft Edge supports Chrome extensions and add-ons, and even syncs with your Chrome browsing data.
In terms of aesthetics, Microsoft Edge is also not lacking. You can completely customize your browser interface to your taste using themes. Until recently, you could use only two themes – Light and Dark. But now, you are free to install a variety of beautiful themes. You’ll discover our choice of the 5 best Microsoft Edge themes and how to apply them to your browser on this page.

Best Themes for Microsoft Edge browser

1] GitHub Dark Theme
Designed with the late-night developers in mind, the GitHub Dark theme doesn’t do a complete overhaul of your browser’s interface. Its only business is with the GitHub website.
When you activate this theme and visit GitHub during your routine late-night hustle, you’ll notice that GitHub’s pages are all in dark mode.

2] Succulents New Tab Plants Theme
Get in tune with your natural side with the Succulents New Tab Plants Theme. This is a high-quality Microsoft Edge theme that displays HD images of cactus and succulent plants in the background of every new tab you open.

3] Cute Dogs and Puppies Wallpapers New Tab Theme
The name of this theme is self-explanatory. Obviously designed for dog lovers, this is far from a minimalist theme, as it features all types of dogs and puppies.
One minute, I was writing this post, reviewing this theme, and the next, I was lost on YouTube, checking out cute dog videos. That’s how cool the theme is.
Apply this theme and be greeted by awesome pictures of dogs and puppies whenever you open a new tab in Edge.

4] The Black Cat – Dark Theme
The Black Cat – Dark Theme is truly incredible, and it has nothing to do with cats.
The Black Cat – Dark Theme does more than change your new tab backgrounds; it gives you dark mode on an extensive list of websites. The developers promise not to ruin your experience. Using it so far, I think they kept their promise.

5] Dark Theme for Edge
Speaking of dark themes, this is the ultimate one. Some selections on this list modify how new tabs look, while others render specific websites in night mode. If you want to transform your entire Microsoft Edge experience to the night skin, get the Dark Theme for Edge.
This theme renders every single website in dark colors. However, it doesn’t do the same for new tabs. With Dark Theme for Edge, you don’t have to set individual sites to night mode.
The dark skin worked on every website I tested, including the popular Facebook, YouTube, Twitter, etc. If you want to make use of the dark skin on Edge, this is the ultimate solution.
You can get more themes from the Edge add-ons and extensions store.
TIP: You can also use Chrome themes on the Edge browser. Check out our fantastic roundup of the 10 best themes for Google Chrome.

How to apply Microsoft Edge themes
Click the Settings and more (three-dot) menu in the top-right corner of Edge, then select Extensions from the dropdown menu.
On the resulting popup, select Add extension and wait a moment until you get a confirmation notification.
NOTE: Microsoft Edge will block the activation of some themes. To enable them, go to the Extensions page and toggle them on manually.
The more you think about it, the more you realize that the Microsoft Edge browser is more of an upgrade to Chrome than a rival. It does everything Chrome can do while using way fewer resources.
What is Cloud Computing Architecture?
Cloud Computing Architecture is a combination of components required for a Cloud Computing service. A Cloud computing architecture consists of several components like a frontend platform, a backend platform or servers, a network or Internet service, and a cloud-based delivery service.
Let’s have a look into Cloud Computing and see what Cloud Computing is made of. Cloud computing comprises two components, the front end, and the back end. The front end consists of the client part of a cloud computing system. It comprises interfaces and applications that are required to access the Cloud computing or Cloud programming platform.
Cloud Computing Architecture
While the back end refers to the cloud itself, it comprises the resources required for cloud computing services. It consists of virtual machines, servers, data storage, security mechanisms, etc. It is under the provider’s control.
The Architecture of Cloud computing contains many different components. It includes Client infrastructure, applications, services, runtime clouds, storage spaces, management, and security. These are all the parts of a Cloud computing architecture.
The client uses the front end, which contains a client-side interface and application. Both of these components are important to access the Cloud computing platform. The front end includes web browsers (Chrome, Firefox, Opera, etc.), clients, and mobile devices.
The backend part helps you manage all the resources needed to provide Cloud computing services. This Cloud architecture part includes a security mechanism, a large amount of data storage, servers, virtual machines, traffic control mechanisms, etc.
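The front-end/back-end split described above can be sketched in a few lines of code. This is a minimal illustrative model only (the class names, the token check, and the stored file are all invented): the client-side front end issues a request, and the provider-controlled back end applies a security check and serves data from its resources.

```python
# Minimal sketch of the cloud front-end/back-end split. All names are
# illustrative; the method call stands in for the network/Internet link.

class BackEnd:
    """Provider side: storage, servers, and security mechanisms."""
    def __init__(self):
        self.storage = {"report.txt": "quarterly numbers"}

    def handle(self, request):
        if request.get("token") != "valid":        # stand-in security check
            return {"status": 403, "body": None}
        body = self.storage.get(request["path"])   # stand-in data storage
        return {"status": 200 if body else 404, "body": body}

class FrontEnd:
    """Client side: the interface and application the user touches."""
    def __init__(self, backend):
        self.backend = backend                     # the "network" between them

    def fetch(self, path):
        return self.backend.handle({"path": path, "token": "valid"})

cloud = BackEnd()
client = FrontEnd(cloud)
print(client.fetch("report.txt"))  # {'status': 200, 'body': 'quarterly numbers'}
```

Note how the back end alone decides what is stored and who is authorized, mirroring the tutorial’s point that the back end is under the provider’s control while the front end only presents an interface.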
Cloud Computing Architecture Diagram

Important Components of Cloud Computing Architecture

Here are some important components of Cloud computing architecture:

1. Client Infrastructure:
Client Infrastructure is a front-end component that provides a GUI. It helps users to interact with the Cloud.

2. Application:
The application can be any software or platform which a client wants to access.

3. Service:
The service component manages which type of service you can access according to the client’s requirements.

Three Cloud computing services are:

Software as a Service (SaaS)

Platform as a Service (PaaS)

Infrastructure as a Service (IaaS)

4. Runtime Cloud:
Runtime cloud offers the execution and runtime environment to the virtual machines.

5. Storage:
Storage is another important Cloud computing architecture component. It provides a large amount of storage capacity in the Cloud to store and manage data.

6. Infrastructure:
It offers services at the host level, network level, and application level. Cloud infrastructure includes hardware and software components like servers, storage, network devices, virtualization software, and various other resources that are needed to support the cloud computing model.

7. Management:
This component manages backend components like the application, service, runtime cloud, storage, infrastructure, and security, and establishes coordination between them.

8. Security:
Security in the backend refers to implementing different security mechanisms to secure Cloud systems, resources, files, and infrastructure for the end user.

9. Internet:
The Internet is the medium through which the front end communicates with the back end.

Benefits of Cloud Computing Architecture
Following are the cloud computing architecture benefits:
Makes the overall Cloud computing system simpler.
Helps to enhance your data processing.
Provides high security.
It has better disaster recovery.
Offers good user accessibility.
Significantly reduces IT operating costs.

Virtualization and Cloud Computing
The main enabling technology for Cloud Computing is Virtualization. Virtualization is the partitioning of a single physical server into multiple logical servers. Once the physical server is divided, each logical server behaves like a physical server and can run an operating system and applications independently. Many popular companies like VMware and Microsoft provide virtualization services. Instead of using your PC for storage and computation, you can use their virtual servers. They are fast, cost-effective, and less time-consuming.
For software developers and testers, virtualization comes in very handy. It allows developers to write code that runs in many different environments for testing.
Virtualization is mainly used for three main purposes: 1) Network Virtualization, 2) Server Virtualization, and 3) Storage Virtualization
Network Virtualization: It is a method of combining the available resources in a network by splitting up the available bandwidth into channels. Each channel is independent of others and can be assigned to a specific server or device in real time.
Storage Virtualization: It is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device that is managed from a central console. Storage virtualization is commonly used in storage area networks (SANs).
Server Virtualization: Server virtualization is the masking of server resources like processors, RAM, operating system, etc., from server users. Server virtualization intends to increase resource sharing and reduce the burden and complexity of computation from users.
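The server virtualization idea above — one physical server partitioned into independent logical servers — can be illustrated with a small capacity-accounting sketch. This is a toy model (the class and resource figures are invented for illustration), not how a real hypervisor works:

```python
# Illustrative sketch: carving one physical server's capacity into
# independent logical servers, the core idea of server virtualization.

class PhysicalServer:
    def __init__(self, cpus, ram_gb):
        self.free_cpus, self.free_ram = cpus, ram_gb
        self.vms = []

    def create_vm(self, name, cpus, ram_gb):
        # A VM can only be created out of capacity the host still has free.
        if cpus > self.free_cpus or ram_gb > self.free_ram:
            raise RuntimeError("insufficient capacity")
        self.free_cpus -= cpus
        self.free_ram -= ram_gb
        self.vms.append({"name": name, "cpus": cpus, "ram_gb": ram_gb})
        return self.vms[-1]

host = PhysicalServer(cpus=16, ram_gb=64)
host.create_vm("web", cpus=4, ram_gb=16)   # each VM behaves like its own server
host.create_vm("db", cpus=8, ram_gb=32)
print(host.free_cpus, host.free_ram)       # 4 16 remain for further VMs
```

Each logical server gets its own slice of processors and memory and is unaware of its neighbors, which is exactly the resource-sharing benefit the paragraph above describes.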
Virtualization is the key to unlocking the Cloud system. What makes virtualization so important for the cloud is that it decouples the software from the hardware. For example, PCs can use virtual memory to borrow extra memory from the hard disk. Usually, a hard disk has a lot more space than memory. Although virtual disks are slower than real memory, the substitution works perfectly if managed properly. Likewise, there is software that can imitate an entire computer, meaning one computer can perform the functions of 20 computers. This concept of virtualization is a crucial element in various types of cloud computing.

Summary
Cloud Computing Architecture is a combination of components required for a Cloud Computing service.
The front-end part is used by the client that contains client-side interfaces and applications, which are important to access the Cloud computing platforms.
The service provider uses the back-end part to manage all the needed resources to provide Cloud computing services.
Components of Cloud computing architecture are 1) Client Infrastructure, 2) Application, 3) Service, 4) Runtime Cloud, 5) Storage, 6) Infrastructure, 7) Management, 8) Security, and 9) Internet.
Cloud computing makes a complete Cloud computing system simpler.
Virtualization is the partitioning of a single physical server into multiple logical servers.
Introduction to SAS Grid
SAS Grid is managed through SAS Grid Manager support. It is most widely used to distribute user tasks across multiple computers over a network connection, enabling workload balancing to accelerate data processing. Job scheduling is more flexible and sophisticated in grid computing, which provides a centralized environment that handles peak demand with cost efficiency and reliability.
It distributes any number of tasks to multiple computers on the same network.
It enables a workload-balancing algorithm to accelerate job processing and scheduling.
It is more flexible and centralized.
It provides faster data processing in the migrated environment.
It increases computing power and saves money.
What is SAS Grid?
In SAS Enterprise Guide, this architecture is mainly used for sharing multiple computer resources via a network. It acts in a manager role, so it’s named SAS Grid Manager, which provides:
The load balancing algorithm.
Application connectivity and access, such as policy enforcement and resource allocation.
Prioritization and high availability in the analytical environment.
It needs several machines arranged like a cluster across the same network, along with several software products. Using a server-side load balancing algorithm, the workspace server routes jobs away from busy nodes: if the grid is available and the project is configured for it, the job is sent to the grid; otherwise it runs alongside the tasks already on the grid.
SAS Grid Computing
SAS Grid Manager delivers a load balancing algorithm across the full set of work tasks, plus high availability, and is faster when compared to other computing environments. A cluster is a set or group of computers, each with its own efficiency and specifications, across the network. The workload is split up across the computers into tasks with the help of a workload balancing algorithm, sharing the resource pool and accelerating processing so that multiple users can work with the same shared data.

The workload distribution enables the functionality of the SAS grid as follows:
Mainly, it enables multiple users to work in the SAS environment, distributing the data workload to the shared resource pool.
It helps distribute a single SAS job into subtasks (child processes) across the shared resource pool.
Jobs allow users to route work through the shared resource pool at the exact scheduled time.
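The split-and-merge pattern in the list above can be illustrated with an analogy. The sketch below uses Python threads standing in for grid nodes (a real SAS grid distributes subtasks across separate machines, not threads): one large job is split into subtasks, dispatched to a shared pool of workers, and the partial results are merged.

```python
# Analogy only: threads stand in for grid nodes. One job is split into
# subtasks, sent to a shared worker pool, and the results are merged.
from concurrent.futures import ThreadPoolExecutor

def subtask(chunk):
    # Stand-in for one unit of SAS work, e.g. summarising one data slice.
    return sum(chunk)

job = list(range(1, 101))                            # one large job...
chunks = [job[i:i + 25] for i in range(0, 100, 25)]  # ...split into 4 subtasks

with ThreadPoolExecutor(max_workers=4) as pool:      # the shared resource pool
    partials = list(pool.map(subtask, chunks))

print(sum(partials))  # 5050 — merged result matches running the job serially
```

Because each subtask is independent, the workers never wait on one another; the same property is what lets a grid rebalance subtasks onto whichever node is least busy.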
SAS Grid Legacy
SAS Grid legacy mainly enabled users to develop a shared computing environment and handle the larger volumes of data processed and analyzed by program code.
It helps speed up code by dynamically rebalancing data loads split across multiple nodes.

Steps to Create SAS Grid
Given below are the steps mentioned:
1. Navigate to the below URL.
2. Paste the below code to create the grid table.
%MACRO First(August11=);
PROC SQL;
create table &August11. (inp1 CHAR(100),inp2 CHAR(1));
QUIT;
%IF %SYSFUNC(grdsvc_enable(_all_,Server=SASApp)) NE 0 %THEN
%DO;
%PUT WARNING: There is no grid in series;
%END;
%MEND First;

In the above code, we used a macro for initializing and created a table like &August11 with two input columns (inp1, inp2).

SAS Grid on AWS
SAS on AWS is a runtime environment that allows organizations to deploy applications using either open-source components or other features in the SAS models. The data infrastructure supports a wide variety of analytics patterns and AWS DevOps practices. For the mid-tier architecture, Amazon EC2 r5 instance types are used to load and share client content, using two or more instances per the SAS requirements. High-availability metadata servers run on EC2 instance types that exceed the minimum SAS memory recommendations.
The above diagram explains the AWS cloud in the SAS platform through the gateway and the Amazon VPC (Virtual Private Cloud).

Example of SAS Grid
Given below are the examples mentioned:
Code:

%MACRO Second(vars=, AUgust11=);
PROC SQL;
create table &AUgust11. (inps1 CHAR(25),inps2 CHAR(3));
QUIT;
%IF %SYSFUNC(grdsvc_enable(_all_,Server=SASApp)) NE 0 %THEN %DO;
%PUT WARNING: There is no grid table on this series;
%a(AUgust11=&AUgust11.);
%b(AUgust11=&AUgust11.);
%c(AUgust11=&AUgust11.);
%END;
%ELSE %DO;
%PUT WARNING: Its Grid and used parallel macros;
%IF %UPCASE(&vars.) = WORK %THEN %DO;
%PUT ERROR: Specified Work is not shared in RSUBMITs;
%GOTO Finish;
%END;
%methd(d=aug11,g=&vars.);
%methd(e=aug12,h=&vars.);
%methd(f=aug13,i=&vars.);
PROC SQL;
CREATE TABLE aug11.AUgust11 AS SELECT * FROM &AUgust11.;
CREATE TABLE aug12.AUgust11 AS SELECT * FROM &AUgust11.;
CREATE TABLE aug13.AUgust11 AS SELECT * FROM &AUgust11.;
QUIT;
%END;
%Finish:
%MEND;
LIBNAME Sandboxtesting '\MyNetwork';
%Second(vars=Sandboxtesting, AUgust11=WORK.August111);
In the above example, we created a SAS grid table by using a macro along with PROC SQL.
Using %IF and other conditional statements, we can validate the inputs.
A table will be created for each session with parallel macros.
The network location is shared at the end of the method.

FAQ
Given below are the FAQs mentioned:

Q1. What is SAS Grid?
The computing tasks are split into sub-tasks and assigned to multiple PCs across the network.

Q2. How does SAS Grid work?
By using workload balancing, the SAS grid is enabled and operated in the environment.

Q3. What is the main advantage of SAS Grid?

Efficient resource allocation.

Q4. Define Grid Manager.
It’s a web-based tool to monitor resources, users, and jobs that are already scheduled.

Q5. What is SAS Grid Server?
It serves as an intermediary between the SAS application and the grid environment.

Conclusion
The SAS grid helps convert existing code to parallel processing on remote sessions in a straightforward way. SAS keywords like RSUBMIT, %SYSLPUT, and INHERITLIB handle the sessions, executing macros that merge datasets without causing any errors. Where more complexity exists, the SAS Grid uses parallel processes to perform independent and synchronized data operations.