How to deal with the challenge of edge computing

Some see edge computing as a glorified form of data acquisition or local digital process control. There is more to the edge than that, however.

Edge systems involve many data sources spread across geographically distributed locations, but the aggregation of that data is key to value and insight. Much of the analysis takes place in the core data center, and the resulting insights often guide actions that must be performed back at the edge. A perhaps surprising challenge for edge systems, then, is efficient data flow not only from the edge to the core, but also back again.

Scale is also a challenge. Incoming data from individual edge sources is often large to begin with, and large volumes of data from a large number of edge locations add up to enormous amounts of data overall.

A prime example of the extreme scale of edge computing is the development of self-driving cars. Automakers need access to data collected globally, processing petabytes of it every day. They also have to meet key performance indicators (KPIs) that measure how long it takes to collect data from a test vehicle, how long it takes to process it, and how long it takes to deliver insights.
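
As a rough illustration of how such stage-level KPIs can be instrumented, here is a minimal sketch in Python; the `collect`, `process`, and `insight` functions are hypothetical stand-ins for a real test-vehicle pipeline, not anything tied to a particular automaker.

```python
import time

def timed(stage, fn, *args, **kwargs):
    """Run one pipeline stage and report how long it took."""
    start = time.monotonic()
    result = fn(*args, **kwargs)
    elapsed = time.monotonic() - start
    print(f"KPI {stage}: {elapsed:.2f}s")
    return result

# Hypothetical stage functions standing in for a real data pipeline.
def collect():   return list(range(1_000_000))          # pull data off the vehicle
def process(d):  return [x for x in d if x % 100 == 0]  # reduce / transform
def insight(d):  return sum(d) / len(d)                 # produce an insight

raw     = timed("collection", collect)
reduced = timed("processing", process, raw)
answer  = timed("insight",    insight, reduced)
```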

Of course, not all edge systems involve data of this extreme size, but most edge deployments involve too much data to transfer all of it from the edge to the central data center. This means that data must be processed and reduced at the edge before it is sent to the core. That data analysis, modeling, and data movement must be coordinated effectively and at large scale.
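
To make the idea of edge-side reduction concrete, here is a minimal, generic sketch: raw per-second sensor readings are rolled up into per-minute summaries so that only a fraction of the original volume has to travel to the core. The record layout and the one-minute window are assumptions chosen for illustration.

```python
import json
from collections import defaultdict
from statistics import mean

def reduce_readings(readings, window_s=60):
    """Collapse raw per-second sensor readings into per-window summaries.

    `readings` is an iterable of dicts such as
    {"ts": 1714000000, "sensor": "temp", "value": 21.3}.
    Only the summaries (a fraction of the original volume) are shipped to the core.
    """
    buckets = defaultdict(list)
    for r in readings:
        key = (r["sensor"], r["ts"] // window_s)
        buckets[key].append(r["value"])
    return [
        {"sensor": sensor, "window_start": w * window_s,
         "count": len(vals), "min": min(vals), "max": max(vals), "mean": mean(vals)}
        for (sensor, w), vals in buckets.items()
    ]

# The summaries would then be written locally and mirrored to the core data center.
summaries = reduce_readings(
    [{"ts": 1714000000 + i, "sensor": "temp", "value": 20 + i % 3} for i in range(600)]
)
print(json.dumps(summaries[:2], indent=2))
```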

To better understand the challenges of edge systems, let's look more closely at what happens at the edge, at the core, and in between.

Activity at the edge
Edge computing typically involves systems at multiple locations, each handling data ingestion, temporary data storage, and multiple applications that reduce data before it is transferred to the core data center. These tasks are illustrated in the left half of Figure 1.

[Figure 1: key activities at the edge (left) and at the core (right), and the data movement between them]

Analytics applications are used for preprocessing and data reduction. AI and machine learning models also help reduce data, for instance by deciding which data is important enough to pass on to the core data center. In addition, these models enable intelligent actions to be taken at the edge. Another typical edge requirement is keeping track of which processing steps took place and which data files were created.
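
The sketch below illustrates both ideas in a hedged way: a stand-in `importance()` function takes the place of a real ML model to decide which records are worth forwarding, and a simple local audit log records which action ran and which output file it produced. The file names and record fields are hypothetical.

```python
import json, time
from pathlib import Path

AUDIT_LOG = Path("edge_audit.log")   # hypothetical local provenance log

def importance(record):
    """Stand-in for an ML model scoring how interesting a record is (0..1)."""
    return min(1.0, abs(record["value"] - record["baseline"]) / 10.0)

def filter_and_log(records, threshold=0.5, out_path=Path("to_core.jsonl")):
    """Keep only the records the model deems important and record what was done."""
    kept = [r for r in records if importance(r) >= threshold]
    out_path.write_text("\n".join(json.dumps(r) for r in kept))
    with AUDIT_LOG.open("a") as log:
        log.write(json.dumps({
            "ts": time.time(), "action": "filter_and_forward",
            "input_records": len(records), "kept_records": len(kept),
            "output_file": str(out_path),
        }) + "\n")
    return kept
```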

All of this has to happen in many places, few of which will have much on-site administration, so edge hardware and software must be reliable and manageable remotely. Given these requirements, self-healing software is a huge advantage.

Training AI models and more at the core
The activities at the core, shown on the right side of Figure 1, are similar to the edge processes but take a global perspective, working on the collective data from many edge locations. The analysis here can go further: this is where deep historical data is used to train artificial intelligence models. As at the edge locations, the core keeps track of which actions have been performed and which data has been created. The core is also where high-level business goals are connected to the goals of the edge system.
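
As a simplified illustration of core-side work on globally aggregated data, the sketch below gathers the summaries mirrored from every edge site and derives per-sensor anomaly thresholds from them. A real deployment would train far richer models; the directory layout (`<root>/<edge-site>/summaries/*.jsonl`) is purely an assumption for the example.

```python
import json
from pathlib import Path
from statistics import mean, stdev

def load_edge_summaries(core_root):
    """Yield the reduced summary records that every edge site has mirrored to the core."""
    for site_dir in Path(core_root).iterdir():
        if not site_dir.is_dir():
            continue
        for f in (site_dir / "summaries").glob("*.jsonl"):
            for line in f.read_text().splitlines():
                yield json.loads(line)

def train_global_thresholds(core_root, k=3.0):
    """Derive a simple per-sensor anomaly threshold (mean + k * stdev) from all sites."""
    values = {}
    for s in load_edge_summaries(core_root):
        values.setdefault(s["sensor"], []).append(s["mean"])
    return {sensor: mean(v) + k * stdev(v)
            for sensor, v in values.items() if len(v) > 1}
```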

The core data infrastructure must meet challenging requirements because this is where the data from all edge systems converges. Data from the edge (or from core processing and modeling) can be very large or can comprise a huge number of files, so the infrastructure must handle both large data volumes and large object counts robustly.

Of course, analysis and model development workflows are iterative. As organizations learn from the globally aggregated edge data, new AI models are generated and existing ones are updated. In addition, new analytics applications are developed that must be deployed at the edge. This brings us to the next topic: what needs to happen between the edge and the core.

Traffic between the edge and the core
Just as Figure 1 lists the key activities at the edge and at the core, it also shows the key interaction between the two: the movement of data. Clearly, the system needs to move the ingested and reduced data from the edge to the core for final analysis. What is sometimes overlooked, however, is the return trip: bringing the new AI and machine learning models and updated analytics developed by the core team back out to the edge.
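
A minimal sketch of the two directions of traffic is shown below. In practice the data fabric's own mirroring or replication would move the bytes; the hard-coded paths are hypothetical and only illustrate that reduced data flows up to the core while new model artifacts flow back down to the edge.

```python
import shutil
from pathlib import Path

# Hypothetical mount points; the two functions simply show the two directions of traffic.
EDGE_OUTBOX = Path("/data/edge-site-17/to_core")    # reduced data headed for the core
CORE_INBOX  = Path("/data/core/from_edge/site-17")
CORE_MODELS = Path("/data/core/models/latest")      # new models produced at the core
EDGE_MODELS = Path("/data/edge-site-17/models")     # where edge applications pick them up

def sync_edge_to_core():
    """Copy reduced data files from an edge site up to the core."""
    CORE_INBOX.mkdir(parents=True, exist_ok=True)
    for f in EDGE_OUTBOX.glob("*.jsonl"):
        shutil.copy2(f, CORE_INBOX / f.name)

def push_models_to_edge():
    """Copy the latest model artifacts from the core back down to an edge site."""
    shutil.copytree(CORE_MODELS, EDGE_MODELS, dirs_exist_ok=True)
```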

In addition, analysts, developers, and data scientists sometimes need to examine raw data from one or more edge locations. Direct access from the core to raw data that still resides at the edge is very helpful.

Almost all large-scale data movement should be handled by the data infrastructure, but direct access to services running at the edge or at the core can also be useful. A secure service mesh helps here, especially one that uses modern zero-trust workload authentication such as SPIFFE (the Secure Production Identity Framework for Everyone).
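
As a sketch of the zero-trust idea, the fragment below (using the third-party `cryptography` package) extracts a SPIFFE ID, which is carried as a URI SAN in a workload's X.509 SVID, and checks it against an allow list. A real deployment would rely on a service mesh or a SPIFFE SDK to handle SVID issuance, rotation, and full chain verification; the trust domain shown is made up.

```python
from cryptography import x509

def peer_spiffe_id(pem_bytes):
    """Extract the SPIFFE ID (a URI SAN) from a peer's PEM-encoded X.509 SVID."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
    uris = san.value.get_values_for_type(x509.UniformResourceIdentifier)
    return uris[0] if uris else None

def authorize(pem_bytes, allowed={"spiffe://example.org/edge/ingest"}):
    """Allow the call only if the workload presents an expected SPIFFE ID."""
    return peer_spiffe_id(pem_bytes) in allowed
```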

Now that we've identified what goes on at the edge, at the core, and in between, let's look at what the data infrastructure needs to do to make all of this possible.

HPE Ezmeral Data Fabric: From edge to core and back
HPE is known for its excellent hardware, including the Edgeline series designed for use at the edge. However, HPE also produces hardware-agnostic HPE Ezmeral Data Fabric software, designed to span from edge to core, both on premises and in the cloud.

HPE Ezmeral Data Fabric lets you simplify your system architecture and optimize resource usage and performance. Figure 2 shows how the data fabric can be used to address the challenges of edge computing.

Compute at the edge or at the core can use Kubernetes to manage containerized applications, with HPE Ezmeral Data Fabric providing the data layer for those applications. Thanks to the data fabric's global namespace, teams working in the core data center can remotely access data that still resides at the edge.
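
Because the namespace is global, code running at the core can read edge-resident files with ordinary file I/O. The sketch below assumes a hypothetical mount point and directory layout; the actual paths depend on how the data fabric is configured in a given deployment.

```python
from pathlib import Path

# Hypothetical global-namespace path; the mount point and layout depend on the deployment.
EDGE_RAW = Path("/mapr/global.fabric/edge/site-17/raw")

def sample_raw_records(n=5):
    """Print a few raw records that still physically reside at the edge site."""
    for f in sorted(EDGE_RAW.glob("*.jsonl"))[:1]:
        with f.open() as fh:
            for i, line in enumerate(fh):
                if i >= n:
                    break
                print(line.rstrip())
```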
