K3s is a CNCF (Cloud Native Computing Foundation) certified Kubernetes distribution from SUSE (formerly Rancher Labs). Its core mechanisms are the same as K8s, but it removes many external dependencies along with K8s alpha and beta features, and it changes how the cluster is deployed and operated, with the aim of making K8s lightweight enough for IoT devices such as the Raspberry Pi. In simple terms, K3s is a lightweight K8s that consumes very few resources: it is designed as a binary of approximately 45 MB that fully implements the Kubernetes API.
K3s has no full name and no official pronunciation. Kubernetes is a 10-letter word abbreviated as K8s; since K3s was meant to be half the size of Kubernetes in terms of memory footprint, half of Kubernetes would be a 5-letter word, abbreviated K3s. K3s is suitable for edge computing, the Internet of Things, CI, development, ARM, and embedded K8s scenarios.
Perfect for edge environments: K3s is a highly available, CNCF-certified Kubernetes distribution designed for unattended, resource-constrained, remote locations, or production workloads on IoT devices.
Simple and secure: K3s is packaged as a single binary of less than 60 MB, reducing the dependencies and steps required to install, run, and automatically update a production Kubernetes cluster.
Optimized for ARM: binaries and multi-arch images are available for both ARM64 and ARMv7. K3s works equally well in environments as small as a Raspberry Pi or as large as an AWS a1.4xlarge 32 GiB server.
Official website: https://k3s.io/
GitHub: https://
KubeEdge is an open source system for extending containerized application orchestration capabilities to hosts at the edge. It is built on Kubernetes and provides infrastructure support for networking, application deployment, and metadata synchronization between cloud and edge. KubeEdge was the industry's first edge container platform project; on March 18, 2019, it was accepted by the CNCF and is currently at the incubation level.
The goal of KubeEdge is to create an open platform that enables edge computing and extends containerized application orchestration to edge nodes and devices.
The main advantages of KubeEdge include:
Edge computing: by running business logic at the edge, large volumes of data can be processed and protected close to where they are generated. This reduces network bandwidth requirements and consumption between the edge and the cloud, improves responsiveness, lowers costs, and protects customer data privacy.
Simplified development: developers can write ordinary HTTP- or MQTT-based applications, containerize them, and then run them anywhere, at the edge or in the cloud, whichever is more appropriate.
Kubernetes native support: With KubeEdge, users can orchestrate applications, manage devices, and monitor application and device status on edge nodes, just like traditional Kubernetes clusters in the cloud.
Rich applications: existing sophisticated applications such as machine learning, image recognition, and event processing can easily be deployed to the edge.
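The HTTP- or MQTT-based development model mentioned above relies on MQTT's hierarchical topics, where subscriptions may use a `+` wildcard for one level and a `#` wildcard for all remaining levels. As a minimal, standalone illustration of that convention (this is a sketch of the MQTT topic rules, not KubeEdge or broker code):

```python
def topic_matches(subscription: str, topic: str) -> bool:
    """Return True if an MQTT subscription filter matches a concrete topic.

    '+' matches exactly one topic level; '#' (which must be the last
    level of the filter) matches the current level and everything below it.
    """
    sub_levels = subscription.split("/")
    top_levels = topic.split("/")
    for i, level in enumerate(sub_levels):
        if level == "#":
            return True          # multi-level wildcard: match the rest
        if i >= len(top_levels):
            return False         # filter is deeper than the topic
        if level != "+" and level != top_levels[i]:
            return False         # literal level mismatch
    return len(sub_levels) == len(top_levels)
```

For example, `topic_matches("sensors/+/temperature", "sensors/room1/temperature")` is true, while a single-level `+` does not span two levels.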
KubeEdge's components run in two separate locations: in the cloud and on edge nodes. The components that run in the cloud are collectively called CloudCore and include the Controller and CloudHub. CloudHub acts as the gateway that receives requests from edge nodes, while the Controller acts as the orchestrator. The components that run on edge nodes are collectively called EdgeCore and include EdgeHub, EdgeMesh, MetaManager, and DeviceTwin.
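The DeviceTwin component keeps a cloud-side "desired" view and an edge-side "reported" view of each device in sync. The idea can be modeled in a few lines (field and method names here are illustrative assumptions, not KubeEdge's actual API):

```python
from dataclasses import dataclass, field


@dataclass
class DeviceTwin:
    """Toy device twin: the cloud writes desired state,
    the edge device reports its actual state."""
    desired: dict = field(default_factory=dict)
    reported: dict = field(default_factory=dict)

    def set_desired(self, key, value):
        """Cloud side: request a property change."""
        self.desired[key] = value

    def report(self, key, value):
        """Device side: report the current property value."""
        self.reported[key] = value

    def delta(self):
        """Properties the edge still has to apply to converge on desired state."""
        return {k: v for k, v in self.desired.items()
                if self.reported.get(k) != v}
```

When the device's reported state catches up with the desired state, `delta()` becomes empty and no further synchronization is needed.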
In October 2021, the KubeEdge community released KubeEdge v1.8. This release includes active-active high-availability (HA) support for CloudCore in large-scale clusters, EdgeMesh architecture adjustments, EdgeMesh cross-LAN communication, and Kubernetes dependency upgrades:
CloudCore active-active HA support for large-scale clusters [Beta]
EdgeMesh architecture adjustments
EdgeMesh cross-LAN communication
ONVIF device mapper
Kubernetes dependency upgrade
More than 30 bug fixes and enhancements
OpenYurt is built on upstream Kubernetes with the goal of extending it to seamlessly support edge computing. In short, OpenYurt enables users to manage applications running on edge infrastructure as if they were running on cloud infrastructure. Released by Alibaba Cloud in May 2020, OpenYurt is the first edge computing cloud native open source project and the first project in the industry to extend Kubernetes to the edge computing field in a non-intrusive way. It officially became a CNCF sandbox project in September 2020.
OpenYurt is designed to meet the various DevOps requirements of typical edge infrastructure. Users who manage edge applications with OpenYurt get the same experience as managing applications in centralized cloud computing. It solves many of the challenges Kubernetes faces in cloud-edge integration scenarios, such as unreliable or disconnected cloud-edge networks, edge node autonomy, edge device management, and cross-region deployment. OpenYurt maintains full Kubernetes API compatibility, has no vendor lock-in, and is easy to use.
OpenYurt is now widely used in typical edge computing scenarios such as the Internet of Things, edge cloud, and distributed cloud, covering industries including logistics, energy, transportation, manufacturing, retail, healthcare, and CDN.
In September 2021, OpenYurt released v0.5.0. This release introduces, for the first time, a Kubernetes-native, non-intrusive, and extensible edge device management standard, seamlessly integrating the Kubernetes workload model with the IoT device management model. At the same time, the community cooperated with VMware to promote EdgeX Foundry as the first implementation of this cloud native device management model, greatly reducing the complexity of deploying and managing EdgeX Foundry on Kubernetes and improving the efficiency of edge device management.
Version v0.6.0 was also released in the same year, with new features including:
Launched the OpenYurt Experience Center, which lets end users try OpenYurt easily.
NodePool-level Ingress controller support.
Local storage supports multiple device paths.
Added YurtAppDaemon to manage DaemonSet-like workloads at the NodePool level.
Added the YurtCluster operator (declarative conversion between Kubernetes and OpenYurt).
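YurtAppDaemon's behavior, stamping out one copy of a workload for every NodePool, can be sketched as follows. The label key and the flattened workload structure are simplified assumptions for illustration, not OpenYurt's real controller code:

```python
def render_per_pool(template, nodepools):
    """Produce one workload per NodePool, pinning each copy to its
    pool with a nodeSelector (label key assumed for illustration)."""
    rendered = []
    for pool in nodepools:
        workload = dict(template)                       # shallow copy of the template
        workload["name"] = f'{template["name"]}-{pool}'  # unique name per pool
        workload["nodeSelector"] = {"apps.openyurt.io/nodepool": pool}
        rendered.append(workload)
    return rendered
```

Adding or removing a NodePool would then simply add or remove the corresponding rendered workload, which is the DaemonSet-like behavior at pool granularity.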
In November 2020, Tencent Cloud, together with Intel, VMware, Huya, Cambricon, Meituan, and Capital Online, released the SuperEdge edge container open source project.
SuperEdge is a Kubernetes-native edge container management system. It extends cloud native capabilities to the edge, realizes management and control from the cloud to the edge, and greatly simplifies deploying applications from the cloud to the edge. SuperEdge provides strong support for building edge-native applications.
Compared with OpenYurt and KubeEdge, SuperEdge is not only non-intrusive to Kubernetes and supports edge autonomy, but also offers unique advanced features such as distributed health checks and edge service access control, greatly reducing the impact of unstable cloud-edge networks on services and making it much easier to publish and govern services in edge clusters.
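The distributed health check mentioned above lets edge nodes in the same site probe one another, so a node is marked unhealthy only when a quorum of its peers agrees, rather than whenever the cloud merely loses contact with it. A simplified quorum evaluation (illustrative only, not SuperEdge's implementation):

```python
def evaluate_health(reports, quorum=0.5):
    """reports: {checker_node: {target_node: reachable_bool}}.

    A target is considered healthy if strictly more than `quorum` of the
    peers that probed it report it as reachable."""
    tally = {}  # target -> (reachable_votes, total_votes)
    for checker, results in reports.items():
        for target, reachable in results.items():
            ok, total = tally.get(target, (0, 0))
            tally[target] = (ok + (1 if reachable else 0), total + 1)
    return {target: ok / total > quorum for target, (ok, total) in tally.items()}
```

With this scheme, a node that only lost its link to the cloud, but is still reachable by its peers, keeps serving traffic instead of being evicted.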
At present, SuperEdge has been widely used, covering the Internet of Things, industrial Internet, transportation, energy, retail, smart city, smart building, cloud gaming and interactive live broadcasting.
In September 2021, SuperEdge officially became a CNCF sandbox project. SuperEdge v0.6.0 was released on September 26. This update focuses on production readiness, bringing SuperEdge into users' production environments. It adds local persistent storage support, edge IoT device access, ServiceGroup deployment status and event feedback, integration with the Tars microservice framework, monitoring-data collection for edge applications, and a demo of the Tengine AI framework running on SuperEdge. The details are as follows:
Integration with TopoLVM to support edge local persistent storage
Dynamic PV configuration: PV resources on edge nodes are automatically created when PVC objects are created.
Dynamic capacity expansion: editing the PVC object automatically expands the capacity of the PV.
Capacity metric collection: capacity and read/write metrics can be collected from the kubelet for storage monitoring.
Extended scheduler storage policy: TopoLVM extends kube-scheduler, uses the CSI topology feature to schedule Pods to the node where the LVM volume resides, and supports capacity-aware scheduling policies.
Unified management of local storage resources: multiple physical volume groups can be added to a VolumeGroup so that storage can be allocated to Pods centrally, hiding the details of the underlying physical volumes.
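The capacity-aware scheduling policy above can be illustrated with a toy "most free space wins" node picker. This is a sketch of the idea under that assumed policy, not TopoLVM's actual scheduler extension:

```python
def pick_node(free_capacity, requested_bytes):
    """free_capacity: {node_name: free bytes in that node's volume group}.

    Return the node with the most free space that can still fit the
    requested volume, or None if no node has enough capacity."""
    candidates = {node: free for node, free in free_capacity.items()
                  if free >= requested_bytes}
    if not candidates:
        return None  # scheduling fails: no node can host the volume
    return max(candidates, key=candidates.get)
```

A real capacity-aware scheduler would combine such a score with the rest of the scheduling pipeline (filters, other plugin scores); preferring the emptiest volume group simply spreads local volumes across nodes.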
Fledge is an open source framework and community for the industrial edge, focusing on critical operations, predictive maintenance, situational awareness, and security. Fledge's architecture aims to integrate the Industrial Internet of Things (IIoT), sensors, and modern machines with the cloud and with existing "brownfield" systems such as DCS (Distributed Control Systems), PLCs (Programmable Logic Controllers), and SCADA (Supervisory Control and Data Acquisition), all sharing a common set of management and application APIs.
With Fledge, developers and operators no longer face complexity and fragmentation when building IIoT applications that automate and transform their businesses by collecting and processing more sensor data. Fledge's modern pluggable architecture eliminates data silos; by using a consistent set of RESTful APIs to develop, manage, and secure IIoT applications, Fledge provides a unified solution.
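Fledge's pluggable architecture chains "south" plugins (data acquisition from sensors), filter plugins (in-flight processing), and "north" plugins (delivery to the cloud or other systems). The shape of such a pipeline can be caricatured in a few lines; the function names and data layout here are hypothetical, and Fledge's real plugin API is considerably richer:

```python
def south_sensor():
    """South plugin sketch: produce raw sensor readings (hypothetical data)."""
    return [{"asset": "temp", "reading": v} for v in (20.5, 21.0, 99.9)]


def range_filter(readings, low=-40.0, high=85.0):
    """Filter plugin sketch: drop readings outside a plausible sensor range."""
    return [r for r in readings if low <= r["reading"] <= high]


def north_buffer(readings, sink):
    """North plugin sketch: forward the surviving readings to a destination."""
    sink.extend(readings)


# Wire the pipeline together: south -> filter -> north.
sink = []
north_buffer(range_filter(south_sensor()), sink)
```

Because each stage only agrees on the reading format, any stage can be swapped independently, which is the point of the pluggable design.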
Fledge works closely with Project EVE, which provides system and orchestration services and a container runtime for Fledge applications and services. Fledge is also integrated with Akraino, and both projects support the rollout of 5G and private LTE networks. On September 29, 2021, Fledge released v1.9.2.