


From Cloud Computing to Edge Computing

By Shane Wang and Ruoyu Ying, May 22, 2019


Over the last four decades, the Internet and networking have emerged, later transforming into cloud computing, and now evolving into edge computing to meet the needs of various users and use cases. Edge computing is becoming the next wave of data center infrastructure, powered by 5G and Internet of Things (IoT) technologies. Many popular, mainstream open source projects have emerged to support edge computing, several of which are described in this article.

History of Cloud Computing

In the early 1980s, Sun Microsystems* proposed a concept called ‘The network is the computer,’ which many believe to be the original prototype of cloud computing. The rapid development of computer technology and the rise of the Internet industry in the years that followed revealed how accurate and perceptive this concept truly was.

In recent decades, Amazon* has significantly impacted the cloud computing industry after founder Jeff Bezos mandated a transformation in corporate culture to a service-oriented architecture. In March 2006, Amazon launched its Elastic Compute Cloud service (EC2), where users paid only for the services they used, based on the time or resources they consumed, effectively commercializing cloud computing.

Following the introduction of Amazon Web Services* (AWS*), several enterprise companies entered the cloud computing field with cloud computing solutions of their own. Examples included Alibaba Cloud*, Google Cloud* platform, IBM Cloud*, and Microsoft Azure*, among others.

In addition to these commercial offerings, many open source projects emerged. In July 2010, NASA* and Rackspace* jointly launched an open source cloud software initiative known as OpenStack*, alongside other alternatives that commanded considerable market share, including OpenNebula*, Eucalyptus*, and CloudStack*. Of these projects, OpenStack has emerged as the de-facto standard of open-source-based Infrastructure-as-a-Service (IaaS) solutions.

Birth and Development of Edge Computing

In 1995, MIT* professor Tim Berners-Lee, inventor of the World Wide Web, foresaw the network congestion problem that we experience today. He challenged his colleagues to invent a new paradigm for delivering online content. In response, MIT researchers led by Tom Leighton launched Akamai*, a company that grew out of MIT, and invented the Content Delivery Network (CDN) platform, the prototype of edge computing. Unlike modern edge computing, this prototype was responsible only for caching and serving content at the network edge.

The development of edge computing is closely tied to that of cloud computing. Cloud computing users inevitably face limitations such as network congestion, high latency, poor real-time responsiveness, and performance bottlenecks, which prevent the technology from fulfilling their business requirements; edge computing can be seen as an extension of cloud computing that addresses these limitations. The European Telecommunications Standards Institute (ETSI) critically influenced its development by creating an industry working group in 2014 to advance standardization, proposing the Mobile Edge Computing (MEC) concept. In 2016, ETSI broadened this concept, extending edge computing beyond cellular networks to other kinds of access networks, and renamed it Multi-access Edge Computing. This concept has ultimately served as the standard and reference architecture for edge computing. The following diagram shows a reference architecture from the ETSI Mobile Edge Computing (MEC) Framework and Reference Architecture Group Specification.


MEC stands for “multi-access”, “edge”, and “computing”, and these three terms are defined as follows:

  • Multi-access: Providing various network access modes to accommodate different IoT scenarios, such as LTE*, WiFi, cable network, ZigBee*, LoRa*, NB-IoT, and others.
  • Edge: Network functions and applications are deployed at the edge of the network, closer to end users, to reduce the latency of transmission.
  • Computing: Making full use of the limited compute, storage, and networking resources at the edge by uniting cloud computing and fog computing.

Therefore, an edge computing system based on this architecture should exhibit all three of these properties.

Any discussion of edge computing must also cover a related concept: fog computing. First proposed by Columbia University* Professor Salvatore Stolfo in 2011, fog computing is a system-level architecture that distributes computing, storage, networking, and control services and resources from clouds toward the objects that produce and consume the data. In 2012, Cisco* formally articulated the theory with a detailed definition. Under the fog computing model, data, data processing, and applications are concentrated in devices at the network edge rather than in a centralized data center. The model is called “fog” because fog sits closer to the ground than a cloud.

Methodologically, fog computing is more systematic than edge computing, with a deeper hierarchy, greater extensibility, and a broader scope. It not only supports the network edge, but also provides continuous services along the whole path from objects to the cloud. In most cases, fog computing can also support edge computing, so for our purposes we treat fog computing as encompassing edge computing.

In November 2015, companies including ARM*, Cisco, Dell*, Intel, Microsoft*, and Princeton University* founded the OpenFog* Consortium, focused on defining an open, interoperable reference architecture for fog computing that facilitates the distribution of compute, storage and network resources such that they are closer to the end user. The consortium also aims to influence standards development, build operational models and testbeds, define and advance technology, educate the market, proliferate best practices, and promote industry interest and development through a thriving OpenFog ecosystem.

After the emergence of the MEC concept, edge computing and MEC spread inside the telecommunication industry and became one of the critical technologies for the future of 5G:

  • Working groups RAN3 and SA2 of the 3rd Generation Partnership Project* (3GPP*) released MEC-related technical reports and confirmed MEC as a critical element of the 5G architecture.
  • Next Generation Mobile Networks (NGMN) agreed to add MEC to the requirements and architecture of 5G, and pointed out the need for an intelligent node as part of the core network.
  • The IMT-2020 (5G) Promotion Group asserted that MEC will help move business platforms to the network edge and provide computing and caching capabilities to users through local services.
  • China Communications Standard Association (CCSA) launched a research project, the MEC system as Service-oriented RAN (SoRAN), which aims to conduct research on the architecture, application and requirements of SoRAN, API standards, and impact on existing wireless devices and networks.

With its low latency, high speed and other capabilities that are well suited for the telecommunication industry, edge computing may seem built solely for telecommunication applications. However, it can be applied well beyond the limits of the telecommunication industry, and various alliances have formed to accelerate its development and shape its standard architecture.

In September 2016, the 5G Automotive Association (5GAA), a global cross-industry alliance, was founded to develop and advance telecommunication solutions for smart cars and transportation. This alliance aims to improve the evolution of standards, explore business opportunities and expand global markets. Originally founded by Audi*, BMW*, and Daimler*, along with five technology and telecommunications corporations: Ericsson*, Huawei*, Intel*, Nokia*, and Qualcomm*, this alliance has grown steadily since its inception.

In December 2016, corporations including ARM, China Academy of Information and Communications Technology* (CAICT), Huawei, Intel, iSoftStone*, and Shenyang Institute of Automation of the Chinese Academy of Sciences* founded the Edge Computing Consortium (ECC) to foster open industry cooperation (particularly in Operations, Information and Communications Technology), accelerate innovation, nurture best practices and advance the sustainable development of edge computing.

In August 2017, Ericsson, Intel, NTT Japan*, and Toyota* founded the Automotive Edge Computing Consortium* (AECC) to address the network and computing demands of big data in the automotive industry through edge computing and more efficient system design. The premise of this consortium is that connected cars, while delivering safer driving, optimized energy consumption and lower emissions, will require much larger data transfer capacity, necessitating an evolution in network architectures and computing infrastructures. The consortium evaluates these demands and defines the resulting use cases and best practices. According to the AECC, intelligent, automated, and driverless driving systems can all be achieved through an edge computing infrastructure.

In addition to these alliances and consortia, a number of others have also formed to accelerate the development of edge computing and define its standard architecture.

Edge Computing Use Cases

Edge computing can be applied to a wide range of scenarios, including smart cities, smart homes, smart hospitals, live online broadcasting, smart parking, autonomous driving, unmanned aerial vehicles, intelligent manufacturing, virtual reality, and augmented reality. Furthermore, people are still discovering new applications and use cases for edge computing, often in combination with artificial intelligence and 5G technology.

One simple example is the use of surveillance cameras to monitor traffic violations. Once the camera captures a violation, it can transmit the video to an edge site, process and analyze the license plate data locally, and ultimately identify the license plate in a fraction of the time. Without edge computing, the surveillance video must be sent to a remote data center for processing and license plate identification. Over time, the amount and size of this data increases, placing significant strain on network bandwidth and real-time performance. By processing this data locally, we see considerable savings in both time and network bandwidth.
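The bandwidth argument above can be sketched in a few lines. The following is a minimal illustration, not any real system's API: the function names and the frame sizes are hypothetical, and the plate recognizer is a stub standing in for a local computer-vision model.

```python
# Hypothetical sketch: compare the data sent upstream when license-plate
# recognition runs at the edge site versus in a remote data center.

def recognize_plate(frame: bytes) -> str:
    """Stub for a local computer-vision model; returns the plate text."""
    return "ABC-1234"  # placeholder result

def cloud_upload_bytes(video_frames: list) -> int:
    """Cloud-only approach: every raw frame crosses the WAN."""
    return sum(len(f) for f in video_frames)

def edge_upload_bytes(video_frames: list) -> int:
    """Edge approach: frames are analyzed locally, and only a small
    metadata record (the plate text) is sent to the data center."""
    plates = {recognize_plate(f) for f in video_frames}
    return sum(len(p.encode()) for p in plates)

# Ten seconds of video at 30 fps, assuming ~2 MB per raw frame.
frames = [b"\x00" * 2_000_000] * 300
print(cloud_upload_bytes(frames))  # 600000000 bytes upstream
print(edge_upload_bytes(frames))   # 8 bytes upstream
```

Even with generous compression on the cloud path, the edge path ships orders of magnitude less data, which is exactly the saving in bandwidth and latency the example describes.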

Traffic violation detection is only one example; the benefits of edge computing can be realized in many other critical applications, including facial recognition, safety systems such as smoke alarms, population density monitoring systems to prevent stampedes, dynamic monitoring, forest fire prevention, and weather monitoring. In addition, edge computing can also be applied to more complicated scenarios. For example, online live broadcasting and video-on-demand in sports stadiums, interconnected monitoring and automation inside factories, cold chain monitoring and management, and vRAN and uCPE in the mobile communication industry.

Open Source Projects

Several open source projects related to edge computing have emerged. One of the first, CORD, or Central Office Re-architected as a Datacenter, is an Open Network Operating System (ONOS) use case proposed by AT&T*. The Central Office (CO) provides the pivotal convergence and access layers for network services such as wired, optical, DSL, and wireless networks. In November 2013, AT&T published its Domain 2.0 white paper, which aimed to make its network business and infrastructure as agile, scalable, and economical as data-centric cloud services, while letting users orchestrate, schedule, manage, and consume services conveniently. CORD is called “Central Office Re-architected as a Datacenter” because it aims to make the telecom central office cloud-enabled.

In January 2015, AT&T and the Open Networking Lab* (ON.Lab) defined the concept of CORD and demonstrated it at the Open Networking Summit in June of the same year. In July 2016, CORD became a separate open source project under the Linux Foundation*. At first, CORD did not target edge computing; instead, it built on SDN, NFV, and cloud computing technology, and it has gradually converged these into a new end-to-end solution. CORD is built on OpenStack or Docker*, using ONOS as the SDN controller and XOS as the orchestration tool. By combining distributed systems, open source software, and white-box switches, CORD reduces enterprise costs and accelerates the pace of innovation.

Since 2017, CORD has gradually incorporated the MEC side of edge computing and become an edge computing platform in its own right, aiming to deliver edge computing solutions through open source technologies. CORD can be applied in telecom central offices, homes, and even corporate environments, and can be deployed in places such as cell towers, automobiles, and unmanned aerial vehicles. Current users of CORD include AT&T, China Unicom*, NTT*, and SK Telecom*.

In June 2016, a new project called virtual Central Office (vCO) appeared at the OPNFV Beijing Summit. It is developed by an OPNFV open source working group based on OpenStack and the open source SDN controller OpenDaylight* (ODL), which together instantiate VNFs. Members of this working group include Cisco, Cumulus*, Ericsson, F5*, Intel, Lenovo*, Mellanox*, Netscout*, Nokia*, and Red Hat*. As an increasing number of new services emerge in the CO, the CO has become a critical part of telecommunication operators' NFV strategies, which include MEC. Like CORD, vCO treats the CO as the best place to host edge computing: in MEC, network functions that serve the user edge can be orchestrated and managed through vCO. vCO can be viewed as an alternative to CORD; the key difference is that vCO supports both OpenFlow white-box switches and the Border Gateway Protocol (BGP).

In the IoT industry, the Linux Foundation launched an open source IoT edge computing project named EdgeX Foundry* in April 2017. It is based on Dell's FUSE IoT middleware framework, released under the Apache* 2.0 license, and comprises a dozen microservices and more than 125 thousand lines of code. The Linux Foundation and Dell created EdgeX Foundry after FUSE merged with a similar AllJoyn-compliant project called IoTX. On the south-bound side, the project covers the IoT physical devices and network edge devices that communicate directly with sensors, actuators, and other components. On the north-bound side, it connects to a cloud platform responsible for data collection, storage, aggregation, analysis, and decision-making. The project aims to create an interoperable IoT edge computing ecosystem that allows plug-and-play devices, bridges different sensor network protocols and cloud (and even analytics) platforms, and makes EdgeX a common open source standard for IoT.

EdgeX Foundry is not a new standard, but rather a unified framework and application model for edge applications. It is a simple, interoperable framework, independent of the operating system, that supports any physical device and application, in order to improve connectivity between devices, applications, and cloud platforms. Its mission is to simplify and standardize industrial IoT edge computing while sustaining its openness.

In early 2018, the Linux Foundation and AT&T launched an open source project called Akraino Edge Stack*. Through a mix of new code and integration, it aims to develop a set of open source software blueprints for cloud services that optimize edge computing systems and applications for high availability. Akraino* will help provide telecommunication operators, suppliers, and the IoT with services offering high availability, high reliability, and high performance. Many corporations, including China Telecom*, China Unicom*, CMCC*, Huawei*, Intel, Tencent*, Wind River*, and ZTE*, have joined the community to discuss, deliberate, and design its future architecture.

Akraino includes several components, among them Airship* (a set of tools for automating cloud provisioning) and StarlingX* (a virtualization platform for building mission-critical edge clouds). Over time, Akraino aims to be an open source project that works across communities, projects, and corporations. The plan for Akraino is ambitious, since it starts with mature code and support from multiple corporations and suppliers. We believe that, with the effort of the community, the project will become more mature and functional, and ultimately a preferred open source solution for edge computing.

In the OpenStack community, support for edge computing is still at an early stage, but the OpenStack Foundation has already set up an edge computing working group in response to the evolution of cloud computing. Following a proposal at the OpenStack Boston Summit, a two-day OpenDev seminar was held in September 2017 in San Francisco to discuss topics related to edge computing. In February 2018, the OpenStack Foundation published a white paper on edge computing based on the community's feedback and the working group's efforts. In May 2018, the foundation added Airship and StarlingX to its community to support the development of a full-stack solution for edge computing.


The development of edge computing is still in its infancy: the instrumentation and infrastructure needed to build distributed edge computing are not yet widely deployed, and neither are complete software and hardware solutions. But by drawing on mature solutions and lessons learned from cloud computing, along with blossoming 5G technology, purpose-built infrastructure for edge computing will soon take shape. Operators and cloud service providers will launch edge computing platforms, enabling more demanding scenarios such as real-time applications and other edge use cases, and leading us into a new era of computing.


Shane Wang, Software Engineering Manager at NST/DSS/SSP, Intel Corporation; Individual Director of OpenStack Foundation Board

Ruoyu Ying, Cloud Software Engineer at NST/DSS/SSP, Intel Corporation