Exploring the MEC Architecture: Enabling Edge Computing for the Future

Hemant Rawat
16 min read · Jul 2, 2023


Cover image: https://unsplash.com/photos/nGwhwpzLGnU

ABSTRACT:

Data is not uniform in its creation and usage. Currently, many use cases focus on consolidating data in centralized locations, such as data centers or public cloud infrastructures. However, there are numerous untapped scenarios where the computing infrastructure must be situated closer to the data source. How we handle and utilize data at the edge will define new possibilities and advancements. The ability to generate real-time responses and actionable insights is a crucial driving factor for deploying edge computing solutions.

As the world becomes increasingly connected and reliant on real-time data processing, traditional cloud computing approaches face challenges related to latency, bandwidth limitations, and the growing volume of data generated by various IoT devices. To address these concerns, the European Telecommunications Standards Institute (ETSI) [1] introduced the Multi-access Edge Computing (MEC) [2] architecture. This article delves into the ETSI MEC architecture, its key components, and the benefits it offers for edge computing.

KEYWORDS: ETSI ISG MEC, Edge Computing, 5G, IoT

INTRODUCTION:

Edge computing is an alternative IT architecture strategy that allows organizations to unlock new business opportunities that are currently limited by centralized IT systems. Several factors drive the need for decentralization, including the expense and accessibility of bandwidth, network and application latency, scalability, availability, as well as security and regulatory considerations. Edge opportunities exist across all industries and business segments.

A. Understanding ETSI MEC Architecture

Multi-Access Edge Computing (MEC) offers application developers and content providers cloud-computing capabilities and an IT environment at the edge of the network. This environment is characterized by ultra-low latency and high bandwidth as well as real-time access to radio network information that can be leveraged by applications.

The ETSI MEC architecture focuses on pushing computational capabilities closer to the network edge, enabling applications to leverage low-latency, high-bandwidth computing resources. By distributing computing tasks between centralized cloud infrastructure and edge devices, MEC offers significant advantages in terms of reduced latency, improved bandwidth efficiency, and enhanced user experiences. Let’s explore the key components that constitute the ETSI MEC architecture [2].

Figure 1: ETSI MEC Architecture (Image Source: ETSI)

I. MEC Platform

At the core of the ETSI MEC architecture lies the MEC platform, which serves as a framework for deploying and managing edge computing resources.

The MEC platform consists of several functional components that work together to enable edge services. These components include:

a. MEC Host: The MEC host represents the physical or virtual infrastructure deployed at the network edge. It provides resources such as computing, storage, and networking capabilities to support MEC applications.

b. MEC Platform Manager: The MEC Platform Manager acts as the orchestrator for the MEC platform. It handles resource allocation, scalability, security, and overall management of MEC services.

c. MEC Application Lifecycle Manager: This component is responsible for managing the lifecycle of MEC applications, including deployment, scaling, termination, and monitoring.

d. MEC Service Management: The MEC Service Management component handles the discovery, onboarding, and lifecycle management of MEC services. It enables service providers to expose their capabilities to applications running on the MEC platform.
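To make the service-management role above concrete, the sketch below assembles the kind of service descriptor a MEC service might register with the platform's service registry. The field names loosely follow the shape of the ServiceInfo structure in ETSI GS MEC 011, but the exact names and values here should be treated as illustrative assumptions, not a normative schema.

```python
import json

def build_service_info(name, version, state="ACTIVE", transport="REST_HTTP"):
    """Assemble an illustrative service descriptor for registration
    with a MEC platform's service registry. The shape is loosely
    modelled on the ETSI MEC 011 ServiceInfo structure; field names
    here are illustrative, not normative."""
    return {
        "serName": name,
        "version": version,
        "state": state,
        "transportInfo": {"type": transport},
        "serializer": "JSON",
    }

# A registering service would POST a document like this to the
# platform's service-management endpoint.
descriptor = build_service_info("video-analytics", "2.1")
print(json.dumps(descriptor, indent=2))
```

Once registered, the descriptor is what other MEC applications see when they query the platform for available services.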

II. MEC Application

MEC applications are the software components that utilize the edge computing infrastructure to deliver services with ultra-low latency and improved performance. These applications can be developed by third-party developers, enterprises, or network operators. By leveraging the MEC platform’s resources and capabilities, MEC applications can benefit from the proximity to end-users, data sources, and other network services.

B. EDGE REQUIREMENTS

Edge computing requirements can vary depending on the specific use case and deployment scenario. However, there are several common requirements that are often associated with edge computing [4]:

I. Cloud principles applied at Edge

· Virtualization (compute, storage, network)

· On Demand

· API driven

· Automated LCM

· Commodity hardware

II. Infrastructure requirements at Edge

This diversity of requirements adds complexity to the hardware infrastructure. To accommodate that complexity, a common approach is to expose unified, consistent APIs to upper-layer applications while allowing the underlying platform implementations to vary [10].

TECHNICAL ARCHITECTURE:

Although the ETSI MEC architecture offers a framework for deploying edge applications and services within mobile networks, it does not explicitly outline distinct types of MEC deployments. Nevertheless, there are recognized categorizations commonly known as Type 1, Type 2, Type 3, Type 4, and Type 5 MEC deployments. These classifications assist in comprehending various deployment scenarios and their specific characteristics. Let us delve into each type to gain a better understanding.

Figure 2: MEC Types

I. Type 1 MEC:

Type 1 MEC refers to a deployment where MEC applications are hosted on dedicated servers or infrastructure located at the edge of the network. These dedicated MEC servers provide computing resources solely for MEC purposes and are typically deployed in close proximity to the access network. Type 1 MEC deployments offer low-latency processing, high-performance capabilities, and efficient resource allocation dedicated to MEC applications.

II. Type 2 MEC:

Type 2 MEC deployments involve hosting MEC applications on existing infrastructure elements such as base station equipment, routers, or switches. In this scenario, the existing infrastructure components are utilized to host MEC applications, eliminating the need for additional dedicated servers. Type 2 MEC deployments leverage the computational resources available within the network elements to support MEC application execution, thereby reducing the need for separate MEC infrastructure.

III. Type 3 MEC:

Type 3 MEC deployments involve the integration of MEC capabilities within the radio access network (RAN). The MEC platform is tightly integrated with the RAN components, allowing MEC applications to leverage the proximity to the edge devices and gain access to real-time radio-related information. Type 3 MEC deployments enable efficient processing of radio-related data and the implementation of low-latency and radio-aware services.

IV. Type 4 MEC:

Type 4 MEC refers to a deployment where MEC applications are hosted in virtualized environments, such as virtual machines (VMs) or containers. In this deployment model, MEC applications run on virtualized infrastructure managed by virtualization technologies like hypervisors or containerization platforms. Type 4 MEC enables flexibility, scalability, and efficient resource utilization through virtualization technologies, allowing multiple MEC applications to coexist on the same physical infrastructure.

V. Type 5 MEC:

Type 5 MEC deployments involve the integration of MEC capabilities directly into the user equipment (UE) or client devices. In this deployment model, MEC applications are executed directly on the UE or client devices, leveraging their processing capabilities and proximity to the end-user. Type 5 MEC enables device-centric applications, offloading certain processing tasks from the network to the user equipment, resulting in reduced latency and improved performance.

It’s important to note that these type classifications are not mutually exclusive, and different combinations of deployment types can be used based on the specific requirements and objectives of a given MEC implementation. These types provide a framework to understand the various deployment options available within the broader scope of the ETSI MEC architecture, enabling operators and service providers to choose the most suitable approach for their specific use cases and network environments.

EDGE STACK FUNCTIONAL LAYER

I. MEC Interfaces and APIs

To ensure interoperability and seamless integration of components within the MEC architecture, ETSI defines a set of standardized Application Programming Interfaces (APIs) and interfaces [5]. These interfaces allow MEC applications to interact with the MEC platform, network infrastructure, and other services. Standardization of APIs simplifies the development and deployment of MEC applications across different MEC platforms and environments.

Figure 3: ETSI MEC APIs

To facilitate communication and interaction between different components of the ETSI MEC architecture, the ETSI specification defines three key interfaces known as the P1, P2, and P3 interfaces. These interfaces play a crucial role in enabling interoperability and seamless integration within the MEC ecosystem. Let’s explore each interface in more detail:

I. P1 Interface:

The P1 interface, also referred to as the 3GPP Interface, serves as the interface between the MEC platform and the MEC applications. It defines a standardized set of APIs that enable MEC applications to discover and interact with the underlying MEC platform functionalities. The P1 interface provides capabilities such as application lifecycle management, context information exchange, location-based services, and security mechanisms. By adhering to the P1 interface specifications, application developers can ensure compatibility and interoperability across different MEC platforms.
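The application lifecycle management mentioned above can be pictured as a small state machine. The states and transitions below are a simplified assumption for illustration, not the normative ETSI lifecycle model.

```python
# Illustrative lifecycle state machine for a MEC application.
# States and operations are simplified assumptions, not the
# normative ETSI model.
TRANSITIONS = {
    ("NOT_INSTANTIATED", "instantiate"): "INSTANTIATED",
    ("INSTANTIATED", "operate"): "RUNNING",
    ("RUNNING", "stop"): "INSTANTIATED",
    ("INSTANTIATED", "terminate"): "NOT_INSTANTIATED",
}

def apply_operation(state, operation):
    """Apply a lifecycle operation, rejecting illegal transitions."""
    key = (state, operation)
    if key not in TRANSITIONS:
        raise ValueError(f"illegal transition: {operation} from {state}")
    return TRANSITIONS[key]

state = "NOT_INSTANTIATED"
for op in ("instantiate", "operate", "stop", "terminate"):
    state = apply_operation(state, op)
print(state)  # NOT_INSTANTIATED
```

Modelling the lifecycle this way makes it explicit which management operations are valid at each point, which is exactly the kind of guarantee a lifecycle manager enforces.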

II. P2 Interface:

The P2 interface, known as the MEC Management API, facilitates communication between the MEC platform and the MEC management system. It enables the management system to control and configure MEC platform resources, such as MEC hosts and their associated applications. The P2 interface provides functionalities for resource discovery, configuration management, monitoring, and event notification. With the P2 interface, the management system can efficiently orchestrate MEC resources, allocate computing resources, and monitor their status and performance.

These interfaces are also known as edge enabler APIs: a set of interfaces and protocols that facilitate communication and interaction between applications, services, and the underlying MEC platform. They enable developers to leverage the capabilities of the MEC environment and access resources at the edge, such as computing power, storage, and network services. One example is explained below:

  • The Radio Network Information Service (RNIS) APIs are a specific set of APIs that provide access to radio network-related information and functionalities within the MEC architecture. These APIs allow applications to retrieve real-time information about the radio network, including signal strength, quality, available frequencies, and coverage areas. Applications can use this information to optimize their behavior, adapt to network conditions, and enhance the overall user experience. The Radio Network Information Service specification also defines a data model that includes S1 bearer information.
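As a concrete illustration of an RNIS query, the helper below builds the GET URL an application might use to retrieve bearer information. The path and parameter names mirror the style of the ETSI GS MEC 012 Radio Network Information API, but they are assumptions for illustration, not a guaranteed contract with any particular platform.

```python
from urllib.parse import urlencode

def rnis_query_url(base_url, cell_id=None, app_ins_id=None):
    """Build a GET URL for an RNIS bearer-information query.
    The path (/rni/v2/queries/rab_info) and parameter names follow
    the style of ETSI MEC 012 but are illustrative assumptions."""
    params = {}
    if cell_id:
        params["cell_id"] = cell_id
    if app_ins_id:
        params["app_ins_id"] = app_ins_id
    query = "?" + urlencode(params) if params else ""
    return f"{base_url}/rni/v2/queries/rab_info{query}"

# An application would issue an HTTP GET against this URL and parse
# the JSON bearer information in the response.
print(rnis_query_url("https://mec-host.example", cell_id="0x800000B"))
```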

III. P3 Interface:

The P3 interface, also called the MEC Location API, enables MEC applications to access location-based services from the underlying MEC platform. It provides standardized APIs for querying location information, geofencing capabilities, and position tracking. The P3 interface is particularly useful for location-aware applications that require real-time location data to deliver personalized and context-aware services. By leveraging the P3 interface, MEC applications can obtain accurate and up-to-date location information, enhancing the functionality and user experience of location-based services.
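One of the geofencing capabilities mentioned above can be sketched without any platform at all: given a device position (as it might be returned by the Location API), decide whether it falls inside a circular zone. The coordinates below are made up for the example.

```python
import math

def inside_geofence(lat, lon, center_lat, center_lon, radius_m):
    """Return True if (lat, lon) falls inside a circular geofence.
    Uses the haversine great-circle distance; a MEC application
    could apply this to coordinates obtained via the platform's
    location services."""
    r_earth = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat), math.radians(center_lat)
    dphi = math.radians(center_lat - lat)
    dlmb = math.radians(center_lon - lon)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    distance = 2 * r_earth * math.asin(math.sqrt(a))
    return distance <= radius_m

# A point roughly 111 m north of the centre is inside a 200 m fence:
print(inside_geofence(52.5210, 13.4050, 52.5200, 13.4050, 200))  # True
```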

These interfaces, P1, P2, and P3, collectively form the backbone of the ETSI MEC architecture, facilitating seamless communication, interoperability, and integration between different components. By adhering to these standardized interfaces, MEC platform providers, application developers, and management systems can ensure compatibility and enable a vibrant ecosystem of MEC applications and services.

IV. MEC Subscription API:

The MEC Subscription API enables MEC applications to subscribe to specific events or notifications from the MEC platform. It allows applications to register their interest in specific events or changes in the MEC environment, such as network status updates, service availability changes, or context information updates. The Subscription API ensures that MEC applications stay updated and responsive to changes happening within the MEC ecosystem.
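The subscription pattern above typically boils down to POSTing a small document that names the event of interest and a callback URI. The builder below sketches such a request body; the field names follow common MEC API conventions but vary per API, so treat them as illustrative assumptions.

```python
import json

def build_subscription(callback_url, event_type, filter_criteria=None):
    """Assemble an illustrative event-subscription request body.
    MEC subscription resources generally carry a callback URI and
    optional filter criteria; the exact field names differ between
    APIs, so the ones used here are assumptions for illustration."""
    body = {
        "subscriptionType": event_type,
        "callbackReference": callback_url,
    }
    if filter_criteria:
        body["filterCriteria"] = filter_criteria
    return body

# The application would POST this body to the platform's subscription
# resource and then receive notifications at the callback URL.
sub = build_subscription(
    "https://app.example/notify",
    "SerAvailabilityNotificationSubscription",
    {"serNames": ["video-analytics"]},
)
print(json.dumps(sub, indent=2))
```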

V. MEC Service Availability API:

The MEC Service Availability API enables MEC applications to discover and determine the availability of specific MEC services within the MEC environment. It provides APIs for querying the availability status of various MEC services, their capabilities, and associated service parameters. The Service Availability API allows MEC applications to dynamically adapt their behavior based on the availability of required services.
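Dynamically adapting to service availability, as described above, usually means filtering a discovery response down to the services an application actually needs. The sketch below works on a hypothetical catalogue structure (field names are illustrative) rather than a live API response.

```python
def available_services(catalog, required):
    """Given a service catalogue (a list of dicts with 'serName' and
    'state' keys, as a discovery query might return it), keep only
    the required services that are currently ACTIVE. Field names
    are illustrative assumptions."""
    active = {s["serName"] for s in catalog if s.get("state") == "ACTIVE"}
    return [name for name in required if name in active]

catalog = [
    {"serName": "rnis", "state": "ACTIVE"},
    {"serName": "location", "state": "INACTIVE"},
    {"serName": "video-analytics", "state": "ACTIVE"},
]
# The application can now degrade gracefully: only 'rnis' of the two
# required services is currently usable.
print(available_services(catalog, ["rnis", "location"]))  # ['rnis']
```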

DEPLOYMENT OPTIONS FOR MEC

MEC (Multi-access Edge Computing) offers various deployment options to suit different use cases and infrastructure requirements. These deployment options enable organizations to optimize their edge computing implementations based on factors such as network architecture, resource availability, and scalability needs.

In the context of MEC deployment within a 5G environment, Telecom Operators have various options at their disposal to optimize the deployment. With access to location information for both devices and edge nodes exclusively available to the telco network, the following functionalities can be provided:

1. SERVICE DISCOVERY: Edge service discovery locates an appropriate cloud, based on location and other information, and provides an IP address for the MEC application.

2. DEVICE MOBILITY: Telecom Operators can utilize their knowledge of device mobility within the network to enhance MEC deployment. This includes efficiently managing handovers and ensuring continuous connectivity as devices move across different edge nodes, guaranteeing uninterrupted service delivery.

3. TRAFFIC STEERING: Traffic steering enables the routing of traffic towards specific MEC applications. Applications can request routing policies and rules that are then enforced by Telco Network functions such as AF (Application Function), PCF (Policy Control Function), and NEF (Network Exposure Function). The User Plane Function (UPF) performs the actual routing actions. Additionally, the NEF exposes Telco network services and capabilities to applications, encompassing device monitoring, provisioning, Quality of Service (QoS), and charging policies. NEF establishes a trust model, allowing the deployment of applications in an edge cloud even if they are untrusted by the telco network.
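The service-discovery step in item 1 above can be sketched as a nearest-node selection: given a device position, return the endpoint of the closest edge node. The coordinates and node records below are made up for illustration; a real deployment would rely on the operator's location and discovery services rather than raw coordinates.

```python
def pick_edge_node(device_xy, nodes):
    """Select the edge node closest to the device and return its
    application endpoint. Node records ('x', 'y', 'app_endpoint')
    are illustrative assumptions, not a standard data model."""
    def dist2(node):
        dx = node["x"] - device_xy[0]
        dy = node["y"] - device_xy[1]
        return dx * dx + dy * dy  # squared distance is enough to rank
    return min(nodes, key=dist2)["app_endpoint"]

nodes = [
    {"name": "edge-a", "x": 0.0, "y": 0.0, "app_endpoint": "10.0.0.10"},
    {"name": "edge-b", "x": 5.0, "y": 5.0, "app_endpoint": "10.0.1.10"},
]
# A device near (4, 4.5) is steered to edge-b:
print(pick_edge_node((4.0, 4.5), nodes))  # 10.0.1.10
```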

3GPP Release 16 (R16) [3] introduces several features and enhancements that support Multi-access Edge Computing (MEC) within mobile networks. These features aim to enable efficient and seamless integration of MEC services and applications, improving performance, flexibility, and user experience. Let’s explore some of the key R16 features supporting MEC:

· UPF Resolution: R16 introduces UPF (User Plane Function) resolution, which allows MEC applications to discover and interact with the appropriate UPF for data traffic processing. This resolution mechanism ensures efficient routing and delivery of data between MEC applications and the UPF, optimizing network resources and reducing latency.

· Local Routing and Traffic Steering: With R16, local routing and traffic steering capabilities are enhanced. MEC applications can leverage network functions, such as Application Function (AF) and Policy Control Function (PCF), to influence the routing and steer traffic towards specific MEC services or applications. This enables efficient and optimized data flow within the edge infrastructure.

· Session and Service Continuity: R16 emphasizes session and service continuity, ensuring uninterrupted connectivity and seamless handovers for MEC applications. It enables applications to maintain their sessions and services while moving across different edge nodes or transitioning between different network environments, enhancing the overall user experience.

· AF Influenced Traffic Steering: R16 allows the AF to influence traffic steering decisions made by network functions. This means that the AF can provide input and preferences regarding the routing of traffic, allowing applications to have more control over how their data flows within the network.

· Network Capability Exposure: R16 introduces network capability exposure mechanisms that enable MEC applications to access and utilize network capabilities and services. This includes exposing QoS (Quality of Service) and charging policies, allowing applications to optimize resource allocation and billing based on their specific requirements.

· QoS and Charging: R16 enhances QoS and charging mechanisms, enabling MEC applications to request and receive specific quality levels and charging parameters. This ensures that MEC services and applications can be prioritized and billed accordingly, based on their QoS requirements and resource usage.

· Local Area Data Network (LADN): R16 introduces LADN, which allows MEC services to be deployed in a local area network within the mobile network infrastructure. LADN enables efficient processing and data handling for MEC applications by bringing the computing resources closer to the data source, reducing latency and improving performance.

These features and enhancements in 3GPP Release 16 provide a solid foundation for the integration and support of MEC within mobile networks. They empower MEC applications to leverage network capabilities, optimize data routing, ensure service continuity, and enhance QoS and charging, ultimately enabling the deployment of innovative and high-performance edge services and applications.

The placement of the User Plane Function (UPF) in the 5G network architecture can have a significant impact on the ETSI MEC architecture. The UPF is responsible for handling the user plane traffic, including data forwarding and packet processing, in the 5G core network. Here’s how the placement of the UPF can affect the MEC architecture:

Figure 4: UPF placement in MEC Deployments

I. Centralized UPF Placement:

In a centralized UPF placement scenario, the UPF is located in the core network, typically in a centralized data center. In this setup, the MEC architecture can leverage the centralized UPF for data offloading and traffic routing. MEC applications deployed at the edge can benefit from reduced latency and improved network efficiency by utilizing the centralized UPF for data processing. This placement enables MEC applications to offload data-intensive tasks to the centralized UPF, reducing the workload at the edge and improving overall performance.

II. Distributed UPF Placement:

In a distributed UPF placement scenario, the UPF is deployed closer to the edge, in proximity to the MEC hosts or edge nodes. This deployment strategy reduces the backhaul traffic and enables localized data processing and traffic steering. With a distributed UPF placement, MEC applications can benefit from reduced latency as data is processed and routed locally within the edge network. This setup is particularly advantageous for latency-sensitive applications that require real-time processing and low-latency communication with the edge devices.

III. Hybrid UPF Placement:

In certain scenarios, a hybrid UPF placement approach may be employed, where a combination of centralized and distributed UPF instances are deployed. This allows for flexibility in optimizing network resources based on the specific requirements of MEC applications and traffic patterns. For example, critical MEC applications with stringent latency requirements can utilize a distributed UPF instance for local processing, while non-latency-sensitive applications can leverage the centralized UPF for data offloading and traffic routing. The hybrid UPF placement offers a balance between performance optimization and resource utilization.
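The hybrid placement decision described above can be reduced to a simple rule of thumb: among the UPF instances that meet an application's latency budget, prefer the one with the most capacity (typically the centralized instance). The records and numbers below are illustrative assumptions, not measured values.

```python
def select_upf(latency_budget_ms, upfs):
    """Pick a UPF instance under a latency budget, preferring the
    highest-capacity (typically centralized) instance that still
    meets the budget. Records and thresholds are illustrative."""
    candidates = [u for u in upfs if u["rtt_ms"] <= latency_budget_ms]
    if not candidates:
        raise ValueError("no UPF meets the latency budget")
    # Prefer centralized capacity whenever latency allows it.
    return max(candidates, key=lambda u: u["capacity"])["name"]

upfs = [
    {"name": "upf-central", "rtt_ms": 25, "capacity": 100},
    {"name": "upf-edge-1", "rtt_ms": 3, "capacity": 10},
]
print(select_upf(50, upfs))  # upf-central
print(select_upf(5, upfs))   # upf-edge-1
```

A non-latency-sensitive application (50 ms budget) lands on the centralized UPF, while a latency-critical one (5 ms budget) is forced onto the distributed instance, mirroring the trade-off the text describes.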

Benefits of ETSI MEC Architecture

The ETSI MEC architecture offers several compelling benefits for edge computing scenarios:

· Reduced Latency: By moving computing resources closer to the network edge, MEC significantly reduces the latency experienced by applications. This is particularly important for real-time applications such as augmented reality, autonomous vehicles [7], industrial automation [8], and gaming, where even milliseconds of latency can impact performance and user experience.

· Enhanced Bandwidth Efficiency: MEC helps offload the data traffic from the core network by processing data locally at the edge. This reduces the amount of data that needs to traverse the network, thereby improving overall network bandwidth efficiency.

· Scalability and Flexibility: MEC enables the deployment of applications and services at multiple edge locations, allowing for increased scalability and flexibility. The ability to dynamically allocate resources based on demand ensures efficient utilization of edge computing infrastructure.

IMPEDIMENTS TO EDGE COMPUTING ADOPTION:

The following section provides an overview of the challenges and impediments faced in the deployment and adoption of Multi-access Edge Computing (MEC). These challenges need to be addressed to ensure successful implementation and widespread adoption of MEC solutions [6]. Let’s explore them:

a. Monetization Strategy: There is a lack of clear pathways for monetizing edge computing deployments, making it challenging for organizations to justify the investment and derive financial returns from their edge infrastructure.

b. Viability of MEC Architecture: The current architecture of Multi-access Edge Computing (MEC) may face viability concerns, such as scalability limitations, interoperability issues, and compatibility challenges with existing network infrastructure.

c. Immature Orchestration and Management Solutions: The availability of robust and comprehensive orchestration and management solutions for edge computing is still limited. This hinders efficient provisioning, monitoring, and management of resources and services at the edge.

d. Limited Availability of MEC Software Apps: The ecosystem of MEC software applications is relatively small compared to traditional cloud-based applications. This lack of diverse and mature MEC applications restricts the range of use cases that can be effectively deployed at the edge.

e. Lack of Consumer and Enterprise Interest: There is a need to generate greater awareness and interest among both consumers and enterprises regarding the benefits and potential applications of edge computing. Without sufficient demand and adoption, the growth of edge computing can be hindered.

f. High Initial Investment: Implementing edge computing infrastructure requires significant upfront investment in hardware, software, and networking components. This initial cost can act as a barrier for organizations considering edge computing deployments.

g. Immature Software Platform Ecosystem: The ecosystem of software platforms specifically designed for edge computing is still developing. This includes frameworks, development tools, and middleware that enable efficient application development and deployment at the edge.

h. Lack of Expertise and Skills: The specialized knowledge and skills required for designing, deploying, and managing edge computing environments are currently in short supply. Organizations may face challenges in finding qualified personnel with expertise in edge computing technologies.

i. Immature Hardware Platform Ecosystem: The availability of hardware platforms optimized for edge computing, such as edge servers or gateways, is not as mature as the ecosystem for traditional data centers. This can limit the options and scalability of edge computing infrastructure.

FOUR EDGE COMPUTING CHALLENGES:

1. Data:

a. Integration: Managing data integration from diverse sources and formats at the edge.

b. Governance: Ensuring proper data governance and compliance with regulations at the edge.

c. Analytics: Performing efficient and real-time analytics on edge data for actionable insights.

2. Diversity:

a. Use Cases: Addressing the diverse range of use cases and requirements for edge computing.

b. Topology: Adapting edge computing to different network topologies and configurations.

c. Technologies: Integrating and managing a variety of edge technologies and protocols.

d. Standards: Establishing common standards and interoperability among edge computing systems.

3. Protection:

a. Security: Implementing robust security measures to protect data and infrastructure at the edge.

b. Privacy: Safeguarding user privacy and ensuring compliance with data privacy regulations.

c. Compliance: Meeting regulatory compliance requirements related to data handling and storage at the edge.

4. Locations:

a. Scale: Scaling edge computing deployments across numerous distributed locations.

b. Environments: Adapting to various environmental conditions and constraints at edge locations.

c. Remote Management: Enabling efficient remote management and maintenance of edge infrastructure.

d. Autonomy: Empowering edge devices with autonomous capabilities for self-management and decision-making.

These challenges often require customized consulting and tailored technological solutions to address specific use cases and optimize edge computing deployments.

CONCLUSION:

Organizations will need to develop a multilayer edge computing strategy that addresses the challenges of diversity, location, protection, and data. The variety of use cases and requirements can lead to a sprawl of first-of-a-kind edge computing deployments with no synergy between them, complicating efforts to secure and manage them. The scale of distributed computing and storage required by edge computing, combined with deployment locations that usually have no IT staff, creates new management challenges. With processing and storage placed outside traditional information security visibility and control, edge computing also creates new security challenges that need to be addressed in depth. In short, edge computing creates a sprawling footprint across a distributed architecture that needs to be governed, integrated, and processed.

REFERENCES:

[1] https://www.etsi.org/technologies/multi-access-edge-computing

[2] https://www.etsi.org/deliver/etsi_gs/MEC/001_099/011/01.01.01_60/gs_mec011v010101p.pdf

[3] https://www.3gpp.org/technical-specifications-and-technical-reports

[4] https://www.stateoftheedge.com

[5] https://www.alefedge.com/

[6] https://www.gartner.com/

[7] https://www.5gaa.org/

[8] https://www.gsma.com/futurenetworks/mec/

[9] https://www.edgecomputingworld.com/

[10] https://www.lfedge.org/

[11] https://www.openfogconsortium.org/ra/
