Scalable Edge Computing for IoT and Multimedia Applications Using Machine Learning
  • Mohammad Babar1, *, Muhammad Sohail Khan1, Usman Habib2, Babar Shah3, Farman Ali4, and Dongho Song5, *

Human-centric Computing and Information Sciences volume 11, Article number: 41 (2021)
https://doi.org/10.22967/HCIS.2021.11.041

Abstract

Edge computing offers a modern computing platform for the Internet of Things (IoT), smart systems, and multimedia applications. These technologies are built on resource-constrained devices that cannot execute complex tasks on their own. Edge computing compensates through computation offloading, but offloading at large scale creates congestion and gives rise to a scalability problem in edge computing. This study addresses the scalability issue by proposing a state-of-the-art cross-entropy based scalable edge computing framework comprising IoT devices, edge servers, and the cloud. We cluster the IoT devices using a social IoT (SIoT) clustering technique for better control and improved QoS, and we propose a cross-entropy based latency-critical computation offloading algorithm (LACCoA) for efficient resource scheduling at the edge layer. The algorithm builds on the Kullback-Leibler (K-L) divergence, a distance measure between two probability distributions. LACCoA ensures the parallel utilization of edge resources and thus produces solutions with low computational complexity. In addition, a lightweight request and admission cycle ensures a seamless computation offloading process. Compared with particle swarm optimization (PSO) and adaptive PSO, the proposed technique produces favorable results: the experiments show notable improvement in reducing latency, minimizing energy consumption, and meeting the QoS requirements of multimedia and IoT applications. Furthermore, the framework scales the edge server to compute the maximum number of offloaded tasks.


Keywords

Internet of Things, Multimedia Analytics, Edge Computing, Cross Entropy, Computation Offloading


Introduction

Future-generation smart systems are expected to thrive through the Internet of Things (IoT). Interconnected smart IoT gadgets have expedited the evolution of smart city, healthcare, and transportation solutions [1]. However, this intelligent equipment is mostly resource-limited and unable to handle complex tasks. Cloud computing was assumed to be a resource-rich solution for smart devices, but its inherent latencies make it impractical for them [2]: the applications running on smart devices require real-time responses and cannot afford excessive latency. Edge computing has recently been introduced to integrate cloud services with smart devices [3].
Edge computing combines abundant bandwidth, reliability, and ultra-low latency, and it extends the lifetime of IoT devices through its computation offloading facility [4]. Computation offloading is the process of migrating all or part of a computationally complex task, which an IoT device is unlikely to handle itself, for remote processing. The edge server executes the task, and the execution results are returned to the IoT device [5, 6]. Computation offloading minimizes energy consumption, reduces latency, and strengthens the performance of the IoT. Smart devices tend to generate huge volumes of data in short intervals of time; processing this data on a cloud platform consumes high energy, occupies high bandwidth, and creates unnecessary congestion over back-haul links. Edge computing can work in close proximity to users, filter out unnecessary traffic, and help integrate cloud services with smart devices by taking timely decisions. Further, it protects the back-haul links from the resulting congestion.

Fig. 1. A 3-tier edge computing framework.


Edge computing provides storage, computation, and other cloud services at the network edge. An edge-based computing platform not only provides IoT devices with high bandwidth and low latency but also saves energy through its computation offloading facility. However, large-scale offloading requests create congestion on the server residing at the edge and give rise to the scalability issue [7]. To resolve this issue, this article proposes a 3-tier edge computing framework combining the edge server, the cloud, and the IoT, as presented in Fig. 1. The designed framework efficiently addresses the scalability issue and manages the edge server resources. The following are the primary contributions of our research.
We developed a classical three-tier edge-cloud combination framework to simulate a realistic computation offloading scenario observing stringent energy and latency constraints for latency-critical tasks. The edge-cloud integration strengthens edge performance by utilizing cloud resources in busy hours, which guards the edge server against congestion and further scales it.
A latency-critical computation offloading algorithm (LACCoA) is proposed and implemented over the 3-tier edge-cloud integration framework to ensure efficient task offloading for delay-sensitive (DS) and delay-tolerant (DT) tasks. In addition, a request and admission control cycle is proposed that encapsulates the energy and latency requirements of each task and ensures a seamless computation offloading process. For performance improvement, the computation offloading issue is converted into an integer linear programming problem and solved using the cross-entropy (CE) technique, which measures the distance between two probability distributions over task-offloading decisions. The proposed technique produces an optimal solution, minimizes delay, and reduces the power consumption of IoT devices.
A large number of IoT devices are utilized, and an SIoT clustering technique is implemented that prioritizes DS tasks for offloading to the edge server and serves DT tasks on a first-come first-served (FCFS) basis. Scalability at both the IoT layer and the edge layer is efficiently achieved using the SIoT clustering technique, which controls the number of offloading requests and keeps the edge server functioning smoothly.
The remainder of the article is structured as follows. Section 2 presents the relevant literature, highlighting the major contributions of published articles. Section 3 discusses the 3-tier framework, which includes SIoT clustering, edge orchestration and resource management, and the latency-critical computation offloading algorithm. Section 4 includes the results achieved and a performance comparison with state-of-the-art published articles. Finally, Section 5 concludes the study with future research directions.


Related Work

Smart systems remain a center of attraction for researchers in academia and industry, and their deployment is the dire need of the day. Applications pertaining to smart systems have stringent QoS requirements, mainly ultra-low latency, crisp response, a smart network, an intelligent system, and first-line security. Nevertheless, traditional cloud architecture cannot meet the required QoS because the cloud is several hops away from the user's premises, incurring long latency [8].
Edge computing restructures the entire technical landscape of the Internet of Things. It compensates for the shortcomings of low-resource devices and encapsulates the features of proximity and energy efficiency: proximity guarantees ultra-low latency, and computation offloading leads to energy efficiency for the IoT device [4, 7]. Although edge computing is developing rapidly, it cannot keep pace with its cloud counterpart. When user requests reach the edge server simultaneously at large scale, they form congestion on the edge server and give rise to the scalability issue. In [9], a smart health infrastructure using edge computing is designed to provide affordable and scalable amenities to patients. This existing work efficiently discovered and compressed the data and extracted features for event detection to accomplish the objective of in-network context-aware processing for smart health [10].
The scalability issue has been addressed in several ways. One potential solution is to design an effective computation offloading technique. A typical computation offloading process can be completed in three stages: prepare the device for computation offloading, offload the complex task, and return the post-execution output to the offloading device. Task offloading is, however, often difficult: it involves deciding whether to offload a complete application or only part of it, whether to offload statically or dynamically, and whether to offload randomly or selectively [11–14].
Computation offloading techniques are discussed in existing studies to resolve scalability in edge computing [15, 16]. Using these techniques, an entire application is sent to a remote server for processing. The edge system is scaled by these methods, but they incur additional latency, consume more power in preparing the task for offloading, and require coordination among different IoT devices. For computation offloading, a virtual machine (VM) migration strategy has been suggested that focuses on the efficient exploitation of the edge system's available resources and regulates its workload [17, 18]. The proposed techniques streamline resource allocation and synchronization but introduce the technicalities of remote execution control and context gathering [19, 20].
A computation offloading study in [21] presents a multi-tasking scenario comprising multiple users. The study uses an edge computing architecture observing stringent security, latency, and energy constraints. In addition, this model ensured the efficient use of shared edge resources and reduced the transmission overhead to achieve the security requirements of IoT applications. A geo-distributed offloading scheme is proposed for large-scale edge computing infrastructure [22, 23]; it includes a wide range of edge nodes and IoT gadgets, and a multi-objective optimization technique is utilized to attain low latency and effective resource allocation. These studies did not focus on load sharing or balancing over edge servers [24]. For a proximate cloud, a computation offloading technique called ACCOMMA was devised: an ant-inspired, bi-objective offloading middleware that solves computation offloading via reinforcement learning to reduce the execution time of IoT-based applications [25]. A genetic algorithm-based computation offloading technique has been proposed to decrease latency, processing time, and the power consumption of a multitude of IoT devices [26]. However, these studies did not consider any clustering technique for scaling the edge computing architecture.
A distributed and flexible resource management technique has been presented to ensure efficient provisioning and allocation of edge resources [27–30]. In [29, 31–33], the authors addressed the scalability issue by aggregating the offloaded tasks and prioritizing task allocation to sustain the edge server. Other researchers have jointly optimized the computation offloading decision and resource provisioning to achieve scalability [34–38]. In [39], a distributed task execution model for body area sensor networks (BASN) was introduced to optimize energy and transmission cost while achieving high QoS for a mobile health system. However, this existing model lacks a standard edge computing architecture, and it produces longer latencies because the offloaded tasks are executed on the cloud. The existing architecture uses personal digital assistants (PDAs) as edge devices, which are resource-constrained and incapable of executing compute-intensive tasks. The studies discussed in this section tried to achieve scalability using resource provisioning and task allocation over the edge server and IoT devices. Considering the edge server capacity, they deployed optimization techniques to improve the computation offloading decision with energy or latency considerations, or with random task selection for computation offloading using intelligent base station (BS) channel selection. However, most of the studies did not consider the effect of an efficient clustering technique at the IoT layer, while many excluded the resource-rich cloud from scaling the edge server [40].
A seamless computation offloading process requires a firm and realistic testing environment for optimal performance. Several studies [41–48] implemented a 2-tier architecture of a user device layer and an edge layer to simulate computation offloading. However, the offloading scenarios comprised a single device that offloads the task and a single server that executes it. This is an unrealistic assumption, and implementing a solution to the problem under it is extremely difficult. A number of studies [49–51] implement effective offloading scenarios that involve a multitude of end-user devices performing computation offloading and a number of servers executing the offloading requests, representing more realistic settings. Even in such scenarios, however, the external challenges of computation offloading, such as service heterogeneity, hardware availability, and the stochastic behavior of entities, are ignored or merely highlighted.


Edge-Cloud Integration Framework

In this section, the structure of the proposed framework is categorically divided into three tiers. The first tier handles IoT clustering, where the IoT devices are clustered. The second tier handles edge orchestration and resource management, which entertains the offloaded tasks. The third tier is a resource-rich cloud capable of executing heterogeneous tasks and services. The flow chart of the 3-tier edge-cloud integration framework is shown in Fig. 2.
Most published work has focused on a single point, i.e., resource scheduling over the edge server, to address the scalability problem, with the scheduling objective achieved via a computation offloading algorithm, an optimization technique, or a framework. We believe, however, that addressing scalability is a multi-faceted objective. For this reason, we address scalability at both the IoT and edge layers. We deploy SIoT clustering, which limits the number of offloading requests sent from layer 1 for remote execution; this approach shields the edge server from tailbacks. At the edge layer, we tackle the scalability problem by implementing a novel computation offloading algorithm, LACCoA, based on the CE technique, which ensures the parallel use of edge resources to scale the edge server. Furthermore, the framework is supported by a lightweight request and admission cycle that carries the latency and energy requirements of every task, providing enough information to initiate the offloading process. This cycle reduces the technical complexity of the computation offloading process and makes it more seamless. The LACCoA algorithm is installed in a client-server architecture, running independently on the offloading device and the edge server; this architecture reduces the signaling overhead and eases the synchronization between them. The framework produces notable results, as reported in Section 4.

Fig. 2. Flow chart of the 3-tier edge-cloud integration framework.


Smart systems based on the IoT are made up of a huge number of devices, and searching for the right device to provide a desired service is a difficult task. To deal with this service discovery problem, we deploy SIoT clustering at the IoT layer [52]. The SIoT establishes associations between IoT devices on the basis of services, location, and ownership, which improves service discovery in IoT-based systems because it delivers the right service to the right device. The edge server is aware of the SIoT paradigm and groups the IoT devices into virtual clusters according to their service, location, and ownership information. The devices inside a cluster sense and aggregate data and send it to the cluster head for computation offloading. Together, the SIoT and edge computing produce a low-latency architecture for resource-limited IoT devices.
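To make the clustering step concrete, the following Python sketch shows how an edge server might group devices into virtual SIoT clusters by service, ownership, and quantized location, and elect a cluster head; the device fields, grid size, and battery-based head election are illustrative assumptions rather than details from the paper.

from collections import defaultdict

def form_siot_clusters(devices, cell_size=100.0):
    # Group devices into virtual clusters keyed by (service, owner, grid cell).
    clusters = defaultdict(list)
    for d in devices:
        cell = (int(d["x"] // cell_size), int(d["y"] // cell_size))  # quantized location
        clusters[(d["service"], d["owner"], cell)].append(d)
    # The member with the most remaining battery acts as cluster head (CH);
    # it aggregates sensed data and issues the cluster's offloading requests.
    return {key: {"head": max(ms, key=lambda m: m["battery"]), "members": ms}
            for key, ms in clusters.items()}

devices = [
    {"id": 1, "service": "temp", "owner": "A", "x": 12, "y": 8, "battery": 0.9},
    {"id": 2, "service": "temp", "owner": "A", "x": 30, "y": 40, "battery": 0.4},
    {"id": 3, "service": "video", "owner": "B", "x": 15, "y": 9, "battery": 0.7},
]
print(form_siot_clusters(devices))  # the two temp sensors share a cluster; the camera is alone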
The proposed algorithm works independently on the IoT devices and the edge server, which eases synchronization between them. The algorithm, together with the request and admission control cycle, makes every device independent. The smart mobile phone (SMP) is responsible for task partitioning and offloading, the BS is responsible for providing the best channel, and the edge server ensures the effective utilization of edge resources. This distribution reduces the signaling overhead of the offloading process, resulting in low-latency communication. The hierarchical design of the proposed framework ensures efficient service discovery, workload aggregation, and high-performance computing as a service, which further scales the edge server.

Edge Orchestration and Resource Management
The edge layer consists of the BS, the edge server, and the interconnecting cloud. Geographically distributed edge servers are placed hierarchically in close proximity to the users, a single hop away from the BS. The edge server in close proximity brings cloud computing capabilities closer to the users. Its hierarchical structure enables the edge computing tier to ensure efficient utilization of resources, aggregate services, and protect the edge from bottlenecks in busy hours [53]. Alongside, the edge offers low latency and enhanced service visibility to interconnect geographically distributed, resource-limited, and heterogeneous IoT devices [36, 54, 55]. A traditional client-server networking architecture has been implemented, in which the SMP works as a client and the BS works as a server. This network-computation architecture enables the intercommunication model to reduce communication overhead [37]. A holistic view of the computation offloading process is depicted in Fig. 3.

Fig. 3. Holistic view of computation offloading process.


The framework ensures that the task is offloaded using the best available channel, determined by metrics such as bandwidth, channel loss factor, and overhead. The SMP formulates the offloading request and evaluates whether offloading or local execution is preferable. If the task's requirements exceed the SMP's capabilities, the LACCoA algorithm performs offloading. We consider processing time, delay, bandwidth, and power consumption when determining task requirements, whereas existing models consider only latency and energy consumption, which provide inadequate information for a technical process like computation offloading [56, 57]. The SMP groups the requests and performs offloading using Equations (1) and (2).

$ Ω_{d,j} = \begin{cases} 1, & if(M_j < M_d )||(w_j<E_{d,j})\cr 0, & otherwise \end{cases} $(1)

where $Ω_{d,j}$, $w_j$, $M_j$, $M_d$, and $E_{d,j}$ denote the computation offloading decision parameter, the time-delay parameter of task $j$, the memory required by task $j$, the storage capacity offered by device $d$, and the projected processing time of job $j$ on device $d$, respectively. When the edge $e$ receives the offloaded job $j$, the server executes it if free resources are available. However, if the server faces congestion, the offloaded task is executed on the cloud, as shown in Equation (2).

$ Ω_{e,j} = \begin{cases} 1,& if(w_j<E_{e,j}+ Q_{e,j})\cr 0,& otherwise \end{cases} $(2)

where $E_{e,j}$ and $Q_{e,j}$ are the estimated execution time and the queueing delay of job $j$ on the edge e, respectively.
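Read literally, the two decision rules translate into the following Python sketch; the function names are ours, and the inequalities follow Equations (1) and (2) exactly as stated.

def offload_decision_device(M_j, M_d, w_j, E_dj):
    # Equation (1): device-side decision parameter Ω_{d,j}.
    # M_j: memory required by task j; M_d: storage offered by device d;
    # w_j: time-delay parameter of task j; E_dj: projected local processing time.
    return 1 if (M_j < M_d) or (w_j < E_dj) else 0

def edge_admission(w_j, E_ej, Q_ej):
    # Equation (2): edge-side decision parameter Ω_{e,j}.
    # E_ej: estimated execution time on edge e; Q_ej: queueing delay of job j.
    # When the rule yields 0 (congestion), the task is forwarded to the cloud.
    return 1 if w_j < (E_ej + Q_ej) else 0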

Request and Admission Cycle for Computation Offloading
Computation offloading in edge computing is a more challenging task than in cloud computing: despite the development of edge devices and services, an edge server is only about as resourceful as a desktop machine. The IoT nodes themselves are resource-poor; they are incapable of handling complex tasks or of working consistently for long periods, which seriously affects system performance. Reducing energy consumption, cutting down operational cost, and minimizing the active time of IoT devices would be game changers in the IoT paradigm, so an energy-efficient computation offloading scheme is of vital importance. In this regard, many computation offloading schemes have been designed. Partial offloading and selective offloading schemes have been presented to reduce energy [58–60]; however, these schemes require coordination between IoT devices, which consumes more energy. A dynamic programming technique and an adaptive receding horizon approach among devices have been used to reduce latency and cost [60, 61]; these make the computation offloading decision under non-stationary conditions.
The request and admission computation offloading cycle is shown in Fig. 4. It comprises several modules in both the SMP and the edge server. The SMP modules are the profiler module, the services module, the offloading module, and the synchronization (sync) module. The profiler module administers the program requirements, such as execution time, memory required, number of instructions, and CPU cycles. The services module estimates the latency and energy consumption demands; a set of delay-sensitive and delay-tolerant services is considered in this study. The offloading module makes the offloading decision and selects the appropriate server for the task. The sync module synchronizes the communication between the SMP and the edge server (ES). On the other hand, the ES contains the sync module, the controller, the resource allocator, and the activator module. The sync module receives the offloading request. The controller module controls the offloading tasks by accepting only a limited number of tasks based on the available resources. The allocator module allocates a resource to each task. The activator module activates the VM to prepare for offloading from the selected device.

Fig. 4. Request and admission cycle for the computation offloading process.


The proposed request and admission cycle for computation offloading makes the SMP, ES, and BS independent. The LACCoA algorithm runs on the SMP and the ES separately: the SMP performs task partitioning and the offloading decision, the ES schedules resources independently, and the BS selects the best channel. The communication interface is used only for sending offloading requests and receiving computation results, which reduces the signaling overhead. The ES is further connected to the cloud, to which it unloads DT tasks for processing in busy hours. This tactic scales the edge architecture well to facilitate DS tasks. A sketch of the request envelope and the ES-side admission control is given below.
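The following is a minimal Python sketch of the request envelope and the ES-side controller described above; the field names, the capacity model, and the DS-first ordering are our illustrative assumptions, not definitions from the paper.

from dataclasses import dataclass

@dataclass
class OffloadRequest:
    # Lightweight request produced by the SMP's profiler and services modules.
    task_id: int
    input_size_kb: float       # program code plus input parameters
    cpu_cycles: float          # profiler estimate
    latency_budget_ms: float   # e.g., DS = 100 ms, DT = 150 ms (Table 1)
    energy_budget_j: float     # service-module energy estimate
    delay_sensitive: bool

class EdgeController:
    # ES-side controller: admits only as many tasks as resources allow,
    # serving delay-sensitive requests first; the overflow (typically DT
    # tasks in busy hours) is unloaded to the cloud.
    def __init__(self, capacity: int):
        self.capacity = capacity  # concurrent tasks the ES can host

    def admit(self, requests):
        ds_first = sorted(requests, key=lambda r: not r.delay_sensitive)
        return ds_first[:self.capacity], ds_first[self.capacity:]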

Latency Critical Computation Offloading Algorithm
The proposed LACCoA completes the offloading process described in Subsection 3.2. The algorithm makes the offloading decision while satisfying the requirements of the task (i.e., minimizing the energy consumption of the servers and achieving low latency). In the proposed LACCoA scheme, we consider the latency as the sum of the transmission latency $L_{tran}$, the computation latency $L_{comp}$, and the downloading latency $L_{down}$, as shown in Equation (3) [62–65].

$ L_{total} = L_{tran} + L_{comp} + L_{down} $(3)

where $L_{tran}$, $L_{comp}$, and $L_{down}$ are the time taken to prepare and transmit the task for offloading, the time to execute the task on the remote server, and the time to send the execution result back to the cluster head (CH), respectively. LACCoA yields improved and realistic results. We retain $L_{down}$ even though it has only a marginal effect on the total latency, because the output size is much smaller than the input; for this reason, many studies have ignored it [57, 64, 65]. The transmission latency is shown in Equation (4).

$L_{tran} = \frac{D_n}{W\log_2 \big(1+\frac{P_u L_{os}}{N}\big)}$(4)

In the above equation, $D_n$ is the size of the task selected for offloading, $W$ is the bandwidth of the communication channel, $P_u$ is the maximum transmission power, which is configured by the offloading device [25], $L_{os}$ is the channel gain, and $N$ is the Gaussian noise power. As per Equation (4), the CH can adjust its data rate from 0 to $W\log_2(1+\frac{P_u L_{os}}{N})$ by controlling its transmission power. The computation latency is the task execution time, as shown in Equations (5) and (6).

$L_{comp} = \frac{C_n}{C_e}$(5)

$L_{down} = t_{down}$(6)

where $C_e$ is the execution capacity of the server for the task $C_n$, and $t_{down}$ is the time taken by the offloading device to receive the execution results, called the download latency $L_{down}$. The total latency incurred by a computation offloading process is obtained by combining Equations (4)-(6), as shown in Equation (7). The output produced after task execution on the remote server is much smaller than the offloaded task, which is why $L_{down}$ has only a marginal effect on the total latency of the computation offloading process [58].

$L_{total}=\frac{D_n}{W\log_2 \big(1+\frac{P_u L_{os}}{N}\big)}+\frac{C_n}{C_e} + t_{down}$(7)

The total latency incurred by the offloaded task is expressed in Equation (7). However, a task can either be executed locally or be offloaded to the edge server for execution. The decision between local and remote execution is represented in Equation (8).

$D_{offload} = \begin{cases} 1, & \text{offload to the edge server}\cr 0, & \text{execute locally} \end{cases}$(8)

where $D_{offload}$ denotes the offloading decision. If $D_{offload}$ is 0, the task is executed locally; no transmission or download takes place, and the total latency reduces to the local computation latency $L_{comp}$. If $D_{offload}$ is 1, the task is offloaded for computation, and the total latency is the sum of $L_{tran}$, $L_{comp}$, and $L_{down}$. Energy consumption is the second objective considered in the LACCoA algorithm; it can be calculated using Equation (9) [64].

$E_{total} = E_{tran} + E_{comp}$(9)

$E_{total}$ is the total energy consumed by a task, which combines the energy consumed in transmitting the task, $E_{tran}$, and the energy consumed during its execution, $E_{comp}$. The energy consumed in task offloading, $E_{tran}$, and in executing the task on the remote server, $E_{comp}$, are shown in Equations (10) and (11), respectively.

$E_{tran}=\frac{D_t*P_{up}}{R_{(t,s)}}$(10)

$E_{comp}=\frac{C_t*S_{comp}}{S_{cpu}}$(11)

where $D_t$ is the data size of the task, $P_{up}$ is the energy consumption to upload a task, $R_{(t,s)}$ is the server data rate for task $t$, $C_t$ is the number of CPU cycles required by the task, $S_{comp}$ is the energy consumed per second by the server $S$, and $S_{cpu}$ is the server CPU speed. The energy consumption during task execution is expressed in Equation (11). The total energy consumed is obtained by combining both contributions according to the offloading decision, as shown in Equation (12).

$E_{total}=\begin{cases} E_{tran} + E_{comp}, & D_{offload}=1 \cr E_{comp}, & D_{offload}=0 \end{cases}$(12)
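The latency and energy models of Equations (4)-(12) can be collected into one small Python sketch; the parameter names mirror the symbols above, and treating local execution as pure computation latency and energy follows the discussion of Equations (8) and (12).

import math

def total_latency(D_n, W, P_u, L_os, N, C_n, C_e, t_down, D_offload=1):
    # D_n: task size; W: channel bandwidth; P_u: transmit power; L_os: channel
    # gain; N: Gaussian noise power; C_n: CPU cycles; C_e: executing capacity.
    L_comp = C_n / C_e                           # Eq. (5)
    if not D_offload:                            # local: C_e is the device's own capacity
        return L_comp
    rate = W * math.log2(1 + (P_u * L_os) / N)   # achievable data rate, Eq. (4)
    return D_n / rate + L_comp + t_down          # Eq. (7)

def total_energy(D_t, P_up, R_ts, C_t, S_comp, S_cpu, D_offload=1):
    # D_t: data size; P_up: upload power; R_ts: server rate for task t;
    # C_t: CPU cycles; S_comp: energy per second of server S; S_cpu: CPU speed.
    E_comp = (C_t * S_comp) / S_cpu              # Eq. (11)
    return (D_t * P_up) / R_ts + E_comp if D_offload else E_comp  # Eqs. (10), (12)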

The goal is to minimize both latency and energy consumption simultaneously. It is therefore a multi-objective optimization problem, which can be defined as follows:

$χ(Ω)=δ_a L_{total} + δ_b E_{total}$(13)

where $δ_a$ and $δ_b$ are scalar weights used to adjust the trade-off between latency and energy consumption, with $δ_a, δ_b ∈ [0,1]$. Here, the weighted-sum approach is used, which combines the total energy and latency under varying values of $δ_a$ and $δ_b$. This composite objective function must be optimized; it is formulated as follows.

$\displaystyle\min_{Ω} χ(Ω) \quad \text{s.t.} \quad Ω_n ∈ \{0,1\}, ∀ n ∈ N$(14)
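Building on the latency and energy sketches after Equation (12), the scalarized objective of Equations (13) and (14) reduces to a small function; the default weights and the `task` dictionary are illustrative.

def chi_objective(omega, task, delta_a=0.5, delta_b=0.5):
    # Equation (13): χ(Ω) = δ_a·L_total + δ_b·E_total for one decision bit Ω ∈ {0,1}.
    # `task` carries the parameters of total_latency and total_energy above.
    L = total_latency(task["D_n"], task["W"], task["P_u"], task["L_os"], task["N"],
                      task["C_n"], task["C_e"], task["t_down"], D_offload=omega)
    E = total_energy(task["D_t"], task["P_up"], task["R_ts"], task["C_t"],
                     task["S_comp"], task["S_cpu"], D_offload=omega)
    return delta_a * L + delta_b * E

In practice the two terms should be normalized to comparable scales before weighting; Equation (14) then asks for the decision vector Ω minimizing the summed χ over all $n ∈ N$ tasks.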

We use CE, a probabilistic optimization and learning technique. CE builds on a distance measure between two probability distributions $j_x$ and $k_x$, as shown in the following equations.

$D(j||k)=H(j)-H(j,k)$(15)

where,

$H(j)= \sum j_x ln ⁡j_x$(16)

$H(j,k)= \sum j_x ln k_x$(17)

where $j_x$ is the distribution used to search for the optimal solution, and $k_x$ is the empirical distribution that characterizes the distribution of optimal solutions.
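In code, the distance of Equations (15)-(17) is the familiar K-L divergence. The following is a minimal Python sketch over discrete distributions; the epsilon guard against taking the log of zero is our addition.

import math

def kl_divergence(j, k, eps=1e-12):
    # Equations (15)-(17): D(j||k) = H(j) - H(j,k), with
    # H(j) = sum_x j_x ln j_x and H(j,k) = sum_x j_x ln k_x,
    # i.e., D(j||k) = sum_x j_x ln(j_x / k_x).
    return sum(jx * math.log((jx + eps) / (kx + eps)) for jx, kx in zip(j, k))

print(kl_divergence([0.5, 0.5], [0.5, 0.5]))  # ~0.0: identical distributions
print(kl_divergence([0.9, 0.1], [0.5, 0.5]))  # ~0.368: the distributions diverge

The CE method was originally proposed for estimating rare-event probabilities; the binary offloading decision can therefore be modeled by the Bernoulli distribution given in the following equation.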

$p(x,v)= \displaystyle∏_{m=1}^M v_m^{x_m} (1 - v_m)^{1-x_m}$(18)

where $v$ is the vector of Bernoulli parameters, each with its own mean and variance. The minimum of $χ(Ω)$ is denoted by $Υ^*$:

$Υ^*= \displaystyle\min_Ω χ(Ω)$(19)

The indicator function $I_{χ(Ω)>Υ}$ defines various threshold levels $Υ ∈ \mathbb{R}$. A family of probability density functions (pdfs) $\{f(\cdot;Ρ)\}$ is used with the indicator function to randomize the problem. These pdfs are Gaussian distributions linked with the associated stochastic problem (ASP), as shown below.

$l_Υ= Ρ_q (χ(Ω)>Υ) = E_q (I_{χ(Ω)>Υ})$(20)

where $Ρ_q$ is a probability measure and $E_q$ is an expectation. In addition, $l_Υ$ can be estimated using the likelihood-ratio (LR) estimator defined as follows.

$v^*= \underset{v}{\arg\max}\, E_q \big(I_{χ(Ω)>Υ} \ln f(Ω;v)\big)$(21)

Furthermore, it can be estimated by

$\hat{v}^*= \underset{v}{\arg\max}\, \frac{1}{L}\sum_{t=1}^L I_{χ(Ω_t)>Υ} \ln f(Ω_t;v)$(22)

where the samples $Ω_t$ are generated from the pdf $f(\cdot;Ρ)$. The parameters $\hat v_t$ and $\hat v_{t+1}$ are not set directly to the $\hat v^*$ of Equation (22); instead, they are updated using the smoothed updating function in Equation (23).

$\hat v_{t+1}=τ v^*+(1- τ) \hat v_t$(23)

Here, the parameter $τ$ is a learning rate, a small number between 0 and 1. The algorithm based on the CE method is shown in Algorithm 1. In general, this algorithm converges to an optimal solution [26].
Algorithm 1. Cross entropy based LACCoA algorithm
1: Initialization-Step:
2:   set t=0
3:   set ℵ, T         // ℵ is the number of samples and T is the number of iterations
4:   End Initialization-Step
5:   While (t < T):
6:     Generate {$Ω_h^1, Ω_h^2,…,Ω_h^ℵ$} using {$f(\cdot;Ρ), Ρ ∈ ζ$}; // Draw the population from {$f(\cdot;Ρ)$}
7:     Compute {$χ(Ω_h^1), χ(Ω_h^2),…,χ(Ω_h^ℵ)$}; // Evaluate the objective for each feasible sample generated in the previous step
8:     Sort {$χ(Ω_h^1), χ(Ω_h^2),…,χ(Ω_h^ℵ)$}; // Sort the samples
9:     Select the minimum $χ(Ω_h^s)$ as the best solution; // Take the minimum-objective samples as the elite set
10:     update $\hat v_{t+1}$ using Equation (23)
11:     t=t+1
12: End While
13: Output $Ω_h$
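As a concrete rendering of Algorithm 1, the following Python sketch draws candidate offloading vectors from the Bernoulli model of Equation (18), keeps the minimum-χ elite samples, and applies the smoothed update of Equation (23). The sample count, elite fraction, and learning rate are illustrative hyperparameters, not values from the paper, and the toy objective stands in for the χ(Ω) of Equation (13).

import numpy as np

def ce_laccoa(chi, n_tasks, n_samples=200, n_iters=50, elite_frac=0.1, tau=0.7):
    # Cross-entropy search over binary offloading vectors Ω ∈ {0,1}^n_tasks.
    rng = np.random.default_rng(0)
    v = np.full(n_tasks, 0.5)                    # Bernoulli parameters, Eq. (18)
    n_elite = max(1, int(elite_frac * n_samples))
    best, best_cost = None, float("inf")
    for _ in range(n_iters):                     # lines 5-12 of Algorithm 1
        samples = (rng.random((n_samples, n_tasks)) < v).astype(int)  # line 6
        costs = np.array([chi(s) for s in samples])                   # line 7
        elites = samples[np.argsort(costs)[:n_elite]]                 # lines 8-9
        if costs.min() < best_cost:
            best_cost, best = costs.min(), samples[costs.argmin()].copy()
        v = tau * elites.mean(axis=0) + (1 - tau) * v                 # Eq. (23), line 10
    return best, best_cost

# Toy objective: offloading task i costs w_i; local execution costs 0.6.
weights = np.linspace(0.2, 1.0, 8)
toy_chi = lambda om: om @ weights + (1 - om) @ np.full(8, 0.6)
decision, cost = ce_laccoa(toy_chi, n_tasks=8)
print(decision, cost)  # tasks with w_i < 0.6 end up offloaded

Because each sample's objective can be evaluated independently, the population loop parallelizes naturally across edge resources, which is the property the surrounding text attributes to the CE learning step.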
The edge-cloud integration framework is deliberately designed hierarchically over the cloud, edge, and IoT layers. The hierarchical distribution ensures the efficient utilization of resources and distinguishes the responsibility of each layer. The scalability issue is addressed at both the IoT layer and the edge layer: at the IoT layer, DS tasks are offloaded on a priority basis, and delay-tolerant tasks are aggregated and offloaded on an FCFS basis. This tactic avoids congestion and scales the edge for maximum performance.
Task offloading is very complicated: it requires knowledge of the hardware, the scenario, task partitioning, synchronization, and so on. A suboptimal offloading technique can increase latency and energy consumption to the point of making offloading unviable, so an efficient computation offloading algorithm is essential to augment edge computing applications. We therefore propose the CE-based computation offloading algorithm LACCoA, which handles offloading requests under strict latency and energy constraints. LACCoA uses the CE technique in conjunction with iterative learning: it initially produces multiple samples and then learns the probability distribution of the best samples. This CE learning effectively utilizes the edge layer resources in parallel, which reduces the computational complexity of the offloaded task and incurs low end-to-end network latency, as the IoT layer and edge layer independently make the offloading decision and execute the task. The algorithm minimizes the synchronization and signaling overhead because it is implemented in a client-server manner.
The request and admission cycle further strengthens the algorithm's performance by encapsulating the energy and latency requirements of the offloading task in the offloading decision, and it ensures a seamless computation offloading process. The algorithm outperforms traditional convex optimization techniques by producing low-complexity solutions. However, it performs poorly as the number of offloaded jobs grows, violating the energy and latency constraints. This is an inherent problem of edge-based computing models, as both objectives cannot be minimized simultaneously.


Results and Discussion

This section illustrates the numerical results obtained from simulations in MATLAB. We implemented the request and admission control cycle in a multi-user, multi-tasking edge computing scenario, where the edge servers are placed in close proximity to the users and a BS is co-located with the edge servers in a cell of radius 250 m, as shown in Fig. 1. The edge servers are connected to the cloud for load balancing and for handling scalability issues. Our experimental environment had a server clock frequency of 10 GHz with 16 GB RAM running Windows 10. We implemented a face recognition application in which the input size of the offloading task is 420 kB, including the program code and input parameters [58]. The simulation setup allowed us to conduct the experiment in a controlled environment using the preferred set of parameters. The proposed LACCoA algorithm, based on the cross-entropy technique, is evaluated over the edge server. Through the experiments in this section, we examine several issues, such as the dynamics of the computation offloading framework and the clustering techniques, and compare their overall performance in minimizing energy consumption, reducing latency, and scaling the edge server to handle the maximum number of simultaneous tasks.

The proposed framework is compared with particle swarm optimization (PSO), an evolutionary search algorithm designed for continuous problems rather than for computation offloading. PSO works well with low fitness-function values; however, as the number of offloading tasks increases, the fitness-function value increases, which means the network latency of the tasks increases [37]. The 3-tier framework comprises IoT devices and edge servers positioned near the users, with the resource-rich cloud, capable of storing and computing heterogeneous tasks, connected to the edge servers for improved performance. The hierarchical arrangement of the framework leads to the efficient utilization of resources and distinguishes the responsibilities of each tier. Table 1 presents the list of parameters and their corresponding values used in the simulations.
The results in Fig. 5 reflect a significant improvement in reducing latency as the number of tasks increases. Our proposed framework ensures the efficient utilization of resources by effectively leveraging the computational resources of both the edge server and the IoT devices. This meets the strict QoS requirements of DS tasks and the loose QoS requirements of DT tasks. In addition, the proposed framework shows a noticeable reduction in latency compared to the standard PSO and adaptive PSO based frameworks. The client-server architecture between the edge server and the IoT devices simplifies the synchronization process and reduces the overhead of the computation offloading technique at the IoT tier, which further decreases latency. The LACCoA algorithm performs better for both DS and DT tasks and guarantees task completion under the latency constraints. LACCoA achieved an average latency of 0.84 seconds for DS tasks and 1.23 seconds for DT tasks, a delay reduction of up to 7% under normal workload. In peak hours, the average latency was 1.12 seconds and 1.63 seconds for DS and DT tasks, respectively. When the number of tasks reached 50, the QoS requirements of both task types were violated, owing to the inherent scalability problem in edge computing.

Table 1. Simulation parameters

No. Parameter Value
1 Mobile device CPU frequency 0.5–2 GHz
2 Application Face recognition
3 Input task size 420 kB
4 Number of IoT devices 250–2,000
5 Communication parameters 3GPP specifications
6 Edge server CPU frequency 10 GHz
7 Cell radius 250 m
8 Number of tasks 10–50
9 Latency requirement of task DS = 100 ms, DT = 150 ms
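For reproducibility, Table 1 can be mirrored directly in a simulation script; the following dictionary is an illustrative restatement of those values (the structure and key names are ours, not from the paper).

SIM_PARAMS = {
    "mobile_device_cpu_ghz": (0.5, 2.0),   # range of device clock speeds
    "application": "face recognition",
    "input_task_size_kb": 420,             # program code + input parameters
    "num_iot_devices": (250, 2000),
    "communication": "3GPP specifications",
    "edge_server_cpu_ghz": 10,
    "cell_radius_m": 250,
    "num_tasks": (10, 50),
    "latency_req_ms": {"DS": 100, "DT": 150},
}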


Fig. 5. Performance comparison of latency with different number of tasks.


As can be seen from Fig. 6, the proposed framework saves more energy than its counterparts, standard PSO and adaptive PSO [37]. This is because the LACCoA algorithm makes efficient use of both edge and cloud resources. The edge architecture in conjunction with the cloud not only reduces energy consumption but also scales the edge server to handle more tasks without violating the latency requirements of the tasks. We observed that the average energy consumed for a DT task is 0.73 J, saving 25% energy compared to standard PSO. We also observed that more energy is saved for resource-constrained IoT devices when DT tasks are offloaded to the edge server for execution. However, the energy consumption for DS tasks increased by 10% when the number of tasks reached 45.

Fig. 6. Energy consumed when number of tasks varies.


This increase is due to resource scarcity in the IoT devices. When the workload of the edge server increases, the DT tasks are offloaded to the cloud to ensure the timely execution of DS tasks, which also prolongs the operation of the IoT devices. The proposed solution thus yields energy efficiency to some extent in the processing of DS and DT tasks. Nevertheless, the energy intake surges as the number of tasks exceeds 50, since energy and latency cannot be minimized concurrently.

Fig. 7. Number of tasks execution using different computing resources.


Fig. 7 depicts the number of tasks executed using different computing resources. With 30 tasks, standard PSO and adaptive PSO execute more tasks on the SMP than LACCoA does, thereby incurring more latency and consuming more energy. The selective offloading strategy of LACCoA executed the smallest number of tasks on the SMP and made good use of the resourceful edge and cloud, which saves energy, reduces latency, and protects the edge server from bottlenecks. Our proposed 3-tier framework, together with the LACCoA algorithm, scales the edge server to entertain the maximum number of simultaneous tasks. However, the scalability of the edge computing system also depends on the resource capacity of the edge server [64–67].
As can be seen in Fig. 8, the LACCoA algorithm performed well for different task sizes using a single edge server in the user's proximity, saving 34.5% energy compared to the MTMS method (27.74%) [68]. The SIoT-based clustering technique possesses prior knowledge of the locations and services of the IoT devices, which reduces their communication overhead and further minimizes energy consumption.

Fig. 8. Total energy consumption over different task size.


Fig. 9. Energy and time saved over different battery levels of mobile devices.


The results in Fig. 9 describe the amount of energy and time saved under different battery levels of SMP devices using the proposed framework. A notable improvement is witnessed using the SIoT clustering technique, which combines the services, the devices, and the interaction between IoT devices and the edge server [52]. This association minimizes the intercommunication overhead [69].
The above results show performance improvements in minimizing energy consumption, lowering latency, and scaling the edge server to handle the maximum number of offloading requests. However, the proposed framework also reveals an abnormal case: the LACCoA algorithm sometimes selects a random channel on which local execution finishes earlier than offloading would. The LACCoA algorithm also performs at a suboptimal level when the number of devices changes rapidly, which increases the time taken for the computation offloading decision compared to normal conditions; these remain challenges for the proposed study.
Fig. 10 shows the comparison of average response time for DS tasks as the number of tasks offloaded to the edge server increases. LACCoA remains consistent compared to the PSO and adaptive PSO algorithms. It makes the offloading decision at the IoT layer independently of the edge layer by evaluating the latency and energy consumption of the offloading task to determine whether local execution or offloading is beneficial. This prioritizes the offloading of DS tasks to the edge server for execution and scales the edge server further by executing an increased number of tasks well within the latency and energy constraints.

Fig. 10. Comparison of average response time of DS task.


Conclusion

In this article, we designed a 3-tier edge-cloud integration framework consisting of the cloud, the edge server, and the IoT. A distributed approach is used to handle scalability at both the IoT tier and the edge tier. The proposed SIoT clustering technique at the IoT tier limits the number of requests sent to the edge server for execution and protects the edge server from congestion. A lightweight request and admission cycle for a seamless computation offloading process is designed that encapsulates the latency and energy requirements in a computation offloading request. A multi-tasking model using DS and DT tasks is employed to make the computation offloading requests. A cross-entropy based latency-critical computation offloading algorithm, LACCoA, is designed to ensure the successful completion of tasks under stringent latency and energy consumption requirements. The numerical results exhibit that the proposed LACCoA algorithm minimizes the energy consumption of IoT devices under latency requirements. The client-server model in conjunction with the request and admission cycle reduces communication overhead, eases the synchronization process, and further extends the battery life of IoT devices. The proposed framework performed well in heavy workload environments and can handle the maximum number of simultaneous tasks to scale the edge server. We observed that scalability depends largely on the physical capacity of the edge server and on energy and latency, which cannot be reduced at the same time.


Acknowledgements

This research was a part of the project titled "Smart Port IoT Convergence and Operation Technology Development" (No. 20190399), funded by the Ministry of Oceans and Fisheries, Korea. This research work was also supported by Research Incentive Grant R20129 of Zayed University, UAE.


Author’s Contributions

Conceptualization, MB. Funding acquisition, BS, DS. Investigation and methodology, MB, MSK, FA. Project administration, DS. Resources, MSK. Supervision, MSK, DS. Writing of the original draft, MB, FA. Writing of the review and editing, UH, FA. Software, MB. Validation, MSK. Formal analysis, BS. Visualization, MB.


Funding

Not applicable.


Competing Interests

The authors declare that they have no competing interests.


1st (Principal Author) and Corresponding Author

Author
Mr. Mohammad Babar received an honors degree in Computer Science in 2004, followed by a Master's degree in Data Telecommunication and Networks from the University of Salford, UK, in 2010, and is now pursuing his PhD degree at the University of Engineering and Technology Mardan. He served as a System Engineer in the National Database & Registration Authority (NADRA), Pakistan, and afterwards joined COMSATS University Islamabad as a lecturer in Computer Science. He has also served IBM (ERRA), UNHCR, and many other national and international organizations. His areas of interest are cloud computing, edge computing, fog computing, and the IoT.
E-mail: mbabarcs@gmail.com
Contact: 923133000099.

Author
Mr. Muhammad Sohail Khan is an Assistant Professor at the Department of Computer Software Engineering, University of Engineering & Technology Mardan, Pakistan. He received his M.S. degree from the Computer Software Engineering Department, University of Engineering and Technology Peshawar, in 2012 and completed his Ph.D. studies at Jeju National University, South Korea, in 2016. He has been part of the software development industry in Pakistan as a designer and developer. The major focus of his work is the investigation and application of alternate programming strategies to enable the involvement of the masses in Internet-of-Things application design and development. End-user programming, human-computer interaction, and empirical software engineering are also included in his research interests.
E-mail: Sohailkhan@uetmardan.edu.pk
Contact:923469358864

Author
Farman Ali is an Assistant Professor in the Department of Software at Sejong University, South Korea. He received his B.S. degree in computer science from the University of Peshawar, Pakistan, in 2011, his M.S. degree in computer science from Gyeongsang National University, South Korea, in 2015, and his Ph.D. degree in information and communication engineering from Inha University, South Korea, in 2018, where he worked as a Post-Doctoral Fellow at the UWB Wireless Communications Research Center from September 2018 to August 2019. His current research interests include sentiment analysis/opinion mining, information extraction, information retrieval, feature fusion, artificial intelligence in text mining, ontology-based recommendation systems, healthcare monitoring systems, deep learning-based data mining, fuzzy ontology, fuzzy logic, and type-2 fuzzy logic. He has registered over 4 patents and published more than 50 research articles in peer-reviewed international journals and conferences. He was awarded the Outstanding Research Award (Excellence of Journal Publications, 2017) and the President's Choice Best Researcher Award during his graduate program at Inha University.

Author
Usman Habib is currently serving as an Associate Professor and Head of the Computer Science Department, FAST National University of Computer & Emerging Sciences (NUCES), Peshawar campus. Before joining FAST-NUCES, he served as an Assistant Professor at the COMSATS Institute of Information Technology, Abbottabad, Pakistan. Along with teaching and research, he has worked on and successfully completed different industrial projects, such as the project "eXtract" funded by the Austrian Funding Agency under the funding programme e!MISSION. Dr. Usman has been actively involved in research and has authored several conference and journal publications. He was also a member of the organizing committee of an international conference, Frontiers of Information Technology (FIT), for several years (FIT 2006, 2007, 2010, 2011, and 2012).

Author
Dong Ho Song is a Professor at the Department of Computer Engineering, Korea Aerospace University, Seoul, Korea. His research interests include cloud computing and distributed operating systems. He holds a Ph.D. in Computer Science from the University of Newcastle in England. He founded SoftonNet Inc. in 1999 and has been working on virtualization technologies. For over 10 years before founding the company, he worked on the development and application of cutting-edge computing and information technologies at the Stanford Research Institute (SRI) in the USA as well as at ETRI (Electronics and Telecommunications Research Institute) in Korea.


References

[1] T. H. Kim, C. Ramos, and S. Mohammed, “Smart city and IoT,” Future Generation Computer Systems, vol. 76, pp. 159-162, 2017.
[2] S. P. Ahuja and N. Deval, “From cloud computing to fog computing: platforms for the Internet of Things (IoT),” International Journal of Fog Computing, vol. 1, no. 1, pp. 1-14, 2018.
[3] K. Zhang, S. Leng, Y. He, S. Maharjan, and Y. Zhang, “Mobile edge computing and networking for green and low-latency Internet of Things,” IEEE Communications Magazine, vol. 56, no. 5, pp. 39-45, 2018.
[4] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge computing: vision and challenges,” IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637-646, 2016.
[5] N. Abbas, Y. Zhang, A. Taherkordi, and T. Skeie, “Mobile edge computing: a survey,” IEEE Internet of Things Journal, vol. 5, no. 1, pp. 450-465, 2017.
[6] M. Babar, M. S. Khan, F. Ali, M. Imran, and M. Shoaib, “Cloudlet computing: recent advances, taxonomy, and challenges,” IEEE Access, vol. 9, pp. 29609-29622, 2021.
[7] K. Aruna and G. Pradeep, “Performance and scalability improvement using IoT-based edge computing container technologies,” SN Computer Science, vol. 1, no. 2, pp. 1-7, 2020.
[8] J. Ren, G. Yu, Y. He, and G. Y. Li, “Collaborative cloud and edge computing for latency minimization,” IEEE Transactions on Vehicular Technology, vol. 68, no. 5, pp. 5031-5044, 2019.
[9] R. Ali, B. Kim, S. W. Kim, H. S. Kim, and F. Ishmanov, “(ReLBT): a reinforcement learning-enabled listen before talk mechanism for LTE-LAA and Wi-Fi coexistence in IoT,” Computer Communications, vol. 150, pp. 498-505, 2020.
[10] A. M. Maia, Y. Ghamri-Doudane, D. Vieira, and M. F. de Castro, “Optimized placement of scalable IoT services in edge computing,” in Proceedings of 2019 IFIP/IEEE Symposium on Integrated Network and Service Management (IM), Arlington, VA, 2019, pp. 189-197.
[11] X. Xu, Q. Liu, Y. Luo, K. Peng, X. Zhang, S. Meng, and L. Qi, “A computation offloading method over big data for IoT-enabled cloud-edge computing,” Future Generation Computer Systems, vol. 95, pp. 522-533, 2019.
[12] J. Zhao, Q. Li, Y. Gong, and K. Zhang, “Computation offloading and resource allocation for cloud assisted mobile edge computing in vehicular networks,” IEEE Transactions on Vehicular Technology, vol. 68, no. 8, pp. 7944-7956, 2019.
[13] H. Sun, F. Zhou, and R. Q. Hu, “Joint offloading and computation energy efficiency maximization in a mobile edge computing system,” IEEE Transactions on Vehicular Technology, vol. 68, no. 3, pp. 3052-3056, 2019.
[14] R. A. Dziyauddin, D. Niyato, N. C. Luong, M. A. M. Izhar, M. Hadhari, and S. Daud, “Computation offloading and content caching delivery in vehicular edge computing: a survey,” 2019 [Online]. Available: https://arxiv.org/abs/1912.07803.
[15] W. Zhou, W. Fang, Y. Li, B. Yuan, Y. Li, and T. Wang, “Markov approximation for task offloading and computation scaling in mobile edge computing,” Mobile Information Systems, vol. 2019, article no. 8172698, 2019. https://doi.org/10.1155/2019/8172698
[16] J. Hu, M. Jiang, Q. Zhang, Q. Li, and J. Qin, “Joint optimization of UAV position, time slot allocation, and computation task partition in multiuser aerial mobile-edge computing systems,” IEEE Transactions on Vehicular Technology, vol. 68, no. 7, pp. 7231-7235, 2019.
[17] T. G. Rodrigues, K. Suto, H. Nishiyama, N. Kato, and K. Temma, “Cloudlets activation scheme for scalable mobile edge computing with transmission power control and virtual machine migration,” IEEE Transactions on Computers, vol. 67, no. 9, pp. 1287-1300, 2018.
[18] M. G. R. Alam, M. M. Hassan, M. Z. Uddin, A. Almogren, and G. Fortino, “Autonomic computation offloading in mobile edge for IoT applications,” Future Generation Computer Systems, vol. 90, pp. 149-157, 2019.
[19] V. Cardellini, V. D. N. Persone, V. Di Valerio, F. Facchinei, V. Grassi, F. L. Presti, and V. Piccialli, “A game-theoretic approach to computation offloading in mobile cloud computing,” Mathematical Programming, vol. 157, no. 2, pp. 421-449, 2016.
[20] D. Uma, S. Udhayakumar, L. Tamilselvan, and J. Silviya, “Client aware scalable cloudlet to augment edge computing with mobile cloud migration service,” International Journal of Interactive Mobile Technologies, vol. 14, no. 12, pp. 165-178, 2020.
[21] I. A. Elgendy, W. Z. Zhang, Y. Zeng, H. He, Y. C. Tian, and Y. Yang, “Efficient and secure multi-user multi-task computation offloading for mobile-edge computing in mobile IoT networks,” IEEE Transactions on Network and Service Management, vol. 17, no. 4, pp. 2410-2422, 2020.
[22] R. Ali, Y. B. Zikria, B. S. Kim, and S. W. Kim, “Deep reinforcement learning paradigm for dense wireless networks in smart cities,” in Smart Cities Performability, Cognition, & Security. Cham, Switzerland: Springer, 2020, pp. 43-70.
[23] A. A. Abdellatif, A. Mohamed, C. F. Chiasserini, M. Tlili, and A. Erbad, “Edge computing for smart health: context-aware approaches, opportunities, and challenges,” IEEE Network, vol. 33, no. 3, pp. 196-203, 2019.
[24] K. Sonbol, O. Ozkasap, I. Al-Oqily, and M. Aloqaily, “EdgeKV: decentralized, scalable, and consistent storage for the edge,” Journal of Parallel and Distributed Computing, vol. 144, pp. 28-40, 2020.
[25] Z. Wang, G. Xue, S. Qian, and M. Li, “CampEdge: distributed computation offloading strategy under large-scale AP-based edge computing system for IoT applications,” IEEE Internet of Things Journal, vol. 8, no. 8, pp. 6733-6745, 2020.
[26] X. Chen, “Decentralized computation offloading game for mobile cloud computing,” IEEE Transactions on Parallel and Distributed Systems, vol. 26, no. 4, pp. 974-983, 2014.
[27] E. Katsaragakis, “Distributed market-based resource management of edge computing systems,” Master’s thesis, National Technical University of Athens, Greece, 2018.
[28] J. Liu, J. Wan, B. Zeng, Q. Wang, H. Song, and M. Qiu, “A scalable and quick-response software defined vehicular network assisted by mobile edge computing,” IEEE Communications Magazine, vol. 55, no. 7, pp. 94-100, 2017.
[29] T. Chen, S. Barbarossa, X. Wang, G. B. Giannakis, and Z. L. Zhang, “Learning and management for internet of things: accounting for adaptivity and scalability,” Proceedings of the IEEE, vol. 107, no. 4, pp. 778-796, 2019.
[30] M. Babar, M. S. Khan, A. Din, F. Ali, U. Habib, and K. S. Kwak, “Intelligent computation offloading for IoT applications in scalable edge computing using artificial bee colony optimization,” Complexity, vol. 2021, article no. 5563531, 2021. https://doi.org/10.1155/2021/5563531
[31] M. T. Kabir and C. Masouros, “A scalable energy vs. latency trade-off in full-duplex mobile edge computing systems,” IEEE Transactions on Communications, vol. 67, no. 8, pp. 5848-5861, 2019.
[32] X. Lyu, H. Tian, C. Sengul, and P. Zhang, “Multiuser joint task offloading and resource optimization in proximate clouds,” IEEE Transactions on Vehicular Technology, vol. 66, no. 4, pp. 3435-3447, 2016.
[33] M. Babar and M. Sohail Khan, “ScalEdge: a framework for scalable edge computing in Internet of Things-based smart systems,” International Journal of Distributed Sensor Networks, 2021. https://doi.org/10.1177/15501477211035332
[34] J. Ren, H. Guo, C. Xu, and Y. Zhang, “Serving at the edge: a scalable IoT architecture based on transparent computing,” IEEE Network, vol. 31, no. 5, pp. 96-105, 2017.
[35] X. Lyu, W. Ni, H. Tian, R. P. Liu, X. Wang, G. B. Giannakis, and A. Paulraj, “Optimal schedule of mobile edge computing for Internet of Things using partial information,” IEEE Journal on Selected Areas in Communications, vol. 35, no. 11, pp. 2606-2615, 2017.
[36] M. Bouet and V. Conan, “Mobile edge computing resources optimization: a geo-clustering approach,” IEEE Transactions on Network and Service Management, vol. 15, no. 2, pp. 787-796, 2018.
[37] J. Wu, Z. Cao, Y. Zhang, and X. Zhang, “Edge-cloud collaborative computation offloading model based on improved partical swarm optimization in MEC,” in Proceedings of 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS), Tianjin, China, 2019, pp. 959-962.
[38] X. Tao, K. Ota, M. Dong, H. Qi, and K. Li, “Performance guaranteed computation offloading for mobile-edge cloud computing,” IEEE Wireless Communications Letters, vol. 6, no. 6, pp. 774-777, 2017.
[39] Y. Gao, Y. Cui, X. Wang, and Z. Liu, “Optimal resource allocation for scalable mobile edge computing,” IEEE Communications Letters, vol. 23, no. 7, pp. 1211-1214, 2019.
[40] D. Spatharakis, I. Dimolitsas, D. Dechouniotis, G. Papathanail, I. Fotoglou, P. Papadimitriou, and S. Papavassiliou, “A scalable edge computing architecture enabling smart offloading for location based services,” Pervasive and Mobile Computing, vol. 67, article no. 101217, 2020. https://doi.org/10.1016/j.pmcj.2020.101217
[41] A. Shakarami, A. Shahidinejad, and M. Ghobaei-Arani, “An autonomous computation offloading strategy in mobile edge computing: a deep learning-based hybrid approach,” Journal of Network and Computer Applications, vol. 178, article no. 102974, 2021. https://doi.org/10.1016/j.jnca.2021.102974
[42] Y. Wang, H. Zhu, X. Hei, Y. Kong, W. Ji, and L. Zhu, “An energy saving based on task migration for mobile edge computing,” EURASIP Journal on Wireless Communications and Networking, vol. 2019, article no. 133, 2019. https://doi.org/10.1186/s13638-019-1469-2
[43] S. Yu, X. Wang, and R. Langar, “Computation offloading for mobile edge computing: a deep learning approach,” in Proceedings of 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), Montreal, Canada, 2017, pp. 1-6.
[44] H. Eom, R. Figueiredo, H. Cai, Y. Zhang, and G. Huang, “MALMOS: machine learning-based mobile offloading scheduler with online training,” in Proceedings of 2015 3rd IEEE International Conference on Mobile Cloud Computing, Services, and Engineering, San Francisco, CA, 2015, pp. 51-60.
[45] L. Li, M. Siew, and T. Q. Quek, “Learning-based pricing for privacy-preserving job offloading in mobile edge computing,” in Proceedings of 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 2019, pp. 4784-4788.
[46] F. Zhang, J. Ge, C. Wong, C. Li, X. Chen, S. Zhang, B. Luo, H. Chang, and V. Chang, “Online learning offloading framework for heterogeneous mobile edge computing system,” Journal of Parallel and Distributed Computing, vol. 128, pp. 167-183, 2019.
[47] M. Aazam, S. Zeadally, and K. A. Harras, “Offloading in fog computing for IoT: review, enabling technologies, and research opportunities,” Future Generation Computer Systems, vol. 87, pp. 278-289, 2018.
[48] H. Meng, D. Chao, and Q. Guo, “Deep reinforcement learning based task offloading algorithm for mobile-edge computing systems,” in Proceedings of the 2019 4th International Conference on Mathematics and Artificial Intelligence, Chengdu, China, 2019, pp. 90-94.
[49] A. K. Sangaiah, D. V. Medhane, T. Han, M. S. Hossain, and S. Muhammad, “Enforcing position-based confidentiality with machine learning paradigm through mobile edge computing in real-time industrial informatics,” IEEE Transactions on Industrial Informatics, vol. 15, no. 7, pp. 4189-4196, 2019.
[50] A. Crutcher, C. Koch, K. Coleman, J. Patman, F. Esposito, and P. Calyam, “Hyperprofile-based computation offloading for mobile edge networks,” in Proceedings of 2017 IEEE 14th International Conference on Mobile Ad Hoc and Sensor Systems (MASS), Orlando, FL, 2017, pp. 525-529.
[51] S. Wu, W. Xia, W. Cui, Q. Chao, Z. Lan, F. Yan, and L. Shen, “An efficient offloading algorithm based on support vector machine for mobile edge computing in vehicular networks,” in Proceedings of 2018 10th International Conference on Wireless Communications and Signal Processing (WCSP), Hangzhou, China, 2018, pp. 1-6.
[52] L. Atzori, A. Iera, G. Morabito, and M. Nitti, “The social Internet of Things (SIOT)-when social networks meet the Internet of Things: concept, architecture and network characterization,” Computer Networks, vol. 56, no. 16, pp. 3594-3608, 2012.
[53] X. Lyu, H. Tian, L. Jiang, A. Vinel, S. Maharjan, S. Gjessing, and Y. Zhang, “Selective offloading in mobile edge computing for the green Internet of Things,” IEEE Network, vol. 32, no. 1, pp. 54-60, 2018.
[54] F. Guo, L. Yu, S. Tian, and J. Yu, “Workflow task scheduling algorithm based on the resources' fuzzy clustering in cloud computing environment,” International Journal of Communication Systems, vol. 28, no. 6, pp. 1053-1067, 2015.
[55] R. Bruschi, F. Davoli, P. Lago, and J. F. Pajo, “A multi-clustering approach to scale distributed tenant networks for mobile edge computing,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 3, pp. 499-514, 2019.
[56] X. Sun and N. Ansari, “Latency aware workload offloading in the cloudlet network,” IEEE Communications Letters, vol. 21, no. 7, pp. 1481-1484, 2017.
[57] Y. Nan, W. Li, W. Bao, F. C. Delicato, P. F. Pires, Y. Dou, and A. Y. Zomaya, “Adaptive energy-aware computation offloading for cloud of things systems,” IEEE Access, vol. 5, pp. 23947-23957, 2017.
[58] X. Chen, L. Jiao, W. Li, and X. Fu, “Efficient multi-user computation offloading for mobile-edge cloud computing,” IEEE/ACM Transactions on Networking, vol. 24, no. 5, pp. 2795-2808, 2015.
[59] W. Labidi, M. Sarkiss, and M. Kamoun, “Joint multi-user resource scheduling and computation offloading in small cell networks,” in Proceedings of 2015 IEEE 11th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), Abu Dhabi, UAE, 2015, pp. 794-801.
[60] Y. H. Kao, B. Krishnamachari, M. R. Ra, and F. Bai, “Hermes: latency optimal task assignment for resource-constrained mobile computing,” IEEE Transactions on Mobile Computing, vol. 16, no. 11, pp. 3056-3069, 2017.
[61] X. Lyu and H. Tian, “Adaptive receding horizon offloading strategy under dynamic environment,” IEEE Communications Letters, vol. 20, no. 5, pp. 878-881, 2016.
[62] Y. Kim, H. W. Lee, and S. Chong, “Mobile computation offloading for application throughput fairness and energy efficiency,” IEEE Transactions on Wireless Communications, vol. 18, no. 1, pp. 3-19, 2018.
[63] P. Wang, C. Yao, Z. Zheng, G. Sun, and L. Song, “Joint task assignment, transmission, and computing resource allocation in multilayer mobile edge computing systems,” IEEE Internet of Things Journal, vol. 6, no. 2, pp. 2872-2884, 2018.
[64] B. Gu, Y. Chen, H. Liao, Z. Zhou, and D. Zhang, “A distributed and context-aware task assignment mechanism for collaborative mobile edge computing,” Sensors, vol. 18, no. 8, article no. 2423, 2018. https://doi.org/10.3390/s18082423
[65] J. Ren, G. Yu, Y. Cai, and Y. He, “Latency optimization for resource allocation in mobile-edge computation offloading,” IEEE Transactions on Wireless Communications, vol. 17, no. 8, pp. 5506-5519, 2018.
[66] K. Cheng, Y. Teng, W. Sun, A. Liu, and X. Wang, “Energy-efficient joint offloading and wireless resource allocation strategy in multi-MEC server systems,” in Proceedings of 2018 IEEE International Conference on Communications (ICC), Kansas City, MO, 2018, pp. 1-6.
[67] J. Opadere, Q. Liu, N. Zhang, and T. Han, “Joint computation and communication resource allocation for energy-efficient mobile edge networks,” in Proceedings of 2019 IEEE International Conference on Communications (ICC), Shanghai, China, 2019, pp. 1-6.
[68] J. Wang, W. Wu, Z. Liao, A. K. Sangaiah, and R. S. Sherratt, “An energy-efficient off-loading scheme for low latency in collaborative edge computing,” IEEE Access, vol. 7, pp. 149182-149190, 2019.
[69] F. Cicirelli, A. Guerrieri, G. Spezzano, A. Vinci, O. Briante, A. Iera, and G. Ruggeri, “Edge computing and social internet of things for large-scale smart environments development,” IEEE Internet of Things Journal, vol. 5, no. 4, pp. 2557-2571, 2017.

About this article
  • Received: 11 June 2021
  • Accepted: 30 September 2021
  • Published: 15 November 2021