A green, secure, and deep intelligent method for dynamic IoT-edge-cloud offloading scenarios


Date

2023

Authors

Heidari, Arash
Navimipour, Nima Jafari
Jamali, Mohammad Ali Jabraeil
Akbarpour, Shahin

Publisher

Elsevier

Abstract

To meet user expectations for smart and user-friendly Internet of Things (IoT) applications, the volume of processing is expanding rapidly, and task latency constraints are becoming extremely strict. At the same time, the limited battery capacity of IoT objects severely degrades the user experience. Energy Harvesting (EH) technology allows green energy to provide a continuous power supply for IoT objects; combined with the maturation of edge platforms and the development of parallel computing, it offers solid assurance that resource-constrained IoT objects can function properly. In this work, the Markov Decision Process (MDP) framework and Deep Learning (DL) are used to solve dynamic online/offline IoT-edge offloading scenarios. The proposed system can operate in both offline and online contexts and meets the user's quality-of-service expectations. We also investigate a blockchain scenario in which the edge and the cloud cooperate on task offloading to address the tradeoff between limited processing power and high latency while ensuring data integrity during the offloading process. We provide a double Q-learning solution to the MDP that optimizes offline offloading policies. During exploration, Transfer Learning (TL) is employed to accelerate convergence by reducing pointless exploration. Although the recently introduced Deep Q-Network (DQN) can address the space-complexity issue by replacing the huge Q-table of standard Q-learning with a Deep Neural Network (DNN), its learning speed may still be insufficient for IoT applications. In light of this, our work introduces a novel learning algorithm, deep Post-Decision State (PDS)-learning, which combines the PDS-learning approach with the classic DQN. The components of the proposed system can be dynamically selected and adjusted to reduce object energy consumption and delay. On average, the proposed technique outperforms multiple benchmarks, improving delay by 4.5%, job failure rate by 5.7%, cost by 4.6%, computational overhead by 6.1%, and energy consumption by 3.9%.
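
As a rough illustration of the double Q-learning component mentioned in the abstract, the minimal Python sketch below shows the two-table update for a toy offloading decision. The state, action, and reward definitions (local/edge/cloud actions, learning parameters) are hypothetical placeholders for illustration only, not the paper's actual MDP model or implementation.

# Minimal sketch (not the authors' implementation): tabular double Q-learning
# for a toy task-offloading MDP with illustrative placeholder actions.
import random
from collections import defaultdict

ACTIONS = ["local", "edge", "cloud"]   # hypothetical offloading choices
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning parameters

Q_a = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
Q_b = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose_action(state):
    """Epsilon-greedy action selection over the sum of both Q-tables."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q_a[state][a] + Q_b[state][a])

def update(state, action, reward, next_state):
    """Double Q-learning update: one table selects the greedy next action,
    the other evaluates it, which reduces overestimation bias."""
    if random.random() < 0.5:
        best = max(ACTIONS, key=lambda a: Q_a[next_state][a])
        target = reward + GAMMA * Q_b[next_state][best]
        Q_a[state][action] += ALPHA * (target - Q_a[state][action])
    else:
        best = max(ACTIONS, key=lambda a: Q_b[next_state][a])
        target = reward + GAMMA * Q_a[next_state][best]
        Q_b[state][action] += ALPHA * (target - Q_b[state][action])

# Example usage with a hypothetical transition and a negative reward
# representing delay/energy cost:
#   a = choose_action("s0"); update("s0", a, -1.0, "s1")

In the deep PDS-learning variant described in the abstract, these tables would be replaced by a DNN-based approximation; that extension is not sketched here.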

Keywords

Computation, Green Offloading, Blockchain, Deep Learning, IoT, Smart Edge

Citation Count

28

WoS Q

Q1

Scopus Q

Q1

Source

Sustainable Computing-Informatics & Systems

Volume

38
