Cache Oriented Migration Decision and Resource Allocation in Edge Computing
doi: 10.11999/JEIT240427 cstr: 32379.14.JEIT240427
School of Information Engineering, Zhengzhou University, Zhengzhou 450001, China
Abstract: Edge computing provides computing resources and caching services at the network edge, effectively reducing execution latency and energy consumption. However, due to user mobility and network randomness, caching services and user tasks frequently migrate between edge servers, increasing system cost. In this paper, a migration computation model based on pre-caching is constructed, and the joint optimization of resource allocation, service caching, and migration decisions is investigated. To solve this mixed-integer nonlinear programming problem, the original problem is decomposed: resource allocation is optimized using the Karush-Kuhn-Tucker conditions and a bisection search method, and a Joint optimization algorithm for Migration decision-making and Service caching based on a Greedy Strategy (JMSGS) is proposed to obtain the optimal migration and caching decisions. Simulation results show the effectiveness of the proposed algorithm in minimizing the weighted sum of system energy consumption and latency.
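For orientation, the objective described in the abstract, the weighted sum of system energy consumption and latency (denoted ETC in Algorithm 2), can be sketched as below; the weights $ {\omega ^t} $, $ {\omega ^e} $ and the per-user latency and energy terms $ {T_i} $, $ {E_i} $ are assumed notation, since the exact formulation is not reproduced in this excerpt:
$ \min \;{\text{ETC}} = \sum\nolimits_{i = 1}^N {\left( {{\omega ^t}{T_i} + {\omega ^e}{E_i}} \right)} $, subject to transmit-power, computing-resource, and cache-capacity constraints.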
Key words:
- Edge computing
- Migration strategy
- Service caching
- Resource allocation
Algorithm 1 Uplink transmission power allocation based on bisection search
Initialization: range of transmit power $ {p_i} $, convergence threshold $ r $
(1) Compute $ \phi (p_i^{\max }) $ according to Eq. (21)
(2) if $ \phi (p_i^{\max }) < 0 $ then
(3)  $ p_i^* = p_i^{\max } $
(4) else
(5)  Initialize $ {p_l} = p_i^{\min } $, $ {p_h} = p_i^{\max } $
(6) end if
 repeat
  $ {p_m} = ({p_l} + {p_h})/2 $
(7)  if $ \phi ({p_m}) < 0 $ then
(8)   $ {p_l} = {p_m} $
(9)  else
(10)   $ {p_h} = {p_m} $
(11)  end if
(12) until $ ({p_h} - {p_l}) \le r $
(13) $ p_i^* = ({p_l} + {p_h})/2 $
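As a sanity check, Algorithm 1 can be prototyped in a few lines of Python. The sketch below assumes a generic criterion function phi(p) standing in for Eq. (21) and monotone in the transmit power; the function and variable names are illustrative, not the paper's notation.

def bisection_power(phi, p_min, p_max, r=1e-6):
    """Bisection search for the uplink transmit power p_i* (sketch of Algorithm 1).

    phi          -- placeholder for the criterion of Eq. (21), assumed monotone in p
    p_min, p_max -- feasible transmit-power range of user i
    r            -- convergence threshold on the interval width
    """
    # Steps (1)-(3): if the criterion is still negative at maximum power,
    # the optimum lies on the upper boundary.
    if phi(p_max) < 0:
        return p_max
    # Steps (5)-(12): shrink [p_l, p_h] until its width drops below r.
    p_l, p_h = p_min, p_max
    while p_h - p_l > r:
        p_m = 0.5 * (p_l + p_h)
        if phi(p_m) < 0:
            p_l = p_m
        else:
            p_h = p_m
    # Step (13): return the midpoint of the final interval.
    return 0.5 * (p_l + p_h)

# Illustrative usage with a toy criterion (not the paper's Eq. (21)):
p_star = bisection_power(lambda p: p - 0.3, p_min=0.0, p_max=1.0)
print(f"p* = {p_star:.6f}")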
Algorithm 2 Joint migration and caching optimization based on a greedy strategy (JMSGS)
Initialization: $ {N_{{\text{local}}}} = {N_0} $, $ {N_{{\text{mec}}}} = \emptyset $
(1) for $ i = 1:N $
(2)  for $ m = 1:M_i^{{\text{sort}}} $
(3)   Compute the user's cost-gain function $ \Delta C(m) $
(4)  end for
(5)  Sort each user's cost-gain values in descending order and add them to the sequence $ N_i^{{\text{sort}}} $
(6)  for $ i = 1:N_i^{{\text{sort}}} $, compute the objective function value
(7)   if $ {\text{ETC}}_{o + i} < {\text{ETC}}_o $
(8)    $ \alpha = 1 $, $ \vartheta = 1 $ or $ \varpi = 1 $
(9)   else
(10)    Keep the current execution mode
(11)   end if
(12)   if $ \varpi = 1 $ and $ C_m + C_m^{\text{a}} \le C_m^{\max } $
(13)    Cache the application at the server
(14)   else if $ X_m^{\min } < {X_i} $
(15)    Update the server state
(16)   else
(17)    Execute the task locally
(18)   end if
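A compact Python sketch of the greedy structure of Algorithm 2 is given below. The cost callback stands in for the weighted energy/delay objective ETC, and the user and server fields (app_id, app_size, cache_capacity) are illustrative assumptions; the paper's exact cost expressions and server state-update rules (steps (14)-(15)) are not reproduced.

from dataclasses import dataclass, field

@dataclass
class Server:
    cache_capacity: float              # C_m^max (Gb)
    cache_used: float = 0.0            # cache already occupied (Gb)
    cached_apps: set = field(default_factory=set)

def greedy_migration_caching(users, servers, cost):
    """Greedy sketch of JMSGS (Algorithm 2).

    users   -- list of dicts with keys 'app_id' and 'app_size' (Gb)
    servers -- list of Server objects
    cost(user, mode, m) -- placeholder for the weighted energy/delay cost ETC of
                           serving `user` locally ('local'), by offloading to server m
                           ('offload'), or by offloading with the application cached at m ('cache')
    Returns a per-user decision: ('local', None), ('offload', m) or ('cache', m).
    """
    decisions = {}
    for i, user in enumerate(users):
        best_mode, best_cost = ("local", None), cost(user, "local", None)
        # Steps (2)-(5): rank candidate servers by cost gain relative to local execution.
        gains = sorted(((best_cost - cost(user, "offload", m), m)
                        for m in range(len(servers))), reverse=True)
        # Steps (6)-(18): scan candidates in descending gain order.
        for gain, m in gains:
            if gain <= 0:              # no remaining server improves on local execution
                break
            srv, app = servers[m], user["app_id"]
            # Step (12): caching is admissible only if the app is already cached
            # or the remaining capacity admits it.
            if (app in srv.cached_apps
                    or srv.cache_used + user["app_size"] <= srv.cache_capacity):
                c_cache = cost(user, "cache", m)
                if c_cache < best_cost:
                    best_mode, best_cost = ("cache", m), c_cache
                    continue
            c_off = cost(user, "offload", m)
            if c_off < best_cost:
                best_mode, best_cost = ("offload", m), c_off
        mode, m = best_mode
        if mode == "cache":            # Step (13): commit the caching decision
            servers[m].cached_apps.add(user["app_id"])
            servers[m].cache_used += user["app_size"]
        decisions[i] = best_mode
    return decisions

Candidates are visited in decreasing order of cost gain, mirroring the descending sort of step (5), and a caching decision is committed only when capacity admits the application, mirroring step (12); the actual JMSGS additionally updates the server state when capacity is exhausted (steps (14)-(15)), which is omitted in this sketch.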
Table 1 Simulation parameters
Parameter | Value
Task size $ \lambda $ (Mb) | 10~20
Application size $ b $ (Gb) | 1~5
Local computing capability $ {f_{{\text{loc}}}} $ (GHz) | 0.5~1.5
Number of edge servers | 5~20
Noise power spectral density $ {N_0} $ (dBm/Hz) | –174
System bandwidth $ B $ (MHz) | 1~2
Edge server computing capability $ {f_{{\text{es}}}} $ (GHz) | 15~25
Server cache capacity (Gb) | 20~30
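For readers who wish to reproduce the setting, the ranges in Table 1 can be materialized as a small configuration. The snippet below is only an illustrative reading of the table; uniform sampling within each range is an assumption, since the sampling distribution is not stated here.

import random

# Illustrative reading of Table 1; uniform sampling within each range is an assumption.
PARAMS = {
    "task_size_Mb":      (10, 20),
    "app_size_Gb":       (1, 5),
    "f_local_GHz":       (0.5, 1.5),
    "num_edge_servers":  (5, 20),
    "noise_psd_dBm_Hz":  -174,       # fixed value
    "bandwidth_MHz":     (1, 2),
    "f_server_GHz":      (15, 25),
    "server_cache_Gb":   (20, 30),
}

def sample(value):
    """Draw one realization: ranges are sampled uniformly, scalars are returned as-is."""
    if isinstance(value, tuple):
        lo, hi = value
        return random.uniform(lo, hi)
    return value

scenario = {name: sample(value) for name, value in PARAMS.items()}
scenario["num_edge_servers"] = int(round(scenario["num_edge_servers"]))  # integer count
print(scenario)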