Study on Decentralized Machine Learning and Applications to Wireless Caching Networks

Time: Mon 2020-08-17 14:30

Location: Zoom (English)

Subject area: Electrical Engineering

Doctoral student: Yu Ye, Information Science and Engineering

Opponent: Zhu Han, Computer Science Department, University of Houston

Supervisor: Ming Xiao, Information Science and Engineering

Abstract

To promote the development of distributed machine learning, it is crucial to provide efficient models and training algorithms. This thesis is devoted to the design of distributed multi-task learning models and decentralized algorithms, as well as to the application of distributed machine learning in wireless caching networks. Confronted with the model complexity challenges of distributed machine learning, distributed multi-task feature learning implemented with randomized single-hidden-layer feed-forward neural networks (RSF) is studied. Multi-task learning with RSF (MTL-RSF) is first investigated, where the sub-space shared across tasks and the task-specific weights are optimized by an alternating-optimization-based algorithm. The MTL-RSF problem is then extended to the decentralized scenario by introducing shared variables for the localized task objectives together with a consensus constraint. A hybrid Jacobian and Gauss-Seidel proximal multi-block alternating direction method of multipliers (ADMM) algorithm is proposed to solve the decentralized learning problem, and theoretical analysis proves that this ADMM-based algorithm converges to a stationary point.

To address the communication bottleneck of distributed multi-task feature learning observed in prior work, the parallel random walk ADMM (PW-ADMM) algorithm, which allows multiple random walks to be active simultaneously, is proposed to trade off communication cost against running speed. Furthermore, the intelligent PW-ADMM (IPW-ADMM) is obtained by integrating the agent selection scheme random walk with choice into PW-ADMM. To further reduce the communication cost of distributed machine learning, the incremental ADMM (I-ADMM) algorithm is proposed, in which the update order follows a Hamiltonian cycle. Given the growing privacy concerns in distributed machine learning, communication-efficient privacy-preserving decentralized algorithms are also proposed. Exploiting the update order of I-ADMM, two privacy-preserving variants, PI-ADMM1 and PI-ADMM2, are obtained by introducing randomized initialization together with perturbations of the step size and of the primal update, respectively. The convergence of PI-ADMM1 is proved under mild assumptions.

To apply distributed machine learning to wireless networks, the service delay of a typical user terminal is analyzed for cache-enabled small-cell networks (SCNs) and for ultra-dense networks (UDNs) with millimeter-wave (mmWave) communications and user-centric clustering. For both scenarios, an alternating-optimization-based algorithm is presented that jointly optimizes the caching schemes of small base stations and user terminals so as to minimize the average service delay of a typical user terminal. The mobility pattern of user terminals is then characterized by a Markov renewal process, and the problem of predicting content popularity across geographic regions is formulated by combining this mobility pattern with a decentralized regularized multi-task learning model. To minimize the loss under the consensus constraint, a hybrid Jacobian and Gauss-Seidel proximal multi-block ADMM algorithm is proposed, and the proposed solution is proved to converge to the optimum at a rate of O(1/k) when the algorithm parameters satisfy specific conditions.
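To make the decentralized consensus formulation above concrete, the following Python sketch runs consensus ADMM with a Jacobi-style (parallel) primal step. The quadratic per-agent losses, dimensions, and penalty parameter rho are assumptions of this sketch, not the thesis's multi-task RSF objective.

    import numpy as np

    rng = np.random.default_rng(0)
    n_agents, dim, rho = 5, 3, 1.0
    A = [rng.standard_normal((10, dim)) for _ in range(n_agents)]
    b = [rng.standard_normal(10) for _ in range(n_agents)]

    x = np.zeros((n_agents, dim))    # local models, one per agent
    z = np.zeros(dim)                # shared (consensus) variable
    lam = np.zeros((n_agents, dim))  # dual variables

    for k in range(200):
        # Jacobi-style step: every agent updates in parallel against the same z
        for i in range(n_agents):
            # x_i = argmin ||A_i x - b_i||^2 + lam_i'(x - z) + (rho/2)||x - z||^2
            H = 2 * A[i].T @ A[i] + rho * np.eye(dim)
            g = 2 * A[i].T @ b[i] - lam[i] + rho * z
            x[i] = np.linalg.solve(H, g)
        z = np.mean(x + lam / rho, axis=0)  # consensus update
        lam += rho * (x - z)                # dual ascent

    print("consensus residual:", np.linalg.norm(x - z))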
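The communication saving of I-ADMM comes from activating one agent at a time along a Hamiltonian cycle, so that only a token carrying the shared state travels through the network. Below is a minimal sketch under the same toy quadratic losses, with a simplified incremental update of the consensus average; the thesis's exact I-ADMM update is not reproduced.

    import numpy as np

    rng = np.random.default_rng(1)
    n, dim, rho = 5, 3, 1.0
    A = [rng.standard_normal((10, dim)) for _ in range(n)]
    b = [rng.standard_normal(10) for _ in range(n)]

    x = np.zeros((n, dim))
    lam = np.zeros((n, dim))
    z = np.zeros(dim)        # consensus average carried by the token
    cycle = list(range(n))   # a Hamiltonian cycle visiting every agent once

    for sweep in range(100):
        for i in cycle:      # the token activates one agent per step
            old = x[i] + lam[i] / rho
            H = 2 * A[i].T @ A[i] + rho * np.eye(dim)
            g = 2 * A[i].T @ b[i] - lam[i] + rho * z
            x[i] = np.linalg.solve(H, g)
            lam[i] += rho * (x[i] - z)
            # incremental token update: only agent i's contribution changes
            z += (x[i] + lam[i] / rho - old) / n

    print("consensus residual:", np.linalg.norm(x - z))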
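The privacy mechanisms of PI-ADMM1 and PI-ADMM2 are described above only as randomized initialization plus perturbations of the step size and of the primal update, respectively. The sketch below illustrates that generic pattern for a single agent's update; the noise scales sigma_rho and sigma_x are purely illustrative assumptions, and the thesis's perturbation schemes and convergence conditions are not reproduced.

    import numpy as np

    rng = np.random.default_rng(2)
    dim, rho0 = 3, 1.0

    # randomized initialization (PI-ADMM1-style): the first iterates an agent
    # shares are not a deterministic function of its local data
    x = rng.standard_normal(dim)
    lam = rng.standard_normal(dim)

    def private_update(x_i, lam_i, z, A_i, b_i, sigma_rho=0.1, sigma_x=0.05):
        # perturbed step size (PI-ADMM1-style idea)
        rho_k = rho0 + sigma_rho * rng.standard_normal()
        H = 2 * A_i.T @ A_i + rho_k * np.eye(dim)
        g = 2 * A_i.T @ b_i - lam_i + rho_k * z
        x_next = np.linalg.solve(H, g)
        # additive noise on the primal iterate before sharing (PI-ADMM2-style idea)
        x_shared = x_next + sigma_x * rng.standard_normal(dim)
        lam_next = lam_i + rho_k * (x_shared - z)
        return x_shared, lam_next

    A_i = rng.standard_normal((10, dim))
    b_i = rng.standard_normal(10)
    x, lam = private_update(x, lam, np.zeros(dim), A_i, b_i)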
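Finally, a Markov renewal process models mobility as a Markov chain over cells combined with random sojourn times in each cell. The toy simulator below is only a sketch of that idea: the transition matrix and the exponential holding times are assumptions, not the thesis's fitted mobility model.

    import numpy as np

    rng = np.random.default_rng(4)
    # cell-to-cell transition probabilities (rows sum to 1)
    P = np.array([[0.0, 0.7, 0.3],
                  [0.5, 0.0, 0.5],
                  [0.4, 0.6, 0.0]])
    mean_stay = np.array([5.0, 2.0, 8.0])  # mean sojourn time per cell

    def simulate(t_end, cell=0):
        t, visits = 0.0, []
        while t < t_end:
            stay = rng.exponential(mean_stay[cell])      # random sojourn time
            visits.append((cell, t, min(t + stay, t_end)))
            t += stay
            cell = rng.choice(3, p=P[cell])              # jump to the next cell
        return visits  # list of (cell, enter time, leave time)

    print(simulate(30.0))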

urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-276455