D-ADMM

D-ADMM: a communication-efficient distributed algorithm for separable optimization. We propose a distributed algorithm, named Distributed Alternating Direction Method of Multipliers (D-ADMM), for solving separable optimization problems in networks of interconnected nodes or agents. In a separable optimization problem, each node has a private cost function and a private constraint set. The goal is to minimize the sum of all the cost functions while constraining the solution to lie in the intersection of all the constraint sets. D-ADMM is proven to converge when the network is bipartite or when all the functions are strongly convex, although in practice convergence is observed even when neither condition holds. We use D-ADMM to solve the following problems from signal processing and control: average consensus, compressed sensing, and support vector machines. Our simulations show that D-ADMM requires fewer communications than state-of-the-art algorithms to achieve a given accuracy level. Algorithms with low communication requirements are important, for example, in sensor networks, where sensors are typically battery-operated and communication is the most energy-consuming operation.
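In the abstract's terms, with P nodes and node p holding a private cost f_p and constraint set X_p, the problem is to minimize f_1(x) + ... + f_P(x) over x, subject to x in X_1 ∩ ... ∩ X_P. Below is a minimal sketch of a decentralized consensus ADMM in the same spirit, specialized to the average-consensus example from the abstract (f_p(x) = (1/2)(x - theta_p)^2, no constraints); the graph, node values, and penalty parameter rho are illustrative assumptions, and the paper's exact D-ADMM variable splitting and update schedule are not reproduced here.

    # Sketch only: decentralized consensus ADMM for average consensus,
    # min_x sum_p (1/2)(x - theta_p)^2, using neighbor-to-neighbor messages.
    # The graph, private data theta, and penalty rho are assumed for the demo.
    neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # path graph on 4 nodes
    theta = {0: 1.0, 1: 5.0, 2: -2.0, 3: 4.0}           # private node data
    rho = 1.0                                           # ADMM penalty parameter

    x = dict(theta)                  # each node's current estimate
    alpha = {i: 0.0 for i in theta}  # each node's dual variable

    for _ in range(300):
        # Primal step: closed-form minimizer, at each node i, of
        #   (1/2)(x - theta_i)^2 + alpha_i*x + rho*sum_j (x - (x_i + x_j)/2)^2
        # (all nodes update simultaneously from the previous iterates)
        x = {
            i: (theta[i] - alpha[i] + rho * sum(x[i] + x[j] for j in nbrs))
               / (1.0 + 2.0 * rho * len(nbrs))
            for i, nbrs in neighbors.items()
        }
        # Dual step: accumulate each node's disagreement with its neighbors
        alpha = {
            i: alpha[i] + rho * sum(x[i] - x[j] for j in nbrs)
            for i, nbrs in neighbors.items()
        }

    print(x)  # every estimate approaches the network average, 2.0

Each iteration costs one exchange of the current estimate with each neighbor per node; counting such exchanges per accuracy level is roughly the communication metric the abstract's comparisons refer to.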


References in zbMATH (referenced in 16 articles, 1 standard article)


  1. Falsone, Alessandro; Notarnicola, Ivano; Notarstefano, Giuseppe; Prandini, Maria: Tracking-ADMM for distributed constraint-coupled optimization (2020)
  2. Todescato, Marco; Bof, Nicoletta; Cavraro, Guido; Carli, Ruggero; Schenato, Luca: Partition-based multi-agent optimization in the presence of lossy and asynchronous communication (2020)
  3. Kaya, Kamer; Öztoprak, Figen; Birbil, Ş. İlker; Cemgil, A. Taylan; Şimşekli, Umut; Kuru, Nurdan; Koptagel, Hazal; Öztürk, M. Kaan: A framework for parallel second order incremental optimization algorithms for solving partially separable problems (2019)
  4. Shi, Chong-Xiao; Yang, Guang-Hong: Augmented Lagrange algorithms for distributed optimization over multi-agent networks via edge-based method (2018)
  5. Wang, Yamin; Wu, Lei; Li, Jie: A fully distributed asynchronous approach for multi-area coordinated network-constrained unit commitment (2018)
  6. Deng, Wei; Lai, Ming-Jun; Peng, Zhimin; Yin, Wotao: Parallel multi-block ADMM with o(1/k) convergence (2017)
  7. Eckstein, Jonathan: A simplified form of block-iterative operator splitting and an asynchronous algorithm resembling the multi-block alternating direction method of multipliers (2017)
  8. Lee, Jason D.; Lin, Qihang; Ma, Tengyu; Yang, Tianbao: Distributed stochastic variance reduced gradient methods by sampling extra data with replacement (2017)
  9. Wang, Dong; Ren, Hualing; Shao, Fubo: Distributed Newton methods for strictly convex consensus optimization problems in multi-agent networks (2017)
  10. Wang, Zheming; Ong, Chong Jin: Distributed model predictive control of linear discrete-time systems with local and global constraints (2017)
  11. Ai, Wu; Chen, Weisheng; Xie, Jin: A zero-gradient-sum algorithm for distributed cooperative learning using a feedforward neural network with random weights (2016)
  12. Teng, Yueyang; Qi, Shouliang; Xiao, Dayu; Xu, Lisheng; Li, Jianhua; Kang, Yan: A general solution to least squares problems with box constraints and its applications (2016)
  13. Teng, Yueyang; Sun, Hang; Guo, Chen; Kang, Yan: ADMM-EM method for L_1-norm regularized weighted least squares PET reconstruction (2016)
  14. Boţ, Radu Ioan; Heinrich, André; Wanka, Gert: Employing different loss functions for the classification of images via supervised learning (2014)
  15. Boţ, Radu Ioan; Csetnek, Ernö Robert; Nagy, Erika: Solving systems of monotone inclusions via primal-dual splitting techniques (2013)
  16. Mota, João F. C.; Xavier, João M. F.; Aguiar, Pedro M. Q.; Püschel, Markus: D-ADMM: a communication-efficient distributed algorithm for separable optimization (2013)