D-ADMM

D-ADMM: a communication-efficient distributed algorithm for separable optimization. We propose a distributed algorithm, named Distributed Alternating Direction Method of Multipliers (D-ADMM), for solving separable optimization problems in networks of interconnected nodes or agents. In a separable optimization problem there is a private cost function and a private constraint set at each node. The goal is to minimize the sum of all the cost functions, constraining the solution to lie in the intersection of all the constraint sets. D-ADMM is proven to converge when the network is bipartite or when all the functions are strongly convex, although in practice convergence is observed even when these conditions are not met. We use D-ADMM to solve the following problems from signal processing and control: average consensus, compressed sensing, and support vector machines. Our simulations show that D-ADMM requires fewer communications than state-of-the-art algorithms to achieve a given accuracy level. Algorithms with low communication requirements are important, for example, in sensor networks, where sensors are typically battery-operated and communicating is the most energy-consuming operation.
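To illustrate the setting, the following is a minimal sketch of decentralized consensus ADMM applied to average consensus, one of the applications mentioned above. It is not the paper's exact D-ADMM (which exploits a graph coloring to order the node updates); it uses the plain synchronous edge-based decentralized ADMM with quadratic private costs f_i(x) = (x - a_i)^2 / 2, and the ring topology, data values, and penalty parameter are assumptions chosen for illustration.

```python
import numpy as np

# Toy network: 4 nodes in a ring (assumed topology for illustration).
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
a = np.array([1.0, 3.0, 5.0, 7.0])  # private data; f_i(x) = (x - a_i)^2 / 2
rho = 1.0                           # ADMM penalty parameter (assumed)

x = np.zeros(4)      # primal variable held at each node
alpha = np.zeros(4)  # dual variable held at each node

for _ in range(300):
    x_old = x.copy()
    # Primal update: each node solves its local quadratic subproblem
    # using only its own data and its neighbors' previous iterates.
    for i in range(4):
        d_i = len(neighbors[i])
        s = sum(x_old[i] + x_old[j] for j in neighbors[i])
        x[i] = (a[i] - alpha[i] + rho * s) / (1.0 + 2.0 * rho * d_i)
    # Dual update: driven by local disagreement with neighbors.
    for i in range(4):
        alpha[i] += rho * sum(x[i] - x[j] for j in neighbors[i])

print(x)  # each entry should approach the network average of a, here 4.0
```

Each iteration requires every node to exchange its current estimate only with its neighbors; communication cost, not local computation, is the quantity D-ADMM is designed to reduce.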


References in zbMATH (referenced in 25 articles, 1 standard article)

Showing results 1 to 20 of 25.
Sorted by year (citations)


  1. Wang, Ye; Manzie, Chris: Robust distributed model predictive control of linear systems: analysis and synthesis (2022)
  2. Bin, Michelangelo; Parisini, Thomas: A distributed methodology for approximate uniform global minimum sharing (2021)
  3. Camisa, Andrea; Farina, Francesco; Notarnicola, Ivano; Notarstefano, Giuseppe: Distributed constraint-coupled optimization via primal decomposition over random time-varying graphs (2021)
  4. Chen, Chenxi; Chen, Yunmei; Ye, Xiaojing: A randomized incremental primal-dual method for decentralized consensus optimization (2021)
  5. Yu, Wenwu; Liu, Hongzhe; Zheng, Wei Xing; Zhu, Yanan: Distributed discrete-time convex optimization with nonidentical local constraints over time-varying unbalanced directed graphs (2021)
  6. Cheng, Huqiang; Li, Huaqing; Wang, Zheng: On the convergence of exact distributed generalisation and acceleration algorithm for convex optimisation (2020)
  7. Falsone, Alessandro; Notarnicola, Ivano; Notarstefano, Giuseppe; Prandini, Maria: Tracking-ADMM for distributed constraint-coupled optimization (2020)
  8. Lv, Yuan-Wei; Yang, Guang-Hong; Shi, Chong-Xiao: Differentially private distributed optimization for multi-agent systems via the augmented Lagrangian algorithm (2020)
  9. Todescato, Marco; Bof, Nicoletta; Cavraro, Guido; Carli, Ruggero; Schenato, Luca: Partition-based multi-agent optimization in the presence of lossy and asynchronous communication (2020)
  10. Yan, Jiaqi; Guo, Fanghong; Wen, Changyun; Li, Guoqi: Parallel alternating direction method of multipliers (2020)
  11. Kaya, Kamer; Öztoprak, Figen; Birbil, Ş. İlker; Cemgil, A. Taylan; Şimşekli, Umut; Kuru, Nurdan; Koptagel, Hazal; Öztürk, M. Kaan: A framework for parallel second order incremental optimization algorithms for solving partially separable problems (2019)
  12. Price, Bradley S.; Geyer, Charles J.; Rothman, Adam J.: Automatic response category combination in multinomial logistic regression (2019)
  13. Shi, Chong-Xiao; Yang, Guang-Hong: Augmented Lagrange algorithms for distributed optimization over multi-agent networks via edge-based method (2018)
  14. Wang, Yamin; Wu, Lei; Li, Jie: A fully distributed asynchronous approach for multi-area coordinated network-constrained unit commitment (2018)
  15. Deng, Wei; Lai, Ming-Jun; Peng, Zhimin; Yin, Wotao: Parallel multi-block ADMM with o(1/k) convergence (2017)
  16. Eckstein, Jonathan: A simplified form of block-iterative operator splitting and an asynchronous algorithm resembling the multi-block alternating direction method of multipliers (2017)
  17. Lee, Jason D.; Lin, Qihang; Ma, Tengyu; Yang, Tianbao: Distributed stochastic variance reduced gradient methods by sampling extra data with replacement (2017)
  18. Wang, Dong; Ren, Hualing; Shao, Fubo: Distributed Newton methods for strictly convex consensus optimization problems in multi-agent networks (2017)
  19. Wang, Zheming; Ong, Chong Jin: Distributed model predictive control of linear discrete-time systems with local and global constraints (2017)
  20. Ai, Wu; Chen, Weisheng; Xie, Jin: A zero-gradient-sum algorithm for distributed cooperative learning using a feedforward neural network with random weights (2016)
