ARock
ARock: an algorithmic framework for asynchronous parallel coordinate updates. Finding a fixed point of a nonexpansive operator, i.e., solving x* = Tx*, abstracts many problems in numerical linear algebra, optimization, and other areas of data science. To solve fixed-point problems, we propose ARock, an algorithmic framework in which multiple agents (machines, processors, or cores) update x in an asynchronous parallel fashion. Asynchrony is crucial to parallel computing since it reduces synchronization wait, relaxes communication bottlenecks, and thus significantly speeds up computation. At each step of ARock, an agent updates a randomly selected coordinate x_i based on possibly out-of-date information about x. The agents share x through either global memory or communication. If writing to x_i is atomic, the agents can read and write x without memory locks. We prove that if the nonexpansive operator T has a fixed point, then with probability one, ARock generates a sequence that converges to a fixed point of T. Our conditions on T and the step sizes are weaker than those in comparable work. Linear convergence is obtained under suitable assumptions. We present special cases of ARock for linear systems, convex optimization, and machine learning, as well as for distributed and decentralized consensus problems. Numerical experiments solving sparse logistic regression problems are presented.
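The update described above can be illustrated with a minimal sketch. This is not the paper's implementation: the toy operator T(x) = Ax + b (a contraction, so its fixed point solves x = Ax + b), the step size eta, and the thread counts are all assumptions chosen for illustration. Each thread repeatedly takes a possibly stale snapshot of the shared iterate, picks a random coordinate, and writes a single coordinate back without any lock:

```python
import threading
import numpy as np

# Hypothetical toy instance: T(x) = A x + b with ||A||_2 < 1, so T is a
# contraction and its unique fixed point solves x = A x + b.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
A *= 0.9 / np.linalg.norm(A, 2)            # enforce ||A||_2 < 1
b = rng.standard_normal(n)
x_star = np.linalg.solve(np.eye(n) - A, b)  # exact fixed point, for checking

x = np.zeros(n)   # shared iterate; coordinates are written without locks
eta = 0.5         # step size (an assumed value; the paper gives precise conditions)

def worker(num_updates):
    local_rng = np.random.default_rng()
    for _ in range(num_updates):
        x_hat = x.copy()                 # possibly out-of-date snapshot of x
        i = local_rng.integers(n)        # uniformly random coordinate
        # i-th entry of the residual x_hat - T(x_hat)
        residual_i = x_hat[i] - (A[i] @ x_hat + b[i])
        x[i] -= eta * residual_i         # single-coordinate write, no lock

threads = [threading.Thread(target=worker, args=(4000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(np.linalg.norm(x - x_star))        # distance to the fixed point
```

Because T is a contraction here, the shared iterate converges to x* despite the stale reads, mirroring the convergence guarantee stated in the abstract for the nonexpansive case.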
References in zbMATH (referenced in 30 articles, 1 standard article)
Showing results 1 to 20 of 30, sorted by year.
- Cenedese, Carlo; Belgioioso, Giuseppe; Grammatico, Sergio; Cao, Ming: An asynchronous distributed and scalable generalized Nash equilibrium seeking algorithm for strongly monotone games (2021)
- Grishchenko, Dmitry; Iutzeler, Franck; Malick, Jérôme; Amini, Massih-Reza: Distributed learning with sparse communications by identification (2021)
- Kozak, David; Becker, Stephen; Doostan, Alireza; Tenorio, Luis: A stochastic subspace approach to gradient-free optimization in high dimensions (2021)
- Bareilles, Gilles; Laguel, Yassine; Grishchenko, Dmitry; Iutzeler, Franck; Malick, Jérôme: Randomized progressive hedging methods for multi-stage stochastic programming (2020)
- Cannelli, Loris; Facchinei, Francisco; Kungurtsev, Vyacheslav; Scutari, Gesualdo: Asynchronous parallel algorithms for nonconvex optimization (2020)
- Glusa, Christian; Boman, Erik G.; Chow, Edmond; Rajamanickam, Sivasankaran; Szyld, Daniel B.: Scalable asynchronous domain decomposition solvers (2020)
- Iutzeler, Franck; Malick, Jérôme; de Oliveira, Welington: Asynchronous level bundle methods (2020)
- Mazurenko, Stanislav; Jauhiainen, Jyrki; Valkonen, Tuomo: Primal-dual block-proximal splitting for a class of non-convex problems (2020)
- Mishchenko, Konstantin; Iutzeler, Franck; Malick, Jérôme: A distributed flexible delay-tolerant proximal gradient algorithm (2020)
- Sun, Tao; Sun, Yuejiao; Xu, Yangyang; Yin, Wotao: Markov chain block coordinate descent (2020)
- Todescato, Marco; Bof, Nicoletta; Cavraro, Guido; Carli, Ruggero; Schenato, Luca: Partition-based multi-agent optimization in the presence of lossy and asynchronous communication (2020)
- Wang, Zhaojian; Liu, Feng; Su, Yifan; Yang, Peng; Qin, Boyu: Asynchronous distributed voltage control in active distribution networks (2020)
- Zhang, Jiaqi; You, Keyou: AsySPA: an exact asynchronous algorithm for convex optimization over digraphs (2020)
- Gao, Bin; Liu, Xin; Yuan, Ya-Xiang: Parallelizable algorithms for optimization problems with orthogonality constraints (2019)
- Heaton, Howard; Censor, Yair: Asynchronous sequential inertial iterations for common fixed points problems with an application to linear systems (2019)
- Pang, C. H. Jeffrey: Distributed deterministic asynchronous algorithms in time-varying graphs through Dykstra splitting (2019)
- Peng, Zhimin; Xu, Yangyang; Yan, Ming; Yin, Wotao: On the convergence of asynchronous parallel iteration with unbounded delays (2019)
- Stathopoulos, Giorgos; Jones, Colin N.: An inertial parallel and asynchronous forward-backward iteration for distributed convex optimization (2019)
- Valkonen, Tuomo: Block-proximal methods with spatially adapted acceleration (2019)
- Wang, Yu; Yin, Wotao; Zeng, Jinshan: Global convergence of ADMM in nonconvex nonsmooth optimization (2019)