ExtraPush for convex smooth decentralized optimization over directed networks.

In this note, we extend the algorithms Extra and subgradient-push into a new algorithm, ExtraPush, for consensus optimization with convex differentiable objective functions over a directed network. When the stationary distribution of the network can be computed in advance, we propose a simplified variant called Normalized ExtraPush. Like Extra, both ExtraPush and Normalized ExtraPush iterate with a fixed step size. Unlike Extra, however, they can use a column-stochastic mixing matrix, which need not be doubly stochastic; they therefore remove Extra's restriction to undirected networks. Subgradient-push, while it also works over directed networks, is slower on the same type of problem because it must use a sequence of diminishing step sizes. We present a preliminary analysis of ExtraPush under a boundedness assumption on the iterate sequence. For Normalized ExtraPush, we show that it naturally produces a bounded, linearly convergent sequence provided that the objective function is strongly convex. In our numerical experiments, ExtraPush and Normalized ExtraPush performed similarly well. Both are significantly faster than subgradient-push, even when the step sizes of the latter are hand-optimized.
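To make the idea concrete, the following is a rough sketch of an ExtraPush-style iteration on a toy problem: an EXTRA-type two-step recursion driven by a column-stochastic matrix, with push-sum weights used to normalize the iterates. The graph, objective, step size, and initialization below are illustrative assumptions, not the authors' experimental setup, and the update is a sketch of the combined Extra/push-sum structure described above rather than a verbatim transcription of the paper's algorithm.

```python
import numpy as np

# Toy decentralized problem (assumed for illustration): 3 agents jointly
# minimize sum_i (x - b_i)^2 / 2; the global minimizer is mean(b) = 3.
b = np.array([1.0, 2.0, 6.0])
grad = lambda x: x - b                 # stacked local gradients

# Column-stochastic (NOT doubly stochastic) mixing matrix for a directed graph.
A = np.array([[0.4, 0.0, 0.3],
              [0.6, 0.7, 0.0],
              [0.0, 0.3, 0.7]])
assert np.allclose(A.sum(axis=0), 1.0)  # columns sum to 1

n = len(b)
A_tilde = (np.eye(n) + A) / 2.0        # EXTRA-style averaged mixing matrix
alpha = 0.05                            # fixed step size (tuning assumed)

# Push-sum weights w normalize the raw iterates z into estimates x = z / w.
w = np.ones(n)
z = np.zeros(n)
x = z / w
z_new = A @ z - alpha * grad(x)        # first step: plain mixed gradient step
w = A @ w
z_prev, z = z, z_new
x_prev, x = x, z / w

# ExtraPush-style recursion (sketch): EXTRA correction + push-sum weights.
for _ in range(10000):
    z_next = z + A @ z - A_tilde @ z_prev - alpha * (grad(x) - grad(x_prev))
    w = A @ w
    z_prev, z = z, z_next
    x_prev, x = x, z / w

print(np.round(x, 3))                  # all agents should agree near mean(b)
```

At a fixed point, `A z = z` forces `z` into the span of the stationary distribution of `A`, and dividing by the push-sum weights `w` recovers a consensus value; this is the role the stationary distribution plays in Normalized ExtraPush, where it is computed once in advance instead of being tracked by `w`.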
References in zbMATH (referenced in 5 articles, 1 standard article)
- Li, Yao; Yan, Ming: On the linear convergence of two decentralized algorithms (2021)
- Mai, Van Sy; Abed, Eyad H.: Distributed optimization over directed graphs with row stochasticity and constraint regularity (2019)
- Nedić, Angelia; Olshevsky, Alex; Shi, Wei: Achieving geometric convergence for distributed optimization over time-varying graphs (2017)
- Zeng, Jinshan; He, Tao; Wang, Mingwen: A fast proximal gradient algorithm for decentralized composite optimization over directed networks (2017)
- Zeng, Jinshan; Yin, Wotao: ExtraPush for convex smooth decentralized optimization over directed networks (2017)