In this paper, we study gradient projection algorithms based on random partial updates of decision variables. These algorithms generalize random coordinate descent methods. We analyze these algorithms with and without assuming strong convexity of the objective functions. We also present an accelerated version of the algorithm based on Nesterov's two-step gradient method. In each case, we prove convergence and provide a bound on the rate of convergence. We see that the randomized algorithms exhibit convergence rates similar to those of their full-gradient deterministic counterparts.
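As a point of reference for the algorithm class the abstract describes, the following is a minimal sketch of plain random coordinate descent on a strongly convex quadratic, the special case that the paper's random partial-update gradient projection methods generalize. This is an illustrative sketch only, not the paper's algorithm; the function names and the quadratic objective are assumptions for the example.

```python
import numpy as np

def random_coordinate_descent(A, b, num_iters=5000, seed=0):
    """Minimize f(x) = 0.5 * x^T A x - b^T x (A symmetric positive definite)
    by updating a single randomly chosen coordinate per iteration."""
    rng = np.random.default_rng(seed)
    n = b.size
    x = np.zeros(n)
    L = np.diag(A)  # per-coordinate Lipschitz constants for this quadratic
    for _ in range(num_iters):
        i = rng.integers(n)           # pick one coordinate uniformly at random
        grad_i = A[i] @ x - b[i]      # i-th partial derivative of f at x
        x[i] -= grad_i / L[i]         # coordinate-wise gradient step
    return x

# Usage: the minimizer of f solves A x = b, so the residual should be small.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)           # symmetric positive definite
b = rng.standard_normal(5)
x_hat = random_coordinate_descent(A, b)
residual = np.linalg.norm(A @ x_hat - b)
```

Each iteration touches only one coordinate of the decision vector, which is the defining feature of partial-update methods; the paper's setting replaces single coordinates with random subsets of variables and adds a projection step.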
Title of host publication: 53rd IEEE Conference on Decision and Control, CDC 2014
Publisher: Institute of Electrical and Electronics Engineers Inc.
Published: 2014
Series: Proceedings of the IEEE Conference on Decision and Control
Event: 2014 53rd IEEE Annual Conference on Decision and Control, CDC 2014 - Los Angeles, United States
Duration: Dec 15 2014 → Dec 17 2014
ASJC Scopus subject areas
- Control and Systems Engineering
- Modeling and Simulation
- Control and Optimization