Publications
Published/Accepted Journal Articles (* and ** denote graduate and undergraduate student advisees, respectively)
- S. Samadi* and F. Yousefian, Improved Guarantees for Optimal Nash Equilibrium Seeking and Bilevel Variational Inequalities, SIAM Journal on Optimization, accepted [arXiv]
- Z. Alizadeh, A. Jalilzadeh, and F. Yousefian, Randomized Lagrangian Stochastic Approximation for Large-Scale Constrained Stochastic Nash Games, Optimization Letters, vol. 18, pp. 377–401, 2024, doi: 10.1007/s11590-023-02079-5 [arXiv]
- A. Jalilzadeh, F. Yousefian, and M. Ebrahimi*, Stochastic Approximation for Estimating the Price of Stability in Stochastic Nash Games, ACM Transactions on Modeling and Computer Simulation, vol. 34, pp. 1–24, 2024, doi: 10.1145/3632525 [arXiv]
- D. Burbano and F. Yousefian, A Fish Rheotaxis Mechanism as a Zero-Order Optimization Strategy, IEEE Access, vol. 11, pp. 102781–102795, 2023, doi: 10.1109/ACCESS.2023.3315240
- H. D. Kaushik*, S. Samadi*, and F. Yousefian, An Incremental Gradient Method for Optimization Problems with Variational Inequality Constraints, IEEE Transactions on Automatic Control, 68 (12), 7879–7886, 2023. doi: 10.1109/TAC.2023.3251851 [arXiv]
- S. Cui, U. V. Shanbhag, and F. Yousefian, Complexity Guarantees for an Implicit Smoothing-Enabled Method for Stochastic MPECs, Mathematical Programming, 198, 1153–1225, 2023. doi: 10.1007/s10107-022-01893-6 [Springer Link] [arXiv]
- A. Jalilzadeh, A. Nedich, U. V. Shanbhag, and F. Yousefian, A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization, Mathematics of Operations Research, 47 (1), pp. 690–719, 2022. doi: 10.1287/moor.2021.1147 [arXiv]
- H. D. Kaushik* and F. Yousefian, A Method with Convergence Rates for Optimization Problems with Variational Inequality Constraints, SIAM Journal on Optimization, 31 (3), 2171–2198, 2021. doi: 10.1137/20M1357378 [arXiv]
- F. Yousefian, A. Nedich, and U. V. Shanbhag, On Stochastic and Deterministic Quasi-Newton Methods for Nonstrongly Convex Optimization: Asymptotic Convergence and Rate Analysis, SIAM Journal on Optimization, 30 (2), 1144–1172, 2020. doi: 10.1137/17M1152474 [arXiv]
- N. Majlesinasab*, F. Yousefian, and A. Pourhabib, Self-tuned Stochastic Mirror Descent Methods for Smooth and Nonsmooth High-Dimensional Stochastic Optimization, IEEE Transactions on Automatic Control, 64 (10), 4377–4384, 2019. doi: 10.1109/TAC.2019.2897889 [arXiv]
- F. Yousefian, A. Nedich, and U. V. Shanbhag, On Stochastic Mirror-Prox Algorithms for Stochastic Cartesian Variational Inequalities: Randomized Block Coordinate and Optimal Averaging Schemes, Set-Valued and Variational Analysis, 26 (4), 789–819, 2018. doi: 10.1007/s11228-018-0472-9 [Springer Link]
- F. Yousefian, A. Nedich, and U. V. Shanbhag, On Smoothing, Regularization, and Averaging in Stochastic Approximation Methods for Stochastic Variational Inequality Problems, Mathematical Programming, 165 (1), 391–431, 2017. doi: 10.1007/s10107-017-1175-y [arXiv]
- F. Yousefian, A. Nedich, and U. V. Shanbhag, Self-Tuned Stochastic Approximation Schemes for Non-Lipschitzian Stochastic Multi-User Optimization and Nash Games, IEEE Transactions on Automatic Control, 61 (7), 1753–1766, 2016. doi: 10.1109/TAC.2015.2478124 [arXiv]
- F. Yousefian, A. Nedich, and U. V. Shanbhag, On Stochastic Gradient and Subgradient Methods with Adaptive Steplength Sequences, Automatica, 48 (1), 56–67, 2012. doi: 10.1016/j.automatica.2011.09.043 [arXiv]
Book Chapters
- D. Newton, F. Yousefian, and R. Pasupathy, Stochastic Gradient Descent: Recent Trends, INFORMS TutORials in Operations Research, pp. 193–220, 2018. doi: 10.1287/educ.2018.0191 [Slides]
Under Review
- Y. Qiu*, F. Yousefian, and B. Zhang**, Iteratively Regularized Gradient Tracking Methods for Optimal Equilibrium Seeking, under review (Dec. 2024) [arXiv]
- L. Marrinan, U. V. Shanbhag, and F. Yousefian, Zeroth-order Gradient and Quasi-Newton Methods for Nonsmooth Nonconvex Stochastic Optimization, under revision (July 2024) [arXiv]
- Y. Qiu*, U. V. Shanbhag, and F. Yousefian, Zeroth-Order Federated Methods for Stochastic MPECs and Nondifferentiable Nonconvex Hierarchical Optimization, under revision (March 2024) [arXiv]
Peer-Reviewed Conference Proceedings
- M. Ebrahimi*, U. V. Shanbhag, and F. Yousefian, Distributed Gradient Tracking Methods with Guarantees for Computing a Solution to Stochastic MPECs, Proceedings of the 2024 American Control Conference (ACC), accepted [arXiv]
- S. Samadi*, D. Burbano, and F. Yousefian, Achieving Optimal Complexity Guarantees for a Class of Bilevel Convex Optimization Problems, Proceedings of the 2024 American Control Conference (ACC), accepted [arXiv]
- Y. Qiu*, U. V. Shanbhag, and F. Yousefian, Zeroth-Order Methods for Nondifferentiable, Nonconvex, and Hierarchical Federated Optimization, The 37th Annual Conference on Neural Information Processing Systems (NeurIPS 2023) [arXiv] [poster]
- U. V. Shanbhag and F. Yousefian, Zeroth-Order Randomized Block Methods for Constrained Minimization of Expectation-Valued Lipschitz Continuous Functions, The Seventh Indian Control Conference (ICC) [arXiv]
- F. Yousefian, Bilevel Distributed Optimization in Directed Networks, Proceedings of the 2021 American Control Conference (ACC) [arXiv]
- H. D. Kaushik* and F. Yousefian, An Incremental Gradient Method for Large-scale Distributed Nonlinearly Constrained Optimization, Proceedings of the 2021 American Control Conference (ACC) [arXiv]
- N. Majlesinasab*, F. Yousefian, and M. J. Feizollahi, A First-Order Method for Monotone Stochastic Variational Inequalities on Semidefinite Matrix Spaces, Proceedings of the 2019 American Control Conference (ACC) [arXiv]
- H. D. Kaushik* and F. Yousefian, A Randomized Block Coordinate Iterative Regularized Subgradient Method for High-Dimensional Ill-Posed Convex Optimization, Proceedings of the 2019 American Control Conference (ACC) [arXiv]
- M. Amini* and F. Yousefian, An Iterative Regularized Incremental Projected Subgradient Method for a Class of Bilevel Optimization Problems, Proceedings of the 2019 American Control Conference (ACC) [arXiv]
- D. Newton, R. Pasupathy, and F. Yousefian, Recent Trends in Stochastic Gradient Descent for Machine Learning and Big Data, Proceedings of the 2018 Winter Simulation Conference (WSC) [Link]
- A. Jalilzadeh, A. Nedich, U. V. Shanbhag, and F. Yousefian, A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization, 2018 IEEE Conference on Decision and Control (CDC) [Link]
- F. Yousefian, A. Nedich, and U. V. Shanbhag, A Smoothing Stochastic Quasi-Newton Method for Non-Lipschitzian Stochastic Optimization Problems, Proceedings of the 2017 Winter Simulation Conference (WSC) [Link]
- F. Yousefian, A. Nedich, and U. V. Shanbhag, Stochastic Quasi-Newton Methods for Non-Strongly Convex Problems: Convergence and Rate Analysis, 2016 IEEE Conference on Decision and Control (CDC) [Link]
- F. Yousefian, A. Nedich, and U. V. Shanbhag, Optimal Robust Smoothing Extragradient Algorithms for Stochastic Variational Inequality Problems, 2014 IEEE Conference on Decision and Control (CDC) [Link]
- F. Yousefian, A. Nedich, and U. V. Shanbhag, A Regularized Smoothing Stochastic Approximation (RSSA) Algorithm for Stochastic Variational Inequality Problems, Proceedings of the 2013 Winter Simulation Conference (WSC) [Link]
  - Awarded the Best Theoretical Paper at the 2013 Winter Simulation Conference
- F. Yousefian, A. Nedich, and U. V. Shanbhag, A Distributed Adaptive Steplength Stochastic Approximation Method for Monotone Stochastic Nash Games, Proceedings of the 2013 American Control Conference (ACC) [Link]
  - Awarded the Best Presentation in Session
- F. Yousefian, A. Nedich, and U. V. Shanbhag, A Regularized Adaptive Steplength Stochastic Approximation Scheme for Monotone Stochastic Variational Inequalities, Proceedings of the 2011 Winter Simulation Conference (WSC) [Link]
- F. Yousefian, A. Nedich, and U. V. Shanbhag, Convex Nondifferentiable Stochastic Optimization: A Local Randomized Smoothing Technique, Proceedings of the 2010 American Control Conference (ACC) [Link]
  - Awarded the Best Paper in Session
Preprint Articles and Technical Notes
- F. Yousefian, J. Yevale*, and H. D. Kaushik*, Distributed Randomized Block Stochastic Gradient Tracking Method [arXiv]
- N. Majlesinasab*, F. Yousefian, and M. J. Feizollahi, First-Order Methods with Convergence Rates for Multi-Agent Systems on Semidefinite Matrix Spaces [arXiv]
- M. Amini* and F. Yousefian, An Iterative Regularized Mirror Descent Method for Ill-Posed Nondifferentiable Stochastic Optimization [arXiv]
Farzad Yousefian’s PhD Dissertation
- Stochastic Approximation Schemes for Stochastic Optimization and Variational Problems: Adaptive Steplengths, Smoothing, and Regularization [Link]