Farzad Yousefian
Assistant Professor, Rutgers University
Verified email at rutgers.edu - Homepage
Title · Cited by · Year
On stochastic gradient and subgradient methods with adaptive steplength sequences
F Yousefian, A Nedić, UV Shanbhag
Automatica 48 (1), 56-67, 2012
Cited by 155 · 2012
On smoothing, regularization, and averaging in stochastic approximation methods for stochastic variational inequality problems
F Yousefian, A Nedić, UV Shanbhag
Mathematical Programming 165, 391-431, 2017
Cited by 81 · 2017
Self-tuned stochastic approximation schemes for non-Lipschitzian stochastic multi-user optimization and Nash games
F Yousefian, A Nedić, UV Shanbhag
IEEE Transactions on Automatic Control 61 (7), 1753-1766, 2015
Cited by 55 · 2015
Optimal robust smoothing extragradient algorithms for stochastic variational inequality problems
F Yousefian, A Nedić, UV Shanbhag
53rd IEEE Conference on Decision and Control (CDC), 5831-5836, 2014
Cited by 44 · 2014
On stochastic mirror-prox algorithms for stochastic cartesian variational inequalities: Randomized block coordinate and optimal averaging schemes
F Yousefian, A Nedić, UV Shanbhag
Set-Valued and Variational Analysis 26, 789-819, 2018
Cited by 43 · 2018
A method with convergence rates for optimization problems with variational inequality constraints
HD Kaushik, F Yousefian
SIAM Journal on Optimization 31 (3), 2171-2198, 2021
Cited by 33 · 2021
A Regularized Smoothing Stochastic Approximation (RSSA) Algorithm for Stochastic Variational Inequality Problems
F Yousefian, A Nedić, UV Shanbhag
Proceedings of the 2013 Winter Simulation Conference (WSC), 933-944, 2013
Cited by 33 · 2013
Stochastic gradient descent: Recent trends
D Newton, F Yousefian, R Pasupathy
Recent advances in optimization and modeling of contemporary problems, 193-220, 2018
Cited by 29 · 2018
A variable sample-size stochastic quasi-Newton method for smooth and nonsmooth stochastic convex optimization
A Jalilzadeh, A Nedić, UV Shanbhag, F Yousefian
Mathematics of Operations Research 47 (1), 690-719, 2022
Cited by 21 · 2022
An iterative regularized incremental projected subgradient method for a class of bilevel optimization problems
M Amini, F Yousefian
2019 American Control Conference (ACC), 4069-4074, 2019
Cited by 19 · 2019
Recent trends in stochastic gradient descent for machine learning and Big Data
D Newton, R Pasupathy, F Yousefian
2018 Winter Simulation Conference (WSC), 366-380, 2018
Cited by 17 · 2018
Distributed adaptive steplength stochastic approximation schemes for Cartesian stochastic variational inequality problems
F Yousefian, A Nedić, UV Shanbhag
arXiv preprint arXiv:1301.1711, 2013
Cited by 17 · 2013
Bilevel Distributed Optimization in Directed Networks
F Yousefian
2021 American Control Conference (ACC), 2230-2235, 2021
Cited by 16 · 2021
Convex nondifferentiable stochastic optimization: A local randomized smoothing technique
F Yousefian, A Nedić, UV Shanbhag
Proceedings of the 2010 American Control Conference, 4875-4880, 2010
Cited by 16 · 2010
A distributed adaptive steplength stochastic approximation method for monotone stochastic Nash games
F Yousefian, A Nedić, UV Shanbhag
2013 American Control Conference, 4765-4770, 2013
Cited by 14 · 2013
Stochastic quasi-Newton methods for non-strongly convex problems: convergence and rate analysis
F Yousefian, A Nedić, UV Shanbhag
2016 IEEE 55th Conference on Decision and Control (CDC), 4496-4503, 2016
Cited by 13 · 2016
An iterative regularized mirror descent method for ill-posed nondifferentiable stochastic optimization
M Amini, F Yousefian
arXiv preprint arXiv:1901.09506, 2019
Cited by 12 · 2019
On stochastic and deterministic quasi-Newton methods for nonstrongly convex optimization: Asymptotic convergence and rate analysis
F Yousefian, A Nedić, UV Shanbhag
SIAM Journal on Optimization 30 (2), 1144-1172, 2020
Cited by 9 · 2020
Complexity guarantees for an implicit smoothing-enabled method for stochastic MPECs
S Cui, UV Shanbhag, F Yousefian
Mathematical Programming 198 (2), 1153-1225, 2023
Cited by 8 · 2023
Self-tuned mirror descent schemes for smooth and nonsmooth high-dimensional stochastic optimization
N Majlesinasab, F Yousefian, A Pourhabib
IEEE Transactions on Automatic Control 64 (10), 4377-4384, 2019
Cited by 8 · 2019
Articles 1–20