Maxim Raginsky

Administrative Titles

  • William L. Everitt Fellow in Electrical and Computer Engineering

Professor
(217) 244-1782
162 Coordinated Science Lab

Education

  • Ph.D. in Electrical Engineering, Northwestern University, 2002

Biography

Maxim Raginsky received the B.S. and M.S. degrees in 2000 and the Ph.D. degree in 2002 from Northwestern University, all in Electrical Engineering. He has held research positions with Northwestern, the University of Illinois at Urbana-Champaign (where he was a Beckman Foundation Fellow from 2004 to 2007), and Duke University. In 2012, he returned to UIUC, where he is currently a Professor with the Department of Electrical and Computer Engineering and the Coordinated Science Laboratory. He also holds a courtesy appointment with the Department of Computer Science.

Research Statement

Prof. Raginsky's interests cover probability and stochastic processes, deterministic and stochastic control, machine learning, optimization, and information theory. Much of his recent research is motivated by fundamental questions in modeling, learning, and simulation of nonlinear dynamical systems, with applications to advanced electronics, autonomy, and artificial intelligence.

Research Interests

  • Deterministic and stochastic dynamical systems in machine learning, optimization, and control
  • Probability and stochastic processes
  • Statistical machine learning
  • Information theory

Research Areas

  • Control
  • Dynamic games and decision theory
  • Information theory
  • Machine learning
  • Machine learning and pattern recognition
  • Nonlinear systems and control
  • Random processes
  • Stochastic systems and control

Chapters in Books

  • Maxim Raginsky, Alexander Rakhlin, and Aolin Xu, "Information-theoretic stability and generalization," in "Information-Theoretic Methods in Data Science" (Cambridge University Press, 2021)
  • Aryeh Kontorovich and Maxim Raginsky, "Concentration of measure without independence: a unified approach via the martingale method," in IMA Volume "Convexity and Concentration" (Springer, 2017)

Monographs

  • Maxim Raginsky and Igal Sason, "Concentration of measure inequalities in information theory, communications and coding," Foundations and Trends in Communications and Information Theory, vol. 10, issues 1 and 2, pp. 1-246, 2013; 2nd edition, 2014

Selected Articles in Journals

  • Maxim Raginsky, "Biological autonomy," Biological Theory, vol .18, pp. 303-308, 2023
  • Belinda Tzen, Anant Raj, Maxim Raginsky, and Francis Bach, "Variational principles for mirror descent and mirror Langevin dynamics," IEEE Control Systems Letters, vol. 7, pp. 1542-1547, 2023
  • Naci Saldi, Tamer Başar, and Maxim Raginsky, “Partially observed discrete-time risk-sensitive mean field games,” Dynamic Games and Applications, vol. 13, pp. 929–960, 2023
  • Jie Xiong, Alan Yang, Maxim Raginsky, and Elyse Rosenbaum, “Neural ordinary differential equation models for circuits: Capabilities and pitfalls,” IEEE Transactions on Microwave Theory and Techniques, vol. 70, no. 11, pp. 4869–4884, 2022
  • Aolin Xu and Maxim Raginsky, "Minimum excess risk in Bayesian learning," IEEE Transactions on Information Theory, vol. 68, no. 12, pp. 7935–7955, 2022
  • Ali Devran Kara, Maxim Raginsky, and Serdar Yüksel, "Robustness to incorrect models and adaptive learning in average-cost optimal stochastic control," Automatica, vol. 139, 110179 (pp. 1-13), 2022
  • Jie Xiong, Zaichen Chen, Maxim Raginsky, and Elyse Rosenbaum, "Statistical learning of IC models for system-level ESD simulation," IEEE Transactions on Electromagnetic Compatibility, vol. 63, no. 5, pp. 1302-1311, 2021
  • Naci Saldi, Tamer Başar, and Maxim Raginsky, "Approximate Markov-Nash equilibria for discrete-time risk-sensitive mean-field games," Mathematics of Operations Research, vol. 45, no. 4, pp. 1596-1620, 2020
  • Naveen Goela and Maxim Raginsky, "Channel polarization through the lens of Blackwell measures," IEEE Transactions on Information Theory, vol. 66, no. 10, pp. 6220-6241, 2020
  • Jaeho Lee and Maxim Raginsky, "Learning finite-dimensional coding schemes with nonlinear reconstruction maps," SIAM Journal on Mathematics of Data Science, vol. 1, no. 3, pp. 617-642, 2019
  • Naci Saldi, Tamer Başar, and Maxim Raginsky, "Approximate Nash equilibria in partially observed stochastic games with mean-field interactions," Mathematics of Operations Research, vol. 44, no. 3, pp. 1006-1033, 2019
  • Naci Saldi, Tamer Başar, and Maxim Raginsky, "Markov-Nash equilibria in mean-field games with discounted cost," SIAM Journal on Control and Optimization, vol. 56, no. 6, pp. 4256-4287, 2018
  • Ehsan Shafieepoorfard and Maxim Raginsky, "Sequential empirical coordination under an output entropy constraint," IEEE Transactions on Information Theory, vol. 64, no. 10, pp. 6830-6841, 2018
  • Soomin Lee, Angelia Nedich, and Maxim Raginsky, "Coordinate dual averaging for decentralized online optimization with nonseparable global objectives," IEEE Transactions on Control of Network Systems, vol. 5, no. 1, pp. 34-44, 2018
  • Soomin Lee, Angelia Nedich, and Maxim Raginsky, "Stochastic dual averaging for decentralized online optimization on time-varying communication graphs," IEEE Transactions on Automatic Control, vol. 62, no. 12, pp. 6407-6414, 2017
  • Aolin Xu and Maxim Raginsky, "Information-theoretic lower bounds for distributed function computation," IEEE Transactions on Information Theory, vol. 63, no. 4, pp. 2314-2337, 2017
  • Aolin Xu and Maxim Raginsky, "Information-theoretic lower bounds on Bayes risk in decentralized estimation," IEEE Transactions on Information Theory, vol. 63, no. 3, pp. 1580-1600, 2017
  • Maxim Raginsky, "Strong data processing inequalities and Phi-Sobolev inequalities for discrete channels," IEEE Transactions on Information Theory, vol. 62, no. 6, pp. 3355-3389, 2016
  • Ehsan Shafieepoorfard, Maxim Raginsky, and Sean P. Meyn, “Rationally inattentive control of Markov processes,” SIAM Journal on Control and Optimization, vol. 54, no. 2, pp. 987-1016, 2016
  • Maxim Raginsky and Angelia Nedich, “Online discrete optimization in social networks in the presence of Knightian uncertainty,” Operations Research, vol. 64, no. 3, pp. 662-679, 2016 (special issue on Information and Decisions in Social and Economic Networks)
  • Mehmet A. Donmez, Maxim Raginsky, and Andrew C. Singer, "Online optimization under adversarial perturbations,” IEEE Journal on Selected Topics in Signal Processing, vol. 10, no. 2, pp. 256–269, 2016
  • Richard S. Laugesen, Prashant G. Mehta, Sean P. Meyn, and Maxim Raginsky, “Poisson’s equation in nonlinear filtering,” SIAM Journal on Control and Optimization, vol. 53, no. 1, pp. 501–525, 2015
  • Peng Guan, Maxim Raginsky, and Rebecca Willett, "Online Markov decision processes with Kullback-Leibler control cost," IEEE Transactions on Automatic Control, vol. 59, no. 6, pp. 1423–1438, 2014
  • Maxim Raginsky, "Empirical processes, typical sequences, and coordinated actions in standard Borel spaces," IEEE Transactions on Information Theory, vol. 59, no. 3, pp. 1288-1301, 2013
  • Maxim Raginsky and Alexander Rakhlin, "Information-based complexity, feedback and dynamics in convex programming," IEEE Transactions on Information Theory, vol. 57, no. 10, pp. 7036-7056, 2011
  • Svetlana Lazebnik and Maxim Raginsky, "Supervised learning of quantizer codebooks by information loss minimization," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 7, pp. 1294-1309, 2009

Articles in Conference Proceedings

  • Tanya Veeravalli and Maxim Raginsky, "A constructive approach to function realization by neural stochastic differential equations," IEEE Conference on Decision and Control, 2023
  • Yifeng Chu and Maxim Raginsky, "A unified framework for information-theoretic generalization bounds," Advances in Neural Information Processing Systems, 2023
  • Yifeng Chu and Maxim Raginsky, "Majorizing measures, codes, and information," IEEE International Symposium on Information Theory, 2023
  • Tanya Veeravalli and Maxim Raginsky, "Nonlinear controllability and function representation by neural stochastic differential equations," 5th Conference on Learning for Dynamics and Control, 2023
  • Joshua Hanson and Maxim Raginsky, "Fitting an immersed submanifold to data via Sussmann's orbit theorem," IEEE Conference on Decision and Control, 2022
  • Alan Yang, Jie Xiong, Maxim Raginsky, and Elyse Rosenbaum, "Input-to-state stable neural ordinary differential equations with applications to transient modeling of circuits," 4th Conference on Learning for Dynamics and Control, 2022
  • Hrayr Harutyunyan, Maxim Raginsky, Greg Ver Steeg, and Aram Galstyan, "Information-theoretic generalization bounds for black-box learning algorithms," Advances in Neural Information Processing Systems, 2021
  • Todd Coleman and Maxim Raginsky, "Sampling, variational Bayesian inference and conditioned stochastic differential equations," IEEE Conference on Decision and Control, 2021
  • Jie Xiong, Alan S. Yang, Maxim Raginsky, and Elyse Rosenbaum, "Neural networks for transient modeling of circuits," 3rd ACM/IEEE Workshop on Machine Learning for CAD (MLCAD), 2021
  • Joshua Hanson, Maxim Raginsky, and Eduardo Sontag, "Learning recurrent neural net models of nonlinear systems," 3rd Conference on Learning for Dynamics and Control, 2021
  • Alan Yang, Amiremad Ghassami, Maxim Raginsky, Negar Kiyavash, and Elyse Rosenbaum, "Model-augmented conditional mutual information estimation for feature selection," Conference on Uncertainty in Artificial Intelligence (UAI), 2020
  • Joshua Hanson and Maxim Raginsky, "Universal simulation of stable dynamical systems by recurrent neural nets," 2nd Conference on Learning for Dynamics and Control (L4DC), 2020
  • Joshua Hanson and Maxim Raginsky, "Universal approximation of input-output maps by temporal convolutional nets," Advances in Neural Information Processing Systems, 2019
  • Naci Saldi, Tamer Başar, and Maxim Raginsky, “Partially-observed discrete-time risk-sensitive mean-field games,” IEEE Conference on Decision and Control, 2019
  • Noyan Cem Sevüktekin, Maxim Raginsky, and Andrew C. Singer, “Linear noisy networks with stochastic components,” IEEE Conference on Decision and Control, 2019
  • Ali Devran Kara, Maxim Raginsky, and Serdar Yüksel, “Robustness to incorrect models in average-cost optimal stochastic control,” IEEE Conference on Decision and Control, 2019
  • Belinda Tzen and Maxim Raginsky, "Theoretical guarantees for sampling and inference in generative models with latent diffusions," Conference on Learning Theory, 2019
  • Jaeho Lee and Maxim Raginsky, “Minimax statistical learning with Wasserstein distances,” Advances in Neural Information Processing Systems, 2018
  • Belinda Tzen, Tengyuan Liang, and Maxim Raginsky, “Local optimality and generalization guarantees for the Langevin algorithm via empirical metastability,” Conference on Learning Theory, 2018
  • Jie Xiong, Zaichen Chen, Yang Xiu, Zhen Mu, Maxim Raginsky, and Elyse Rosenbaum, "Enhanced IC modeling methodology for system-level ESD simulation," Proceedings of the 2018 Electrical Overstress/Electrostatic Discharge Symposium (EOS/ESD)
  • Yang Xiu, Samuel Sagan, Advika Battini, Xiao Ma, Maxim Raginsky, and Elyse Rosenbaum, “Stochastic modeling of air electrostatic discharge parameters,” Proceedings of International Reliability Physics Symposium, 2018
  • Yanina Shkel, Maxim Raginsky, and Sergio Verdú, "Universal compression, list decoding, and logarithmic loss," IEEE International Symposium on Information Theory, 2018
  • Xiao Ma, Maxim Raginsky, and Andreas Cangellaris, "Machine learning methodology for inferring network S-parameters in the presence of variability," IEEE Workshop on Signal and Power Integrity, 2018
  • Yanina Shkel, Maxim Raginsky, and Sergio Verdú, "Sequential prediction with coded side information under logarithmic loss," Conference on Algorithmic Learning Theory (ALT), 2018
  • Zaichen Chen, Maxim Raginsky, and Elyse Rosenbaum, "Verilog-A compatible recurrent neural network model for transient circuit simulation," Conference on Electrical Performance of Electronic Packaging and Systems (EPEPS), 2017
  • Aolin Xu and Maxim Raginsky, "Information-theoretic analysis of generalization capability of learning algorithms," Advances in Neural Information Processing Systems, 2017
  • Maxim Raginsky, Alexander Rakhlin, and Matus Telgarsky, "Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis," Conference on Learning Theory, 2017
  • Yanina Shkel, Maxim Raginsky, and Sergio Verdú, "Universal lossy compression under logarithmic loss," IEEE International Symposium on Information Theory, 2017
  • Naci Saldi, Tamer Basar, and Maxim Raginsky, "Markov-Nash equilibria in mean-field games with discounted cost," American Control Conference, 2017
  • Ehsan Shafieepoorfard and Maxim Raginsky, "Sequential empirical coordination under an output entropy constraint," IEEE Conference on Decision and Control, 2016
  • Peng Guan, Maxim Raginsky, Rebecca Willett, and Daphney-Stavroula Zois, "Regret minimization algorithms for single-controller zero-sum stochastic games," IEEE Conference on Decision and Control, 2016
  • Mehmet Donmez, Maxim Raginsky, Andrew Singer, and Lav R. Varshney, "Cost-performance tradeoffs in unreliable computation architectures," Asilomar Conference on Signals, Systems, and Computers, 2016
  • Daphney-Stavroula Zois and Maxim Raginsky, "Active object detection on graphs via locally informative trees," IEEE International Workshop on Machine Learning for Signal Processing, 2016
  • Maxim Raginsky, Alexander Rakhlin, Matthew Tsao, Yihong Wu, and Aolin Xu, "Information-theoretic analysis of stability and bias of learning algorithms," IEEE Information Theory Workshop, 2016
  • Maxim Raginsky, "Channel polarization and Blackwell measures," IEEE International Symposium on Information Theory, 2016
  • Jaeho Lee, Maxim Raginsky, and Pierre Moulin, “On MMSE estimation from quantized observations in the nonasymptotic regime,” IEEE International Symposium on Information Theory, 2015
  • Peng Guan, Maxim Raginsky, and Rebecca Willett, "From minimax value to low-regret algorithms for online Markov decision processes," Proceedings of the American Control Conference, 2014
  • Maxim Raginsky and Igal Sason, “Refined bounds on the empirical distribution of good channel codes via concentration inequalities,” Proceedings of the IEEE International Symposium on Information Theory, 2013
  • Maxim Raginsky and Jake Bouvrie, "Continuous-time stochastic mirror descent on a network: variance reduction, consensus, convergence," Proceedings of IEEE Conference on Decision and Control, 2012
  • Maxim Raginsky and Alexander Rakhlin, "Lower bounds for passive and active learning," Advances in Neural Information Processing 24, pp. 1026-1034, 2011
  • Maxim Raginsky, Nooshin Kiarashi, and Rebecca Willett, "Decentralized online convex programming with local information," Proceedings of the American Control Conference, 2011
  • Maxim Raginsky, "Divergence-based characterization of fundamental limitations of adaptive dynamical systems," Proceedings of the Forty-Eighth Annual Allerton Conference on Communication, Control, and Computing, 2010
  • Maxim Raginsky, Alexander Rakhlin and Serdar Yüksel, "Online convex programming and regularization in adaptive control," Proceedings of the IEEE Conference on Decision and Control, 2010
  • Todd Coleman and Maxim Raginsky, "Mutual information saddle points in channels of exponential family type," Proceedings of the IEEE International Symposium on Information Theory, 2010
  • Maxim Raginsky and Svetlana Lazebnik, "Locality-sensitive binary codes from shift-invariant kernels," Advances in Neural Information Processing Systems 22, pp. 1509-1517, 2009
  • Maxim Raginsky, "Achievability results for statistical learning under communication constraints," Proceedings of the IEEE International Symposium on Informaiton Theory, 2009

Journal Editorships

  • Action editor, Journal of Machine Learning Research
  • Associate editor, Mathematics of Control, Signals, and Systems
  • Associate editor, SIAM Journal on Mathematics of Data Science
  • Associate editor for probability and statistics, IEEE Transactions on Information Theory (2018-2021)

Teaching Honors

  • UIUC List of Teachers Ranked as Excellent (Fall 2013, Fall 2014, Fall 2016, Spring 2017, Fall 2017, Spring 2018, Spring 2019, Fall 2020, Spring 2021)

Research Honors

  • NSF CAREER Award (2013)

Other Honors

  • Campus Distinguished Promotion, University of Illinois at Urbana-Champaign (2022)
  • William L. Everitt Fellow in Electrical and Computer Engineering, University of Illinois at Urbana-Champaign (2017-present)

Recent Courses Taught

  • ECE 486 - Control Systems
  • ECE 515 (ME 540) - Control System Theory & Design
  • ECE 517 - Nonlinear & Adaptive Control
  • ECE 543 - Statistical Learning Theory
  • ECE 555 - Control of Stochastic Systems
  • ECE 580 - Optimization by Vector Space Methods
  • ECE 586 MR - Topics in Decision and Control