Noah Golmant

Writing

On the Computational Inefficiency of Large Batch Sizes for Stochastic Gradient Descent.
Noah Golmant, Nikita Vemuri, Zhewei Yao, Vladimir Feinberg, Amir Gholami, Kai Rothauge, Michael W. Mahoney, Joseph Gonzalez.
Preprint, 2018. [arXiv]

An Empirical Exploration of Gradient Correlations in Deep Learning.
Daniel Rothchild, Roy Fox, Noah Golmant, Joseph Gonzalez, Michael W. Mahoney, Kai Rothauge, Ion Stoica, Zhewei Yao.
Accepted to the NeurIPS Integration of Deep Learning Theories Workshop, 2018.

Shift: A Zero FLOP, Zero Parameter Alternative to Spatial Convolutions.
Bichen Wu, Alvin Wan, Xiangyu Yue, Peter Jin, Sicheng Zhao, Noah Golmant, Amir Gholaminejad, Joseph Gonzalez, Kurt Keutzer.
In Proceedings of CVPR, 2018. [arXiv]

On the Convergence of Model-Agnostic Meta-Learning.
Noah Golmant.
Preprint, 2018. [pdf]

Transferability of Adversarial Attacks in Model-Agnostic Meta-Learning.
Riley Edmunds, Noah Golmant, Vinay Ramasesh, Phillip Kuznetsov, Piyush Patil, Raul Puri.
Received the Research Forum Award at the NUS Deep Learning Security Workshop, 2017. [pdf]

Adversarial Machine Learning.
Phillip Kuznetsov, Riley Edmunds, Ted Xiao, Humza Iqbal, Raul Puri, Noah Golmant, Shannon Shih.
Chapter in the Artificial Intelligence Safety and Security textbook. CRC Press, 2018. [pdf]

Batch Methods for Incremental Learning.
Noah Golmant, Evan Sparks, Joseph Gonzalez.
Submitted to DEEM workshop, SIGMOD 2017. [pdf]

Other

My Blog

Using TensorFlow to Generate Images with PixelRNNs.
Phillip Kuznetsov, Noah Golmant.
O’Reilly blog post. [blog]

Derivations of the Univariate and Multivariate Normal Densities.
Alex Francis, Noah Golmant.
Material for Berkeley’s ML course, CS189. [pdf]

Decision Trees and some Basic Information Theory.
Noah Golmant.
Material for Berkeley’s ML course, CS189. [pdf]

A Tutorial on the Fréchet Derivative.
Noah Golmant.
Material for Berkeley’s ML course, CS189. [pdf]