
Greg Ver Steeg, Ph.D.

Research Associate Professor

Education

Ph.D. Physics, Caltech

Research Summary

The main focus of my research is unsupervised learning based on latent factor discovery, with applications to understanding high-dimensional but under-sampled data from human biology and behavior. This page is updated sporadically; for more current information, see my Google Scholar profile or occasional updates on my blog or Twitter.

Generalization of neural networks is related to an adversary's ability to distinguish training data from test data. We have new results on this; see a preview in our work applying it to federated learning on neuroimaging data (MIDL 2021), or other work on generalization (ICML 2020).
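To make that connection concrete, here is a minimal, hypothetical sketch (a toy example, not the construction in those papers): the advantage of the best single-threshold adversary that tries to tell training examples from held-out examples by their loss moves together with the generalization gap.

```python
# Toy membership-inference check: a model that generalizes well leaves
# an adversary little signal for telling train and test examples apart.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

def per_example_loss(model, X, y):
    """Per-example cross-entropy loss."""
    p = model.predict_proba(X)
    return -np.log(p[np.arange(len(y)), y] + 1e-12)

l_tr = per_example_loss(model, X_tr, y_tr)
l_te = per_example_loss(model, X_te, y_te)

# Advantage of the best adversary that guesses "training example"
# whenever the loss falls below a threshold t.
adv = max(abs(np.mean(l_tr < t) - np.mean(l_te < t))
          for t in np.concatenate([l_tr, l_te]))
print(f"generalization gap (mean loss): {l_te.mean() - l_tr.mean():.4f}")
print(f"threshold-adversary advantage:  {adv:.4f}")
```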

Latent factor modeling. CorEx is one type of latent factor model we've worked on, with theory and application papers appearing in NeurIPS, ICML, and AISTATS. More recently we've focused on fundamental issues in latent factor inference, e.g., using thermodynamic variational inference (ICML 2020) and information geometry (NeurIPS 2020 workshop paper).
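For reference, the objective behind CorEx is the standard notion of total correlation, the multivariate generalization of mutual information:

```latex
% Total correlation measures the total dependence among X = (X_1, ..., X_n):
TC(X) = \sum_{i=1}^{n} H(X_i) - H(X)

% CorEx learns latent factors Y that explain as much of that
% dependence as possible, i.e., it maximizes
TC(X; Y) = TC(X) - TC(X \mid Y)
```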

An exciting new direction uses non-equilibrium dynamics to speed up sampling, which will make modeling probability distributions with neural networks much more powerful (coming soon!).

Viewing representation learning as compression led to new ways to control compression (with "echo noise", NeurIPS 2019). Controlled compression enables a variety of applications, like information-theoretic invariant representation learning (NeurIPS 2018), which led, for instance, to a new approach for harmonizing MRI scans across sites. Better information estimators can improve these trade-offs (AAAI 2021).
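The appeal of echo noise, roughly (a sketch after the NeurIPS 2019 paper, not its full construction): because the injected noise is an "echo" of the representation's own distribution, the compression rate is exact rather than a variational bound:

```latex
% Echo noise channel: noise drawn from the output distribution itself
z = f(x) + S(x)\,\epsilon, \qquad \epsilon \sim p(z)

% yields an exact, analytic compression rate
I(X; Z) = -\,\mathbb{E}_{x}\big[\log \lvert \det S(x) \rvert\big]
```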

We also study foundational issues in information theory and ML. (1) Information measures are hard to estimate (UAI, AISTATS), especially when dependencies are strong, where we showed that an exponential number of samples may be needed (NeurIPS). (2) Appropriate measures of higher-order dependencies are hard to define (Entropy 2021), but may help with tasks like neural network attribution (AISTATS 2021).
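As a toy illustration of point (1) (an example constructed here, not taken from those papers): a naive plug-in estimator of mutual information saturates as dependence grows, even with thousands of samples, while the true value keeps increasing.

```python
# Naive binned mutual information estimate vs. the true value for
# correlated Gaussians; the estimate stalls as dependence strengthens.
import numpy as np

def binned_mi(x, y, bins=16):
    """Plug-in MI estimate (nats) from a 2D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
n = 5000
for rho in [0.5, 0.9, 0.99, 0.999]:
    x = rng.standard_normal(n)
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    true_mi = -0.5 * np.log(1 - rho**2)  # exact MI for bivariate Gaussians
    print(f"rho={rho}: true MI = {true_mi:.2f} nats, "
          f"binned estimate = {binned_mi(x, y):.2f} nats")
```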

Applications: gene expression (see an interesting podcast and article about this work), neuroscience (1, 2, 3, 4), text analysis (code; 1, 2), psychometrics, finance, and work on clinical time series in Nature Scientific Data (2019).