Kristjan Greenewald

Research Scientist

Kristjan Greenewald is a research scientist at the MIT-IBM Watson AI Lab, serving as principal investigator on multiple funded MIT-IBM research projects. Greenewald received his PhD from the University of Michigan in 2017, with a focus on signal processing and machine learning, and was a postdoctoral research fellow in the Harvard University Statistics Department before joining IBM Research in 2018. His research interests include optimal transport, causal inference, and statistical learning theory, with recent applications including LLM evaluation and alignment, uncertainty quantification, differential privacy, and agentic frameworks.

Top Work

Causal inference is expensive. Here’s an algorithm for fixing that.

Causal Inference

Publications with the MIT-IBM Watson AI Lab

Learning Proximal Operators to Discover Multiple Optima

Minimum-Entropy Coupling Approximation Guarantees Beyond the Majorization Barrier

k-Sliced Mutual Information: A Quantitative Study of Scalability with Dimension

Entropic Causal Inference: Graph Identifiability

Log-Euclidean Signatures for Intrinsic Distances Between Unaligned Datasets

Sliced Mutual Information: A Scalable Measure of Statistical Dependence

Measuring Generalization with Optimal Transport

High-Dimensional Feature Selection for Sample Efficient Treatment Effect Estimation

The Computational Limits of Deep Learning

Active Structure Learning of Causal DAGs via Directed Clique Trees

Entropic Causal Inference: Identifiability and Finite Sample Results

Asymptotic Guarantees for Generative Modeling based on the Smooth Wasserstein Distance

Gaussian-Smoothed Optimal Transport: Metric Structure and Statistical Efficiency

Statistical Model Aggregation via Parameter Matching

Sample Efficient Active Learning of Causal Trees

SPAHM: Parameter matching for model fusion

Causal inference is expensive. Here’s an algorithm for fixing that.

Estimating Information Flow in Deep Neural Networks

Bayesian Nonparametric Federated Learning of Neural Networks

Action Centered Contextual Bandits

Ensemble Estimation of Information Divergence