Dr. Théo Galy-Fajou

Julia software developer and Bayesian researcher at PlantingSpace

Bio

Théo Galy-Fajou holds a Ph.D. in machine learning, specializing in Bayesian inference methods. He is particularly interested in approximate Bayesian inference methods such as sampling, with applications to Gaussian processes and online learning. Throughout his Ph.D., Théo developed numerous open-source projects in Julia and delved deep into software development. He currently works at PlantingSpace as a Julia software developer and Bayesian researcher.

You can find me on GitHub, Mastodon, ResearchGate and Twitter.

News

Publications

Flexible and Efficient Inference with Particles for the Variational Gaussian Approximation
Entropy Journal '21  T. Galy-Fajou, V. Perrone, M. Opper
We address the cubic complexity of the Gaussian variational approximation by defining a deterministic flow. This allows the use of low-rank approximations (without a diagonal term) while keeping exact guarantees. Additionally, we show that using particles leads to more stable convergence and higher accuracy.
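As background, the Gaussian variational approximation fits q(f) = N(m, C) by minimizing a free energy, and the cubic cost comes from handling the full covariance C. A hedged sketch of the objective, with the particles' empirical mean and covariance standing in for m and C (notation mine; the deterministic flow itself is defined in the paper):

```latex
% Free energy of the Gaussian variational approximation q = N(m, C);
% in the particle representation, m and C are the empirical mean and
% covariance of a set of particles driven by a deterministic flow.
\mathcal{F}(m, C) = \mathbb{E}_{\mathcal{N}(f \mid m, C)}\!\bigl[-\log p(y, f)\bigr]
  - \tfrac{1}{2}\log\det C + \mathrm{const}
```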
Adaptive Inducing Points Selection For Gaussian Processes
Continual Learning Workshop, ICML '20  T. Galy-Fajou, M. Opper
We propose a new way to select inducing-point locations in both online and offline settings. The algorithm is fully data-driven yet takes the sparse Gaussian process framework fully into account.
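To illustrate the data-driven flavour of such a rule, here is a minimal Julia sketch of an online acceptance criterion in the spirit of the paper: a point is added to the inducing set only if no existing inducing point is sufficiently similar to it under the kernel. The function names, the squared-exponential kernel and the threshold ρ are illustrative choices, not the paper's exact algorithm.

```julia
# Hedged sketch of an online, data-driven inducing point selection rule:
# a new input x is added to the inducing set Z only if it is not already
# well covered by an existing inducing point, as measured by the kernel.
# Names, kernel and threshold ρ are illustrative, not the paper's algorithm.

# Any positive-definite kernel works; here a squared-exponential kernel.
sqexp_kernel(x, z; ℓ=1.0) = exp(-sum(abs2, x .- z) / (2ℓ^2))

function update_inducing_points!(Z::Vector{Vector{Float64}}, x::Vector{Float64};
                                 k=sqexp_kernel, ρ=0.8)
    # If the maximal kernel similarity to the current inducing points is
    # below ρ, the point brings new information and is added to Z.
    if isempty(Z) || maximum(k(x, z) for z in Z) < ρ
        push!(Z, copy(x))
    end
    return Z
end

# Streaming usage: process the data one point at a time (online setting).
Z = Vector{Vector{Float64}}()
for x in eachcol(randn(2, 200))
    update_inducing_points!(Z, collect(x))
end
@show length(Z)
```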
Automated Augmented Conjugate Inference for Non-conjugate Gaussian Process Models
AISTATS '20  T. Galy-Fajou, F. Wenzel, M. Opper
We generalize the theory of latent-variable augmentation for Gaussian process models. Building on Schoenberg's theorem, we identify a new class of functions that can be augmented and provide methods to perform inference out of the box.
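Roughly, the mechanism relies on the classical characterization of completely monotone functions as Laplace transforms of non-negative measures (Schoenberg's theorem links this to positive definiteness): a likelihood term of that form becomes a scale mixture of Gaussian-shaped factors in f, each conjugate to the GP prior. A hedged sketch in my own notation; the precise class of likelihoods and the augmentation scheme are given in the paper:

```latex
% Bernstein/Widder: a completely monotone φ is the Laplace transform of a
% non-negative measure μ, so φ(f^2) is a mixture of squared-exponential
% factors in f, conditionally conjugate to a Gaussian (process) prior.
\varphi(r) = \int_0^\infty e^{-\omega r}\, \mathrm{d}\mu(\omega)
\quad\Longrightarrow\quad
\varphi(f^2) = \int_0^\infty e^{-\omega f^2}\, \mathrm{d}\mu(\omega)
```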
Multi-Class Gaussian Process Classification Made Conjugate: Efficient Inference via Data Augmentation
UAI '19  T. Galy-Fajou, F. Wenzel, C. Donner, M. Opper
Using new augmentation methods, we derive an augmented likelihood for the logistic-softmax link. It allows us to perform inference on categorical data in only a few iterations and to obtain uncertainty estimates for each class.
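For reference, the logistic-softmax link replaces the exponential of the usual softmax with the logistic function σ, which is what makes the augmentation tractable. This is a sketch of the likelihood only; the augmentation chain is detailed in the paper:

```latex
% Logistic-softmax likelihood over K classes with latent GP values f^1..f^K.
p(y = k \mid f^1, \dots, f^K)
  = \frac{\sigma(f^k)}{\sum_{j=1}^{K} \sigma(f^j)},
\qquad
\sigma(z) = \frac{1}{1 + e^{-z}}
```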
Efficient Gaussian Process Classification Using Polya-Gamma Data Augmentation
AAAI '19  F. Wenzel*, T. Galy-Fajou*, C. Donner, M. Kloft, M. Opper
By applying the Pólya-Gamma augmentation to Gaussian processes and sparse Gaussian processes, we scale inference for the binary classification problem to billions of data points.
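The key ingredient is the Pólya-Gamma integral representation of the logistic sigmoid (Polson, Scott & Windle, 2013): conditioned on the auxiliary variable ω, the classification likelihood becomes Gaussian in f, so the updates are conjugate. Notation below is mine:

```latex
% Pólya-Gamma identity: the logistic sigmoid as a Gaussian scale mixture.
\sigma(z) = \frac{e^{z}}{1 + e^{z}}
  = \frac{1}{2}\, e^{z/2} \int_0^\infty e^{-\omega z^{2}/2}\,
    p_{\mathrm{PG}}(\omega \mid 1, 0)\, \mathrm{d}\omega
```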
Bayesian Nonlinear Support Vector Machines for Big Data
ECML '17  F. Wenzel, T. Galy-Fajou, M. Deutsch, M. Kloft
The Bayesian SVM gives a probabilistic interpretation of the hinge loss. By applying a latent-variable augmentation to it, we are able to scale Gaussian processes with this likelihood to large datasets and to converge extremely quickly.
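The augmentation in question is, to my understanding, the scale-mixture identity of Polson & Scott (2011) for the hinge-loss pseudo-likelihood: given the latent λ, the term is Gaussian in f and therefore conjugate to the GP prior (notation mine):

```latex
% Hinge-loss pseudo-likelihood as a Gaussian scale mixture (Polson & Scott, 2011).
\exp\bigl(-2\max(1 - y f,\, 0)\bigr)
  = \int_0^\infty \frac{1}{\sqrt{2\pi\lambda}}
    \exp\!\left(-\frac{(1 + \lambda - y f)^{2}}{2\lambda}\right) \mathrm{d}\lambda
```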

Open Source Projects

Education

Work experience

You can find a more complete CV here.