My main research interest is in developing, extending and re-evaluating Bayesian models with a view to producing inferences that are useful and interpretable in practical applications. My focus is on flexible modelling tools such as mixture models and tree models. For the most up-to-date list of publications, visit my Google Scholar profile.

Causal inference for heterogeneous treatment effects

I am interested in the use of flexible tree models to infer heterogeneous treatment effects through non-parametric regression modelling. Hahn et al. recently developed Bayesian Causal Forests (BCF), whose natural parameterisation allows us to place priors (and therefore smoothing) directly on treatment effects. In recent work we extended BCF to apply targeted shrinkage across the different covariates.
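The key idea of the parameterisation can be sketched as follows: the outcome is written as y = μ(x) + τ(x)·z, separating the prognostic function μ from the treatment-effect function τ, so a prior on τ smooths the treatment effects directly. A minimal simulation in Python (the functions and parameters below are illustrative, not those from the paper or our extension):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data under the BCF-style parameterisation
#   y = mu(x) + tau(x) * z + noise,
# where mu is the prognostic effect and tau the treatment effect.
n = 1000
x = rng.uniform(-2, 2, size=n)
z = rng.binomial(1, 0.5, size=n)      # randomised treatment indicator
mu = np.sin(x)                        # prognostic function (illustrative)
tau = 0.5 + 0.25 * (x > 0)            # heterogeneous treatment effect
y = mu + tau * z + rng.normal(0, 0.1, size=n)

# A naive difference in means recovers only an average effect;
# the parameterisation above lets a prior act on tau(x) itself.
ate_hat = y[z == 1].mean() - y[z == 0].mean()
print(round(ate_hat, 2))
```

Because treatment is randomised here, the difference in means lands near the average of τ(x); the point of the parameterisation is that shrinkage can be applied to τ without distorting μ.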

Different aspects of this work were jointly developed with my PhD students Alberto Caron and Ilina Yozova as well as collaborators Gianluca Baio and Richard Hahn.

Interpretability and coherence of topic models

Topic models (and mixture models in general) are powerful tools in non-parametric estimation. However, summaries such as the posterior mean and posterior uncertainty are generally not interpretable for mixture models because of label switching: the likelihood is invariant to permutations of the component labels. I am interested in producing meaningful summaries of posterior distributions over mixture models, and in using these summaries to construct further model components.
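A tiny numerical illustration of why component-wise posterior summaries fail under label switching (the draws below are toy values, not real model output):

```python
import numpy as np

# Toy posterior draws for the means of a two-component mixture.
# Half the draws have the labels swapped (label switching), which is
# harmless for the likelihood but scrambles component-wise summaries.
draws = np.array([[-2.0, 2.0]] * 50 + [[2.0, -2.0]] * 50)

# The naive per-component posterior mean collapses towards zero,
# even though every single draw places the components near -2 and 2.
naive_mean = draws.mean(axis=0)

# A label-invariant summary (here: sorting within each draw before
# averaging) remains meaningful.
sorted_mean = np.sort(draws, axis=1).mean(axis=0)
print(naive_mean, sorted_mean)
```

Sorting is only the simplest label-invariant summary; the same point motivates richer relabelling-free summaries of the full posterior.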

This is joint work with my former PhD student Mariflor Vega-Carrasco and collaborators Mirco Musolesi, Rosie Prior and Jason O’Sullivan, as part of our collaboration with dunnhumby ltd.

Bayesian modelling in partially identified models

Although an abundance of data is currently available from a variety of sources, many of these datasets are collected through unknown (and biased) sampling processes. This naturally leads to an identifiability problem for any downstream modelling: if we don’t know exactly how the data were collected, and we don’t know who is missing from the dataset, what can we say about the population? No amount of data can fix this problem; instead, careful consideration of the underlying data-generating process and the unknown sampling mechanism is needed.
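As a concrete illustration, for a bounded outcome with an unknown missingness mechanism the population mean can only be bounded, not pinned down; a sketch of the classic worst-case (Manski-style) bounds:

```python
# Worst-case bounds on a population mean when the sampling mechanism
# is unknown. With a bounded outcome in [lo, hi], respondents' mean
# y_obs, and response rate p, the population mean must lie in
#   [p*y_obs + (1-p)*lo,  p*y_obs + (1-p)*hi].
# More observed data sharpens y_obs and p, but never closes this
# interval without assumptions about who is missing.
def manski_bounds(y_obs_mean, p_respond, lo, hi):
    lower = p_respond * y_obs_mean + (1 - p_respond) * lo
    upper = p_respond * y_obs_mean + (1 - p_respond) * hi
    return lower, upper

# Example: 70% response rate, observed mean 0.6 on a [0, 1] outcome.
lower, upper = manski_bounds(0.6, 0.7, 0.0, 1.0)
print(round(lower, 2), round(upper, 2))
```

The Bayesian version of this problem puts priors on the unidentified parts of the sampling mechanism, so the posterior concentrates on an interval rather than a point.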

This is joint work with my former PhD student Helen (Zhenzheng) Hu and collaborators Ioannis Kosmidis, Richard Hahn, Jared Murray.

Bayesian modelling in health economics

Quantifying the impact of uncertainty on decisions is an important aspect of health-economic decision making. Value of Information (VoI) analysis is an increasingly popular method for quantifying decision uncertainty, but it is frequently computationally prohibitive. This work used ideas from non-parametric modelling and moment matching to make feasible the calculation of VoI summaries such as the Expected Value of Partial Perfect Information (EVPPI) and the Expected Value of Sample Information (EVSI).
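One common route to feasible EVPPI computation is regression on Monte Carlo samples: regress the net benefit of each decision option on the parameter of interest, then average the fitted maxima. A simplified sketch (the toy decision problem and polynomial regression below are illustrative, not the method or models from our papers):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy decision problem: option 0 has net benefit 0; option 1 has net
# benefit theta + noise, where theta is the parameter of interest and
# the noise stands in for all remaining uncertainty.
n = 5000
theta = rng.normal(0, 1, size=n)
noise = rng.normal(0, 1, size=n)
nb = np.column_stack([
    np.zeros(n),       # net benefit samples for option 0
    theta + noise,     # net benefit samples for option 1
])

# Estimate E[NB_d | theta] with a simple polynomial regression per
# option, then average the per-sample fitted maxima.
fitted = np.column_stack([
    np.polyval(np.polyfit(theta, nb[:, d], 2), theta)
    for d in range(nb.shape[1])
])

evppi = fitted.max(axis=1).mean() - nb.mean(axis=0).max()
print(round(evppi, 3))
```

This replaces a prohibitively expensive nested Monte Carlo loop with a single regression fit per option, which is the computational gain that makes routine EVPPI reporting practical.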

This is joint work with my former PhD student Anna Heath and collaborator Gianluca Baio.

Bayesian modelling in retail analytics

I have a long-standing collaboration with dunnhumby ltd. Our first few projects focused on developing models for the sales of slow-moving goods using Bayesian hierarchical self-exciting processes, and on characterising differences in the competitive behaviour of different product groups.
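The self-exciting idea can be illustrated with the exponential-kernel Hawkes intensity, in which each past sale temporarily raises the rate of future sales, a natural fit for sporadic purchases of slow-moving goods (the parameters below are illustrative, not fitted values from this work):

```python
import numpy as np

# Intensity of a self-exciting (Hawkes) process with exponential kernel:
#   lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)),
# where mu is the baseline sales rate, alpha the jump in intensity
# after each sale, and beta the decay rate of that excitement.
def hawkes_intensity(t, events, mu=0.1, alpha=0.5, beta=1.0):
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

# Two recent sales (at t = 1.0 and 1.5) push the rate at t = 2.0
# well above the baseline of 0.1; the sale at t = 4.0 has no effect.
events = np.array([1.0, 1.5, 4.0])
lam = hawkes_intensity(2.0, events)
print(round(lam, 3))
```

A hierarchical version shares the kernel parameters across related products, borrowing strength where individual sales histories are sparse.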

This was joint work with my former PhD student James Pitkin and collaborators Gordon Ross, Rosie Prior and Jason O’Sullivan.

Diffusion modelling

I have used diffusion models to describe the trajectories of various organisms. This work was motivated by immune-cell data collected by two-photon microscopy from inside the lymph node of a live mouse, and was later extended to capture patterns of animal movement using Bayesian non-parametric diffusion models.
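As a minimal example of a diffusion model for a trajectory, an Ornstein-Uhlenbeck process (mean-reverting motion towards a home location) can be simulated with the Euler-Maruyama scheme; the parameters below are illustrative, not values fitted to the cell or animal data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler-Maruyama simulation of the Ornstein-Uhlenbeck diffusion
#   dX_t = -theta * X_t dt + sigma dW_t,
# a simple one-dimensional model of a trajectory that drifts back
# towards a home location at 0 while being buffeted by noise.
def simulate_ou(x0=1.0, theta=1.0, sigma=0.5, dt=0.01, n_steps=1000):
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        drift = -theta * x[i] * dt
        diffusion = sigma * np.sqrt(dt) * rng.normal()
        x[i + 1] = x[i] + drift + diffusion
    return x

path = simulate_ou()
print(len(path))
```

The non-parametric versions replace the fixed linear drift with a flexibly modelled drift function, letting the data determine how the organism's movement depends on its position.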

This was joint work with Mike West, Melanie Matheu and Mike Cahalan, and more recently with Yvo Pokern and Tjun-Yee Hoh.