Identifiability in VAEs
I gave this talk for a journal club on new methods in VAE research, based mostly on my reading of the paper "Provable concept learning for interpretable predictions using variational autoencoders." I found the paper really interesting and learned a lot from it. I then read some of the related papers (Khemakhem et al. and Locatello et al.), which were very informative, and I've since quite enjoyed reading work from the same labs (Schölkopf's and Hyvärinen's). Hyvärinen's in particular, because of his pioneering work (which I had not been aware of) on independent component analysis (ICA), which I used quite a lot in my fMRI analysis projects. Coincidentally, I also found that Hyvärinen worked on score matching back in the early 2000s and has since done much more research on the foundations of identifiable, (more) interpretable deep learning models.
The presentation is probably better viewed in its standalone form here. If you are in Chrome, you can also view it fullscreen by pressing the "f" key while interacting with the presentation.