STA 4273 / CSC 2547 Spring 2018:

Learning Discrete Latent Structure

Overview

New inference methods allow us to train generative latent-variable models. These models can generate novel images and text, find meaningful latent representations of data, take advantage of large unlabeled datasets, and even let us do analogical reasoning automatically. However, most generative models such as GANs and variational autoencoders currently have a pre-specified model structure, and represent data using fixed-dimensional continuous vectors. This seminar course will develop extensions to these approaches that learn model structure, and that represent data using mixed discrete and continuous data structures such as lists of vectors, graphs, or even programs. The class will have a major project component, and will be run in a similar manner to Differentiable Inference and Generative Models.

Prerequisites:

This course is designed to bring students to the current state of the art, so that ideally, their course projects can make a novel contribution. A previous course in machine learning such as CSC321, CSC411, CSC412, STA414, or ECE521 is strongly recommended. However, the only hard requirements are linear algebra, basic multivariate calculus, basics of working with probability, and basic programming skills.

To check if you have the background for this course, try taking this Quiz. If more than half the questions are too difficult, you might want to put some extra work into preparation.

Where and When

What is discrete latent structure?

Loosely speaking, it refers to any discrete quantity that we wish to estimate or optimize. Concretely, in this course we’ll consider using gradient-based stochastic optimization to train models like:

Why discrete latent structure?

Why not discrete latent structure?

Course Structure

Aside from the first two and last two lectures, each week a different group of students will present on a set of related papers covering an aspect of these methods. I’ll provide guidance to each group about the content of these presentations.

In-class discussion will center around understanding the strengths and weaknesses of these methods, their relationships, possible extensions, and experiments that might better illuminate their properties.

The hope is that these discussions will lead to actual research papers, or resources that will help others understand these approaches.

Grades will be based on:

Submit assignments through MarkUs.

Project

Students can work on projects individually, in pairs, or even in triplets. The grade will depend on the ideas, how well you present them in the report, how clearly you position your work relative to existing literature, how illuminating your experiments are, and how well-supported your conclusions are. Full marks will require a novel contribution.

Each group of students will write a short (around 2 pages) research project proposal, which ideally will be structured similarly to a standard paper. It should include a description of a minimum viable project, some nice-to-haves if time allows, and a short review of related work. You don’t have to do what your project proposal says - the point of the proposal is mainly to have a plan and to make it easy for me to give you feedback.

Towards the end of the course, everyone will present their project in a short presentation (roughly 5 minutes).

At the end of the class you’ll hand in a project report (around 4 to 8 pages), ideally in the format of a machine learning conference paper such as NIPS. Rubric

Tentative Schedule


Week 1 - Jan 12th - Optimization, integration, and the reparameterization trick

This lecture will set the scope of the course, survey the different settings in which discrete structure must be estimated or chosen, and outline the main existing approaches. As a warm-up, we’ll look at the REINFORCE and reparameterization gradient estimators.
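For concreteness, here is a minimal numpy sketch (my own toy example, not part of the course materials) comparing the two estimators on the gradient of E_{z~N(mu,1)}[z^2] with respect to mu; the objective, parameter value, and sample size are all arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n = 1.5, 100_000
f = lambda z: z ** 2                        # toy objective; the true gradient of E[f(z)] is 2 * mu

eps = rng.standard_normal(n)
z = mu + eps                                # z ~ N(mu, 1), written in reparameterized form

# REINFORCE / score-function estimator: E[f(z) * d/dmu log N(z; mu, 1)]
reinforce = np.mean(f(z) * (z - mu))

# Reparameterization estimator: differentiate f(mu + eps) directly, d/dmu f = 2 * (mu + eps)
reparam = np.mean(2 * (mu + eps))

print(reinforce, reparam, 2 * mu)           # both close to 3.0; reparam has much lower variance
```

Both estimators are unbiased in this toy setting, but the reparameterization estimator typically has far lower variance, which is one of the main themes of the first two weeks.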

Lecture 1 slides


Week 2 - Jan 19th - Gradient estimators for non-differentiable computation graphs

Lecture 2 slides

Discrete variables make gradient estimation hard, but there has been a lot of recent progress on developing unbiased gradient estimators.
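As a concrete illustration of the basic score-function approach these papers build on, here is a numpy sketch (not from the readings) of an unbiased gradient estimator for a Bernoulli latent variable, with a simple baseline used as a control variate; the loss f and the parameter theta are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 0.5, 50_000
f = lambda b: (b - 0.1) ** 2                      # arbitrary loss of the discrete sample
exact = f(1.0) - f(0.0)                           # d/dtheta E_{b~Bern(theta)}[f(b)], computable by hand here

b = (rng.random(n) < theta).astype(float)         # hard 0/1 samples: no pathwise gradient exists
score = (b - theta) / (theta * (1 - theta))       # d/dtheta log Bernoulli(b; theta)

naive = np.mean(f(b) * score)                     # REINFORCE: unbiased but high variance
c = np.mean(f((rng.random(n) < theta).astype(float)))  # baseline estimated on an independent batch
controlled = np.mean((f(b) - c) * score)          # still unbiased; much lower variance in this example

print(exact, naive, controlled)
```

Much of this week’s material can be read as constructing better control variates or relaxations for exactly this kind of estimator.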

Recommended reading:

Material that will be covered:

Related work:


Week 3 - Jan 26th - Deep Reinforcement learning and Evolution Strategies

Slides:

Recommended reading:

Material that will be covered:


Week 4 - Feb 2nd - Differentiable Data Structures and Adaptive Computation

Attempts to learn programs using gradient-based methods, and program induction in general.
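One recurring building block in this area is replacing a hard, discrete memory lookup with a soft, attention-weighted read, so that gradients can flow through the addressing step. Below is a minimal numpy sketch of that idea (my own illustrative example; the memory size, key dimension, and sharpness parameter are arbitrary).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_read(memory, query, beta=5.0):
    """Differentiable read: a softmax-weighted average over memory slots.

    A hard lookup (argmax over similarities) is discrete and blocks gradients;
    the soft version is a smooth function of both the memory and the query.
    """
    scores = memory @ query               # similarity of each slot to the query
    weights = softmax(beta * scores)      # attention weights over slots, summing to 1
    return weights @ memory               # convex combination of slot contents

rng = np.random.default_rng(0)
memory = rng.standard_normal((8, 4))     # 8 slots, each a 4-dimensional vector
query = rng.standard_normal(4)
print(soft_read(memory, query))
```

Differentiable stacks, queues, and Neural Turing Machine-style memories are built from variations on this soft-addressing trick.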

Slides:

Recommended reading:

Other material:


Week 5 - Feb 9th - Discrete latent structure

Variational autoencoders and GANs typically use continuous latent variables, but there is recent work on getting them to use discrete random variables.
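One of the techniques covered this week is the Concrete / Gumbel-Softmax relaxation, which swaps the hard categorical sample for a differentiable, temperature-controlled approximation. Here is a rough numpy sketch of the sampling step (the logits and temperatures below are illustrative; a real discrete VAE would do this inside the encoder):

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau):
    """Draw a relaxed one-hot sample from a categorical distribution.

    Gumbel-max gives exact discrete samples via argmax(logits + Gumbel noise);
    replacing the argmax with a temperature-tau softmax gives a continuous,
    differentiable relaxation that approaches one-hot as tau -> 0.
    """
    g = -np.log(-np.log(rng.random(logits.shape)))   # Gumbel(0, 1) noise
    y = (logits + g) / tau
    e = np.exp(y - y.max())
    return e / e.sum()

logits = np.array([1.0, 0.1, -0.5])
print(gumbel_softmax(logits, tau=0.5))   # nearly one-hot
print(gumbel_softmax(logits, tau=5.0))   # much softer, closer to uniform
```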

Slides:

Recommended reading:

Other material:


Week 6 - Feb 16th - Adversarial training and text models

It’s not obvious how to train GANs to produce discrete structures, because sampling discrete outputs cuts off the gradient flowing from the discriminator back to the generator.
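A tiny PyTorch example of the problem (the vocabulary size is arbitrary): once the generator’s output is turned into a hard token by sampling, no gradient can flow back through that token.

```python
import torch

logits = torch.randn(5, requires_grad=True)   # generator's scores over a 5-word vocabulary
probs = torch.softmax(logits, dim=0)
token = torch.multinomial(probs, 1)           # hard sample of one discrete token index

# The sampled index is an integer tensor with no gradient history, so a
# discriminator that scores `token` cannot pass gradients back to `logits`.
print(token, token.requires_grad)             # e.g. tensor([2]) False
```

Common workarounds include feeding the discriminator soft distributions or relaxed samples, or falling back on score-function (REINFORCE-style) gradients.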

Slides:

Recommended reading:

Other material:


Week 7 - Feb 23rd - Bayesian nonparametrics

Models of infinitely large discrete objects.
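As a concrete example, the Chinese restaurant process defines a distribution over partitions with no fixed number of clusters: the number of occupied tables grows with the number of data points. A short illustrative numpy sampler (the concentration parameter alpha is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def crp(n, alpha=1.0):
    """Sample table assignments for n customers from a Chinese restaurant process."""
    counts = []                                   # number of customers at each occupied table
    assignments = []
    for _ in range(n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()                      # table k w.p. proportional to counts[k], new table w.p. proportional to alpha
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):
            counts.append(1)                      # open a new table (a new cluster)
        else:
            counts[table] += 1
        assignments.append(table)
    return assignments, counts

print(crp(20, alpha=2.0))
```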

Slides:

Recommended reading:

Related material:


Week 8 - March 2nd - Learning model structure

Slides:

Recommended reading:

Related material:


Week 9 - March 9th - Graphs, permutations and parse trees

Slides:

Recommended reading:

Related material:


Weeks 10 and 11 - March 16th and 23rd - Project presentations

The last two weeks were project presentations, 38 in total. A few students were brave enough to have their slides online:


Epilogue: Some projects that developed out of this course

Some of the course projects have turned into papers: