CSC 2547 Fall 2019: Learning to Search
In planning, search, active learning, programming, and approximate inference, we usually face a series of similar tasks. We can often generalize from the problems solved so far, or even combine parts of previous solutions to solve a new one. This course will survey foundational ideas, recent work, and applications in this area. Specifically, it will cover self-improving tree-search methods such as AlphaZero, meta-learning, hypernetworks, self-tuning gradient estimators, amortized inference, self-improving theorem provers, and planning in POMDPs. Evaluation will be based mainly on a project involving original research by the students. Students should already be familiar with the basics of machine learning: linear algebra, optimization, and probability.
The class will have a major project component, and will be run in a similar manner to the course Learning Discrete Latent Structure.
This course is designed to bring students to the current state of the art, so that ideally, their course projects can make a novel contribution. A previous course in machine learning such as CSC321, CSC411, CSC412, STA414, or ECE521 is strongly recommended. However, the only hard requirements are linear algebra, basic multivariate calculus, the basics of probability, and programming skills.
To check if you have the background for this course, try taking this Quiz. If more than half the questions are too difficult, you might want to put some extra work into preparation.
Where and When
- Fall 2019
- Instructor: David Duvenaud
- Email: email@example.com (put “CSC2547” in the subject)
- Location: Sidney Sussex room 2102
- Time: Fridays, 1-3pm
- Office hours: TBD, in 384 Pratt
- Piazza: https://piazza.com/class/jxdq4fwzvb656y
Why Learn To Search?
- Active Learning and Exploration - Many problems require choosing which data would be most useful to acquire, or which experiment would be most useful to run. This can be viewed as a search problem: finding a sequential plan that can be expected to provide useful information by the end. Because we can adjust our plan after every action, we end up running many similar searches. Thus there is scope to gradually optimize our planning strategy.
- Approximate Inference and Inverse Design - In many situations, we know what a good explanation or design would look like, but need to search through a large discrete set to find one. For example, given a molecule we can often predict the mass spectra of its fragments, or how well it would perform a task. However finding a molecule that matches a given requirement is a hard search problem, that might benefit from experience of finding matches for similar tasks.
- Program generation - One of the hardest search tasks that humans regularly perform is programming, which can be viewed as a search for programs that meet a specification. One strategy for building large programs is to practice by building smaller programs to solve related problems. Programming is also a domain where one can often usefully re-use parts of other solutions.
Aside from the first two and last two lectures, each week a different group of students will present on a set of related papers covering an aspect of these methods. I’ll provide guidance to each group about the content of these presentations.
In-class discussion will center around understanding the strengths and weaknesses of these methods, their relationships, possible extensions, and experiments that might better illuminate their properties.
The hope is that these discussions will lead to actual research papers, or resources that will help others understand these approaches.
Grades will (tentatively) be based on:
- [15%] One assignment, to be handed in through MarkUs.
- [15%] Class presentations
- [15%] Project proposal, due Feb 13th. Project Proposal Grading Rubric
- [15%] Project presentations, March 16th and 23rd. Project Presentation Grading Rubric
- [40%] Project report and code, due April 10th. Project Grading Rubric
Students can work on projects individually, or in groups of up to four. The grade will depend on the ideas, how well you present them in the report, how clearly you position your work relative to existing literature, how illuminating your experiments are, and how well-supported your conclusions are. Full marks will require a novel contribution.
Each group of students will write a short (around 2 pages) research project proposal, which ideally will be structured similarly to a standard paper. It should include a description of a minimum viable project, some nice-to-haves if time allows, and a short review of related work. You don’t have to do what your project proposal says - the point of the proposal is mainly to have a plan and to make it easy for me to give you feedback.
Towards the end of the course everyone will present their project in a short (roughly 5-minute) presentation.
At the end of the class you'll hand in a project report (around 4 to 8 pages), ideally in the format of a machine learning conference paper such as NeurIPS.
Week 1 - Background lecture - Optimization and search
This lecture will set the scope of the course, the different settings where amortized search can be useful, and the main existing approaches.
Week 2 - SAT Solving
- Learning a SAT Solver from Single-Bit Supervision
- Machine Learning for Combinatorial Optimization: a Methodological Tour d’Horizon
- Learning to Solve NP-Complete Problems: A Graph Neural Network for Decision TSP
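To make the search perspective concrete, here is a minimal DPLL-style SAT solver sketch (illustrative only, not code from the papers above). The branching step is exactly the decision a learned policy could improve:

```python
# Minimal DPLL-style backtracking SAT solver (illustrative sketch).
# Clauses are lists of non-zero ints; a negative literal means a negated variable.

def dpll(clauses, assignment=None):
    """Return a satisfying assignment dict {var: bool}, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    # Simplify clauses under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        remaining = [l for l in clause if abs(l) not in assignment]
        if not remaining:
            return None  # clause falsified under this assignment
        simplified.append(remaining)
    if not simplified:
        return assignment  # all clauses satisfied
    # Branch on the first unassigned variable.
    # A learned heuristic (as in NeuroSAT-style work) would pick this variable.
    var = abs(simplified[0][0])
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None

print(dpll([[1, 2], [-1, 2], [-2, 3]]))
```

Because the same solver is run on many similar formulas, the variable-ordering heuristic is a natural target for learning.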
Week 3 - Deep Reinforcement learning
- Model-Based Planning in Discrete Action Spaces - “it is in fact possible to effectively perform planning via backprop in discrete action spaces”
- Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic - learns a linear surrogate function off-policy.
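As background for the policy-gradient papers above, here is a toy REINFORCE (score-function) sketch on a 3-armed bandit; the setup and constants are illustrative, not taken from any of the papers:

```python
# Toy REINFORCE policy gradient on a 3-armed bandit (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
true_rewards = np.array([0.1, 0.5, 0.9])  # arm 2 pays the most on average
logits = np.zeros(3)                      # policy parameters

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for step in range(2000):
    probs = softmax(logits)
    arm = rng.choice(3, p=probs)
    reward = true_rewards[arm] + 0.1 * rng.standard_normal()
    # Score function: grad of log pi(arm) w.r.t. logits is one_hot(arm) - probs.
    grad_log_pi = -probs
    grad_log_pi[arm] += 1.0
    logits += 0.1 * reward * grad_log_pi  # REINFORCE update, no baseline

print(softmax(logits))
```

Methods like Q-Prop reduce the (often large) variance of exactly this estimator by subtracting a learned control variate.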
Week 4 - Differentiable Computation
Attempts to learn programs using gradient-based methods, and program induction in general.
- Neural Turing Machines
- The Case for Learned Index Structures
- Reinforcement Learning Neural Turing Machines - attempts to train the NTM with REINFORCE.
- Programming with a Differentiable Forth Interpreter
- Sampling for Bayesian Program Learning
- Neural Sketch Learning for Conditional Program Generation
- Adaptive Computation Time for Recurrent Neural Networks
- Divide and Conquer Networks
Week 5 - Active Learning
- Efficient Nonmyopic Active Search
- Learning Hard Alignments with Variational Inference - in machine translation, the alignment between input and output words can be treated as a discrete latent variable.
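A tiny illustration of active learning as search (the threshold task and all constants here are made up): querying the label that best bisects the remaining hypothesis space needs exponentially fewer labels than random queries.

```python
# Active learning of a 1-D threshold classifier by greedy uncertainty bisection.
import numpy as np

rng = np.random.default_rng(1)
pool = np.sort(rng.uniform(0.0, 1.0, 1000))  # unlabeled pool of points
true_threshold = 0.637                       # hidden; the oracle labels x > threshold
lo, hi = 0.0, 1.0                            # interval of thresholds consistent so far

queries = 0
while hi - lo > 1e-3:
    inside = pool[(pool > lo) & (pool < hi)]
    if inside.size == 0:
        break
    # Greedy policy: query the pool point that best bisects the remaining interval.
    x = inside[np.argmin(np.abs(inside - 0.5 * (lo + hi)))]
    if x > true_threshold:
        hi = x
    else:
        lo = x
    queries += 1

print(f"{queries} queries narrowed the threshold to [{lo:.4f}, {hi:.4f}]")
```

Nonmyopic methods like the paper above go further, planning several queries ahead instead of greedily picking one.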
Week 6 - Learning to program
- Neural Guided Constraint Logic Programming for Program Synthesis
- Learning Loop Invariants for Program Verification
- Program Synthesis for Character-Level Language Modeling
- Generating and designing DNA with deep generative models
- Emergence of Grounded Compositional Language in Multi-Agent Populations
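The simplest baseline behind these papers is synthesis by enumeration: search the space of small programs for one consistent with input-output examples. A minimal sketch (the two primitives and the examples are invented for illustration):

```python
# Program synthesis as breadth-first search over compositions of primitives.
from itertools import product

PRIMS = {"add1": lambda v: v + 1, "double": lambda v: v * 2}

def run(seq, x):
    """Execute a program, i.e. a sequence of primitive names, left to right."""
    for name in seq:
        x = PRIMS[name](x)
    return x

def synthesize(examples, max_len=6):
    """Return the shortest primitive sequence matching all examples, or None."""
    for length in range(max_len + 1):
        for seq in product(PRIMS, repeat=length):
            if all(run(seq, x) == y for x, y in examples):
                return seq
    return None

# Target behavior: x -> 2 * (x + 2)
print(synthesize([(1, 6), (3, 10), (0, 4)]))
```

The enumeration blows up exponentially in program length, which is why the papers above learn to guide or prune this search.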
Week 7 - Meta-reasoning
Week 8 - Asymptotically Optimal Algorithms
Week 9 - Search trees
- Thinking Fast and Slow with Deep Learning and Tree Search
- Learning Latent Permutations with Gumbel-Sinkhorn Networks
- Reparameterizing the Birkhoff Polytope for Variational Permutation Inference
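The core operation in Gumbel-Sinkhorn is easy to sketch: alternately normalizing the rows and columns of a positive matrix converges toward a doubly-stochastic matrix, i.e. a soft, differentiable relaxation of a permutation. This is an illustrative sketch, not the paper's code:

```python
# Sinkhorn operator: alternate row/column normalization in log space.
import numpy as np

def sinkhorn(log_alpha, n_iters=50):
    """Map a matrix of scores toward a doubly-stochastic matrix."""
    for _ in range(n_iters):
        log_alpha = log_alpha - np.logaddexp.reduce(log_alpha, axis=1, keepdims=True)
        log_alpha = log_alpha - np.logaddexp.reduce(log_alpha, axis=0, keepdims=True)
    return np.exp(log_alpha)

rng = np.random.default_rng(0)
# Dividing scores by a small temperature pushes the result toward a hard permutation.
P = sinkhorn(rng.standard_normal((4, 4)) / 0.1)
print(np.round(P, 2))
```

Because every step is differentiable, gradients flow through the (approximate) permutation, which is what makes latent-permutation inference tractable.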
Weeks 10 and 11 - Project presentations
The last two weeks will be a series of 5-minute project presentations.
Extra related reading
Discrete variables make gradient estimation hard, but there has been a lot of recent progress on developing unbiased gradient estimators.
- Gradient Estimation Using Stochastic Computation Graphs
- Backpropagation through the Void: Optimizing control variates for black-box gradient estimation code
- The original REINFORCE paper.
- The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables - a simple trick: turn all the step functions into sigmoids, and use backprop to get a biased gradient estimate.
- Categorical Reparameterization with Gumbel-Softmax - the exact same idea as the Concrete distribution, published simultaneously.
- REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models - fixes the concrete estimator to make it unbiased, and also gives a way to tune the temperature automatically.
- A Visual Guide to Evolution Strategies
- Evolution Strategies as a Scalable Alternative to Reinforcement Learning - estimates gradients by applying a REINFORCE-style estimator directly in parameter space, using random perturbations of the parameters.
- Optimization by Variational Bounding
- Natural Evolution Strategies
- On the Relationship Between the OpenAI Evolution Strategy and Stochastic Gradient Descent - shows that ES might work in high dimensions because most of the dimensions don’t usually matter.
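The Concrete / Gumbel-Softmax trick from the readings above fits in a few lines. The sketch below (illustrative, with made-up probabilities) adds Gumbel noise to the logits and takes a softmax at temperature tau; as tau shrinks, the sample approaches a one-hot draw, and its argmax is an exact Gumbel-max sample from the categorical distribution:

```python
# Gumbel-Softmax / Concrete relaxation of a categorical sample (sketch).
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_sample(logits, tau=0.5):
    """Draw one relaxed categorical sample; differentiable w.r.t. logits."""
    gumbels = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbels) / tau
    y = y - y.max()  # for numerical stability
    expy = np.exp(y)
    return expy / expy.sum()

logits = np.log(np.array([0.2, 0.3, 0.5]))
soft = gumbel_softmax_sample(logits, tau=0.1)  # close to one-hot at low tau
hard = soft.argmax()                           # exact Gumbel-max categorical sample
print(soft, hard)
```

The relaxed sample is what gives the biased-but-low-variance gradients mentioned above; REBAR then combines this relaxation with REINFORCE to remove the bias.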