CSC 2547 Fall 2019: Learning to Search

Overview

In planning, search, active learning, programming, and approximate inference, we usually face a series of similar tasks. We can often generalize from the problems solved so far, or even combine parts of previous solutions to solve a new one. This course will survey foundational ideas, recent work, and applications in this area. Specifically, it will cover self-improving tree-search methods such as AlphaZero, meta-learning, hypernetworks, self-tuning gradient estimators, amortized inference, self-improving theorem provers, and planning in POMDPs. Evaluation will be based mainly on a project involving original research by the students. Students should already be familiar with the basics of machine learning, such as linear algebra, optimization, and probability.

The class will have a major project component, and will be run in a similar manner to Learning Discrete Latent Structure.

Prerequisites:

This course is designed to bring students to the current state of the art, so that ideally, their course projects can make a novel contribution. A previous course in machine learning such as CSC321, CSC411, CSC412, STA414, or ECE521 is strongly recommended. However, the only hard requirements are linear algebra, basic multivariate calculus, the basics of probability, and programming skills.

To check if you have the background for this course, try taking this Quiz. If more than half the questions are too difficult, you might want to put some extra work into preparation.

Where and When

Course Structure

Aside from the first two and last two lectures, each week a different group of students will present on a set of related papers covering an aspect of these methods. I’ll provide guidance to each group about the content of these presentations.

In-class discussion will center around understanding the strengths and weaknesses of these methods, their relationships, possible extensions, and experiments that might better illuminate their properties.

The hope is that these discussions will lead to actual research papers, or resources that will help others understand these approaches.

Grades will be based on:

Project

Students can work on projects individually, or in groups of up to four. The grade will depend on the ideas, how well you present them in the report, how clearly you position your work relative to existing literature, how illuminating your experiments are, and how well-supported your conclusions are. Full marks will require a novel contribution.

Each group of students will write a short (around 2 pages) research project proposal, which ideally will be structured similarly to a standard paper. It should include a description of a minimum viable project, some nice-to-haves if time allows, and a short review of related work. You don't have to do what your project proposal says; the point of the proposal is mainly to have a plan and to make it easy for me to give you feedback.

Towards the end of the course, everyone will present their project in a short presentation (roughly 5 minutes).

At the end of the class you’ll hand in a project report (around 4 to 8 pages), ideally in the format of a machine learning conference paper such as NIPS.

Calendar

Tentative Schedule


Week 1 - September 13th - Background, motivation, course setup

This lecture will set the scope of the course, describe the different settings where amortized search can be useful, and outline the main existing approaches. Slides


Week 2 - September 20th - Tutorial on main existing approaches, history of the field

Recommended reading:

Related papers:


Week 3 - September 27th - Monte Carlo Tree Search and applications

Modern MCTS: Slides
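For those new to the topic, here is a minimal sketch of the standard UCT-style MCTS loop (selection, expansion, simulation, backpropagation). The toy game and the `Node`/`mcts` interface are illustrative assumptions, not taken from the slides.

```python
# Minimal UCT-style Monte Carlo Tree Search sketch (illustrative only).
import math
import random

class RaceToTen:
    """Toy two-player game: players alternately add 1 or 2; whoever reaches 10 wins."""
    def __init__(self, total=0, player=1):
        self.total, self.player = total, player
    def legal_moves(self):
        return [m for m in (1, 2) if self.total + m <= 10]
    def play(self, move):
        return RaceToTen(self.total + move, -self.player)
    def is_terminal(self):
        return self.total >= 10
    def winner(self):  # the player who made the last move
        return -self.player

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.value = [], 0, 0.0
        self.untried = state.legal_moves()

def uct_select(node, c=1.4):
    # Pick the child maximizing the UCB1 score (exploitation + exploration).
    return max(node.children,
               key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, n_iters=2000):
    root = Node(root_state)
    for _ in range(n_iters):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal.
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. Expansion: add one unexplored child.
        if node.untried:
            move = node.untried.pop()
            child = Node(node.state.play(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: random rollout to a terminal state.
        state = node.state
        while not state.is_terminal():
            state = state.play(random.choice(state.legal_moves()))
        winner = state.winner()
        # 4. Backpropagation: credit wins to the player who moved into each node.
        while node is not None:
            node.visits += 1
            # node.state.player is the player to move at this node, so the move
            # into this node was made by the other player.
            node.value += 1.0 if winner == -node.state.player else 0.0
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts(RaceToTen()))  # should print 1, the winning opening move in this toy game
```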

Chemistry Application: Slides

Robotics and planning applications: Slides

Recent advances: Slides 1 Slides 2

Related work:

Other resources:


Week 4 - October 4th - Learning to SAT Solve and Prove Theorems

Learning to SAT Solve:

Theorem-proving benchmarks and environments: Slides

Approaches to learning to efficiently prove theorems: Slides

Relaxation-based approaches:


Week 5 - October 11th - Nested continuous optimization

Plain nested optimization:

Learning best-response functions:

Implicit function theorem:
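As background (a standard result, stated here for reference rather than taken from any of this week's papers): when the inner problem is solved exactly, the implicit function theorem gives the derivative of the inner solution with respect to the outer variables without differentiating through the inner optimizer's iterations. In the notation below, g is the inner objective, f the outer objective, θ the inner variables, and λ the outer variables.

```latex
% Hypergradient via the implicit function theorem (standard result).
% Inner problem and stationarity condition:
%   \theta^*(\lambda) = \arg\min_\theta g(\theta, \lambda),
%   \nabla_\theta g(\theta^*(\lambda), \lambda) = 0.
% Differentiating the stationarity condition with respect to \lambda:
\[
\frac{d\theta^*}{d\lambda}
  = -\big[\nabla^2_{\theta\theta}\, g\big]^{-1} \nabla^2_{\theta\lambda}\, g,
\]
% so for an outer objective f(\theta^*(\lambda), \lambda), the chain rule gives
\[
\frac{df}{d\lambda}
  = \nabla_\lambda f
  - \nabla_\theta f \,\big[\nabla^2_{\theta\theta}\, g\big]^{-1} \nabla^2_{\theta\lambda}\, g,
\]
% with all derivatives evaluated at (\theta^*(\lambda), \lambda).
```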

Game theory: Slides

Other resources:


Week 6 - October 18th - Active Learning, POMDPs, and Bayesian Optimization

Active learning: Slides

POMDPs: Slides

Curiosity and intrinsic motivation: Slides

Other resources:


Week 7 - October 25th - Evolutionary Approaches and Direct Optimization

Beam Search: Slides
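For reference, a minimal beam-search decoding sketch; the toy bigram scorer and the `step_scores`/`beam_search` names are illustrative assumptions, not taken from the slides.

```python
# Minimal beam-search decoding sketch (illustrative only).
# step_scores(prefix) maps a partial sequence to {next_token: log-probability};
# here a toy bigram table stands in for a learned model.
TOY_BIGRAMS = {
    "<s>": {"the": -0.5, "a": -1.0},
    "the": {"cat": -0.7, "dog": -0.9, "</s>": -2.0},
    "a":   {"cat": -1.2, "dog": -0.6, "</s>": -2.0},
    "cat": {"</s>": -0.3},
    "dog": {"</s>": -0.4},
}

def step_scores(prefix):
    return TOY_BIGRAMS.get(prefix[-1], {"</s>": 0.0})

def beam_search(beam_width=2, max_len=5):
    # Each hypothesis is (log_prob, token_list); start from the start symbol.
    beam = [(0.0, ["<s>"])]
    finished = []
    for _ in range(max_len):
        candidates = []
        for logp, seq in beam:
            for tok, tok_logp in step_scores(seq).items():
                new = (logp + tok_logp, seq + [tok])
                (finished if tok == "</s>" else candidates).append(new)
        if not candidates:
            break
        # Keep only the best `beam_width` partial hypotheses.
        beam = sorted(candidates, key=lambda h: h[0], reverse=True)[:beam_width]
    # Prefer completed hypotheses; fall back to the best partial one.
    pool = finished if finished else beam
    return max(pool, key=lambda h: h[0])

print(beam_search())  # should print (-1.5, ['<s>', 'the', 'cat', '</s>'])
```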

Direct optimization: Slides

Related work:


Week 8 - November 1st - Learning to program

Search-based approaches:

Gradient-based approaches:

Learning a library of functions:

More related work:


Week 9 - November 8th - Meta-reasoning

Other related work:


Week 10 - November 15th - Asymptotically Optimal Algorithms

Optimizing search strategies:

Optimizing agents: Slides

AIXI: Slides

Other related papers:


Week 11 - November 22nd - Project presentations part I

The last two weeks will be a series of 5-minute project presentations.


Week 12 - November 29th - Project presentations part II