CASTLE Labs presentations
From time to time, I have the opportunity to give presentations that may be of general interest. Here are the PowerPoint presentations from these talks. Please note that the talks were designed assuming that I am giving the presentation; as a result, they may be little more than teasers. If a talk looks interesting, email me at firstname.lastname@example.org to see whether it would be appropriate for me to give the presentation in person.
Clearing the Jungle of Stochastic Optimization - Johns Hopkins and Princeton, Feb 17, 2014 (powerpoint format)
This is the latest version of my talk on computational stochastic optimization, focusing on modeling, defining state variables, and describing the four classes of policies. It was given at Johns Hopkins in January, and then slightly modified for a talk in CASTLE Lab.
Energy and Uncertainty: Clearing the Jungle of Stochastic Optimization - CIRRELT, November 25, 2013 (powerpoint format)
This presentation was to have been given at the 3rd Colloquium of the NSERC/Hydro-Quebec Industrial Research Chair on Stochastic Optimization of Electricity Generation (but my flight was cancelled!). It contains a brief summary of different types of uncertainty that arise in energy applications, and then describes a modeling and algorithmic framework that encompasses different communities within stochastic optimization. The last half of the talk provides the most recent update of SMART-ISO.
On Languages for Stochastic Optimization - University of Quebec at Montreal, November 18, 2013 (powerpoint format)
This presentation was given at the University of Quebec at Montreal as part of their commencement exercises, where I was awarded the Docteur honoris causa. Montreal, with its bilingual tradition, has been the place where I seem to keep coming back to the theme of modeling and languages (this is highlighted at the beginning of the talk). The rest of the talk brings out concepts, terminology and notation from the different communities that work in some form of stochastic optimization.
SMART-ISO - A Stochastic, Multiscale Model of the PJM Energy Markets - Fields Institute Workshop on Electricity Markets, August, 2013 (powerpoint format - 20 megs!)
This is the most recent of a series of talks on SMART-ISO, a large-scale and highly detailed simulator of the PJM energy system, including generation, distribution and consumption. SMART-ISO carefully models the flow of information and decisions, capturing day-ahead planning, day-of unit commitment and economic dispatch, and real-time economic dispatch. This talk also discusses some general modeling issues for sequential decision problems arising in energy (click here for more information on modeling).
Tutorial: Clearing the Jungle of Stochastic Optimization for Transportation and Logistics - Tristan VIII - June, 2013 (powerpoint format)
This was a tutorial given at Tristan VIII in June 2013. Using the context of transportation and logistics, it describes how to model sequential stochastic optimization problems ("dynamic programs") and then illustrates the four classes of policies. As with other presentations, it was designed to accompany my oral presentation. A version of the talk, designed to be read, is available here; click on lectures, and download at least the first two PowerPoint presentations.
Bridging stochastic programming and dynamic programming - March, 2013 (powerpoint format).
This talk was given at Georgia Tech, with the goal of introducing students to a particular style of modeling stochastic, dynamic optimization problems. It addresses topics such as defining state variables and the five components of a stochastic dynamic problem. We make the transition from finding the best decision in deterministic optimization problems to finding the best policy for stochastic optimization problems. We identify four fundamental classes of policies, and then describe two of these in more detail: lookahead policies, and policies based on value function approximations. The presentation closes with a step-by-step translation of "dynamic programming" as it is done by leaders in stochastic programming (such as Alex Shapiro) into the classical notation of Markov decision processes.
Approximate dynamic programming for locomotive planning - April 17, 2012 (powerpoint format)
PLASMA uses the modeling and algorithmic framework of approximate dynamic programming to perform strategic, tactical and real-time optimization of locomotives. The system has been developed jointly with Norfolk Southern Railroad (BNSF also made major contributions to the system for several years). PLASMA breaks down the planning problem into a sequence of short-horizon problems that involve the assignment of locomotives to individual trains. Value function approximations capture the impact that assignments made now have on the future, producing solutions that perform well over longer horizons. The system can perform strategic planning over horizons of a month or more, and tactical planning (which uses a live locomotive snapshot) over horizons of a week or more. The architecture is also well-suited to be run as a real-time, interactive assignment system. PLASMA models each locomotive and train at a high level of detail, including operational issues such as consist-breakup, shop routing and the management of foreign power. Arrival and departure times are captured down to the minute. PLASMA can also model randomness in transit times, yard delays, train schedules and equipment failures.
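The decomposition described above — solving a short-horizon assignment problem whose objective adds a value function approximation (VFA) to the immediate contribution — can be sketched in a few lines. This is a toy illustration under my own assumptions (tiny made-up data, brute-force enumeration instead of a commercial solver), not PLASMA itself:

```python
import itertools

# Hypothetical data: immediate contribution of assigning locomotive i to train j,
# and the destination each locomotive reaches after pulling train j.
contribution = [[10, 6, 4],
                [8, 9, 3],
                [5, 7, 11]]
destination = [["A", "B", "C"],
               ["A", "B", "C"],
               ["A", "B", "C"]]
# Learned VFA estimates of the downstream value of ending up at each location
# (illustrative numbers only).
vfa = {"A": 2.0, "B": 5.0, "C": 1.0}

def best_assignment(contribution, destination, vfa):
    """Enumerate assignments of locomotives to trains, scoring each by
    immediate contribution plus the VFA of the resulting post-decision state."""
    n = len(contribution)
    best, best_score = None, float("-inf")
    for perm in itertools.permutations(range(n)):
        score = sum(contribution[i][perm[i]] + vfa[destination[i][perm[i]]]
                    for i in range(n))
        if score > best_score:
            best, best_score = perm, score
    return best, best_score
```

In practice the assignment problem at each point in time is an integer program solved with a commercial package; the enumeration here only conveys how the VFA terms tilt a short-horizon assignment toward decisions that pay off later.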
Finding the best policy: The curious case of approximate dynamic programming - May 2, 2011 (powerpoint format)
This talk, given at Optimization Days in Montreal (May 2, 2011), offers a fresh perspective on approximate dynamic programming. ADP is typically presented as a strategy based on approximating value functions. Here, I take the perspective that ADP is any approximate way of solving a sequential decision process. The challenge of dynamic programming is finding a policy. The dynamic programming literature tends to focus on solving Bellman's equation as a way of identifying a policy, but the literature is full of competing strategies. I have boiled these down into four fundamental classes: myopic policies (this is a special case), lookahead policies (decision trees, rolling horizon procedures, stochastic programming), policy function approximations (lookup tables, (s,S) inventory policies, or any decision function that is an analytic function of a state), and policies based on value function approximations (the traditional view of ADP). The talk then contrasts approximate value iteration and approximate policy iteration, highlighting algorithmic challenges. We close with two of our more challenging projects - a transportation application with a virtually infinite-dimensional state variable and a decision vector with over 50,000 dimensions, and an energy application with 175,000 time periods.
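Two of the four policy classes listed above are easy to make concrete on a toy inventory problem. The sketch below is my own illustration (the parameter values are arbitrary), showing a policy function approximation — the classic (s,S) rule the talk mentions — next to a myopic policy:

```python
def policy_function_sS(inventory, s=5, S=20):
    """Policy function approximation: the classic (s, S) rule.
    Order up to S whenever inventory falls below s; otherwise do nothing."""
    return S - inventory if inventory < s else 0

def myopic_policy(inventory, demand_forecast):
    """Myopic policy: order just enough to cover the next period's forecast,
    ignoring any impact of the decision on the future."""
    return max(0, demand_forecast - inventory)
```

Both are functions mapping a state to a decision — that is all a policy is. Lookahead policies and VFA-based policies are also such functions; they just compute the decision by solving an embedded optimization problem rather than evaluating a simple rule.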
Stochastic optimization in energy systems (powerpoint format)
This talk was given at a DIMACS workshop on algorithmic decision theory at Rutgers on October 27, 2010. It focuses on the theme of modeling stochastic optimization problems in energy (states, actions, information, transition function and objective function), and summarizes the four major classes of policies for making decisions (myopic policies, lookahead policies, policy function approximations and policies based on value function approximations). These ideas are then illustrated in the context of problems in energy storage, the stochastic unit commitment model, and a stochastic, multiscale model for energy policy (SMART).
This was a keynote presentation at the IIE Meeting in June, 2010 in Cancun. It provides an introduction to the challenge of how to collect information efficiently. The tutorial provides an overview of the different types of learning problems, along with a brief (and necessarily incomplete) description of the different types of policies that have been used for collecting information. The remainder of the tutorial focuses on the concept of the "knowledge gradient", a policy that chooses the measurement providing the greatest expected value if you are going to make a single measurement.
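For independent normal beliefs, the knowledge gradient has a closed form, which the sketch below implements. The data is hypothetical and the code is my own minimal version of the standard formula, not material from the tutorial:

```python
import math

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def knowledge_gradient(mu, sigma, sigma_w):
    """Knowledge-gradient scores for independent normal beliefs with means mu,
    standard deviations sigma, and measurement noise sigma_w. Each score is
    the expected improvement from a single measurement of that alternative."""
    kg = []
    for x in range(len(mu)):
        # Std. dev. of the change in our belief about x after one measurement
        var_new = 1.0 / (1.0 / sigma[x] ** 2 + 1.0 / sigma_w ** 2)
        sigma_tilde = math.sqrt(sigma[x] ** 2 - var_new)
        best_other = max(mu[i] for i in range(len(mu)) if i != x)
        zeta = -abs(mu[x] - best_other) / sigma_tilde
        kg.append(sigma_tilde * (zeta * normal_cdf(zeta) + normal_pdf(zeta)))
    return kg
```

The knowledge-gradient policy simply measures the alternative with the largest score — the measurement that does the most to change which alternative we would ultimately choose.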
I have given variations of this talk a number of times, often emphasizing different classes of applications.
SMART: A Stochastic, Multiscale model for the Analysis of energy Resources, Technology and policy (powerpoint format)
SMART uses approximate dynamic programming to model energy flows on an hourly basis over a multidecade horizon (almost 200,000 time periods), under different forms of uncertainty. The model currently captures about 15 types of energy supply (from nuclear and coal to wind, biomass and hydrogen) to satisfy about 10 types of energy demand (from residential and industrial electrical demand to home heating oil and natural gas). Hourly dispatch decisions are linked through the need to decide how much energy to store (right now we only capture a single water reservoir), which means we have to step forward in time to properly manage storage. The model can also handle different types of coarse-grained uncertainty (e.g., will there be a new carbon tax?).
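The storage coupling described above is why the model must step forward in time: what we store this hour constrains what we can dispatch next hour. A toy forward pass in that spirit (my own sketch with made-up rules and names, not the actual SMART code) looks like:

```python
def step_storage(hours, supply, demand, capacity=100.0, level=50.0):
    """March forward hour by hour, storing surplus energy and drawing down
    the reservoir when demand exceeds supply. Returns hourly storage levels.
    A simple rule stands in for the ADP-driven dispatch decision."""
    levels = []
    for t in range(hours):
        surplus = supply[t] - demand[t]
        # Storage is bounded below by empty and above by reservoir capacity
        level = min(capacity, max(0.0, level + surplus))
        levels.append(level)
    return levels
```

In SMART itself, value function approximations replace this myopic rule, so that the hourly decision of how much to store reflects the value of energy held for future periods.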
Presented (twice) to the future academicians workshop at Informs, this provides a perspective on how to manage work with industry through a university. There are unique opportunities in taking on industrial problems, but also unique challenges.
Presented at TRISTAN VI (June 2007), this is an overview of current research in rail operations (in particular, locomotive optimization) using approximate dynamic programming. We are involved in the development of a sequence of models for strategic, tactical and real-time locomotive optimization. ADP allows us to model operations at a very high level of detail, while retaining the ability to use commercial packages such as Cplex to optimally solve the assignment of locomotives to trains at a point in time (avoiding the need for heuristics). The modeling and algorithmic strategy allows us to handle various forms of uncertainty, while capturing all the complex operational constraints that govern freight rail operations in the United States.
This is a two hour tutorial given at the IEEE Workshop on Approximate Dynamic Programming and Reinforcement Learning. It provides a complete introduction to the essential ideas of approximate dynamic programming, but in particular introduces the power of using the post-decision state variable. This strategy makes it possible to solve large-scale stochastic resource allocation problems using commercial optimization packages (such as Cplex) to solve the decision problem at time t. Simulation is used to model the transition of the system between time periods. The ideas are illustrated using an inventory problem, the modeling of a single, complex entity, a blood management problem, and a large scale fleet management problem.
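The post-decision state highlighted above is the state immediately after a decision is made but before the next exogenous information arrives. A minimal inventory illustration (my own sketch, not the tutorial's code) makes the sequence of states explicit:

```python
def transition(pre_decision_state, order_quantity, demand):
    """One step of an inventory problem, exposing the post-decision state.
    pre-decision state -> decision -> post-decision state -> exogenous info."""
    # Post-decision state: inventory after ordering, before demand is observed.
    post_decision_state = pre_decision_state + order_quantity
    # Exogenous information (demand) arrives and is served, giving the next
    # pre-decision state.
    next_pre_decision_state = max(0, post_decision_state - demand)
    return post_decision_state, next_pre_decision_state
```

The payoff is that a value function approximation built around the post-decision state removes the expectation from inside the decision problem, which is what lets the decision at time t be handed to a deterministic optimization package.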
This talk was given at the Winter Simulation Conference in December, 2005. It describes a sequence of methods for simulating the types of large scale problems that arise in transportation and logistics, starting with simple rule-based logic and extending through four classes of information. The third class represents value functions obtained using approximate dynamic programming, while the fourth class represents patterns of behavior.
We have worked for several years on a model for managing freight cars for a major railroad. The project started off looking like a textbook stochastic programming problem. By the end, the cars had evolved from a simple car-type attribute to a vector of attributes; the information process had morphed into a series of parallel information processes on customer orders, order attributes, car characteristics and transit times; we had learned about biases in customer orders, and we learned that there are some things that we will simply never learn.
Real Time Optimization for Real-World Problems (powerpoint presentation with sound!)
I have given this talk several times. This is a slightly shorter version that I gave to the Princeton Operations Research Society, where I experimented with recording the presentation. It worked better than expected, but please note that it is a 40 meg file.