In these demos, you will be introduced to the core concepts behind Optimal Learning, the optimization framework that sequentially guides you through the space of experiments in order to achieve some objective. Through toy problems, you will learn what prior knowledge is and the various ways to form it, how to quantify the uncertainty of your knowledge, and how this knowledge and its uncertainty can be used to pick the next experiment via a decision policy that balances exploration and performance.
Which temperature results in the longest carbon nanotubes?
In this demo, you are a scientist tasked with screening the viability of a new catalyst for producing long carbon nanotubes. You want to quickly determine the temperature that maximizes carbon nanotube length so that you can decide whether this catalyst is a viable alternative.
In this demo, you will use Optimal Learning to determine the sequence of temperatures at which to run your experiments, and you will learn how to incorporate information about nanotube growth in similar systems when forming prior knowledge.
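The choose-measure-update loop described above can be sketched as follows. This is a minimal illustration, not the demo's actual code: the candidate temperatures, prior means, and variances are placeholder numbers, and the policy shown is a simple interval-estimation (upper-confidence-bound) rule with independent normal beliefs at each temperature.

```python
import math
import random

random.seed(1)

# Hypothetical setup: five candidate furnace temperatures (deg C), with an
# independent normal belief about the mean nanotube length at each one.
# All numbers below are placeholders, not measured values.
temperatures = [600, 650, 700, 750, 800]
mean = {t: 100.0 for t in temperatures}   # assumed prior mean length (nm)
var = {t: 400.0 for t in temperatures}    # assumed prior variance
noise_var = 100.0                         # assumed measurement-noise variance

def next_temperature(kappa=2.0):
    """Interval-estimation policy: pick the temperature whose optimistic
    estimate (mean + kappa * std) is highest, trading off exploration
    (high uncertainty) against performance (high mean)."""
    return max(temperatures, key=lambda t: mean[t] + kappa * math.sqrt(var[t]))

def update(t, y):
    """Conjugate normal update of the belief at temperature t after
    observing nanotube length y."""
    new_var = 1.0 / (1.0 / var[t] + 1.0 / noise_var)
    mean[t] = new_var * (mean[t] / var[t] + y / noise_var)
    var[t] = new_var

# One step of the loop: choose a temperature, run the (here, simulated)
# experiment, and update the belief at that temperature.
t = next_temperature()
y = random.gauss(120.0, math.sqrt(noise_var))  # stand-in for a real experiment
update(t, y)
```

After each update, the belief at the measured temperature tightens (its variance shrinks), so the policy gradually shifts from exploring uncertain temperatures to exploiting promising ones.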
Which semiconductor material yields nanoparticles with the widest band gap?
You have a new synthesis technique for making nanoparticles from III-V semiconductor materials, and you would like to know how much your nanoparticles can increase the band gap relative to the bulk semiconductor.
In this demo, you will use Optimal Learning to determine which III-V semiconductor results in nanoparticles with the widest band gap. You will also learn how to use the hierarchical structure of III-V semiconductors to specify prior knowledge.
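One way a hierarchical structure can feed into a prior is to build each material's belief from shared components. The sketch below is an illustration of that idea only: the baseline, cation, and anion effect values are made-up numbers, not real band gaps, and the additive model is an assumption rather than the demo's actual prior.

```python
# Hypothetical hierarchical prior: the prior mean band gap of a III-V
# material is a shared baseline plus a cation (group III) effect and an
# anion (group V) effect. All values in eV are illustrative placeholders.
baseline = 1.5
cation_effect = {"Al": 0.7, "Ga": 0.3, "In": -0.5}
anion_effect = {"P": 0.4, "As": 0.0, "Sb": -0.6}

def prior_mean(cation, anion):
    """Compose a material-level prior mean from group-level effects."""
    return baseline + cation_effect[cation] + anion_effect[anion]

# Build the prior over all nine binary III-V combinations.
prior = {
    c + a: prior_mean(c, a)
    for c in cation_effect
    for a in anion_effect
}
```

The benefit of this structure is that an observation on one material (say, GaAs) is informative about its cation and anion effects, and therefore updates beliefs about related materials (GaP, InAs) that share those components.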
With a properly formed prior, we can perform simulations of the optimal learning process, in which we assume a ground truth and try to learn this truth through a sequential decision-and-experimentation process. Simulations allow us to measure the performance of decision policies, perform risk analysis, and plan experiment budgets.
In this demo, we introduce the basic concepts behind optimal learning simulations. We show how a truth is sampled from a prior distribution, and illustrate how different policies make decisions. We also introduce a key metric of policy performance, the opportunity cost: the gap between the best outcome the truth allows and the outcome of the alternative a policy ultimately recommends. Using this metric, we can compare different policies for specific problems.
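The simulation loop described above can be sketched as follows. This is a toy version under stated assumptions: five alternatives with independent standard-normal priors, Gaussian measurement noise, a fixed experiment budget, and two illustrative policies (pure greedy vs. an exploration bonus); none of the numbers come from the demos themselves.

```python
import random

random.seed(0)

# Toy optimal-learning simulation: sample a "truth" from the prior, run a
# policy for a fixed budget, then score it by opportunity cost.
n_arms = 5
prior_mean = [0.0] * n_arms
prior_var = [1.0] * n_arms
noise_var = 0.25  # assumed measurement-noise variance

def sample_truth():
    """Draw a ground truth consistent with the prior."""
    return [random.gauss(m, v ** 0.5) for m, v in zip(prior_mean, prior_var)]

def run_policy(truth, choose, budget=20):
    """Run one simulated campaign and return its opportunity cost."""
    mean, var = list(prior_mean), list(prior_var)
    for _ in range(budget):
        a = choose(mean, var)
        y = random.gauss(truth[a], noise_var ** 0.5)
        new_var = 1.0 / (1.0 / var[a] + 1.0 / noise_var)
        mean[a] = new_var * (mean[a] / var[a] + y / noise_var)
        var[a] = new_var
    # Opportunity cost: how much worse the final recommendation is than the
    # hidden best alternative. Zero means the policy found the truth's best.
    best_believed = max(range(n_arms), key=lambda a: mean[a])
    return max(truth) - truth[best_believed]

greedy = lambda mean, var: max(range(n_arms), key=lambda a: mean[a])
explore = lambda mean, var: max(
    range(n_arms), key=lambda a: mean[a] + 2.0 * var[a] ** 0.5
)

# Average opportunity cost over repeated simulations for each policy.
avg_oc = {
    name: sum(run_policy(sample_truth(), policy) for _ in range(100)) / 100
    for name, policy in [("greedy", greedy), ("explore", explore)]
}
```

Averaging opportunity cost over many sampled truths is what lets us compare policies fairly: the same prior and budget, different decision rules.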