Experiment space is combinatorially large

You run experiments in order to achieve an objective:

  • Which growth conditions produce optimal material properties?
  • What are the underlying physics of this system?

Deciding which experiments to run in order to achieve such objectives is a daunting task for several reasons:

The Combinatorial Problem: The set of experiments that you can run, what we call experiment space, is combinatorially large. Each tunable knob, choice of material, and selection of methods for experimentation and analysis adds many dimensions to an already difficult problem.

Brute-force iteration over experiment space is intractable, and ad hoc exploration is inefficient.

Knowledge Management: When you run an experiment, how do you incorporate the new information gained from it? How do you encode what you knew before, and what you know after receiving new data?

Uncertainty: How do you deal with uncertainty in the form of experimental noise or imperfect knowledge of the underlying physics?

Optimal learning guides you through experiment space

Optimal learning (OL) addresses these issues in a systematic way to navigate experiment space and achieve your objective. Using Bayesian Statistics and Decision Theory, OL helps you decide on the next experiment based on your objective and what it has learned about the system so far.

  • Manage knowledge with Bayesian Statistics

    In OL, we manage what you know about the system by encoding that knowledge as probability distributions. When new data arrive, these distributions are updated in a rigorous manner using Bayes’ rule (a minimal sketch follows this list).

  • Select the next experiment using Decision Theory

    OL decides on the next experiment through a balance of exploring (probing uncertain regions of experiment space to learn more about the system) and exploiting (selecting the experiment that best achieves your objective under current knowledge).

  • Optimize and learn

    As OL steps toward its objective, it necessarily learns about the underlying system. When coupled with a physical model whose system-specific parameters are imperfectly known, OL learns about the underlying physics as a by-product.
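
To make the Bayesian bookkeeping concrete, here is a minimal sketch in Python of a conjugate Gaussian update, assuming a toy setting: one unknown material property measured with Gaussian noise of known variance. All numbers and names are illustrative, not part of any particular OL implementation.

```python
# Minimal sketch: belief about one unknown property, updated per measurement.
# Assumes Gaussian belief and known Gaussian measurement noise (toy values).

def bayes_update(mean, var, measurement, noise_var):
    """Conjugate Gaussian update: fold one noisy measurement into the belief."""
    post_var = 1.0 / (1.0 / var + 1.0 / noise_var)
    post_mean = post_var * (mean / var + measurement / noise_var)
    return post_mean, post_var

mean, var = 50.0, 25.0   # prior belief: mean and variance (illustrative)
noise_var = 4.0          # assumed known experimental noise variance

# Each experimental outcome tightens the belief distribution.
for y in [47.2, 48.9, 48.1]:
    mean, var = bayes_update(mean, var, y, noise_var)
    print(f"belief: mean={mean:.2f}, var={var:.2f}")
```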

On the left: OL scores potential experiments by the expected amount of information each would provide. With these scores in hand, a scientist can make a more systematic decision about which experiment to run next.

On the right: We used a physical model with a priori unknown kinetic and thermodynamic parameters to help the OL algorithm score experiments. The plot is a visualization of the probability distribution for these parameters after a few experiments.
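
As one concrete reading of this kind of scoring: for a Gaussian belief observed through Gaussian noise, the expected information gain from a single measurement has a closed form, 0.5 · ln(1 + prior variance / noise variance). The sketch below applies that formula to made-up belief variances; it is illustrative, not the exact scoring rule behind the figures above.

```python
# Sketch: rank candidate experiments by expected information gain,
# assuming an independent Gaussian belief per candidate and known noise.
import numpy as np

variances = np.array([0.30, 0.05, 0.50, 0.10])  # current belief variance per candidate (toy)
noise_var = 0.04                                # assumed measurement noise variance

# Entropy reduction from one Gaussian measurement of each candidate.
info_gain = 0.5 * np.log(1.0 + variances / noise_var)
print("most informative experiment:", int(np.argmax(info_gain)))
```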

OL is a sequential decision-making algorithm: it uses prior beliefs to score potential experiments, then uses the outcomes of those experiments to update those beliefs. Through this combination of belief representation and a sound decision-making policy, OL can accelerate materials design and discovery.
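
Putting the pieces together, here is a minimal sketch of the full score-observe-update loop over a discrete experiment space. It assumes an independent Gaussian belief per candidate and a mean-plus-uncertainty (UCB-style) score as the decision policy; both choices, and all numbers, are simplifications for illustration, not a definitive OL implementation.

```python
# Sketch of one sequential OL loop: score candidates, run the best,
# fold the noisy outcome back into the belief. All values are toy values.
import numpy as np

rng = np.random.default_rng(0)
n_candidates = 20
true_values = rng.uniform(0.0, 1.0, n_candidates)  # hidden ground truth
noise_std = 0.05                                   # assumed measurement noise

means = np.full(n_candidates, 0.5)       # prior belief mean per candidate
variances = np.full(n_candidates, 0.25)  # prior belief variance per candidate

for step in range(15):
    # Score: exploit (high predicted mean) plus explore (high uncertainty).
    scores = means + 2.0 * np.sqrt(variances)
    x = int(np.argmax(scores))

    # "Run" the chosen experiment: observe the truth through noise.
    y = true_values[x] + rng.normal(0.0, noise_std)

    # Conjugate Gaussian update of the chosen candidate's belief.
    prior_var = variances[x]
    post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_std**2)
    means[x] = post_var * (means[x] / prior_var + y / noise_std**2)
    variances[x] = post_var

print("believed-best candidate:", int(np.argmax(means)))
```

Each pass through the loop is exactly the cycle described above: prior beliefs score the candidates, the chosen experiment produces an outcome, and the outcome updates the beliefs for the next decision.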

More information

The Optimal Learning process

Interested in learning what happens when we collaborate to provide you with an Optimal Learning solution? Read more about the entire Optimal Learning process: the conversations we’ll have together and the products we can supply.

The Optimal Learning demo

Want to learn more about Optimal Learning? Try our Optimal Learning demo to see it in action. We’ll show you how we manage knowledge and select experiments to optimize material properties.

Work with us

Together, we can craft an Optimal Learning solution specific to your problem, build the appropriate models and develop a tool that helps you navigate experiment space.

Contact