**When**: Sun, July 24, 2022, Afternoon session 2 (after A1)

**Where**: Vienna, Austria, Messe Wien Exhibition and Congress Center, Gallerie 3-4

**Who**: Federico Cerutti and Lance M. Kaplan

## Short Description of the Tutorial

When collaborating with an AI system, we need to assess when to trust its recommendations: if we mistakenly trust it in regions where it is likely to err, catastrophic failures may occur. This motivates Bayesian approaches to reasoning and learning that determine the confidence (or epistemic uncertainty) in the probabilities of the queried outcome. Pure Bayesian methods, however, suffer from high computational costs; to overcome them, we resort to efficient and effective approximations. In this tutorial, PhD students and early-stage researchers will be introduced to techniques that take the name of *evidential* reasoning and learning from the process of Bayesian updating of given hypotheses in light of additional collected evidence. The tutorial provides attendees with a gentle introduction to the area of investigation, the up-to-date research outcomes, and the open questions that remain unanswered.

## Description of the Tutorial

To identify the regions where the AI system is likely to err, we need to
distinguish between (at least) two different sources of uncertainty:
*aleatory* (or *aleatoric*) and *epistemic* uncertainty. Aleatory
uncertainty refers to the variability in the outcome of an experiment
due to inherently random effects (e.g. flipping a fair coin): no source
of additional information, short of Laplace's demon, can reduce
such variability. Epistemic uncertainty refers to the epistemic state of
the agent using the model, hence to its lack of knowledge, which, in
principle, can be reduced with additional data samples.
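
The coin-flip example above can be sketched numerically. Under a conjugate beta model (an illustrative setup of ours, not code from the tutorial), the epistemic uncertainty about the coin's bias shrinks as observations accumulate, while the aleatory uncertainty of the next flip does not:

```python
# Sketch: epistemic uncertainty shrinks with evidence; aleatory does not.
# A Beta(a, b) posterior over the coin's bias p starts from a uniform
# Beta(1, 1) prior and is updated with observed heads/tails counts.

def beta_update(a, b, heads, tails):
    """Conjugate Bayesian update of a Beta(a, b) prior with coin-flip counts."""
    return a + heads, b + tails

def beta_mean_var(a, b):
    """Mean and variance of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# Uniform prior, then two batches of evidence from a fair coin.
a, b = 1, 1
for heads, tails in [(5, 5), (50, 50)]:
    a, b = beta_update(a, b, heads, tails)
    mean, var = beta_mean_var(a, b)
    print(f"Beta({a}, {b}): mean={mean:.3f}, epistemic variance={var:.5f}")

# The posterior variance (epistemic uncertainty about p) shrinks as data
# accumulate, yet the next flip is still a Bernoulli(p) draw whose
# aleatory variance p*(1-p) stays near 0.25 for a fair coin.
```
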

This tutorial dwells on research at the intersection of quantifying aleatory and epistemic uncertainty in reasoning and learning, using very efficient approximations based on the idea of updating the Bayesian posterior in light of further evidence collected in favour of (or against) a hypothesis. We primarily focus on uncertain probabilities represented as beta or Dirichlet distributions, following the Bayesian statistics paradigm.

Unlike existing surveys of approaches for quantifying epistemic uncertainty in (deep) learning, this tutorial aims to give an overview of the challenges associated with reasoning in the presence of epistemic uncertainty and with learning from both complete and partial data. Logical reasoning in the presence of aleatory and epistemic uncertainty raises entirely novel problems that must be addressed to limit the need for computational resources. We illustrate this idea using the notion of probabilistic circuits, which can encompass a large set of reasoning problems. We further discuss the challenges of ascertaining the epistemic and aleatory uncertainty of probabilistic circuit parameters, particularly under partial observability of the training data. We finally discuss how to ascertain uncertain probabilities from the real world: unsurprisingly, they are either provided by an oracle (e.g., an intelligence analyst) or learnt from raw data.
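To give a flavour of the kind of object the tutorial builds on, the sketch below evaluates a tiny probabilistic circuit with point-valued parameters. The node representation is our own illustration, not the tutorial's implementation; in the evidential setting discussed above, the fixed sum-node weights would instead be beta- or Dirichlet-distributed:

```python
# Minimal sketch (illustrative, not the tutorial's code): a probabilistic
# circuit is a DAG of sum (mixture) and product (factorisation) nodes
# over indicator leaves; a query reduces to one feed-forward evaluation.

def evaluate(node, assignment):
    """Bottom-up evaluation of a circuit for a complete assignment."""
    kind = node["type"]
    if kind == "leaf":
        # Indicator leaf: 1 if the variable matches the literal's value.
        return 1.0 if assignment[node["var"]] == node["value"] else 0.0
    if kind == "product":
        result = 1.0
        for child in node["children"]:
            result *= evaluate(child, assignment)
        return result
    if kind == "sum":
        # Weighted mixture; the weights are the circuit parameters that
        # evidential approaches model as beta/Dirichlet-distributed.
        return sum(w * evaluate(c, assignment)
                   for w, c in zip(node["weights"], node["children"]))
    raise ValueError(f"unknown node type: {kind}")

def bernoulli(var, p):
    """A sum node encoding P(var = 1) = p over two indicator leaves."""
    return {"type": "sum", "weights": [p, 1 - p],
            "children": [{"type": "leaf", "var": var, "value": 1},
                         {"type": "leaf", "var": var, "value": 0}]}

# P(A, B) with A and B independent: P(A=1) = 0.3, P(B=1) = 0.6.
circuit = {"type": "product",
           "children": [bernoulli("A", 0.3), bernoulli("B", 0.6)]}
print(evaluate(circuit, {"A": 1, "B": 1}))  # P(A=1, B=1) = 0.3 * 0.6
```
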

## Detailed Outline

#### A Primer on Bayesian Statistics:

- Fundamentals of statistics and Bayes' theorem;
- Beta and Dirichlet distributions as uncertain probabilities.

#### Evidential Reasoning:

- From logic to probabilistic circuits;
- Probabilistic circuits as a unifying method for probabilistic reasoning;
- Probabilistic circuits with uncertain probabilities.

#### Evidential Parameter Learning:

- Learning with complete observations;
- Learning with partial observations: preliminary proposals and discussions.
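
As an illustrative sketch of the complete-observation case (a toy setup of ours, not the tutorial's material), evidential parameter learning of a categorical distribution under a Dirichlet prior reduces to adding observed counts to the prior pseudo-counts:

```python
# Sketch: with complete observations, learning the parameters of a
# categorical variable under a Dirichlet prior is a conjugate update:
# posterior pseudo-counts = prior pseudo-counts + observed counts.
from collections import Counter

def dirichlet_update(prior, observations):
    """Posterior Dirichlet pseudo-counts after complete observations."""
    counts = Counter(observations)
    return {k: prior[k] + counts.get(k, 0) for k in prior}

def posterior_mean(alpha):
    """Expected category probabilities under Dirichlet(alpha)."""
    total = sum(alpha.values())
    return {k: v / total for k, v in alpha.items()}

prior = {"red": 1, "green": 1, "blue": 1}          # uniform Dirichlet(1,1,1)
data = ["red"] * 6 + ["green"] * 3 + ["blue"]      # ten complete observations
alpha = dirichlet_update(prior, data)
# Posterior means: red 7/13, green 4/13, blue 2/13; the total pseudo-count
# (13 here) also measures how much evidence backs those probabilities.
print(posterior_mean(alpha))
```

Partial observations break this simple counting scheme, which is precisely why that case is flagged above as preliminary proposals and discussions.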

#### Ascertaining Evidence from the Real World:

- Intelligence analysis and uncertainty;
- Evidential Deep Learning;
- Alternative proposals.