**Friday, July 28, at 17:00**

**Rutherford Physics Building, Room 118, McGill**

Next week, I’ll be talking about *Bayesian learning* at the Mathematical Congress of the Americas and at the Canadian Undergraduate Mathematics Conference. These are somewhat challenging talks: I need to sell the idea of Bayesian statistics to a general mathematical audience (which may know nothing about it), interest them in some tough problems of Bayesian nonparametrics, and then present some of our research results. All of this must be done in under 20 minutes.

To make the presentation more intuitive and accessible, I borrowed some language from machine learning. I’m talking about learning rather than inference, uncertain knowledge rather than subjective belief, and “asymptotic correctness” rather than consistency. Each pair is essentially synonymous, although some authors use the terms in different ways. This should not cause problems for this introductory talk.

# 1. What is learning?

This is the question that drives the talk. We do not want to deal with it in full generality, but rather want an operational definition of learning that allows us to program algorithms that can learn from experience. The suggestion is the following:

*Learning is about rationally adjusting uncertain knowledge in the light of new information.*

Bayesian learning implements this idea through the calculus of probability. Let’s see this in a simple context.

## 1.1 The classical problem

Some stochastic mechanism generates data $X_1, X_2, \dots, X_n$, each an independent sample from an unknown distribution $P$. We want to use the data to learn about $P$.

### Example

A cryptographer observes words at random from an unknown countable language. Learning could be about:

- What’s the distribution of the observed words?
- What’s the probability that the next word to be observed has never been seen before?
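A concrete (and standard, though by no means the only) Bayesian nonparametric answer to the second question comes from a Dirichlet process prior with concentration parameter $\alpha$: after observing $n$ words, the next word is new with probability $\alpha/(\alpha+n)$. A minimal simulation sketch, where the model choice and the value of $\alpha$ are illustrative assumptions rather than part of the talk:

```python
import random

def crp_simulate(alpha, n_words, rng):
    """Simulate word frequencies under a Dirichlet process prior, via its
    Chinese restaurant process representation. Returns the count of each
    distinct word observed."""
    counts = []
    for i in range(n_words):
        # the (i + 1)-th word is new with probability alpha / (alpha + i)
        if rng.random() < alpha / (alpha + i):
            counts.append(1)
        else:
            # otherwise it repeats an old word, proportionally to its count
            r = rng.randrange(i)
            acc = 0
            for j, c in enumerate(counts):
                acc += c
                if r < acc:
                    counts[j] += 1
                    break
    return counts

alpha = 5.0
counts = crp_simulate(alpha, n_words=1000, rng=random.Random(0))
# predictive probability that the 1001st word has never been seen before
p_new = alpha / (alpha + 1000)
```

Note how the answer to the “never seen before” question does not depend on which words were observed, only on how many: this exchangeability is what makes the model tractable.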

## 1.2 Bayesian Learning

Uncertain knowledge about what $P$ may be is encoded through a *prior* probability measure $\pi$ on the space of all possibilities for $P$. Thus, if $A$ is some set of probability distributions, then $\pi(A)$ is the probability, representing the best of our knowledge, that “$P \in A$”. Then, we observe the data $X_1, \dots, X_n$. This yields the *posterior* measure $\pi(\cdot \mid X_1, \dots, X_n)$, obtained by adjusting $\pi$ in light of the data through probabilistic conditioning.

### Example (coin tossing)

We suspect a coin of being biased: tossing it, there is an unknown probability $\theta$ that the coin falls on heads. A prior probability density function, quantifying our uncertainty about what $\theta$ may be, could look like this:

Now, we flip the coin 20 times and observe only 5 heads. The density of the posterior distribution, updating the prior in light of this information, is the following.

Certainly the results obtained depend on the choice of prior distribution, but similar priors yield similar posteriors:
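The coin example can be made fully explicit: assuming the prior on $\theta$ is a Beta$(a, b)$ density (an assumption about the pictured prior, since Beta priors are the standard conjugate choice here), observing $k$ heads in $n$ tosses gives the closed-form posterior Beta$(a + k,\, b + n - k)$. A small sketch, also comparing two similar priors:

```python
# Conjugate Beta-Binomial update: a Beta(a, b) prior on the heads
# probability combined with k heads in n tosses yields a
# Beta(a + k, b + n - k) posterior.
k, n = 5, 20  # observed: 5 heads in 20 tosses

def posterior_mean(a, b):
    # mean of the Beta(a + k, b + n - k) posterior
    return (a + k) / (a + b + n)

m1 = posterior_mean(2, 2)  # prior Beta(2, 2), prior mean 0.5
m2 = posterior_mean(3, 3)  # prior Beta(3, 3), prior mean 0.5
# both posterior means are pulled from 0.5 toward the empirical
# frequency 5/20 = 0.25, and the two similar priors yield similar
# posterior means
```

This is the sense in which the results depend on the prior, but only mildly: the two posterior means here differ by less than 0.02.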

# 2. Bayesian nonparametrics

It is tempting to apply the same procedure to more complex problems.

Reconsider, for instance, the cryptographer example. Each word has some probability of appearing. Since there is an infinite number of words, there is an infinite number of parameters for which we need to quantify uncertainty. A reasonable question is then:

*Is it feasible to faithfully quantify uncertain knowledge on an infinite-dimensional space?*

The answer is no. At least, not always, and not necessarily in a way that makes Bayesian learning meaningful. Diaconis and Freedman (1986) showed the following in the context of Bayesian nonparametrics:

“Some statisticians, as more and more data is gathered, will become more and more convinced of the wrong answer.”

Bayesian learning does not always work in infinite dimensions. To approach this problem, we need to identify characteristics of prior distributions that

- describe how uncertainty is spread out; and
- ensure that Bayesian learning **works correctly**.

## 2.1 A positive result

An important positive result is based on the two following conditions on the prior distribution $\pi$. We denote by $\mathcal{F}$ the set of all probability densities on some common space, and we assume that the data-generating distribution has some density $f_0 \in \mathcal{F}$.

1. $\pi$ puts positive mass on the relative entropy neighborhoods of $f_0$: for every $\varepsilon > 0$,

$$\pi\left( \left\{ f \in \mathcal{F} : \int f_0 \log \frac{f_0}{f} < \varepsilon \right\} \right) > 0.$$

This means that, *a priori*, we’re not excluding the truth from the set of possibilities. Since $f_0$ is unknown, we require that this condition be satisfied whatever $f_0$ may be.

2. $\pi$ is of finite entropy: for all $\varepsilon > 0$, there exist $\delta < \varepsilon$ and a covering $\{A_i\}$ of $\mathcal{F}$ by sets of Hellinger diameter less than $\delta$ such that

$$\sum_i \sqrt{\pi(A_i)} < \infty.$$

This means that the model is not too complex and that we can make sense of it through discretization.

Under these two conditions, Bayesian learning is asymptotically correct: the posterior distribution concentrates around the truth.

**Theorem** (Walker, 2004)

*If conditions (1) and (2) are satisfied, then for any Hellinger neighborhood $A$ of $f_0$, we have*

$$\pi(A \mid X_1, \dots, X_n) \to 1$$

*almost surely as $n \to \infty$.*
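The concentration phenomenon in the theorem can be illustrated back in the finite-dimensional coin example, where the posterior is available in closed form. A sketch, assuming a uniform Beta(1, 1) prior:

```python
import random

rng = random.Random(1)
theta_true = 0.25  # the "truth" generating the data

def posterior_summary(n):
    """Posterior mean and standard deviation after n coin tosses, under a
    Beta(1, 1) prior (the posterior is Beta(1 + k, 1 + n - k))."""
    k = sum(rng.random() < theta_true for _ in range(n))
    a, b = 1 + k, 1 + n - k
    mean = a / (a + b)
    sd = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5
    return mean, sd

m20, sd20 = posterior_summary(20)
m2000, sd2000 = posterior_summary(2000)
# the posterior spread shrinks roughly like 1/sqrt(n): the posterior
# concentrates around theta_true as n grows
```

The content of the theorem is that this familiar finite-dimensional behavior survives in infinite dimensions, provided conditions (1) and (2) hold.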

This is helpful, but does not yet solve our problem.

- How do we, generally, construct priors satisfying the two conditions?
- How can we use these priors to solve practical problems?

This is where our research (as well as a lot of other research in Bayesian nonparametrics) comes in.

# 3. Some of our research

There are two contributions I want to present.

- We developed a relationship between certain statistical models and approximation theory that can be used to easily construct priors satisfying the two conditions.
- We used it to solve problems raised in the literature.

Let’s see how this works in the context of directional statistics. Say we have some data distributed on the circle or, more generally, on a compact metric space.

We want to learn about the data-generating mechanism, e.g. do

- density estimation,
- hypothesis tests,
- classification,

or any other statistical procedure. First, we specify a prior on the set of all bounded densities on the circle.

## 3.1 Prior specification

We begin with a density basis of trigonometric polynomials, introduced by Roth et al. (2009).

We studied the statistical properties of this basis and used it to construct approximation operators. It can be shown that these operators are variation-diminishing and possess shape-preserving properties. More importantly, they decompose arbitrary densities into finite-dimensional components.

By specifying priors on these components and mixing them together, we obtain a prior on the set of bounded densities on the circle.

This reduces the problem of specifying a prior on an infinite-dimensional space to that of specifying an infinite number of priors on *finite*-dimensional spaces. This turns out to be easier, and from properties of the approximation operators we obtain asymptotic correctness of Bayesian learning.
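Structurally, a prior of this mixed form can be sampled hierarchically: first draw a component index, then draw within that finite-dimensional component. A generic sketch of this two-stage structure, where the geometric index weights and the flat Dirichlet component are hypothetical illustrative choices, not the construction of the paper:

```python
import math
import random

def sample_hierarchical_prior(rng, p=0.3):
    """Sample from a prior built as a mixture over finite-dimensional
    components: first pick a component, then sample inside it."""
    # component index n drawn from geometric weights p * (1 - p)^(n - 1)
    n = 1
    while rng.random() > p:
        n += 1
    # inside component n, mixture weights from a flat Dirichlet(1, ..., 1),
    # generated as normalized standard exponentials
    gammas = [-math.log(rng.random()) for _ in range(n)]
    total = sum(gammas)
    weights = [g / total for g in gammas]
    return n, weights

n, weights = sample_hierarchical_prior(random.Random(0))
```

Each draw lives in a finite-dimensional simplex, yet the prior as a whole charges spaces of every dimension, which is what allows it to cover an infinite-dimensional set of densities.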

**Theorem.**

*Let be any linear function mapping to itself, with increasing. If for all continuous , then taking such that and ensures that Bayesian learning based on is asymptotically correct.*

What is interesting here:

- The approach scales easily to (almost) any set of bounded densities on a compact metric space.
- The approximation theory literature provides a wealth of well-studied approximation operators that can be used to construct such priors.
- Properties of the operators relate to properties of the priors. If, for instance, it is known that the true data-generating density is unimodal, then using a unimodality-preserving operator yields a prior on the space of unimodal densities.

## 3.2 Simple application

We use a prior of the type considered above to estimate a data-generating density, while also quantifying uncertainty about our estimate. Below, the data-generating density is drawn in orange. A hundred samples are represented in the grey histogram. The blue line is an estimate of the unknown orange function, and the blue shaded region quantifies uncertainty (50% and 90% credible regions).

# 4. Take-home message

There are two things I would like you to remember.

1. The calculus of probability provides an operational definition of learning. That is what Bayesian statistics is about.

2. As you may already know, different fields of mathematics enrich each other in their interactions. Here, it is approximation theory that provides the tools ensuring that Bayesian learning works correctly.

*Thank you for your attention!*

**References.**

Diaconis, P. & Freedman, D. (1986). On the Consistency of Bayes Estimates. *Ann. Statist.*, 14, 1–26.

Roth, A. et al. (2009). A cyclic basis for closed curve and surface modeling. *Computer Aided Geometric Design*, 26(5), 528–546.

Walker, S. (2004). New approaches to Bayesian consistency. *Ann. Statist.*, 32(5), 2028–2043.

McVinish, R. & Mengersen, K. (2008). Semiparametric Bayesian circular statistics. *Computational Statistics and Data Analysis*, 52, 4722–4730.

Petrone, S. & Veronese, P. (2010). Feller operators and mixture priors in Bayesian nonparametrics. *Statistica Sinica*, 20(1), 379–404.

Wu, Y. & Ghosal, S. (2008). Kullback–Leibler property of kernel mixture priors in Bayesian density estimation. *Electronic Journal of Statistics*, 2, 298–331.

Binette, O. & Guillotte, S. (2017). Bayesian Nonparametrics for Directional Statistics. *Working paper*.
