Inspired by a 'How to talk about mathematics' presentation by Ionica Smeets in August last year, I created this outline slide for my talk at the University of Amsterdam. Looking forward to this week, with three talks: in Tilburg on Tuesday, in Amsterdam (UvA) on Friday, and again in Amsterdam (CWI) on Saturday.

**Tuesday October 1st, Tilburg Colloquium**

The rules of the game called significance testing (wink wink)

If significance testing were a game, it would be dictated by chance and encourage researchers to cheat. A dominant rule would be that once you conduct a study, you go all in: you have one go at your one preregistered hypothesis (one outcome measure, one analysis plan, one sample size or stopping rule, etc.) and either you win (significance!) or you lose everything. The game does not allow you to conduct a second study, unless you prespecified that as well, together with the first. Strategies that base future studies on previous results, and then meta-analyze, are not allowed. Honestly reporting the p-value next to your 'I lost everything' result does not help; that is like reporting the margin in a winner-takes-all game. In a new round you have to start over again. No wonder researchers cheat at this game by file-drawering and p-hacking. The best way to solve this might be to change the game. Fortunately, this is possible by preventing researchers from losing everything and allowing them to reinvest their previous earnings in new studies. This new game keeps score in terms of $-values instead of p-values, and tests with Safe Tests.
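The reinvestment idea can be sketched with a toy simulation. This is an illustrative stand-in, not the actual Safe Testing machinery: I assume a simple coin-flip model where the '$-value' is a product of per-flip likelihood ratios of a hypothetical alternative H1: p = 0.6 against the null H0: p = 0.5. Under H0 the expected payout per flip is 1, so by Markov's inequality the chance that the final wealth reaches 1/alpha is at most alpha:

```python
import random

random.seed(7)
ALPHA = 0.05

def e_value(n_flips, p_true, p1=0.6, p0=0.5):
    """Run one 'study': multiply per-flip likelihood ratios of H1 vs H0."""
    wealth = 1.0
    for _ in range(n_flips):
        flip = random.random() < p_true
        wealth *= (p1 if flip else 1 - p1) / (p0 if flip else 1 - p0)
    return wealth

# Reinvest the earnings of study 1 into study 2: evidence multiplies.
# Both studies are run under H0 (a fair coin), so combined wealth
# rarely reaches the 1/ALPHA = 20 threshold.
reps = 2000
false_alarms = sum(
    e_value(50, 0.5) * e_value(50, 0.5) >= 1 / ALPHA for _ in range(reps)
)
print(false_alarms / reps)
```

The printed false-alarm rate stays below 0.05, illustrating how multiplying evidence across studies keeps score without the all-or-nothing rule.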

**Friday October 4th, University of Amsterdam**

Accumulation Bias in Meta-Analysis: How to Describe and How to Handle It

Studies accumulate over time and meta-analyses are mainly retrospective. These two characteristics introduce dependencies between the analysis time, at which a series of studies is up for meta-analysis, and results within the series. Dependencies introduce bias (Accumulation Bias) and invalidate the sampling distribution assumed for p-value tests, thus inflating type-I errors. But dependencies are also inevitable, since for science to accumulate efficiently, new research needs to be informed by past results. In our paper, we investigate various ways in which time influences error control in meta-analysis testing. We introduce an Accumulation Bias Framework that allows us to model a wide variety of practically occurring dependencies, including study series accumulation, meta-analysis timing, and approaches to multiple testing in living systematic reviews. The strength of this framework is that it shows how all dependencies affect p-value-based tests in a similar manner. This leads to two main conclusions. First, Accumulation Bias is inevitable, and even if it can be approximated and accounted for, no valid p-value tests can be constructed. Second, tests based on likelihood ratios withstand Accumulation Bias: they provide bounds on error probabilities that remain valid despite the bias.
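The contrast can be sketched with a toy simulation (again an illustrative coin-flip stand-in, not the paper's framework; the alternative H1: p = 0.6 and the batch sizes are arbitrary choices): re-testing the accumulating series with a naive p-value at every analysis time inflates the type-I error well above alpha, while rejecting only when the product of likelihood ratios exceeds 1/alpha keeps it bounded by alpha.

```python
import math
import random

random.seed(3)
ALPHA, LOOKS, BATCH = 0.05, 5, 40

def z_pvalue(heads, n):
    """Two-sided normal-approximation p-value for a fair-coin null."""
    z = (heads - n / 2) / math.sqrt(n / 4)
    return math.erfc(abs(z) / math.sqrt(2))

def one_series():
    """One accumulating study series under H0, meta-analyzed at 5 times."""
    heads = n = 0
    wealth = 1.0
    p_rejected = lr_rejected = False
    for _ in range(LOOKS):
        for _ in range(BATCH):
            flip = random.random() < 0.5          # H0 is true
            heads += flip
            n += 1
            wealth *= (0.6 if flip else 0.4) / 0.5  # likelihood ratio H1 vs H0
        if z_pvalue(heads, n) < ALPHA:
            p_rejected = True                     # naive p-value meta-analysis
        if wealth >= 1 / ALPHA:
            lr_rejected = True                    # likelihood-ratio test
    return p_rejected, lr_rejected

reps = 4000
p_err = lr_err = 0
for _ in range(reps):
    p, lr = one_series()
    p_err += p
    lr_err += lr
print(p_err / reps)   # inflated above ALPHA by the repeated looks
print(lr_err / reps)  # stays below ALPHA
```

The likelihood-ratio bound holds no matter how the decision to continue the series depends on earlier results, which is the sense in which such tests withstand Accumulation Bias.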

**Saturday October 5th, Weekend van de Wetenschap CWI**

https://www.weekendvandewetenschap.nl/activiteiten/2019/wiskunde-en-informaticalezingen/

See this video from the Nacht van de Wetenschap in The Hague last year!

Thu, 26 September