The Number of Possibilities

What is the most intuitive way to quantify uncertainty?
Imagine a murder has been committed. We don’t know who did it, so there’s uncertainty. If there are only two people who could have done it (Colonel Mustard and Professor Plum), the uncertainty is limited. With ten possible suspects, the uncertainty increases. If there’s only one person who could have done it, there’s no uncertainty.
Thus, a straightforward measure of uncertainty would be proportional to the number of possibilities.
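The murder example can be sketched in a few lines of code. This is my own illustration, not from the original article; the suspect names are invented, and the log-based variant (which conveniently assigns zero uncertainty to a single remaining possibility) is an assumption about where a count-based measure naturally leads.

```python
import math

# A toy illustration (not from the original article): uncertainty measured
# by counting possibilities. Taking the logarithm of the count gives a
# measure that is zero when only one possibility remains.
def possibilities(suspects):
    """Raw count of remaining possibilities."""
    return len(suspects)

def log_uncertainty(suspects):
    """Log of the count: 0.0 when only one suspect remains."""
    return math.log2(len(suspects))

print(possibilities(["Mustard", "Plum"]))  # 2
print(log_uncertainty(["Mustard"]))        # 0.0
```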
Theory
Somewhat technical articles on a variety of theoretical subjects
Distributed Bayesian Reasoning Introduction
Part of the Distributed Bayesian Reasoning series
Distributed Bayesian Reasoning is a kind of hypothetical opinion poll. It tells us not what people actually believe, but what they would believe if they knew more.
In this article we introduce an argument model: a set of terms for analyzing arguments by naming their parts. There are various argument models in the academic literature on argumentation theory and related fields but none provide us with precise definitions for all the concepts behind our algorithms for improving online conversations. So we will define those concepts here.
Anatomy of an Argument

Our model incorporates the basic ideas of the influential Toulmin model of argumentation, first introduced in 1958, but uses a simpler structure with more modern terminology.
A Bayesian Account of Argumentation
Part of the Bayesian Argumentation series
What is a good argument?
From a logical point of view, a good argument is logically sound. But in the real world, people rarely argue with pure logic.
From a rhetorical point of view, a good argument is one that is persuasive. But how can this be measured?
In this series of essays, I consider this question from a Bayesian point of view.
Relevance and Acceptance
Part of the Bayesian Argumentation series
Definition of Relevance
In the previous essay in this series, we introduced the basic ideas and terminology of Bayesian argumentation, including the concept of relevance. In this essay I explore a precise mathematical definition of relevance and the related concept of acceptance.
Necessity and Sufficiency
Part of the Bayesian Argumentation series
Argument and Information

In the previous essay in this series, we introduced the idea of relevance, and said that a premise is relevant to the conclusion iff $P(A \vert B) > P(A \vert \bar{B})$.
Consider the argument (𝐴) this is a good candidate for the job because (𝐵) he has a pulse. Having a pulse may not be a very persuasive reason to hire somebody, but it is probably quite relevant, because if the candidate did not have a pulse, the subject would probably be much less likely to want to hire him.
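The relevance condition $P(A \vert B) > P(A \vert \bar{B})$ can be checked mechanically. Below is a minimal sketch for the hiring example, using an invented joint distribution; all of the numbers are illustrative assumptions, not anything from the essay.

```python
# Toy joint distribution P(A, B) for the hiring example (invented numbers):
# A = "good candidate for the job", B = "has a pulse".
joint = {
    (True, True): 0.40,   # good candidate, has a pulse
    (False, True): 0.55,  # poor candidate, has a pulse
    (True, False): 0.00,  # good candidate, no pulse (impossible)
    (False, False): 0.05, # poor candidate, no pulse
}

def p_a_given(joint, b):
    """P(A=True | B=b) computed from the joint table."""
    p_b = sum(p for (_, bb), p in joint.items() if bb == b)
    return joint[(True, b)] / p_b

p_a_given_b = p_a_given(joint, True)       # P(A | B)
p_a_given_not_b = p_a_given(joint, False)  # P(A | not-B)

# B is relevant to A iff P(A | B) > P(A | not-B).
print(p_a_given_b > p_a_given_not_b)  # True
```

Under this toy distribution a pulse is highly relevant: without one, the probability of being a good candidate drops to zero.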
Informativeness and Persuasiveness
Part of the Bayesian Argumentation series
Why Accept the Premise?

In the previous essay in this series, we defined the ideas of necessity and sufficiency from the perspective of a Bayesian rational agent. If an argument is necessary, then if the subject were to reject the premise, they would decrease their acceptance of the conclusion. And if an argument is sufficient, then if the subject were to accept the premise, they would increase their acceptance of the conclusion.
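The two conditions described above translate into inequalities against the subject's prior $P(A)$: necessity as $P(A \vert \bar{B}) < P(A)$, and sufficiency as $P(A \vert B) > P(A)$. Here is a minimal sketch with an invented joint distribution; the numbers are assumptions for illustration only.

```python
# Invented joint distribution P(A, B); the numbers are illustrative only.
joint = {
    (True, True): 0.30,
    (True, False): 0.10,
    (False, True): 0.20,
    (False, False): 0.40,
}

def p_a(joint):
    """Marginal P(A=True)."""
    return sum(p for (a, _), p in joint.items() if a)

def p_a_given(joint, b):
    """P(A=True | B=b)."""
    p_b = sum(p for (_, bb), p in joint.items() if bb == b)
    return joint[(True, b)] / p_b

prior = p_a(joint)                             # P(A) = 0.4
necessary = p_a_given(joint, False) < prior    # rejecting B lowers belief in A
sufficient = p_a_given(joint, True) > prior    # accepting B raises belief in A
print(necessary, sufficient)  # True True
```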
Warrants and Corelevance
Part of the Bayesian Argumentation series
This is the final article in my series on Bayesian Argumentation. To understand this essay, read the introductory article for definitions of the key concepts and terminology.
Relevance is Not Absolute

Relevance exists in the context of the subject’s other prior beliefs. For example, if the subject believes that ($\bar{𝐶}$) the car is out of gas, and also ($\bar{B}$) the battery is dead, then both of these are good reasons to believe ($\bar{A}$) the car won’t start.
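The car example can be sketched with a toy model in which the car starts only if it both has gas and has a charged battery. The model and its numbers are my own assumptions for illustration; they show how the relevance of the battery premise depends on the subject's prior belief about the gas.

```python
# Toy model (my own assumption): the car starts (A) iff the battery is
# charged (B) AND it has gas (C). p_gas is the subject's prior P(C).
def p_starts(battery_charged, p_gas):
    """P(A | B = battery_charged), marginalizing over C."""
    return p_gas if battery_charged else 0.0

# Relevance of B to A, measured as P(A | B) - P(A | not-B).
rel_with_gas = p_starts(True, 0.9) - p_starts(False, 0.9)  # battery matters
rel_no_gas = p_starts(True, 0.0) - p_starts(False, 0.0)    # battery irrelevant
print(rel_with_gas, rel_no_gas)  # 0.9 0.0
```

If the subject is certain the car is out of gas, learning about the battery changes nothing: the premise's relevance has been destroyed by another prior belief.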
Bayesian Argumentation Definitions
Part of the Bayesian Argumentation series
Bayesian Argumentation: Summary of Definitions

Below is a summary of all the terms and equations defined in the essays in this series, followed by a detailed example.
Claim: A statement that one can agree or disagree with
Premise: A claim intended to support or oppose some conclusion
Conclusion: The claim supported or opposed by the premise
Argument: A premise asserted in support of or opposition to some conclusion
For an argument with premise 𝐵 and conclusion 𝐴, and a subject whose beliefs are represented by probability measure P…
“When you have eliminated the impossible, all that remains, no matter how improbable, must be the truth.”
– Sherlock Holmes (Arthur Conan Doyle)
For a long time, Bayesian inference was something I understood without really understanding it. I only really got it after reading Chapter 2 of John K. Kruschke’s textbook Doing Bayesian Data Analysis, where he describes Bayesian inference as “reallocation of credibility across possibilities.”
I now understand Bayesian Inference to be essentially a mathematical generalization of Sherlock Holmes’ pithy statement about eliminating the impossible.
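Holmes’s maxim corresponds to the special case where some hypotheses receive probability zero and their credibility is redistributed among the survivors. A minimal sketch, with invented suspects and priors:

```python
# Reallocation of credibility across possibilities (invented numbers):
# eliminate the impossible hypotheses, then renormalize what remains.
prior = {"Mustard": 0.5, "Plum": 0.3, "Scarlett": 0.2}

def eliminate(beliefs, impossible):
    """Zero out eliminated hypotheses and renormalize the survivors."""
    surviving = {h: (0.0 if h in impossible else p) for h, p in beliefs.items()}
    total = sum(surviving.values())
    return {h: p / total for h, p in surviving.items()}

# Evidence rules out Mustard; his 0.5 credibility flows to the others
# in proportion to their priors.
posterior = eliminate(prior, {"Mustard"})
print(posterior["Plum"], posterior["Scarlett"])  # 0.6 0.4
```

Full Bayesian updating generalizes this: instead of multiplying each hypothesis by zero or one, it multiplies by a likelihood and then renormalizes in exactly the same way.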
Distributed Bayesian Reasoning Math
Part of the Distributed Bayesian Reasoning series
In this article we develop the basic mathematical formula for calculating the opinion of the meta-reasoner in arguments involving a single main argument thread.
The Meta-Reasoner
Part of the Distributed Bayesian Reasoning series
In the Introduction to Distributed Bayesian Reasoning, we argue that the rules of Bayesian inference can enable a form of distributed reasoning.
In this article we introduce the idea of the meta-reasoner, which is the hypothetical fully-informed average juror.
The meta-reasoner resembles the average juror in that it holds prior beliefs equal to the average beliefs of the participants, but it is fully informed because it holds beliefs for every relevant sub-jury.
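As a sketch, the meta-reasoner's prior on a single claim is just the mean of the participants' beliefs. The juror numbers below are invented for illustration; how the beliefs of each relevant sub-jury enter the picture is the subject of the rest of the series.

```python
# Invented example: three jurors' probabilities that a claim is true.
juror_beliefs = [0.9, 0.6, 0.3]

# The meta-reasoner's prior equals the average belief of the participants.
meta_prior = sum(juror_beliefs) / len(juror_beliefs)
print(meta_prior)
```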
This article is part of the series on distributed Bayesian reasoning. It assumes you have read the previous article on the Basic Math.
This essay explains how you can pay people to tell the truth, even if you can’t verify their answers.