Theory

Somewhat technical articles on a variety of theoretical subjects

Entropy as a Measure of Uncertainty

Measuring Uncertainty

How do you measure uncertainty?

That may seem like an odd question, but let’s just dive right into it, because it leads us down an interesting path to the definition of entropy.

The Number of Possibilities

Imagine a murder has been committed. We don’t know who did it, so there’s uncertainty. If there are only two people who could have done it (say, Colonel Mustard and Professor Plum), the uncertainty is limited. With ten possible suspects, the uncertainty increases. On the other hand, if there’s only one person who could have done it, there’s no uncertainty at all.
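To make the intuition concrete, here is a minimal sketch (in Python, not from the article) that computes the Shannon entropy of a uniform distribution over n equally likely suspects; the function name entropy_bits is just illustrative.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over nonzero p."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Uncertainty grows with the number of equally likely suspects.
for n in (1, 2, 10):
    uniform = [1 / n] * n
    print(f"{n} suspect(s): {entropy_bits(uniform):.2f} bits")
# 1 suspect(s): 0.00 bits   (no uncertainty)
# 2 suspect(s): 1.00 bits
# 10 suspect(s): 3.32 bits
```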

The Deliberati Argument Model

In this article we introduce an argument model: a set of terms for analyzing arguments by naming their parts. There are various argument models in the academic literature on argumentation theory and related fields, but none provide precise definitions for all the concepts behind our algorithms for improving online conversations. So we will define those concepts here.

Anatomy of an Argument

Our model incorporates the basic ideas of the influential Toulmin model of argumentation, first introduced in 1958, but uses a simpler structure with more modern terminology.

A Bayesian Account of Argumentation

Part of the Bayesian Argumentation series

What is a good argument? From a logical point of view, a good argument is logically sound. But in the real world, people rarely argue with pure logic. From a rhetorical point of view, a good argument is one that is persuasive. But how can this be measured? In this series of essays, I consider this question from a Bayesian point of view.

Necessity and Sufficiency

Part of the Bayesian Argumentation series

Argument and Information

In the previous essay in this series, we introduced the idea of relevance, and said that a premise is relevant to the conclusion iff $P(A \vert B) > P(A \vert \bar{B})$.

Consider the argument “(𝐴) this is a good candidate for the job because (𝐵) he has a pulse.” Having a pulse may not be a very persuasive reason to hire somebody, but it is probably quite relevant, because if the candidate did not have a pulse, the subject would probably be much less likely to want to hire him. That is, $P(A \vert B) > P(A \vert \bar{B})$.
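As a rough illustration of the relevance condition, here is a small sketch (with made-up numbers, not from the article) that computes $P(A \vert B)$ and $P(A \vert \bar{B})$ from a hypothetical joint distribution and checks whether the premise is relevant.

```python
# Hypothetical joint probabilities over A ("good candidate") and B ("has a pulse").
# These numbers are illustrative only.
joint = {
    (True, True): 0.3000,   # good candidate, has a pulse
    (False, True): 0.6900,  # not a good candidate, has a pulse
    (True, False): 0.0001,  # good candidate, no pulse (essentially impossible)
    (False, False): 0.0099,
}

def p_a_given_b(joint, b_value):
    """P(A | B = b_value) computed from the joint table."""
    p_b = sum(p for (_, b), p in joint.items() if b == b_value)
    p_a_and_b = sum(p for (a, b), p in joint.items() if a and b == b_value)
    return p_a_and_b / p_b

relevant = p_a_given_b(joint, True) > p_a_given_b(joint, False)
print(relevant)  # True: P(A|B) > P(A|~B), so the premise is relevant
```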

Informativeness and Persuasiveness

Part of the Bayesian Argumentation series

Why Accept the Premise?

In the previous essay in this series, we defined the ideas of necessity and sufficiency from the perspective of a Bayesian rational agent. If an argument is necessary, then rejecting the premise would decrease the subject’s acceptance of the conclusion; if it is sufficient, then accepting the premise would increase it.
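Read literally, that description amounts to comparing the subject’s conditional beliefs against their current belief in the conclusion. The sketch below encodes just that qualitative reading (the numbers are made up, and the series may quantify these terms more precisely).

```python
def is_necessary(p_a, p_a_given_not_b):
    """Rejecting the premise would lower the subject's belief in the conclusion."""
    return p_a_given_not_b < p_a

def is_sufficient(p_a, p_a_given_b):
    """Accepting the premise would raise the subject's belief in the conclusion."""
    return p_a_given_b > p_a

# Illustrative numbers only:
p_a, p_a_given_b, p_a_given_not_b = 0.5, 0.8, 0.2
print(is_necessary(p_a, p_a_given_not_b))  # True
print(is_sufficient(p_a, p_a_given_b))     # True
```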

Warrants and Corelevance

Part of the Bayesian Argumentation series

This is the final article in my series on Bayesian Argumentation. To understand this essay, read the introductory article and the article on [Relevance and Acceptance](/relevance-and-acceptance).

Relevance is Not Absolute

Relevance exists in the context of the subject’s other prior beliefs. For example, if the subject believes that ($\bar{C}$) the car is out of gas, and also ($\bar{B}$) the battery is dead, then both of these are good reasons to believe ($\bar{A}$) the car won’t start.
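One way to see this dependence is with a toy model (illustrative only, not from the article): if the car starts only when it has gas and the battery works, then how much the battery matters depends on how strongly the subject already believes there is gas.

```python
# Toy model (illustrative only): the car starts only if it has gas AND the
# battery works, with the two treated as independent.
def p_starts(p_has_gas, p_battery_ok):
    return p_has_gas * p_battery_ok

for p_has_gas in (0.99, 0.01):
    relevance_of_battery = p_starts(p_has_gas, 1.0) - p_starts(p_has_gas, 0.0)
    print(f"P(has gas) = {p_has_gas}: relevance of the battery = {relevance_of_battery:.2f}")
# If the subject is already convinced the car is out of gas, learning that the
# battery is dead barely changes their belief about whether the car will start.
```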

Bayesian Argumentation Definitions

Part of the Bayesian Argumentation series

Bayesian Argumentation: Summary of Definitions

Below is a summary of all the terms and equations defined in the essays in this series, followed by a detailed example.

  • Claim: A statement that one can agree or disagree with

  • Premise: A claim intended to support or oppose some conclusion

  • Conclusion: The claim supported or opposed by the premise

  • Argument: A premise asserted in support of or opposition to some conclusion

For an argument with premise 𝐵 and conclusion 𝐴, and a subject whose beliefs are represented by probability measure P…
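One way to picture how these terms fit together is as simple data structures. The sketch below is purely illustrative (the class and variable names are mine, not part of the model): a claim is something one can agree or disagree with, an argument pairs a premise with a conclusion, and the subject’s probability measure P is stood in for by a mapping from claims to degrees of acceptance.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    """A statement that one can agree or disagree with."""
    text: str

@dataclass(frozen=True)
class Argument:
    """A premise asserted in support of (or opposition to) a conclusion."""
    premise: Claim
    conclusion: Claim

a = Claim("This is a good candidate for the job")
b = Claim("He has a pulse")
argument = Argument(premise=b, conclusion=a)

# The subject's beliefs, standing in for the probability measure P.
beliefs = {a: 0.5, b: 0.99}
```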

A Bayesian Inference Primer

“When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”

– Sherlock Holmes (Arthur Conan Doyle)

For a long time Bayesian inference was something I understood without really understanding it. I only really got it after reading Chapter 2 of John K. Kruschke’s textbook Doing Bayesian Data Analysis, where he describes Bayesian inference as “reallocation of credibility across possibilities.”

I now understand Bayesian Inference to be essentially a mathematical generalization of Sherlock Holmes’ pithy statement about eliminating the impossible. This article is my own attempt to elucidate this idea. If this essay doesn’t do the trick, you might try Bayesian Reasoning for Intelligent People by Simon DeDeo or Kruschke’s Bayesian data analysis for newcomers.
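To show what “reallocation of credibility across possibilities” looks like mechanically, here is a minimal sketch (illustrative, not from the article): start with a prior over the possibilities, weight each by how likely the evidence is under it, and renormalize. A possibility the evidence rules out gets its credibility reallocated to the ones that remain.

```python
def update(prior, likelihood):
    """Bayes' rule as reallocation of credibility: weight each possibility by
    prior * likelihood, then renormalize so the credibilities sum to 1."""
    weighted = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(weighted.values())
    return {h: w / total for h, w in weighted.items()}

# Illustrative example: three suspects, equally credible a priori.
prior = {"Mustard": 1/3, "Plum": 1/3, "Scarlett": 1/3}
# Evidence that is impossible if Plum did it eliminates that possibility.
likelihood = {"Mustard": 0.5, "Plum": 0.0, "Scarlett": 0.5}
print(update(prior, likelihood))  # {'Mustard': 0.5, 'Plum': 0.0, 'Scarlett': 0.5}
```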

The Meta-Reasoner

Part of the Distributed Bayesian Reasoning series

In the Introduction to Distributed Bayesian Reasoning, we argue that the rules of Bayesian inference can enable a form of distributed reasoning.

In this article we introduce the idea of the meta-reasoner: a hypothetical, fully informed average juror.

The meta-reasoner resembles the average juror in that it holds prior beliefs equal to the average beliefs of the participants, but it is fully informed because it holds beliefs for every relevant sub-jury.
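As a rough sketch of the “average beliefs” part only (illustrative; the actual distributed Bayesian reasoning model also conditions on the beliefs of each relevant sub-jury and is more involved), the meta-reasoner’s prior for a claim could be taken as the mean of the jurors’ probabilities:

```python
def meta_reasoner_prior(juror_beliefs):
    """Average the jurors' probabilities for a claim (illustrative simplification)."""
    return sum(juror_beliefs) / len(juror_beliefs)

jurors = [0.9, 0.6, 0.3]  # each juror's probability that the claim is true
print(meta_reasoner_prior(jurors))  # 0.6
```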