[Highlights for the busy: debunking standard "Bayes is optimal" arguments;
frequentist Solomonoff induction; and a description of the online learning
framework.]

Short summary. This essay makes many points, each of which

I've decided to branch out a bit from technical discussions and engage in, as
Scott Aaronson would call it, some metaphysical spouting
[http://www.scottaaronson.com/blog/?cat=12]. The topic

An important concept in online learning and convex optimization is that of
strong convexity: a twice-differentiable function $f$ is said to be strongly
convex with respect to a norm $\|\cdot\|$ if
$z^T \nabla^2 f(x) z \geq \|z\|^2$ for all points $x$ and directions $z$.
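
For concreteness, a standard example is the negative entropy $f(x) = \sum_i x_i \log x_i$, which is strongly convex with respect to the $\ell_1$ norm over the probability simplex (essentially a Pinsker-type inequality). The snippet below is a minimal numerical sketch of that claim; the dimension, trial count, and Dirichlet sampling are arbitrary choices of mine, not anything canonical.

    import numpy as np

    # Negative entropy has Hessian diag(1/x_i), so the condition
    # z^T (grad^2 f(x)) z >= ||z||_1^2 reduces to
    #     sum_i z_i^2 / x_i >= (sum_i |z_i|)^2,
    # which holds whenever sum_i x_i = 1 (by Cauchy-Schwarz).
    rng = np.random.default_rng(0)
    n = 10
    for _ in range(10_000):
        x = rng.dirichlet(np.ones(n))    # random point in the simplex
        z = rng.standard_normal(n)       # random direction
        quad = np.sum(z ** 2 / x)        # z^T diag(1/x) z
        assert quad >= np.sum(np.abs(z)) ** 2 - 1e-9
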
Here's a fun counterexample: a function $\mathbb{R}^n \to \mathbb{R}$ that is
jointly convex in any $n-1$ of the variables, but not in all variables at once.
One function that works is $f(x_1, \ldots, x_n) = (n-1)\sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2$: fixing any single coordinate leaves a convex quadratic in the remaining $n-1$ variables, but the Hessian of the full function has a negative eigenvalue in the direction of the all-ones vector, so $f$ is not jointly convex.
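
A few lines of numpy confirm both halves of the claim numerically (a sketch of my own; $n = 5$ is an arbitrary choice):

    import numpy as np

    n = 5
    # Hessian of f(x) = (n-1) sum_i x_i^2 - (sum_i x_i)^2 is constant:
    H = 2 * ((n - 1) * np.eye(n) - np.ones((n, n)))

    # The full Hessian has eigenvalue -2 along the all-ones direction,
    # so f is not jointly convex in all n variables.
    print(np.linalg.eigvalsh(H).min())            # approximately -2

    # Deleting any one row and column leaves a PSD matrix, so f is
    # jointly convex in the remaining n-1 variables.
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        sub = H[np.ix_(idx, idx)]
        print(i, np.linalg.eigvalsh(sub).min() >= -1e-9)   # True for every i
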
(This post represents research in progress. I may think about these concepts
entirely differently a few months from now, but for my own benefit I'm trying to
exposit on them in their current form.)