abramdemski

Comments

Radical Probabilism

Yeah, I don't think this can be generalized to model a radical probabilist in general, but it does seem like a relevant example of "extra-bayesian" (but not totally non-bayesian) calculations which can be performed to supplement Bayesian updates in practice.

Radical Probabilism

I do not understand how Jeffrey updates lead to path dependence. Is the trick that my probabilities can change without evidence, therefore I can just update B without observing anything that also updates A, and then use that for hocus pocus? Writing that out, I think that's probably it, but as I was reading the essay I wasn't sure where the key step was happening.

hmmmm. My attempt at an English translation of my example:

A and B are correlated, so moving B to 60% (up from 50%) makes A more probable as well. But then moving A up to 60% is less of a move for A. This means that (A&¬B) ends up smaller than (B&¬A): each gets dragged up by one update and down by the other, but (B&¬A) was dragged up by the larger update and down by the smaller.
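
To make this concrete, here's a minimal sketch of the two update orders (the joint prior is made up for illustration; only the qualitative pattern matters):

```python
# Minimal sketch of the path dependence (illustrative numbers, not from the post).
# Joint prior over (A, B) with P(A) = P(B) = 0.5 and positive correlation.
prior = {(1, 1): 0.3, (1, 0): 0.2, (0, 1): 0.2, (0, 0): 0.3}

def jeffrey_update(p, var, new_prob):
    """Jeffrey update: rescale the var=1 cells to total new_prob, the var=0 cells to 1 - new_prob."""
    old = sum(v for k, v in p.items() if k[var] == 1)
    return {k: v * (new_prob / old if k[var] == 1 else (1 - new_prob) / (1 - old))
            for k, v in p.items()}

# Path 1: move B (index 1) to 60%, then A (index 0) to 60%.
p1 = jeffrey_update(jeffrey_update(prior, 1, 0.6), 0, 0.6)
# Path 2: the same two moves in the opposite order.
p2 = jeffrey_update(jeffrey_update(prior, 0, 0.6), 1, 0.6)

print(p1[(1, 0)], p1[(0, 1)])  # A&~B ~= 0.185 < B&~A = 0.200, as described above
print(p2[(1, 0)], p2[(0, 1)])  # the reverse order gives the mirror image: 0.200 vs ~0.185
```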

Okay, I got tired and skipped most of the virtual evidence section (it got tough for me). You say "Exchange Virtual Evidence" and I would be interested in a concrete example of what that kind of conversation would look like. 

It would be nice to write a whole post on this, but the first thing you need to do is distinguish between likelihoods and probabilities.

The notation may look pointless at first. The main usage has to do with the way we usually regard the first argument as variable and the second as fixed. IE, "a probability function sums to one" can be understood as P(A|B)+P(¬A|B)=1; we more readily think of A as variable here. In a Bayesian update, we vary the hypothesis, not the evidence, so it's more natural to think in terms of a likelihood function, L(H|E).

In a Bayesian network, you propagate probability functions down links, and likelihood functions up links. Hence Pearl distinguished between the two strongly.

Likelihood functions don't sum to 1. Think of them as fragments of belief which aren't meaningful on their own until they're combined with a probability.

Base-rate neglect can be thought of as confusion of likelihood for probability. The conjunction fallacy could also be explained in this way.
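
A toy numerical version of the distinction (the test numbers here are made up):

```python
# Toy illustration (made-up numbers): likelihoods vs. probabilities.
# Evidence E = "the test came back positive"; hypotheses = disease vs. healthy.
likelihood = {"disease": 0.9, "healthy": 0.2}   # L(H|E) = P(E|H); sums to 1.1, not 1
prior      = {"disease": 0.01, "healthy": 0.99} # the base rate; this *does* sum to 1

# The likelihood fragment only becomes a belief once combined with the prior:
unnorm    = {h: likelihood[h] * prior[h] for h in prior}
posterior = {h: v / sum(unnorm.values()) for h, v in unnorm.items()}

print(posterior["disease"])  # ~0.043
# Base-rate neglect = reading L(disease|E) = 0.9 as if it were P(disease|E),
# i.e. treating the likelihood fragment as though it were already a probability.
```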

I wish it were feasible to get people to use "likely" vs "probable" in this way. Sadly, that's unprobable to work.

I'm imagining it's something like "I thought for ages and changed my mind, let me tell you why".

What I'm pointing at is really much more outside-view than that. Standard warnings about outside view apply. ;p 

An example of exchanging probabilities is: I assert X, and another person agrees. I now know that they assign a high probability to X. But that does not tell me very much about how to update.

Exchanging likelihoods instead: I assert X, and the other person tells me they already thought that for unrelated reasons. This tells me that their agreement is further evidence for X, and I should update up.

Or, a different possibility: I assert X, and the other person updates to X, and tells me so. This doesn't provide me with further evidence in favor of X, except insofar as they acted as a proof-checker for my argument.

"Exchange virtual evidence" just means "communicate likelihoods" (or just likelihood ratios!)

Exchanging likelihoods is better than exchanging probabilities, because likelihoods are much easier to update on.

Granted, exchanging models is much better than either of those two ;3 However, it's not always feasible. There's the quick conversational examples like I gave, where someone may just want to express their epistemic state wrt what you just said in a way which doesn't interrupt the flow of conversation significantly. But we could also be in a position where we're trying to integrate many expert opinions in a forecasting-like setting. If we can't build a coherent model to fit all the information together, virtual evidence is probable to be one of the more practical and effective ways to go.
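
To make "communicate likelihoods" concrete, here's a minimal sketch of the odds form of the update (the numbers are invented):

```python
# Sketch (made-up numbers): updating on a reported likelihood ratio instead of a probability.
# I believe X at 70%. Instead of telling me "I'm at 80%" (a probability, hard to update on),
# you tell me your independent observations favor X four to one (a likelihood ratio).
prior = 0.70
likelihood_ratio = 4.0   # P(your report | X) / P(your report | not-X)

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio        # virtual-evidence update: multiply odds
posterior = posterior_odds / (1 + posterior_odds)

print(posterior)  # ~0.90 -- the likelihood ratio slots directly into my own beliefs
```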

Radical Probabilism

The post says that radical probabilism rejects #3-#5, but also that the Jeffrey update is derived from rigidity (#5), which sounds like a contradiction.

Jeffrey doesn't see Jeffrey updates as normative! Like Bayesian updates, they're merely one possible way to update.

This is also part of why Pearl sounds like a critic of Jeffrey when in fact the two largely agree -- you have to realize that Jeffrey isn't advocating Jeffrey updating in a strong way, only using it as a kind of gateway drug to the more general fluid updates.

I don’t get why the proof of conservation of expected evidence is relevant. It seems to assume that not only do I know how I will update, but that the bookie does too, which seems like an odd and overpowered assumption, and feels in contrast with all the things you said about rigidity – why does the bookie get to know how I’ll update?

Hmm. It seems like a proper reply to this would be to step through the argument more carefully -- maybe later? But no, the argument doesn't require either of those. It requires only that you have some expectation about your update, and the bookie knows what that is (which is pretty standard, because in dutch book arguments the bookies generally have access to your beliefs). You might have a very broad distribution over your possible updates, but there will still be an expected value, which is what's used in the argument.
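
A minimal numeric sketch of the constraint at issue (the distribution over my own future update is invented):

```python
# Sketch: the argument only uses my *expectation* over my own future update.
# Say P(A) = 0.5 today, and I have a broad distribution over where I might end up tomorrow:
possible_posteriors = [0.1, 0.4, 0.6, 0.9]
chance_of_each      = [0.1, 0.4, 0.4, 0.1]   # my beliefs about my own future update

expected_posterior = sum(p * q for p, q in zip(possible_posteriors, chance_of_each))
print(expected_posterior)  # 0.5 -- matches the prior, so conservation of expected evidence holds

# If this expectation differed from P(A), a bookie who knows only the expectation (not which
# posterior I'll actually land on) could trade against me today and reverse the trade
# tomorrow, profiting in expectation -- that's the role the bookie plays in the argument.
```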

I didn't follow the argument that classical bayesians don't have calibration. I think it's just saying that classical bayesianism doesn't have any role for self-reference, and that's a big deal? I don't think this means bayesians aren't calibrated, just that they don't have calibration as an explicit part of their model.

Like convergence, this is dependent on the prior, so I can't say that classical Bayesians are never calibrated (although one could potentially prove some pretty strong negative results, as is the case with convergence?). I didn't really include any argument, I just stated it as a fact.

What I can say is that classical Bayesianism doesn't give you tools for getting calibrated. How do you construct a prior so that it'll have a calibration property wrt learning? Classical Bayesianism doesn't, to my knowledge, talk about this. Hence, by default, I expect most priors to be miscalibrated in practice when grain-of-truth (realizability) doesn't hold.

For example, I'm not sure whether Solomonoff induction has a calibration property -- nor whether it has a convergence property. These strike me as mathematically complex questions. What I do know is that the usual path to prove nice properties for Solomonoff induction doesn't let you prove either of these things. (IE, we can't just say "there's a program in the mixture that's calibrated/convergent, so...." ... whereas logical induction lets you argue calibration and convergence via the relatively simple "there are traders which enforce these properties")
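
For concreteness, here's what the calibration property is asking for, computed on made-up forecast data:

```python
# Sketch: an empirical calibration check on made-up forecasts.
from collections import defaultdict

# Each pair is (stated probability, whether the event actually happened).
forecasts = [(0.9, True), (0.9, True), (0.9, False), (0.9, True),
             (0.6, False), (0.6, True), (0.6, False), (0.6, False)]

buckets = defaultdict(list)
for p, outcome in forecasts:
    buckets[p].append(outcome)

for p, outcomes in sorted(buckets.items()):
    freq = sum(outcomes) / len(outcomes)
    print(f"said {p:.0%}, happened {freq:.0%}")
# said 60%, happened 25%; said 90%, happened 75% -- miscalibrated.
# A calibrated reasoner's curve approaches the diagonal; logical induction enforces this
# with traders that bet whenever an empirical frequency drifts away from the stated odds.
```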

Radical Probabilism

The solution is too large to fit in the margins, eh? j/k, I know there's a real paper. Should I go break my brain trying to read it, or wait for your explanation?

Oh, I definitely don't have a better explanation of that in the works at this point.

Re: convergence - how real of a problem is this? In your example you had two hypotheses that were precisely equally wrong. Does convergence still fail if the true probability is 0.500001?

My main concern here is if there's an adversarial process taking advantage of a system, as in the trolling mathematicians work.

In the case of mathematical reasoning, though, the problem is quite severe -- as is hopefully clear from what I've linked above: although an adversary can greatly exacerbate the problem, even a normal non-adversarial stream of evidence is going to keep flipping the probability up and down by non-negligible amounts. (And although the post offers a solution, it's a pretty dumb prior, and I argue all priors which avoid this problem will be similarly dumb.)
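
To address the 0.500001 question directly, here's a small simulation in the spirit of the post's example (two hypotheses, p = 0.4 and p = 0.6, facing a nearly fair coin); the tiny edge only wins out after astronomically many flips, so over any practical horizon the posterior behaves just like the equally-wrong case:

```python
# Sketch: two-hypothesis prior (p = 0.4 vs p = 0.6) facing a true coin with p = 0.500001.
# The log-odds perform a nearly unbiased random walk, so the posterior stays pinned near
# 0 or 1 and keeps flipping sides rather than settling down.
import math
import random

random.seed(0)
true_p = 0.500001
step = math.log(0.6 / 0.4)
log_odds = 0.0                       # log P(h = 0.6) / P(h = 0.4), starting from a 50/50 prior

for t in range(1, 1_000_001):
    heads = random.random() < true_p
    log_odds += step if heads else -step
    if t % 100_000 == 0:
        posterior = 0.5 * (1 + math.tanh(log_odds / 2))   # overflow-safe sigmoid
        print(t, round(posterior, 3))
# The drift from the 0.000001 edge is ~8e-7 per flip, versus a random-walk spread of
# ~0.4 * sqrt(n), so the edge only dominates after roughly 10^11 flips.
```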

Re: calibration - I still believe that this can be included if you are jointly estimating your model and your hypothesis.

I don't get this at all! What do you mean?

As you'll hopefully agree at this point, we can always manufacture the 100% condition by turning it into virtual evidence.

As I discussed earlier, I agree, but with the major caveat that the likelihoods for the update aren't found by first determining what you're updating on and then looking up the likelihoods for that; instead, we have to determine the likelihoods and then update on the virtual-evidence proposition with those likelihoods.

That's not quite what I had in mind, but I can see how my 'continuously valued' comment might have thrown you off. A more concrete example might help: consider Example 2 in this paper. It posits three events:

[...]

What I'm saying is that you can just treat z as being an event itself, and do a Bayesian update from the likelihood [...]

Hmmmm. Unfortunately I'm not sure what to say to this one except that in logical induction, there's not generally a pre-existing z we can update on like that.

Take the example of calibration traders. These guys can be described as moving probabilities up and down to account for the calibration curves so far. But that doesn't really mean that you "update on the calibration trader" and e.g. move your 90% probabilities down to 89% in response. Instead, what happens is that the system takes a fixed-point, accounting for the calibration traders and also everything else, finding a point where all the various influences balance out. This point becomes the next overall belief distribution.

So the actual update is a fixed-point calculation, which isn't at all a nice formula such as multiplying all the forces pushing probabilities in different directions (finding the fixed point isn't even a continuous function).

We can make it into a Bayesian update on 100%-confident evidence by modeling it as virtual evidence, but the virtual evidence is sorta pulled from nowhere, just an arbitrary thing that gets us from the old belief state to the new one. The actual calculation of the new belief state is, as I said, the big fixed point operation.

You can't even say that we're updating on all those forces pulling the distribution in different directions, because there is more than one fixed point of those forces. We don't want to have uncertainty about which of those fixed points we end up in; that would give the wrong thing. So we really have to update on the fixed point itself, which is already the answer to what to update to; not some information which we have pre-existing beliefs about and can figure out likelihood ratios for to figure out what to update to.
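
A cartoon of what I mean by the fixed-point calculation (this is only a toy with two hand-written "traders", not the actual logical induction algorithm):

```python
# Cartoon of the fixed-point step (toy pressures, not the real logical induction machinery).
# Each "trader" pushes a candidate probability p up or down; the next belief is a p where
# the pushes balance, found by search rather than by multiplying likelihoods together.

def calibration_pressure(p):
    # e.g. "my ~90% predictions have only come true 85% of the time, so push down up there"
    return -0.5 * (p - 0.85) if p > 0.8 else 0.0

def other_pressure(p):
    # some other trader pushing toward 0.93 for its own reasons
    return 0.3 * (0.93 - p)

def net_pressure(p):
    return calibration_pressure(p) + other_pressure(p)

# Find the balance point in [0, 1] by bisection (the net pressure changes sign only once here).
lo, hi = 0.0, 1.0
for _ in range(50):
    mid = (lo + hi) / 2
    if net_pressure(mid) > 0:
        lo = mid
    else:
        hi = mid

print(round((lo + hi) / 2, 4))  # ~0.88: the balanced point becomes the next belief
```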

So that's my real crux, and any examples with telephone calls and earthquakes etc are merely illustrative for me. (Like I said, I don't know how to actually motivate any of this stuff except with actual logical uncertainty, and I'm surprised that any philosophers would have become convinced just from other sorts of examples.)

Radical Probabilism

Not necessarily. Typically, the events would be all the Lebesgue measurable subsets of the state space. That's large enough to furnish a suitable event to play the role of the virtual evidence.

True, although I'm not sure this would be "virtual evidence" anymore by Pearl's definition. But maybe this is the way it should be handled.

A different way of spelling out what's unique about virtual evidence is that, while compatible with Bayes, the update itself is calculated via some other means (IE, we get the likelihoods from somewhere else than looking at pre-defined likelihoods for the proposition we're updating on).

This also resolves Eliezer's Pascal's Muggle conundrum over how he should react to 10^10-to-1 evidence in favour of something for which he has a probability of 10^100-to-1 against. The background information X that went into the latter figure is called into question.

Does it, though? If you were going to call that background evidence into question for a mere 10^10-to-1 evidence, should the probability have been 10^100-to-1 against in the first place?
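
Spelling out the odds arithmetic behind that worry:

$$
\text{posterior odds} = 10^{-100} \times 10^{10} = 10^{-90},
$$

i.e. the update still leaves roughly $10^{90}$-to-1 against, unless the background information X behind the original figure itself gets doubted.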

Radical Probabilism

I'll try to write some good follow-up posts for you to also curate ;3

nostalgebraist: Recursive Goodhart's Law

That does seem better. But I don't think it fills the shoes of the general notion of optimization people use.

  • Either you mean negative feedback loops specifically, or you mean to include both negative and positive feedback loops. But both choices seem a little problematic to me.
    • Negative: this seems to imply that all optimization is a kind of homeostasis. But it seems as if some optimization, at least, can be described as "more is better" (ie the more typical utility framing). It's hard to see how to characterize that as a negative feedback loop.
    • Both: this would include all positive feedback loops as well. But I think not all positive feedback loops have the optimization flavor. For example, there is a positive feedback loop between how reflective a water-rich planet's surface is and how much snow forms. Snow makes the surface more reflective, which makes it absorb less heat, which makes it colder, which makes more snow form. Idk, maybe it's fine for this to count as optimization? I'm not sure.
      • Does guess-and-check search even count as a positive feedback loop? It's definitely optimization in the broad sense, but finding one improvement doesn't help you find another, because you're just sampling each next point to check independently. So it doesn't seem to fit that well into the feedback loop model. (See the sketch after this list.)
  • Feedback loops seem to require measurable (observable) targets. But broadly speaking, optimization can have non-measurable targets. An agent can steer the world based on a distant goal, such as aiming for a future with a flourishing galactic civilization.
    • Now, it's possible that the feedback loop model can recover from this one by pointing to a (positive) feedback loop within the agent's head, where the agent is continually searching for better plans to implement. However, I believe this is not always going to be the case. The inside of an agent's head can be a weird place. The "agent" could just be implementing a hardwired policy which indeed steers towards a distant goal, but which doesn't do it through any simple feedback loop. (Of course there is still, broadly speaking, feedback between the agent and the environment. But the same could be said of a rock.) I think this counts as the broad notion of optimization, because it would count under Eliezer's definition -- the agent makes the world end up surprisingly high in the metric, even though there's no feedback loop between it and the metric, nor an internal feedback loop involving a representation/expectation for the metric.
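
Here's the guess-and-check case from the list above as a concrete sketch (the objective function is made up); it optimizes in the broad sense, but no result feeds back into how the next guess is generated:

```python
# Sketch of guess-and-check (random) search: broad-sense optimization with no feedback loop.
import random

def objective(x):
    return -(x - 3.2) ** 2   # made-up function to maximize

best_x, best_val = None, float("-inf")
for _ in range(10_000):
    x = random.uniform(-10, 10)      # the next guess ignores everything found so far
    if objective(x) > best_val:
        best_x, best_val = x, objective(x)

print(best_x)   # ends up near 3.2, yet past results never influence future guesses
```
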
nostalgebraist: Recursive Goodhart's Law

I think (1) Dagon is right that if we consider a purely behavioral perspective the distinction gets meaningless at the boundaries, trying to distinguish between highly complex values vs incoherence; any set of actions can be justified via some values; (2) humans are incoherent, in the sense that there are strong candidate partial specifications of our values (most of us like food and sex) and we're not always the most sensible in how we go about achieving them; (3) also, to the extent that humans can be said to have values, they're highly complex.

The thing that makes these three statements consistent is that we use more than just a behavioral lens to judge "human values".

nostalgebraist: Recursive Goodhart's Law

I guess it depends on whether you want to keep "optimization" as a referent to the general motion that is making the world more likely to be one way rather than another, or a specific type of making the world more likely to be one way rather than another.

I suspect you're trying to gesture at a slightly better definition here than the one you give, but since I'm currently in the business of arguing that we should be precise about what we mean by 'optimization'... what do you mean here?

Just about any element of the world will "make the world more likely to be one way rather than another".

nostalgebraist: Recursive Goodhart's Law

I did not mean to say that "everything is an optimization process". I did mean to say that decisions are an optimization process, and I now realize even that's too strong. I suspect all I can actually assert is that "intentionality is an optimization process".

Oh, I didn't mean to accuse you of that. It's more that this is a common implicit frame of reference (including/especially on LW).

I rather suspect the correct direction is to break down "optimization" into more careful concepts (starting, but not finishing, with something like selection vs control).
