
Comment author: ike 02 August 2015 10:14:55PM 0 points [-]

In that case, your Bayes Factor will be either 2/0, or 0/2.

Log of the first is infinity, log of the second is negative infinity.

The average of those two numbers is (insert handwave here) 0.

(If you expand each term with the log-of-a-quotient rule, this actually works.)

Comment author: jsteinhardt 03 August 2015 02:09:42AM *  1 point [-]

Replace 1/2 and 1/2 in the prior with 1/3 and 2/3, and I don't think you can make them cancel anymore.
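
A quick numerical check of this (a toy sketch only; the "definitive" 0/1 likelihoods are softened to 0.99/0.01 so the logs stay finite, and all numbers are purely illustrative):

```python
# Toy sketch: expected log Bayes factor, averaged over the prior predictive.
from math import log

def expected_log_bayes_factor(prior_h1, p_a_given_h1, p_a_given_h2):
    """E[log(p(x|H1) / p(x|H2))], averaged over the prior predictive p(x)."""
    prior_h2 = 1 - prior_h1
    total = 0.0
    for lik_h1, lik_h2 in [(p_a_given_h1, p_a_given_h2),           # observe A
                           (1 - p_a_given_h1, 1 - p_a_given_h2)]:  # observe B
        p_x = prior_h1 * lik_h1 + prior_h2 * lik_h2   # prior predictive weight
        total += p_x * log(lik_h1 / lik_h2)           # log Bayes factor for x
    return total

print(expected_log_bayes_factor(1/2, 0.99, 0.01))  # ~0.0: the +/- terms cancel
print(expected_log_bayes_factor(1/3, 0.99, 0.01))  # ~-1.5: they no longer cancel
```

With the 1/2-1/2 prior the positive and negative log terms get equal weight and cancel; with 1/3-2/3 they get unequal weights, which is why the handwave above stops working.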

In response to comment by So8res on MIRI's Approach
Comment author: Kaj_Sotala 02 August 2015 11:28:18PM *  9 points [-]

Hmm. If you have lots and lots of computing power, you can always just... not use it. It's not clear to me how additional computing power can make the problem harder -- at worst, it can make the problem no easier.

Additional computing power might not make the problem literally harder, but the assumption of limitless computing power might direct your attention towards wrong parts of the search space.

For example, I suspect that the whole question about multilevel world-models might be something that arises from conceptualizing intelligence as something like AIXI, which implicitly assumes that there's only one true model of the world. It can do this because it has infinite computing power and can just replace its high-level representation of the world with one where all high-level predictions are derived from the basic atom-level interactions, something that would be intractable for any real-world system to do. Instead, real-world systems will need to flexibly switch between different kinds of models depending on the needs of the situation, and use lower-level models in situations where the extra precision is worth the expense of extra computing time. Furthermore, those lower-level models will have been defined in terms of what furthers the system's goals, as defined at the higher levels: it will pay preferential attention to those features of the lower-level model that allow it to further its higher-level goals.

In the AIXI framing, the question of multilevel world-models is "what happens when the AI realizes that the true world model doesn't contain carbon atoms as an ontological primitive". In the resource-limited framing, that whole question isn't even coherent, because the system has no such thing as a single true world-model. Instead the resource-limited version of how to get multilevel world-models to work is something like "how to reliably ensure that the AI will create a set of world models in which the appropriate configuration of subatomic objects in the subatomic model gets mapped to the concept of carbon atoms in the higher-level model, while the AI's utility function continues to evaluate outcomes in terms of this concept regardless of whether it's using the lower- or higher-level representation of it".
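
A very loose sketch of what that reframed question is asking for (the "proton-counting" rule and everything else here is made up purely for illustration; the hard part, reliably constructing the mapping itself, is simply assumed):

```python
# Illustrative only: a utility function defined over a high-level concept
# ("carbon"), evaluated either on a high-level state directly or on a
# low-level state via an ontology mapping.

def map_low_to_high(low_state):
    """Map a 'subatomic' description to the high-level concepts it realizes."""
    concepts = set()
    if low_state.get("protons") == 6:
        concepts.add("carbon")
    if low_state.get("protons") == 8:
        concepts.add("oxygen")
    return concepts

def utility(state, level):
    """Utility only ever talks about high-level concepts; low-level states
    are translated first, so both representations get the same evaluation."""
    concepts = set(state) if level == "high" else map_low_to_high(state)
    return 1.0 if "carbon" in concepts else 0.0

print(utility({"carbon"}, level="high"))                    # 1.0
print(utility({"protons": 6, "neutrons": 6}, level="low"))  # 1.0, same verdict
```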

As an aside, this reframed version seems like the kind of question that you would need to solve in order to have any kind of AGI in the first place, and one which experimental machine learning work would seem the best suited for, so I'd assume it to get naturally solved by AGI researchers even if they weren't directly concerned with AI risk.

In response to comment by Kaj_Sotala on MIRI's Approach
Comment author: jsteinhardt 03 August 2015 02:05:37AM -1 points [-]


Comment author: ike 02 August 2015 02:33:36PM 0 points [-]

I'm claiming the second. I was framing it in my mind as "on average, the factor will be 1", but on further thought, the kind of "average" required is the average of the log. I should probably use log in the future for statements like that.

Also, what is the expectation with respect to?

The prior.

Comment author: jsteinhardt 02 August 2015 08:15:13PM 0 points [-]

This seems wrong, then. Imagine you have two hypotheses on which you place equal probability, and you then see an observation that definitively selects one of the two as correct. E[p(x)] = 1/2 both before and after the observation, but E[log p(x)] is -1 vs. -infinity.
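
Working through those numbers (log base 2, so that log(1/2) = -1; on one natural reading, p here is the probability assigned to a fixed hypothesis H1):

```python
from math import log2

prior = 0.5
# The observation reveals which hypothesis is true, so the posterior on H1
# ends up at 1 or at 0, each with prior probability 1/2.
outcomes = [(0.5, 1.0),   # H1 confirmed
            (0.5, 0.0)]   # H1 refuted

print(sum(w * p for w, p in outcomes))  # 0.5: E[p] is the same before and after

print(log2(prior))                      # -1.0: log of the prior
print(sum(w * (log2(p) if p > 0 else float("-inf")) for w, p in outcomes))  # -inf
```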

Comment author: ike 30 July 2015 02:42:39PM 0 points [-]

That's probably a better way of putting it. I'm trying to intuitively capture the idea of "no expected evidence"; you can frame that in multiple ways.

Comment author: jsteinhardt 02 August 2015 01:24:36PM 0 points [-]

Huh? E[X] = 1 and E[log(X)] = 0 are two very different claims; which one are you actually claiming?

Also, what is the expectation with respect to? Your prior or the data distribution or something else?
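
To make the difference concrete (a toy illustration with arbitrary numbers): a factor X that is 1/2 or 3/2 with equal probability has E[X] = 1 but E[log X] < 0.

```python
from math import log

values, probs = [0.5, 1.5], [0.5, 0.5]
print(sum(p * x for p, x in zip(probs, values)))       # 1.0   -> E[X] = 1
print(sum(p * log(x) for p, x in zip(probs, values)))  # -0.14 -> E[log X] < 0
```

By Jensen's inequality E[log X] <= log E[X], with equality only when X is constant, so the two claims coincide only in special cases.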

Comment author: gjm 24 July 2015 05:21:54PM 2 points [-]

Yes, verification is a strictly simpler problem, and one that's fairly thoroughly addressed by existing research -- which is why people working specifically on AI safety are paying attention to other things.

(Maybe they should actually be working on doing verification better first, but that doesn't seem obviously a superior strategy.)

Some AI takeover scenarios involve hacking (by the AI, of other systems). We might hope to make AI safer by making that harder, but that would require securing all the other important computer systems in the world. Even though making an AI safe is really hard, it may well be easier than that.

Comment author: jsteinhardt 27 July 2015 02:22:34AM 3 points [-]

Yes, verification is a strictly simpler problem, and one that's fairly thoroughly addressed by existing research -- which is why people working specifically on AI safety are paying attention to other things.

This doesn't really seem true to me. We are currently pretty bad at software verification, only able to deal with either fairly simple properties or fairly simple programs. I also think that people in verification do care about the "specification problem", which is roughly problem 2 above (although I don't think anyone really has that many ideas for how to address it).

Comment author: TheAncientGeek 23 July 2015 02:55:49PM *  9 points [-]

Arguments against AI risk, or arguments against the MIRI conception of AI risk?

I have heard a hint of a whisper of a rumour that I am considered a bit of a contrarian around here... but I am actually a little more convinced of AI threat in general than I used to be before I encountered Less Wrong. (In particular, at one time, I would have said "just pull the plug out", but there's some mileage in the unknowing arguments.)

The short version of the argument against MIRI's version of AI threat is that it is highly conjunctive. The long version is long, a consequence of having a multi-stage argument with a fan-out of alternative possibilities at each stage.

Comment author: jsteinhardt 23 July 2015 06:10:02PM 8 points [-]

For an argument against at least some of MIRI's technical agenda, see Paul Christiano's Medium post.

Comment author: eternal_neophyte 21 July 2015 12:41:22PM 1 point [-]

If your comments are that watertight, perhaps you should spin them into articles?

Comment author: jsteinhardt 21 July 2015 02:51:42PM 0 points [-]

Yeah I guess I'm really talking about posts (i.e. articles) more than comments.

Comment author: eternal_neophyte 20 July 2015 06:19:35PM 4 points [-]

Upvotes are, in my opinion, a poor metric for measuring the quality of a post. You're confusing information on how insightful, thoughtful or useful your writing is with information on how pleasing it is to the upvoter, whether because it provides social confirmation of their beliefs or entertains them for other reasons.

A much more useful way to measure the quality of your own writing is to look at how interesting or thoughtful the replies you get are: this shows that people find your ideas worth engaging with. This is a subjective assessment, however, that can't be captured by the real line.

Comment author: jsteinhardt 21 July 2015 05:19:15AM 5 points [-]

I think some of my most-researched comments/posts have gotten relatively few replies. The more thorough you are, the less room there is for people to disagree without putting a decent amount of thought in. On the other hand, if you dash out a post without much fact-checking, you'll probably get lots of replies :).

Comment author: TheAncientGeek 19 July 2015 08:25:12AM 0 points [-]

How is a process of reasoning based on an infinite stack of algorithms concluded in a finite amount of time?

Comment author: jsteinhardt 19 July 2015 07:23:03PM 1 point [-]

You can stop recursing whenever you have sufficiently high confidence, which means that your algorithm terminates in finite time with probability 1, while also querying each algorithm in the infinite stack with non-zero probability.
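
A sketch of why that termination argument works (the fixed stopping probability stands in for "sufficiently high confidence"; everything here is illustrative):

```python
import random

def query_stack(level, stop_prob=0.5):
    """Consult algorithm `level` in the stack; accept its answer with
    probability stop_prob, otherwise escalate to level + 1. The depth is
    geometrically distributed, so the recursion halts with probability 1,
    yet level k is still reached with probability (1 - stop_prob) ** k > 0."""
    if random.random() < stop_prob:
        return level  # stand-in for "return the answer algorithm `level` gives"
    return query_stack(level + 1, stop_prob)

depths = [query_stack(0) for _ in range(10_000)]
print(max(depths), sum(depths) / len(depths))  # finite max depth; mean depth ~1
```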

Comment author: Viliam 30 June 2015 09:26:23PM *  0 points [-]

This is probably very arrogant of me to say, but my advice would be: "Listen to the domain expert when he tells you what you should do... and then find a Bayesian and let them explain to you why that works."

In my defense, this was my personal experience with statistics at school. I was very good at math in general, but statistics somehow didn't "click". I always had this feeling as if what was explained was built on some implicit assumptions that no one ever mentioned explicitly, so unlike with the rest of the math, I had no other choice here but to memorize that in a situation x you should do y, because, uhm, that's what my teachers told me to do. -- More than ten years later, I read LW, and here I am told that yes, the statistics that I was taught does have implicit assumptions, and suddenly it all makes sense. And it makes me very angry that no one told me this stuff at school. -- I am a "deep learner" (this, not this), and I have a problem learning something when I am told how but can't find out why. Most people probably don't have a problem with this: they are told how, and they do, and they can be quite successful with it; and probably later they will also get an idea of why. But I need to understand the stuff from the very beginning; otherwise I can't do it well. Telling me to trust a domain expert does not help; I may put a lot of confidence in how, but I still don't know why.

Comment author: jsteinhardt 30 June 2015 10:24:31PM 2 points [-]

ChristianKl is not telling you to trust a domain expert, but rather to read / listen to the domain expert long enough to understand what they are saying (rather than instantly assuming they are wrong because they say something that seems to conflict with your preconceived notions).

I think if you were to read most machine learning books, you would get quite a lot of "why". See this manuscript for instance. I don't really see why you think that Bayesians have a monopoly on being able to explain things.
