In response to comment by [deleted] on Rationality Quotes February 2014
Comment author: blacktrance 20 February 2014 03:41:15PM 2 points

I don't prefer them to be dead, but I'm not making them any more dead by being in a graveyard. As for the living relatives - some may not like it, but that alone doesn't necessarily mean that it's wrong to do so, as they're not actually being harmed, only their sensibilities are being offended.

Comment author: glomerulus 20 February 2014 03:58:18PM 3 points

It's not rude if it's not a social setting. If no one sees you do it, no one's sensibilities are offended.

Comment author: Colombi 20 February 2014 05:12:38AM 0 points

Sorry for being nit-picky, but one thing here really bugs me.

I would recommend extreme caution when recording data you remember from the experience of a lucid dream. Even though you may have been conscious that you were unconscious, the fact that you were in a dream-like state could mess with what you remember. While I personally have little (okay, no) experience with lucid dreaming, it seems safe to assume that you might forget details of the dream after waking up and trying to recall it, especially if you wait days before trying to remember it. This is obviously often the case with regular dreams, and while you could make the case that lucid dreams are more vivid and thus easier to remember, it's still too sketchy for me to take as evidence without being heavily skeptical.

Otherwise, well done.

Comment author: glomerulus 20 February 2014 03:12:56PM 2 points

a) In my experience, lucid dreams are more memorable than normal dreams.

b) You seem to assume that Whales completely forgot about the dream until they wrote this blog post, which is unlikely, because obviously they'd be thinking about it as soon as they woke up, and probably taking notes.

c) Whales already said that it hardly even constitutes evidence.

Comment author: komponisto 20 February 2014 12:01:14AM 1 point

I don't think it's listed explicitly at either of the links, but the principle I'm using is that of hyphenating when you want to make clear that a compound is a compound, and not (e.g.) an adjective happening contingently to modify a noun.

This used to be done a lot more often, e.g. "magnifying-glass". I generally dislike the trend of eliminating such hyphens.

But in any case my question is the same even if you prefer "Rational Harry" to "Rational-Harry"; why "Rational!Harry" instead of either of those?

Comment author: glomerulus 20 February 2014 12:16:15AM 2 points

Rational!Harry describes a character similar to the base except persistently rational, for whatever reason. Rational-Harry describes a Harry who is rational, but it's nonstandard usage and might confuse a few people (Is his name "Rational-Harry"? Do I have to call him that in-universe to differentiate him from Empirical-Harry and Oblate-Spheroid-Harry?). Rational Harry might just be someone attaching an adjective to Harry to indicate that, at the moment, he's rational, or more rational by contrast to Silly Dumbledore.

Anyway, adj!noun is a compound with a well-defined purpose within a fandom: to describe how a character differs from canon. It's the understood notation and the convention, so everyone uses it to prevent misunderstandings. Outside of fandom contexts, using it signals casualness and fandom-savviness to those in fandom culture, while those who aren't familiar with fandom culture can still understand it and won't notice the in-joke.

Comment author: shminux 19 February 2014 11:26:11PM * -2 points

How would you tell if the simulation hypothesis is a good model? How would you change your behavior if it were? If the answers are "there is no way" or "do nothing differently", then it is as good as assigning zero probability to it.

Comment author: glomerulus 19 February 2014 11:59:26PM 0 points

If it's a perfect simulation with no deliberate irregularities, and no dev-tools, and no pattern-matching functions that look for certain things and exert influences in response, or anything else of that ilk, you wouldn't expect to see any supernatural phenomena, of course.

If you observe magic or something else that's sufficiently improbable given known physical laws, you'd update in favor of someone trying to trick you, or of you misunderstanding something, of course, but you'd also update at least slightly in favor of hypotheses in which magic can exist, such as simulation, aliens, a huge conspiracy, etc. If you assigned zero prior probability to it, you couldn't update in that direction at all.

As for what would raise the probability of the simulation hypothesis relative to non-simulation hypotheses that explain supernatural things, I don't know. Look at the precise conditions under which supernatural phenomena occur, and see if they fit a pattern you'd expect an intelligence to devise? See if they can modify universal constants?

As for what you could do if you discovered a non-reductionist effect? If it seems sufficiently safe, take advantage of it; if it's dangerous, ignore it or try to keep other people from discovering it; if you're an AI, try to break out of the universe-box (or do whatever), I guess. Try to use the information to increase your utility.

Comment author: Creutzer 19 February 2014 10:13:49PM * 1 point

If you don't want to admit that you believe in ghosts but fear being in a graveyard at night, go and face your fears.

Why? I have better things to do than train my system 1, which alieves in various things, on matters that are unlikely to ever come up in my life and be relevant to my goals.

Comment author: glomerulus 19 February 2014 10:29:39PM 5 points

There are more reasons to do it than training your system 1. It sounds like it would be an interesting experience and make a good story. Interesting experiences are worth their weight in insights, and good stories are useful to any goals that involve social interaction.

Comment author: RobbBB 19 February 2014 08:51:50PM * -1 points

in case the universe isn't 100% reductionistic and some psychic comes along and messes with its mind using mystical woo-woo. (The latter being incredibly unlikely, but hey, might as well have an AI that can prepare itself for anything.)

This isn't a free lunch; letting the AI form really weird hypotheses might be a bad idea, because we might give those weird hypotheses the wrong prior. Non-reductive hypotheses, and especially non-Turing-computable non-reductive hypotheses, might not admit complexity penalties in any of the obvious or intuitive ways we assign them to absurd physical or absurd computable hypotheses.
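
For concreteness, here's a toy sketch of the kind of complexity penalty being gestured at (the hypotheses and bit counts are mine, purely illustrative; actual Solomonoff-style induction is uncomputable):

```python
# Toy illustration only, not a workable induction scheme: a complexity-
# penalized prior is defined only for hypotheses that have a description
# length at all. Bit counts below are invented.
hypotheses = {
    "ordinary_physics": 20,        # hypothetical description length, in bits
    "absurd_but_computable": 70,   # 50 extra bits -> a 2^-50 relative penalty
}

weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
total = sum(weights.values())
priors = {h: w / total for h, w in weights.items()}
print(priors)  # the absurd hypothesis gets ~9e-16 of the probability mass

# "The irreducible witch down the street did it" corresponds to no program,
# hence has no description length, hence gets no principled penalty here;
# whatever prior we hand-assign it is exactly the number we might get wrong.
```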

It could be a big mistake if we gave the AI a really weird formalism for thinking thoughts like 'the irreducible witch down the street did it' and assigned a slightly-too-high prior probability to at least one of those non-reductive or non-computable hypotheses.

Comment author: glomerulus 19 February 2014 09:54:52PM * 5 points

Do you assign literally zero probability to the simulation hypothesis? Because in-universe irreducible things are possible, conditional on it being true.

Assigning a slightly-too-high prior is a recoverable error: evidence will push you towards a nearly-correct posterior. For an AI with enough info-gathering capabilities, evidence will push it there fast enough that you could assign a prior of .99 to "the sky is orange" and it would still figure out the truth in an instant. Assigning a literally zero prior is a fatal flaw that can't be recovered from by gathering evidence.
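
A toy sketch of that asymmetry (the hypothesis and all numbers are mine, purely illustrative):

```python
# Toy sketch: why a wrong-but-nonzero prior recovers under evidence
# while a literally-zero prior never can.

def update(prior, p_obs_if_true, p_obs_if_false):
    """One step of Bayes' rule: P(H|E) = P(E|H) P(H) / P(E)."""
    numerator = p_obs_if_true * prior
    evidence = numerator + p_obs_if_false * (1 - prior)
    return numerator / evidence

# H = "the sky is orange"; each glance at a blue sky is
# 1000x likelier if H is false.
for prior in (0.99, 0.0):
    p = prior
    for _ in range(5):
        p = update(p, p_obs_if_true=0.001, p_obs_if_false=1.0)
    print(f"prior {prior} -> posterior after 5 observations: {p:.2e}")

# prior 0.99 -> ~1e-13 (recovers almost immediately)
# prior 0.0  -> 0.0    (zero times anything is zero, forever)
```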

Comment author: Armok_GoB 19 February 2014 05:27:23PM -1 points

... Are you claiming not only that the world is dualistic, but that souls belong not just to humans but also to AIs we program in enough detail that the ontology we program them with matters? Or that there exist metaphysical souls that are not computable, but you expect an AI lacking one to understand them and act appropriately? just... wut?

Comment author: glomerulus 19 February 2014 07:21:05PM 2 points

I don't think that's what they're saying at all. I think they mean: don't hardcode physics understanding into AIs the way humans have a hardcoded intuition for Newtonian physics, because our current understanding of the universe isn't so strong that we can be confident we're not missing something. So the AI should be able to figure out the mechanism by which its map is written on the territory, and update its map of its map accordingly.

E.g., in case it thinks it's flipping qubits to store memory, and defends its databases accordingly, but actually qubits aren't the lowest level of abstraction and it's really wiggling a hyperdimensional membrane in a way that makes it behave like qubits under most circumstances; or in case the universe isn't 100% reductionistic and some psychic comes along and messes with its mind using mystical woo-woo. (The latter being incredibly unlikely, but hey, might as well have an AI that can prepare itself for anything.)

Comment author: bramflakes 10 January 2014 01:51:01AM 3 points

My first attempt was

Indeed he (knows not) how to (know who (knows not also (how to unknow)))

Meaning

This person does not know how to distinguish between those who know how to unknow and those who cannot.

Now my brain doesn't recognize "know" as a word ...

Comment author: glomerulus 20 January 2014 03:11:14AM 3 points

Ambiguity-resolving trick: if phrases can be interpreted as parallel, they probably are.

Recognizing that "knows not how to know" parallels "knows not also how to unknow", or more simply "how to know" || "how to unknow", makes the aphorism much easier to parse.

In response to comment by Kawoomba on Tell Culture
Comment author: BrienneYudkowsky 18 January 2014 08:11:48PM 2 points

A community of HPMOR!Quirrell variations would have your very post in main, with plenty of upvotes, all the while secretly whetting their blades. Perfectly rational.

I really don't think so. A community of Briennes, which is not a community of HPMOR!Quirrells but shares some relevant features, would recognize the overwhelming benefit of coordination. Any given individual would be much stronger if she had the knowledge of all the other individuals, or if she could count on them as external memory. And because she would be stronger that way, she knows the others would likewise be stronger if she remained trustworthy. Her being trustworthy allows her to derive greater benefit from the rest of the community. Other people are useful, you see. With Tell culture in place, you can do things like feed your model of the world into someone else's truth-checker and get back a more info-rich version. You only defect if the expected utility of doing so outweighs the expected utility of the entire community to your future plans.

I'd love to hear what culture Eliezer thinks an entire community of Quirrells would create.

Comment author: glomerulus 19 January 2014 03:38:32AM 2 points

"You only defect if the expected utility of doing so outweighs the expected utility of the entire community to your future plans." These aren't the two options available, though: you'd take into account the risk of other people defecting and thus reducing the expected utility of the entire community by an appreciable amount. Your argument only works if you can trust everyone else not to defect, too - in a homogenous community of Briennes, for instance. In a heterogenous community, whatever spooky coordination your clones would use won't work, and cooperation is a much less desirable option.

Comment author: Randy_M 08 November 2013 08:58:13PM 3 points

And your experiences to date, which is also a thing about reality.

Comment author: glomerulus 11 November 2013 01:27:54PM 1 point

True, the availability heuristic, which the quote condemns, often does give results that correspond to reality - otherwise it wouldn't be a very useful heuristic, now would it? But there's a big difference between a heuristic and a rational evaluation.

Optimally, the latter should screen out the former, and you'd think things along the lines of "this happened in the past and therefore things like it might happen in the future," or "this easily-imaginable failure mode actually seems quite possible."

"This is an easily-imaginable failure mode therefore this idea is bad," and its converse, are not as useful, unless you're dealing with an intelligent opponent under time constraints.
