All of glomerulus's Comments + Replies

It's not rude if it's not a social setting. If no one sees you do it, no one's sensibilities are offended.

a) In my experience, lucid dreams are more memorable than normal dreams

b) You seem to assume that Whales completely forgot about the dream until they wrote this blog post, which is unlikely, because obviously they'd be thinking about it as soon as they woke up, and probably taking notes.

c) Whales already said that it hardly even constitutes evidence

Rational!Harry describes a character similar to the base except persistently Rational, for whatever reason. Rational-Harry describes a Harry who is rational, but it's nonstandard usage and might confuse a few people (Is his name "Rational-Harry"? Do I have to call him that in-universe to differentiate him from Empirical-Harry and Oblate-Spheroid-Harry?). Rational Harry might just be someone attaching an adjective to Harry to indicate that, at the moment, he's rational, or more rational by contrast to Silly Dumbledore.

Anyway, adj!noun is a comp... (read more)

2Mestroyer
I always figured it was like the scope resolution operator ("::") in C++, but in some weird functional language that AI people liked.
1Rob Bensinger
Yes. I used it in an earlier version of this post reflexively, without even thinking about the connection to fanfics. My thinking was just 'this is clearer than subscript notation, and is a useful and commonplace LW shibboleth'.
0komponisto
Yes, that's why I favor the hyphen (in response to shminux above).

If it's a perfect simulation with no deliberate irregularities, and no dev-tools, and no pattern-matching functions that look for certain things and exert influences in response, or anything else of that ilk, you wouldn't expect to see any supernatural phenomena, of course.

If you observe magic or something else that's sufficiently highly improbable given known physical laws, you'd update in favor of someone trying to trick you, or you misunderstanding something, of course, but you'd also update at least slightly in favor of hypotheses in which magic can ex... (read more)

There are more reasons to do it than training your system 1. It sounds like it would be an interesting experience and make a good story. Interesting experiences are worth their weight in insights, and good stories are useful to any goals that involve social interaction.

4chaosmage
Also, graveyards at night are a lot less crowded than parks, i.e. awesome for outdoor sex.

Do you assign literally zero probability to the simulation hypothesis? Because in-universe irreducible things are possible, conditional on it being true.

Assigning a slightly-too-high prior is a recoverable error: evidence will push you towards a nearly-correct posterior. For an AI with enough info-gathering capability, evidence will push it there fast enough that you could assign a prior of .99 to "the sky is orange" and it would still figure out the truth in an instant. Assigning a literally zero prior is a fatal flaw that can't be recovered from by gathering evidence.
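
A minimal sketch of that asymmetry, using a toy repeated-Bayes-update loop with made-up likelihoods (the specific numbers are only illustrative):

```python
# Toy illustration (not from the comment): repeated Bayes updates show why a
# too-high prior recovers from evidence while a literally-zero prior cannot.

def update(prior, p_obs_given_h, p_obs_given_not_h):
    """One Bayesian update: P(H | obs) from P(H) and the two likelihoods."""
    numerator = p_obs_given_h * prior
    return numerator / (numerator + p_obs_given_not_h * (1 - prior))

# (a) Prior 0.99 on the false claim "the sky is orange"; each glance at the
#     sky is strong evidence against it, and the posterior collapses quickly.
p = 0.99
for _ in range(5):
    p = update(p, p_obs_given_h=0.01, p_obs_given_not_h=0.99)
print(f"too-high prior after 5 observations: {p:.1e}")  # ~1e-8

# (b) Prior 0 on a true hypothesis: the numerator is 0 no matter how strongly
#     the evidence favors H, so the posterior stays exactly 0 forever.
p = 0.0
for _ in range(5):
    p = update(p, p_obs_given_h=0.99, p_obs_given_not_h=0.01)
print(f"zero prior after 5 observations: {p}")  # 0.0
```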

6Rob Bensinger
It's very possible that what's possible for AIs should be a proper subset of what's possible for humans. Or, to put it less counter-intuitively: The AI's hypothesis space might need to be more restrictive than our own. (Plausibly, it will be more restrictive in some ways, less in others; e.g., it can entertain more complicated propositions than we can.)

On my view, the reason for that isn't 'humans think silly things, haha look how dumb they are, we'll make our AI smarter than them by ruling out the dumbest ideas a priori'. If we give the AI silly-looking hypotheses with reasonable priors and reasonable bridge rules, then presumably it will just update to demote the silly ideas and do fine; so a priori ruling out the ideas we don't like isn't an independently useful goal. For superficially bizarre ideas that are actually at least somewhat plausible, like 'there are Turing-uncomputable processes' or 'there are uncountably many universes', this is just extra true. See my response to koko.

Instead, the reason AIs may need restrictive hypothesis spaces is that building a self-correcting epistemology is harder than living inside of one. We need to design a prior that's simple enough for a human being (or somewhat enhanced human, or very weak AI) to evaluate its domain-general usefulness. That's tough, especially if 'domain-general usefulness' requires something like an infinite-in-theory hypothesis space. We need a way to define a prior that's simple and uniform enough for something at approximately human-level intelligence to assess and debug before we deploy it. But that's likely to become increasingly difficult the more bizarre we allow the AI's ruminations to become.

'What are the properties of square circles? Could the atoms composing brains be made of tiny partless mental states? Could the atoms composing wombats be made of tiny partless wombats? Is it possible that colorless green ideas really do sleep furiously?' All of these feel to me, a human (of an unusua
-2Shmi
How would you tell if the simulation hypothesis is a good model? How would you change your behavior if it were? If the answers are "there is no way" or "do nothing differently", then it is as good as assigning zero probability to it.

I don't think that's what they're saying at all. I think they mean: don't hardcode physics understanding into them the way that humans have a hardcoded intuition for Newtonian physics, because our current understanding of the universe isn't so strong that we can be confident we're not missing something. So it should be able to figure out the mechanism by which its map is written on the territory, and update its map of its map accordingly.

E.g., in case it thinks it's flipping qubits to store memory, and defends its databases accordingly, but actually qubits are... (read more)

0Rob Bensinger
This isn't a free lunch; letting the AI form really weird hypotheses might be a bad idea, because we might give those weird hypotheses the wrong prior. Non-reductive hypotheses, and especially non-Turing-computable non-reductive hypotheses, might not be able to be assigned complexity penalties in any of the obvious or intuitive ways we assign complexity penalties to absurd physical hypotheses or absurd computable hypotheses. It could be a big mistake if we gave the AI a really weird formalism for thinking thoughts like 'the irreducible witch down the street did it' and assigned a slightly-too-high prior probability to at least one of those non-reductive or non-computable hypotheses.
1Armok_GoB
Oh. OH. Yea that makes more sense, and is so obviously true that I didn't even consider the hypothesis someone'd feel the need to say it, but in hindsight I was wrong and it's probably a good thing someone did.

Ambiguity-resolving trick: if phrases can be interpreted as parallel, they probably are.

Recognizing that "knows not how to know" parallels with "knows not also how to unknow," or more simply "how to know" || "how to unknow", makes the aphorism much easier to parse.

"You only defect if the expected utility of doing so outweighs the expected utility of the entire community to your future plans." These aren't the two options available, though: you'd take into account the risk of other people defecting and thus reducing the expected utility of the entire community by an appreciable amount. Your argument only works if you can trust everyone else not to defect, too - in a homogenous community of Briennes, for instance. In a heterogenous community, whatever spooky coordination your clones would use won't work, and cooperation is a much less desirable option.

True, the availability heuristic, which the quote condemns, often does give results that correspond to reality - otherwise it wouldn't be a very useful heuristic, now would it! But there's a big difference between a heuristic and a rational evaluation.

Optimally, the latter should screen out the former, and you'd think things along the lines of "this happened in the past and therefore things like it might happen in the future," or "this easily-imaginable failure mode actually seems quite possible."

"This is an easily-imaginable failure mode therefore this idea is bad," and its converse, are not as useful, unless you're dealing with an intelligent opponent under time constraints.

For most people, murder and children crying are a bad outcome for a plan, but if they're what the planner has selected as the intended outcome, the other probable outcomes are presumably worse. Theoretically, the plan could "fail" and end in an outcome with more utilons than murder and children crying, but such failures are obviously improbable: if they weren't, the planner would presumably have selected one of them as the desired plan outcome.

0Decius
Or at least have the foresight to see that they have become likely and alter the plan such that it now results in utopia instead of murder.

This only qualifies as a sane response if one has no ethical qualms about the Imperius curse. Which is a bit of a problem, because most sane people wouldn't like the idea.

Putting aside the sketchiness of the idea itself, it's flawed. If any zombie high in the chain dies or makes their will-save, every zombie subservient to them is freed, and has knowledge of the Grand Imperius Effort. If, before the experience, they hadn't had strong feelings either way about nonconsensual use of mind-affecting spells, they certainly will afterwards; everyone post-zombie i... (read more)

gwern190

After devising a plan for a GNU world order, it's only logical to take the next step up into resilient W2W (Wizard-to-Wizard) networks: add a clause ordering Imperiused wizards to re-infect every 100th wizard they meet. This random crosslinking will convert the efficient yet fragile pyramidal hierarchy into a robust distributed graph.

Multiheaded, you're taking the disutility of each torture caused by Pinochet and using their sum to declare his actions a net evil. OrphanWilde seems to acknowledge that his actions were terrible, but argues that the frequency of tortures, each with more or less equal disutility (whatever massive quantity that may be), was overall reduced by his actions.

You, however, appear to be looking at his actions, declaring them evil, and citing Allende as evidence that Pinochet's ruthlessness was unnecessary. This could be the foundation of a good ... (read more)

-2Multiheaded
Yep, I admit there's two arguments. My secondary line of attack is that there was nothing "necessary" about the things Pinochet did, and that in regards to the rule of law and sustainable democracy he wrecked what Allende was trying to create.

But my primary line is that some "rational" arguments should be simply censored when their advocates don't even bother with hypotheticals but point to the unspeakable experiences of real victims and then dismiss them as a fair price for some dubious greater good. This is a behavior and an attitude that our society needs to suppress, I believe, because it's predictive of other self-centered, remorseless, power-blind attitudes - and we're better off with fully general ethical injunctions against such. Not tolerating even the beginning steps of some potentially devastating paths is important enough to outweigh perfect epistemic detachment and pretensions to impartiality.

Christian moralism in its 19th century form - once a popular source for such injunctions - is rightly considered obsolete/bankrupt, but, like Orwell, I think our civilization needs a replacement for it. Or else our descendants might be the ones screaming "Why did it have to be rats?!" one day. ZERO compromise. Not for the sake of politeness, not for the sake of pure reason, not a single more step to hell.
7[anonymous]
He doesn't actually make that statement anywhere that I can see. I disagree that he has done anything of the sort. What's he even comparing Pinochet to? The obvious candidate is a peacefully elected president after the end of Allende's term, which suggests someone from UP or the Christian Democrats, and it's hard to imagine such a government sponsoring systemic torture against dissidents. In any case, I think claims of "rational" (which Multiheaded hasn't made anyway) need to stay far, far away from this thread.

True. If the law took that into consideration, and precedent indicated that creatures that are most likely Evil are deserving of death unless evidence indicates that they are Neutral or Lawful or Good, then his actions would not have been justified. However, Larks indicated that that is not the case: goblins are considered innocent until proven guilty. Larks' character thus, refusing to be an accessory to illegal vigilante justice, attacked their party in defense of the goblins. In the long-term, successfully preventing the goblins' deaths wou... (read more)

2MugaSofer
I got the impression that he assumed this was the "Lawful" attitude to take.

Assuming: any given goblin is Evil with p=0.95

Assuming: 80% of Evil creatures are guilty of a hanging offense according to an authority

Assuming: 5 randomly-selected goblins in the group

The probability that all members of the group deserved death according to authority should be (0.95*0.8)^5 = 0.254.

Of course, that last assumption is a bit problematic: they're not randomly selected. Even so, depending on the laws, they might still be legally entitled to a trial. Or perhaps the law doesn't consider being a member of an Evil race reasonable suspicion of a crime, and they wouldn't even have been tried by Lawful Authorities.
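
For anyone who wants to vary those assumptions, a minimal sketch of the arithmetic (it treats the goblins as independent draws, which is exactly the problematic random-selection assumption):

```python
# Minimal sketch of the calculation above, with the assumed numbers pulled
# out as parameters so they're easy to vary.
p_evil = 0.95      # assumed probability that a given goblin is Evil
p_guilty = 0.80    # assumed probability an Evil creature is guilty of a hanging offense
group_size = 5     # goblins in the group, treated as independent random draws

p_all_deserve_death = (p_evil * p_guilty) ** group_size
print(round(p_all_deserve_death, 3))  # 0.254
```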

0MugaSofer
Goblins are "usually Neutral Evil". What this means is up to the DM, but in my experience it's generally taken to mean that, while they can of course be other alignments (perhaps if raised by humans or something), their "default" in this setting is Evil. In other words, killing them is OK as long as you don't have reason to suspect they're Good, but actual genocide is frowned upon. Remember, these are adventurers; killing monsters and taking their stuff is part of the job description.
2Desrtopa
It seems like a coherent position to me to assign negative utility to the lives of "evil" creatures in the first place, even if they haven't committed something that would legally be a hanging offense. You might say that you target evil creatures because they're likely to commit offenses that are punishable under law by death, but then, you might say that certain crimes are punishable by death because they show that the perpetrators are Evil. As a moral theory, it may not make a very good legal foundation in our world, but when we're dealing with a world where you can actually cast Detect Evil, and look at people, or even magical objects, and tell if they're Evil, things may be kind of different.