
Comment author: Dacyn 10 December 2017 10:26:13PM 0 points [-]

The only people who would consent to the dust speck are people who would choose SPECKS over TORTURE in the first place. Are you really saying that you "do not value the comfort of" Eliezer, Robin, and others?

However, your argument raises another interesting point, which is that the existence of people who would prefer that SPECKS was chosen over TORTURE, even if their preference is irrational, might change the outcome of the computation because it means that a choice of TORTURE amounts to violating their preferences. If TORTURE violates ~3^^^3 people's preferences, then perhaps it is after all a harm comparable to SPECKS. This would certainly be true if everyone finds out about whether SPECKS or TORTURE was chosen, in which case TORTURE makes it harder for a lot of people to sleep at night.

On the other hand, maybe you should force them to endure the guilt, because maybe then they will be motivated to research why the agent who made the decision chose TORTURE, and so the end result will be some people learning some decision theory / critical thinking...

Also, if SPECKS vs TORTURE decisions come up a lot in this hypothetical universe, then realistically people will only feel guilty over the first one.

Comment author: arundelo 09 December 2017 05:58:09PM *  0 points [-]


This account has been posting spam since April 2017 (though all of their old comments have been deleted and are visible only on their overview and comments pages).

Comment author: malo 08 December 2017 06:49:25AM 1 point [-]

Awesome! Thanks so much :)

Comment author: iceman 08 December 2017 06:03:26AM 8 points [-]

I donated about $20,000, most of that in ETH. (Employer matching programs add another $12,000 on top of that.)

Comment author: xkwwqjtw 08 December 2017 03:32:33AM 0 points [-]

Please don’t build a machine that will torture me to save you from dust specks.

Comment author: Chriswaterguy 08 December 2017 02:55:06AM 0 points [-]

Fair enough, and taking it that way, I think the reasoning does hold up.

Comment author: Chriswaterguy 08 December 2017 02:37:16AM *  0 points [-]

Thanks - ahntharhapik seemed obvious but I missed khanfhighur. (Khanfhighur is much more obvious now when I imagine it with an American accent.)

Re my original question, I'm still curious whether there are any clues about the language itself (other than that there are obvious cognates with English, and what those cognates are). Does it relate to other stories/worldbuilding?

I'm probably overthinking it.

Comment author: malo 07 December 2017 06:35:05PM 3 points [-]

We just passed the 1/4 mark towards our first target! Fun fact: of the ~$200k raised so far in the fundraiser, ~65% has come from cryptocurrency donations.

In response to How Much Thought
Comment author: roland 05 December 2017 01:19:33PM *  0 points [-]

thinking has higher expected utility when you're likely to change your mind and thinking has higher expected utility when the subject is important.

Conditioning on you changing your mind from incorrect to correct.

Comment author: Reaper19 03 December 2017 07:49:18PM 0 points [-]

nice article!

Comment author: SteveJordan 30 November 2017 02:55:30AM 1 point [-]

I know this is way past its expiry date, but I have to ask:

Exactly how much dysfunction / argument / neglect / abuse would it take to make you happy? Your organic brain just isn't that complex compared to an artillect like that Genie. It sounds like that would be baked into your Verthandi. If she's a modified em, then she's functionally as "human" as any of us.

Or perhaps you'd need her to come after you with a carving knife to persuade you she's genuinely hurt by your rejection?

Comment author: Benquo 29 November 2017 05:12:46PM *  0 points [-]

Not all wrongness is innocent error. Sometimes people are lying, consciously or unconsciously. This is violence directed at the listener to control their behavior. Even advertising that makes no false claims is often in this category, when it raises the salience of something for basically adversarial reasons. (Hard sells and infomercials are less like this; branding is more like this.) If this sort of thing never meets any resistance, then eventually a bunch of thugs barge into your <a href="http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/">nice unwalled garden</a> (walls being a form of structural violence) and ruin it.

Second, some types of dissent undermine the political order that enables us to interact with one another peacefully. The Deuteronomy quote is very specifically about introducing the worship of foreign or novel gods. Conflating this with a general decree to punish critics is a totally implausible reading to anyone who's actually bothered to pay attention to the Bible; ancient Israelite prophets frequently claimed that Yahweh's instructions had been wrongly construed, and that the dominant power structure (including both kings and the priesthood) was in error. They seem to have been a sufficiently protected class that kings and priests would sometimes yell at them, but rarely physically injure them.

The correct modern analogue to advocating the worship of a foreign god is advocating cooperation with a foreign government. The contemporary analogue to stoning the person introducing the worship of foreign gods would be imposing legal sanctions against Facebook for colluding with Russian intelligence services to manipulate American election results. If you can't tell the difference between that and punishing criticism, then you don't know how to have a sane walled garden.

Comment author: IlyaShpitser 25 November 2017 07:44:29PM 1 point [-]
Comment author: elharo 23 November 2017 02:56:37AM *  0 points [-]

Sorry, but it is. Simple test: open a page and view source. Do you see HTML or do you see a big chunk of obfuscated JavaScript?

Browsers today are wicked fast at rendering HTML. They are ungodly slow at anything that replaces HTML with JavaScript. A text-heavy site such as LessWrong is very well served by pure HTML with a small scattering of JavaScript here and there. LessWrong 1.0's markup isn't perfect (too many divs and spans, too little semantic markup), but it is much better designed for speed than 2.0's.

Comment author: roland 20 November 2017 12:40:21PM *  0 points [-]
Comment author: kimberchoi 19 November 2017 02:02:56PM 0 points [-]

Interesting discussion. Does knowledge drive action? Perhaps, but what type of knowledge, and how much certainty, is sufficient to overcome akrasia? I am reminded of Hamlet's "To be or not to be..." soliloquy (Act 3, Scene 1). When does knowledge itself become the justification of akrasia?

Comment author: pmw7070 13 November 2017 06:16:37PM 1 point [-]

That's really broadening the term 'irrational.' Irrational is not a synonym for 'not good' or 'not preferred'; it just means not rational or not logical. There may be lots of rational choices, some of which may be better or worse than others, but all rational. Irrational MIGHT BE (loosely) short for 'that doesn't make sense,' or better, 'that's not logical.'

The bucket analogy as illustrated seems to me to point more at a faulty basis than at irrational thinking. The budding author clearly linked spelling with being allowed to pursue the writing that ends up in a successful career, and he has a point. An author cannot be successful without an audience, and a piece where one continually has to stop and interpret badly spelled words is not likely to find a good audience. There is a clear rationale. The faulty basis is more related to the student having a picture of being a successful author (1) at this stage in life, and (2) without discipline and development. The false basis in the picture is linking 'am I allowed to pursue my writing ambition' with misspelling a word.

Comment author: Caspar42 11 November 2017 08:56:34AM *  0 points [-]

Great post, obviously.

You argue that signaling often leads to a distribution of intellectual positions following this pattern: in favor of X with simple arguments / in favor of Y with complex arguments / in favor of something like X with simple arguments.

I think it's worth noting that the pattern of positions often looks different. For example, there is: in favor of X with simple arguments / in favor of Y with complex arguments / in favor of something like X with surprising and even more sophisticated and hard-to-understand arguments.

In fact, I think many of your examples follow the latter pattern. For example, the market efficiency arguments in favor of libertarianism seem harder-to-understand and more sophisticated than most arguments for liberalism. Maybe it fits your pattern better if libertarianism is justified purely on the basis of expert opinion.

Similarly, the justification for the “meta-contrarian” position in "don't care about Africa / give aid to Africa / don't give aid to Africa" is more sophisticated than the reasons for the contrarian or naive positions.

But as has been pointed out, along with the gigantic cost, death does have a few small benefits. It lowers overpopulation, it allows the new generation to develop free from interference by their elders, it provides motivation to get things done quickly.

I'm not sure whether overpopulation is a good example. I think in many circles that point would signal naivety, and people would respond with something deep-sounding about how life is sacred. (The same is true for "it's good if old people die because that saves money and allows the government to build more schools".) Here, too, I would argue that your pattern doesn't quite describe the set of commonly held positions, as it omits the naive pro-death position.

Comment author: entirelyuseless 10 November 2017 02:05:13PM 1 point [-]

Exactly. "The reality is undecatillion swarms of quarks not having any beliefs, and just BEING the scientist." Let's reword that. "The reality is undecatillion swarms of quarks not having any beliefs, and just BEING 'undecatillion swarms of quarks' not having any beliefs, with a belief that there is a cognitive mind calling itself a scientist that only exists in the undecatillion swarms of quarks's mind."

There seems to be a logic problem there.

In response to comment by rkyeun on Reductionism
Comment author: TheAncientGeek 10 November 2017 10:31:22AM 2 points [-]

That observation runs headlong into the problem, rather than solving it.
