Comment author: Raemon 22 May 2017 05:08:51PM 0 points

I'm curious if there's much record of intentional communities that aren't farming communes. (I.e., the sort of tech commune that rationalists seem more likely to want to try to start seems like it would have a related but non-identical set of issues to the ones depicted here.) I do expect "attracting starry-eyed dreamers without enough skills" to be an issue.

Comment author: fubarobfusco 23 May 2017 07:56:46PM 0 points

I'm curious if there's much record of intentional communities that aren't farming communes.

Oneida comes to mind. They had some farming (it was upstate New York in the 1850s, after all) but also a lot of manufacturing — most famously silverware. The community is long gone, but the silverware company is still around.

Comment author: fubarobfusco 17 May 2017 03:30:56AM *  2 points

We should increase awareness of old fairy tales with a jinn who misinterprets wishes.

The most popular UFAI story I'm aware of is "The Sorcerer's Apprentice".

Sticking with European folktales that were made into classic Disney cartoons, maybe the analogy to be made is "AI isn't Pinocchio. It's Mickey's enchanted brooms. It doesn't want to be a Real Boy; it just wants to carry water. The danger isn't that it will grow up to be a naughty boy if it doesn't listen to its conscience. It's that it cannot care about anything other than carrying water, including whether or not it's flooding your home."

Thing is, much of the popular audience doesn't really know what code is. They've never written a bug and had a program do something unintended ... because they've never written any code at all. They've certainly never written a virus or worm, or even a script that accidentally overwrites their files with zeroes. They may have issued a bad order to a computer ("Oops, I shouldn't have sent that email!") but they've never composed and run a non-obviously bad set of instructions.
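For concreteness, here's a minimal sketch in Python (with a made-up filename) of the kind of non-obviously bad instruction set meant above: a program that destroys the very file it was supposed to read.

    # A classic unintended consequence: opening a file in "w" (write)
    # mode truncates it to zero bytes immediately, before anything runs.
    with open("notes.txt", "w") as f:   # "w" should have been "r"
        data = f.read()                 # the read fails, but by now
                                        # notes.txt is already empty

Nothing in those two lines looks malicious, yet running them wipes the file, which is exactly the kind of mistake you only learn to anticipate by writing code.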

So, aside from folklore, better CS education may be part of the story here.

Comment author: fubarobfusco 13 May 2017 04:37:03PM 1 point

Why do you care if you're running in a simulation? Well, if you're a piece of malware, like a computer virus or Internet worm, you care whether you're being analyzed by an anti-malware researcher. If you're running in a researcher's lab instead of actually infecting target systems, you'd want to shut down so the researcher can't observe your real behavior and find out how to exterminate you. However, this means that if you can be fooled into believing you're running in a simulation when you're not, you can be made to shut down in the wild.

Yep, that happened. It's not even the first time.

The WannaCrypt worm contained logic that basically said, "Look for the following properties in the Internet. If you observe them, that means you're not running in the real Internet; you're running in a simulation." But the researcher was able to make those properties become true in the real Internet, thereby convincing the live malware that was infesting the actual Internet that it was in a simulation, and causing it to shut down.
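For illustration, here is a minimal sketch of that kind of kill-switch logic in Python. The domain below is a made-up stand-in, not the worm's real hard-coded domain, and the real check used an HTTP request rather than a bare DNS lookup.

    import socket

    # Hypothetical stand-in for the worm's hard-coded, unregistered domain.
    KILL_SWITCH_DOMAIN = "qwhgkjzmnbv-killswitch.example.com"

    def in_simulation() -> bool:
        # Heuristic: an unregistered domain should not resolve on the
        # real Internet. Analysis sandboxes that fake a DNS answer for
        # every lookup make it resolve, which reads as "I'm being watched."
        try:
            socket.gethostbyname(KILL_SWITCH_DOMAIN)
            return True   # lookup succeeded: assume a simulation
        except socket.gaierror:
            return False  # lookup failed: assume the real Internet

    if in_simulation():
        raise SystemExit  # shut down rather than reveal real behavior
    # ...otherwise, carry on with normal (malicious) operation

The countermeasure falls out directly: register the domain in the real DNS, the check starts succeeding everywhere, and live infections conclude they are in a simulation and exit.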

Anti-analysis or anti-debugging features, which attempt to ask "Am I running in a simulation?", are not a new thing in malware, or in other programs that attempt to extract value from humans — such as copy-protection routines. But they do make malware an interesting example of a type of agent for which the simulation hypothesis matters, and where mistaken beliefs about whether you're in a simulation can have devastating effects on your ability to function.
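As a toy illustration of the "Am I running in a simulation?" question in its mildest form, here is a sketch using Python's standard sys.gettrace hook, which debuggers such as pdb use to install a trace function (real malware relies on far more hostile tricks than this):

    import sys

    def being_debugged() -> bool:
        # Python debuggers work by installing a trace function;
        # under a plain, unobserved interpreter, none is set.
        return sys.gettrace() is not None

    if being_debugged():
        print("Behave innocuously: an analyst may be watching.")
    else:
        print("No debugger detected; proceed as usual.")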

Comment author: Benquo 01 May 2017 11:35:03PM 1 point

I'm skeptical of the work "deliberately" is doing there. If the whole agent determining someone's actions is following a decision procedure that tries to push my beliefs away from the truth when convenient, then there's a sense in which the whole agent is acting in bad faith, even if they've never consciously deliberated on the matter. At least, it's materially different from unmotivated error, in a way that makes it similar to consciously lying.

Comment author: fubarobfusco 02 May 2017 12:45:21AM 3 points

Harry Frankfurt's "On Bullshit" introduced the distinction between lies and bullshit. The liar wants to deceive you about the world (to get you to believe false statements), whereas the bullshitter wants to deceive you about his intentions (to get you to take his statements as good-faith efforts, when they are merely meant to impress).

We may need to introduce a third member of this set. Along with lies told by liars, and bullshit spread by bullshitters, there is also spam emitted by spambots.

Like the bullshitter (but unlike the liar), the spambot doesn't necessarily have any model of the truth of its sentences. But unlike the bullshitter, the spambot doesn't particularly care what (or whether) you think of it; it simply optimizes its sentences to cause you to take a particular action.

Comment author: peter_hurford 25 April 2017 02:09:49AM 4 points

Thanks for the feedback.

I added a paragraph above saying: "We're also using this as a way to build up the online EA community, such as featuring people on a global map of EAs and with a list of EA Profiles. This way more people can learn about the EA community. We will ask you in the survey if you would like to join us, but you do not have to opt-in and you will be opted-out by default."

Comment author: fubarobfusco 25 April 2017 04:17:46AM 2 points

Thank you.

Comment author: fubarobfusco 24 April 2017 10:33:04PM *  7 points

Caution: This is not just a survey. It is also a solicitation to create a public online profile.

In the future, please consider separating surveys from solicitations; or disclosing up front that you are not just conducting a survey.

When I got to the part of this that started asking for personally identifying information to create a public online profile, it felt to me like something sneaky was going on: that my willingness to help with a survey was being misused as an entering-wedge to push me to do something I wouldn't have chosen to do.

I considered — for a moment — putting bogus data in as a tit-for-tat defection in retribution for the dishonesty. I didn't do so, because the problem isn't with the survey aspect; it's with the not-saying-up-front-what-you-are-up-to aspect. Posting this comment seemed a more effective way to discourage that than sticking a shoe in your data.

Comment author: WhySpace 20 April 2017 02:28:01AM *  2 points

TL;DR: What are some movements you would put in the same reference class as the Rationality movement? Did they also spend significant effort trying not to be wrong?

Context: I've been thinking about SSC's Yes, We Have Noticed the Skulls. They point out that aspiring Rationalists are well aware of the flaws in straw Vulcans, and actively try to avoid making such mistakes. More generally, most movements are well aware of the criticisms of at least the last similar movement, since those are the criticisms they are constantly defending against.

However, searching "previous " in the comments doesn't turn up any actual examples.

Full question: I'd like to know if anyone has suggestions for how to go about doing reference class forecasting to get an outside view on whether the Rationality movement has any better chance of succeeding at its goals than other, similar movements. (Will EA have a massive impact? Are we crackpots about cryonics, or actually ahead of the curve? More generally, how much weight should I give to the Inside View, when the Outside View suggests we're all wrong?)

The best approach I see is to look at past movements. I'm only really aware of Logical Positivism, and maybe Aristotle's Lyceum, and I have a vague idea that something similar probably happened in the Enlightenment, but I don't know the names of any smaller schools of thought which were active in the broader movement. Only the most influential movements are remembered, though, so are there good examples from the past ~century or so?

And, how self-critical were these groups? Every group has disagreements over the path forward, but were they also critical of their own foundations? Did they only discuss criticisms made by others, and make only shallow, knee-jerk criticisms, or did they actively seek out deep flaws? When intellectual winds shifted, and their ideas became less popular, was it because of criticisms that came from within the group, or from the outside? How advanced and well-tested were the methodologies used? Were any methodologies better-tested than Prediction Markets, or better grounded than Bayes' theorem?

Motive: I think that, on average, I use about a 50/50 mix of outside and inside view, although I vary this a lot based on the specific thing at hand. However, if the Logical Positivists not only noticed the previous skull, but the entire skull pile, and put a lot of effort into escaping the skull-pile paradigm, then I'd probably be much less certain that this time we've finally escaped it.

Comment author: fubarobfusco 20 April 2017 11:25:27PM *  1 point

Just a few groups that have either aimed at similar goals, or have been culturally influential in ways that keep showing up in these parts —

  • The Ethical Culture movement (Felix Adler).
  • Pragmatism / pragmaticism in philosophy (William James, Charles Sanders Peirce).
  • General Semantics (Alfred Korzybski).
  • The Discordian Movement (Kerry Thornley, Robert Anton Wilson).
  • The skeptic/debunker movement within science popularization (Carl Sagan, Martin Gardner, James Randi).

General Semantics is possibly the closest to the stated LW (and CFAR) goals of improving human rationality, since it aimed at improving human thought through adopting explicit techniques to increase awareness of cognitive processes such as abstraction. "The map is not the territory" is a g.s. catchphrase.

Comment author: J_Thomas_Moros 20 April 2017 01:08:01AM *  0 points

To me, success would be a greater number of patients signed up for cryonics, greater cultural acceptance, and recognition of cryonics as a reasonable patient choice by the medical field and government.

Comment author: fubarobfusco 20 April 2017 09:16:08PM 1 point

Maybe starting the Church of the Frost Giants and declaring cryonic suspension to be a religiously mandated funerary practice would work to that end.

I think actually reviving some ice mice might be a bigger step, though.

Comment author: J_Thomas_Moros 18 April 2017 04:41:48PM 10 points

A friend and I are investigating why the cryonics movement hasn't been more successful and looking at what can be done to improve the situation. We have some ideas and have begun reaching out to people in the cryonics community. If you are interested in helping, message me. Right now it is mostly researching things about the existing cryonics organizations and coming up with ideas. In the future, there could be lots of other ways to contribute.

Comment author: fubarobfusco 18 April 2017 05:29:26PM 9 points

What does "successful" look like here? Number of patients in cryonic storage? Successfully revived tissues or experimental animals?

Comment author: Brillyant 06 April 2017 09:02:23PM 0 points

and sports

It is?

Comment author: fubarobfusco 07 April 2017 01:04:32AM 3 points

In many towns in the US, high school sports (especially football) are not just a recreational activity for students, but rather a major social event for the whole community.
