
Comment author: ZankerH 27 June 2017 12:17:54PM *  5 points [-]

Meat tastes nice, and I don't view animals as moral agents.

Comment author: fubarobfusco 28 June 2017 03:29:16PM 2 points [-]

Are you claiming that a being must be a moral agent in order to be a moral patient?

Comment author: Vaniver 21 June 2017 10:57:10PM 2 points [-]

The first time I read this poll...


Comment author: fubarobfusco 22 June 2017 01:18:22AM 2 points [-]

I was on an Android tablet, which I use in a laptop-like fashion (landscape mode, with keyboard) but which usually gets the mobile version of sites that try to be mobile-friendly.

Comment author: Brian_Tomasik 20 June 2017 10:06:55PM *  3 points [-]

Is it still a facepalm given the rest of the sentence? "So, s-risks are roughly as severe as factory farming, but with an even larger scope." The word "severe" is being used in a technical sense (discussed a few paragraphs earlier) to mean something like "per individual badness" without considering scope.

Comment author: fubarobfusco 20 June 2017 11:53:13PM 2 points [-]

The section presumes that the audience agrees wrt veganism. To an audience who isn't on board with EA veganism, that line comes across as the "arson, murder, and jaywalking" trope.

Comment author: tcheasdfjkl 01 June 2017 03:04:59AM 4 points [-]

Hi Duncan, I'm a relative newcomer (this is my first LW thread, though I've participated in rationalsphere discussions elsewhere), so this may not carry much weight, but I want to somewhat agree with handoflixue here.

One of my stronger reactions to your post is "this is an impossible set of expectations for me and a lot of others". Which is fine, obviously you can have expectations that some people can't live up to, and of course it is very good that you are making these expectations very clear.

But I sort of get the sense that you are a person who is fundamentally capable of being reliable and regularly making good life choices pretty easily, and that you sort of don't get that for a lot of people these things are really hard even if they understand what the right choice is and are legitimately trying their best to do that.

This is based only partly on your post and somewhat more on a mini-talk which (IIRC) you gave at a CFAR community night where you posed the question "does it even make sense for people to seek out advanced rationality techniques such as the ones discussed here when they're not displaying basic rationality such as eating a reasonable diet and sleeping enough?". Even then, this question struck me as dangerously wrong-headed, and now that you are proposing to be in charge of people, this seems to take on more importance.

Advanced rationality techniques, at least when applied to one's self-conception and life choices, are basically therapy. "Failures of basic rationality" are often better described as "mental health issues". Therapy is how you deal with mental health issues. People with mental health issues need more therapy/advanced rationality, not less! I've seen it hypothesized that one reason we have so many mentally ill rationalists is because people with mental health issues must learn rationality in order to function, at least to some degree that is more than most people need.

I don't actually know you, so my information is pretty incomplete, but my impression is that if someone fails to act in a way you (and they!) think is reasonable, you're likely to become baffled and frustrated and try to deal with the problem by imposing stricter expectations & consequences. This might work for some people, but for many, it will just make them miserable and less productive because they will be angry at themselves for failing at things that they "should" be able to do.

I think it's likely that your way of dealing with this is basically to screen out the people who are likely to react poorly to your approach, in addition to causing others like me to self-select out. That's fine, I guess, though I would still be on the lookout for this sort of issue as a possible failure mode, and maybe also just demonstrate more compassionate awareness that things like reliability are actually almost impossible for some people, and maybe not attribute all of this to having the wrong culture or mindset.

(My general opinion of your project is "this sounds scary and I want to stay very far away from it, and this makes me somewhat wary of the people involved, and I wouldn't recommend participation to people I know; at the same time, I am really curious about how this will go, so selfishly I'm a little glad it's happening so I can gain information from it".)

Comment author: fubarobfusco 01 June 2017 03:26:09AM 0 points [-]

Advanced rationality techniques, at least when applied to one's self-conception and life choices, are basically therapy. "Failures of basic rationality" are often better described as "mental health issues". Therapy is how you deal with mental health issues. People with mental health issues need more therapy/advanced rationality, not less! I've seen it hypothesized that one reason we have so many mentally ill rationalists is because people with mental health issues must learn rationality in order to function, at least to some degree that is more than most people need.

This reminds me of Romeo's comment over here:

http://lesswrong.com/lw/oym/how_id_introduce_lesswrong_to_an_outsider/dryk

Comment author: Raemon 22 May 2017 05:08:51PM 0 points [-]

I'm curious if there's much record of intentional communities that aren't farming communes. (i.e. the sort of tech commune that rationalists seem more likely to want to try to start seems like it would have a related but non-identical set of issues to the ones depicted here). I do expect "attracting starry-eyed dreamers without enough skills" to be an issue.

Comment author: fubarobfusco 23 May 2017 07:56:46PM 0 points [-]

I'm curious if there's much record of intentional communities that aren't farming communes.

Oneida comes to mind. They had some farming (it was upstate New York in the 1850s, after all) but also a lot of manufacturing — most famously silverware. The community is long gone, but the silverware company is still around.

Comment author: fubarobfusco 17 May 2017 03:30:56AM *  2 points [-]

We should increase awareness of old fairy tales with a jinn who misinterprets wishes.

The most popular UFAI story I'm aware of is "The Sorcerer's Apprentice".

Sticking with European folktales that were made into classic Disney cartoons, maybe the analogy to be made is "AI isn't Pinocchio. It's Mickey's enchanted brooms. It doesn't want to be a Real Boy; it just wants to carry water. The danger isn't that it will grow up to be a naughty boy if it doesn't listen to its conscience. It's that it cannot care about anything other than carrying water, including whether or not it's flooding your home."

Thing is, much of the popular audience doesn't really know what code is. They've never written a bug and had a program do something unintended ... because they've never written any code at all. They've certainly never written a virus or worm, or even a script that accidentally overwrites their files with zeroes. They may have issued a bad order to a computer ("Oops, I shouldn't have sent that email!") but they've never composed and run a non-obviously bad set of instructions.
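
To make that concrete, here is a minimal, invented sketch (in Python) of a non-obviously bad set of instructions: a cleanup script that was meant to empty only its own *.tmp scratch files, but whose glob pattern matches everything, so it quietly truncates every file in the directory to zero bytes. The filenames and pattern are hypothetical, purely for illustration.

    # Hypothetical cleanup script; the author meant to empty only *.tmp scratch files.
    import glob
    import os

    for path in glob.glob("*"):          # BUG: should have been glob.glob("*.tmp")
        if os.path.isfile(path):
            open(path, "w").close()      # mode "w" truncates the file to zero bytes

Nothing here looks obviously destructive, which is the point: the damage comes from one wrong pattern, not from any line that reads as malicious.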

So, aside from folklore, better CS education may be part of the story here.

Comment author: fubarobfusco 13 May 2017 04:37:03PM 1 point [-]

Why do you care if you're running in a simulation? Well, if you're a piece of malware, like a computer virus or Internet worm, you care whether you're being analyzed by an anti-malware researcher. If you're running in a researcher's lab instead of actually infecting target systems, you'd want to shut down so the researcher can't observe your real behavior and find out how to exterminate you. However, this means that if you can be fooled into believing you're running in a simulation when you're not, you can be made to shut down in the wild.

Yep, that happened. It's not even the first time.

The WannaCrypt worm contained logic that basically said, "Look for the following properties in the Internet. If you observe them, that means you're not running in the real Internet; you're running in a simulation." But the researcher was able to cause those properties to become true in the real Internet, thereby convincing the live malware that was infesting the actual Internet to believe it was in a simulation and shut down.
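
Roughly, the shape of that check looks like the sketch below (Python, for illustration only). This is not WannaCrypt's actual code, and the sentinel domain is an invented placeholder, not the real kill-switch address. The logic assumes that an analysis sandbox answers every web request, so a successful request to an unregistered domain is taken as evidence of being "in a simulation."

    # Illustrative sketch only; not the worm's real code, and the domain is a
    # made-up placeholder. An analysis sandbox typically answers all web
    # requests, so reaching an unregistered domain suggests "simulation."
    import urllib.request

    SENTINEL = "http://example-killswitch.invalid/"   # hypothetical unregistered domain

    def looks_like_a_sandbox() -> bool:
        try:
            urllib.request.urlopen(SENTINEL, timeout=5)
            return True        # the "impossible" request succeeded: assume we're being watched
        except OSError:
            return False       # request failed, as it should on the real Internet

    if looks_like_a_sandbox():
        raise SystemExit       # shut down rather than reveal behavior to an analyst
    # ...otherwise, proceed with infection (omitted).

The researcher's countermove was simply to make the sentinel condition true everywhere: once the real domain resolved on the live Internet, every running copy concluded it was being observed and shut itself down.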

Anti-analysis or anti-debugging features, which attempt to ask "Am I running in a simulation?", are not a new thing in malware, or in other programs that attempt to extract value from humans — such as copy-protection routines. But they do make malware an interesting example of a type of agent for which the simulation hypothesis matters, and where mistaken beliefs about whether you're in a simulation can have devastating effects on your ability to function.

Comment author: Benquo 01 May 2017 11:35:03PM 1 point [-]

I'm skeptical of the work "deliberately" is doing there. If the whole agent determining someone's actions is following a decision procedure that tries to push my beliefs away from the truth when convenient, then there's a sense in which the whole agent is acting in bad faith, even if they've never consciously deliberated on the matter. At least, it's materially different from unmotivated error, in a way that makes it similar to consciously lying.

Comment author: fubarobfusco 02 May 2017 12:45:21AM 3 points [-]

Harry Frankfurt's "On Bullshit" introduced the distinction between lies and bullshit. The liar wants to deceive you about the world (to get you to believe false statements), whereas the bullshitter wants to deceive you about his intentions (to get you to take his statements as good-faith efforts, when they are merely meant to impress).

We may need to introduce a third member of this set. Along with lies told by liars, and bullshit spread by bullshitters, there is also spam emitted by spambots.

Like the bullshitter (but unlike the liar), the spambot doesn't necessarily have any model of the truth of its sentences. However, unlike the bullshitter, the spambot doesn't particularly care what (or whether) you think of it; it simply optimizes its sentences to get you to take a particular action.

Comment author: peter_hurford 25 April 2017 02:09:49AM 4 points [-]

Thanks for the feedback.

I added a paragraph above saying: "We're also using this as a way to build up the online EA community, such as featuring people on a global map of EAs and with a list of EA Profiles. This way more people can learn about the EA community. We will ask you in the survey if you would like to join us, but you do not have to opt-in and you will be opted-out by default."

Comment author: fubarobfusco 25 April 2017 04:17:46AM 2 points [-]

Thank you.

Comment author: fubarobfusco 24 April 2017 10:33:04PM *  7 points [-]

Caution: This is not just a survey. It is also a solicitation to create a public online profile.

In the future, please consider separating surveys from solicitations, or disclosing up front that you are not just conducting a survey.

When I got to the part of this that started asking for personally identifying information to create a public online profile, it felt to me like something sneaky was going on: that my willingness to help with a survey was being misused as an entering-wedge to push me to do something I wouldn't have chosen to do.

I considered — for a moment — putting bogus data in as a tit-for-tat defection in retribution for the dishonesty. I didn't do so, because the problem isn't with the survey aspect; it's with the not-saying-up-front-what-you-are-up-to aspect. Posting this comment seemed like a more effective way to discourage that than sticking a shoe in your data.
