Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Alicorn 17 March 2017 01:46:56AM 21 points

If you like this idea but have nothing much to say please comment under this comment so there can be a record of interested parties.

Comment author: KatjaGrace 17 March 2017 09:39:01AM 5 points

Interested in things like this; I presently have a partial version that is good.

Comment author: Alicorn 17 March 2017 05:22:11AM 2 points

I mean that if someone moves out, the landlord is likely to choose a nonrationalist to rent the place, and that streets seldom have many houses available all at once for a coordinated move.

Comment author: KatjaGrace 17 March 2017 09:37:11AM 1 point

In my experience this has been less of a problem than you might expect: our landlord likes us because we are reasonable and friendly and only destroy parts of the house when we want to make renovations with our own money and so on. So they would prefer more of us to many other candidates. And since we would also prefer they have more of us, we can make sure our landlord and more of us are in contact.

Comment author: Douglas_Knight 17 March 2017 04:52:48AM 0 points

"a street with a lot of rationalists living on it" (no rationalist-friendly entity controls all those houses and it's easy for minor fluctuations to wreck the intentional community thing)

Has anyone tried this? While it doesn't give a very integrated solution, it seems very easy to do. Why do you say that it is vulnerable to minor fluctuations? Having separate units on the same street seems quite robust to me.

Comment author: KatjaGrace 17 March 2017 09:30:57AM 1 point

I and friends have, but pretty newly; there are currently two houses two doors apart, and more friends in the process of moving into a third three doors down. I have found this good so far, and expect to continue to for now, though I agree it might be unstable long term. As an aside, there is something nice about being able to wander down the street and visit one's neighbors, that all living in one house doesn't capture.

Comment author: KatjaGrace 31 March 2015 04:35:52AM 3 points

Bostrom quotes a colleague saying that a Fields medal indicates two things: that the recipient was capable of accomplishing something important, and that he didn't. Should potential Fields medalists move into AI safety research?

Comment author: KatjaGrace 31 March 2015 04:32:26AM 3 points

The claim on p257 that we should try to do things that are robustly positive seems contrary to usual consequentialist views, unless this is just a heuristic for maximizing value.

Comment author: KatjaGrace 31 March 2015 04:31:31AM 7 points

Does anyone know of a good short summary of the case for caring about AI risk?

Comment author: KatjaGrace 31 March 2015 04:30:46AM 4 points

Did you disagree with anything in this chapter?

Comment author: KatjaGrace 31 March 2015 04:29:27AM 4 points

Are there things that someone should maybe be doing about AI risk that haven't been mentioned yet?

Comment author: KatjaGrace 31 March 2015 04:28:45AM 5 points

Are you concerned about AI risk? Do you do anything about it?

Comment author: KatjaGrace 31 March 2015 04:27:58AM 5 points

Do you agree with Bostrom that humanity should defer non-urgent scientific questions, and work on time-sensitive issues such as AI safety?
