Comment author: LM7805 23 September 2013 08:18:22PM 11 points [-]

Another meme that arguably reached the Bay Area via the 1960s/1970s counterculture, but predates it, is "intentional community". This influences startup culture and hacker culture (specifically hackerspaces), and to some (lesser?) extent seasteaders, the back-to-the-land movement, and rationalists as well.

Comment author: Julia_Galef 24 September 2013 07:12:20AM 2 points [-]

Great one, thanks!

Comment author: roland 23 September 2013 07:54:58PM 10 points [-]

Psychedelics? Nootropics? I guess they are also a big part connecting lots of those subcultures.

Comment author: Julia_Galef 24 September 2013 06:58:55AM 3 points [-]

Agreed. I might add them to a future version of this map.

This time around I held off mainly because I wasn't sure how to add them; drugs really do pervade so many of these groups, in different variants: psychedelics are strong among the counterculture and New Age culture, nootropics are more popular among rationalists and biohackers/Quantified Self, and both are popular among transhumanists. (See this H+ article for a discussion of psychedelic transhumanists.)

Comment author: RichardKennaway 10 September 2013 06:56:29AM 2 points [-]

Can I summarise that as saying that CFAR takes account of what we are, while LW generally does not?

Comment author: Julia_Galef 10 September 2013 07:47:00AM 9 points [-]

Well, I'd say that LW does take account of who we are. They just haven't had the impetus to do so quite as thoroughly as CFAR has. As a result there are aspects of applied rationality, or "rationality for humans" as I sometimes call it, that CFAR has developed and LW hasn't.

Comment author: shminux 08 April 2013 05:44:07PM *  4 points [-]

At $3900, it's an investment, but a low-risk one, since we have a money-back guarantee. If you don't feel like what you got out of it was worth it, we'll refund your money without hesitation or complaint.

This is still not low-risk. I would hesitate to ask for a refund even if an event like this fell below my expectations, as long as it wasn't a total flop or a con, which it surely isn't. Low-risk (for the participant) would mean dividing the camp into billable events with a price tag on each, and refunding a portion of the price of each event based on the post-event evaluations. This is probably unworkable in practice, but at least it would not be misleading. On the other hand, "full refund, no questions asked" is a useful marketing strategy, if a bit dark-artsy.

Comment author: Julia_Galef 08 April 2013 06:23:47PM *  12 points [-]

If it makes you feel less hesitant, we've given refunds twice: one to a person at a workshop last year who said he'd expected polish and suits, and another to someone who said he enjoyed it but wasn't sure it was going to help enough with his current life situation to be worth it.

Comment author: dspeyer 08 April 2013 03:41:13PM 0 points [-]

I also made a simplified map of some of our classes, so you can see how I think of them fitting into the bigger picture of rationality (click to enlarge):

I've clicked everything I can think of and it's not enlarging.

Comment author: Julia_Galef 08 April 2013 03:44:17PM 1 point [-]

Fixed now, sorry!

Comment author: JMiller 08 April 2013 03:26:22PM 1 point [-]

Hi, the "apply here" link is not working for me.

Thanks!

Comment author: Julia_Galef 08 April 2013 03:35:10PM 0 points [-]

Fixed! Thanks, I apparently didn't understand how links worked in this system.

Comment author: TheSingularityIsOver 08 November 2012 05:28:01PM 5 points [-]

I'm a little skeptical about this: "Attendees will be surrounded by other ambitious, successful, practically-minded folks"

Can I get some evidence?

Comment author: Julia_Galef 09 November 2012 12:09:21AM 8 points [-]

Not sure what kind of evidence you're looking for here; that's just a description of our selection criteria for attendees.

Comment author: TheOtherDave 30 November 2011 06:12:21PM 0 points [-]

Presumably that depends on how we came to think we held that moral theory in the first place.

If I assert moral theory X because it does the best job of reflecting my moral intuitions, for example, then when I discover that my moral intuitions in a particular case contradict X, it makes sense to amend X to better reflect my moral intuitions.

That said, I certainly agree that if I assert X for some reason unrelated to my moral intuitions, then modifying X based on my moral intuitions is a very questionable move.

It sounds like you're presuming that the latter is generally the case when people assert utilitarianism?

Comment author: Julia_Galef 30 November 2011 07:52:30PM 4 points [-]

Preferring utilitarianism is a moral intuition, just like preferring Life Extension. The former's a general intuition, the latter's an intuition about a specific case.

So it's not a priori clear which intuition to modify (general or specific) when the two conflict.

Comment author: [deleted] 30 November 2011 06:22:14PM *  0 points [-]

Well, if you view moral theories as if they were scientific hypotheses, you could reason in the following way: if a moral theory/hypothesis makes a counterintuitive prediction, you could 1) reject your intuition, 2) reject the hypothesis ("I want to"), or 3) revise your hypothesis.

It would be practical if one could actually try out a moral theory, but I don't see how one could go about doing that...

In response to comment by [deleted] on Life Extension versus Replacement
Comment author: Julia_Galef 30 November 2011 07:47:13PM 4 points [-]

Right -- I don't claim any of my moral intuitions to be true or correct; I'm an error theorist, when it comes down to it.

But I do want my intuitions to be consistent with each other. So if I have the intuition that utility is the only thing I value for its own sake, and I have the intuition that Life Extension is better than Replacement, then something's gotta give.

Comment author: Manfred 30 November 2011 03:07:17AM 26 points [-]

Here's his 2008 paper, "Life Extension versus Replacement," which explores an amendment to utilitarianism that would allow us to prefer Life Extension

I feel like the thing that should allow us to prefer life extension is the thing that makes people search for amendments to utilitarianism that would allow us to prefer life extension.

Comment author: Julia_Galef 30 November 2011 05:31:59PM *  6 points [-]

When our intuitions in a particular case contradict the moral theory we thought we held, we need some justification for amending the moral theory other than "I want to."
