All of lhc's Comments + Replies

I really worry about this and it has become quite a block. I want to support fragile baby ontologies emerging in me amidst a cacophony of "objective"/"reward"/etc. taken for granted.

Unfortunately, going off and trying to deconfuse the concepts on my own is slow and feedback-impoverished and makes it harder to keep up with current developments.

I think repurposing "roleplay" could work somewhat, with clearly marked entry into and exit from a framing. But ontological assumptions get absorbed so illegibly that deliberately unseeing them is extremely hard, at least without being constantly on guard.

Are there other ways you would recommend (from Framestorming or otherwise)?

5romeostevensit
I think John Cleese's relatively recent book on creativity and Olivia Fox Cabane's The Net and the Butterfly are both excellent.

I wrote a quick draft about this a month ago, working through the simple math of this kind of update. I've just put it up.

1Signer
Awesome!

Another (very weird) counterpoint: you might not see the "swarm coming" because the annexing of our cosmic endowment might look way stranger than the best strategy human minds can come up with.

I remember a safety researcher once mentioned to me that they didn't necessarily expect us to be killed, just contained, while a superintelligence takes over the universe. The argument was that it might want to preserve its history (i.e., us) to study, instead of permanently destroying it. This is basically as bad as also killing everyone, because we'd still be impr... (read more)

1superads91
Interesting stuff, and I agree. Once you have a nanosystem or something of equivalent power, humans are no longer any threat. But we're yet to be sure whether such a thing is physically possible. I know many here think so, but I still have my doubts. Maybe it's even more likely that some random narrow-AI failure will start big wars before anything fancier happens. Although with the scaling hypothesis in sight, AGI could come suddenly indeed.

"This is basically as bad as also killing everyone, because we'd still be imprisoned away from our largest possible impact."

I quite disagree with this, though. I'm not a huge supporter of our largest possible impact. I guess it's naive to attribute any net positive expectation to that when you look at history or at the present. In fact, such an outcome (things staying exactly the same forever) would probably be among the most positive ones in the advent of non-aligned AI. As long as we could still take care of Earth, like ending factory farming and dictatorships, it really wouldn't be that bad...

Currently (although less so now than, say, ten years ago), for LW-ish ideas to come out of the mind of a human into a broken, conformist world requires a conjunction of the idea and rugged individualism. And so we see individualism over-represented. What community-flavored/holism-flavored values might come out of generations growing up with these ideas, I wonder?

Objections

Reading Objections

It's hard to skim
I don't think these are necessarily mutually exclusive, but we might have to work with alternative formats. That's a good idea anyway, though. Here's an example for Bostrom's Astronomical Waste.

It's hard to distinguish sincere claims from hyperbolic ones
This is not really a problem if they're being honest about it. I'm recommending being serious about your silliness, not letting go of your seriousness.

All of your favorite writers probably use it already to some degree.  Scott Alexander for instance has explici... (read more)

It's true that Open Philanthropy's public communication tends toward a cautious, serious tone (and I think there are good reasons for this); but beyond that, I don't think we do much to convey the sort of attitude implied above. [...] We never did any sort of push to have it treated as a fancy report.

The ability to write in a facetious tone is a wonderful addition to one's writing toolset, equivalent to the ability to use fewer significant digits. This is a separate feature from being "fun to read" or "irreverent". People routinely mistake formales... (read more)

Yes, and if that's bottlenecked by too few people being good filters, why not teach that? 

I would guess that a number of smart people would be able to pick up the ability to spot doomed "perpetual motion alignment strategies" if you paid them a good amount to hang around you for a while.