Ben Pace

I'm an admin of this site; I work full-time on trying to help people on LessWrong refine the art of human rationality. (Longer bio.)

I generally feel more hopeful about a situation when I understand it better.

I have signed no contracts or agreements whose existence I cannot mention.

Sequences

AI Alignment Writing Day 2019
Transcript of Eric Weinstein / Peter Thiel Conversation
AI Alignment Writing Day 2018
Share Models, Not Beliefs

Comments

Ben Pace

I think it makes sense to state the more direct threat-model of literal extinction, though I am also a little confused by the citing of weirdness points… I would’ve said instead that including the weirder outcomes makes the whole conversation more complex in a way that (I believe) everyone would reliably end up agreeing was not a productive use of time.

(Expanding on this a little: I think that literal extinction is a likely default outcome, and most people who are newly coming to this topic would want to know that this is even in the hypothesis-space and find that to be key information. I think if I said “also maybe they later simulate us in weird configurations like pets for a day every billion years while experiencing insane things”, they would not respond “ah, never mind then, this subject is no longer a very big issue”; they would be more like “I would’ve preferred that you had factored this element out of our discussion so far; we spent a lot of time on it, yet it still seems to me like the extinction event being on the table is the primary thing that I want to debate”.)

There's definitely some difference, but I still think that the mathematical argument is just pretty strong, and losing all but a tiny fraction of your resources for hosting life and fun and goodness seems to me extremely close to "losing everything".
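(To make that arithmetic rough and explicit, assuming the “a day every billion years” hypothetical above, the fraction of resources retained would be on the order of

$$\frac{1\ \text{day}}{10^{9}\ \text{years}\times 365\ \text{days/year}}\approx 3\times 10^{-12},$$

so the fraction lost is roughly $1-3\times 10^{-12}$, i.e. essentially everything.)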

IMO this is an utter loss scenario, to be clear.

That's fair; mine isn't that precise. I've copied Habryka's one instead. (My old one is in a footnote for posterity[1].)

  1. ^

    Non-disclosure agreements I have signed: Around 2017 I signed an NDA when visiting the London DeepMind offices for lunch, one covering the sharing of any research secrets, which was required of all guests before we were allowed access to the building. I do not believe I have ever signed any other NDA (nor a non-disparagement agreement).

I like the conciseness of yours; I've changed mine to match.

Glad you're keeping your eye out for these things!

It's 8 hours away from the Bay, which all-in is not that different from a plane flight to NY from the Bay, so the location doesn't really help with being where all the smart and interesting people are.

Before we started the Lightcone Offices we did a bunch of interviews to see whether all the folks in the Bay Area x-risk scene would click a button to move to the Presidio District in SF (i.e. imagine the Lightcone team packs all your stuff and moves it for you, and also all these other people in the scene move too), and IIRC most wouldn't, because of things like their extended friend networks and partners and so on (@habryka @jacobjacob am I remembering that correctly?). And that's only a ~1.5-hr move for most of them.

The transcript was tough to read; I got through it, but I don't think I followed everyone's points. There are tons of typos, and sometimes it puts marginal words in the wrong speaker's mouth. I think paying someone to go through this and fix the typos, or having it transcribed by a person in the first place (I'd guess this is machine-transcribed), would've been worth it; I would've chipped in $10 for that, and I'm sure many others would've too.

17. Worldview diversification (but not as we know it)

16. Some intuition
