Vaniver

Hermione's mask does not, so far as I noticed, move to Dumbledore.

What about the mask of 'captured person offstage to be rescued'?

I expect there are plenty of possible and actual critiques whose responses should include a sentence like "it would have been a more useful critique if the author had read my point properly".

This reminded me of @transhumanist_atom_understander's commentary on tumblr about the Smalley-Drexler debate:

Smalley, though, didn’t read Nanosystems. I’m pretty sure of this. I don’t think you can tell from his Scientific American article. But it becomes pretty clear in the “debate”. Drexler wrote an open letter to Smalley, and Smalley’s response includes this revealing paragraph:
...

So when I say that Smalley’s objections are at least addressed (convincingly or not) in Nanosystems, I don’t infer that Smalley must have read this and made the objections anyway. He didn’t read it.

https://x.com/alexwei_/status/1946477742855532918

This doesn't count yet for the standards of the bet, I think, but this seems pretty close to "EY wins" to me? (The Manifold market has jumped up to 93%.)

At the beginning of your post, you talk about "the value" of comments in a way that seems like it's purely connected to their information content. Why not view them as speech acts?

I think I agree with your statement; I assume that this happened, though? Or, at least, in a mirror of the 'improvements visible from the outside' comment earlier, the question is whether MIRI is now operating in a way that leads to successfully opposing their adversaries, rather than whether they've exposed their reasoning about this to the public.

I can't comment on why you weren't invited [to the CFAR postmortem], because I was not involved with the decision-making for who would be invited; I just showed up to the event. Naively, I would've guessed it was because you didn't work at CFAR (unless you did and I missed it?); I think only one attendee wasn't in that category, for a broad definition of 'work at'.

I have to rate all the time spent that didn’t result in improvements visible from the outside as nothing but costs paid to sustain internal narcissistic supply

This seems fair to me.

The uniformly positive things I’ve heard about “Don’t Create the Torment Nexus II: If Anyone Builds It, Everyone Dies” implies not much in the way of new perspective or even consensus that one is needed.

I think the main difference between MIRI pre-2022 and post-2022 is that pre-2022 had much more willingness to play along with AI companies and EAs, and post-2022 is much more willing to be openly critical.

There are other differences, and also I think we might be focusing on totally different parts of MIRI. Would you care to say more about where you think there needs to be new perspective?

Hmm. I HMCFed after that, I think, but I don't remember why I didn't talk much about it publicly. (Also I think there was a CFAR postmortem that I don't recall getting written up and discussed online, tho there was lots of in-person discussion.)

It does in #1 but not #4--I should've been clearer which one I was referring to.

I like "this is not a metaphor".

I think referring to Emmett as "former OpenAI CEO" is a stretch? Or, like, I don't think it passes the onion test well enough. 


However, I still think it is a good idea to create a sense of urgency, both in the ad and in books about AI safety.

Personally, I would rather stake my chips on 'important' and let urgent handle itself. The title of the book is a narrow claim--if anyone builds it, everyone dies--with the clarifying details conveniently swept into the 'it'. Adding more inferential steps makes it more challenging to convey clearly and more challenging to hear (since each step could lose some of the audience).

There are some further complicated arguments about urgency--you don't want to have gone out on too much of a limb about saying it's close, because of costs when it's far--but I think I most want to make a specialization of labor argument, where it's good that the AI 2027 people, who are focused on forecasting, are making forecasting claims, and good that MIRI, who are focused on alignment, are making alignment difficulty / stakes claims.

I see your point about how a weak claim can water down the whole story. But if I could choose between 100 people convinced that ASI would kill us all, but with no sense of urgency, and 50 or even 20 who believe both the danger and that we must act immediately, I'd choose the latter.

Hmm, I think I might agree with this value tradeoff, but I don't think I agree with the underlying prediction of what the world is offering us.

I think also MIRI has tried for a while to recruit people who can make progress on alignment and thought it was important to start work now, and the current push is on trying to get broad attention and support. The people writing blurbs for the book are just saying "yes, this is a serious book and a serious concern" and not signing on to "and it might happen in two years"--tho probably some of them also believe that--and I think that gives enough cover for the people who are acting on two-year timelines to operate.
