davekasten


Generally, it is difficult to overstate how completely the PRC is seen as a bad-faith actor in DC these days. Many folks watched the PRC engage in mass economic espionage for a decade while repeatedly promising to stop; the people for whom those were formative career moments are now senior. Then COVID happened, and while not everyone believes in the lab leak hypothesis, basically everyone believes that the PRC sure as heck reflexively covered things up, whether or not they were actually culpable.

(Edit: to be clear, reporting, not endorsing, these claims)

Basic question, because I haven't thought about this deeply: in national security stuff, we often intentionally elide the difference between capabilities and intentions. The logic is: you can't assume a capability won't be used, so you should plan as if it is intended to be used.

Should we adopt such a rule for AGI with regard to policy decision-making? (My guess is... probably not for threat assessment, but probably yes for contingency planning?)

I think, having been raised in a series of very debate- and seminar-centric discussion cultures, that a quick-hit question like that is indeed contributing something of substance.  I think it's fair that folks disagree, and I think it's also fair that people signal (e.g., with karma) that they think "hey man, let's go a little less Socratic in our inquiry mode here."  

But, put in more rationalist-centric terms, sometimes the most useful Bayesian update you can offer someone else is, "I do not think everyone is having the same reaction to your argument that you expected." (Also true for others doing that to me!)

(Edit: added two words to avoid ambiguity in the meaning of my last sentence)

Yes, I would agree that if I had expected a short take to get this degree of attention, I would probably have written a longer comment.

Well, no, I take that back. I probably wouldn't have written anything at all. To some, that might be a feature; to me, that's a bug.

It is genuinely a sign that we are all very bad at predicting others' minds that it didn't occur to me that saying, effectively, "OP asked for 'takes'; here's a take on why I think this is pragmatically a bad idea" would also be read as saying "and therefore there is no other good question here." That's, as the meme goes, a whole different sentence.

I think it's bad for discourse to pretend that discourse doesn't have impacts on others in a democratic society. And I think meta-censoring discourse by claiming that certain questions might have implicit censorship impacts is one of the most anti-rationality trends in the rationalist sphere.

I recognize that most users of this platform will likely disagree, and I predict negative agreement-karma on this post.

Is this where we think our pressuring-Anthropic points are best spent?

I personally endorse this as an example of us being a community that Has The Will To Try To Build Nice Things.

To say the obvious thing: I think if Anthropic isn't able to make at least roughly meaningful predictions about AI welfare, then their current core public research agendas have failed?
