Why should we care what one minor TV presenter thinks? Even assuming all of Eliezer's direst predictions were true (and I don't have anything like the confidence in them that he does, but I presume from the post that you do), I could name a thousand people off the top of my head whom it would be a better idea to convince of the risk.
I think what you're doing here is quite close to privileging the hypothesis -- "why don't we try convincing... this guy?" The amount of effort it would take to target one z-list celebrity for 'conversion', compared to the expected reward, suggests that almost anything would be a better idea.
I would prefer to see fewer of these "Here's someone saying something vaguely relevant to transhumanism/FAI"-type posts. I don't mind when it's a well-known thinker whose opinion is actually worth updating on, or if the person in question can reach an extremely large audience, but I don't think someone like Jason Silva is worth a discussion post.
You missed the OP's point, which is crowdsourcing:
Anyone have the connections to change his mind and help the X-risk meme piggyback on his voice?
It's my hope that Jason can update on our ideas, and he certainly does seem to have the potential to reach a large, if not extremely large, audience. And it's my intuition that he will be more inclined to update while he is still gaining popularity than after he has peaked.
In my mind, this falls into rationality and X-risk outreach, and lies squarely within Discussion territory.
I suspect we have differing expectations about how well-known Silva will become. This isn't a precise prediction, but I believe with ~65% confidence that he is at roughly the peak of his popularity (measured in terms of how frequently his name appears in blogs and major news outlets).
Personally, I would prefer to see fewer posts on x-risk outreach, especially those focused on contacting specific people, and especially when said person isn't very well-known.
I agree resources are well spent reaching out to those with a larger audience. But I would assert that it takes fewer resources to influence those with a smaller current audience; the larger one's audience, the more one's current opinions are mentally reinforced.
And he seems a prime target to be invited to the Singularity Summit, where he'd hopefully have influential social contact with SI folks.
Can I ask why you prefer to see fewer posts on outreach?
That estimate sounds, if anything, a bit low. However, if there is a significant chance he will become very popular in the future, it may still be worthwhile to introduce him to those topics now.
Superficially, it seems like he's assuming intelligence implies benevolence.
Well, yeah, but why does he think THAT?
Because it's so obvious that it doesn't require further examination. (Of course this is wrong and it does, but he hasn't figured that out yet.)
Of course this is wrong and it does, but he hasn't figured that out yet
That's quite condescending. How do you know which one of you is wrong?
The one-line answer is "'Superintelligence implies supermorality!' thought the cow as the bolt went through its brain."
I'm not saying the apparent object-level claim (i.e. that intelligence implies benevolence) is wrong. Just that it does in fact require further examination, whereas here it looks like an invisible background assumption.
Did my phrasing not make it clear that this is what I meant, or did you interpret me as I intended and still think it sounds condescending?
I'm not saying the apparent object-level claim (i.e. that intelligence implies benevolence) is wrong.
I think few would claim that. We can point to smart-but-evil folk to demonstrate otherwise. The more defensible idea is that there's a correlation.
Just an FYI that Jason Silva, a "performance philosopher" who is quickly gaining popularity and audience, seems either to have given little thought to the existential threat of AGI, to not have been exposed to the proper arguments, or to be unconvinced by them. But of course, perhaps this optimism is what has allowed him to become so engagingly exuberant.
Do you disagree with the material you quoted? If so, can you say why?
Just an FYI that Jason Silva, a "performance philosopher" who is quickly gaining popularity and audience, seems either to have given little thought to the existential threat of AGI, to not have been exposed to the proper arguments, or to be unconvinced by them. But of course, perhaps this optimism is what has allowed him to become so engagingly exuberant.
"And I think if they're truly trillions of times more intelligent than us, they're not going to be less empathetic than us---they're probably going to be more empathetic. For them it might not be that big of deal to give us some big universe to play around in, like an ant farm or something like that. We could already be living in such a world for all we know. But either way, I don't think they're going to tie us down and enslave us and send us to death camps; I don't think they're going to be fascist A.I.'s. "
Anyone have the connections to change his mind and help the X-risk meme piggyback on his voice? Perhaps by inviting him to the Singularity Summit?