Exactly ZERO.
...
Zero is not a probability! You cannot be infinitely certain of anything!
Nobody knows what "friendly" means (you could put "godly" there, etc., with more or less the same effect).
By common usage in this subculture, the concept of Friendliness has a specific meaning-set attached to it that implies a combination of 1) a know-it-when-I-see-it isomorphism to common-usage 'friendliness' (e.g. "I'm not being tortured"), and 2) a deeper sense in which the universe is being optimized by our own criteria by a more powerful optimization process. Here's a better explanation of Friendliness than I can convey. You could also substitute the more modern term 'Aligned' for it.
Worse, it may easily turn out that killing all humanimals instantly is actually the OBJECTIVELY best strategy for any "clever" Superintelligence.
I would suggest reading about the following:
- Paperclip Maximizer
- Orthogonality Thesis
- The Mere Goodness Sequence (though in order to understand it well you will want to read the other Sequences first)
I really want to emphasize the importance of engaging with a decade-old corpus of material about this subject.
The point of these links is that there is no objective morality that any randomly designed agent will naturally discover. An intelligence can accrete around any terminal goal that you can think of.
This is a side issue, but your persistent use of the neologism "humanimal" is probably costing you weirdness points and detracts from the substance of the points you make. Everyone here knows humans are animals.
Most probably the problem will not be artificial intelligence, but natural stupidity.
Agreed.
I just started a Facebook group to coordinate effective altruist YouTubers. I'd definitely say rationality also falls under the umbrella. PM me and I can add you. :)
There is some minimum threshold below which it just does not count, like saying, "What if we exposed 3^^^3 people to radiation equivalent to standing in front of a microwave for 10 seconds? Would that be worse than nuking a few cities?" I suppose there must be someone in 3^^^3 who is marginally close enough to cancer for that to matter, but no, that rounds down to 0.
Why would that round down to zero? That's a lot more people having cancer than getting nuked!
(It would be hilarious if Zubon could actually respond after almost a decade)
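To spell out the expected-value comparison (a rough sketch; the per-person probability p below is an assumed placeholder, not an actual risk estimate): let p be the chance that ten seconds of microwave-level exposure gives one person cancer, and compare against very roughly ten million deaths from nuking a few cities:

```latex
\[
  \underbrace{p \cdot 3\uparrow\uparrow\uparrow 3}_{\text{expected cancer cases}}
  \;\gg\;
  \underbrace{\sim 10^{7}}_{\text{deaths from nuking a few cities}}
  \qquad \text{for any } p > 0 .
\]
```

Unless p is exactly zero (which, per the "zero is not a probability" point above, it isn't), the left side dominates by an unimaginable margin, so it can't round down to 0.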
For 1), you might be interested to know that I recently made a Double Crux UI mockup here. I'm hoping to start some discussion on what an actual interface might look like.
Yep, you were one of the parties I was thinking of. Nice work! :D
What I'm about to say is in the context of you being one of the most frequent commenters on this site.
Otherwise it sounds like entitled whining.
That is really unfriendly to say; honestly the word I want to use is "nasty," but that is probably hyperbolic/hypocritical. I'm not sure if you realize this, but a culture of macho challenging like this discourages people from participating. I think you and several other commenters who determine the baseline culture of this site should try to be more friendly. I have seen you in particular use a smiley before, so that's good, and you're probably a friendly person along many dimensions. But I want to emphasize how intimidating this can be to newcomers, or to people who are otherwise uncomfortable with what you probably interpret as joshing around with LW friends. To you it may feel like you are pursuing less-wrongness, but to people who are more neurotic and/or less familiar with this forum it can come across as being hounded, even if vicariously.
I do not want to pick on people I don't know but there are other frequent commenters who could use this message too.
Why isn't CFAR or friends building scalable rationality tools/courses/resources? I played the Credence Calibration game and feel like that was quite helpful in making me grok Overconfidence Bias and the internal process of down-adjusting one's confidence in propositions. Multiple times I've seen mentioned the idea of an app for Double Crux. That would be quite useful for improving online discourse (seems like Arbital sorta had relevant plans there).
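For concreteness, here is a minimal sketch (my own assumption of the mechanic, not CFAR's actual implementation or the game's real scoring rule) of the kind of loop such a calibration tool runs: answer binary questions, state a confidence, get scored, and see whether your stated confidence exceeds your hit rate.

```python
# Minimal sketch of a credence-calibration scoring loop (assumed mechanic, not CFAR's code).
import math

def log_score(confidence: float, correct: bool) -> float:
    """Logarithmic scoring rule: in expectation it is maximized by reporting your true credence."""
    p = confidence if correct else 1.0 - confidence
    return math.log2(p) + 1.0  # +1 so a 50% coin-flip guess scores exactly 0

def calibration_report(answers: list[tuple[float, bool]]) -> None:
    """answers: (stated confidence, whether the answer was correct) pairs."""
    total = sum(log_score(c, ok) for c, ok in answers)
    avg_conf = sum(c for c, _ in answers) / len(answers)
    accuracy = sum(ok for _, ok in answers) / len(answers)
    print(f"score={total:.2f}  mean confidence={avg_conf:.0%}  accuracy={accuracy:.0%}")
    if avg_conf > accuracy:
        print("Stated confidence exceeds hit rate: overconfident, adjust down.")

# Example: answering at 90% confidence but only being right 60% of the time.
calibration_report([(0.9, True), (0.9, False), (0.9, True), (0.9, False), (0.9, True)])
```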
Relatedly: Why doesn't CFAR have a prep course? I asked them multiple times what I could do to prepare, and they said "you don't have to do anything". This doesn't make sense. I would be quite willing to spend hours learning marginal CFAR concepts, even if it were at a lower pacing/information-density/quality. I think the argument is something like 'you must empty your cup so you can learn the material', but I'm not sure.
I am somewhat suspicious that one of the reasons (certainly not the biggest, but one of them) for the lack of these things is so they can more readily indoctrinate AI Safety as a concern. Regardless of whether that's a motivator, I think their goals would be more readily served by developing scaffolding to help train rationality amongst a broader base of people online (and perhaps using that as a pipeline for the more in-depth workshop).
Did it get resolved? :)
I had asked someone how I could contribute, and they said there was a waitlist or whatever. Like others have mentioned, I would recommend prioritizing maximal user involvement. Try to iterate quickly and get as many eyeballs on it as you can so you can see what works and what breaks. You can't control people.
I do want to heap heavy praise on the OP for Just Going Out And Trying Something, but yes, consult with other projects to avoid duplication of effort. :)
I find it funny people think questions about the Chinese Room argument or induction are obvious, tangential, or silly. xD
Anyway: What is the best algorithm for deciding between careers?
(Rule 1: Please don't say the words "consult 80,000 Hours" or "Use the 80K decision tool!" That is analogous to telling an atypically depressed person to "read a book on exercise and then go out and do things!" Like, that's really not a helpful thing, since 80K themselves are completely booked (they didn't respond to my application despite my fitting their checkboxes). Also I've been to two of their intro workshops.)
I want to know what object-level tools, procedures, heuristics etc. people here recommend for deciding between careers. Especially if one feels conflicted between different choices. Thanks! :)