Day = Made
How much money do you have to donate, if you don't mind my asking?
Kinda awkward to say aloud. I think Institute for the Research of Machine Intelligence would sound better. Minor nitpick.
This is a question about utilitarianism, not AI, but can anyone explain (or link to an explanation of) why reducing the total suffering in the world is considered so important? I thought that we pretty much agreed that morality is based on moral intuitions, and it seems pretty counterintuitive to value the states of mind of people too numerous to sympathize with as highly as people here do.
It seems to me that reducing suffering in a numbers game is the kind of thing you would say is your goal because it makes you sound like a good person, rather than something your conscience actually motivates you to do, but people here are usually pretty averse to conscious signaling, so I'm not sure that works as an explanation. I'm certain this has been covered elsewhere, but I haven't seen it.
Of course. How else would you know which horrible, horrible things to say? (I also have in mind things designed to get a more visceral reaction from the gatekeeper, e.g. graphic descriptions of violence. Please don't ask me to be more specific about this because I really, really don't want to.)
You don't have to be specific, but how would grossing out the gatekeeper bring you closer to escape?
Finding such a movie clip sounds extremely unpleasant and I would need more of an incentive to start trying. (Playing the AI in an AI box experiment also sounds extremely unpleasant for the same reason.)
I know it sounds like I'm avoiding having to justify my assertion here, and... that's because I totally am. I suspect on general principles that most successful strategies for getting out of the box involve saying horrible, horrible things, and I don't want to get much more specific than those general principles because I don't want to get too close to horrible, horrible things.
Like when you say "horrible, horrible things". What do you mean?
Driving a wedge between the gatekeeper and his or her loved ones? Threats? Exploiting any guilt or self-loathing the gatekeeper feels? Appealing to the gatekeeper's sense of obligation by twisting his or her interpretation of authority figures, objects of admiration, and internalized sense of honor? Asserting cynicism and general apathy towards the fate of mankind?
For all but the last one it seems like you'd need an in-depth knowledge of the gatekeeper's psyche and personal life.
I'd prefer not to. If I successfully made my point, then I'd have posted exactly the kind of thing I said I wouldn't want to be known as being capable of posting.
But you wouldn't actually be posting it, you would be posting the fact that you conceive it possible for someone to post it, which you've clearly already done.
More difficult version of AI-Box Experiment: Instead of having up to 2 hours, you can lose at any time if the other player types AI DESTROYED. The Gatekeeper player has told their friends that they will type this as soon as the Experiment starts. You can type up to one sentence in your IRC queue and hit return immediately; the other player cannot type anything before the game starts (so you can show at least one sentence, up to IRC character limits, before they can type AI DESTROYED). Do you think you can win?
(I haven't played this one but would give myself a decent chance of winning, against a Gatekeeper who thinks they could keep a superhuman AI inside a box, if anyone offered me sufficiently huge stakes to make me play the game ever again.)
You really relish the whole "scariest person the internet has ever introduced me to" thing, don't you?
In the end, I think you interpreted pp's question correctly, except that the pronoun is "you" rather than "I": the disclaimer "if you don't mind my asking" suggests the question is how big your donation budget is. One answer to that is in the original post, $20k available for this offer. Another answer is here.
(The disclaimer seems to me to require a strong update, but my prior is even higher.)
You would be correct. Thanks for the link.