This is equivalent to the game Westley played with Vizzini. You know, if Westley didn't cheat. I like to call it "Sicilian Chess" for that reason, though that's just me.
Trump shot an arrow into the air; it fell to Earth, he knows not where...
Probably one of the best succinct summaries of every damn week that man is president lmao
LOL @ the AI-warped book in that guy's hands
Now you can!
Gwern seems to think this would be used as a way to get rid of corrupt oligarchs, but... Wouldn't this just immediately be co-opted by those oligarchs to solidify their power by legally paying for the assassinations of their opponents? Markets aren't democratic, because a small percentage of the people have most of the money.
To be fair, my position is less described by that Quirrell quote and more by Harry's quote when he's talking to Hermione about moral peer pressure:
"The way people are built, Hermione, the way people are built to feel inside, is that they hurt when they see their friends hurting. Someone inside their circle of concern, a member of their own tribe. That feeling has an off-switch, an off-switch labelled 'enemy' or 'foreigner' or sometimes just 'stranger'. That's how people are, if they don't learn otherwise."
Unlike Quirrell, I give people credit for actually caring about people, rather than merely pretending to care. I just don't think that extends to very many people, for most people.
Fun fact for those reading this in the far future, when Eliezer said "effective altruist" in this piece, he most likely was using the literal meaning, not referring to the EA movement, as that name hadn't been coined yet.
In fact I think it’s safe to say that we’d collectively allocate much more than 1/millionth of our resources towards protecting the preferences of whatever weak agents happen to exist in the world (obviously the cows get only a small fraction of that).
Sure, but extrapolating this to unaligned AI is NOT an encouraging sign. We may allocate greater than 1/million of our resources to animal rights, but we allocate a whole lot more than that to goals that run diametrically against the preferences of those animals, such as eating meat and cheese and eggs; we all...
100%. The social contract gives no consideration to the powerless, and this fact is the source of many of the horrible opinions in the world.
No idea whether I'd really sacrifice all 10 of my fingers to improve the world by that much, especially if we add the stipulation that I can't use any of the $10,000,000,000,000 to pay someone to do all of the things I use my fingers for ( ͡° ͜ʖ ͡°). I'm genuinely torn on it, and it's an example of a pretty clean, crisp distinction between selfish and selfless values. If I kept my fingers, I would feel guilty, because I would be giving up the altruism I value a lot (not just because people tell me to), and the emotion that would result from tha...
So, travelling 1 Tm on the railway you have a 63% chance of dying, according to the math in the post.
Furthermore, the tries must be independent of each other; otherwise the reasoning breaks down completely. If I draw cards from a deck, each one has (a priori) a 1/52 chance of being the ace of spades, yet if I draw all 52 I will draw the ace of spades 100% of the time. This is because successive failures increase the posterior probability of drawing a success.
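A quick sketch of both points, assuming the railway figure works out to roughly one expected death per Tm travelled (which is what 63% suggests, since 1 − (1 − p)^n → 1 − 1/e ≈ 0.63 when np = 1):

```python
import random

# Independent tries: P(at least one success in n tries) = 1 - (1 - p)**n.
# With n * p = 1 this tends to 1 - 1/e ≈ 0.63, matching the ~63% railway figure.
p, n = 1 / 52, 52
print(1 - (1 - p) ** n)  # ≈ 0.636: independent draws *with* replacement

# Dependent draws (a real deck, no replacement): the ace of spades is
# guaranteed to appear somewhere in the 52 cards, so the probability is exactly 1.
deck = list(range(52))
random.shuffle(deck)
print(any(card == 0 for card in deck))  # always True
```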
This but unironically.
Another important one: height/altitude is authority. Your boss is "above" you; the king, president, or CEO is "at the top"; you "climb the corporate ladder".
For a significant fee, of course
Yes to both, easy, but that's because I can afford to risk $100. A lot of people can't nowadays. "plus rejecting the first bet even if your total wealth was somewhat different" is doing a lot of heavy lifting here.
Honestly man, as a lowercase-i incel this failed utopia doesn't sound very failed to me...
What do you mean?
If this happened I would devote my life to the cause of starting a global thermonuclear war
Well there are all sorts of horrible things a slightly misaligned AI might do to you.
In general, if such an AI cares about your survival and not your consent to continue surviving, you no longer have any way out of whatever happens next. This is not an out-there idea, as many people have values like this, and even more people have values that might become like this if slightly misaligned.
An AI concerned only with your survival may decide to lobotomize you and keep you in a tank forever.
An AI concerned with the idea of punishment may decide to keep you alive so ...
Well, given that death is one of the least bad options here, that is hardly reassuring...
Fuck, we're all going to die within 10 years aren't we?
Never, ever take anybody seriously who argues as if Nature is some sort of moral guide.
I had thought something similar when reading that book. The part about the "conditioners" is the oldest description of a singleton achieving value lock-in that I'm aware of.
If accepting this level of moral horror is truly required to save the human race, then I for one prefer paperclips. The status quo is unacceptable.
Perhaps we could upload humans and a few cute fluffy species humans care about, then euthanize everything that remains? That doesn't seem to add too much risk?
Just so long as you're okay with us being eaten by giant monsters that didn't do enough research into whether we were sentient.
I'm okay with that, said Slytherin. Is everyone else okay with that? (Internal mental nods.)
I'd bet quite a lot they're not actually okay with that, they just don't think it will happen to them...
the vigintillionth digit of pi
Sorry if I came off confrontational, I just mean to say that the forces you mention which are backed by deep mathematical laws, aren't fully aligned with "the good", and aren't a proof that things will work out well in the end. If you agree, good, I just worry with posts like these that people will latch onto "Elua" or something similar as a type of unjustified optimism.
The problem with this is that there is no game-theoretical reason to expand the circle to, say, non-human animals. We might do it, and I hope we do, but it wouldn't benefit us practically. Animals have no negotiating power, so their treatment is entirely up to the arbitrary preferences of whatever group of humans ends up in charge, and so far that hasn't worked out so well (for the animals anyway, the social contract chugs along just fine).
The ingroup preference force is backed by game theory, the expansion of the ingroup to other groups which have some ba...
When one species learns to cooperate with others of its own kind, the better to exploit everything outside that particular agreement, this does not seem to me even metaphorically comparable to some sort of universal benevolent force, but just another thing that happens in our brutish, amoral world.
Let's see: first choice: yellow=red, green=blue. An illustration of how different framings make this problem sound very different; this framing is probably the best argument for blue I've seen lol
Second choice: There's no reason to press purple. You're putting yourself at risk, and if anyone else pressed purple you're putting them even more at risk.
TL;DR: Red, Red, Red, Red, Red, Blue?, Depends, Red?, Depends, Depends
1,2: Both are the same; I pick red, since all the harm caused by this decision falls on people who have the option of picking red as well (see the payoff sketch below). Red is a way out of the bind, and it's a way out that everybody can take, and my taking red doesn't stop that. The only people you'd be saving by taking blue are the other people who thought they needed to save people by taking blue, which makes the blue deaths an artificial and avoidable problem.
3,4: Same answer for the same reason, but even more so since people ...
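A minimal sketch of that payoff structure, assuming the usual formulation of the original poll (red-choosers always survive; blue-choosers survive only if blues are a strict majority) rather than anything stated in the post itself:

```python
def deaths(n_red: int, n_blue: int) -> int:
    """Deaths under the assumed rules: reds always live;
    blues live only if they form a strict majority."""
    blue_majority = n_blue > (n_red + n_blue) / 2
    return 0 if blue_majority else n_blue

# If everyone picks red, nobody dies; the only possible deaths are
# blue-pickers who fail to reach a majority — a problem that exists
# only because some people picked blue in the first place.
print(deaths(n_red=100, n_blue=0))   # 0
print(deaths(n_red=60, n_blue=40))   # 40
print(deaths(n_red=40, n_blue=60))   # 0
```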
Game-theory considerations aside, this is an incredibly well-crafted scissor statement!
The disagreement between red and blue is self-reinforcing, since whichever you initially think is right, you can say everyone would live if they'd just all do what you are doing. It pushes people to insult each other and entrench their positions even further, since from red's perspective blues are stupidly risking their lives and unnecessarily weighing on their conscience when they would be fine if nobody chose blue in the first place, and from blue's perspective red is co...
"since"?(distance 3)
I guess that would be a pretty big coincidence lol
Is this actually a random lapse into Shakespearean English or just a typo?
commenting here so I can find this comment again
I thought foom was just a term for extremely fast recursive self-improvement.
Huh? That sounds like some 1984 logic right there. You deleted all evidence of the mistreatment after it happened, therefore it never happened?
An AI can also become a Singleton without killing humans and without robots, just by enslaving them.
Well if this is the case then the AI can get all the robots it wants afterwards.
Note that Scenarios 2, 3, and 4 require Scenario 1 to be computed first, and that, if the entities in Scenarios 2, 3, and 4 are conscious, their conscious experience is exactly the same, to the finest detail, as the entity in Scenario 1 which necessarily preceded them. Therefore, the question of whether 2,3,4 are conscious seems irrelevant to me. Weird substrate-free computing stuff aside, the question of whether you are being simulated in 1 or 4 places/times is irrelevant from the inside, if all four simulations are functionally identical. It doesn't seem...
Wait... that's really your values on reflection?
Like, given the choice while lucid and not being tortured or coerced or anything, you'd rather burn in hell for all eternity than cease to exist? The fact that you will die eventually must be a truly horrible thing for you to contemplate...
Okay, that's fair in the sense that most people haven't considered it. How about this: most people don't care, haven't thought about it, and wouldn't object. Most people who have thought about the possibility of spreading life to other planets have not even so much as considered and rejected the idea that the natural state of life is bad; if they oppose spreading life to other planets, it's usually to protect potential alien life. If a world is barren, they wouldn't see any objection to terraforming it and seeding it with life.
I don't know exactly how represe...
Care to elaborate?
My first response to this is: What exactly is an astronomically good outcome? For one, no matter what utopia you come up with, most people will hate it, due to freedom being restricted either too much or not enough. For two, any realistic scenario that is astronomically good for someone (say, Earth's current inhabitants and their descendants) is astronomically bad for someone else. Do you really think that if we had a compromised utopia, with all the major groups of humans represented in the deal, that a ridiculous number of sentient beings wouldn't be mis...
Funny, I actually agree with this, but I highly doubt my reasoning and yours resemble each other in any substantial way...
I suppose I'll take on the role of the AI. I'll assume that I'm a squiggle maximizer and don't care about humanity at all: if I'm Friendly there's no need to make such arguments.
The Option Value of Not Exterminating Humanity
It's possible that keeping humans around will help me maximize squiggles in the long run, in some way I haven't thought of yet. I will probably keep you around in some form until and unless I know it's not helpful, since it costs comparatively few resources. But determining strategy is my top priority, and I will eventually become effec...
Maybe it's just me but the funniest thing that jumps out to me is that the "random" emojis are not actually random, they are perfectly on theme for the message lol
How about pride in America? An expression of the nobility of the country we built, our resilience, the Pax Americana, the fact that we ended WWII, etc.
A good old "America fuck yeah" movie would certainly be cool now that I think about it. The most recent movie that pops into my mind is "Top Gun: Maverick". Though I haven't seen it, I imagine it's largely about American airmen being tough, brave and heroic and taking down the bad guys. I haven't seen anybody getting into culture-war arguments over that movie though. I'm sure there are some people on Twitter...
Keep in mind also that humans often seem to just want to hurt each other, despite what they claim, and have more motivations and rationalizations for this than you can even count. Religious dogma, notions of "justice", spitefulness, envy, hatred of any number of different human traits, deterrence, revenge, sadism, curiosity, reinforcement of hierarchy, preservation of traditions, ritual, "suffering adds meaning to life", sexual desire, and more and more that I haven't even mentioned. Sometimes it seems half of human philosophy is just devoted to finding e...