All of steven's Comments + Replies

steven60

Eliezer, "more AIs are in the hurting class than in the disassembling class" is a distinct claim from "more AIs are in the hurting class than in the successful class", which is the one I interpreted Yvain as attributing to you.

steven00

Nick, I'm now sitting here being inappropriately amused at the idea of Hal Finney as Dark Lord of the Matrix.

Eliezer, thanks for responding to that. I'm never sure how much to bring up this sort of morbid stuff. I agree as to what the question is.

Also, steven points out for the benefit of altruists that if it's not you who's tortured in the future dystopia, the same resources will probably be used to create and torture someone else.

It was Vladimir who pointed that out; I just said it doesn't apply to egoists. I actually don't agree that it applies to altru...

steven20

Does nobody want to address the "how do we know U(utopia) - U(oblivion) is of the same order of magnitude as U(oblivion) - U(dystopia)" argument? (I hesitate to bring this up in the context of cryonics, because it applies to a lot of other things and because people might be more than averagely emotionally motivated to argue for the conclusion that supports their cryonics opinion, but you guys are better than that, right? right?)

Carl, I believe the point is that until I know of a specific argument why one is more likely than the other, I have no c...

steven30

Vladimir, hell is only one bit away from heaven (a minus sign in the utility function). I would hope, though, that any prospective heaven-instigators can find ways to somehow be intrinsically safe with respect to this problem.

steven02

There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities.

Expected utility is the product of two things, probability and utility. Saying the probability is smaller is not a complete argument.
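A minimal numeric sketch of this point (the probabilities and utilities below are hypothetical, chosen only for illustration):

```python
# Hypothetical numbers, purely to illustrate the point: expected utility is
# probability times utility, so a low-probability branch with a large enough
# (negative) utility can still dominate the calculation.
p_eutopia,  u_eutopia  = 0.10, 1_000      # revived into a good future
p_dystopia, u_dystopia = 0.01, -100_000   # revived into the "exotic" bad case
expected = p_eutopia * u_eutopia + p_dystopia * u_dystopia
print(expected)  # -900.0: the rarer branch outweighs the likelier one here
```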

steven30

The Superhappies can expand very quickly in principle, but it's not clear that they're doing so

We (or "they" rather; I can't identify with your fanatically masochist humans) should have made that part of the deal, then. Also, exponential growth quickly swamps any reasonable probability penalty.

I'm probably missing something, but like others I don't get why the Superhappies implemented part of Babyeater morality if negotiations failed.

steven00

Shutting up and multiplying suggests that we should neglect all effects except those on the exponentially more powerful species.

steven00

Peter, destroying Huygens isn't obviously the best way to defect, as in that scenario the Superhappies won't create art and humor or give us their tech.

steven10

If they're going to play the game of Chicken, then symbolically speaking the Confessor should perhaps stun himself to help commit the ship to sufficient insanity to go through with destroying the solar system.

steven51

Well... would you prefer a life entirely free of pain and sorrow, having sex all day long?

False dilemma.

2Articulator
It is a false dilemma, but the Super Happies won't give you one half without the other, I fear.
0TheStevenator
Good point. I was thinking the same thing. It became a false dilemma right after the coma. Also, nice name. :) Still very much enjoying the story! Love the background of the Confessor!
steven00

Can a preference against arbitrariness ever be stable? Non-arbitrariness seems like a pretty arbitrary thing to care about.

steven00

I would greatly prefer that there be Babyeaters, or even to be a Babyeater myself, than the black hole scenario, or a paperclipper scenario.

Seems to me it depends on the parameter values.

steven20

For what it's worth, I've always enjoyed stories where people don't get hurt more than stories where people do get hurt. I don't find previously imagined utopias that horrifying either.

steven20

I agree with Johnicholas. People should do this over IRC and call it "bloggingheadlessnesses".

steven00

In view of the Dunbar thing I wonder what people here see as a eudaimonically optimal population density. 6 billion people on Mars, if you allow for like 2/3 oceans and wilderness, means a population density of 100 per square kilometer, which sounds really really high for a cookie-gatherer civilization. It means if you live in groups of 100 you can just about see the neighbors in all directions.
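A rough back-of-the-envelope check of those figures, under stated assumptions (Mars's radius is the only real input; the habitable fraction and group size are taken from the comment):

```python
import math

# Sanity check of the density claim: ~100 people per km^2 and neighbors in sight.
mars_surface_km2 = 4 * math.pi * 3390**2        # Mars radius ~3390 km -> ~1.44e8 km^2
habitable_km2 = mars_surface_km2 / 3            # leave ~2/3 as ocean and wilderness
density = 6e9 / habitable_km2                   # ~125 people per km^2
group_spacing_km = math.sqrt(100 / density)     # ~0.9 km between groups of 100
print(round(density), round(group_spacing_km, 2))
```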

steven140

"boreana"

This means "half Bolivian half Korean" according to urbandictionary. I bet I'm missing something.

Perhaps we should have a word ("mehtopia"?) for any future that's much better than our world but much worse than it could be. I don't think the world in this story qualifies for that; I hate to be the negative guy all the time, but if you keep human nature the same and "set guards in the air that prohibit lethal violence, and any damage less than lethal, your body shall repair", people may still abuse one another a lot physically and emotionally. Also I'm not keen on having to do a space race against a whole planet full of regenerating vampires.

steven40

The fact that this future takes no meaningful steps toward solving suffering strikes me as a far more important Utopia fail than the gender separation thing.

steven00

Or "what if you wake up in Dystopia?" and tossed out the window.

What is the counterargument to this? Maybe something like "waking up in Eutopia is as good as waking up in Dystopia is bad, and more probable"; but both of those statements would have to be substantiated.

0DanielLC
I'd expect it to be more likely to wake up in a world worth living in, especially considering they put all that effort into waking you up. Shouldn't the idea that it isn't be the one that needs to be substantiated?
1Yosarian2
That seems to assume "Dystopia is likely" and "being in dystopia is significantly worse than death". If you think both of those things are true, though, then what about the odds of our society turning into a dystopia in the next 30 years that you'll naturally be alive anyway? Should you kill yourself now to avoid the risk of being alive in a possible dystopia in 30 years? It seems fairly silly if you consider it in those terms.
steven20

So could it be said that whenever Eliezer says "video game" he really means "RPG", as opposed to strategy games which have different principles of fun?

steven10

Probably the space you could visit at light speed in a given subjective time would be unreasonably large, depending on speedup and miniaturization.

steven200

Few of these weirdtopias seem strangely appealing in the same way that conspiratorial science seems strangely appealing.

steven50

I think the most you can plausibly say is that for humanlike architectures, memories of suffering (not necessarily true ones) are necessary to appreciate pleasures more complex than heroin. Probably what matters is that there's some degree of empathy with suffering, whether or not that empathy comes from memories. Even in that weakened form the statement doesn't sound plausible to me.

Anyway it seems to me that utopianly speaking the proper psychological contrast for pleasure is sobriety rather than pain.

steven70

Perhaps a benevolent singleton would cripple all means of transport faster than say horses and bicycles, so as to preserve/restore human intuitions and emotions relating to distance (far away lands and so on)?

steven00

If I'm 50% sure that the asymmetry between suffering and happiness is just because it's very difficult to make humans happy (and so in general achieving great happiness is about as important as avoiding great suffering), and 50% sure that the asymmetry is because of something intrinsic to how these things work (and so avoiding great suffering is maybe a hundred times as important), should I act in the mean time as if avoiding great suffering is slightly over 50 times as important as achieving great happiness, slightly under 2 times as important as achieving great happiness, or something in between? This is where you need the sort of moral uncertainty theory that Nick Bostrom has been working on I think.
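For concreteness, here is a minimal sketch of where the two candidate answers come from, using assumed ratios of 1x and 100x for the two hypotheses:

```python
# 50/50 credence between "suffering and happiness matter about equally" (ratio 1)
# and "avoiding great suffering matters ~100x more" (ratio 100). The two naive
# aggregations give very different answers.
p = 0.5
r_equal, r_asymmetric = 1.0, 100.0

# Average the ratio itself: suffering ends up ~50x as important.
avg_ratio = p * r_equal + p * r_asymmetric                     # 50.5

# Average the weight given to happiness (the reciprocal) instead: ~2x.
avg_reciprocal_ratio = 1 / (p / r_equal + p / r_asymmetric)    # ~1.98
print(avg_ratio, round(avg_reciprocal_ratio, 2))
```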

steven30

I suspect climbing Everest is much more about effort and adventure than about actual pain. Also, the vast majority of people don't do that sort of thing as far as I know.

steven10

I think putting it as "eudaimonia vs simple wireheading" is kind of rhetorical; I agree eudaimonia is better than complex happy mind states that don't correspond to the outside world, but I think complex happy mind states that don't correspond to the outside world are a lot better than simple wireheading.

steven00

For alliances to make sense it seems to me there have to be conflicts; do you expect future people to get in each other's way a lot? I guess people could have conflicting preferences about what the whole universe should look like that couldn't be satisfied in just their own corner, but I also guess that this sort of issue would be only a small percentage of what people cared about.

steven00

Patri, try "Algernon's Law".

steven40

The rickroll example actually applies to all agents, including ideal rationalists. Basically you're giving the victim an extra option that you know the victim thinks is better than it actually is. There's no reason why this would apply to humans only or to humans especially.

steven00

Oh, massive crosspost.

steven10

That one bothered me too. Perhaps you could say bodies are much more peripheral to people's identities than brains, so that in the running case what is being tested is meat that happens to be attached to you and in the robot case it's you yourself. On the other hand I'd still be me with some minor brain upgrades.

steven00

Computer games are the devil, but I agree strongly with Hyphen: the good ones are like sports, not work.

steven20

Not sure global diversity, as opposed to local diversity or just sheer quantity of experience, is the only reason I prefer there to be more (happy) people.

steven00

and where I just said "universe" I meant a 4D thing, with the dials each referring to a 4D structure and time never entering into the picture.

steven10

Eliezer, I don't think your reality fluid is the same thing as my continuous dials, which were intended as an alternative to your binary check marks. I think we can use algorithmic complexity theory to answer the question "to what degree is a structure (e.g. a mind-history) implemented in the universe" and then just make sure valuable structures are implemented to a high degree and disvaluable structures are implemented to a low degree. The reason most minds should expect to see ordered universes is because it's much easier to specify an ordered ...

steven00

Also "standard model" doesn't mean what you think it means and "unpleasant possibility" isn't an argument.

steven240

I'm completely not getting this. If all possible mind-histories are instantiated at least once, and their being instantiated at least once is all that matters, then how does anything we do matter?

If you became convinced that people had not just little checkmarks but little continuous dials representing their degree of existence (as measured by algorithmic complexity), how would that change your goals?

steven00

Hal, it also requires that you see each other as seeing each other that way, that you see each other as seeing each other as seeing each other that way, that you see each other as seeing each other as seeing each other as seeing each other that way, and so on.

steven10

I agree that a future world with currently-existing people still living in it is more valuable than one with an equal number of newly-created people living in it after the currently-existing people died, but to show that cryonics is a utilitarian duty you'd need to show not just that this is a factor but that it's an important enough factor to outweigh whatever people are sacrificing for cryonics (normalcy capital!). Lots of people are dead already so whether any single person lives to see the future can constitute at most a tiny part of the future's value.

steven20

Russell, I think the point is we can't expect Friendliness theory to take less than 30 years.

steven720

Awesome post, but somebody should do the pessimist version, rewriting various normal facets of the human condition as horrifying angsty undead curses.

The Curse of Downregulation: Sufferers of this can never live "happily ever after", for anything that gives them joy, done often enough, will become mundane and boring. Someone who is afflicted could have the great luck to earn a million a day, and after a year they will be filled with despair and envy at their neighbor who is making two million, no happier than they would be in poverty.

4lessdazed
Neurotypical
DanielLC230
  • Akrasia

Sufferers do things despite thinking they're bad decisions. They tend to be things that bring small amounts of happiness in the short term, but other times they seem to do nothing more than alleviate boredom. Some examples are simple games, and classifying literary devices. It's not uncommon for the victims to spend most of their lives on unproductive things.

  • Antipleasure

Antipleasure is a rare disease in which a victim's happiness is so low that they would prefer the events not have happened in the first place. Not simply that it's replaced wi...

Thermodynamic Jurisdiction: This curse causes its victims to become addicted to the inert corpses of dead plants and animals. They are forced to consume them near-constantly, and are unable to go without them for a single day before experiencing withdrawal symptoms. So dependent are they upon these unholy carcasses that a regime of 3 daily dosages is considered normal among sufferers.

This habit is incredibly expensive in the long run; many poor souls, needing a steady supply of this so-called "foodstuff" to deal with their affliction, have been l...

sfb1000
  • The curse of visible intent. Those afflicted by this find that their innermost secrets, such as fear, surprise, eagerness, alarm, and desire, all show up in consistent facial muscle changes for all the world to read, a betrayal by their own flesh.

  • St Addahad's Symptoms. A small group of symptoms including fleshy growths, nerve clusters, and neural pathways which result in a near-permanent state of distraction as patterns of air-pressure change are translated into thoughts and inserted into the mind with disruptively high priority. "Sounds" from all around

...
steven00

I guess it works out if, given the existence of an optimizer, any number of bits of optimization being exerted is as probable as any other number; but if that is the prior we're starting from then this seems worth stating (unless it follows from the rest in a way that I'm overlooking).

steven00

The quantity we're measuring tells us how improbable this event is, in the absence of optimization, relative to some prior measure that describes the unoptimized probabilities. To look at it another way, the quantity is how surprised you would be by the event, conditional on the hypothesis that there were no optimization processes around. This plugs directly into Bayesian updating

This seems to me to suggest the same fallacy as the one behind p-values... I don't want to know the tail area; I want to know the probability of the event that actually happened...
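A toy illustration of the distinction (the uniform outcome distribution and the specific numbers are assumptions made for the example): the quoted measure scores the prior probability of doing at least this well, while the comment asks for the probability of the exact outcome that occurred.

```python
import math

# 100 equally likely outcomes, higher is better; the optimizer hits outcome 97.
outcomes = range(100)
observed = 97

tail  = sum(1 for o in outcomes if o >= observed) / 100   # P(at least this good) = 0.03
exact = 1 / 100                                           # P(this exact outcome) = 0.01

print(math.log2(1 / tail))    # ~5.06 bits under the tail-area ("at least this good") version
print(math.log2(1 / exact))   # ~6.64 bits if you score the exact event instead
```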

steven00

re: calibration, it seems like what we want to do is ask ourselves what happens if an agent is asked lots of different probability questions, consider the true probability as a function of the probability stated by the agent, use some prior distribution (describing our uncertainty) on all such functions that the agent could have, update this prior using a finite set of answers we have seen the agent give and their correctness, and end up with a posterior distribution on functions (agent's probability -> true probability) from which we can get estimates ...
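A minimal sketch of one simplified version of this idea (the simplification is mine: instead of a prior over whole functions, each stated-probability bucket gets an independent Beta(1, 1) prior, updated from the agent's observed answers):

```python
from collections import defaultdict

def calibration_posterior(answers, n_buckets=10):
    """answers: iterable of (stated_probability, was_correct) pairs."""
    counts = defaultdict(lambda: [1.0, 1.0])            # bucket -> [alpha, beta]
    for stated, correct in answers:
        b = min(int(stated * n_buckets), n_buckets - 1)
        if correct:
            counts[b][0] += 1
        else:
            counts[b][1] += 1
    # posterior mean estimate of the true frequency for each stated-probability bucket
    return {b: a / (a + be) for b, (a, be) in counts.items()}

# hypothetical data: the agent says "90%" ten times but is right only seven times
data = [(0.9, True)] * 7 + [(0.9, False)] * 3
print(calibration_posterior(data))   # {9: 0.666...}, i.e. the agent looks overconfident
```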

steven00

AFAIK the 9/11 hijackers didn't believe they would die in any real sense.

steven10

you will realize that Osama bin Laden would be far more likely to say, "I hate pornography" than "I hate freedom"

There's a difference between hating freedom and saying you hate freedom. There's also a difference between hating freedom and hating our freedom; the latter phrasing rules out Bin Laden redefining the word to suit his own purposes. And thirdly, it's possible to hate freedom and yet hate pornography more than freedom.

steven00

Eliezer, that's a John McCarthy quote.

steven50

Isn't the problem often not that people betray their ideals, but that their ideals were harmful to begin with? Do we know that not-yet-powerful Stalin would have disagreed (internally) with a statement like "preserving Communism is worth the sacrifice of sending a lot of political opponents to gulags"? If not, then maybe to that extent everyone is corrupt and it's just the powerful who get to act on it. Maybe it's also the case that the powerful are less idealistic and more selfish, but then there are two different "power corrupts" effects at play.

steven20

It's important in these crisis things to remind yourself that 1) P does not imply "there are no important generally unappreciated arguments for not-P", and 2) P does not imply "the proponents of P are not all idiots, dishonest, and/or users of bad arguments". You can switch sides without deserting your favorite soldiers. IMO.

steven00

One more argument against deceiving epistemic peers when it seems to be in their interest is that if you are known to have the disposition to do so, this will cause others to trust your non-deceptive statements less; and here you could recommend that they shouldn't trust you less, but then we're back into doublethink territory.
