cousin_it

I don't understand Eliezer's explanation. Imagine Alice is hard-working and Bob is lazy. Then Alice can make goods and sell them to Bob. She'll spend half the money on having fun and save the other half. In this situation she's rich and has a trade surplus, but the other parts of the explanation - different productivity between different parts of Alice (?) and inability to judge her own work fairly (?) - don't seem to be present.

No. Committing a crime inflicts damage. But interacting with a person who committed a crime in the past doesn't inflict any damage on you.

Because the smaller measure should (on my hypothesis) be enough to prevent crime, and inflicting more damage than necessary for that is evil.

Because otherwise everyone will gleefully discriminate against them in every way they possibly can.

I think the US has too much punishment as it is, with a very high incarceration rate and prison conditions sometimes approaching torture (prison rape, supermax isolation).

I'd rather give serial criminals some kind of surveillance collars that would detect reoffending and notify the police. I think a lot of such people can be "cured" by high certainty of being caught, not by severity of punishment. There'd need to be laws to prevent discrimination against people with collars, though.

Yeah, I stumbled on this idea a long time ago as well. I never drink sugary drinks, my laptop is permanently in grayscale mode and so on. And it doesn't feel like missing out on fun; on the contrary, it allows me to not miss out. When I "mute" some big, addictive, one-dimensional thing, I start noticing all the smaller things that were being drowned out by it. Like, as you say, noticing the deliciousness of baked potatoes when you're not eating sugar every day, or noticing all the colors in my home and neighborhood when my screen is on grayscale.

I suppose the superassistants could form coalitions and end up as a kind of "society" without too much aggression. But this all seems moot, because superassistants will get outcompeted anyway by AIs that focus on growth. That's the real danger.

I don't quite understand the plan. What if I get access to cheap friendly AI, but there's another, much more powerful AI that wants my resources and doesn't care much about me? What would stop the much more powerful AI from outplaying me for these resources, maybe by entirely legal means? Or is the idea that somehow the AIs in public access are always the strongest possible? That isn't true even now.

I also agree with all of this.

For what an okayish possible future could look like, I have two stories in mind:

  1. Humans end up as housecats: living among much more powerful creatures doing incomprehensible things, but still mostly cared for.

  2. Some humans get uplifted to various levels, others stay baseline. The higher you go, the more aligned you must be to those below. So still a hierarchy, with super-smart creatures at the top and housecats at the bottom, but with more levels in between.

A post-AI world where baseline humans are anything more than housecats seems hard to imagine, I'm afraid. And even getting to be housecats at all (rather than dodos) looks to be really difficult.

Thanks for writing this, it's a great explanation-by-example of the entire housing crisis.
