cousin_it

https://vladimirslepnev.me

Comments

Nice-ish, smooth takeoff (with imperfect safeguards) probably kills most "classic humans" in a few decades.
cousin_it · 1d

“the weak superintelligence can just decide to self-modify into the sort of being who doesn’t feel pressure to grab all the resources from vastly weaker, slower, stupider being, even though it’d be so easy.”

I don't think this will work. Today's billionaires can already do something similar to a binding self-modification: donate most of their money to good causes. Not just take a non-binding "pledge", but actually donate. Few do that. Most of them spend more on increasing their own power than on any kind of charity. For the same reason, I expect future human-like AIs to shy away from moral self-modification. They'll keep issuing press releases saying "I'll do it tomorrow" and so on. And as for corporation-like AIs, we can just forget it.

If we want a future where AIs are more moral than people, we need to build them that way from the start.

Nice-ish, smooth takeoff (with imperfect safeguards) probably kills most "classic humans" in a few decades.
cousin_it · 2d

This agrees almost exactly with my picture of doom, but with a small difference that feels important: I think even if the new powerful entities somehow remain under the control of humans (or creatures similar to today's humans), the humans who aren't on top are still screwed. The nasty things you mentioned (colonialism, etc.) were perpetrated by humans at the top of a power differential. There's no need to invoke evolution, or new kinds of hypothetical grabby creatures; the increased power differential due to AI is quite enough to cause very bad things, even if there are humans on top.

It seems the only way for the future to be good is if it's dominated by an AI, or coalition of AIs, that is much more moral than humans and less corruptible by power. Imitating a normal human level of morality is not enough, and folks should probably stop putting their hopes on that.

Ethical Design Patterns
cousin_it · 4d

I think today there's still a window of opportunity to stop AI without creating a world government. Building an AI today requires a huge "supercritical pile of GPUs", so to speak, which is costly and noticeable, like uranium. But software advances can change that. So it'd be best to take the hardware off the table soon, with the same kind of international effort that went into stopping nuclear proliferation. Realistically, though, humanity won't pass such a measure without getting a serious scare first. And there's a high chance the first serious scare just kills us.

The personal intelligence I want
cousin_it · 5d

I, for one, would like to receive a text from it that reads, “Hey, it’s been 3 weeks since that fight with your mom. I know you love her but also find it difficult to tell her how you feel. Maybe it’s a good idea to send her this: ‘Mom, I miss you. I miss talking to you. Can I call?’” And if it’s hooked up to iMessages, I can just one-click send. Boom.

I mean, that's a bit dishonest, right? An honest message would be something like: "Mom, my AI assistant suggested I text you that I miss you and would like to call". If you'd be hesitant to send that text, but are happy one-click-sending the "I miss you" which you didn't write, then that's a direction I personally wouldn't want to go in.

The Autofac Era
cousin_it · 9d

It's the crux, yeah.

I don't know how much time you spend thinking about the distribution of power, roughly speaking, between the masses and the money+power clumps. For me, in the past few years, it's been a lot. You could call it becoming "woke" in the original sense of the word; this awareness seems to be more of a thing on the left. And the more I look, the more I see the balance tilting away from the masses. AI weapons would be the final nail, but maybe they aren't even necessary; maybe divide-and-conquer manipulation only slightly stronger than today's will already be enough to neutralize the threat of the masses completely.

CFAR update, and New CFAR workshops
cousin_it · 9d

This was pleasant to read! You seem to be shifting toward some conservative vibes (in the sense of appreciating the nice things about the past, not in the sense of the Republican party).

To me it feels like there's a bit of tension between doing lots of purely mental exercises, like Hamming questions, and trying to be more "whole". One idea I have is that you become more "whole" by physically doing stuff while having the right kind of focus. But it's a bit tricky to explain what it feels like. I'll try...

For example, when drawing I can easily get into overthinking; but if I draw a quick sketch with my eyes closed, just from visual imagination, it frees me up. Or when playing an instrument, I can easily get into overthinking; but when playing with a metronome, or matching tones with a recording, I get into flow and it feels like improving and relaxing at the same time. Or to take a silly example, I've found that running makes me tense, but skipping (not with a rope, just skipping along the street for a bit) is a happy thing and I feel good afterward. So maybe this feeling that you're looking for isn't a mind thing, but a mind-body connection thing.

The Autofac Era
cousin_it · 9d

I already answered this in the first comment, though. These big clumps of money+power+AI will have convergent instrumental goals in Omohundro's sense. They'll want expansion, control, arms races. That's quite enough motivation for growth.

As for the idea of the underclass receiving UBI and having the right to choose their masters, I think this was also covered in the first comment. There will be no UBI and no rights, because the underclass will have no threat potential. Most likely the underclass will simply be discarded. Or if some masters want dominance, they'll get it by force; it's been like that for most of history.

The Autofac Era
cousin_it · 9d

I don't think it requires a Terminator-style takeover. The obvious path is for AI to ally itself with money and power, leading to a world dominated by clumps of money+power+AI "goop", maybe with a handful of people on top leading very nice lives. And it wouldn't need the masses to provide market demand: anything it could get from the masses in exchange, it could instead produce on its own at lower resource cost.

Posts

cousin_it's Shortform (6y)
An argument that consequentialism is incomplete (1y)
Population ethics and the value of variety (1y)
Book review: The Quincunx (1y)
A case for fairness-enforcing irrational behavior (1y)
I'm open for projects (sort of) (1y)
A short dialogue on comparability of values (2y)
Bounded surprise exam paradox (2y)
Aligned AI as a wrapper around an LLM (3y)
Are extrapolation-based AIs alignable? (3y)
Nonspecific discomfort (4y)