
fubarobfusco comments on Open thread, Mar. 20 - Mar. 26, 2017 - Less Wrong Discussion

Post author: MrMind 20 March 2017 08:01AM


Comment author: moridinamael 20 March 2017 03:09:47PM 2 points

What is the steelmanned, not-nonsensical interpretation of the phrase "democratize AI"?

Comment author: fubarobfusco 20 March 2017 05:59:58PM 4 points

One possibility: Ensure that the benefits of AI accrue to everyone generally, rather than exclusively to the teeny-tiny fraction of humanity who happen to own their own AI business.

Comment author: Lumifer 20 March 2017 06:24:55PM 2 points

Ensure that the benefits of AI accrue to everyone generally, rather than exclusively to the teeny-tiny fraction of humanity who happen to own their own AI business.

s/AI/capital/

Now, where have I heard this before...?

Comment author: Viliam 21 March 2017 04:01:58PM 1 point

And your point is...?

From my point of view, the main problem with "making the benefits of capital accrue to everyone generally" is that... well, people who use these words as an applause light typically do something else instead. First, they take most of the benefits of capital to themselves (think: all those communist leaders with golden watches and huge dachas). Second, as a side-effect of incompetent management (where signalling political loyalty trumps technical competence), even the capital that isn't stolen is used very inefficiently.

But on a smaller scale... companies paying taxes, and those taxes being used to build roads or pay for universal healthcare... is an example of providing the benefits of capital to everyone. Just not all the capital; and besides the more-or-less neutral taxation, the use of the capital is not micromanaged by people chosen for their political loyalty. So the costs to the economy are much smaller, and arguably the social benefits are larger (some libertarians may disagree).

Assuming that the hypothetical artificial superintelligence will be (1) smarter than humans, and (2) able to scale, e.g. to increase its cognitive powers thousandfold by creating 1000 copies of itself which will not immediately start feeding Moloch by fighting each other, it should be able not to fuck up the whole economy, and could quite likely increase production, even without increasing the costs to the environment, simply by doing things smarter and removing inefficiencies. Unlike the communist bureaucrats, who (1) were not superintelligent, and sometimes not even of average intelligence, (2) each optimized for their own personal goals, and (3) routinely lied to each other and to their superiors to avoid irrational punishments, so that the whole system soon ran on completely fake data. Not being bound by ideology, if the AI found out that it is better to leave something for humans to do (quite unlikely IMHO, but let's assume so for the sake of the argument), it would be free to do exactly that. Unlike a hypothetical enlightened communist bureaucrat, who after making the same observation would probably be shot as a traitor and replaced by a less enlightened one.

If the choice is between giving each human a 1/7000000000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve (because I don't think anyone would be able to get any job in a world where the scalable superintelligence is your direct competitor), the former option seems better to me, and I think even Elon Musk wouldn't mind... especially considering that going for the former option will make people much more willing to cooperate with him.

Comment author: Lumifer 21 March 2017 04:38:58PM 0 points

And your point is...?

Is it really that difficult to discern?

From my point of view, the main problem with "making the benefits of capital accrue to everyone generally" is that... well, people who use these words as an applause light typically do something else instead.

So do you think that if we had real communism, with selfless and competent rulers, it would work just fine?

companies paying taxes, and those taxes being used to build roads or pay for universal healthcare... is an example of providing the benefits of capital to everyone

Capital is not just money. You tax, basically, production (= creation of value), and production is not a "benefit of capital".

In any case, the underlying argument here is that no one should own AI technology. As always, this means a government monopoly and that strikes me as a rather bad idea.

If the choice is between giving each human a 1/7000000000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve

Can we please not make appallingly stupid arguments? In which realistic scenarios do you think this will be a choice that someone faces?

Comment author: Viliam 21 March 2017 04:57:44PM 0 points

Is it really that difficult to discern?

You mean this one?

So do you think that if we had real communism, with selfless and competent rulers, it would work just fine?

For the obvious reasons I don't think you can find selfless and competent human rulers to make this really work. But conditional on possibility of creating a Friendly superintelligent AI... sure.

Although calling that "communism" is about as much of a central example as calling the paperclip maximizer scenario "capitalism".

production is not a "benefit of capital".

Capital is a factor in production, often a very important one.

no one should own AI technology. As always, this means a government monopoly

Making a superintelligent AI will make our definitions of ownership (whether private or government) obsolete. And "as always" does not seem like a good argument for Singularity scenarios.

In which realistic scenarios do you thing this will be a choice that someone faces?

Depends on whether you consider the possibility of superintelligent AI to be "realistic".

Comment author: Lumifer 21 March 2017 05:08:27PM 0 points

this one

That too :-) I am a big fan of this approach.

For the obvious reasons I don't think you can find selfless and competent human rulers to make this really work.

But conditional on finding selfless and competent rulers (note that I'm not talking about the rest of the population), you think that communism will work? In particular, the economy will work?

Depends on whether you consider the possibility of superintelligent AI to be "realistic".

Aaaaand let me quote you yourself from just a sentence back:

Making a superintelligent AI will make our definitions of ownership (whether private or government) obsolete.

One of the arms of your choice involves Elon Musk (or equivalent) owning the singularity AI, the other gives every human 1/7B ownership share of the same AI. How does that work, exactly?

Besides, I thought that when Rapture comes... err... I mean, when the Singularity happens, humans will not decide anything any more -- the AI will take over and will make the right decisions for them -- isn't that so?

Comment author: gjm 21 March 2017 06:05:39PM 0 points

conditional on finding selfless and competent rulers (note that I'm not talking about the rest of the population), you think that communism will work?

If we're talking about a Glorious Post-Singularity Future then presumably the superintelligent AIs are not only ruling the country and making economic decisions but also doing all the work, and they probably have magic nanobot spies everywhere so it's hard to lie to them effectively. That probably does get rid of the more obvious failure modes of a communist economy.

(If you just put the superintelligent AIs in charge of the top-level economic institutions and leave everything else to be run by the same dishonest and incompetent humans as normal, you're probably right that that wouldn't suffice.)

Comment author: Lumifer 21 March 2017 06:19:48PM 0 points

Actually, no, we're (at least, I am) talking about pre-Singularity situations where you still have to dig in the muck to grow crops and make metal shavings and sawdust to manufacture things.

Viliam said that the main problem with communism is that the people at the top are (a) incompetent; and (b) corrupt. I don't think that's true with respect to the economy. That is, I agree that communism leads to incompetent and corrupt people rising to the top, but that is not the primary reason why communist economy isn't well-functioning.

I think the primary reason is that communism breaks the feedback loop in the economy where prices and profit function as vital dynamic indicators for resource allocation decisions. A communist economy is like a body where the autonomic nervous system is absent and most senses function slowly and badly (but the brain can make the limbs move just fine). Just making the bureaucrats (human-level) competent and honest is not going to improve things much.

Comment author: gjm 22 March 2017 01:07:20AM 1 point

Maybe I misunderstood the context, but it looked to me as if Viliam was intending only to say that post-Singularity communism might work out OK on account of being run by superintelligent AIs rather than superstupid meatsacks, and any more general-sounding things he may have said about the problems of communism were directed at that scenario.

(I repeat that I agree that merely replacing the leaders with superintelligent AIs and changing nothing else would most likely not make communism work at all, for reasons essentially the same as yours.)

Comment author: fubarobfusco 20 March 2017 06:36:37PM 1 point

String substitution isn't truth-preserving; there are some analogies and some disanalogies there.

Comment author: bogus 21 March 2017 06:03:21PM 0 points

Sure, but capital is a rather vacuous word. It basically means "stuff that might be useful for something". So yes, talking about democratizing AI is a whole lot more meaningful than just saying "y'know, it would be nice if everyone could have more useful stuff that might help em achieve their goals. Man, that's so deeeep... puff", which is what your variant ultimately amounts to!

Comment author: Lumifer 21 March 2017 06:22:03PM 0 points

capital is a rather vacuous word. It basically means "stuff that might be useful for something"

Um. Not in economics where it is well-defined. Capital is resources needed for production of value. Your stack of decade-old manga might be useful for something, but it's not capital. The $20 bill in your wallet isn't capital either.

Comment author: satt 24 March 2017 12:55:43AM 0 points

Um. Not in economics where it is well-defined. Capital is resources needed for production of value.

While capital is resources needed for production of value, it's a bit misleading to imply that that's how it's "well-defined" "in economics", since the reader is likely to come away with the impression that capital = resources needed to produce value, even though not all resources needed for production of value are capital. Economics also defines labour & land* as resources needed for production of value.

* And sometimes "entrepreneurship", but that's always struck me as a pretty bogus "factor of production" -- as economists tacitly admit by omitting it as a variable from their production functions, even though it's as free to vary as labour.

Comment author: g_pepper 24 March 2017 01:43:15AM 0 points

The way I remember it from my college days was that the inputs for the production of wealth are land, labor and capital (and, as you said, sometimes entrepreneurship is listed, although often this is lumped in with labor). Capital is then defined as wealth used towards the production of additional wealth. This formulation avoids the ambiguity that you identified.

Comment author: gjm 22 March 2017 01:11:16AM 0 points

None the less, "capital" and "AI" are extremely different in scope and I see no particular reason to think that if "let's do X with capital" turns out to be a bad idea then we can rely on "let's do X with AI" also being a bad idea.

In a hypothetical future where the benefits of AI are so enormous that the rest of the economy can be ignored, perhaps the two kinda coalesce (though I'm not sure it's entirely clear), but that hypothetical future is also one so different from the past that past failures of "let's do X with capital" aren't necessarily a good indication of similar future failure.

Comment author: bogus 21 March 2017 06:51:58PM 0 points

Capital is resources needed for production of value.

And that stack of decade-old manga is a resource that might indeed provide value (in the form of continuing enjoyment) to a manga collector. That makes it capital. A $20 bill in my wallet is ultimately a claim on real resources that the central bank commits to honoring, by preserving the value of the currency - that makes it "capital" from a strictly individual perspective (indeed, such claims are often called "financial capital"), although it's indeed not real "capital" in an economy-wide sense (because any such claim must be offset by a corresponding liability).

Comment author: Lumifer 21 March 2017 07:03:33PM 0 points

Sigh. You can, of course, define any word any way you like, but I have my doubts about the usefulness of such endeavours. Go read.

Comment author: qmotus 21 March 2017 09:47:44AM 0 points

I feel like it's rather obvious that this is approximately what is meant. The people who talk of democratizing AI are, mostly, not speaking about superintelligence or do not see it as a threat (with the exception of Elon Musk, maybe).