
moridinamael comments on Open thread, Mar. 20 - Mar. 26, 2017 - Less Wrong

3 Post author: MrMind 20 March 2017 08:01AM


Comment author: moridinamael 20 March 2017 03:09:47PM 2 points [-]

What is the steelmanned, not-nonsensical interpretation of the phrase "democratize AI"?

Comment author: fubarobfusco 20 March 2017 05:59:58PM *  4 points [-]

One possibility: Ensure that the benefits of AI accrue to everyone generally, rather than exclusively to the teeny-tiny fraction of humanity who happen to own their own AI business.

Comment author: Lumifer 20 March 2017 06:24:55PM 2 points [-]

> Ensure that the benefits of AI accrue to everyone generally, rather than exclusively to the teeny-tiny fraction of humanity who happen to own their own AI business.

s/AI/capital/

Now, where have I heard this before...?

Comment author: Viliam 21 March 2017 04:01:58PM 2 points [-]

And your point is...?

From my point of view, the main problem with "making the benefits of capital accrue to everyone generally" is that... well, people who use these words as an applause light typically do something else instead. First, they take most of the benefits of capital to themselves (think: all those communist leaders with golden watches and huge dachas). Second, as a side-effect of incompetent management (where signalling political loyalty trumps technical competence), even the capital that isn't stolen is used very inefficiently.

But on a smaller scale... companies paying taxes, and those taxes being used to build roads or pay for universal healthcare... is an example of providing the benefits of capital to everyone. Just not all the capital; and besides the more-or-less neutral taxation, the use of the capital is not micromanaged by people chosen for their political loyalty. So the costs to the economy are much smaller, and arguably the social benefits are larger (some libertarians may disagree).

Assuming that the hypothetical artificial superintelligence will be (1) smarter than humans, and (2) able to scale, e.g. to increase its cognitive powers a thousandfold by creating 1000 copies of itself that won't immediately start feeding Moloch by fighting each other, it should be able to not fuck up the whole economy, and could quite likely increase production, even without increasing the cost to the environment, simply by doing things smarter and removing inefficiencies. Unlike the communist bureaucrats, who (1) were not superintelligent, and sometimes not even of average intelligence, (2) each optimized for their own personal goals, and (3) routinely lied to each other and to their superiors to avoid irrational punishments, so that the whole system soon ran on completely fake data. Not being bound by ideology, if the AI found out that it is better to leave humans something to do (quite unlikely IMHO, but let's assume so for the sake of the argument), it would be free to do exactly that. Unlike a hypothetical enlightened communist bureaucrat, who after making the same observation would probably be shot as a traitor and replaced by a less enlightened one.

If the choice is between giving each human a 1/7000000000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve (because I don't think anyone would be able to get any job in a world where the scalable superintelligence is your direct competitor), the former option seems better to me, and I think even Elon Musk wouldn't mind... especially considering that going for the former option will make people much more willing to cooperate with him.

Comment author: Lumifer 21 March 2017 04:38:58PM *  0 points [-]

> And your point is...?

Is it really that difficult to discern?

> From my point of view, the main problem with "making the benefits of capital accrue to everyone generally" is that... well, people who use these words as an applause light typically do something else instead.

So do you think that if we had real communism, with selfless and competent rulers, it would work just fine?

> companies paying taxes, and those taxes being used to build roads or pay for universal healthcare... is an example of providing the benefits of capital to everyone

Capital is not just money. You tax, basically, production (= creation of value), and production is not a "benefit of capital".

In any case, the underlying argument here is that no one should own AI technology. As always, this means a government monopoly and that strikes me as a rather bad idea.

> If the choice is between giving each human a 1/7000000000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve

Can we please not make appallingly stupid arguments? In which realistic scenarios do you think this will be a choice that someone faces?

Comment author: Viliam 21 March 2017 04:57:44PM 0 points [-]

> Is it really that difficult to discern?

You mean this one?

> So do you think that if we had real communism, with selfless and competent rulers, it would work just fine?

For the obvious reasons I don't think you can find selfless and competent human rulers to make this really work. But conditional on possibility of creating a Friendly superintelligent AI... sure.

Although calling that "communism" is about as much of a central example, as calling the paperclip maximizer scenario "capitalism".

> production is not a "benefit of capital".

Capital is a factor in production, often a very important one.

> no one should own AI technology. As always, this means a government monopoly

Making a superintelligent AI will make our definitions of ownership (whether private or government) obsolete. And "as always" does not seem like a good argument for Singularity scenarios.

> In which realistic scenarios do you think this will be a choice that someone faces?

Depends on whether you consider the possibility of superintelligent AI to be "realistic".

Comment author: Lumifer 21 March 2017 05:08:27PM *  0 points [-]

> this one

That too :-) I am a big fan of this approach.

> For the obvious reasons I don't think you can find selfless and competent human rulers to make this really work.

But conditional on finding selfless and competent rulers (note that I'm not talking about the rest of the population), you think that communism will work? In particular, the economy will work?

> Depends on whether you consider the possibility of superintelligent AI to be "realistic".

Aaaaand let me quote you yourself from just a sentence back:

> Making a superintelligent AI will make our definitions of ownership (whether private or government) obsolete.

One of the arms of your choice involves Elon Musk (or equivalent) owning the singularity AI, the other gives every human 1/7B ownership share of the same AI. How does that work, exactly?

Besides, I thought that when Rapture comes...err... I mean, when the Singularity happens, humans will not decide anything any more -- the AI will take over and will make the right decisions for them-- isn't that so?

Comment author: gjm 21 March 2017 06:05:39PM 0 points [-]

> conditional on finding selfless and competent rulers (note that I'm not talking about the rest of the population), you think that communism will work?

If we're talking about a Glorious Post-Singularity Future then presumably the superintelligent AIs are not only ruling the country and making economic decisions but also doing all the work, and they probably have magic nanobot spies everywhere so it's hard to lie to them effectively. That probably does get rid of the more obvious failure modes of a communist economy.

(If you just put the superintelligent AIs in charge of the top-level economic institutions and leave everything else to be run by the same dishonest and incompetent humans as normal, you're probably right that that wouldn't suffice.)

Comment author: Lumifer 21 March 2017 06:19:48PM *  0 points [-]

Actually, no, we're (at least, I am) talking about pre-Singularity situations where you still have to dig in the muck to grow crops and make metal shavings and sawdust to manufacture things.

Viliam said that the main problem with communism is that the people at the top are (a) incompetent and (b) corrupt. I don't think that's true with respect to the economy. That is, I agree that communism leads to incompetent and corrupt people rising to the top, but that is not the primary reason a communist economy functions poorly.

I think the primary reason is that communism breaks the feedback loop in the economy where prices and profit function as vital dynamic indicators for resource allocation decisions. A communist economy is like a body where the autonomic nervous system is absent and most senses function slowly and badly (but the brain can make the limbs move just fine). Just making the bureaucrats (human-level) competent and honest is not going to improve things much.

Comment author: gjm 22 March 2017 01:07:20AM 1 point [-]

Maybe I misunderstood the context, but it looked to me as if Viliam was intending only to say that post-Singularity communism might work out OK on account of being run by superintelligent AIs rather than superstupid meatsacks, and any more general-sounding things he may have said about the problems of communism were directed at that scenario.

(I repeat that I agree that merely replacing the leaders with superintelligent AIs and changing nothing else would most likely not make communism work at all, for reasons essentially the same as yours.)

Comment author: fubarobfusco 20 March 2017 06:36:37PM 2 points [-]

String substitution isn't truth-preserving; there are some analogies and some disanalogies there.

Comment author: bogus 21 March 2017 06:03:21PM *  1 point [-]

Sure, but capital is a rather vacuous word. It basically means "stuff that might be useful for something". So yes, talking about democratizing AI is a whole lot more meaningful than just saying "y'know, it would be nice if everyone could have more useful stuff that might help em achieve their goals. Man, that's so deeeep... puff", which is what your variant ultimately amounts to!

Comment author: Lumifer 21 March 2017 06:22:03PM *  0 points [-]

> capital is a rather vacuous word. It basically means "stuff that might be useful for something"

Um. Not in economics where it is well-defined. Capital is resources needed for production of value. Your stack of decade-old manga might be useful for something, but it's not capital. The $20 bill in your wallet isn't capital either.

Comment author: satt 24 March 2017 12:55:43AM 0 points [-]

> Um. Not in economics where it is well-defined. Capital is resources needed for production of value.

While capital is resources needed for production of value, it's a bit misleading to imply that that's how it's "well-defined" "in economics", since the reader is likely to come away with the impression that capital = resources needed to produce value, even though not all resources needed for production of value are capital. Economics also defines labour & land* as resources needed for production of value.

* And sometimes "entrepreneurship", but that's always struck me as a pretty bogus "factor of production" — as economists tacitly admit by omitting it as a variable from their production functions, even though it's as free to vary as labour.

Comment author: Lumifer 24 March 2017 03:27:28PM 0 points [-]

Sure, but that's all Econ 101 territory and LW isn't really a good place to get some education in economics :-/

Comment author: g_pepper 24 March 2017 01:43:15AM 0 points [-]

The way I remember it from my college days was that the inputs for the production of wealth are land, labor and capital (and, as you said, sometimes entrepreneurship is listed, although often this is lumped in with labor). Capital is then defined as wealth used towards the production of additional wealth. This formulation avoids the ambiguity that you identified.

Comment author: gjm 22 March 2017 01:11:16AM 0 points [-]

None the less, "capital" and "AI" are extremely different in scope and I see no particular reason to think that if "let's do X with capital" turns out to be a bad idea then we can rely on "let's do X with AI" also being a bad idea.

In a hypothetical future where the benefits of AI are so enormous that the rest of the economy can be ignored, perhaps the two kinda coalesce (though I'm not sure it's entirely clear), but that hypothetical future is also one so different from the past that past failures of "let's do X with capital" aren't necessarily a good indication of similar future failure.

Comment author: bogus 21 March 2017 06:51:58PM *  0 points [-]

> Capital is resources needed for production of value.

And that stack of decade-old manga is a resource that might indeed provide value (in the form of continuing enjoyment) to a manga collector. That makes it capital. A $20 bill in my wallet is ultimately a claim on real resources that the central bank commits to honoring, by preserving the value of the currency - that makes it "capital" from a strictly individual perspective (indeed, such claims are often called "financial capital"), although it's indeed not real "capital" in an economy-wide sense (because any such claim must be offset by a corresponding liability).

Comment author: Lumifer 21 March 2017 07:03:33PM *  0 points [-]

Sigh. You can, of course, define any word any way you like, but I have my doubts about the usefulness of such endeavours. Go read.

Comment author: qmotus 21 March 2017 09:47:44AM 0 points [-]

I feel like it's rather obvious that this is approximately what is meant. The people who talk of democratizing AI are, mostly, not speaking about superintelligence or do not see it as a threat (with the exception of Elon Musk, maybe).

Comment author: username2 20 March 2017 04:17:54PM 1 point [-]

Open sourcing all significant advancements in AI and releasing all code under GNU GPL.

Comment author: Viliam 21 March 2017 04:05:52PM 1 point [-]

Tiling the whole universe with small copies of GNU GPL, because each nanobot is legally required to contain the full copy. :D

Comment author: username2 20 March 2017 10:11:23PM 0 points [-]

*GNU AGPL, preferably

Comment author: Lumifer 20 March 2017 03:17:03PM 1 point [-]

Why do you think one exists?

Comment author: moridinamael 20 March 2017 03:55:33PM *  1 point [-]

I try not to assume that I am smarter than everybody if I can help it, and when there's a clear cluster of really smart people making these noises, I at least want to investigate and see whether I'm mistaken in my presuppositions.

To me, "democratize AI" makes as much sense as "democratize smallpox", but it would be good to find out that I'm wrong.

Comment author: bogus 20 March 2017 06:26:02PM *  0 points [-]

> To me, "democratize AI" makes as much sense as "democratize smallpox", but it would be good to find out that I'm wrong.

Isn't "democratizing smallpox" a fairly widespread practice, starting from the 18th century or so - and one with rather large utility benefits, all things considered? (Or are you laboring under the misapprehension that the kinds of 'AIs' being developed by Google or Facebook are actually dangerous? Because that's quite ridiculous, TBH. It's the sort of thing for which EY and Less Wrong get a bad name in machine-learning (popularly known as 'AI') circles.)

Comment author: moridinamael 20 March 2017 09:30:57PM 1 point [-]

Not under any usual definition of "democratize". Making smallpox accessible to everyone is no one's objective. I wouldn't refer to making smallpox available to highly specialized and vetted labs as "democratizing" it.

Google and/or Deepmind explicitly intend to build exactly the type of AI that I would consider dangerous, regardless of whether or not you consider them to have already done so.

Comment author: Lumifer 20 March 2017 03:57:26PM 0 points [-]

Links to the noises?

Comment author: moridinamael 20 March 2017 04:03:12PM *  0 points [-]

It's mainly an OpenAI noise, but it's been parroted in many places recently. I've definitely seen it in OpenAI materials, and I may even have heard Musk repeat the phrase, but I can't find links. Also:

YCombinator.

> Our long-term goal is to democratize AI. We want to level the playing field for startups to ensure that innovation doesn’t get locked up in large companies like Google or Facebook. If you’re starting an AI company, we want to help you succeed.

which is pretty close to "we don't want only Google and Facebook to have control over smallpox".

Microsoft in context of partnership with OpenAI.

> At Microsoft, we believe everyone deserves to be able to take advantage of these breakthroughs, in both their work and personal lives.
>
> In short, we are committed to democratizing AI and making it accessible to everyone.

This is a much more nonstandard interpretation of "democratize". I suppose by this logic, Henry Ford democratized cars?

Comment author: Lumifer 20 March 2017 04:22:57PM *  1 point [-]

Well, YC means, I think, that AI research should not become a monopoly (via e.g. software patents or by buying every competitor). That sounds entirely reasonable to me.

Microsoft means that they want Cortana/Siri/Alexa/Assistant/etc. on every machine and in every home. That's just marketing speak.

Both expressions have nothing to do with democracy, of course.

Comment author: tristanm 20 March 2017 07:08:04PM 0 points [-]

> Well, YC means, I think, that AI research should not become a monopoly (via e.g. software patents or by buying every competitor). That sounds entirely reasonable to me.

There are other ways that AI research can become a monopoly without any use of patents or purchases of competitors. For example, a fair bit of research can only be done through heavy computing infrastructure. In some sense, places like Google will have an advantage no matter how much of their code is open-sourced (and a lot of it is open source already). Another issue is data, which is a type of capital - though quite unlike money - in that how much value you can extract from it depends on your computing resources. These are barriers that I think probably can't be lowered even in principle.

Comment author: Lumifer 20 March 2017 07:23:36PM *  0 points [-]

Having advantages in the field of AI research and having a monopoly are very different things.

> a fair bit of research can only be done through heavy computing infrastructure

That's not self-evident to me. A fair bit of practical applications (e.g. Siri/Cortana) require a lot of infrastructure. What kind of research can't you do if you have a few terabytes of storage and a couple dozen GPUs? What would a research university be unable to do?

> Another issue is data

Data is an interesting issue. But first, the difference between research and practical applications is relevant again, and second, data control is mostly fought over at the legal/government level.

Comment author: tristanm 20 March 2017 09:06:13PM 0 points [-]

It's still the case that a lot of problems in AI and data analysis can be broken down into parallel tasks and benefit massively from having plenty of CPUs/GPUs available. In addition, a lot of the research work at major companies like Google has gone into making sure that the infrastructure advantage is used to the maximum extent possible. But I will grant you that this may not represent an actual monopoly on anything (except perhaps search); hardware is still easily available to those who can afford it. Still, in the context of "democratizing AI", I think we should expect that the firms with the most resources should have significant advantages over small startups in the AI space without much capital. If I have a bunch of data I need analyzed, will I want to give the job to a new, untested player who (depending on how much data I have) may not even have the infrastructure, or to someone established who I know has the capability and resources?

The issue with data isn't so much about control or privacy; it's mainly that if you give me a truckload of a thousand 2 TB hard drives, each containing potentially useful information, there's really not much I can do with it. Now if I happened to have a massive server farm, that would be a different situation. There's a pretty big gulf in an object's value depending on one's ability to make use of it, and data is a good example of that kind of object.
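(A toy sketch of the point above, purely illustrative and assuming nothing about any particular company's stack: the same pile of data yields its value roughly in proportion to how many workers you can throw at it, so compute capacity, not the data itself, gates what the data is worth to you.)

```python
# Embarrassingly parallel "data analysis": the same workload, carved into
# independent chunks, finishes roughly N times faster with N workers.
from multiprocessing import Pool

def analyze(chunk):
    # Stand-in for real analysis: summarize one chunk of records.
    return sum(chunk) / len(chunk)

def run(chunks, workers):
    # Fan the chunks out across a pool of worker processes.
    with Pool(workers) as pool:
        return pool.map(analyze, chunks)

if __name__ == "__main__":
    # 100 chunks of 1000 records each, standing in for the truckload of drives.
    chunks = [list(range(i, i + 1000)) for i in range(0, 100000, 1000)]
    print(len(run(chunks, workers=8)))  # prints 100 (one summary per chunk)
```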

Comment author: Lumifer 20 March 2017 09:16:56PM 0 points [-]

> we should expect that the firms with the most resources should have significant advantages over small startups

So how is this different from, say, manufacturing? Or pretty much any business for the last few centuries?

Comment author: WalterL 20 March 2017 03:43:48PM 0 points [-]

"Make multiple AIs that can restrain one another instead of one tyrannical MCP"?