lukeprog comments on Intelligence explosion in organizations, or why I'm not worried about the singularity - Less Wrong

Post author: sbenthall 27 December 2012 04:32AM


Comment author: lukeprog 28 December 2012 01:21:59AM 1 point

Why not deal with a special case first?

What do you have in mind? Are you proposing a miniature research project into the relevance of companies as superhuman intelligences, and the relevance of those data to the question of whether we should expect a hard takeoff vs. a slow takeoff, or recursively self-improving AI at all? Or are you suggesting something else?

Comment author: IlyaShpitser 28 December 2012 11:22:53AM 6 points

Here is my claim (contrary to Vassar). If you are worried about an unfriendly "foomy" optimizing process, then a natural way to approach that problem is to solve an easier related problem: make an existing unfriendly but "unfoomy" optimizing process friendly. There are lots of such processes of various levels of capability and unfriendliness: North Korea, Microsoft, the United Nations, a non-profit org., etc.

I claim this problem is easier because:

(a) we have a lot more time (no danger of "foom"),

(b) we can use empirical methods (these processes already exist) to ground our theories, and

(c) these processes are superhumanly intelligent, but not so intelligent that their goals/methods are impossible to understand.

The claim is that if we can't make existing processes with all these simplifying features friendly, we have no hope to make a "foomy" AI friendly.

Comment author: lukeprog 30 December 2012 12:03:16AM 1 point

make an existing unfriendly but "unfoomy" optimizing process friendly

I don't know what this would mean, since figuring out friendliness probably requires superintelligence, hence CEV as an initial dynamic.

Comment author: IlyaShpitser 30 December 2012 03:09:14AM 2 points

Ok, so just to make sure I understand your position:

(a) Without friendliness, "foominess" is dangerous.

(b) Friendliness is hard -- we can't use existing academic resources to solve it, as that would take too long. We need a pocket super-intelligent optimizer to solve this problem.

(c) We can't make partial progress on the friendliness question with existing optimizers.

Is this fair?

Comment author: lukeprog 30 December 2012 04:18:26AM 1 point

"Yes" to (a), "no" to (b) and (c).

We can definitely make progress on Friendliness without superintelligent optimizers (see here), but we can't make some non-foomy process (say, a corporation) Friendly in order to test our theories of Friendliness.

Comment author: IlyaShpitser 31 December 2012 12:57:33PM 1 point

Ok. I am currently diagnosing the source of our disagreement as my being more agnostic than you about which AI architectures might succeed. I am willing to consider the kinds of minds that resemble modern messy non-foomy optimizers (e.g. communities of competing/interacting agents) as promising. That is, "bazaar minds," not just "cathedral minds." Given this agnosticism, I see value in "straight science" that worries about arranging possibly stupid/corrupt/evil agents in useful configurations that are not stupid/corrupt/evil.
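A minimal sketch of the "useful configurations of unreliable agents" idea is the Condorcet jury theorem: if each agent independently answers a binary question correctly with probability a bit better than chance, majority voting over many such agents is far more reliable than any single one. The function names, probabilities, and trial counts below are illustrative, not anything from the thread:

```python
import random

def agent_vote(p_correct):
    """One unreliable agent: answers a binary question correctly with probability p_correct."""
    return random.random() < p_correct

def committee_vote(n_agents, p_correct):
    """Majority vote over n_agents independent unreliable agents."""
    correct = sum(agent_vote(p_correct) for _ in range(n_agents))
    return correct > n_agents / 2

def reliability(n_agents, p_correct, trials=10_000):
    """Monte Carlo estimate of how often the committee answers correctly."""
    return sum(committee_vote(n_agents, p_correct) for _ in range(trials)) / trials

random.seed(0)
print(reliability(1, 0.6))    # roughly 0.60: a single mediocre agent
print(reliability(101, 0.6))  # well above 0.9: the committee is much less "stupid"
```

The configuration (independent agents plus majority rule) is smarter than any component -- which is the sense in which one might hope to arrange corruptible parts into an uncorrupted whole.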

Comment author: khafra 28 December 2012 01:48:15PM 1 point

I think the simplifying features on the other side outweigh those -- i.e., an AI is built from atomic units that do exactly what you tell them to, and there are probably fewer abstraction layers between those atomic units and the goal system. But I do think Mechanism Design is an important field, and will probably form an important part of any friendly optimizing process.
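A classic mechanism-design result that fits this point is the Vickrey (second-price) auction, where bidding one's true value is a dominant strategy even for purely selfish agents -- the rules of the game align individual incentives with honest behavior. A minimal sketch (the numbers and helper names are illustrative, not from the comment):

```python
def second_price_auction(bids):
    """Vickrey auction: the highest bidder wins but pays the second-highest bid."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner = order[0]
    price = bids[order[1]]
    return winner, price

def utility(value, bid, other_bids):
    """Bidder 0's payoff given her true value, her bid, and the others' bids."""
    winner, price = second_price_auction([bid] + other_bids)
    return value - price if winner == 0 else 0.0

value = 10.0
others = [7.0, 4.0]
truthful = utility(value, value, others)

# No misreport does better than bidding one's true value:
assert all(utility(value, b, others) <= truthful for b in [0.0, 5.0, 8.0, 12.0, 100.0])
print(truthful)  # 3.0 (wins at the second-highest bid of 7)
```

The point is that "friendly" behavior here comes from the structure of the mechanism, not from the goodwill of the agents.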

Comment author: timtyler 29 December 2012 02:20:18PM 0 points

Organisations are likely to build machine intelligence and imbue it with their values. That is reason enough to be concerned with organisation values. One of my proposals to help with this is better corporate reputation systems.

Comment author: cypher197 29 December 2012 04:13:54AM 0 points

Those processes are built out of humans, with all the problems that implies. All the transmissions between the humans are lossy. Computers behave very differently. They don't lie to you, embezzle company funds, or rationalize their poor behavior or ignorance.

This is a very important and related field of study, and one I would very much like to pursue. OTOH, it's not that much like building an AI out of computers. Really, building a self-sustaining, efficient, smart, friendly organization out of humans is quite possibly more difficult because of the "out of humans" constraint.

Comment author: timtyler 29 December 2012 02:26:39PM -2 points

Those processes are built out of humans, with all the problems that implies. All the transmissions between the humans are lossy. Computers behave very differently. They don't lie to you, embezzle company funds, or rationalize their poor behavior or ignorance.

Doesn't that rather depend on the values of those who programmed them?

This is a very important and related field of study, and one I would very much like to pursue. OTOH, it's not that much like building an AI out of computers. Really, building a self-sustaining, efficient, smart, friendly organization out of humans is quite possibly more difficult because of the "out of humans" constraint.

Organisations tend to construct machine intelligences which reflect their values. However, organisations don't have an "out of humans" constraint. They are typically a complex symbiosis of humans, culture, artefacts, plants, animals, fungi and bacteria.

Comment author: cypher197 29 December 2012 08:28:17PM 0 points

Perhaps. But humans will lie, embezzle, and rationalize regardless of who programmed them. Besides, would the internals of a computer lie to itself? Does RAM lie to a processor? And yet humans (being the subcomponents of an organization) routinely lie to each other. No system of rules I can devise will guarantee that doesn't happen without some very serious side effects.


All of which are subject to the humans' interpretation and use. You can set up an organizational culture, but that won't stop the humans from mucking it up, as they routinely do in organizations across the globe. You can write process documents, but that doesn't mean they'll follow them at all. If you specify a great deal of process, they may not even deviate intentionally - they may just forget. With a computer, such a failure would be caused by an error, and errors are a controllable process. With a human? People can't just decide to remember arbitrary amounts of arbitrary information for arbitrary lengths of time and pull it off reliably.


So: on the one hand, I have a system being built where the underlying hardware is reliable and under my control, and generally does not create errors or disobey. On the other hand, I have a network of unreliable and forgetful intelligences that may be highly irrational and may even be working at cross purposes with each other or with the organization itself. One requires extremely strict instructions; the other is capable of interpretation and judgment from context without having an algorithm specified in great detail. There are similarities between the two, but there are also great practical differences.

Comment author: timtyler 29 December 2012 09:18:52PM -1 points

As you will see from things like my Angelic Foundations essay, I do appreciate the virtues of working with machines.

However, at the moment there are also advantages to a man-machine symbiosis: robotics is still far behind the evolved molecular nanotechnology in animals in many respects, and computers still lag far behind brains in many critical areas. A man-machine symbiosis will thus beat machines in many areas, until machines reach the level of a typical human in most work-related physical and mental feats. Machine-only solutions will just lose. So: we will be working with organisations for a while yet - during a pretty important period in history.

Comment author: cypher197 30 December 2012 11:21:51PM 0 points

I just think it's a related but different field. Actually, solving these problems is something I want to apply some AI to: more accurate models of human behavior would allow massive batch testing of different forms of organization under outside pressures, to discover possible failure modes and ways to deal with them. But that's a different conversation.