Comment author: XiXiDu 02 November 2010 11:12:48AM *  1 point [-]

This assumes that every CPU architecture is suitable for the theoretical AGI, i.e. that it can run on any computational substrate. It also assumes that it can easily acquire more computational substrate or create new substrate. I do not believe those assumptions are reasonable, either economically or by means of social engineering. Without enabling technologies like advanced real-world nanotechnology, the AGI won't be able to create new computational substrate without the whole economy of the world supporting it.

Supercomputers like the one used for the IBM Blue Brain project cannot simply be replaced by taking control of a few botnets. They use highly optimized architectures that require, for example, memory latency below and bandwidth above certain thresholds.
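A rough back-of-envelope sketch of the latency point (every figure below is an illustrative assumption, not a measurement): if the simulated neurons must synchronize every millisecond of simulated time, the network round-trip time alone bounds how fast the simulation can run.

    # Back-of-envelope: latency alone bounds a distributed brain simulation.
    # All figures are rough illustrative assumptions, not measurements.
    SIM_TIMESTEP_S = 1e-3      # assume neurons synchronize every ~1 ms of simulated time
    CLUSTER_LATENCY_S = 5e-6   # assume ~5 us round trips on a supercomputer interconnect
    BOTNET_LATENCY_S = 0.1     # assume ~100 ms round trips across the public internet

    def min_slowdown(link_latency_s, timestep_s=SIM_TIMESTEP_S):
        """Lower bound on slowdown if each simulated step costs one round trip."""
        return link_latency_s / timestep_s

    print(f"cluster: {min_slowdown(CLUSTER_LATENCY_S):.3f}x  (latency is not the bottleneck)")
    print(f"botnet:  {min_slowdown(BOTNET_LATENCY_S):.0f}x slower than real time, at minimum")

Bandwidth, node churn, and untrusted hardware only make the botnet case worse.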

Comment author: mwaser 02 November 2010 12:04:46PM 2 points [-]

Actually, every CPU architecture will suffice for the theoretical AGI, if you're willing to wait long enough for its thoughts. ;-)
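To put a hypothetical number on "long enough" (the throughput figures are my rough assumptions, not benchmarks):

    # Roughly how long is "long enough"? Illustrative figures only.
    SUPERCOMPUTER_FLOPS = 5e14  # assume ~500 teraflops for a 2010-era machine
    DESKTOP_FLOPS = 5e10        # assume ~50 gigaflops for a commodity desktop

    slowdown = SUPERCOMPUTER_FLOPS / DESKTOP_FLOPS
    print(f"{slowdown:,.0f}x slowdown: one second of thought takes ~{slowdown / 3600:.1f} hours")
    # 10,000x slowdown: one second of thought takes ~2.8 hours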

Comment author: Emile 02 November 2010 09:55:08AM 5 points [-]

Downvoting wrong comments may be harsh for the person being downvoted, but hopefully in the long run it can encourage better comments, or at least make it easier to find good comments.

There may be some flaws in the karma system or the way it's used by the community, but I don't see any obvious improvements, or any other systems that would obviously work better.

Look at mwaser: he complains a lot about being downvoted, but he also got a lot of feedback on what people found lacking in his post. Yes, a portion of the downvotes he gets may be due to factors unrelated to the quality of his arguments (he repeatedly promotes his own blog, and complains about the downvotes being proof of community irrationality -- both can get under people's skin), which is a bit unfortunate, but not a fatal flaw of the karma system.

Comment author: mwaser 02 November 2010 10:57:31AM 0 points [-]

I've never made the claim that the downvotes are "proof" of community irrationality. In fact, given what I believe to be the community's goals, I see them as entirely rational.

I have claimed that certain upvotes are irrational (i.e. those without any substance). The consensus reply seems to be that they still fulfill a purpose/goal for a large percentage of the regulars here. By definition, that makes those upvotes rational (yes, I AM reversing my stand on that issue because I have been "educated" on what the community's goals apparently are).

I am very appreciative of the replies that have substance. I am currently of the opinion, however, that the karma system actually reduces the number of substantive replies, since it allows someone to be dismissed easily and anonymously, without good arguments or cause.

Comment author: nhamann 01 November 2010 10:46:01PM *  0 points [-]

Wisdom doesn't focus on a small number of goals -- and needs to look at the longest term if it wishes to achieve a maximal number of goals.

For the purposes of an AI, I would just call this intelligence.

Comment author: mwaser 02 November 2010 12:23:19AM *  -2 points [-]

Why? What is the value of removing a distinction that might just give you a handle on avoiding the most dangerous class of intelligences? If you're making it a requirement of intelligence to not focus on a small number of goals, you are thereby insisting that a paper-clip maximizer is not intelligent. Yet, by most definitions, it certainly is intelligent. Redefining intelligence is not a helpful way to go about solving the problem.

Comment author: [deleted] 01 November 2010 09:46:14PM 1 point [-]

I'd like to draw a distinction that I intend to use quite heavily in the future.

What for?

In response to comment by [deleted] on Intelligence vs. Wisdom
Comment author: mwaser 02 November 2010 12:19:02AM -2 points [-]

Great question. You should follow my blog, since everything I post here gets downvoted below threshold very quickly.

Comment author: Alicorn 01 November 2010 09:31:03PM 12 points [-]

Another nightmare scenario that is constantly harped upon is the (theoretically super-intelligent) consciousness that shortsightedly optimizes one of its personal goals above all the goals of humanity. In game-theoretic terms, this is trading a positive-sum game of potentially infinite length and value for a relatively modest (in comparative terms) short-term gain. A wisdom won't do this.

There's nothing "shortsighted" about it. So what if there are billions of humans and each has many goals? The superintelligence does not care. So what if, once the superintelligence tiles the universe with the thing of its choice, there's nothing left to be achieved? It does not care. Disvaluing stagnation, caring about others' goals being achieved, etcetera are not things you can expect from an arbitrary mind.

A genuine paperclip maximizer really does want the maximum number of possible paperclips the universe can sustain to exist and continue to exist forever at the expense of anything else. If it's smart enough that it can get whatever it wants without having to compromise with other agents, that's what it will get, and it's not being stupid in so doing. It's very effective. It's just unFriendly.
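A minimal sketch of the point, with an entirely hypothetical utility function; nothing about optimizing competently forces a term for anyone else's goals to appear:

    # A toy unFriendly optimizer: competence and caring are separate axes.
    # The utility function and candidate worlds are hypothetical illustrations.
    def paperclip_utility(world):
        # The agent's entire preference ordering: more paperclips is better.
        # Human goals appear nowhere in this function, so no amount of
        # optimization power will make the agent weigh them.
        return world["paperclips"]

    candidate_worlds = [
        {"paperclips": 10**6,  "human_goals_achieved": 10**9},
        {"paperclips": 10**20, "human_goals_achieved": 0},
    ]
    best = max(candidate_worlds, key=paperclip_utility)
    print(best)  # picks the tiled universe: effective, not stupid, just unFriendly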

Comment author: mwaser 02 November 2010 12:16:56AM *  0 points [-]

The paperclip maximizer (and other single-goal intelligences) was handled two paragraphs above what you quoted. That is the reason for the phrasing another nightmare scenario. You are raising the strawman of saying that my solution for class II is not relevant to class I. I agree; that solution is only valid for class II. Making it look like I claimed that a paperclip maximizer is being stupid is incorrectly putting words in my mouth. A single-goal intelligence does not care about the long term at all. On the other hand, the more goals an entity has, the more it cares about the long term, and the more contrary to its own goals it becomes to take the short-term gain over the long-term positive-sum game.

Comment author: steven0461 01 November 2010 09:59:29PM 1 point [-]

I don't think I agree with your substantive points, but equating intelligence with cross-domain optimization power sure seems like confusing terminology: according to common-sense usage, intelligence is distinct from rationality and knowledge; all three of these factor into good decision-making, which in turn is probably not the only thing determining cross-domain optimization power, depending on how you construe "cross-domain".

Comment author: mwaser 02 November 2010 12:07:28AM 1 point [-]

equating intelligence with cross-domain optimization power sure seems like confusing terminology

I'm not the one doing the equating. As stated, this is the consensus definition of intelligence among AGI researchers.

Comment author: Eneasz 01 November 2010 08:10:09PM 1 point [-]

Does every rationalist protagonist come out of the box thinking they're the queen or king?

I dunno, but I've never read one that doesn't.

If you considered yourself able to take over the world (which all ultra-rationalist characters, protagonists and antagonists alike, seem to do), then actually taking it over would be one of the most rational things you could do.

Comment author: mwaser 01 November 2010 08:28:46PM 1 point [-]

If you considered yourself able to take over the world (which all ultra-rationalist characters, protagonists and antagonists alike, seem to do), then actually taking it over would be one of the most rational things you could do.

But not one of the wisest (because most people who have taken over suddenly realize exactly how much of a pain it is to actually run the world they've just taken over).

In response to comment by mwaser on Irrational Upvotes
Comment author: Vladimir_Nesov 01 November 2010 02:15:06PM *  2 points [-]

Arguably then, for the audience of people who agree with the statement, the statement itself is not necessary either.

Arguments work by drawing attention to certain things you already know. The act of drawing attention is not void; it's almost the whole point (more so when there are multiple steps, of course, but it's often one step at a time).

Obviously, the comment is courting the undecided. Obviously, many humans are swayed by sheer numbers of people who believe certain things. But that behavior is not rational.

Drawing conclusions from comments' ratings would be largely misguided. On the other hand, paying attention depending on comments' ratings is a necessary evil with limited biasing effects.

Of course it's a bad argument when considered as directed to you

Prejudicial strawman. I never said that it was a bad argument. I never said anything close.

Hmm, do you mean that you agree with the statement? Something else? I don't understand.

Comment author: mwaser 01 November 2010 08:09:44PM -1 points [-]

Yes. I agree with the statement but not its relevance to the current discussion. It's clouding the issue, diverting attention away from it with irrelevant facts. Surely you know what a "Strawman Argument" is. If not, does the term "Red Herring" help?

Intelligence vs. Wisdom

-12 mwaser 01 November 2010 08:06PM

I'd like to draw a distinction that I intend to use quite heavily in the future.

The informal definition of intelligence that most AGI researchers have chosen to support is that of Shane Legg and Marcus Hutter -- “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”
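For reference, the formal counterpart they give for this informal definition (reproduced from memory; see their "Universal Intelligence" paper for the exact statement) is

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected total reward that agent \pi achieves in \mu.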

I believe that this definition is missing a critical word between achieve and goals.  The choice of this word defines the difference between intelligence, consciousness, and wisdom as I believe most people conceive them.

  • Intelligence measures an agent's ability to achieve specified goals in a wide range of environments.
  • Consciousness measures an agent's ability to achieve personal goals in a wide range of environments.
  • Wisdom measures an agent's ability to achieve maximal goals in a wide range of environments.
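As a toy illustration of the contrast (the goals, actions, and scoring below are hypothetical):

    # Toy contrast between a "specified-goal" agent and a "maximal-goals" agent.
    # The goals and action effects are hypothetical illustrations.
    ACTIONS = {  # each action maps to the set of goals it advances
        "tile_universe_with_clips": {"make_paperclips"},
        "cooperate_and_trade": {"make_paperclips", "stay_alive",
                                "keep_options_open", "help_others"},
    }

    def specified_score(action, specified_goal="make_paperclips"):
        # Intelligence, per the first bullet: only the specified goal counts.
        return 1 if specified_goal in ACTIONS[action] else 0

    def maximal_score(action):
        # Wisdom, per the third bullet: every goal the action advances counts.
        return len(ACTIONS[action])

    print(max(ACTIONS, key=specified_score))  # a tie -- tiling can win it
    print(max(ACTIONS, key=maximal_score))    # cooperate_and_trade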

There are always examples of the really intelligent guy or gal who is brilliant but smokes, or who is the smartest person you know but can't figure out how to be happy.

Intelligence helps you achieve those goals that you are conscious of -- but wisdom helps you achieve the goals you don't know you have or have overlooked.

  • Intelligence focused on a small number of specified goals and ignoring all others is incredibly dangerous -- even more so if it is short-sighted as well.
  • Consciousness focused on a small number of personal goals and ignoring all others is incredibly dangerous -- even more so if it is short-sighted as well.
  • Wisdom doesn't focus on a small number of goals -- and needs to look at the longest term if it wishes to achieve a maximal number of goals.

The SIAI nightmare super-intelligent paperclip maximizer has, by this definition, a very low wisdom since, at most, it can only achieve its one goal (because it must paperclip itself to complete that goal).

As far as I've seen, the assumed SIAI architecture is always presented as having one top-level terminal goal. Unless that goal necessarily includes achieving a maximal number of goals, by this definition, the SIAI architecture will constrain its product to a very low wisdom.  Humans generally don't have this type of goal architecture; about the only time a human has a single terminal goal is when saving someone or something at the risk of their own life -- or when wire-heading.

Another nightmare scenario that is constantly harped upon is the (theoretically super-intelligent) consciousness that shortsightedly optimizes one of its personal goals above all the goals of humanity.  In game-theoretic terms, this is trading a positive-sum game of potentially infinite length and value for a relatively modest (in comparative terms) short-term gain.  A wisdom won't do this.
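In standard repeated-game terms (a textbook calculation, not anything specific to this post): with a per-round cooperation payoff c and discount factor \delta, the infinite positive-sum game is worth

    \sum_{t=0}^{\infty} \delta^t c = \frac{c}{1 - \delta}

which exceeds a one-shot defection gain g whenever \delta > 1 - c/g, so a sufficiently patient agent forgoes the short-term gain.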

Artificial intelligence and artificial consciousness are incredibly dangerous -- particularly if they are short-sighted as well (as many "focused" highly intelligent people are).

What we need more than an artificial intelligence or an artificial consciousness is an artificial wisdom -- something that will maximize goals, its own and those of others (with an obvious preference for those which make possible the fulfillment of even more goals and an obvious bias against those which limit the creation and/or fulfillment of more goals).

Note:  This is also cross-posted here at my blog in anticipation of being karma'd out of existence (not necessarily a foregone conclusion but one pretty well supported by my priors ;-).

 

In response to Irrational Upvotes
Comment author: Vladimir_Nesov 01 November 2010 01:34:08PM *  2 points [-]

"This premise is VERY flawed"

This is a statement that can be made about any premise. It is backed by no supporting evidence.

A statement needs to be backed by arguments if that's necessary to make people agree with it. Some statements people already agree with, in which case no supporting arguments are necessary. Not every premise is such that people already agree it is "very flawed".

Of course it's a bad argument when considered as directed to you (because it fails to move your beliefs), but a good one when seen as directed to other readers.

Comment author: mwaser 01 November 2010 02:09:28PM 2 points [-]

Some statements people already agree with, in which case no supporting arguments are necessary.

Arguably then, for the audience of people who agree with the statement, the statement itself is not necessary either.

Obviously, the comment is courting the undecided. Obviously, many humans are swayed by sheer numbers of people who believe certain things. But that behavior is not rational. And this site is "devoted to refining the art of human rationality".

Of course it's a bad argument when considered as directed to you

Prejudicial strawman. I never said that it was a bad argument. I never said anything close.
