
shminux comments on Could you tell me what's wrong with this? - Less Wrong Discussion

Post author: Algon 14 April 2015 10:43AM



Comment author: shminux 14 April 2015 05:33:08PM 6 points

You keep making a rookie mistake: trying to invent solutions without learning the subject matter first. Consider this: people just as smart as you (and me) have put in 100 times more effort trying to solve this issue professionally. What are the odds that you have found a solution they missed after gaining only a cursory familiarity with the topic?

If you still think that you can meaningfully contribute to the FAI research without learning the basics, note that smarter people have tried and failed. Those truly interested in making their contribution went on to learn the state of the art, the open problems and the common pitfalls.

If you want to contribute, start by studying (not just reading) the relevant papers on the MIRI web site and their summaries posted by So8res in Main earlier this year. And for Omega's sake, go read Bostrom's Superintelligence.

Comment author: Algon 14 April 2015 06:28:42PM 5 points

I just realised, you're the guy from my first post. Your first sentence now makes a lot more sense. I think the problem is not so much that I'm massively overconfident (though that may also be the case); it's just that when I'm writing here I come across as too bold. I'll definitely try to reduce that, though I thought I had done fairly well on this post. Looking back, I guess I could've been clearer. I was thinking of putting a disclaimer at the beginning saying 'Warning! This post is not to be seen as representing the poster's views. It is meant to be dismantled and thoroughly destroyed so the poster can learn about his AI misconceptions.' But it was late, and I couldn't put it into words properly.

Anyway, thanks for being patient with me. I must have sounded like a bit of a twat, and you've been pleasantly polite. It really is appreciated.

Comment author: HungryHobo 17 April 2015 11:03:00AM 1 point

Not to put too fine a point on it, but many of the posters here have never sat through a single class on AI of any kind, read any books on actually programming AIs, touched any of the common tools in the current state of the art, or learned about any existing or historical AI designs. Yet as long as they go along with the flow and stick to the right Applause Lights, nobody demands they go off and read papers on it.

As a result, conversations here can occasionally be frustratingly similar to theological discussions with posters who've read pop-comp-sci material like Superintelligence and simply assume that an AI will instantly gain any capability, up to and including things that would require more energy than exists in the universe, or more computational power than would be available from turning every atom in the universe into computronium.

Beware of isolated demands for rigor.

http://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/

Comment author: shminux 17 April 2015 08:28:35PM 0 points

pop-comp-sci material like Superintelligence

Superintelligence is a broad overview of the topic without any aspirations for rigor, as far as I can tell, and it is pretty clear about that.

who simply assume that an AI will instantly gain any capability up to and including things which would require more energy than there exists in the universe or more computational power than would be available from turning every atom in the universe into computronium.

This seems uncharitable. The outside view certainly backs up something like

every jump in intelligence opens up new previously unexpected and unimaginable sources of energy.

Examples: fire, fossil fuels, nuclear. Same applies to computational power.

There is no clear reason for this correlation to disappear. Thus what we would currently deem

more energy than there exists in the universe

or

more computational power than would be available from turning every atom in the universe into computronium

might reflect our limited understanding of the Universe, rather than any kind of genuine limits.

Comment author: gjm 17 April 2015 11:46:15AM 0 points

The current state of the art doesn't get anywhere close to the kind of general-purpose intelligence that (hypothetically but plausibly) might make AI either an existential threat or a solution to a lot of the human race's problems.

So while I enthusiastically endorse the idea of anyone interested in AI finding out more about actually-existing AI research, either (1) the relevance of that research to the scenarios FAI people are worried about is rather small, or (2) those scenarios are going to turn out never to arise. And, so far as I can tell, we have very little ability to tell which.

(Perhaps the very fact that the state of the art isn't close to general-purpose intelligence is evidence that those scenarios will never arise. But it doesn't seem like very good evidence for that. We know that rather-general-purpose human-like intelligence is possible, because we have it, so our inability to make computers that have it is a limitation of our current technology and understanding rather than anything fundamental to the universe, and I know of no grounds for assuming that we won't overcome those limitations. And the extent of variations within the human species seems like good reason to think that actually-existing physical things can have mental capacities well beyond the human average.)

Comment author: HungryHobo 17 April 2015 12:56:27PM 0 points

It's still extremely relevant since they have to grapple with watered-down versions of many of the exact same problems.

You might be concerned that a non-FAI will optimize for some scoring function and do things you don't want; meanwhile, the people dealing with the actual nuts and bolts of making modern AIs have to make sure, on a more mundane level, that their systems don't optimize for some scoring function and do things they don't want. That kind of problem appears in the first few pages of many AI textbooks, yet the applause lights here hold that almost all AI researchers are blind to such possibilities.
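The mundane version of that worry can be sketched in a few lines. This toy example (my own illustration, not from any textbook the comment mentions) shows a simple hill-climber faithfully maximizing a badly chosen proxy score while the true objective gets worse:

```python
# Toy illustration of "optimizing a scoring function and doing things
# you don't want": the optimizer is correct, the score is the problem.

def true_objective(x):
    """What we actually want: x close to 5."""
    return -abs(x - 5)

def proxy_score(x):
    """A flawed stand-in: rewards large |x| without bound."""
    return abs(x)

def hill_climb(score, x=0.0, step=1.0, iters=100):
    """Greedy local search: move whenever a neighbor scores higher."""
    for _ in range(iters):
        for candidate in (x + step, x - step):
            if score(candidate) > score(x):
                x = candidate
    return x

x_opt = hill_climb(proxy_score)
# The proxy keeps improving as x grows, while the true objective
# is far worse at x_opt than at the intended target x = 5.
```

The optimizer does exactly what it was told; the failure lives entirely in the gap between `proxy_score` and `true_objective`, which is the mundane shape of the concern.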

There's no need to convince me that general AI is possible in principle. We can use the same method to prove that nanobots and self-replicating von Neumann machines are perfectly possible, but we're still a long way from actually building them.

It's just frustrating: like watching someone explain why proving code correct is important in the control software of a nuclear reactor (extremely true) who has no idea how code is proven, has never written even a hello-world program, and every now and then talks as if exception handling were unknown to programmers. They're making a reasonable point, but mixing their language with references to magic and occasional absurdities.

Comment author: gjm 17 April 2015 03:15:39PM 0 points

Yeah, I understand the frustration.

Still, if someone is capable of grasping the argument "Any kind of software failure in the control systems of a nuclear reactor could have disastrous consequences; the total amount of software required isn't too enormous; therefore it is worth going to great lengths, including formal correctness proofs, to ensure that the software is correct" then they're right to make that argument even if their grasp of what kind of software is used for controlling a nuclear reactor is extremely tenuous. And if they say "... because otherwise the reactor could explode and turn everyone in the British Isles into a hideous mutant with weird superpowers" then of course they're hilariously wrong, but their wrongness is about the details of the catastrophic disaster rather than the (more important) fact that a catastrophic disaster could happen and needs preventing.

Comment author: HungryHobo 17 April 2015 03:50:24PM 0 points

That's absolutely true but it leads to two problems.

First: the obvious lack of experience with, or understanding of, the nuts and bolts makes people outside the community less likely to take the realistic parts of the warning seriously, and may even lead to the subject being viewed as one of mockery, which works against you.

Second: the failure modes that people suggest due to their lack of understanding can also be hilariously wrong, like "a breach may be caused by the radiation giving part of the shielding magical superpowers, which will then cause it to gain life and open a portal to R'lyeh," and they may even spend many paragraphs talking about how serious that failure mode is while others who also don't actually understand politely applaud. This has some of the same unfortunate side effects: it makes people who are totally unfamiliar with the subject less likely to take the realistic parts of the warning seriously.

Comment author: [deleted] 15 April 2015 09:52:45AM 1 point

Does one of these papers answer this? http://lesswrong.com/r/discussion/lw/m26/could_you_tell_me_whats_wrong_with_this/c9em I suspect these papers would be difficult and time-consuming to understand, so I would prefer not to read all of them just to figure this one out.

Comment author: ChaosMote 15 April 2015 12:29:11AM 1 point

I think you are being a little too exacting here. True, most advances in well-studied fields are likely to be made by experts. That doesn't mean that non-experts should be barred from discussing the issue, for educational and entertainment purposes if nothing else.

That is not to say that there isn't a minimum level of subject-matter literacy required for an acceptable post, especially when the poster in question posts frequently. I imagine your point may be that Algon has not cleared that threshold (or is close to the line) - but your post seems to imply a MUCH higher threshold for posting.

Comment author: shminux 15 April 2015 01:55:04AM 0 points

for educational and entertainment purposes if nothing else.

Indeed. And a more appropriate tone for this would be "How is <issue X> addressed in current AI research?" and "Where can I find more information about it?", not "I cannot find anything wrong with this idea." To be fair, the OP was edited to sound less arrogant, though the author's reluctance to do some reading even after being pointed to it is not encouraging. Hopefully this is changing.

Comment author: Algon 14 April 2015 06:01:56PM 0 points

The main purpose of this post is not to actually propose a solution; I do not think I have some golden idea here. And if I gave that impression, I really am sorry (not sarcastic). What this was meant to be was a learning opportunity, because I really couldn't see many things wrong with this avenue. So I posted it here to see what was wrong with it, and so far I've had a few people reply and give me decent reasons as to why it is wrong. I've quibbled with a few of them, but that'll probably be cleared up in time.

Though I guess my post was probably too confident in tone. I'll try to correct that in the future and make my intentions better known. I hope I've cleared up any misunderstandings, and thanks for taking the time to reply. I'll certainly check those recommendations out.