
estimator comments on Leaving LessWrong for a more rational life - Less Wrong Discussion

33 [deleted] 21 May 2015 07:24PM




Comment author: estimator 22 May 2015 03:13:07PM *  3 points [-]

Certainly. Why not?

Computers already can outperform you in a wide variety of tasks. Moreover, today, with the rise of machine learning, we can train computers to do pretty high-level things, like object recognition or sentiment analysis (and sometimes they outperform humans at these tasks). Isn't that power?

As for Solomonoff induction... What do you think your brain is doing when you are thinking? Some kind of optimized search in hypothesis space: you consider only a very, very small set of hypotheses (compared to the entire space), hopefully good enough ones. Solomonoff induction, by contrast, checks all of them, every single hypothesis, and finds the best.

Solomonoff induction is so much thinking that it is incomputable.
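To make the "checks every single hypothesis" point concrete, here is a bounded caricature in Python (my own sketch, not anything from the thread). "Programs" are bit strings executed by a toy rule — a program predicts the data by repeating itself cyclically — and, as in the Solomonoff prior, a program of length n gets weight 2^-n, so the shortest program that fits the data wins. Real Solomonoff induction enumerates all programs for a universal machine, which is why it is incomputable; this toy only works because its "programming language" is trivially bounded.

```python
from itertools import product

def predicts(program, data):
    # A "program" (tuple of '0'/'1' chars) predicts the data by cyclic repetition.
    return all(data[i] == program[i % len(program)] for i in range(len(data)))

def best_hypothesis(data, max_len=12):
    # Exhaustively check every program up to max_len, Solomonoff-style:
    # keep the fitting program with the highest prior weight 2**-length.
    best, best_weight = None, 0.0
    for length in range(1, max_len + 1):
        for program in product("01", repeat=length):
            if predicts(program, data) and 2.0 ** -length > best_weight:
                best, best_weight = "".join(program), 2.0 ** -length
    return best

# The shortest cyclic "program" explaining 01010101 is just "01",
# even though "0101" and "01010101" also fit the data.
print(best_hypothesis("01010101"))
```

Even in this tiny language the search visits 2^1 + 2^2 + ... + 2^12 programs; with a universal language the enumeration never terminates, which is the incomputability estimator mentions.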

Since we don't have that much raw computing power (and never will), the hypothesis search must be heavily optimized: throwing off unpromising directions of search, searching in regions with a high probability of success, using prior knowledge to narrow the search. That's what your brain is doing, and that's what machines will do. That's not like "simple and brute-force", because simple and brute-force algorithms are either impractically slow, or incomputable at all.
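The contrast can be sketched in a few lines (a toy of my own devising; the names and the quadratic score are illustrative, not anything estimator proposed). Brute force scores every hypothesis in the space; the optimized search exploits structure — here, that the score improves as you move toward the target — to discard whole regions and examine only a handful of candidates:

```python
# Toy hypothesis space: integers in [0, 10**6); the "best hypothesis"
# is the one maximizing a score (here, closeness to a hidden target).
target = 271828

def score(h):
    return -(h - target) ** 2  # higher is better

def brute_force(space_size):
    # Check every single hypothesis, as an unoptimized search would.
    return max(range(space_size), key=score)

def hill_climb(space_size, start=0):
    # Pruned search: halve the step size, moving only in promising
    # directions, so unpromising regions are never examined at all.
    h, step, checked = start, space_size // 2, 0
    while step >= 1:
        for candidate in (h - step, h + step):
            checked += 1
            if 0 <= candidate < space_size and score(candidate) > score(h):
                h = candidate
        step //= 2
    return h, checked

best, examined = hill_climb(10**6)
print(best, examined)  # finds 271828 after examining ~40 hypotheses, not 10**6
```

The pruning only works because the score landscape has structure the searcher can exploit — which is exactly the role prior knowledge plays in the comment above.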

Comment author: [deleted] 22 May 2015 03:28:21PM 1 point [-]

Computers already can outperform you in a wide variety of tasks.

Eagles, too: they can fly and I cannot. The question is whether the currently foreseeable computerizable tasks are closer to flying or to intelligence. Which in turn depends on how high and how "magic" we consider intelligence to be.

As for Solomonoff induction... What do you think your brain is doing when you are thinking?

Ugh, using Aristotelian logic? So it is not random hypothesis checking but causality- and logic-based.

Solomonoff induction is so much thinking that it is incomputable.

I think, using your terminology, thinking is not the searching; it is the finding of logical relationships, so not a lot of space must be searched.

That's not like "simple and brute-force", because simple and brute-force algorithms are either impractically slow, or incomputable at all.

OK, that makes sense. Perhaps we can agree that logic and causality and actual reasoning are all about narrowing the hypothesis space to search. This is intelligence, not the search.

Comment author: estimator 22 May 2015 03:37:59PM *  4 points [-]

I'm starting to suspect that we're arguing over definitions. By search I mean the entire algorithm of finding the best hypothesis; both random hypothesis checking and Aristotelian logic (and any combination of these methods) fit. What do you mean?

Narrowing the hypothesis space is search. Once you have narrowed the hypothesis space to a single point, you have found the answer.
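"Narrowing to a single point" can itself be written as an algorithm. A minimal sketch (mine, with an invented oracle for illustration): each observation eliminates half the remaining hypotheses, and the search terminates exactly when one hypothesis is left.

```python
def narrow(lo, hi, oracle):
    # Hypothesis space: integers in [lo, hi). Each call to the oracle
    # ("is the answer >= mid?") discards half the remaining space.
    questions = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        questions += 1
        if oracle(mid):
            lo = mid
        else:
            hi = mid
    return lo, questions  # one hypothesis left: that's the answer

secret = 337
answer, asked = narrow(0, 1024, lambda m: secret >= m)
print(answer, asked)  # 337 found in 10 questions (log2 of 1024)
```

Whether you call the elimination steps "search" or "reasoning" is, as estimator says above, a matter of definitions: the procedure is the same either way.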

As for eagles: if we build a drone that can fly as well as an eagle, I'd say the drone has eagle-level flying ability; if a computer can solve all intellectual tasks that a human can solve, I'd say the computer has human-level intelligence.