
Comment author: curi 01 November 2017 09:38:23AM 0 points [-]

I think humans don't use their full computational capacity. Why expect an AGI to?

In what way do you think an AGI will have a better algorithm than humans? What sort of differences do you have in mind?

Comment author: siIver 01 November 2017 10:30:43AM *  0 points [-]

It doesn't really matter whether the AI uses its full computational capacity. If the AI has a 100000 times larger capacity (which is again a conservative lower bound) and only uses 1% of it, it will still be 1000 times as smart as a human using their full capacity.
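
Spelled out as a minimal Python sketch (the 100000x capacity figure and the 1% utilization are the assumptions from the paragraph above, not measured values):

    # Assumed numbers taken from the paragraph above, not measurements.
    human_capacity = 1.0                      # normalize a human's full capacity to 1
    agi_capacity = 100_000 * human_capacity   # the "conservative lower bound" above
    utilization = 0.01                        # suppose the AGI uses only 1% of it

    effective = agi_capacity * utilization
    print(effective)  # 1000.0 -> still 1000x a human's full capacity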

The AGI's algorithm will be better because it has instant access to more facts than any human has time to memorize, and it will not have all of the biases that humans have. The entire point of the Sequences is to list dozens of ways in which the human brain reliably fails.

Comment author: siIver 01 November 2017 09:32:32AM 0 points [-]

Because

"[the brain] is sending signals at a millionth the speed of light, firing at 100 Hz, and even in heat dissipation [...] 50000 times the thermodynamic minimum energy expenditure per binary swtich operation"

https://www.youtube.com/watch?v=EUjc1WuyPT8&t=3320s

AI will be quantitatively smarter because it'll be able to think over 10000 times faster (an arbitrary conservative lower bound), and it will be qualitatively smarter because its software will be built by an algorithm far better than evolution.
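
For a rough sense of scale behind the 10000x figure, a back-of-the-envelope Python sketch; only the 100 Hz firing rate comes from the quote above, while the 1 MHz serial rate is an illustrative assumption chosen deliberately far below modern GHz clock speeds:

    # Back-of-the-envelope speed comparison. Only the 100 Hz figure comes
    # from the quoted talk; the 1 MHz rate is an assumed, very pessimistic
    # hardware speed (real processors run in the GHz range).
    neuron_firing_rate_hz = 100
    assumed_serial_rate_hz = 1_000_000

    speedup = assumed_serial_rate_hz / neuron_firing_rate_hz
    print(speedup)  # 10000.0 -> matches the "conservative lower bound" above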

Comment author: siIver 09 September 2017 11:44:11AM 0 points [-]

"Less than a third of students by their own self-appointed worst-case estimate *1."

Missing a word here, I think.

Comment author: siIver 09 September 2017 10:46:05AM 0 points [-]

I think your post is spot on.

Comment author: siIver 31 August 2017 10:56:35PM 1 point [-]

Re-live. Although I'd rather live the same amount of time from now onward.

Comment author: siIver 20 July 2017 07:41:08PM 4 points [-]

First question: I know you admire Trump's persuasion skills, but what I want to know is why you think he's a good person/president, etc.

Answer: [talks about Trump's persuasion skills]

Yeah, okay.

Comment author: siIver 09 July 2017 02:18:59PM 0 points [-]

This is an exceptionally well-reasoned article, I'd say. Particular props for the appropriate amount of uncertainty.

Comment author: cousin_it 07 July 2017 08:46:08PM *  4 points [-]

Yeah, that seems to be the biggest flaw in the post. I shouldn't have addressed it to everyone; it's intended mostly for people suffering from "akrasia". I.e., if lone wolf is working for you, ignore the post. If it isn't, notice that and change course.

Comment author: siIver 07 July 2017 08:52:27PM 1 point [-]

Well, if you put it like that, I fully agree. Generally, I believe that "if it doesn't work, try something else" isn't followed as often as it should be. There's probably a fair number of people who'd benefit from following this article's advice.

Comment author: siIver 07 July 2017 08:41:09PM 0 points [-]

I don't quite know how to make this response more sophisticated than "I don't think this is true." It seems to me that whether classes or lone-wolf improvement is better is a pretty complex question, and the answer is fairly balanced, though overall I'd give the edge to lone-wolf.

Comment author: siIver 04 July 2017 08:23:10PM *  1 point [-]

I don't know what our terminal goals are (more precisely than "positive emotions"). I think it doesn't matter insofar as the answer to "what should we do" is "work on AI alignment" either way. Modulo that, yeah there are some open questions.

On the thesis that suffering requires higher-order cognition in particular, I have to say that sounds incredibly implausible (for what I think are fairly obvious reasons involving evolution).
