
CellBioGuy comments on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. - Less Wrong Discussion

Post author: diegocaleiro, 28 November 2015 11:07AM (15 points)


Comments (74)


Comment author: CellBioGuy 29 November 2015 04:50:58AM, 4 points

although he was once very excited about the prospect of defeating the mechanisms of ageing, back when less than 300 thousand dollars was directly invested in it, he is now, with billions pledged against ageing, confident that the problem is substantially harder to surmount than the number of man-hours left to be invested in it, at least during my lifetime, or before the Intelligence Explosion.

So, after all this learning about the niggling details that keep frustrating these grand designs, you still think an intelligence explosion is something that matters and is likely? Why? Isn't it just as much a deus ex machina as the rest of the ideas you have fallen away from after learning more about them?

Comment author: diegocaleiro 29 November 2015 10:47:10AM, 0 points

Not really. My understanding of AI is far from grandiose; I know less about it than about my own fields (Philosophy, BioAnthropology). I've merely read all of FHI's output, most of MIRI's, half of AIMA, Paul's blog, maybe four popular and two technical books on related issues, and at most 60 papers on AGI per se. I don't code, and I have only a coarse-grained understanding of the field. But in the little research and time I had to look into it, I saw no convincing evidence of a cap on the level of sophistication a system's cognitive abilities can achieve. I have also not seen very robust evidence that would support the hypothesis of a fast takeoff.

The fact that we have not fully conceptually disentangled the dimensions of which intelligence is composed is mildly embarrassing, though. It may be that AGI is a deus ex machina because, as Minsky or Goertzel suggest (rather than MIRI or LessWrong), general intelligence will turn out to be a plethora of abilities with no single common denominator, often superimposed in a robust way.

But for now, nobody who is publishing seems to know for sure.

Comment author: V_V 29 November 2015 02:15:28PM, 3 points

Not really. My understanding of AI is far from grandiose; I know less about it than about my own fields (Philosophy, BioAnthropology). I've merely read all of FHI's output, most of MIRI's, half of AIMA, Paul's blog, maybe four popular and two technical books on related issues, and at most 60 papers on AGI per se. I don't code, and I have only a coarse-grained understanding of the field. But in the little research and time I had to look into it, I saw no convincing evidence of a cap on the level of sophistication a system's cognitive abilities can achieve. I have also not seen very robust evidence that would support the hypothesis of a fast takeoff.

Beware the Dunning–Kruger effect.

Looking at the big picture, you could also say that there is no convincing evidence for a cap on the lifespan of a biological organism. Heck, some trees have been alive for over 10,000 years! Yet, once you look at the nitty-gritty details of biomedical research, it becomes clear that even adding just a few decades to the human lifespan is a very hard problem, and researchers still largely don't know how to solve it.

It's the same for AGI. Maybe truly super-human AGI is physically impossible for complexity reasons; but even if it is possible, developing it is a very hard problem, and researchers still largely don't know how to solve it.

Comment author: diegocaleiro 29 November 2015 10:23:45PM, 0 points

I think you mistook my claim for sarcasm. I actually believe I don't know much about AI (not nearly enough to make a robust assessment).