LucasSloan comments on Open Thread: February 2010, part 2 - Less Wrong

10 Post author: CronoDAS 16 February 2010 08:29AM


Comment author: CronoDAS 17 February 2010 04:15:39AM *  1 point [-]

I disagree. I recommend the top rated charities on givewell.net, specifically the Stop TB Partnership. (They also have a nice blog.)

Comment author: LucasSloan 17 February 2010 05:05:18AM 0 points [-]

Could you explain why? Do you believe that SIAI/FHI aren't accomplishing what they set out to do? Do you discount future lives? Something else?

Comment author: CronoDAS 17 February 2010 05:50:35AM *  3 points [-]

I don't expect Eliezer and co. to succeed, if you define "success" as actually building a transhuman Friendly AI before Eliezer is either cryopreserved or suffers information-theoretic death. My "wild guess" at the earliest plausible date for AGI of any kind is 2100.

Comment author: Eliezer_Yudkowsky 17 February 2010 06:09:17AM 5 points [-]

What do you think you know and how do you think you know it?

Comment author: CronoDAS 17 February 2010 06:34:58AM *  6 points [-]

I'm guessing based on several factors:

1) The past failure of AGI research to deliver progress

2) The apparent difficulty of the problem. We don't know how to do it, and we don't know what we would need to know before we can know how to do it. Or, at least, I don't.

3) My impressions of the speed of scientific progress in general. For example, the lag between "new discovery" and "marketable product" in medicine and biotechnology is typically around 30 years.

4) My impressions of the speed of progress in mathematics, in which important unsolved problems often stay unsolved for centuries. It took over 300 years to prove Fermat's Last Theorem, and the formal mathematics of computation is less than a century old; Alan Turing described the Turing machine in 1936.

5) The difficulty of computer programming in general. People are bad at programming.

Comment author: LucasSloan 17 February 2010 07:19:32AM 0 points [-]

Do you also evaluate the chances of WBE as being vanishingly slim over the next century?

Comment author: CronoDAS 17 February 2010 07:24:27AM *  1 point [-]

Actually, no, but I also expect whole brain emulation to exist for quite a while before running an emulation becomes cheaper than hiring a human engineer. I don't expect a particularly fast em transition; it took many years for portable telephones to go from something that cost thousands of dollars and went in your car to the cell phones that everyone uses today.

The Singularity was created by Nikola Tesla and Thomas Edison, and ended sometime around 1920. Get used to it. ;)

Comment author: LucasSloan 17 February 2010 07:27:58AM 0 points [-]

So you expect that WBE will become possible before cheap supercomputers?

Comment author: timtyler 17 February 2010 02:12:05PM 0 points [-]

You might like to quantify "cheap" and "super".

Comment author: LucasSloan 17 February 2010 08:18:36PM 0 points [-]

See reply to CronoDAS below.

Comment author: CronoDAS 17 February 2010 07:41:48AM 0 points [-]

Even at Moore's Law speeds, simulating 10^11 neurons, 10^11 glial cells, 10^15 synaptic connections, and concentrations of various neurotransmitters and other chemicals in real time or faster-than-real time is going to be expensive for a long time before it becomes cheap.
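The scale of that claim can be sketched with back-of-envelope arithmetic. All of the per-synapse figures below are illustrative assumptions (the source only gives the counts of neurons, glia, and synapses), so treat the result as an order-of-magnitude guess, not a measurement:

```python
# Naive estimate of the raw compute needed for real-time whole-brain simulation.
# Assumed, not sourced: ~1 kHz update rate per synapse, ~10 ops per update.
SYNAPSES = 1e15          # synaptic connections (figure from the comment above)
UPDATE_HZ = 1e3          # assumed synaptic update rate
OPS_PER_UPDATE = 10      # assumed cost of one synapse update

flops_needed = SYNAPSES * UPDATE_HZ * OPS_PER_UPDATE
print(f"~{flops_needed:.0e} FLOP/s for real-time simulation")  # ~1e+19
```

Even this ignores the glial cells and neurotransmitter chemistry mentioned above, which could add orders of magnitude depending on how much detail actually matters.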

Comment author: LucasSloan 17 February 2010 07:55:54AM 2 points [-]

Not necessarily. If a human brain with no software tricks requires 10^20 CPS (a very high estimate), then (according to Kurzweil; take with a grain of salt) the computational capacity will be there by ~2040. However, it's certainly possible that we won't have the software until 2050, at which point anyone with a couple hundred dollars can run one.
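The arithmetic behind projections like this is simple exponential extrapolation. A minimal sketch, where the starting capacity and the two-year doubling time are both assumptions rather than anything claimed in the thread:

```python
import math

# When does a fixed budget reach an assumed brain-emulation requirement,
# if capacity per dollar doubles every two years (Moore's-law-style)?
REQUIRED_CPS = 1e20      # high-end estimate quoted in the comment above
CPS_IN_2010 = 1e13       # assumed starting point: ~10 TFLOP/s per budget in 2010
DOUBLING_YEARS = 2.0     # assumed doubling time

doublings = math.log2(REQUIRED_CPS / CPS_IN_2010)
year = 2010 + doublings * DOUBLING_YEARS
print(round(year))  # prints 2057
```

The result is very sensitive to the assumptions: with Kurzweil's faster price-performance doubling (closer to one year), the same arithmetic lands in the early 2030s, which is why such projections vary so widely.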

Comment author: orthonormal 17 February 2010 07:54:45AM 1 point [-]

Depends on which details actually need to be simulated. I suspect that most intracellular activity can be neglected or replaced with some simple rules on when a cell divides, adds a synapse, etc.

Comment author: Kevin 17 February 2010 07:32:28AM *  3 points [-]

I disagree, but we probably have different estimates as to just how effective DNA modification and/or intelligence enhancing drugs are going to be in the future. I don't think Eliezer is going to make all that big of a dent in the FAI problem until he becomes more intelligent, and it's hard to estimate how much faster that will make him. I think I can say that intelligence enhancement could turn an impossible problem into a possible problem. It also means that there will be many more people out there capable of making meaningful contributions to the FAI problem.