olalonde

Perfect simulation is not only really hard; it has been proven to be impossible. See http://en.wikipedia.org/wiki/Halting_problem
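
For the curious, here is a minimal sketch of the diagonalization argument behind that link. The `halts` decider is hypothetical (that is the whole point), and all names are just illustrative:

```python
def make_contrary(halts):
    """Given a purported halting decider halts(f) -> bool (hypothetical),
    build a program that the decider necessarily gets wrong."""
    def contrary():
        if halts(contrary):
            while True:        # decider claims contrary halts, so loop forever
                pass
        # decider claims contrary loops forever, so halt immediately
        return None
    return contrary

# Any candidate decider is defeated by its own "contrary" program.
def always_yes(program):
    return True                # toy decider that always answers "halts"

contrary = make_contrary(always_yes)
# Calling contrary() would loop forever, contradicting the decider's answer;
# the same construction refutes every possible decider.
```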

Related:

The really important thing is not to live, but to live well. - Socrates

Perhaps their contribution is in influencing the non-experts? It is very likely that the non-experts base their estimates on whatever predictions respected experts have made.


I believe government should be much more localized and I like the idea of charter cities. Competition among governments is good for citizens just as competition among businesses is good for consumers. Of course, for competition to really work out, immigration should not be regulated.

See: http://en.wikipedia.org/wiki/Charter_city


For some reason, this thread reminds me of this Simpsons quote:

"The following tale of alien encounters is true. And by true, I mean false. It's all lies. But they're entertaining lies, and in the end, isn't that the real truth?"

Oh, and every time someone in this world tries to build a really powerful AI, the computing hardware spontaneously melts.

Would have been a good punchline if the humans had ended up melting the aliens' computer simulating our universe.

To expand on what parent said, pretty much all modern programming languages are Turing complete, i.e. computationally equivalent to Turing machines. This includes JavaScript, Java, Ruby, PHP, C, etc. If I understand Solomonoff induction properly, testing all possible hypotheses amounts to generating all possible programs in, say, JavaScript and testing them to see which programs' outputs match our observations. If multiple programs match the output, we should choose the smallest one.
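
As a toy illustration, assuming we shrink "all possible programs" down to short arithmetic expressions over one input (rather than full JavaScript programs), the shortest-program-first search might look like this sketch in Python:

```python
from itertools import count, product

# Toy "language": arithmetic expressions over the input n, built from a tiny alphabet.
TOKENS = ["n", "1", "2", "+", "*"]

def run(program, n):
    """Interpret a candidate program (a Python expression string) on input n."""
    try:
        return eval(program, {"__builtins__": {}}, {"n": n})
    except Exception:
        return None  # invalid or crashing programs match nothing

def shortest_matching_program(observations):
    """Enumerate programs shortest-first and return the first one whose outputs
    match every (input, output) observation -- the preference for the smallest
    consistent hypothesis. Searches forever if no program in the language matches."""
    for length in count(1):
        for tokens in product(TOKENS, repeat=length):
            program = "".join(tokens)
            if all(run(program, n) == out for n, out in observations):
                return program

# Observations generated by the unknown process n -> n*2 + 1:
print(shortest_matching_program([(1, 3), (2, 5), (10, 21)]))  # e.g. "n+n+1"
```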

efficiently convert ambient energy

Just a nitpick, but if I recall correctly, cellular respiration (aerobic metabolism) is much more efficient than any of our modern ways of producing energy.

I think 1 is the most likely scenario (although I don't think FOOM is a very likely scenario). Some more mind-blowingly hard problems are available here for those who are still skeptical: http://en.wikipedia.org/wiki/Transcomputational_problem

I don't think that's so obviously true. Here are some possible arguments against that theory:

1) There is a theoretical upper limit on how fast information can travel (the speed of light). A very large "brain" will eventually be limited by that speed.

2) Some computational problems are so hard that even an extremely powerful "brain" would take a very long time to solve them (http://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability).

3) There are physical limits to computation (http://en.wikipedia.org/wiki/Bremermann%27s_limit). Bremermann's limit is the maximum computational speed of a self-contained system in the material universe. According to this limit, a computer the size of the Earth would take 10^72 years to crack a 512-bit key. In other words, even an AI the size of the Earth would not manage to break modern human encryption by brute force. (A rough back-of-the-envelope check of this figure follows below.)

More theoretical limits here: http://en.wikipedia.org/wiki/Limits_to_computation
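
Here is that back-of-the-envelope check, assuming Bremermann's limit of about 1.36e50 bits per second per kilogram, an Earth mass of about 6e24 kg, and one bit operation per key tried (all round numbers; only the order of magnitude matters):

```python
# Bremermann's limit: ~1.36e50 bits processed per second per kilogram of computer.
BREMERMANN_BITS_PER_S_PER_KG = 1.36e50
EARTH_MASS_KG = 5.97e24          # rough mass of the Earth
SECONDS_PER_YEAR = 3.15e7

ops_per_second = BREMERMANN_BITS_PER_S_PER_KG * EARTH_MASS_KG   # ~8e74 ops/s
keys_to_try = 2 ** 512                                          # ~1.3e154 candidate keys

years = keys_to_try / ops_per_second / SECONDS_PER_YEAR
print(f"{years:.1e} years")   # ~5e71, i.e. on the order of 10^72 years
```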
