The person who says, as almost everyone does say, that human life is of infinite value, not to be measured in mere material terms, is talking palpable, if popular, nonsense. If he believed that of his own life, he would never cross the street, save to visit his doctor or to earn money for things necessary to physical survival. He would eat the cheapest, most nutritious food he could find and live in one small room, saving his income for frequent visits to the best possible doctors. He would take no risks, consume no luxuries, and live a long life. If you call it living. If a man really believed that other people's lives were infinitely valuable, he would live like an ascetic, earn as much money as possible, and spend everything not absolutely necessary for survival on CARE packets, research into presently incurable diseases, and similar charities.
In fact, people who talk about the infinite value of human life do not live in either of these ways. They consume far more than they need to support life. They may well have cigarettes in their drawer and a sports car in the garage. They recognize in their actions, if not in their words, that physical survival is only one value, albeit a very important one, among many.
-- David D. Friedman, The Machinery of Freedom
Related:
The really important thing is not to live, but to live well. - Socrates
The only noticeable difference is that amateurs lacked the upswing at 50 years, and were relatively more likely to push their predictions beyond 75 years. This does not look like good news for the experts - if their performance can't be distinguished from amateurs, what contribution is their expertise making?
I believe you can put your case even a bit more strongly than this. With this amount of data, the differences you point out are clearly within the range of random fluctuations; the human eye picks them out, but does not see the huge reference class of similarly "different" distributions. I predict with confidence over 95% that a formal statistical analysis would find no difference between the "expert" and "amateur" distributions.
Perhaps their contribution lies in influencing the non-experts? It is very likely that the non-experts base their estimates on whatever predictions respected experts have made.
I believe government should be much more localized and I like the idea of charter cities. Competition among governments is good for citizens just as competition among businesses is good for consumers. Of course, for competition to really work out, immigration should not be regulated.
If you wish to advance into the infinite, explore the finite in all directions.
That sounds incredibly deep. (By which I mean it is bullshit.)
For some reason, this thread reminds me of this Simpsons quote:
"The following tale of alien encounters is true. And by true, I mean false. It's all lies. But they're entertaining lies, and in the end, isn't that the real truth?"
Oh, and every time someone in this world tries to build a really powerful AI, the computing hardware spontaneously melts.
Would have been a good punch if the humans ended up melting away the aliens' computer simulating our universe.
A good and useful abstraction, entirely equivalent to Turing machines and much more convenient for humans, is the lambda calculus and the combinator calculi. Many of these systems are known to be Turing complete.
Lambda calculus and other combinator calculi are rule sets that rewrite strings of symbols expressed in a formal grammar. Fortunately, the symbol sets of all such grammars are finite and can therefore be encoded in binary. Furthermore, all the Turing-complete calculi of this kind can encode both linked lists and boolean values. So, just as with the Turing machine model described in this article, one can write a program in a binary combinator calculus, feed it a linked list of boolean values expressed in the calculus itself, and have it reduce to (return) a linked list of boolean values.
Personally I prefer combinator calculi to Turing machines mainly because they are vastly easier to program.
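To make the encoding point concrete, here is a minimal sketch in Python of booleans and linked lists represented purely as functions, in the style of Church encodings. This is illustrative only: actual combinator calculi operate by rewriting symbol strings, and the naming conventions here are one common choice among several.

```python
# Church-style encodings in Python: booleans and linked lists built
# entirely out of functions, as the lambda calculus requires.
# (Illustrative sketch only, not a real combinator-calculus reducer.)

TRUE = lambda a: lambda b: a    # selects its first argument
FALSE = lambda a: lambda b: b   # selects its second argument

# A pair is a function that feeds its two components to a selector.
pair = lambda x: lambda y: lambda sel: sel(x)(y)
first = lambda p: p(TRUE)
second = lambda p: p(FALSE)

# A linked list of Church booleans: (TRUE, (FALSE, NIL)).
NIL = lambda sel: TRUE  # one conventional empty-list marker
bits = pair(TRUE)(pair(FALSE)(NIL))

# Decode back to ordinary Python values to inspect the result.
to_bool = lambda b: b(True)(False)
print(to_bool(first(bits)))          # True
print(to_bool(first(second(bits))))  # False
```

Everything above is "just functions," which is the sense in which such a calculus can accept and return linked lists of booleans without any built-in data types.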
Related:
To expand on what parent said, pretty much all modern computer languages are equivalent to Turing machines (Turing complete). This includes Javascript, Java, Ruby, PHP, C, etc. If I understand Solomonoff induction properly, testing all possible hypotheses implies generating all possible programs in, say, Javascript and testing them to see which program's output matches our observations. If multiple programs match the output, we should choose the smallest one.
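As a toy illustration of that search, one can enumerate programs shortest-first and keep the first one that reproduces every observation. The three-instruction language below is invented for the sketch (real Solomonoff induction ranges over all programs of a universal machine, which is uncomputable in general):

```python
# Shortest-first program search over a tiny invented language:
# a program is a sequence of the named instructions below.
from itertools import product

INSTRUCTIONS = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "sq":  lambda x: x * x,
}

def run(program, x):
    for op in program:
        x = INSTRUCTIONS[op](x)
    return x

def shortest_match(observations, max_len=5):
    """observations: list of (input, output) pairs."""
    for length in range(1, max_len + 1):      # shortest programs first
        for program in product(INSTRUCTIONS, repeat=length):
            if all(run(program, i) == o for i, o in observations):
                return program
    return None

# Which short program maps 3 -> 8 and 5 -> 12?  ((x + 1) * 2)
print(shortest_match([(3, 8), (5, 12)]))  # ('inc', 'dbl')
```

Because programs are tried in order of increasing length, the first match is automatically the smallest, which is the Occam-style preference the comment describes.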
Is there a simple summary of why you think this is true of intelligence when it turned out not to be true of, say, durability, or flightspeed, or firepower, or the ability to efficiently convert ambient energy into usable form, or any of a thousand other evolved capabilities for which we've managed to far exceed our physiological limits with technological aids?
efficiently convert ambient energy
Just a nitpick but if I recall correctly, cellular respiration (aerobic metabolism) is much more efficient than any of our modern ways to produce energy.
To follow up on what olalonde said, there are problems that appear to get extraordinarily difficult as the number of inputs increases. Wikipedia suggests that the best known exact solution to the traveling salesman problem runs on the order of O(2^n), where n is the number of inputs. Saying that adding computational ability resolves these issues for actual AGI implies one of:
1) AGI trying to FOOM won't need to solve problems as complicated as traveling salesman type problems, or
2) AGI trying to FOOM will be able to add processing power at a rate reasonably near O(2^n), or
3) In the process of FOOM, an AGI will be able to determine P=NP or similarly revolutionary result.
None of those seem particularly plausible to me. So for reasonably sized n, an AGI will not be able to solve such problems appreciably better than humans can.
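The numbers make the point concrete. A rough sketch, using standard back-of-the-envelope operation counts for brute force and for Held-Karp (the best known exact TSP algorithm, roughly O(n^2 * 2^n)):

```python
# Rough operation counts showing why "just add hardware" does not keep
# pace with exact TSP: each extra city multiplies the cost.
import math

def brute_force_ops(n):
    # Tours to check, fixing one start city: (n-1)!
    return math.factorial(n - 1)

def held_karp_ops(n):
    # Rough count for the dynamic-programming algorithm: n^2 * 2^n
    return n * n * 2 ** n

for n in (10, 20, 30, 40):
    print(f"n={n}: brute force ~{brute_force_ops(n):.2e}, "
          f"Held-Karp ~{held_karp_ops(n):.2e}")
```

Even the better algorithm roughly doubles in cost per added city, so doubling hardware buys about one extra city, which is scenario 2 above in miniature.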
I think 1 is the most likely scenario (although I don't think FOOM is a very likely scenario). Some more mind-blowingly hard problems are available here for those who are still skeptical: http://en.wikipedia.org/wiki/Transcomputational_problem
One of the most direct methods for an agent to increase its computing power (does this translate to an increase in intelligence, even logarithmically?) is to increase the size of its brain. This doesn't have an inherent upper limit, only ones caused by running out of matter and things like that, which I consider uninteresting.
I don't think that's so obviously true. Here are some possible arguments against that theory:
1) There is a theoretical upper limit at which information can travel (speed of light). A very large "brain" will eventually be limited by that speed.
2) Some computational problems are so hard that even an extremely powerful "brain" would take very long to solve (http://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability).
3) There are physical limits to computation (http://en.wikipedia.org/wiki/Bremermann%27s_limit). Bremermann's Limit is the maximum computational speed of a self-contained system in the material universe. According to this limit, a computer the size of the Earth would take 10^72 years to crack a 512 bit key. In other words, even an AI the size of the Earth would not manage to break modern human encryption by brute-force.
More theoretical limits here: http://en.wikipedia.org/wiki/Limits_to_computation
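The 512-bit figure can be checked with a back-of-the-envelope calculation. Assumptions: Bremermann's limit of roughly 1.36e50 bits per second per kilogram, an Earth-mass (~5.97e24 kg) computer, and one "operation" per candidate key:

```python
# Back-of-the-envelope check of the claim that an Earth-sized computer
# at Bremermann's limit needs ~10^72 years to brute-force a 512-bit key.
import math

ops_per_sec = 1.36e50 * 5.97e24        # aggregate rate, Earth-mass computer
keyspace = 2.0 ** 512                  # number of candidate 512-bit keys
seconds = keyspace / ops_per_sec
years = seconds / (365.25 * 24 * 3600)
print(f"~1e{round(math.log10(years))} years")  # on the order of 1e72 years
```

The result lands around 10^72 years, consistent with the figure quoted above.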
Sorry if this is a stupid question, but this tournament looked to me like a thinly disguised version of:
"Construct a robot that can read code and interpret what it means."
which is a Really Hard Problem.
Is that not a fair description? Was there some other way to approach the problem?
The only way I can see to construct a GOOD entrant to this is to write something that takes the opponent's code as input and interprets what it will actually DO: something that can recognize the equivalence between (say):
return DEFECT
and
if 1: return DEFECT
return COOPERATE
and can interpret things like:
if oppcode == mycode: return COOPERATE
return DEFECT
And I have no idea how to go about doing that. From the fact that the winning entrants were all random, it seems safe to say that no entrants had any idea how to go about doing that either.
Am I missing something here?
Perfect simulation is not just really hard; it is provably impossible in general. See http://en.wikipedia.org/wiki/Halting_problem
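The "mirror" snippet quoted earlier in the thread sidesteps that impossibility: instead of simulating the opponent, it compares source text, which is decidable. A minimal sketch (names like `mirror_bot` are invented for illustration, not from the actual tournament):

```python
# Sketch of a "mirror" strategy: cooperate only with an opponent whose
# source text is identical to mine. Textual equality is decidable;
# semantic equivalence is not (halting problem), so this defects against
# programs that behave identically but are written differently.
COOPERATE, DEFECT = "C", "D"

def mirror_bot(my_source, opponent_source):
    if opponent_source == my_source:
        return COOPERATE
    return DEFECT

print(mirror_bot("src", "src"))    # C (exact textual match)
print(mirror_bot("src", "other"))  # D (even if "other" plays identically)
```

This is exactly the gap the comment identifies: the two always-DEFECT snippets above are semantically equivalent, but no entrant could be expected to prove that in general.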