One of the most direct methods for an agent to increase its computing power (does this translate to an increase in intelligence, even logarithmically?) is to increase the size of its brain. This has no inherent upper limit, only practical ones like running out of matter, which I consider uninteresting.
I don't think that's so obviously true. Here are some possible arguments against that theory:
1) There is a theoretical upper limit on the speed at which information can travel (the speed of light). A very large "brain" will eventually be limited by that speed, since signals take longer to cross a larger volume.
2) Some computational problems are so hard that even an extremely powerful "brain" would take a very long time to solve them (http://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability).
3) There are physical limits to computation (http://en.wikipedia.org/wiki/Bremermann%27s_limit). Bremermann's Limit is the maximum computational speed of a self-contained system in the material universe. According to this limit, a computer the size of the Earth would take 10^72 years to crack a 512 bit key. In other words, even an AI the size of the Earth would not manage to break modern human encryption by brute force.
More theoretical limits here: http://en.wikipedia.org/wiki/Limits_to_computation
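The 10^72-years figure above is easy to check. A back-of-the-envelope sketch, assuming Bremermann's limit of roughly 1.36×10^50 bits/s per kilogram and an Earth mass of roughly 5.97×10^24 kg (both from the Wikipedia articles linked above):

```python
# Back-of-the-envelope check of the "10^72 years" claim, assuming
# Bremermann's limit (~1.36e50 bits/s per kg) and Earth's mass (~5.97e24 kg).
BREMERMANN_LIMIT = 1.36e50   # bits per second per kilogram
EARTH_MASS = 5.97e24         # kilograms

ops_per_second = BREMERMANN_LIMIT * EARTH_MASS   # ~8e74 operations/second
keyspace = 2.0 ** 512                            # brute-force space for a 512-bit key

seconds = keyspace / ops_per_second
years = seconds / (365.25 * 24 * 3600)
print(f"~{years:.0e} years")   # ~5e+71, i.e. on the order of 10^72 years
```

This treats one key trial as one bit operation, which flatters the attacker; a real key trial costs many operations, so the true figure is even worse.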
If there is something really cool and you can't understand why somebody hasn't done it before, it's because you haven't done it yourself.
-- Lion Kimbro, "The Anarchist's Principle"
Forgive my stupidity, but I'm not sure I get this one. Should I read it as "[...] it's probably for the same reasons you haven't done it yourself."?
Since you said the quote itself was absurd, I thought you were saying the post was an internally flawed straw man meant as satire, but you meant something else by that word.
I'm the one who said that. Just to make it clear, I do agree with your first comment: taken literally, the quote doesn't make sense. Do you get it better if I say: "It is easy to achieve your goals if you have no goals"? I concede absurd was possibly a bit too strong here.
If someone didn't value any world-states more than any others, I'm not sure that a Way would actually exist for them, as they could do nothing to increase the expected utility of future world-states. Thus, it doesn't seem to really make sense to speak of such a Way being easy or hard for them.
Am I missing something?
I think you're overanalyzing here; the quote is meant to be absurd.
On the biological side, is there any evidence that we have reached an equilibrium? (I'm asking genuinely)
On one hand, evolution appears to work in a punctuated manner, meaning that individual components of evolutionary systems are usually at equilibrium.
On the other hand, brain volume in our ancestors rose smoothly from 3 million years ago to the present.
On the other other hand, some Neanderthals had larger brains than modern humans.
Higher levels of human intelligence result in a lower expected social utility for some other species (we are better at hunting them). It does not result in lower expected social utility for humans, as we are generally good to other humans. Higher levels of individual intelligence have brought us the great achievements of humankind with very few downsides.
You can't simply assert that. It's an empirical question. How have you tried to measure the downsides?
You can't simply assert that. It's an empirical question. How have you tried to measure the downsides?
It seems so obvious to me that I didn't bother... Here's some empirical data: http://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen.html . Anyway, if you really want to dispute the fact that we have progressed over the past few centuries, I believe the burden of proof rests on you.
can convey new information to a bounded rationalist
Why limit it to bounded rationalists?
If anything, the reason we don't see a rapid rise of intelligence among human beings
What about the Flynn effect?
I also strongly doubt the claim that human intelligence has stopped increasing. I was just offering an alternative hypothesis in case that proposition were true. Also, OP was arguing that intelligence stopped increasing at an evolutionary level which the Flynn effect doesn't seem to contradict (after a quick skim of the Wikipedia page).
However, humans and human societies are currently near some evolutionary equilibrium.
I think there's plenty of evidence that human societies are not near some evolutionary equilibrium. Can you name a human society that has lasted longer than a few hundred years? A few thousand years?
On the biological side, is there any evidence that we have reached an equilibrium? (I'm asking genuinely)
It's very possible that individual intelligence has not evolved past its current levels because it is at an equilibrium, beyond which higher individual intelligence results in lower social utility.
The consensus among biologists seems to be that social utility has zero to very little impact on evolution. See http://en.wikipedia.org/wiki/Group_selection
In fact, if you believe SIAI's narrative about the danger of artificial intelligence and the difficulty of friendly AI, I think you would have to conclude that higher individual intelligence results in lower expected social utility, for human measures of utility.
Higher levels of human intelligence result in a lower expected social utility for some other species (we are better at hunting them). It does not result in lower expected social utility for humans, as we are generally good to other humans. Higher levels of individual intelligence have brought us the great achievements of humankind with very few downsides. The concern with AGI is that it might treat humans as humans treat some other species.
If anything, the reason we don't see a rapid rise of intelligence among human beings is that it does not provide much evolutionary benefit. In modern societies, people don't die for being dumb (usually), and sexual selection doesn't have much impact since most people only have children with a single partner.
Furthermore, this study is definitely flawed since it's quite obvious that individual intelligence has done a great deal more good for society than bad. Is there even an argument about this?
The study itself isn't modelling all aspects of society, just a very limited set of PD (prisoner's dilemma) situations. That society has on the whole benefited from intelligence is due primarily to inventions and discoveries, which have no analog in PD. Maybe if one had a version where the more previous rounds of cooperation there have been, the higher the payoff of cooperation in future rounds, one might have something that approached that.
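That variant is easy to sketch. A toy version, with made-up payoff numbers, where each past round of mutual cooperation raises the future cooperation payoff (standing in for accumulated inventions and discoveries):

```python
# Toy iterated prisoner's dilemma where mutual cooperation compounds:
# each past round of mutual cooperation adds a bonus to future cooperation
# payoffs, standing in for accumulated inventions. Payoff numbers are made up.
def play(move_a, move_b, rounds=10, base=3.0, bonus=0.5):
    """move_a / move_b are zero-argument callables returning 'C' or 'D'."""
    coop_rounds = 0
    score_a = score_b = 0.0
    for _ in range(rounds):
        a, b = move_a(), move_b()
        if a == 'C' and b == 'C':
            payoff = base + bonus * coop_rounds  # cooperation compounds
            score_a += payoff
            score_b += payoff
            coop_rounds += 1
        elif a == 'C':          # a is exploited
            score_b += 5.0
        elif b == 'C':          # b is exploited
            score_a += 5.0
        else:                   # mutual defection
            score_a += 1.0
            score_b += 1.0
    return score_a, score_b

always_cooperate = lambda: 'C'
always_defect = lambda: 'D'
print(play(always_cooperate, always_cooperate))  # (52.5, 52.5)
print(play(always_defect, always_defect))        # (10.0, 10.0)
```

With the compounding bonus, sustained mutual cooperation pulls far ahead of mutual defection, which is the effect the standard one-shot PD payoff matrix can't capture.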
Saying that the study was flawed was indeed a bit strong. What I really meant is that OP's conclusion was wrong (individual intelligence = bad for society).
To follow up on what olalonde said, there are problems that appear to get extraordinarily difficult as the number of inputs increases. Wikipedia suggests that the best known solutions to the traveling salesman problem run in time on the order of O(2^n), where n is the number of inputs. Saying that adding computational ability resolves these issues for actual AGI implies one of the following:
1) AGI trying to FOOM won't need to solve problems as complicated as traveling salesman type problems, or
2) AGI trying to FOOM will be able to add processing power at a rate reasonably near O(2^n), or
3) In the process of FOOM, an AGI will be able to prove P=NP or some similarly revolutionary result.
None of those seem particularly plausible to me. So for reasonably sized n, an AGI will not be able to solve such problems appreciably better than humans.
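The scaling intuition here can be made concrete. A small sketch, assuming a hypothetical machine doing 10^9 operations per second on a problem that takes 2^n steps:

```python
# Why O(2^n) swamps raw hardware: each additional input doubles the work,
# so even a trillion-fold speedup only buys about 40 more inputs (2^40 ~ 10^12).
def brute_force_seconds(n, ops_per_second=1e9):
    """Time to finish 2**n steps on a hypothetical 1e9 ops/s machine."""
    return 2.0 ** n / ops_per_second

for n in (30, 60, 90):
    print(n, f"{brute_force_seconds(n):.1e} s")
# n=30: ~1 second
# n=60: ~1.2e9 seconds (~37 years)
# n=90: ~1.2e18 seconds, several times the age of the universe
```

Doubling per input is the whole story: hardware improves by constant factors, but the problem grows by a constant factor per input, so added processing power only shifts the feasibility cutoff by a handful of inputs.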
I think 1 is the most likely scenario (although I don't think FOOM is a very likely scenario). Some more mind blowing hard problems are available here for those who are still skeptical: http://en.wikipedia.org/wiki/Transcomputational_problem