All of olalonde's Comments + Replies

Perfect simulation is not only really hard, it has been proven to be impossible. See http://en.wikipedia.org/wiki/Halting_problem

Related:

The really important thing is not to live, but to live well. - Socrates

Perhaps their contribution is in influencing the non-experts? It is very likely that the non-experts base their estimates on whatever predictions respected experts have made.

3Stuart_Armstrong
Seems pretty unlikely, because you'd then expect the non-experts to have the same predicted dates as the experts, but not the same distribution of time to AI. Also, the examples I saw were mainly of non-experts saying: AI will happen around here because, well, I say so (sometimes spiced with Moore's law).

I believe government should be much more localized and I like the idea of charter cities. Competition among governments is good for citizens just as competition among businesses is good for consumers. Of course, for competition to really work out, immigration should not be regulated.

See: http://en.wikipedia.org/wiki/Charter_city

3AlexMennen
Counterargument: cities can compete against each other not only by implementing policies that will benefit the average resident, but also by implementing policies that will attract already successful immigrants. Thus, localizing government could result in policies that are biased to the advantage of already successful people.

Of course, for competition to really work out, immigration should not be regulated.

How does this follow? Unless I'm severely misreading you, this is equivalent to arguing that there should be a market in housing because competition between landlords will result in good housing with reasonable rents -- and then adding, as if it were obvious, that for competition to work out, landlords should not have any rules for screening potential tenants.

8Slackson
I like this idea, but for it to work the way we want it to, "unregulated immigration" isn't quite the right way to put it. Immigration needs to be free, perhaps by contract between charter cities. There'd need to be protocols for the creation of cities, for the transfer of land between them, for them shutting down, and so on. Perhaps this could be organized by a meta-government; I'm not sure how well a decentralized system would deal with that.

The barriers to entry would be very high, unfortunately, and while I'm not well versed in the economics of the development of monopolies, it appears to me that there might need to be some kind of regulation to prevent them from developing, so that new charter cities can enter the market somehow. Unless the division of land was predetermined and static, which also solves the previous problems of land transfer and city creation.

In the end this is a substantially less elegant system than initially imagined, but that doesn't mean it isn't still potentially far more elegant and effective than the system we have now. States are supposed to operate and compete in a similar manner, but there aren't enough of them for that to work well enough, and AFAICT the federal government plays a much larger role than is ideal.

Forgive me if I'm being stupid; I only get over my social anxiety enough to post if I'm a little bit drunk.

For some reason, this thread reminds me of this Simpsons quote:

"The following tale of alien encounters is true. And by true, I mean false. It's all lies. But they're entertaining lies, and in the end, isn't that the real truth?"

Oh, and every time someone in this world tries to build a really powerful AI, the computing hardware spontaneously melts.

It would have been a good punchline if the humans had ended up melting the aliens' computer simulating our universe.

To expand on what the parent said, pretty much all modern programming languages are Turing complete, i.e. equivalent in power to Turing machines. This includes Javascript, Java, Ruby, PHP, C, etc. If I understand Solomonoff induction properly, testing all possible hypotheses amounts to generating all possible programs in, say, Javascript and testing them to see which programs' output matches our observations. If multiple programs match the output, we should choose the smallest one.

0[anonymous]
Correct. It is just harder to exhaustively generate JS, and again harder to judge simplicity.
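
To make the idea concrete, here is a toy sketch: it enumerates every program in a small made-up stack language by increasing length, runs each one, and keeps the shortest program whose output matches the observations. The instruction set and example are invented for illustration; real Solomonoff induction is uncomputable, so this only caricatures the "generate all programs and keep the shortest match" step.

```python
from itertools import product

# Hypothetical toy instruction set: push 1, push 2, add, multiply, duplicate.
INSTRUCTIONS = "01+*d"

def run(program):
    """Interpret a program; return the final stack, or None if it crashes."""
    stack = []
    try:
        for op in program:
            if op == "0":
                stack.append(1)
            elif op == "1":
                stack.append(2)
            elif op == "+":
                stack.append(stack.pop() + stack.pop())
            elif op == "*":
                stack.append(stack.pop() * stack.pop())
            elif op == "d":
                stack.append(stack[-1])
    except IndexError:  # popped an empty stack: invalid program
        return None
    return stack

def shortest_matching_program(observations, max_len=8):
    """Enumerate programs by increasing length; return the shortest match."""
    for length in range(1, max_len + 1):
        for prog in product(INSTRUCTIONS, repeat=length):
            if run(prog) == observations:
                return "".join(prog)
    return None

# Which is the shortest program producing the observation [4]?
print(shortest_matching_program([4]))  # "11+" (i.e. 2 + 2)
```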

efficiently convert ambient energy

Just a nitpick but if I recall correctly, cellular respiration (aerobic metabolism) is much more efficient than any of our modern ways to produce energy.

0TheOtherDave
Fair enough. Thanks.

I think 1 is the most likely scenario (although I don't think FOOM is a very likely scenario). Some more mind-blowingly hard problems are available here for those who are still skeptical: http://en.wikipedia.org/wiki/Transcomputational_problem

I don't think that's so obviously true. Here are some possible arguments against that theory:

1) There is a theoretical upper limit on the speed at which information can travel (the speed of light). A very large "brain" will eventually be limited by that speed.

2) Some computational problems are so hard that even an extremely powerful "brain" would take a very long time to solve them (http://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability).

3) There are physical limits to computation (http://en.wikipedia.org/wiki/Bremermann%27s_limit). Bremermann...
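
For a rough sense of scale on point 3: Bremermann's limit is roughly the mass-energy of a system divided by Planck's constant, about 1.36 × 10^50 bits per second for a single kilogram of matter. A quick back-of-the-envelope check (constants rounded):

```python
# Bremermann's limit: maximum computational rate ~ m * c^2 / h bits per second.
c = 2.998e8      # speed of light, m/s
h = 6.626e-34    # Planck's constant, J*s
mass_kg = 1.0    # one kilogram of matter

limit_bits_per_second = mass_kg * c ** 2 / h
print(f"{limit_bits_per_second:.2e} bits/s")  # ~1.36e+50
```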

Forgive my stupidity, but I'm not sure I get this one. Should I read it as "[...] it's probably for the same reasons you haven't done it yourself."?

1dlthomas
I think it just means "you should do it", which is only sometimes the appropriate response.

I'm the one who said that. Just to make it clear, I do agree with your first comment: taken literally, the quote doesn't make sense. Do you get it better if I say: "It is easy to achieve your goals if you have no goals"? I concede "absurd" was possibly a bit too strong here.

2chaosmosis
Okay, that makes more sense, yeah I see what you mean and agree.

I think you're overanalyzing here; the quote is meant to be absurd.

0chaosmosis
Whaaa? Someone explain please. It didn't seem absurd when I read it.

You can't simply assert that. It's an empirical question. How have you tried to measure the downsides?

It seems so obvious to me that I didn't bother... Here's some empirical data: http://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen.html . Anyways, if you really want to dispute the fact that we have progressed over the past few centuries, I believe the burden of proof rests on you.

can convey new information to a bounded rationalist

Why limit it to bounded rationalists?

I also strongly doubt the claim that human intelligence has stopped increasing. I was just offering an alternative hypothesis in case that proposition were true. Also, OP was arguing that intelligence stopped increasing at an evolutionary level which the Flynn effect doesn't seem to contradict (after a quick skim of the Wikipedia page).

However, humans and human societies are currently near some evolutionary equilibrium.

I think there's plenty of evidence that human societies are not near some evolutionary equilibrium. Can you name a human society that has lasted longer than a few hundred years? A few thousand years?

On the biological side, is there any evidence that we have reached an equilibrium? (I'm asking genuinely)

It's very possible that individual intelligence has not evolved past its current levels because it is at an equilibrium, beyond which higher individual intelligence r

...
0PhilGoetz
On one hand, evolution appears to work in a punctuated manner, meaning that individual components of evolutionary systems are usually at equilibrium. On the other hand, brain volume in our ancestors rose smoothly from 3 million years ago to the present. On the other other hand, some Neanderthals had larger brains than modern humans.

You can't simply assert that. It's an empirical question. How have you tried to measure the downsides?
0Gastogh
I'd say the negative correlation between education and fertility has been established pretty firmly. As a simple demonstration: if you sort the information here by fertility rate in descending order, you'll find that the countries with <2 children per woman are mostly first-world countries. There are more than a few countries in Europe, for instance, where immigration is the only thing keeping the population growth positive, and let's not even get started on Japan. And it goes deeper than country-to-country comparisons; within a given country, the poor and less educated tend to have more children than the other guys. (China might be an exception to that, I'm not sure.) From what I know of population trends in recorded history, this has always been the case. This doesn't look good from an evolutionary point of view, if one is concerned with the long term instead of immediate x-risks and bioengineering etc. On the surface at least high education doesn't seem to be an evolutionarily valid tactic. Whether this applies for raw, general intelligence... Dunno. But I wouldn't be surprised if we'd reached an evolutionary equilibrium or a downswing.
4Viliam_Bur
Officially. If intelligence correlates positively with social skills and popularity, smart males can spread their genes outside of their marriages. (Reading this, don't imagine a nerd with IQ 190, but rather a jock with IQ 120. If he impregnates his average neighbor's wife, he contributes to the global intelligence increase.)
0Richard_Kennaway
What about the Flynn effect?

Saying that the study was flawed was indeed a bit strong. What I really meant is that OP's conclusion was wrong (individual intelligence = bad for society).

This suggests that intelligence is an externality, like pollution.

This sentence doesn't really make sense. Intelligence in itself is not a "cost imposed on a third party" (the definition of an externality)... Perhaps you mean that intelligence leads to more externalities?

Furthermore, this study is definitely flawed, since it's quite obvious that individual intelligence has done a great deal more good for society than bad. Is there even an argument about this?

3PhilGoetz
That isn't a flaw in the study. It would be a flaw in an interpretation of the study. Your question isn't well-defined, since most of the things we define as good require intelligence. But of course that also means my initial statement wasn't well-defined. I'll respond in the OP.
8JoshuaZ
The study itself isn't modelling all aspects of society, just a very limited set of PD situations. That society has on the whole benefited from intelligence is due primarily to inventions and discoveries, which have no analog in the PD. Maybe if one had a version where the more previous rounds of cooperation there have been, the higher the payoff of cooperation in future rounds, one might have something that approached that.
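
For concreteness, here's a minimal sketch of that variant: an iterated Prisoner's Dilemma in which the payoff for mutual cooperation grows with the number of mutually cooperative rounds so far, a crude stand-in for accumulated inventions and discoveries. The payoff numbers and strategies are invented for illustration.

```python
def play(strategy_a, strategy_b, rounds=10):
    """Iterated PD where mutual cooperation pays more as cooperation accumulates."""
    history_a, history_b = [], []
    score_a = score_b = 0
    cooperative_rounds = 0  # rounds so far in which both players cooperated
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        bonus = cooperative_rounds  # cooperation becomes more valuable over time
        if move_a == "C" and move_b == "C":
            score_a += 3 + bonus
            score_b += 3 + bonus
            cooperative_rounds += 1
        elif move_a == "C" and move_b == "D":
            score_b += 5
        elif move_a == "D" and move_b == "C":
            score_a += 5
        else:
            score_a += 1
            score_b += 1
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

always_cooperate = lambda own, other: "C"
always_defect = lambda own, other: "D"

print(play(always_cooperate, always_cooperate))  # (75, 75): 3+4+...+12 each
print(play(always_defect, always_cooperate))     # (50, 0): defection never compounds
```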

One way to get around the argument over semantics would be to replace "sound" by its definition.

...

Albert: "Hah! Definition 2c in Merriam-Webster: 'Sound: Mechanical radiant energy that is transmitted by longitudinal pressure waves in a material medium (as air).'"

Barry: "Hah! Definition 2b in Merriam-Webster: 'Sound: The sensation perceived by the sense of hearing.'"

Albert: "Since we cannot agree on the definition of sound and a third party might be confused if he listened to us, can you reformulate your question, re... (read more)

1ToddStark
"... replace "sound" by its definition." Yes, that's exactly what happens in a reasonable dialog, at the point where people realze they are thinking of the same thing in different ways. The trick is recognizing what that difference is so you can expand on it and compare. It happens fairly quickly and easily in most cases when both people are mostly focused on inquiry. If they are arguing their own position, they are unlikely to be looking for the difference, they are probably looking for ways to deconstruct the other person's terms and find fallacies in their logic or problems with their evidence. They will resort to arguing for their own definitions. When you end up in a game of duelling definitions, one valuable strategy is to ask the purpose of the definition. It serves a rhetorical purpose to use one definition vs. another in an explanation or question. If emphasizes different things. This is an important pragmatist principle coming from the slant that words are tools for thinking. Ex: Q: Why bring the perceiver into the picture when talking about sound? What purpose does that serve? A: The reason I define sound as something perceived is to distinguish the dark, silent physical world of wavelengths and vibrations and strings from the one constructed in human experience to operate on the world. I care about the human experience, not what is going on with atoms. This exposes a great deal of the relevant conceptual background and current focus of each person so you know what they are arguing about and might be able to either collaborate more effectively, learn something from each other, or else identify that you aren't talking about the same thing at all. Rather than just fighting over which definition is better.

Isn't it implied that sub-human intelligence is not designed to be self-modifying given that monkeys don't know how to program? What exactly do you mean by "we were not designed explicitly to be self-modifying"?

0Vulture
My understanding was that in your comment you basically said that our current inability to modify ourselves is evidence that an AGI of human-level intelligence would likewise be unable to self-modify.

Human-level intelligence is unable to improve itself at the moment (it's not even able to recreate itself, if we exclude reproduction). I don't think monkey-level intelligence will be any more able to do so. I agree that the SIAI scenario is way overblown, at least until we have created an intelligence vastly superior to the human one.

4Vulture
Uh... I think the fact that humans aren't cognitively self-modifying (yet!) doesn't have to do with our intelligence level so much as the fact that we were not designed explicitly to be self-modifying, as the SIAI is assuming any AGI would be. I don't really know enough about AI to know whether or not this is strictly necessary for a decent AGI, but I get the impression that most (or all) serious would-be-AGI-builders are aiming for self-modification.

You mean explicitly base their everyday beliefs and decisions on Bayesian probability? That strikes me as highly impractical... Could you give some specific examples?

4Nornagest
As best I can tell it is impractical as an actual decision-making procedure for more complex cases, at least assuming well-formalized priors. As a limit to be asymptotically approached it seems sound, though -- and that's probably the best we can do on our hardware anyway.
0Bugmaster
I thought I could, but Yvain kind of took the wind out of my sails with his post that Nornagest linked to, above. That said, Eliezer does outline his vision of using Bayesian rationality in daily life here, and in that whole sequence of posts in general.
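
As a concrete (if toy) example of what a single explicit update looks like, here is the rain/clouds case with made-up numbers; the practical objection above is about doing this for every belief all day, not about any one calculation:

```python
# Updating the probability of rain after seeing dark clouds (numbers invented).
prior_rain = 0.20               # P(rain) before looking at the sky
p_clouds_given_rain = 0.80      # P(dark clouds | rain)
p_clouds_given_no_rain = 0.25   # P(dark clouds | no rain)

# Bayes' theorem: P(rain | clouds) = P(clouds | rain) * P(rain) / P(clouds)
p_clouds = (p_clouds_given_rain * prior_rain
            + p_clouds_given_no_rain * (1 - prior_rain))
posterior_rain = p_clouds_given_rain * prior_rain / p_clouds

print(f"P(rain | dark clouds) = {posterior_rain:.2f}")  # ~0.44
```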

I understand your concern, but at this point we're not even near monkey-level intelligence, so when I get to 5-year-old human-level intelligence I think it'll be legitimate to start worrying. I don't think greater-than-human AI will happen all of a sudden.

0Bugmaster
The SIAI folks would say that your reasoning is exactly the kind of reasoning that leads to all of us being converted into computronium one day. More specifically, they would claim that, if you program an AI to improve itself recursively -- i.e., to rewrite its own code, and possibly rebuild its own hardware, in order to become smarter and smarter -- then its intelligence will grow exponentially, until it becomes smart enough to easily outsmart everyone on the planet. It would go from "monkey" to "quasi-godlike" very quickly, potentially so quickly that you won't even notice it happening. FWIW, I personally am not convinced that this scenario is even possible, and I think that SIAI's worries are way overblown, but that's just my personal opinion.

Before I get more involved here, could someone explain to me what the following are:

1) x-rationality (extreme rationality)
2) a rationalist
3) a bayesian rationalist

(I know what rationalism and Bayes' theorem are, but I'm not sure what the terms above refer to in the context of LW)

3Bugmaster
As far as I understand, a "Bayesian Rationalist" is someone who bases their beliefs (and thus decisions) on Bayesian probability, as opposed to ye olde frequentist probability. An X-rationalist is someone who embraces both epistemic and instrumental rationality (the Bayesian kind) in order to optimize every aspect of his life.
6Nornagest
In the context of LW, all those terms are pretty closely related unless some more specific context makes it clear that they're not. X-rationality is a term coined to distinguish the LW methodology (which is too complicated to describe in a paragraph, but the tagline on the front page does a decent job) from rationality in the colloquial sense, which is a much fuzzier set of concepts; when someone talks about "rationality" here, though, they usually mean the former and not the latter. This is the post where the term originates, I believe. A "rationalist" as commonly used in LW is one who pursues (and ideally attempts to improve on) some approximation of LW methodology. "Aspiring rationalist" seems to be the preferred term among some segments of the userbase, but it hasn't achieved fixation yet. Personally, I try to avoid both. A "Bayesian rationalist" is simply a LW-style rationalist as defined above, but the qualification usually indicates that some contrast is intended. A contrast with rationalism in the philosophical sense is probably the most likely; that's quite different, and in some ways mutually exclusive with, LW epistemology, which is generally closer to philosophical empiricism.

Hi all! I have been lurking on LW for a few months (years?). I believe I was first introduced to LW through some posts on Hacker News (http://news.ycombinator.com/user?id=olalonde). I've always considered myself pretty good at rationality (is there a difference with being a rationalist?) and I've always been an atheist/reductionist. I recently (4 years ago?) converted to libertarianism (blame Milton Friedman). I was raised by 2 atheist doctors (as in PhD). I'm a software engineer and I'm mostly interested in the technical aspect of achieving AGI. Since I was ...

0Bugmaster
Most people here would probably tell you to immediately stop your work on AGI, until you can be reasonably sure that your AGI, once you build and activate it, would be safe. As far as I understand, the mission of SIAI (the people who host this site) is to prevent the rise of un-Friendly AGI, not to actually build one. I could be wrong though, and I may be inadvertently caricaturing their position, so take my words with a grain of salt.