Comment author: Khoth 02 July 2012 06:15:04PM 5 points [-]

Something I've not been clear about (I think you might have changed your thinking about this):

Do you see your Malthusian upload future as something that we should work to avoid, or work to bring about?

Comment author: RobinHanson 03 July 2012 01:51:34AM 1 point [-]

People tend to assume they have more personal influence on these big far topics than they do. We might be able to make minor adjustments to what happens when, but we just don't get to choose between very different outcomes like uploads vs. AGI vs. nothing. We might work to make an upload world happen a bit sooner, or arrive via a somewhat more stable transition.

Comment author: moridinamael 02 July 2012 07:56:06PM 1 point [-]

I somewhat agree, or at least I agree with this more than with the assumption that the risk of Hanson's Malthusian upload scenario is worth more than a passing thought. Consider the conditions that have to exist during the technological window in which it is fast and cheap to reproduce and run uploaded humans, yet impossible to build a strongly superhuman AI which outcompetes any number of human uploads.

Anyway, the concept of "wealth" has already morphed beyond Hanson's definitions since the advent of mere online games. I'm not sure why this scenario keeps getting brought up as a real thing.

Comment author: RobinHanson 03 July 2012 01:45:57AM 4 points [-]

Online games do not invalidate the usual concept of wealth.

Comment author: RobinHanson 02 July 2012 05:44:55PM *  10 points [-]

Stuart, it sounds like you think that the life of the typical animal, and of the typical human in history, were not worth living -- you'd prefer that they had never existed. Since you seem to think your own life worth living, you must see people like you as a rare exception, and may be unsure if your existence justifies all the suffering your ancestors went through to produce you. And you'd naturally be wary of a future of descendants with lives more like your ancestors' than like your own. What you'd most want from the future is to stop change enough to ensure that people very much like you continue to dominate.

If we conceive of "death" broadly, then pretty much any competitive scenario will have lots of "death", if we look at it on a large enough scale. But this hardly implies that individuals will often feel the emotional terror of an impending death - that depends far more on framing and psychology.

Comment author: timtyler 05 October 2011 11:43:16AM *  4 points [-]

I do not agree with Robin's argument that:

When the parts of large systems evolve independently, to adapt to differing local circumstances, their values may also evolve independently.

Light travels fast, and information is cheap. The parts of large systems seem unlikely to "evolve independently" in the future - rather they will communicate and pool their knowledge, since they will have been built to do so.

In particular, colonised regions will do R&D and then beam energy and information to the settlers on the colonisation front - where it is needed - while the settlers will send back news of the environments they are encountering.

Comment author: RobinHanson 05 October 2011 01:39:24PM 1 point [-]

By "independently" I do not mean no interdependence. I mean not completely interdependent. You can't predict one perfectly just by looking at the others.

Comment author: RobinHanson 05 October 2011 11:35:57AM *  15 points [-]

I didn't mean to be harsh or confrontational. Surely it isn't a mere coincidence that in the context of an institute devoted to figuring out how to create a good singleton someone wrote a paper about how ems would result in singletons. That isn't psychology, but social context. I'd love for more folks to study future em society, even if they disagree with me.

Comment author: RobinHanson 24 August 2011 09:21:47PM 10 points [-]

I respond to this post at Overcoming Bias

Comment author: RobinHanson 25 February 2011 03:41:38PM 7 points [-]

I responded here

Comment author: Davorak 01 January 2011 11:39:09AM 3 points [-]

In Aumann's agreement theorem the two rationalists have common knowledge of the subject and share common priors. These two conditions do not hold in the common cases you are discussing. Two rationalists can agree that it is not worthwhile to take the time to make all pertinent knowledge common and to come to common priors. This can only happen with unimportant topics, because otherwise it would be worth spending more time on it; and it is still not agreeing to disagree.

If both rationalists assign high probability to very different answers (i.e. strongly disagree), it can indicate drastically different knowledge or priors, and in real life, where no one is perfectly rational, it often indicates a lack of rationality in one or both parties. In the latter case it is probably worth discussing even the unimportant topic, just to uncover and correct the irrational thought processes in both parties. So if two rationalists share a large common knowledge base and still strongly disagree, there is a higher probability the disagreement arises from irrationality on one or both parts, and it is therefore a good idea to discuss the topic further to isolate the irrationality and correct it.

On the other hand, if the two rationalists have very different knowledge bases, then it is likely their disagreement arises from those different knowledge bases and/or priors. Sharing the two knowledge bases could take a great deal of time and may not be worth the effort for an unimportant problem. If the two rationalists decide to walk away from the discussion without sharing knowledge, they should each discount their own conclusion in proportion to how rational they judge the other to be (both in logic and in the ability to curate their data and formulate priors, taking into account that it is harder to judge the other's irrationality due to the large difference in knowledge bases).

Comment author: RobinHanson 06 January 2011 03:19:41PM 3 points [-]

They have common knowledge of their disagreement, not of the subject! They need not share "all pertinent knowledge"!
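To make that distinction concrete, here is a minimal sketch, in Python, of the mechanism behind the theorem, in the style of Geanakoplos and Polemarchakis' "We can't disagree forever." The states, partitions, and numbers below are purely hypothetical illustrations, not anything from this thread: two agents share a common prior and know each other's information structure, but they exchange only their announced posteriors for a proposition, never the underlying evidence, and their posteriors still converge.

```python
from fractions import Fraction

# Nine equally likely states; the event and partitions are illustrative assumptions.
states = range(1, 10)
prior = {s: Fraction(1, 9) for s in states}
event = {3, 4}                     # the proposition the two agents disagree about

# Private information: each agent only learns which cell of their partition
# the true state falls in. The partitions themselves are common knowledge.
partition_a = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
partition_b = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]

def initial_info(partition):
    """Map each state to the partition cell that contains it."""
    return {s: frozenset(cell) for cell in partition for s in cell}

def post(info_set):
    """Posterior probability of `event` given an information set, under the common prior."""
    return sum(prior[s] for s in info_set & event) / sum(prior[s] for s in info_set)

def refine(listener, speaker):
    """After hearing the speaker's posterior, the listener keeps only those states
    at which the speaker would have announced that same number."""
    return {s: frozenset(t for t in listener[s]
                         if post(speaker[t]) == post(speaker[s]))
            for s in listener}

info_a, info_b = initial_info(partition_a), initial_info(partition_b)
true_state = 1

for round_no in range(1, 10):
    pa, pb = post(info_a[true_state]), post(info_b[true_state])
    print(f"round {round_no}: A says {pa}, B says {pb}")
    if pa == pb:
        break                          # agreement, with no evidence ever shared
    info_b = refine(info_b, info_a)    # B updates on A's announced posterior
    info_a = refine(info_a, info_b)    # A updates on B's revised announcement
```

In this toy run the agents start at 1/3 vs. 1/2 and end up agreeing at 1/3 after a couple of rounds of announcements, even though neither ever reveals which partition cell they observed; all that was shared was the disagreement itself.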

Comment author: RobinHanson 22 November 2010 03:58:13AM 3 points [-]

"Most peoples' beliefs aren’t worth considering ... dropping the habit of seriously considering all others’ improper beliefs that don’t tell me what to anticipate and are only there for sounding interesting or smart."

Seems you assume that most people's beliefs are "improper." Did LW offer you evidence for that conclusion? And don't you also need to assume you have a way to generate beliefs that is substantially better at avoiding the desire to sound interesting or smart?
