In response to Growing Up is Hard
Comment author: Robin_Hanson2 04 January 2009 01:40:04PM 2 points

It seems pretty obvious that time-scaling should work - just speed up the operation of all parts in the same proportion. A good bet is probably size-scaling, adding more parts (e.g. neurons) in the same proportion in each place, and then searching in the space of different relative sizes of each place. Clearly evolution was constrained in the speed of components and in the number of parts, so there is no obvious evolutionary reason to think such changes would not be functional.

In response to Dunbar's Function
Comment author: Robin_Hanson2 01 January 2009 04:01:25PM 0 points

Yeah Michael, what Eliezer said.

In response to Dunbar's Function
Comment author: Robin_Hanson2 31 December 2008 08:26:04PM 0 points

Even if Earth ends in a century, virtually everyone in today's world is influential in absolute terms. Even if 200 folks do the same sort of work in the same office, they don't do the exact same work, and usually a person wouldn't be there or be paid if no one thought their work made any difference. Even now you can identify your mark, but it is usually tedious to trace it out, and few have the patience for it.

In response to A New Day
Comment author: Robin_Hanson2 31 December 2008 07:32:57PM 2 points

I love it!

In response to Dunbar's Function
Comment author: Robin_Hanson2 31 December 2008 07:32:13PM -1 points

Virtually everyone in today's world is influential in absolute terms, and should be respected for their unique contribution. The problem is those eager to be substantially influential in percentage terms.

In response to Dunbar's Function
Comment author: Robin_Hanson2 31 December 2008 01:03:53PM 0 points

Yes, humans are better at dealing with groups of size 7 and 50, but I don't think that has much to do with your complaint. You are basically noticing that you would probably be the alpha male in a tribe of 50, ruling all you surveyed, and wouldn't that be cool. Or in a world of 5000 people you'd be one of the top 100, and everyone would know your name, and wouldn't that be cool. Even if we had better built-in tools for dealing with larger social groups, you'd still have to face the fact that, as a small creature in a vast social world, most such creatures can't expect to be very widely known or influential.

Comment author: Robin_Hanson2 30 December 2008 01:54:21AM 0 points

I agree with Phil; all else equal, I'd rather have whatever takes over be sentient. The moment to pause is when you make something that takes over, not so much when you wonder whether it should be sentient as well.

Comment author: Robin_Hanson2 29 December 2008 11:20:04PM 4 points

I agree with Unknown. It seems that Eliezer's intuitions about desirable futures differ greatly from those of many of the rest of us here at this blog, and most likely even more from the rest of humanity today. I see little evidence that we should explain this divergence as mainly due to his "having moved further toward reflective equilibrium." Without a reason to think he will have vastly disproportionate influence, I'm having trouble seeing much point in all these posts that simply state Eliezer's intuitions. It might be more interesting if he argued for those intuitions, engaging with existing relevant literatures, such as in moral philosophy. But what is the point of just hearing his wish lists?

Comment author: Robin_Hanson2 28 December 2008 09:41:54PM 5 points

Most of our choices have this sort of impact, just on a smaller scale. If you contribute a real child to the continuing genetic evolution process, if you contribute media articles that influence future perceptions, if you contribute techs that change future society, you are in effect adding to and changing the sorts of people there are and what they value, and doing so in ways you largely don't understand.

A lot of futurists seem to come to a similar point, where they see themselves on a runaway freight train: no one is in control, no one knows where we are going, and no one even knows much about how any particular track switch would change where we end up. They then suggest that we please, please slow all this change down so we can stop and think. But that doesn't seem a remotely likely scenario to me.

Comment author: Robin_Hanson2 27 December 2008 02:10:35PM 4 points

You've already said the friendly AI problem is terribly hard, and there's a large chance we'll fail to solve it in time. Why then do you keep adding these extra minor conditions on what it means to be "friendly", making your design task all that harder? A friendly AI that was conscious and created conscious simulations to figure things out would still be *pretty* friendly overall.
