In response to Failed Utopia #4-2
Comment author: Will_Pearson 22 January 2009 12:39:19AM 1 point [-]

Dognab, your arguments apply equally well to any planner. Planners have to consider the possible futures and pick the best one (using a predicate of some form), and if you give them infinite horizons they may have trouble. Consider a paper clip maximizer: every second it fails to use its full ability to paper-clip things in its vicinity, it is losing possible useful paper-clipping energy to entropy (solar fusion etc.). However, if it sits and thinks for a bit, it might discover a way to hop between galaxies with minimal energy. So what decision should it make? Obviously it would want to run some simulations to see if there are gaps in its knowledge. But how detailed should those simulations be before it can be sure it has ruled out the galaxy-hopping path?
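The trade-off above can be made concrete with a toy calculation. All the numbers and parameter names here are invented for illustration; the comment's point is precisely that the maximizer cannot know `p_breakthrough` without first deciding how hard to simulate.

```python
# Hypothetical numbers illustrating the act-now vs. deliberate trade-off.
# None of these figures come from the original comment.

def expected_clips(act_now_rate, horizon, think_time, p_breakthrough, breakthrough_rate):
    """Compare total paperclips from acting immediately versus pausing to
    simulate (e.g. searching for the galaxy-hopping path)."""
    act_now = act_now_rate * horizon
    # Thinking costs think_time of clipping, but may unlock a better rate.
    expected_rate = p_breakthrough * breakthrough_rate + (1 - p_breakthrough) * act_now_rate
    deliberate = expected_rate * (horizon - think_time)
    return act_now, deliberate

now, later = expected_clips(act_now_rate=1.0, horizon=100, think_time=10,
                            p_breakthrough=0.2, breakthrough_rate=5.0)
# With these made-up numbers, deliberating wins (162 vs. 100 expected clips),
# but only because we assumed a value for p_breakthrough from outside.
```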

I'll admit I was abusing the genie trope somewhat. But then I am sceptical of FOOMing anyway, so when asked to think about genies/utopias, I tend to suspend all disbelief about what can be done.

Oh, and Belldandy is not annoying because she has broken down in tears (perfectly natural), but because she bases her happiness too much on what Stephen Grass thinks of her. A perfect mate for me would tell me straight what was going on, and if I hated her for it (when it was not her fault at all), she'd find someone else, because I'm not worth falling in love with. I'd want someone with standards for me to meet, not unconditional creepy fawning.

In response to Failed Utopia #4-2
Comment author: Will_Pearson 21 January 2009 04:04:39PM 2 points [-]

Bogdan Butnaru:

What I meant is that the AI would keep inside it a predicate Will_Pearson_would_regret_wish (based on what I would regret), and apply that to the universes it envisages while planning. A metaphor for what I mean is the AI telling a virtual copy of me all the stories of the future, from various viewpoints, and the virtual me not regretting the wish. Of course, I would expect it to be able to distill a non-sentient version of the regret predicate.

So if it invented a scenario in which it killed the real me, the predicate would still exist and say false. It would be able to predict this, and so would not carry out that plan.
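A minimal sketch of this planning loop, with the predicate returning false for futures the wisher would not endorse, as in the killing scenario above. The plans, the `envisage` model, and the predicate itself are all invented stand-ins for the distilled, non-sentient predicate the comment imagines.

```python
# Sketch of a "no-regret predicate" planning filter. Everything here is a
# made-up illustration, not a proposed implementation.

def choose_plan(plans, envisage, no_regret):
    """Return the highest-value plan whose envisaged future the predicate
    accepts; None if every plan is rejected."""
    acceptable = [p for p in plans if no_regret(envisage(p))]
    return max(acceptable, key=lambda p: p["value"], default=None)

plans = [
    {"name": "kill_the_real_me", "value": 10},
    {"name": "modest_improvement", "value": 5},
]
envisage = lambda p: {"wisher_killed": p["name"] == "kill_the_real_me"}
no_regret = lambda future: not future["wisher_killed"]

best = choose_plan(plans, envisage, no_regret)
# The higher-value plan is rejected because the predicate says false for its
# envisaged future, so the planner falls back to the modest one.
```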

If you want to, generalize to humanity. This is not quite the same as CEV, as the AI is not trying to figure out what we would want if we were smarter, but what we don't want while we are dumb. Call it coherent no-regret, if you wish.

CNR might be equivalent to CEV if humanity wishes not to feel regret in the future over the choice. That is, if we would regret being in a future where people regret the decision, even though current people wouldn't.

In response to Failed Utopia #4-2
Comment author: Will_Pearson 21 January 2009 12:30:20PM 1 point [-]

I don't believe in trying to make utopias, but in the interest of rounding out your failed-utopia series, how about giving a scenario against this wish?

I wish that the future will turn out in such a way that I do not regret making this wish, where "I" is the entity standing here right now, informed about the many different aspects of the future, in parallel if need be (i.e. if I am not capable of grokking it fully, then many versions of me would be focused on different parts, in order to understand each subpart).

I'm reminded by this story that while we may share large parts of our psychology, what makes a mate's personality attractive is not universal. I found the catgirl very annoying.

In response to Building Weirdtopia
Comment author: Will_Pearson 13 January 2009 09:08:24PM 3 points [-]

Personally, I don't find the scientific weirdtopia strangely appealing. For me, finding knowledge is about sharing it later.

Utopia originally meant no-place; I have a hard time forgetting that meaning when people talk about them.

I'd personally prefer to work towards negated dystopias, which is not necessarily the same thing as working towards Utopia, depending on how broad your class of dystopias is. For example, rather than trying to maximise Fun, I would want to minimize the chance that humanity and all its works are lost to extinction. If there is time and energy to devote to Fun while humanity survives, then people can figure that out for themselves.

In response to Growing Up is Hard
Comment author: Will_Pearson 04 January 2009 03:55:35PM 0 points [-]

Time scaling is not unproblematic. We don't have a single clock in the brain; clocks must be approximated by neurons and neural firing. Speeding up the clocks may affect the ability to learn from the real world (if we have a certain time window for associating stimuli).

We might be able to adapt, but I wouldn't expect it to be straightforward.

Comment author: Will_Pearson 02 January 2009 11:26:22PM 0 points [-]

A random utility function will do fine, iff the agent has perfect knowledge.

Imagine, if you will, a stabber: something that wants to turn the world into things that have been stabbed. If it knows that stabbing itself will kill it, it will know to stab itself last. If it doesn't know that stabbing itself will leave it no longer able to stab things, then it may not do well at actually achieving its stabbing goal, because it stabs itself too early.
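A toy version of the stabber, showing how the same utility function (count of stabbed things) yields different outcomes depending on whether the agent's world model knows that stabbing itself ends the run. All details here are invented for illustration.

```python
# Toy "stabber" agent: utility is the number of things stabbed. Whether it
# stabs itself last depends entirely on its self-knowledge, not its utility.

def run_stabber(targets, knows_self_is_fatal):
    """Stab targets in order; a knowledgeable stabber defers itself to last."""
    if knows_self_is_fatal:
        # Stable sort pushes "self" to the end, preserving the rest of the order.
        order = sorted(targets, key=lambda t: t == "self")
    else:
        order = list(targets)
    stabbed = 0
    for t in order:
        stabbed += 1
        if t == "self":
            break  # stabbing itself ends its ability to stab
    return stabbed

targets = ["self", "fence", "haystack"]
with_model = run_stabber(targets, knows_self_is_fatal=True)      # stabs all 3
without_model = run_stabber(targets, knows_self_is_fatal=False)  # stabs only itself
```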

Comment author: Will_Pearson 28 December 2008 11:19:45PM 0 points [-]

I'd agree with the sentiment of this post. I'm interested in building artificial brain stuff more than in building Artificial People; that is, a computational substrate that allows the range of purpose-oriented adaptation shown in the brain, but with different modalities. Not neurally based, though, because simulating neural systems on hardware where processing and memory are split defeats most of their point for me.

In response to Nonperson Predicates
Comment author: Will_Pearson 27 December 2008 10:02:59AM 1 point [-]

Don't you need a person predicate as well? If the RPOP is going to upload us all, or something similar, doesn't ve need to be sure that the uploads will still be people?

Comment author: Will_Pearson 22 December 2008 03:08:40PM 0 points [-]

I suspect the knowledge you get from reading someone's writings is very different from the knowledge you get from working with them, or from being taught by them. When you work or learn closely with someone, they can see your reasoning processes and correct them at the right point, while they are still newly formed and not too ingrained. Otherwise it relies too much on luck. When in someone's intellectual career should they read OB? Too early and it won't mean much, as they lack the necessary background; too late and they will be inured against it (assuming it is the right way to go!).

Autodidacts are going to be most intellectually useful when you need to break new ground and the methodologies of the past aren't suited to the problems that need to be solved.

Comment author: Will_Pearson 13 December 2008 05:27:00PM 0 points [-]

Are you saying "snakes are often deadly poisonous to humans" is an instrumental value?

I'd agree that dying is bad, and that one should therefore avoid deadly poisonous things. But I still don't see that snakes come with little XML tags saying "keep away, might be harmful"... I don't see that as a value of any sort.
