How good an emulation is required?

8 whpearson 14 August 2011 01:15PM

Reading this article on the processing power required to emulate the SNES accurately made me think that we will likely face similar issues when emulating humans.

I'd imagine the brain uses weird timing and chemical interactions: it is an adaptable system and might adapt to exploit them if they turn out to be helpful.

This suggested to me a few issues with no easy answers that I could see.

  • Is it better to emulate 1 human faithfully, or 10 humans with occasional glitches (for example, one who could no longer appreciate music in the same way)?
  • How glitch-free would you want the emulation to be before you gave up your body?
  • How glitch-free would you want the emulation to be before letting it use heavy machinery?
  • How glitch-free would you want the emulation to be before you had it working on FAI?
Also, please ignore the 3 GHz vs 25 MHz comparison; it perpetuates the myth that computational power is about clock speed rather than operations per second and memory bandwidth.
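That last point can be made concrete with a toy throughput model. All the figures below are invented for illustration (they are not taken from real hardware), but they show why comparing raw clock speeds is misleading:

```python
# Rough model of peak throughput: useful work per second depends on
# clock speed, work per cycle, and core count, not clock alone.
# All figures below are hypothetical and purely illustrative.

def ops_per_second(clock_hz, ops_per_cycle, cores=1):
    """Peak operations per second for a simple throughput model."""
    return clock_hz * ops_per_cycle * cores

# A hypothetical 25 MHz chip with wide special-purpose hardware
# retiring 64 operations every cycle...
special_purpose = ops_per_second(25e6, ops_per_cycle=64)

# ...versus a hypothetical 3 GHz general-purpose core emulating the
# same work at 1 operation per cycle.
general_purpose = ops_per_second(3e9, ops_per_cycle=1)

print(f"{special_purpose:.2e} vs {general_purpose:.2e}")
# A 120x clock advantage yields under 2x the effective throughput here,
# before memory bandwidth is even considered.
```

Under these made-up numbers, the 120x difference in clock speed shrinks to less than a factor of two in operations per second, and a memory-bound workload could erase even that.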

The danger of wishful thinking

-2 whpearson 28 July 2011 11:24PM

Or "The problems inherent in making a goal maximiser with a changing world model."

No paper clips were created or destroyed in the making of this script.

*This is an experimental post to try and get this point across. I'll write something similar for the type of systems I would like to explore, if this goes down well.*

Goal maximisers are great when you have a fixed ontology and only limited ways of getting information about the world. Neither is the case in AGI. Remember that the map is not the territory, and the map is all the utility maximiser can look at when deciding the utility of future actions.

TL;DR: You can't have a utility maximiser choose how to alter the world model, or how the world model should progress, if the utility is derived from that world model. If you have something else derive the world model, it will conflict with the utility maximiser over resources and over what to do in the world. Some method of resolving these conflicts is necessary, which means we must go beyond standard model-based utility maximisers.

continue reading »

Is g a measure of ability to absorb information in a non-inductive way?

-3 whpearson 05 July 2011 01:46AM

Eliezer and Robin discussed g somewhat in their debate. I think this is a question we can do some more research on ourselves. The current hypothesis I'm exploring is that g measures the ability to take in information non-inductively; this includes gossip, culture, and taught skills.

continue reading »

Drive-less AIs and experimentation

4 whpearson 17 June 2011 02:33PM

One of the things I've been thinking about is how to safely explore the nature of intelligence. I'm unconvinced of FOOMing, and would rather we didn't avoid AI entirely if we can't solve Yudkowsky-style Friendliness. So some method of experimentation is needed to determine how powerful intelligence actually is.

continue reading »

Angles of Attack

5 whpearson 03 February 2011 06:53PM

For humans, problems can seem intractable for a long time and then suddenly become easy. Forming a coherent chemistry took a very long time until Lavoisier thought to look at the mass of reactants and products, and we had a periodic table about 100 years after that. Compare this with the scant progress made in roughly 2000 years of bouncing around in alchemy.

So identifying and understanding angles of attack is important for tackling the thorny problems that face us today.

continue reading »

Rational Terrorism or Why shouldn't we burn down tobacco fields?

-2 whpearson 02 October 2010 02:51PM

Related: Taking ideas seriously

Let us say hypothetically you care about stopping people smoking. 

You were going to donate $1000 to a GiveWell charity to save a life, but then you learn about an anti-tobacco campaign that is better. So you choose to donate the $1000 to a campaign to stop people smoking instead of donating it to a GiveWell charity to save an African's life. You justify this by expecting more people to live as a result of having stopped smoking (this probably isn't true, but assume it for the sake of argument).

The consequence of donating to the anti-smoking campaign is that 1 person dies in Africa and 20 people around the world who would otherwise have died now live.

Now you also have the choice of setting fire to many tobacco plantations: you estimate that the increased cost of cigarettes would save 20 lives, but the fires will likely kill 1 guard. You are very intelligent, so you think you can get away with it; there would be no consequences to you for this action. You don't care much about the scorched earth or the loss of profits.

If there are causes with payoff matrices like this, then it seems like a real-world instance of the trolley problem. We are willing to cause loss of life through inaction to achieve our goals, but not through action.
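For what it's worth, the payoff comparison can be written out as a tiny expected-value sketch. It uses only the post's hypothetical figures and shows that a naive calculation cannot see the act/omission distinction at all:

```python
# Naive expected-value bookkeeping for the hypothetical interventions.
# All numbers are the post's assumptions, not real statistics.

def net_lives(saved, killed):
    """Net lives saved, ignoring the act/omission distinction."""
    return saved - killed

donation = net_lives(saved=20, killed=1)  # 1 African death by omission
arson = net_lives(saved=20, killed=1)     # 1 guard death by action
baseline = net_lives(saved=1, killed=0)   # the original GiveWell donation

print(donation, arson, baseline)  # 19 19 1
# Pure expected value scores the donation and the arson identically;
# the trolley-problem intuition is precisely that this tie feels wrong.
```

The two interventions come out tied at +19 net lives, which is the crux: the arithmetic treats a death by omission and a death by action as the same entry in the ledger.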

What should you do?

Killing someone is generally wrong, but you are causing someone's death in both cases. You either need to justify that leaving someone to die is ethically not the same as killing someone, or inure yourself to the fact that when you choose to spend $1000 in a way that doesn't save a life, you are killing. Or ignore the whole thing.

This just puts me off being utilitarian, to be honest.

Edit: To clarify, I am an easy-going person; I don't like making life and death decisions. I would rather live and laugh, without worrying about things too much.

This confluence of ideas made me realise that we are making life and death decisions every time we spend $1000 dollars. I'm not sure where I will go from here.

Brain storm: What is the theory behind a good political mechanism?

1 whpearson 29 September 2010 04:22PM

Patrissimo argues that we should try to design good mechanisms for governance rather than use the current broken mechanisms.

I agree; however, we don't have a theoretical framework we can use to evaluate the different systems that are proposed. Ideally we would be able to crunch some numbers and show that a Futarchy responds to the desires/needs of the populace better than "voting for politicians who then make decisions" or anything else we come up with.

continue reading »

Summer vs Winter Strategies

-3 whpearson 20 May 2010 12:31AM

Abstract: I have a hypothesis that there are two general life strategies that humans might switch between, predicated on how general resource availability changes in their society. If it is constant or increasing, one strategy pays off; if it predictably increases then decreases, another is better. These strategies would have been selected for at different times and environments in prehistory, but humans are mainly plastic in which strategy they adopt; culture reinforces them and can create lags. For value-neutral purposes I will name them after seasons: the Summer strategy and the Winter strategy. Summer is for times of plenty and partying; winter is for when resources regularly become scarcer and life becomes harsher. These strategies affect every part of society, from mating to the way people plan.

continue reading »

Crunchcourse - a tool for combating learning akrasia

11 whpearson 14 March 2010 10:53PM

Crunchcourse is a free website that might be of use to people trying to learn things outside the normal classroom setting. It aims to get together groups of people interested in the same topic and use our social instincts to motivate us to do the work.

It is in its early stages. If it proves its worth, it might be sensible to standardize on it as the place to learn the various prerequisites that LessWrong has.

Lesswrong UK planning thread

5 whpearson 24 January 2010 12:33AM

A few of us got together in the pub after the Friendly AI meet and agreed we should have a meetup for those of us familiar with LessWrong/Bostrom etc. This is a post for discussion of when and where.

continue reading »
