Comment author: moridinamael 16 September 2015 08:48:46PM *  0 points [-]

I like this because it's something to point to when arguing with somebody with an obvious bias toward anthropomorphizing the agents.

You show them a model like this, then you say, "Oh, the agent can reduce its movement penalty if it first consumes this other orange glowing box. The orange glowing box in this case is 'humanity' but the agent doesn't care."

edit: Don't normally care about downvotes, but my model of LW does not predict 4 downvotes for this post, am I missing something?

Comment author: ESRogs 18 September 2015 12:07:03AM 0 points [-]

I was also surprised to see your comment downvoted.

That said, I don't think I see the value of the thing you proposed saying, since the framing of reducing the movement penalty by consuming an orange box which represents humanity doesn't seem clarifying.

Why does consuming the box reduce the movement penalty? Is it because, outside of the analogy, in reality humanity could slow down or get in the way of the AI? Then why not just say that?

I wouldn't have given you a downvote for it, but maybe others also thought your analogy seemed forced and are just harsher critics than I.

Comment author: Yvain 17 September 2015 05:33:50AM *  14 points [-]

I don't know if this solves very much. As you say, if we use the number 1, then we shouldn't wear seatbelts, get fire insurance, or eat healthy to avoid getting cancer, since all of those can be classified as Pascal's Muggings. But if we start going for less than one, then we're just defining away Pascal's Mugging by fiat, saying "this is the level at which I am willing to stop worrying about this".

Also, as some people elsewhere in the comments have pointed out, this makes probability non-additive in an awkward sort of way. Suppose that if you eat unhealthy, you increase your risk of each of one million different diseases by one in a million. Suppose also that eating healthy is a mildly unpleasant sacrifice, but getting a disease is much worse. If we calculate this out disease-by-disease, each disease is a Pascal's Mugging and we should choose to eat unhealthy. But if we calculate this out in the broad category of "getting some disease or other", then our chances are quite high and we should eat healthy. But it's very strange that our ontology/categorization scheme should affect our decision-making. This becomes much more dangerous when we start talking about AIs.
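To put toy numbers on the aggregation problem (the per-disease probability and the ignore-it threshold below are both invented for illustration):

```python
p_disease = 1e-6        # one-in-a-million risk increase per disease
n_diseases = 1_000_000

# Treated disease-by-disease, each risk falls below an illustrative
# "small enough to ignore" threshold.
pest_threshold = 1e-4
assert p_disease < pest_threshold

# Treated as one category, the risk is anything but negligible:
p_any_disease = 1 - (1 - p_disease) ** n_diseases
print(round(p_any_disease, 3))  # ~0.632, i.e. about 1 - 1/e
```

The same risks, carved up one way, all get rounded to zero; carved up another way, they dominate the decision.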

Also, does this create weird nonlinear thresholds? For example, suppose that you live on average 80 years. If some event which causes you near-infinite disutility happens every 80.01 years, you should ignore it; if it happens every 79.99 years, then preventing it becomes the entire focus of your existence. But it seems nonsensical for your behavior to change so drastically based on whether an event is every 79.99 years or every 80.01 years.

Also, a world where people follow this plan is a world where I make a killing on the Inverse Lottery (rules: 10,000 people take tickets; each ticket holder gets paid $1, except a randomly chosen "winner" who must pay $20,000).
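The Inverse Lottery's expected value takes one line to compute; under a rule that simply discards sufficiently small probabilities, each ticket holder sees only the near-certain $1 and ignores the 1-in-10,000 chance of paying out:

```python
n = 10_000       # ticket holders
payout = 1       # dollars paid to each non-"winner"
loss = 20_000    # dollars the "winner" must pay

# True expected value per ticket: slightly negative.
ev = ((n - 1) * payout - loss) / n
print(ev)  # -1.0001

# A probability-discarding ticket holder "sees" a sure +$1,
# while the organizer pockets the difference each round:
organizer_profit = loss - (n - 1) * payout
print(organizer_profit)  # 10001
```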

Comment author: ESRogs 17 September 2015 11:44:46PM 0 points [-]

if we use the number 1, then we shouldn't wear seatbelts, get fire insurance, or eat healthy to avoid getting cancer, since all of those can be classified as Pascal's Muggings

Isn't this dealt with in the above by aggregating all the deals of a certain probability together?

(number of deals that you can make in your life that have this probability) * (PEST) < 1

Maybe the expected number of major car crashes, dangerous fires, etc. that you experience is less than 1 for each category, but the expected number of all such events combined might be greater than 1.

There might be issues with how to group such events though, since only considering things with the exact same probability together doesn't make sense.

[LINK] Deep Learning Machine Teaches Itself Chess in 72 Hours

8 ESRogs 14 September 2015 07:38PM

Lai has created an artificial intelligence machine called Giraffe that has taught itself to play chess by evaluating positions much more like humans and in an entirely different way to conventional chess engines.

Straight out of the box, the new machine plays at the same level as the best conventional chess engines, many of which have been fine-tuned over many years. On a human level, it is equivalent to FIDE International Master status, placing it within the top 2.2 percent of tournament chess players.

The technology behind Lai’s new machine is a neural network. [...] His network consists of four layers that together examine each position on the board in three different ways.

The first looks at the global state of the game, such as the number and type of pieces on each side, which side is to move, castling rights and so on. The second looks at piece-centric features such as the location of each piece on each side, while the final aspect is to map the squares that each piece attacks and defends.
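To make the three feature groups concrete, here is a toy sketch of a network whose first layer processes each view separately before combining them. All layer sizes, shapes, and weights below are invented for illustration; this is not Lai's actual architecture or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented feature-group sizes -- the article only says the network
# looks at the position in three different ways.
N_GLOBAL, N_PIECE, N_SQUARE = 17, 218, 128
HIDDEN = 64

def relu(x):
    return np.maximum(x, 0.0)

# One weight matrix per feature group in the first layer, so each
# view of the board is transformed separately before being merged.
W_global = rng.normal(size=(N_GLOBAL, HIDDEN)) * 0.01
W_piece = rng.normal(size=(N_PIECE, HIDDEN)) * 0.01
W_square = rng.normal(size=(N_SQUARE, HIDDEN)) * 0.01
W_merge = rng.normal(size=(3 * HIDDEN, HIDDEN)) * 0.01
W_out = rng.normal(size=(HIDDEN, 1)) * 0.01

def evaluate(global_feats, piece_feats, square_feats):
    """Return a scalar position score from the three feature views."""
    h = np.concatenate([relu(global_feats @ W_global),
                        relu(piece_feats @ W_piece),
                        relu(square_feats @ W_square)])
    return float(relu(h @ W_merge) @ W_out)
```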

[...]

Lai generated his dataset by randomly choosing five million positions from a database of computer chess games. He then created greater variety by adding a random legal move to each position before using it for training. In total he generated 175 million positions in this way.
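The expansion step is simple to sketch. Here is a minimal stand-in (a toy move generator and toy positions, not Lai's code or a real chess library) showing the idea of growing a position set by applying random legal moves; note that 35 variants per position mirrors the article's 5 million to 175 million ratio:

```python
import random

def perturb_positions(positions, legal_moves, apply_move, per_position=35):
    """Expand a dataset by applying a random legal move to each position.

    `legal_moves` and `apply_move` stand in for a real chess engine's
    move generator and board-update routine.
    """
    random.seed(0)  # reproducible for the example
    expanded = []
    for pos in positions:
        for _ in range(per_position):
            move = random.choice(legal_moves(pos))
            expanded.append(apply_move(pos, move))
    return expanded

# Toy stand-ins: a "position" is an integer, a "move" adds an offset.
positions = list(range(5))
legal_moves = lambda pos: [1, 2, 3]
apply_move = lambda pos, move: pos + move

out = perturb_positions(positions, legal_moves, apply_move)
print(len(out))  # 5 positions * 35 perturbations each = 175
```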

[...]

One disadvantage of Giraffe is that neural networks are much slower than other types of data processing. Lai says Giraffe takes about 10 times longer than a conventional chess engine to search the same number of positions.

But even with this disadvantage, it is competitive. “Giraffe is able to play at the level of an FIDE International Master on a modern mainstream PC,” says Lai. By comparison, the top engines play at super-Grandmaster level.

[...]

Ref: arxiv.org/abs/1509.01549 : Giraffe: Using Deep Reinforcement Learning to Play Chess

http://www.technologyreview.com/view/541276/deep-learning-machine-teaches-itself-chess-in-72-hours-plays-at-international-master/


H/T http://lesswrong.com/user/Qiaochu_Yuan

Comment author: gwern 26 July 2015 03:26:00PM *  6 points [-]

So the head of BGI, famous for extremely ambitious & expensive genetics projects which are a Chinese national flagship, is stepping down to work on AI because genetics is just too boring these days: http://www.nature.com/news/visionary-leader-of-china-s-genomics-powerhouse-steps-down-1.18059

I haven't been following estimates lately, but how much do people think it would cost in GPUs to approximate a human brain at this point given all the GPU performance leaps lately? I note that deep learning researchers seem to be training networks with up to 10b parameters using a 4 GPU setup costing, IIRC, <$10k, and given the memory improvements NVIDIA & AMD are working on, we can expect continued hardware improvements for at least another year or two.
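For a crude sense of scale using only the numbers in this comment, naively equating one network parameter with one synapse (a very questionable proxy that ignores speed, memory bandwidth, and whether parameter count even matters):

```python
brain_synapses = 1e14      # rough human synapse count
params_per_box = 1e10      # ~10b parameters on a 4-GPU setup, per comment
cost_per_box = 10_000      # dollars, per comment (IIRC figure)

boxes = brain_synapses / params_per_box
cost = boxes * cost_per_box
print(f"{boxes:.0f} boxes, ~${cost:,.0f}")  # 10000 boxes, ~$100,000,000
```

A back-of-envelope figure only; it says nothing about whether such a cluster could be trained or run at anything like brain speed.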

(Schmidhuber's group is also now training networks with 100 layers using their new 'highway network' design; I have to wonder if that has anything to do with Schmidhuber's new NNAISENSE startup, beyond just Deepmind envy... EDIT: probably not if it was founded in September 2014 and the first highway network paper was pushed to arxiv in May 2015, unless Schmidhuber et al set it up to clear the way for commercializing their next innovation and highway networks is it.)

Comment author: ESRogs 20 August 2015 05:05:10AM 0 points [-]

From a very uninformed perspective, this looks like an area of science where China is leading the way. Can anyone more informed comment on whether that is accurate, and whether there are other areas in which China leads?

[Link] First almost fully-formed human [foetus] brain grown in lab, researchers claim

7 ESRogs 19 August 2015 06:37AM

This seems significant:

An almost fully-formed human brain has been grown in a lab for the first time, claim scientists from Ohio State University. The team behind the feat hope the brain could transform our understanding of neurological disease.

Though not conscious, the miniature brain, which resembles that of a five-week-old foetus, could potentially be useful for scientists who want to study the progression of developmental diseases.

...

The brain, which is about the size of a pencil eraser, is engineered from adult human skin cells and is the most complete human brain model yet developed.

...

Previous attempts at growing whole brains have at best achieved mini-organs that resemble those of nine-week-old foetuses, although these “cerebral organoids” were not complete and only contained certain aspects of the brain. “We have grown the entire brain from the get-go,” said Anand.

...

The ethical concerns were non-existent, said Anand. “We don’t have any sensory stimuli entering the brain. This brain is not thinking in any way.”

...

If the team’s claims prove true, the technique could revolutionise personalised medicine. “If you have an inherited disease, for example, you could give us a sample of skin cells, we could make a brain and then ask what’s going on,” said Anand.

...

For now, the team say they are focusing on using the brain for military research, to understand the effect of post traumatic stress disorder and traumatic brain injuries.

http://www.theguardian.com/science/2015/aug/18/first-almost-fully-formed-human-brain-grown-in-lab-researchers-claim


Comment author: ESRogs 18 August 2015 06:28:49PM 0 points [-]

All possible worlds are real, and probabilities represent how much I care about each world. ... Which worlds I care more or less about seems arbitrary.

This view seems appealing to me, because 1) deciding that all possible worlds are real seems to follow from the Copernican principle, and 2) if all worlds are real from the perspective of their observers, as you said it seems arbitrary to say which worlds are more real.

But on this view, what do I do with the observed frequencies of past events? Whenever I've flipped a coin, heads has come up about half the time. If I accept option 4, am I giving up on the idea that these regularities mean anything?

Comment author: jacob_cannell 25 June 2015 06:43:55PM *  1 point [-]

Thanks! You can browse my submitted history here on LW, and also my blog has some more going back over the years.

Comment author: ESRogs 18 August 2015 04:58:38PM 0 points [-]

Where is your blog?

Comment author: jacob_cannell 29 July 2015 03:59:00PM *  3 points [-]

I should probably rephrase the brain optimality argument, as it isn't just about energy per se. The brain is on the Pareto efficiency surface - it is optimal with respect to some complex tradeoffs between area/volume, energy, and speed/latency.

Energy is pretty dominant, so the brain is much closer to those limits than to the others. The typical futurist understanding of the Landauer limit is not even wrong - it is way off, as I point out in my earlier reply below and related links.

A consequence of the brain being near optimal for energy of computation for intelligence given its structure is that it is also near optimal in terms of intelligence per switching events.

The brain computes with just around 10^14 switching events per second (10^14 synapses * 1 Hz average firing rate). That is something of an upper bound for the average firing rate.[1]

The typical synapse is very small, has a low SNR and thus is equivalent to a low-bit op, and only activates maybe 25% of the time.[2] We can roughly compare these minimal-SNR analog ops with the high-precision single-bit ops that digital transistors implement. The Landauer principle allows us to rate them as reasonably equivalent in computational power.

So the brain computes with just 10^14 switching events per second. That is essentially miraculous. A modern GPU uses perhaps 10^18 switching events per second.
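The arithmetic behind those two figures, spelled out (the GPU number is the rough 10^18 figure from this comment, not a measured value):

```python
synapses = 1e14          # ~10^14 synapses
avg_rate_hz = 1.0        # ~1 Hz average firing rate
brain_events = synapses * avg_rate_hz   # 1e14 switching events / sec

gpu_events = 1e18        # rough figure for a modern GPU, per the comment
ratio = gpu_events / brain_events
print(ratio)  # the GPU performs ~10,000x more switching events per second
```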

So the important thing here is not just energy - but overall circuit efficiency. The brain is crazy super efficient - and as far as we can tell near optimal - in its use of computation towards intelligence.

This explains why our best SOTA techniques in almost all AI are some version of brain-like ANNs (the key defining principle being search/optimization over circuit space). It predicts that the best we can do for AGI is to reverse engineer the brain. Yes eventually we will scale far beyond the brain, but that doesn't mean that we will use radically different algorithms.

Comment author: ESRogs 18 August 2015 04:47:57PM 0 points [-]

A consequence of the brain being near optimal for energy of computation for intelligence given its structure is that it is also near optimal in terms of intelligence per switching events.

So the brain computes with just 10^14 switching events per second.

What do you mean by "given its structure"? Does this still leave open that a brain with some differences in organization could get more intelligence out of the same number of switching events per second?

Similarly, I assume the same argument applies to all animal brains. Do you happen to have stats on the number of switching events per second for e.g. the chimpanzee?

Comment author: jacob_cannell 03 July 2015 07:40:37AM 2 points [-]

Also, remember Eliezer was only 20 years old at this time. I am the same age and had just started college then in '98. Bostrom was 25.

I find this interesting in particular:

For example, rather than rigidly prescribing a certain treatment for humans, we could add a clause allowing for democratic decisions by humans or human descendants to overrule other laws. I bet you could think of some good safety-measures if you put your mind to it.

They could be talking about a new government, rather than an AI.

Comment author: ESRogs 12 July 2015 07:09:57AM 1 point [-]

Eliezer was only 20 years old at this time

Actually 19!

Comment author: ESRogs 12 July 2015 06:11:03AM 0 points [-]

Unless I'm misreading, I think the following two lines contradict each other. Does more adenosine correspond to higher or lower levels of sleep drive?

it seems the chemical correlate of sleep drive is the build-up of adenosine in the basal forebrain and this is used as the brain’s internal measure of how badly one needs sleep.

Adenosine levels are much higher (and sleep drive correspondingly lower) in the evening
