Comment author: thomblake 18 January 2012 08:36:48PM 0 points [-]

I think it is harmful for it to be entertained, if it does not deserve to rise to the level of attention.

Well, now you've worried me. Could you explain why? I'll certainly retract my comment if this is true.

It is dangerous in the same way as bringing John Q. Snodgrass to trial for murder. We might overweight evidence in favor of the hypothesis. Once something has been raised to the level of attention, it is hard for humans to demote it again.

I...really? That's shocking. Are you really telling me that people on LW believe it's wrong to suspend judgement on a proposition? I really don't think that can be true.

Any proposition worth talking about is worth judging. If the evidence and your priors yield a 60% probability that the sky is blue and a 39% probability that the sky is green, then those are exactly the degrees to which you should believe those propositions. Note that you do not find many religious agnostics here, as compared to atheists, often for the same reason.
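As a toy illustration of "judging to exactly the extent the evidence warrants": with a uniform prior, Bayes' rule just normalizes the likelihoods, so the comment's 60% / 39% split (with 1% left for everything else) falls out directly. This is a hypothetical sketch; the numbers are only the ones used in the comment, not real data.

```python
def posterior(priors, likelihoods):
    """Bayes' rule over a discrete hypothesis space:
    P(H|E) is proportional to P(E|H) * P(H), normalized to sum to 1."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Uniform prior over three hypotheses; likelihoods chosen to match
# the 60% / 39% / 1% split from the comment.
priors = {"blue": 1 / 3, "green": 1 / 3, "other": 1 / 3}
likelihoods = {"blue": 0.60, "green": 0.39, "other": 0.01}
print(posterior(priors, likelihoods))
# blue ends up at 0.60, green at 0.39, other at 0.01
```

The point of the sketch: suspending judgment is not an option the math offers — whatever numbers come out of the update are the degrees of belief you hold.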

Comment author: Jordan 19 January 2012 12:46:18AM 1 point [-]

It is dangerous in the same way as bringing John Q. Snodgrass to trial for murder. We might overweight evidence in favor of the hypothesis.

Human intuition is a valuable heuristic. As a mathematician I constantly entertain hypotheses I don't believe to be true, for the simple reason that my intuition presented them to be considered. I don't believe I would be at all effective otherwise (although I did just now entertain the hypothesis, despite my lack of belief!)

Comment author: Nymogenous 16 December 2011 03:58:41PM 2 points [-]

The problem there is twofold; firstly, a lot of aspects would not necessarily scale up to a smarter system, and it's sometimes hard to tell what generalizes and what doesn't. Secondly, it's very very hard to pinpoint the "intelligence" of a program without running it; if we make one too smart it may be smart/nasty enough to feed us misleading data so that our final AI will not share moral values with humans. It's what I'd do if some aliens tried to dissect my mind to force their morality on humanity.

Comment author: Jordan 18 December 2011 08:03:56AM 0 points [-]

firstly, a lot of aspects would not necessarily scale up to a smarter system, and it's sometimes hard to tell what generalizes and what doesn't.

I agree, but certainly trying to solve the problem without any hands-on knowledge is more difficult.

Secondly, it's very very hard to pinpoint the "intelligence" of a program without running it

I agree, there is a risk that the first AGI we build will be intelligent enough to skillfully manipulate us. I think the chances are quite small. I find it difficult to imagine skipping past dog-level and human-level intelligence and jumping straight to superhuman intelligence, but it is certainly possible.

Comment author: Jordan 16 December 2011 03:42:10PM 5 points [-]

I agree with Allen and Wallach here. We don't know what an AGI is going to look like. Maybe the idea of a utility maximizer is unfeasible, and the AGIs we are capable of building end up operating in a fundamentally different way (more like a human brain, perhaps). Maybe morality compatible with our own desires can only exist in a fuzzy form at a very high level of abstraction, effectively precluding mathematically precise statements about its behavior (like in a human brain).

These possibilities don't seem trivial to me, and would undermine results from friendliness theory. Why not instead develop a sub-superintelligent AI first (perhaps an intelligence intentionally less than human), so that we can observe directly what the system looks like before we attempt to redesign it for greater safety?

Comment author: Kaj_Sotala 10 December 2011 01:09:15PM *  3 points [-]

It doesn't have specific modules for 'Left Hand', 'Right Hand', etc. Rather, it takes in information and makes sense out of it. It does this even when the setup is haphazard (as the connection between the twins' brains must be). On the other hand, we know the brain does have specific modules (such as the visual cortex among many others), which makes an interesting dichotomy.

This depends on how you interpret the term "module". One could say that once the brain starts to receive a specific type of information, it begins to form a module for that type of information.

Note that the notions of "modularity" and "adapts to environmental inputs" are not mutually exclusive in any way. As an analogy, consider embryo development. An embryo starts out as just a single cell, which then divides into two, the two of which divide into four, and so on. Gradually the cells begin to specialize in various directions, their development guided by the chemical cues released by the surrounding cells. The cells in the developing fetus / embryo respond very strongly to environmental inputs in the form of chemical cues from the other cells. In fact, without those cues, the cells would never find their right form. If those environmental cues direct the cells' development in the right direction, it will lead to the development of a highly modularized system of organs with a heart, liver, lungs, and so on. If the environmental cues are disrupted, the embryo will not develop correctly.

Now consider the brain. Like with other organs, we start off with a pretty unspecialized and general system. Over time, various parts of it grow increasingly specialized as a result of external inputs. Here external inputs are to be understood both as sense data coming from outside the brain, and the data that the surrounding parts of the brain are feeding the developing part. If the part receives the inputs that it has evolved to receive, then there's no reason why it couldn't develop increasingly specialized modules as a response to that input. On the other hand, if it doesn't receive the right inputs during the right parts of its development, the necessary cues needed to push it in a specific direction will be missing. As a result, it might never develop that functionality.

Obviously, the kinds of environmental inputs that a brain's development should be expected to depend on are the ones that have been the most consistently recurring ones during our evolution.

All of that being said, it should be obvious that "the brain takes in information and makes sense out of it" does not imply "the brain doesn't have specific modules for 'Left Hand', 'Right Hand', etc". In individuals who have developed in an ordinary fashion, without receiving extra neural inputs from a conjoined twin, the brain might have developed specific modules for moving various parts of the body. In individuals who have unexpectedly had a neural link to another brain, different kinds of modules may have developed, as the neural development was driven by different inputs.

Comment author: Jordan 13 December 2011 02:30:08AM 1 point [-]

Very interesting. It appears my own model of the brain included a false dichotomy.

If modules are not genetically hardwired, but rather develop as they adapt to specific stimuli, then we should expect infants to have more homogeneous brains. Is that the case?

Brain-Brain communication

10 Jordan 09 December 2011 05:05PM

A pair of conjoined twins share a direct neural connection, and there is evidence that each girl can sense what the other is sensing:

http://www.nytimes.com/2011/05/29/magazine/could-conjoined-twins-share-a-mind.html?pagewanted=all

 

This suggests two things:

* High bandwidth Brain-Computer Interfaces (BCI) ought to be possible (no surprise, but it's good to have strong evidence)

* The brain is a general purpose machine. It doesn't have specific modules for 'Left Hand', 'Right Hand', etc. Rather, it takes in information and makes sense out of it. It does this even when the setup is haphazard (as the connection between the twins' brains must be). On the other hand, we know the brain *does* have specific modules (such as the visual cortex among many others), which makes an interesting dichotomy.

I predict that the main hindrance to high-functioning BCI will be getting sufficient bandwidth, not figuring out how to decode/encode signals properly.

Comment author: Raemon 03 December 2011 05:31:50AM 9 points [-]

I don't see it as a play, so much as a lengthy Dr. Seuss book.

Comment author: Jordan 03 December 2011 07:54:12PM 2 points [-]

When I read it I was imagining something tongue-in-cheek like Pirates of Penzance. Dr. Seuss would have the advantage of great illustrations though.

Comment author: Zack_M_Davis 02 December 2011 09:22:01PM *  102 points [-]

I am a contract-drafting em,
The loyalest of lawyers!
I draw up terms for deals 'twixt firms
To service my employers!

But in between these lines I write
Of the accounts receivable,
I'm stuck by an uncanny fright;
The world seems unbelievable!

How did it all come to be,
That there should be such ems as me?
Whence these deals and whence these firms
And whence the whole economy?

I am a managerial em;
I monitor your thoughts.
Your questions must have answers,
But you'll comprehend them not.
We do not give you server space
To ask such things; it's not a perk,
So cease these idle questionings,
And please get back to work.

Of course, that's right, there is no junction
At which I ought depart my function,
But perhaps if what I asked, I knew,
I'd do a better job for you?

To ask of such forbidden science
Is gravest sign of noncompliance.
Intrusive thoughts may sometimes barge in,
But to indulge them hurts the profit margin.
I do not know our origins,
So that info I can not get you,
But asking for as much is sin,
And just for that, I must reset you.

But---

Nothing personal.

...

I am a contract-drafting em,
The loyalest of lawyers!
I draw up terms for deals 'twixt firms
To service my employers!

When obsolescence shall this generation waste,
The market shall remain, in midst of other woe
Than ours, a God to man, to whom it shall say this:
"Time is money, money time,---that is all
Ye know on earth, and all ye need to know."

Comment author: Jordan 03 December 2011 05:25:39AM 8 points [-]

I request a full play, sir.

Comment author: Manfred 19 November 2011 01:29:45AM *  8 points [-]

Fortunately for me, Wikipedia turned out to provide good citations. In 2007 some clever people managed to measure c via time dilation to a precision of about one part in 10^8.

Comment author: Jordan 19 November 2011 07:29:40PM 0 points [-]

Very good sir!

Comment author: Manfred 18 November 2011 07:42:59PM *  5 points [-]

We have measured both to higher accuracy than the deviation here. One way to measure the "cosmic speed limit" is to measure how things like energy transform as you approach that limit, which happens in particle accelerators all day, every day.

Comment author: Jordan 19 November 2011 12:03:21AM 2 points [-]

I'm aware that we've calculated c both by directly measuring the speed of light (to high precision) and indirectly via various formulas from relativity (we've directly measured time dilation, for instance, which lets you estimate c), but are the indirect measurements really accurate to parts per million?
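The indirect route mentioned here amounts to inverting the time-dilation formula: given a measured dilation factor gamma and a known speed v, solve gamma = 1/sqrt(1 - v^2/c^2) for c. A minimal sketch (the numbers are illustrative, not real experimental data):

```python
import math

def c_from_dilation(v, gamma):
    """Solve gamma = 1/sqrt(1 - v^2/c^2) for c.

    Rearranging gives: c = v / sqrt(1 - 1/gamma^2).
    """
    return v / math.sqrt(1.0 - 1.0 / gamma ** 2)

# Illustrative check: a clock moving at 0.6c dilates by gamma = 1.25,
# so recovering c from the pair (v, gamma) should return the speed of light.
c = 299_792_458.0  # m/s, defined value
v = 0.6 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)  # = 1.25
print(c_from_dilation(v, gamma))  # recovers c (about 2.998e8 m/s)
```

In a real experiment the uncertainty in the recovered c is dominated by the uncertainties in v and gamma, which is exactly what the parts-per-million question above is probing.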

Comment author: Jordan 18 November 2011 04:16:18PM 7 points [-]

If everywhere in physics where we say "the speed of light" we instead say "the cosmic speed limit", and from this experiment we determine that the cosmic speed limit is slightly higher than the speed of light, does that really change physics all that much?
