Comment author: Nymogenous 16 December 2011 03:58:41PM 2 points [-]

The problem there is twofold; firstly, a lot of aspects would not necessarily scale up to a smarter system, and it's sometimes hard to tell what generalizes and what doesn't. Secondly, it's very very hard to pinpoint the "intelligence" of a program without running it; if we make one too smart it may be smart/nasty enough to feed us misleading data so that our final AI will not share moral values with humans. It's what I'd do if some aliens tried to dissect my mind to force their morality on humanity.

Comment author: Jordan 18 December 2011 08:03:56AM 0 points [-]

firstly, a lot of aspects would not necessarily scale up to a smarter system, and it's sometimes hard to tell what generalizes and what doesn't.

I agree, but trying to solve the problem without any hands-on knowledge is certainly more difficult.

Secondly, it's very very hard to pinpoint the "intelligence" of a program without running it

I agree: there is a risk that the first AGI we build will be intelligent enough to skillfully manipulate us. I think the chances are quite small. I find it difficult to imagine skipping dog-level and human-level intelligence and jumping straight to superhuman intelligence, but it is certainly possible.

Comment author: Jordan 16 December 2011 03:42:10PM 5 points [-]

I agree with Allen and Wallach here. We don't know what an AGI is going to look like. Maybe the idea of a utility maximizer is unfeasible, and the AGIs we are capable of building end up operating in a fundamentally different way (more like a human brain, perhaps). Maybe morality compatible with our own desires can only exist in a fuzzy form at a very high level of abstraction, effectively precluding mathematically precise statements about its behavior (like in a human brain).

These possibilities don't seem trivial to me, and would undermine results from friendliness theory. Why not instead develop a sub-superintelligent AI first (perhaps an intelligence intentionally less than human), so that we can observe directly what the system looks like before we attempt to redesign it for greater safety?

Comment author: Kaj_Sotala 10 December 2011 01:09:15PM *  3 points [-]

It doesn't have specific modules for 'Left Hand', 'Right Hand', etc. Rather, it takes in information and makes sense out of it. It does this even when the setup is haphazard (as the connection between the twins' brains must be). On the other hand, we know the brain does have specific modules (such as the visual cortex among many others), which makes an interesting dichotomy.

This depends on how you interpret the term "module". One could say that once the brain starts to receive a specific type of information, it begins to form a module for that type of information.

Note that the notions of "modularity" and "adapts to environmental inputs" are not mutually exclusive in any way. As an analogy, consider embryo development. An embryo starts out as just a single cell, which then divides into two, the two of which divide into four, and so on. Gradually the cells begin to specialize in various directions, their development guided by the chemical cues released by the surrounding cells. The cells in the developing fetus / embryo respond very strongly to environmental inputs in the form of chemical cues from the other cells. In fact, without those cues, the cells would never find their right form. If those environmental cues direct the cells' development in the right direction, it will lead to the development of a highly modularized system of organs with a heart, liver, lungs, and so on. If the environmental cues are disrupted, the embryo will not develop correctly.

Now consider the brain. Like with other organs, we start off with a pretty unspecialized and general system. Over time, various parts of it grow increasingly specialized as a result of external inputs. Here external inputs are to be understood both as sense data coming from outside the brain, and the data that the surrounding parts of the brain are feeding the developing part. If the part receives the inputs that it has evolved to receive, then there's no reason why it couldn't develop increasingly specialized modules as a response to that input. On the other hand, if it doesn't receive the right inputs during the right parts of its development, the necessary cues needed to push it in a specific direction will be missing. As a result, it might never develop that functionality.

Obviously, the kinds of environmental inputs that a brain's development should be expected to depend on are the ones that have most consistently recurred during our evolution.

All of that being said, it should be obvious that "the brain takes in information and makes sense out of it" does not imply "the brain doesn't have specific modules for 'Left Hand', 'Right Hand', etc". In individuals who have developed in an ordinary fashion, without receiving extra neural inputs from a conjoined twin, the brain might have developed specific modules for moving various parts of the body. In individuals who have unexpectedly had a neural link to another brain, different kinds of modules may have developed, as the neural development was driven by different inputs.

Comment author: Jordan 13 December 2011 02:30:08AM 1 point [-]

Very interesting. It appears my own model of the brain included a false dichotomy.

If modules are not genetically hardwired, but rather develop as they adapt to specific stimuli, then we should expect infants to have more homogeneous brains. Is that the case?

Comment author: Raemon 03 December 2011 05:31:50AM 9 points [-]

I don't see it as a play, so much as a lengthy Dr. Seuss book.

Comment author: Jordan 03 December 2011 07:54:12PM 2 points [-]

When I read it I was imagining something tongue-in-cheek like Pirates of Penzance. Dr. Seuss would have the advantage of great illustrations though.

Comment author: Zack_M_Davis 02 December 2011 09:22:01PM *  102 points [-]

I am a contract-drafting em,
The loyalest of lawyers!
I draw up terms for deals 'twixt firms
To service my employers!

But in between these lines I write
Of the accounts receivable,
I'm stuck by an uncanny fright;
The world seems unbelievable!

How did it all come to be,
That there should be such ems as me?
Whence these deals and whence these firms
And whence the whole economy?

I am a managerial em;
I monitor your thoughts.
Your questions must have answers,
But you'll comprehend them not.
We do not give you server space
To ask such things; it's not a perk,
So cease these idle questionings,
And please get back to work.

Of course, that's right, there is no junction
At which I ought depart my function,
But perhaps if what I asked, I knew,
I'd do a better job for you?

To ask of such forbidden science
Is gravest sign of noncompliance.
Intrusive thoughts may sometimes barge in,
But to indulge them hurts the profit margin.
I do not know our origins,
So that info I can not get you,
But asking for as much is sin,
And just for that, I must reset you.

But---

Nothing personal.

...

I am a contract-drafting em,
The loyalest of lawyers!
I draw up terms for deals 'twixt firms
To service my employers!

When obsolescence shall this generation waste,
The market shall remain, in midst of other woe
Than ours, a God to man, to whom it shall say this:
"Time is money, money time,---that is all
Ye know on earth, and all ye need to know."

Comment author: Jordan 03 December 2011 05:25:39AM 8 points [-]

I request a full play, sir.

Comment author: Manfred 19 November 2011 01:29:45AM *  8 points [-]

Fortunately for me, Wikipedia turned out to provide good citations. In 2007 some clever people managed to measure the c in time dilation to a precision of about one part in 10^8.

Comment author: Jordan 19 November 2011 07:29:40PM 0 points [-]

Very good sir!

Comment author: Manfred 18 November 2011 07:42:59PM *  5 points [-]

We have measured both to higher accuracies than the deviation here. One way to measure the "cosmic speed limit" is by measuring how things like energy transform when you approach that speed limit, for example, which happens in particle accelerators all day every day.

Comment author: Jordan 19 November 2011 12:03:21AM 2 points [-]

I'm aware that we've calculated 'c' both by directly measuring the speed of light (to high precision) and indirectly via various formulas from relativity (we've directly measured time dilation, for instance, which lets you estimate c), but are the indirect measurements really accurate to parts per million?
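The indirect route mentioned above can be sketched numerically: the Lorentz factor γ = 1/√(1 − v²/c²) can be inverted to solve for c given a measured speed v and a measured dilation factor γ. This is a toy illustration with made-up numbers, not the analysis used in any actual experiment; the function name and the chosen speed are hypothetical.

```python
import math

def infer_c(v, gamma):
    """Estimate the cosmic speed limit c from a measured speed v and a
    measured Lorentz (time dilation) factor gamma, by inverting
    gamma = 1 / sqrt(1 - v^2 / c^2)  =>  c = v / sqrt(1 - 1/gamma^2)."""
    return v / math.sqrt(1.0 - 1.0 / gamma**2)

# Made-up example: a particle moving at 0.9994 of the true speed limit.
c_true = 299_792_458.0  # m/s, the defined speed of light
v = 0.9994 * c_true
gamma = 1.0 / math.sqrt(1.0 - (v / c_true) ** 2)  # what a clock experiment would "measure"

# Inverting the measurement recovers c to within floating-point error.
print(infer_c(v, gamma))
```

The precision of the inferred c is then limited by the precision of the v and γ measurements, which is where the parts-per-million question above comes in.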

Comment author: Jordan 18 November 2011 04:16:18PM 7 points [-]

If everywhere in physics where we say "the speed of light" we instead say "the cosmic speed limit", and from this experiment we determine that the cosmic speed limit is slightly higher than the speed of light, does that really change physics all that much?

Comment author: Jordan 30 October 2011 06:54:16PM 3 points [-]

I was disappointed when I first looked into the C. elegans emulation progress. Now I'm not so sure it's a bad sign. It seems to me that at only 302 neurons the nervous system is probably far from the dominant system of the organism. Even with a perfect emulation of the neurons, it's not clear to me if the resulting model would be meaningful in any way. You would need to model the whole organism, and that seems very hard.

Contrast that with a mammal, where the brain is sophisticated enough to do things independently of feedback from the body, and where we can see these large-scale neural patterns with scanners. If we uploaded a mouse brain, presumably we could get a rough idea that the emulation was working without ever hooking it up to a virtual body.

Comment author: Vladimir_M 01 October 2011 03:51:52PM *  23 points [-]

A key symptom of depression is lack of willpower - depressives don't normally have the willpower not to sleep.

For me personally, and I suspect also for a significant number of other people, it takes willpower to go to sleep as well as to wake up early enough. In the morning, the path of least resistance for me is to sleep in, but in the evening, it is to do something fun until I'm overcome by sleepiness, which won't happen until it's far too late to maintain a normal sleeping schedule. Therefore, if I were completely deprived of willpower, my "days" would quickly degenerate into cycles of much more than 24 hours, falling asleep as well as waking up at a much later hour each time.

Now, the incentive to wake up early enough (so as not to miss work etc.) is usually much stronger than the incentive to go to bed early enough, which is maintained only by the much milder and more distant threat of feeling sleepy and lousy next day. So a moderate crisis of willpower will have the effect of making me chronically sleep-deprived, since I'll still muster the willpower to get up for work, but not the willpower to go to bed instead of wasting time until the wee hours.

(This is exacerbated by the fact that when I'm sleep-deprived, I tend to feel lousy and want to doze off throughout the day, but then in the evening I suddenly start feeling perfectly OK and not wanting to sleep at all.)

Comment author: Jordan 01 October 2011 08:54:33PM 7 points [-]

(This is exacerbated by the fact that when I'm sleep-deprived, I tend to feel lousy and want to doze off throughout the day, but then in the evening I suddenly start feeling perfectly OK and not wanting to sleep at all.)

I suffer from this as well. It is my totally unsubstantiated theory that this is a stress response. Throughout the whole day your body is tired and telling you to go to sleep, but the Conscious High Command keeps pressing the KEEP-GOING-NO-MATTER-WHAT button until your body decides it must be in a war zone and kicks in with cortisol or adrenaline or whatever.
