All of i77's Comments + Replies

i77-30

I wonder if intuitionism's perspective sheds some light on this issue.

i7730

Singularity Institute for Machine Ethics.

Keep the old brand, add clarification about flavor of singularity.

i77-30

Delenn: They will join with the souls of all our people. Melt one into another until they are born into the next generation of Minbari. Remove those souls and the whole suffers. We are diminished, each generation becomes less than the one before.

Soul Hunter: A quaint lie, pretty fantasy. The soul ends with death, unless we act to preserve it.

-- Babylon 5, "Soul Hunter"

1wedrifid
The fantasy doesn't sound quaint - it sounds like a depressing story of inevitable decay, one that rules out even the creation of new (ensouled) individuals in the case where the living remove their vulnerability to death. The Soul Hunter presents a reality in which souls are evidently generated anew each generation, just as they were before.
3[anonymous]
Somewhat weakened by the fact that the show leaves it open whether or not Delenn was right.
i7720

It is not possible to derive exact information from a decayed state

That's true in the most general situation, when there is no prior information available. But a brain is not a random chunk of matter, it's a highly particular one, with certain patterns and regularities. So it's not implausible that a superintelligence could restore even a moderately damaged brain.

For a real example, think of image restoration of natural scenes. A photograph is not a random matrix of pixels; it belongs to a very small subset of all possible images, and that knowledge makes seemingly "impossible" tasks of refocusing, enhancement and the like achievable.
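A toy one-dimensional sketch of that restoration idea: a smoothness prior (standing in for "natural scenes are a tiny subset of all images") lets regularized least squares recover a signal far better than the raw noisy observation allows. Every number here is illustrative, not from the comment:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x_true = np.sin(np.linspace(0.0, 3.0 * np.pi, n))  # highly structured "scene"
y = x_true + 0.5 * rng.standard_normal(n)          # degraded/noisy observation

# Smoothness prior: penalize second differences (Tikhonov regularization),
# i.e. minimize ||x - y||^2 + lam * ||D x||^2.
D = np.diff(np.eye(n), n=2, axis=0)                # (n-2, n) second-difference operator
lam = 25.0
x_hat = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

raw_err = float(np.mean((y - x_true) ** 2))
rec_err = float(np.mean((x_hat - x_true) ** 2))
print(raw_err, rec_err)  # reconstruction error drops well below the raw error
```

Without the prior, the per-sample noise is unrecoverable; with it, the estimate is pulled onto the small set of smooth signals, which is the whole point of the analogy.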

0Logos01
Intelligence is not magic. Information that no longer exists cannot be reinvented with fidelity. Still-shots are limited to their original resolution; anything beyond that is artistic rendering, not a valid reconstruction of the original. It is possible to "enhance" the resolution of a still taken from a video feed of a person's face, because the other frames supply extra information. It is not possible to "enhance" the resolution of a single still-shot of a person's face. Memories are tricky things. We do not, as yet, know how high the fidelity needs to be to sustain a person's "actual" psyche. If information-theoretically significant portions are missing, no amount of genius can resolve that, for the same reason that you cannot recover the function f(x) from the single fact that its derivative is 3.
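The derivative example can be made precise: knowing only that the derivative is 3 pins down f only up to a constant that is genuinely gone:

```latex
f'(x) = 3 \;\Longrightarrow\; f(x) = 3x + C, \qquad C \in \mathbb{R} \text{ arbitrary.}
```

The constant C is exactly the information that no longer exists; no amount of intelligence applied to f' alone can recover it.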
i77120

"We are selfish, base animals crawling across the earth. But because we got brains, if we try real hard, we may occasionally aspire to something that is less than pure evil."

-- Gregory House

i7710

Fullmetal Alchemist Brotherhood has (SPOILER):

an almost literally unboxed unfriendly "AI" as main bad guy. Made by pseudomagical ("alchemy") means, but still.

1Anubhav
It bugs me that people don't think of this one more often. It's basically an anime about how science affects the world and its practitioners. (Disclaimer: Far too many convenient coincidences/idiot balls IIRC. It's a prime target for a rationalist rewrite.)
i7710

I've been curious to know what the "U.S." would be like today if the American Revolution had failed.

Code Geass :)

0LucasSloan
Sadly, that is more like the result if the ARW had failed and the laws of physics were weirdly different.
i7710

... perfect existence, huh?

Perfection does not exist in this world. It may seem like a cliche, but it's true. Obviously, mediocre fools will forever lust for perfection and seek it out.

However, what meaning is there in "perfection"? None. Not a bit. "Perfection" disgusts me. After "perfection" there exists nothing higher. Not even room for "creation", which means there is no room for wisdom or talent either.

Understand? To scientists like ourselves, "perfection" is "despair".

Even if something is cr... (read more)

9[anonymous]
It's possible, and not undesirable, to achieve perfection. For example, the majority of words I type are spelled perfectly, and the perfect answer to "what is two plus two?" is "four". It's just not possible or desirable to achieve it everywhere.
i7700

I eat a paleo diet, which has low levels of dietary carbohydrates. ... I much prefer the freedom to eat whenever I want.

I just wanted to add myself as another data point: I have been low-carb for three months and I can vouch for this. (I also lost 10 kg)

If only I had known this when I was a kid. So many mid-mornings at school, hungry (and suddenly sleepy) because of "healthy" breakfast cereals!

i7700

Straight from the Caprica pilot.

0eirenicon
It's a much older idea than that. One of the best stories on it that I've read is by... Ray Bradbury, perhaps? I'm not sure. It's about a long dead classical composer whose personality and memories are reconstructed inside a living person's brain. He remembers his life, he remembers writing music and even remembers dying... but discovers that he can't compose anything new. Anyone know what I'm talking about?
i7740

To me, the low-carbohydrate approach to the obesity problem has been a real eye-opener. I recommend the book from Gary Taubes, "Good calories, bad calories".

Reading that book made it clear to me that medical authorities have a very hard time updating their beliefs in the light of evidence, and prefer to suppress or bend it to accommodate established dogma.

i7710

OK. So let's take a controller with an explicit (I hope you agree) model, the Smith predictor. The controller as a whole has a model, but the subsystem C(z) (in the wiki example) does not (in your terms).

Or better yet, a Model Reference Adaptive Controller. The system as a whole IS predictive, uses models, etc., but the "core" controller subsystem does "not".

Then I'd argue that in the simple PID case, the engineer does the job of the Model/Adjusting Mechanism, and it's a fundamental part of the implementation process (you don't just buy ... (read more)
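The division of labor being argued here - the engineer acting as the Model/Adjusting Mechanism for a plain loop - can be sketched in a toy simulation. The plant, the PI (rather than full PID) structure, and every gain below are illustrative choices, not from the thread; the point is only that the sign of the gains is a model the engineer bakes in:

```python
def simulate(kp, ki, setpoint=1.0, steps=200, dt=0.1):
    """Discrete PI control of the first-order plant x' = -x + u."""
    x, integ = 0.0, 0.0
    for _ in range(steps):
        err = setpoint - x
        integ += err * dt
        u = kp * err + ki * integ   # the SIGN of kp, ki encodes "u raises x"
        x += (-x + u) * dt          # Euler step of the plant
    return x

good = simulate(kp=2.0, ki=1.0)     # engineer's sign correct: settles at the setpoint
bad = simulate(kp=-2.0, ki=-1.0)    # sign flipped: the loop runs away
print(good, bad)
```

The controller code itself contains no model in any explicit sense; the model lives in the engineer's choice of `kp` and `ki`, which is exactly the hand-off being described.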

i7720

Very interesting article. Yes, the controller is not intelligent but you have to factor in the designer. (I think this is something like a response to the Chinese Room argument). Just a few comments:

It has no model of its surroundings.

It does have one, a very simple one: the sign of the plant's (steady-state) gain.

It has no model of itself.

No, but its maker does: the transfer function of the controller.

It makes no predictions.

As in the first point: implicit in the design of the system is that temperature goes up with +1 output. If you flip t... (read more)
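Likewise, the thermostat's entire implicit model - "temperature goes up with +1 output" - fits in one sign. A sketch with made-up numbers (leak rate, heater power and temperatures are all illustrative):

```python
def thermostat(sign=+1, target=20.0, steps=300, dt=0.1):
    """Bang-bang control of a leaky room: T' = -0.1*(T - 5) + 2*heater."""
    T = 5.0                                     # start at the outside temperature
    for _ in range(steps):
        heater = 1.0 if (target - T) * sign > 0 else 0.0
        T += (-0.1 * (T - 5.0) + 2.0 * heater) * dt
    return T

print(thermostat(sign=+1))    # design assumption holds: hovers near the target
print(thermostat(sign=-1))    # assumption flipped: heater never fires, room stays cold
```

Flip the sign and the "prediction" implicit in the design is falsified: the controller is the same code, yet the room never warms up.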

0Nelson_Flood
Concerning your first point, the designer has to hand-insert that all-important sign bit. So how do humans come up with these sign bits? I imagine a trial-and-error process of interacting with the controlled system. During this, the person's brain is generating an error signal derived directly or indirectly from an evolutionarily-fixed set point. While trying to control the system manually using an initially random sign bit, I suppose the brain can analyze at a low level in the hardware that the error is 1) changing exponentially, and 2) has a positive or negative slope, as the case may be. If the situation is exponential and the slope is positive, you synaptically weld the cortical representation of the controlled variable to the antagonist muscle of the one currently energized, and if negative, to the energized muscle itself. Bayesian inference would enter as a Kalman filter used to calculate the controlled variable. I suppose the process of acquiring the sign bit of the slope could not be separated from acquiring the model needed by the Kalman filter, so some kind of bootstrapping process could be involved. In his book "Neural Engineering..." (2004), Chris Eliasmith makes a case that the brain contains Kalman filters. Is the evolutionary process responsible for the original hard-wired set point itself a controller? I doubt it, because, to use Douglas Adams' analogy, control principles do not seem to be involved in getting the shape of a puddle to match that of the hole it's in.
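The Kalman filter that comment leans on, in its simplest scalar form - estimating a random-walk controlled variable from noisy measurements (all noise levels illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
q, r = 0.01, 1.0                      # process / measurement noise variances
x_true, x_hat, p = 0.0, 0.0, 1.0      # true state, estimate, estimate variance
errs_raw, errs_kf = [], []

for _ in range(500):
    x_true += rng.normal(0.0, q ** 0.5)     # state drifts as a random walk
    z = x_true + rng.normal(0.0, r ** 0.5)  # noisy measurement
    p += q                                  # predict: variance grows
    k = p / (p + r)                         # update: Kalman gain
    x_hat += k * (z - x_hat)
    p *= 1.0 - k
    errs_raw.append((z - x_true) ** 2)
    errs_kf.append((x_hat - x_true) ** 2)

print(np.mean(errs_raw), np.mean(errs_kf))  # filtered error is much smaller
```

Note that even this minimal filter needs `q` and `r` - a model of the system - which is the bootstrapping problem raised above: the model and the sign bit have to be acquired together.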