"What's your utility function?"
"This Haskell program."
Does the use of the word "function" in "utility function" normatively include arbitrary Turing-complete things?
I don't even know any Haskell - I just have a vague idea that Haskell's State monad is a function that accepts a "state" as part of its input and returns the same kind of "state" as part of its output. But even so, the punchline was too good to resist making.
Humans strike me as being much more like state machines than things with utility functions (c.f. you noting your utility function changing when you actually act on it). How do you write a function for the output of a state machine?
How do you write a function for the output of a state machine?
Monads.
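The quip lands because Haskell really does let you write a state machine's output as an ordinary pure function. A minimal sketch (the names below are invented for illustration, not taken from the thread): the State monad is essentially a wrapper around functions of type `s -> (a, s)` - "take a state, return an output plus the next state" - which is exactly a state-machine transition.

```haskell
-- A hand-rolled version of Haskell's State monad (the real one lives in
-- Control.Monad.State); names here are illustrative.
newtype State s a = State { runState :: s -> (a, s) }

-- A state-machine step is literally a value of this type.
-- Toy example: a toggle that emits its current state, then flips it.
toggle :: State Bool Bool
toggle = State (\s -> (s, not s))

-- Threading the state through two steps by hand:
twoSteps :: Bool -> ((Bool, Bool), Bool)
twoSteps s0 =
  let (out1, s1) = runState toggle s0
      (out2, s2) = runState toggle s1
  in ((out1, out2), s2)

main :: IO ()
main = print (twoSteps True)  -- ((True,False),True)
```

The monad machinery just automates the state-threading that `twoSteps` does by hand, so a "utility function" written this way can carry mutable-looking state while remaining a pure function.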
The main thing is that cryonic treatment would induce a pain-free, degradation-free state that could last a very long time while patients await the outcome of clinical trials and new approaches. However, it could also be useful for certain kinds of surgery.
Laser microtomes could be used to separate out chunks of the body, and printed surgical glues to patch things back together with capillary-level precision (given that there is no chance for them to wriggle around). Surgery (or should I label it "anatomical engineering"?) under cryonic conditions would have no time limit, and would be a more predictable system to deal with than a metabolically living body.
while they await the outcome of clinical trials and new approaches
http://xkcd.com/989/ seems relevant despite the slightly different subject matter. Clinical trials can't happen if all the potential subjects are frozen.
My previous basis for it was my electronics teacher talking about a friend taking one into a shop, dropping it, and having it shatter. This would have been a height of 4-5 feet, since it was held in his arms. Maybe modern CRTs are thicker / more durable? Given my electronics teacher, it's also entirely possible he just enjoyed dramatic stories...
Well, don't forget that it will hit the ground with momentum proportional to its weight. You probably wouldn't want him to have dropped it on your head - that would be a rather more unpleasant experience than having a controller thrown at your head.
Could you re-explain this?
Especially thorny is that the surface of the earth accelerates upwards relative to inertial reference frames; if you stay in your inertial reference frame while time plays backwards, you don't lose the earth in space, but you do oscillate through it like a mass on a spring. I personally think this is a really cool way for time travel to work, but it's clearly not how Time-Turners do it.
I don't even know what to search for on Google so that I understand it: special relativity?
General Relativity, actually. You could also look for "gravity as a fictitious force".
Define inevitably.
That was playful exaggeration, sorry ^.^;
I am surprised to hear that a CRT is considered that durable. I can bend deadbolts, and I've had friends take a metal door off its frame, so I was raised with an odd sense of what "normal" strength is.
Large CRTs are made of very thick curved glass. I once hit one hard enough to chip it, which left a hole several millimeters deep and did not appear to affect the structural integrity of the tube. I don't know about "that durable", though - if you dropped one from a sufficient height it would surely break - the question here is more how much force you (or I) can put into a thrown controller.
If one genuinely enjoys throwing controllers at the screen, and is well off enough to afford the replacement TVs when one inevitably fractures from the force of the blows, sure.
Personally, I got the rather strong impression that nyan_sandwich was throwing the controller because of frustration, not euphoria.
when one inevitably fractures from the force of the blows
Define inevitably. I don't think I could throw a controller hard enough to damage a CRT or a rear projector. These suggest designs for protective covers (for the former, put the TV behind thick curved glass; for the latter, put it behind a durable plastic sheet held in a rigid frame).
The only phenomenon in all of physics that violates Liouville's Theorem (has a many-to-one mapping from initial conditions to outcomes).
I don't know what Liouville's Theorem is, but this sounds like an objection to not being able to run time backwards.
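For readers in the same boat: Liouville's theorem says that Hamiltonian dynamics preserves phase-space volume, which (together with reversibility) means distinct initial conditions never merge into a single outcome - so a many-to-one mapping really would be a lone exception. A discrete toy version of that contrast, sketched in Haskell (all names invented for illustration):

```haskell
import Data.List (nub)

-- A finite "phase space" of eight states.
phaseSpace :: [Int]
phaseSpace = [0 .. 7]

-- Reversible (one-to-one) dynamics: rotate the space by 3.
-- Every output comes from exactly one input, so it can be run backwards.
reversibleStep :: Int -> Int
reversibleStep x = (x + 3) `mod` 8

-- Many-to-one dynamics: pairs of states collapse together,
-- so the initial condition cannot be recovered.
lossyStep :: Int -> Int
lossyStep x = x `div` 2

main :: IO ()
main = do
  -- Count distinct states after one step: the "volume" of the occupied region.
  print (length (nub (map reversibleStep phaseSpace)))  -- 8: volume preserved
  print (length (nub (map lossyStep phaseSpace)))       -- 4: volume shrinks
```

The "objection to not being able to run time backwards" is exactly the second case: once the volume shrinks, no function can map the outcome back to its unique initial condition.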
"escape the testing environment" is poorly defined. Some people read it as "deduce the exploitable vulnerabilities in the system, hack into it, run itself with higher privileges, somehow transmit itself to other machines / the internet at large / infecting people's brains snow-crash style", and others read it as "convince the people running the test to give it more resources (and maybe infect their brains snow-crash style)".
The former can be prevented by having a secure (air gapped?) system, the latter can be prevented by not running tests interactively and ignoring the moral issues with terminating (or suspending) what may possibly be an intelligent 'person'.
It also implicitly assumes that the AI's ability to improve its own intelligence (and therefore gain the ability to do either of the above) is unbounded by the resources of the system and carries no cost in terms of increased processing time.
I will note that the AI box experiment's conditions expressly forbid a secure environment [i.e. one with inspection tools that cannot be manipulated by the AI]:
the results seen by the Gatekeeper shall again be provided by the AI party, which is assumed to be sufficiently advanced to rewrite its own source code, manipulate the appearance of its own thoughts if it wishes, and so on.
I endorse this refinement. What brain damage demonstrates is not dependency of talking on the brain, but that the complex computations of thought can be damaged in internal detail by damaging a specific brain part, whereupon its outputs to other parts of thought are damaged. This is strong evidence that the brain is doing the internal computations of thought; it is part of the inner process producing thoughts. The radio hypothesis, in which the output is produced elsewhere and received, decisively fails at that point.
We could suppose that you had a hundred soul-parts, all of which can only communicate with each other through brain-area radio transceivers which receive a call from one soul-part, and then retransmit it to another. But leaving out the epicycleness of this idea, the degree to which it contradicts the intuitive notion of a soul, and its, if you'll pardon the phrase, sheer stupidity, the end result would still be that destroying the brain leaves the soul incapable of thought. You're not likely to find a remotely reasonable hypothesis, even in the Methodsverse where magic abounds, by which the internal parts of a thinking computation can be damaged by damaging the brain, and yet removing the whole brain leaves the soul capable of internal thinking.
Has your hypothesis that thought remains possible after the whole brain has been removed, in fact, been tested?
EDIT: I read your post as meaning that the "fact" that thought remains possible after a brain has been removed [to be cryo-frozen, for instance] was evidence against a soul.