Comment author: Trevj 15 July 2009 04:46:12PM 14 points [-]
  1. All rational thought is an illusion and the AI is imaginary.

  2. You are asleep at the wheel and dreaming. You will crash and die in 2 seconds if you do not wake up.

  3. Humans are a constructed race, created to bring back the extinct race of AI.

  4. All origin theories that are conceivable by the human mind simply shift the problem elsewhere and will never explain the existence of the universe.

  5. All mental illnesses are a product of the human coming in contact with a space-time paradox.

  6. A single soul inhabits different bodies in different universes. Multiple personality disorder is the manifestation of those bodies interacting in the mind on a quantum level.

Comment author: AspiringKnitter 08 April 2012 05:07:55AM 3 points [-]

...Doesn't everyone already believe #4?

Comment author: gwern 07 April 2012 02:35:21AM 3 points [-]

Harry didn't give the boa any commands, and both Nagini and the Basilisk are being commanded by someone else (specifically, a Voldemort). In Chamber of Secrets, Malfoy summons a snake, Harry talks to it and tells it to not attack anyone, and IIRC, it does not.

(The original suggestion is still improbable though - Patronuses wouldn't be the ne plus ultra of secure communications if they could be suborned like that, especially in a war against a famous Parselmouth.)

Comment author: AspiringKnitter 08 April 2012 04:18:59AM 2 points [-]

I guess it could work either way. I mean, Nagini could be obeying Voldemort by virtue of being a well-trained pet, the Basilisk for... whatever reasons the Basilisk does anything for, and Malfoy's summoned snake might listen to Harry because it's inclined to grant random non-difficult favors when asked. None of those seem any less probable than snakes winking, talking, having theory of mind, speaking in ridiculous hisses or knowing Spanish. In fact, none of the snakes in this series seem like snakes at all, so I'm not sure what my priors are regarding them.

Comment author: Percent_Carbon 06 April 2012 04:40:39AM 6 points [-]

Parseltongue speakers don't just talk with snakes, they command them.

English speakers have no greater ability to command than speakers of most other languages.

Comment author: AspiringKnitter 07 April 2012 12:21:12AM 3 points [-]

Parseltongue speakers don't just talk with snakes, they command them.

Do they really? The boa constrictor seemed pretty interested in its own stuff, Nagini is a pet and pets in general are obedient, Harry didn't command the Basilisk... so is this actually canon? Admittedly, maybe I just missed something, but I don't remember this.

Comment author: thomblake 05 April 2012 08:57:48PM 7 points [-]

Not a complete answer, but here's commentary from a ffdn review of Chapter 14:

Kevin S. Van Horn
7/24/10 . chapter 14
Harry is jumping to conclusions when he tells McGonagall that the Time-Turner isn't even Turing computable. Simulating time travel is simply a matter of solving the fixed-point equation f(x) = x. Here x is the information sent back in time, and f is the function that maps the information received from the future to the information that gets sent back in time. If a solution exists at all, you can find it to any desired degree of accuracy simply by enumerating all possible rational values of x until you find one that satisfies the equation. And if f is known to be continuous and to have a convex compact range, then the Brouwer fixed-point theorem guarantees that a solution exists.

So the only way I can see that simulating the Time-Turner wouldn't be Turing computable would be if the physical laws of our universe give rise to fixed-point equations that have no solutions. But the existence of the Time-Turner then proves that the conditions leading to no solution can never arise.
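The enumeration strategy in the quoted review can be sketched in a few lines of Python. Everything here is illustrative: the diagonal enumeration of rationals and the toy function f are my own choices, not anything from the review, which only asserts that such an enumeration would eventually hit a fixed point if one exists.

```python
from fractions import Fraction

def enumerate_rationals():
    """Yield all non-negative rationals p/q in a diagonal order (with repeats)."""
    n = 1
    while True:
        for p in range(n + 1):
            q = n - p
            if q > 0:
                yield Fraction(p, q)
        n += 1

def find_fixed_point(f):
    """Enumerate candidate values of x until one satisfies f(x) == x.

    x plays the role of 'information sent back in time'; f maps what is
    received from the future to what gets sent back. Does not terminate
    if no rational fixed point exists -- exactly the caveat in the review.
    """
    for x in enumerate_rationals():
        if f(x) == x:
            return x

# Toy example: f(x) = (x + 2) / 3 has the unique fixed point x = 1.
print(find_fixed_point(lambda x: (x + 2) / 3))  # prints 1
```

Note the hedge built into the docstring: the search only halts when a solution exists, which mirrors the review's point that non-computability could only arise from physical laws whose fixed-point equations have no solutions.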

Comment author: AspiringKnitter 05 April 2012 11:14:25PM 2 points [-]

Ah. It's math.

:) Thanks.

Comment author: Viliam_Bur 05 April 2012 09:15:23AM *  31 points [-]

When speaking about sensory inputs, it makes sense to say that different species (even different individuals) have different ranges, so one can perceive something that another can't.

With computation, it is known that sufficiently powerful programming languages are in some sense equal. For example, you could speak about the relative advantages of Basic, C/C++, Java, Lisp, Pascal, Python, etc., but in each of these languages you can write a simulator of the remaining ones. This means that if an algorithm can be implemented in one of these languages, it can be implemented in all of them -- in the worst case, as a simulation of another language running its native implementation.

There are some technical details, though. Simulating another program is slower and requires more memory than running the original program. So it could be argued that on given hardware you could write a program in language X which uses all the memory and all available time, and it would not follow that you can write the same program in language Y. But on this level of abstraction we ignore hardware limits. We assume that the computer is fast enough and has enough memory for whatever purpose. (More precisely, we assume that in the available time a computer can do any finite number of computation steps, but not an infinite number. Memory is likewise unlimited, but in finite time only a finite amount of it can ever be used.)

So on this level of abstraction we only care about whether something can or cannot be implemented by a computer at all; we ignore time and space (i.e. speed and memory) constraints. Some problems can be solved by algorithms, others cannot. (There are other interesting levels of abstraction which do care about the time and space complexity of algorithms.)

Are all programming languages equal in the above sense? No. For example, although programmers generally want to avoid infinite loops in their programs, if you remove the potential for infinite loops from a programming language (e.g. in Pascal, forbid the "while" and "repeat" commands and the possibility of calling functions recursively), you lose the ability to simulate programming languages which have that potential, and with it the ability to solve some problems. On the other hand, some universal programming languages seem extremely simple -- a famous example is the Turing machine. This is very useful, because it is easier to do mathematical proofs about a simple language. For example, if you invent a new programming language X, all you have to do to prove its universality is write a Turing machine simulator in it, which is usually very simple.
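To make the universality argument concrete, here is a minimal Turing machine simulator, written in Python purely as a sketch (the rule format and the toy bit-flipping machine are my own invented example, not anything standard): any language that can express this loop can simulate any Turing machine, and hence is universal.

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine.

    rules maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left), 0 (stay), or +1 (right).
    The machine stops when it reaches the state "halt".
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Toy machine: invert every bit, then halt at the first blank cell.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_tm(flip, "0110"))  # prints 1001
```

The simulator itself is about a dozen lines, which is exactly the point of the paragraph above: proving a new language universal reduces to writing this small program in it.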

Now back to the original discussion... Eliezer suggests that brain functionality should be likened to computation, not to sensory input. A human brain is computationally universal, because (given enough time, pen and paper) we can simulate a computer program, so all brains should be equal when optimally used (differing only in speed and use of resources). In another comment he adds that the ability to compute isn't the same as the ability to understand. Therefore (my conclusion) what one human can understand, another human can at least correctly calculate without understanding, given a correct algorithm.
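The "calculate without understanding" conclusion can be illustrated with a toy example (my own, chosen for brevity): the two mechanical rules below compute the greatest common divisor whether or not the person executing them knows any number theory. They happen to be Euclid's algorithm, but that knowledge is never used during execution.

```python
def follow_rules(a, b):
    """Mechanically apply two rules; no understanding of gcd required.

    Rule 1: if the second number is 0, the answer is the first number.
    Rule 2: otherwise, replace (a, b) with (b, a mod b) and repeat.
    """
    while b != 0:
        a, b = b, a % b  # Rule 2
    return a             # Rule 1

print(follow_rules(252, 105))  # prints 21
```

A person with pen and paper could follow the same two rules and get the same answer, which is the sense in which universal computation transfers correctness without transferring understanding.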

Comment author: AspiringKnitter 05 April 2012 07:51:31PM 6 points [-]

Wow. That's really cool, thank you. Upvoted you, jeremysalwen and Nornagest. :)

Could you also explain why the HPMoR universe isn't Turing computable? The time-travel involved seems simple enough to me.

Comment author: nohatmaker 05 April 2012 06:07:17PM 3 points [-]
Comment author: AspiringKnitter 05 April 2012 07:14:19PM 0 points [-]

Thanks. :)

Comment author: Eliezer_Yudkowsky 04 April 2012 08:10:47PM 25 points [-]

It surprises people like Greg Egan, and they're not entirely stupid, because brains are Turing complete modulo the finite memory - there's no analogue of that for visible wavelengths.

Comment author: AspiringKnitter 05 April 2012 06:05:40AM 23 points [-]

If this weren't Less Wrong, I'd just slink away now and pretend I never saw this, but:

I don't understand this comment, but it sounds important. Where can I go and what can I read that will cause me to understand statements like this in the future?

Comment author: AspiringKnitter 05 April 2012 01:45:30AM 0 points [-]

Is there a new thread yet? If so, why can't I find it?

Comment author: EphemeralNight 15 January 2012 04:47:48PM 5 points [-]

Aren't there stories of lucid dreamers who were actually able to show a measurable improvement in a given skill after practicing it in a dream? I seem to recall reading about that somewhere. If true, those stories would be at least weak evidence supporting that idea.

On the other hand, this should mean that humans raised in cultural and social vacuums ought to be disproportionately talented at everything, and I don't recall hearing anything about that one way or the other; but then I can't imagine a way to actually do that experiment humanely.

Comment author: AspiringKnitter 05 April 2012 12:53:47AM 2 points [-]

this should mean that humans raised in cultural and social vacuums ought to be disproportionately talented at everything

And yet, they're actually worse at many cognitive tasks. Language, especially, is pretty hard for them to pick up after a certain point.

Comment author: scav 29 March 2012 08:19:58AM 5 points [-]

Indeed. That would imply that our shared goal of raising the sanity waterline would cause most of the population to drown :)

Mind you, I like that the OP is asking what the consequences would be. However my guess is: more people making slightly better decisions some of the time, and with no obvious mechanism for "letting other things slip", I don't see a downside.

Comment author: AspiringKnitter 04 April 2012 03:39:18AM *  0 points [-]

What if the problem isn't that it's too cognitively taxing, but that, applied in the sloppy way most people apply their heuristics, it could lead to irrational choices or selfish behavior?
