All of BenRayfield's Comments + Replies

Yes, there is a strong collective mind made of communication through words, but it's a very self-deceptive mind. It tries to redefine common words to redefine ideas that other parts of the mind do not intend to redefine, and those parts of the mind later find their memory has been corrupted. It's why people start expecting to pay money when they agree to get something "free". Intuition is much more honest. It's based on floating points at the subconscious level instead of symbols at the conscious level. By tunneling between the temporal lobes of people's... (read more)

6Armok_GoB
Umm... so how would this be different from speech? Does the hand have that much higher bandwidth than the human voice? I doubt any CEV would result, but I'm more inclined to think it's safe now.
2orthonormal
This, as well as the OP, illustrates Failure by Affective Analogy.
0Armok_GoB
Even ignoring the technical problems, and the fact that nobody knows how and the risk is too big, there's still a huge difference between the CEV of humanity and the CEV of a-bunch-of-guys-on-the-internet. You might get a "none of us is as cruel as all of us" type Anonymous, for example.

It's not a troll. It's a very confusing subject, and I don't know how to explain it better unless you ask specific questions.

4timtyler
It sounds rather as though you are trying to reinvent the internet.
5Vladimir_Nesov
Heh, I misread this as saying "It's not a troll, it's a very confused subject" and upvoted before realizing my mistake!

When he says "intelligent design", he is not referring to the common theory that there is some god, not subject to the laws of physics, which created physics and everything in the universe. He says reality created itself as a logical consequence of having to be a closure. I don't agree with everything he says, but based only on the logical steps that lead up to that, he and Yudkowsky should have interesting things to talk about. Both are committed to obeying logic and getting rid of their assumptions, so there should be no unresolvable conflicts, but I expect lots of conflicts to start with.

I suggest Christopher Michael Langan, as roland said. His "Cognitive-Theoretic Model of the Universe (CTMU)" ( download it at http://ctmu.org ) is very logical and conflicts in interesting ways with how Yudkowsky thinks of the universe at the most abstract level. Langan derives the need for an emergent unification of "syntax" (like the laws of physics) and "state" (like positions and times of objects) and that the universe must be a closure. I think he means the only possible states/syntaxes are very abstractly similar to quin... (read more)

0Vaniver
He appears to be an ID proponent, though that is probably a simplification of his actual position.

The cache problem is worst for language because it's usually made entirely of cache. Most words/phrases are understood by example instead of by reading a dictionary or thinking of your own definitions. I'll give an example of a phrase most people have an incorrect cache for. Then I'll try to cause your cache of that phrase to be updated by making you think about something relevant to the phrase which is not in most people's cache of it. It's something which, by definition, should be included but for other reasons will usually not be included.

"Affirmative a... (read more)

I think someone needs to put forward the best case they can find that human brain emulations have much of a chance of coming before engineered machine intelligence.

I misunderstood. I thought you were saying it was your goal to prove that, rather than that you thought it would not be proven. My question does not make sense.

0timtyler
Thanks for clarifying!

Why do you consider the possibility of smarter-than-human AI at all? The difference between the AI we have now and that is bigger than the difference between those two technologies you are comparing.

0timtyler
I don't understand why you are bothering asking your question - but to give a literal answer, my interest in synthesising intelligent agents is an offshoot of my interest in creating living things - which is an interest I have had for a long time and share with many others. Machine intelligence is obviously possible - assuming you have a materialist and naturalist world-view like mine.

It is the fashion in some circles to promote funding for Friendly AI research as a guard against the existential threat of Unfriendly AI. While this is an admirable goal, the path to Whole Brain Emulation is in many respects more straightforward and presents fewer risks.

I believe Eliezer expressed it as something that tells you that even if you think it would be right (because of your superior ability) to murder the chief and take over the tribe, it still is not right to murder the chief and take over the tribe.

That's exactly the high awareness I was talk

... (read more)
1JamesAndrix
I agree entirely that humans are not friendly. Whole brain emulation is humanity-safe if there's never a point at which one person or small group can run much faster than the rest of humanity (including other uploads). The uploads may outpace us, but if they can keep each other in check, then uploading is not the same kind of human-values threat. Even an upload singleton is not a total loss if the uploads have somewhat benign values. It is a crippling of the future, not an erasure.

My conclusions: It seems there is Far-Near and Near-Near, and if you ever again find yourself with time to meta-think that you are operating in Near mode... then you're actually in Far mode. And so I will be more suspicious of hypothetical thought experiments from now on.

When one watches the movie series called "Saw", one will experience the "near mode" of thinking much more than with the examples given in this thread. "Saw" is about people trapped in various situations, enforced by mechanical means only (no psychotic perso... (read more)

Not necessarily. It's just that we are very far from being perfectly rational.

You're right. I wrote "rational minds" in general when I was thinking about the most rational few of people today. I did not mean any perfectly rational mind exists.

Most or all human brains tend to work better if they experience certain kinds of things that may include wasteful parts, like comedy, socializing, and dreaming. It's not rational to waste more than you have to. Today we do not have enough knowledge and control over our minds to optimize away all our wastef... (read more)

We are Borg. You will be assimilated. Resistance is futile. If Star Trek's Borg Collective came to assimilate everyone on Earth, Eliezer Yudkowsky would engage them in logical debate until they agreed to come back later, after our technology had increased exponentially for some number of years, making us a more valuable thing for them to assimilate. Also, he would underestimate how fast our technology increases by just enough that when the Borg came back, we would be the stronger force.

Why is this posted to LessWrong? What does it have to do with being less w

... (read more)
6DanArmak
Not necessarily. It's just that we are very far from being perfectly rational.