BenRayfield comments on How to detonate a technology singularity using only parrot level intelligence - new meetup.com group in Silicon Valley to design and create it - Less Wrong

-15 Post author: BenRayfield 31 July 2011 06:27PM




Comment author: mwengler 01 August 2011 03:11:07PM 0 points [-]

Good point about speech. Many of the comments here lead me to think of something I see too little of on this board: much of what we talk about in AI has clear, interesting, and educational analogs in NI (Natural Intelligence). We certainly have a problem with unfriendly NIs (Hitler, Stalin, Pol Pot, etc.). Further, the substantial structure of government in the Western world (at least) shows we do a medium-good job of determining a CEV for humanity. It also suggests that a CEV will likely always be a compromise: an optimum-like balance between components that are truly and actually different.

Since starting to read this site, I have come to think that humanity has a collective intelligence far beyond that of the individuals in it. The difference between one human in isolation and one chimp in isolation is probably noticeable but small. But with much higher bandwidth between individuals, humanity beats the pants off chimps (we wear pants; they do not).

Your insight about the role of speech in providing the link between brains is a good one. The results of the project proposed above should be analyzed with respect to how they match what is achieved with voice and how they might differ. We might learn something that way.

Comment author: BenRayfield 02 August 2011 04:23:05AM -2 points [-]

Yes, there is a strong collective mind made of communication through words, but it's a very self-deceptive mind. It tries to redefine common words, and thereby redefine ideas that other parts of the mind do not intend to redefine, and those parts of the mind later find their memory has been corrupted. It's why people come to expect to pay money when they agree to get something "free". Intuition is much more honest: it's based on floating-point values at the subconscious level instead of symbols at the conscious level. By tunneling between the temporal lobes of people's brains, Human AI Net will bypass the conscious level and access the core of the problems that lead to conscious disagreements. Words are a corrupted interface, so any AI built on them will have errors.

To the LessWrong and Singularity community, I offered an invitation to exert influence by designing details of this plan for a singularity. Downvoting an invitation will not cancel the event, but if you can convince me that my plan may result in Unfriendly AI, then I will cancel it. Since I have considered many possibilities, I do not expect such a reason exists. Would your time be better spent calculating the last digit of the friendliness probability over all of mind space, or fixing any problems you see in a singularity plan that is already in progress and will finish before yours?