Comment author: timtyler 17 March 2012 03:40:20PM 4 points [-]

No discussion of open source? Ben favours open source, SIAI want to "keep it secret"...

Comment author: cafesofie 17 March 2012 09:26:10PM *  5 points [-]

I find it a little strange that people never talk about this.

Ignore, for a moment, your personal assessment of Goertzel's chance of creating AGI. What would you do, or what would you want done, if you suspected an open source project was capable of succeeding? Even if the developers acknowledged the notion of FAI, there's nothing stopping any random person on the internet from cloning their repository and doing whatever they like with the code.

Comment author: anotherblackhat 15 March 2012 06:46:00PM 0 points [-]

From chapter 14

"Turning into a cat doesn't even BEGIN to compare to this. You know right up until this moment I had this awful suppressed thought somewhere in the back of my mind that the only remaining answer was that my whole universe was a computer simulation like in the book Simulacron 3 but now even that is ruled out because this little toy ISN'T TURING COMPUTABLE! A Turing machine could simulate going back into a defined moment of the past and computing a different future from there, an oracle machine could rely on the halting behavior of lower-order machines, but what you're saying is that reality somehow self-consistently computes in one sweep using information that hasn't... happened... yet..."

Comment author: cafesofie 16 March 2012 03:47:11AM *  -2 points [-]

Harry is stated to have access to only about half of the easier parts of the sequences.

I assume the timeless physics sequence is one of the parts he doesn't have access to...

From the timeless physics sequence:

"I can no longer conceive that there might really be a universal time, which is somehow 'moving' from the past to the future. This now seems like nonsense."

"Something like Barbour's timeless physics has to be true, or I'm in trouble: I have forgotten how to imagine a universe that has 'real genuine time' in it."

From this I read that Harry's mistake is the notion that there are things that "[haven't] happened yet".

http://lesswrong.com/lw/qp/timeless_physics/

Comment author: jimmy 05 February 2012 10:32:56PM *  17 points [-]

Still working on hypnosis

I picked up Python and wrote a program that goes onto the internet and hypnotizes people, so I can throw some real empiricism at the problem now.

It's really paying off now that I can do things like go snorkeling with my girlfriend, who had been terrified of the ocean her whole life, and snap my fingers to make people stop craving sugar :)

I've been writing up my thoughts as I go on my blog.

Comment author: cafesofie 05 February 2012 10:58:56PM *  4 points [-]

Is the source code to Hypnobot available? You seem to make some pretty strong claims about its effectiveness, but I'm not about to grind Omegle chats until I run into it.

Comment author: cafesofie 25 January 2012 05:50:37PM 0 points [-]

Another New Yorker here showing interest.

[LINK] Neuroscientists Find That Status within Groups Can Affect IQ

5 cafesofie 23 January 2012 07:52PM

http://media.caltech.edu/press_releases/13492

To investigate the impact of social context on IQ, the researchers divided a pool of 70 subjects into groups of five and gave each individual a computer-based IQ test. After each question, an on-screen ranking showed the subjects how well they were performing relative to others in their group and how well one other person in the group was faring. All of the subjects had previously taken a paper-and-pencil IQ test, and were matched with the rest of the group so that they would each be expected to perform similarly on an IQ test.

At the outset, all of the subjects did worse than expected on this "ranked group IQ task." But some of the subjects, dubbed High Performers, were able to improve over the course of the test while others, called Low Performers, continued to perform below their expected level. By the end of the computer-based test, the scores of the Low Performers dropped an average of 17.4 points compared to their performance on the paper-and-pencil test.

"What we found was that sensitivity to the social feedback of the rankings profoundly altered some people's ability to express their cognitive capacity," Quartz says. "So we get this really quite dramatic downward spiraling of one group purely because of their sensitivity to this social feedback." Since so much of our learning—from the classroom to the work team—is socially situated, this study suggests that individual differences in social sensitivity may play an important role in shaping human intelligence over time.

Comment author: cafesofie 23 June 2011 08:56:46PM *  3 points [-]

For example, if their argument is that it is impossible to judge another culture's activities as being 'evil', I offer up the idea that it's part of my culture to repeatedly thwap people I disagree with on the head with a stick, and thus they have no justification for telling me to stop.

Not being able to judge another culture's activities as intrinsically evil isn't the same as having to like everything everyone else does.

I don't think your "stick test" is worth anything: the person being hit can invoke desire utilitarianism as their justification and still not claim that your action is "evil".

Comment author: jimrandomh 05 April 2011 02:56:33PM 7 points [-]

I just took a look at Ben Goertzel's CAV (Coherent Aggregated Volition). As far as I can tell, it includes people's death-to-outgroups volitions unmodified and thereby destroys the world, whereas CEV (which came first) doesn't. He presents the desire to murder as an example but never addresses it, then goes on to talk about running experiments on aggregating the volitions of trivial, non-human agents. That looks like a serious rationality failure in the direction of ignoring danger, and I get the same impression from his other writing, too.

The more of Ben Goertzel's writing I read, the less comfortable I am with him controlling OpenCog. If OpenCog turns into a seed AI, I don't think it's safe for him to be the one making the launch/no-launch decision. I also don't think it's safe for him to be setting directions for the project before then, either.

Comment author: cafesofie 06 April 2011 06:08:19PM *  1 point [-]

OpenCog is open source anyway: anything Goertzel can do can be done by anyone else. If Goertzel decided it wasn't safe to run, what's stopping someone else from running it?