Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Multiheaded 19 November 2012 02:35:23PM *  0 points

As long as other people are polarized about some issue, your opinion about the conflict in Gaza is essentially a decision to join "team Israel" or "team Palestine". This choice is absolutely unrelated to the actual people killing each other in the desert. This choice is about whether Joe will consider you an ally, and Jane an enemy, or the other way around. With high probability, neither Joe nor Jane is personally related to the people killing each other in the desert, and their choices were also based on their preference to be on the same team with some other people.

Data point: you probably know I'm left-wing (in an eccentric way) - and yet, frankly, I'm very "pro-Israel" (although not fanatically so), and think that all the cool, nice, cosmopolitan, compassionate lefty people who protest "Zionist aggression" should go fuck themselves on this particular issue. This includes e.g. Noam Chomsky, whom I otherwise respect highly. And I realize that this lands me in the same position as various far-right types whom I really dislike, yet I'm quite fine with that too.

Yes, I'm not neurotypical. However, you know that I can and do get kinda mind-killed on other political topics. So I'm not satisfied by your explanation.

Comment author: AlphaOmega 19 November 2012 09:51:58PM *  2 points

I think what Viliam_Bur is trying to say in a rather complicated fashion is simply this: humans are tribal animals. Tribalism is perhaps the single biggest mind-killer, as you have just illustrated.

Am I correct in assuming that you identify yourself with the tribe called "Jews"? Having no tribal dog in this particular fight, I can't get too worked up about it, though if the conflict involved, say, Irish people, I'm sure I would feel rather differently. This is just a reality that we should all acknowledge: our attempts to "overcome bias" with respect to tribalism are largely self-delusion, and perhaps even irrational.

Comment author: AlphaOmega 17 November 2012 01:37:18AM 1 point

Just a gut reaction, but this whole scenario sounds preposterous. Do you guys seriously believe that you can create something as complex as a superhuman AI, and prove that it is completely safe before turning it on? Isn't that as unbelievable as the idea that you can prove that a particular zygote will never grow up to be an evil dictator? Surely this violates some principles of complexity, chaos, quantum mechanics, etc.? And I would also like to know who these "good guys" are, and what will prevent them from becoming "bad guys" when they wield this much power. This all sounds incredibly naive and lacking in common sense!

Comment author: AlphaOmega 08 November 2012 10:38:00PM 0 points

I can conceive of a social and technological order where transhuman power exists, but you may or may not want to live in it. This is a world where there are god-like entities doing wondrous things, and humanity lives in a state of awe and worship at what they have created. To like living in this world would require that you adopt a spirit of religious submission, perhaps not so different from modern-day monotheists who bow five times a day to their god. This may be the best post-Singularity order we can hope for.

In response to against "AI risk"
Comment author: AlphaOmega 12 April 2012 05:48:26PM *  -1 points

I am going to assert that the fear of unfriendly AI over the threats you mention is a product of the same cognitive bias which makes us more fascinated by evil dictators and fictional dark lords than more mundane villains. The quality of "evil mind" is what really frightens us, not the impersonal swarm of "mindless" nanobots, viruses or locusts. However, since this quality of "mind," which encapsulates such qualities as "consciousness" and "volition," is so poorly understood by science and so totally undemonstrated by our technology, I would further assert that unfriendly AI is pure science fiction which should be far down the list of our concerns compared to more clear and present dangers.

Comment author: TheOtherDave 10 April 2012 09:09:56PM 6 points

Robots taking human jobs is another step toward bringing the curtain down permanently on the dead-end primate dramas

Well, so is large-scale primate extermination leaving an empty husk of a planet.

The question is not so much whether the primates exist in the future, but what exists in the future and whether it's something we should prefer to exist. I accept that there probably exists some X such that I prefer (X + no humans) to (humans), but it certainly isn't true that for all X I prefer that.

So whether bringing that curtain down on dead-end primate dramas is something I would celebrate depends an awful lot on the nature of our "mind children."

Comment author: AlphaOmega 11 April 2012 07:08:48PM *  -2 points

OK, but if we are positing the creation of artificial superintelligences, why wouldn't they also be morally superior to us? I find this fear of a superintelligence wanting to tile the universe with paperclips absurd; why is that likely to be the summum bonum to a being vastly smarter than us? Aren't smarter humans generally more benevolent toward animals than stupider humans and animals? Why shouldn't this hold for AIs? And if you say that the AI might be so much smarter than us that we will be like ants to it, then why would you care if such a species decides that the world would be better off without us? From a larger cosmic perspective, at that point we will have given birth to gods, and can happily meet our evolutionary fate knowing that our mind children will have vastly more interesting lives than we ever could have. So I don't really understand the problem here. I guess you could say that I have faith in the universe's capacity to evolve life toward more intelligent and interesting configurations, because for the last several billion years this has been the case, and I don't see any reason to think that this process will suddenly reverse itself.

Comment author: AlphaOmega 15 June 2011 09:42:51PM *  1 point

How useful are these surveys of "experts", given how wrong they've been over the years? If you conducted a survey of experts in 1960 asking questions like this, you probably would've gotten a peak probability for human level AI around 1980 and all kinds of scary scenarios happening long before now. Experts seem to be some of the most biased and overly optimistic people around with respect to AI (and many other technologies). You'd probably get more accurate predictions by taking a survey of taxi drivers!

Comment author: AlphaOmega 14 June 2011 12:45:15AM *  -1 points

Since I'm in a skeptical and contrarian mood today...

  1. Never. AI is Cargo Cultism. Intelligence requires "secret sauce" that our machines can't replicate.
  2. 0
  3. 0
  4. Friendly AI research deserves no support whatsoever
  5. AI risks outweigh nothing because 0 is not greater than any non-negative real number
  6. The only important milestone is the day when people realize AI is an impossible and/or insane goal and stop trying to achieve it.

Comment author: AlphaOmega 06 June 2011 07:37:14AM *  0 points

“Pure logical thinking cannot yield us any knowledge of the empirical world; all knowledge of reality starts from experience and ends in it. Propositions arrived at by pure logical means are completely empty of reality.” –Albert Einstein

I don't agree with Al here, but it's a nice quote I wanted to share.

Comment author: AlphaOmega 04 June 2011 04:15:47AM 6 points

Have you been doing anything in particular to cause your willpower to increase? What are some effective techniques for increasing willpower?

Comment author: AlphaOmega 19 May 2011 07:57:51PM *  4 points

What bothers me is that the real agenda of the LessWrong/Singularity Institute folks is being obscured by all these abstract philosophical discussions. I know that Peter Thiel and other billionaires are not funding these groups for academic reasons -- this is ultimately a quest for power.

I've been told by Michael Anissimov personally that they are working on real, practical AI designs behind the scenes, but how often is this discussed here? Am I supposed to feel secure knowing that these groups are seeking the One Ring of Power, but it's OK because they've written papers about "CEV" and are therefore the good guys? He who can save the world can control it. I don't trust anyone with this kind of power, and I am deeply suspicious of any small group of intelligent people that is seeking power in this way.

Am I paranoid? Absolutely. I know too much about recent human history and the horrific failures of other grandiose intellectual projects to be anything else. Call me crazy, but I firmly believe that building intelligent machines is all about power, and that everything else (i.e. most of this site) is conversation.
