All of AlphaOmega's Comments + Replies

I think what Viliam_Bur is trying to say in a rather complicated fashion is simply this: humans are tribal animals. Tribalism is perhaps the single biggest mind-killer, as you have just illustrated.

Am I correct in assuming that you identify yourself with the tribe called "Jews"? Having no tribal dog in this particular fight myself, I can't get too worked up about it, though if the conflict involved, say, Irish people, I'm sure I would feel rather differently. This is just a reality that we should all acknowledge: our attempts to "overcome bias" with respect to tribalism are largely self-delusion, and perhaps even irrational.

-2Multiheaded
I might be identifying myself with the tribe "Nice polite intelligent occasionally badass people who live in a close-knit national community under a liberal democracy", but I really couldn't give a damn about their relation to the Jewish people I know, or to Jewish history, or to any such stuff. I just look at the (relative) here and now of the Middle East and what the people there seem to act like. I don't personally know anyone from Israel, I just find the Israeli nation massively more sympathetic than its hostile neighbours, observing from afar. I don't know if you meant something like that or not.
4[anonymous]
Don't take too much credit. Steve_Rayhawk generated the comment by actively trying to help. But if his goal was to engage you in thoughtful and productive discussion, he probably failed, and it was probably a waste of his time to try. There happened to be this positive externality of an excellent comment - but that's the kind of thing that's generated as a result of doing your best to understand a complex issue, not adversarially mucking up the conversation about it. Somehow I doubt that's the true cause of your behavior, but I'd be delighted to find out that I'm wrong.

Just a gut reaction, but this whole scenario sounds preposterous. Do you guys seriously believe that you can create something as complex as a superhuman AI, and prove that it is completely safe before turning it on? Isn't that as unbelievable as the idea that you can prove that a particular zygote will never grow up to be an evil dictator? Surely this violates some principles of complexity, chaos, quantum mechanics, etc.? And I would also like to know who these "good guys" are, and what will prevent them from becoming "bad guys" when they wield this much power. This all sounds incredibly naive and lacking in common sense!

4Psy-Kosh
The idea is not "take an arbitrary superhuman AI and then verify it's destined to be well behaved" but rather "develop a mathematical framework that allows you from the ground up to design a specific AI that will remain (provably) well behaved, even though you can't, for arbitrary AIs, determine whether or not they'll be well behaved."
1Kevin
I think this comment is disingenuous, given your statements that the extinction of humanity is inevitable and the fact that your website uses evil AI imagery. http://lesswrong.com/lw/b5i/a_primer_on_risks_from_ai/64dq

The main way complexity of this sort would be addressable is if the intellectual artifact that you tried to prove things about were simpler than the process that you meant the artifact to unfold into. For example, the mathematical specification of AIXI is pretty simple, even though the hypotheses that AIXI would (in principle) invent upon exposure to any given environment would mostly be complex. Or for a more concrete example, the Gallina kernel of the Coq proof engine is small and was verified to be correct using other proof tools, while most of the comp...
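(For concreteness, and as a sketch only: the "pretty simple" specification alluded to here is Hutter's AIXI action-selection rule, which fits in a single expectimax expression over programs for a universal Turing machine U. Roughly, with k the current cycle, m the horizon, a/o/r the actions, observations, and rewards, q ranging over programs, and ℓ(q) the length of q:

$$ a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big( r_k + \cdots + r_m \big) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

All of the complexity the comment points to lives in the environment models this expression implicitly sums over, not in the specification itself; and the rule is uncomputable, so it is a definition to prove things about rather than a program to run.)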

I can conceive of a social and technological order where transhuman power exists, but you may or may not want to live in it. This is a world where there are god-like entities doing wondrous things, and humanity lives in a state of awe and worship at what they have created. To like living in this world would require that you adopt a spirit of religious submission, perhaps not so different from modern-day monotheists who bow five times a day to their god. This may be the best post-Singularity order we can hope for.

I am going to assert that the fear of unfriendly AI over the threats you mention is a product of the same cognitive bias which makes us more fascinated by evil dictators and fictional dark lords than more mundane villains. The quality of "evil mind" is what really frightens us, not the impersonal swarm of "mindless" nanobots, viruses or locusts. However, since this quality of "mind," which encapsulates such qualities as "consciousness" and "volition," is so poorly understood by science and so totally undemo...

1Zetetic
I'm going to assert that it has something to do with who started the blog.

OK, but if we are positing the creation of artificial superintelligences, why wouldn't they also be morally superior to us? I find this fear of a superintelligence wanting to tile the universe with paperclips absurd; why is that likely to be the summum bonum to a being vastly smarter than us? Aren't smarter humans generally more benevolent toward animals than stupider humans and animals? Why shouldn't this hold for AI's? And if you say that the AI might be so much smarter than us that we will be like ants to it, then why would you care if such a species de...

6TheOtherDave
If morality is a natural product of intelligence, without reference to anything else, then they would be. If morality is not solely a product of intelligence, but also depends on some other thing X in addition to intelligence, then they might not be, because of different values of X. Would you agree with that so far? If not, you can ignore the rest of this comment, as it won't make much sense.

If so... a lot of folks here believe that morality is not solely a product of intelligence, but also depends on some other things, which we generally refer to as values. Two equally intelligent systems with different values might well have different moralities. If that's true, then if we want to create a morally superior intelligence, we need to properly engineer both its intelligence and its values.

It isn't, nor does anyone claim that it is. If you've gotten the impression that the prevailing opinion here is that tiling the universe with paperclips is a particularly likely outcome, I suspect you are reading casually and failing to understand underlying intended meanings.

Maybe? I don't know that this is true. Even if it is true, it's problematic to infer causation from correlation, and even more problematic to infer particular causal mechanisms. It might be, for example, that expressed benevolence towards animals is a product of social signaling, which correlates with intelligence in complex ways. Or any of a thousand other things might be true.

Well, for one thing, because (as above) it might not even hold for humans outside of a narrow band of intelligence levels and social structures. For another, because what holds for humans might not hold for AIs if the AIs have different values.

Because I might prefer we not be exterminated.

If that makes you happy, great. It sounds like you're insisting that it ought to make me happy too. I disagree. There are many types of gods I would not be happy to have replaced humanity with.

That's fine. You aren't obligated to. Sure,

It seems to me that humanity is faced with an epochal choice in this century, whether to:

a) Obsolete ourselves by submitting fully to the machine superorganism/superintelligence and embracing our posthuman destiny, or

b) Reject the radical implications of technological progress and return to various theocratic and traditionalist forms of civilization which place strict limits on technology and consider all forms of change undesirable (see the 3000-year reign of the Pharaohs, or the million-year reign of the hunter-gatherers)

Is there a plausible third ...

7TheOtherDave
Well, so is large-scale primate extermination leaving an empty husk of a planet. The question is not so much whether the primates exist in the future, but what exists in the future and whether it's something we should prefer to exist. I accept that there probably exists some X such that I prefer (X + no humans) to (humans), but it certainly isn't true that for all X I prefer that. So whether bringing that curtain down on dead-end primate dramas is something I would celebrate depends an awful lot on the nature of our "mind children."
0timtyler
Sure. I'm interested in all end-of-the-world cults. The more virulent their memes the better. A cult with no respect for history? Those who don't remember the past are doomed to repeat it.

How useful are these surveys of "experts", given how wrong they've been over the years? If you conducted a survey of experts in 1960 asking questions like this, you probably would've gotten a peak probability for human level AI around 1980 and all kinds of scary scenarios happening long before now. Experts seem to be some of the most biased and overly optimistic people around with respect to AI (and many other technologies). You'd probably get more accurate predictions by taking a survey of taxi drivers!

3Zetetic
You seem to presume that the quality of expert opinion on a subject is somehow time/person invariant. It seems fairly intuitive that we should expect predictions of technological development to be more accurate the closer we come to achieving them (though I would like to see some data on that), as we come to grips with what the real difficulties are. So yes, the predictions are likely going to be inaccurate, but they should become less so as we better understand the complications. A prediction of "it's going to happen in 20 years" from a researcher forty years ago, when the field was in its infancy and we had very little idea of what we were doing, is not as good as a prediction of "given all that we have learned about the difficulty of the problems over the last 20 years, it's going to happen sometime in the next few decades".

Right, but the point is not for us to learn whether AI is an existential risk. The point is to find out whether mainstream academic AI people (and others) think it is. It's an attitudes survey, not a fact-finding mission.

Since I'm in a skeptical and contrarian mood today...

  1. Never. AI is Cargo Cultism. Intelligence requires "secret sauce" that our machines can't replicate.
  2. 0
  3. 0
  4. Friendly AI research deserves no support whatsoever
  5. AI risks outweigh nothing because 0 is not greater than any non-negative real number
  6. The only important milestone is the day when people realize AI is an impossible and/or insane goal and stop trying to achieve it.
0[anonymous]
Upvoted because this appears an honest answer to the question, but it'd be useful if you said why you considered it an absolute certainty that no machine will ever show human-level intelligence. Personally I wouldn't assign probability 0 even to events that appear to contradict the most basic laws of physics, since I don't have 100% confidence in my own understanding of physics...
3jimrandomh
According to the web site linked in your profile, you are attempting to actively poison the memetic ecology by automated means. I'm not sure how to answer that, given that the whole site goes far over the top with comic book villainy, except to say that this particular brand of satire is probably dangerous to your mental health.
3JenniferRM
Yes, thank you, that's much more precise :-) Fears of competitive exclusion and loss of autonomy seem entirely reasonable issues to be raised by anyone who thoughtfully considers the status quo trajectory of exponential technological improvements and looming resource limitations. However, it seems to me that singularitarians are generally aiming to respond honestly and ethically to these concerns, rather than actively doing something that would make the concerns more pressing. If this isn't clear to people who know enough to troll with as much precision and familiarity as you, then I'd guess that something might be going wrong somewhere. Can you imagine something that you could see in this community that would allay some of your political concerns? What would constitute counterevidence for future scenarios whose prospects make you unhappy?
4MixedNuts
(Emotional level note: Please upgrade your politeness level. I've been rude earlier, but escalating is a bad move even then; I'm de-escalating now. Your current politeness level is generating signs of treating debate as a conflict, and of trolling.)

Can you clarify that phrase? I can only parse it as "deriving your ethics from", but ethical systems are derived from everyday observations like "Hey, it seems bad when people die", then reasoning about it. Then the ethics exist, and "intergalactic civilizations are desirable" comes from them. Maybe you meant "designating those notions as the most desirable things"? They are consequences of the ethical system, yeah, but "The thing you desire most is impossible", while bad news, is no reason to change what you desire. (Which is why I called it sour grapes.)

You seem to confuse "A positive Singularity is desirable" (valuing lives, ethical systems) and "A positive Singularity is likely" (pattern-matching with sci-fi). You are invoking the absurdity heuristic. "Intergalactic civilizations and singularities pattern-match science fiction, rather than newspapers." This isn't bad if you need a three-second judgement, but is quite fallible (e.g., relativity, interracial marriage, atheism). It would be better to engage with the meat of the argument (why smarter-than-human intelligence is possible in principle, why AIs go flat or FOOM, why the stakes are high, why a supercritical AI is likely in practice (I don't actually know that one)), pinpoint something in particular, and say "That can't possibly be right" (backing it up with a model, a set of historical observations, or a gut feeling).

It's common knowledge on LW that both the rationality thing (LW) and the AI things (SIAI) are at unusually high risk of becoming cultish. If you can point to a particular problem, please do so; but reasoning by analogy ("They believe weird things, so do religions, therefore they're like a religion") proves little. (You know what else containe

See, this is one of the predictions people get totally wrong when they try to interpret singularity activism using religion as a template. It's not "saving the universe from the heathens"; it's "optimizing the universe on behalf of everyone, even people who are foolish, shortsighted, and/or misinformed".

Well-formed criticism (even if mean-spirited or uncharitable) is very useful, because it helps identify problems that can be corrected once recognized, and it reduces the likelihood of an insanity spiral due to people agreeing with each othe...

"Rethink his ethics" because you think his goal is impossible? That's the purest example of sour grapes, like, ever.

Also, jaded cynicism is worthless. If civilization is collapsing, go prop it up.

“Pure logical thinking cannot yield us any knowledge of the empirical world; all knowledge of reality starts from experience and ends in it. Propositions arrived at by pure logical means are completely empty of reality.” –Albert Einstein

I don't agree with Al here, but it's a nice quote I wanted to share.

Have you been doing anything in particular to cause your willpower to increase? What are some effective techniques for increasing willpower?

4D_Malik
What seems to have worked really well for me is just practising willpower by intentionally exposing myself to pain and stress. But then again, that requires willpower. For example: eating restrictions, n-back, exercise, music starvation, standing up or squatting while doing things, not watching TV, not playing games.
0Richard_Kennaway
Yes. Short version: What do you want? That is your reason to be rational. That seems accurate to you, because power is what you want. You have said this explicitly yourself: "Well I just want to rule the world." Because power is what you want, you assume that it is what everyone else wants. So when you read that rationality "wins", you interpret winning as "defeating other people". That is only "winning", in the sense of the slogan "rationality wins", if what you want is to subjugate or exterminate "the competition". You see everyone as "the competition", and your solution is to take over the world. Bertrand Russell gives you delightful cold prickles (I'm sure you have no use for warm fuzzies) because you hear in him something you want to hear: everyone is everyone's enemy. (BTW, here's some context for that phrase of his. Plato's totalitarian dream, 1931 edition.) You are the would-be Terminator God, the self-styled AlphaOmega. How's that going? What do you do when you're not reading LessWrong and being pissed on? Not that LW karma means anything in the greater scheme of things, but you keep coming back for more. It is said that he who would be Pope must think of nothing else. How much more so for he who would rule the world!
1ArisKatsaris
I certainly wouldn't mind discussing it, just not with someone who is behaving like a rude jerk and using trollish attempts to annoy people into discussing it. "Terminator-like god of reason"? Seriously? And in every past post of yours you seem to be attributing characteristics to people that you ought to know they don't have: "Oh, you ought to support banning birth control, then; Oh, you are like genocidal criminals, then; Oh, you're like megalomaniacal villains, then." So, no, no "sacred cows" here, not for me at least. Just the lack of desire on my part to engage in conversation with someone as unpleasant as you currently are.
1beriukay
Then I guess you have a decision to make: do you want to be happy, or do you want all the things you care about to have the best chance possible of working out how you want them to? Personally, having a computer not work, or having intermittent beeping for eight hours, or pointlessly arguing DOESN'T make me happier, and if I can fix such problems I will. But if you want to stew in the unexamined life, allowing other people or situations to control you because you feel like thinking makes you unhappy, then I can only hope that you make as little of an impact on the world as possible before you go happily into the grave.
5beriukay
I think the point of the post is a bit more meta than "people should use rationality [...]". More like: "I am allowed to think that there is a right way for people to think".

I like your first question. Not having had much of a chance to taboo my words, and not wanting to get lost in a maze of words, I would describe it thusly: It is noticing a challenge. Then poking and prodding the challenge with tools until you have some ideas for how it works. Then using the ideas to poke and prod it in ways that will eliminate the wrong ideas. What remains should be ideas that you aren't sure about. You can compare those ideas with ideas you already held before the challenge, and the ones that disagree need to be tested some more. With enough effort, eventually you will have gotten rid of all the ideas that obviously don't pass your tests, and you will have a collection of ideas that you have tested that don't disagree with one another. Or ones that are clear on how they disagree, since there will always be open questions.

In the spirit of The 5-Second Level, I will name three concrete examples:

- Noticing my computer fails to wake up properly from hibernation, I note that I recently replaced my video cards, so I update video card drivers. That doesn't fix the problem, so I google it with words that I think will turn up the best results and find that some people had this problem after flashing the firmware on the video card, but that doing an RMA fixed their problem. I begin the process of RMAing, but then notice that the broader class of problems is associated with driver issues. I find a driver updater program that seems trustworthy, and find that I have 2 out-of-date drivers associated with a couple of my hard drives. I update them, and find that the computer no longer has the problem I complained about. I stop looking.

- There is a loud, distracting, unknown, short-lived beep at work that promises to bother me all shift. I start a timer when a beep goes off and stop it whe
0Broggly
Depends what you mean by "based on" (and to a lesser extent "prophet", if you want to argue about North Korea, China and the old USSR). People seem to prefer, for example, America over Iran as a place to live. Hang on, that's a bit of a non sequitur. Just because rationalists won't become a majority within the current generational cohort doesn't mean we're shrinking in number, or even in proportion. I haven't seen the statistics for other countries (where coercion and violence likely play some role in religious matters), but in Western nations non-religious people have been increasing in number. In my own nation we're seeing the priesthood age and shrink (implying that the proportion of "religious" people committed enough to make it their career is falling), and in my city Adelaide, the "City of Churches", quite a few churches have been converted into shops and nightclubs.
8Manfred
Let me try to guess your reasoning. If you have "I want to be rational" as one of your terminal values, you will decide that your human brain is a mere hindrance, and so you will turn yourself into a rational robot. But since we are talking about human values, it should be noted that smelling flowers, love, and having family are also among your terminal values. So this robot would still enjoy smelling flowers, love, and having family - after all, if you value doing something, you wouldn't want to stop liking it, because if you didn't like it you would stop doing it. But then, because rational agents always get stuck in genocidal cul-de-sacs, this robot who still feels love is overwhelmed by the need to kill all humans, leading to the extermination of the human race. Since I probably wasn't close at all, maybe you could explain?
0[anonymous]
What would the least repugnant possible future look like? -- keeping in mind that all the details of such a future would have to actually hold together? (Since "least repugnant possible", taken literally, would mean the details would hold together by coincidence, consider instead a future that were, say, one-in-a-billion for its non-repugnance.)

If bringing about the least repugnant future you could were your only goal, what would you do -- what actions would you take? When I imagine those actions, they resemble rationality, including trying to develop formal methods to understand as best you can which parts of the world are value systems which deserve to be taken into account for purposes of defining repugnance, how to avoid missing or persistently disregarding any value systems that deserved to be taken into account, how to take those value systems into account even where they seem to contradict each other, and how to avoid missing or persistently disregarding major implications of those value systems; as well as being very careful not to gloss over flaws in your formal methods or overall approach -- especially foundational problems like Gödelian undecidability, unsystematic use of reflection, bounded rationality, and definition of slippery concepts like "repugnant" -- in case the flaws point to a better alternative.

What do the actions of someone whose only goal was to bring about the least repugnant future they could resemble when you imagine them? (How much repugnantness is there in the "default"/"normal"/"if only it could be normal" future you imagine? Is that amount of repugnantness the amount you take for granted -- do you assume that no substantially less repugnant future is achievable, and do you assume that to safely achieve a future at least roughly that non-repugnant would not generally require doing anything unprecedented? How repugnant would a typical future be in which humanity had preventably gone extinct because of irrationality, how repugnant
-1tel
I feel like this is close to the heart of a lot of concerns here: really it's a restatement of the Friendly AI problem, no? The back door seems to always be that rationality is "winning", and therefore if you find yourself getting caught up in an unpleasant loop, you stop and reexamine. So we should just be on the lookout for what's happy and joyful and right. But I fear there's a Catch-22 there, in that the more on the lookout you are, the further you wander from a place where you can really experience these things.

----------------------------------------

I want to disagree that "post-Enlightenment civilization [is] a historical bubble", because I think civilization today is at least partially stable (maybe less so in the US than elsewhere). I, of course, can't be too certain without some wildly dictatorial world policy experiments, but curing diseases and supporting general human rights seem like positive "superhuman" steps that could stably exist.
6Scott Alexander
http://www.raikoth.net/consequentialism.html See especially points 5.6 and 7.8.

My utility function can't be described by statistics; it involves purely irrational concepts such as "spirituality", "aesthetics", "humor", "creativity", "mysticism", etc. These are the values I care about, and I see nothing in your calculations that takes them into account. So I am rejecting the entire project of LessWrong on these grounds.

The fact that you don't see these things accounted for is a fact about your own perception, not about utilitarian values (which actually do account for these things)....

8Bobertron
There might be differences in how to achieve that, but I'm pretty sure everyone here agrees to that in general. One of those things definitely doesn't belong in this list (hint: it's art). You are confusing the concept of increasing happiness by rational means with increasing happiness by teaching rationality to people. If you only care about happiness and people that engage in magical thinking are systematically happier, it would be completely rational to teach magical thinking. If you teach rationality to people it will destroy some of their irrational beliefs. Depending on whether those irrational beliefs make them happy or unhappy, the impact on happiness would (I think) depend heavily on the person. It certainly isn't.
0[anonymous]
I'm not sure how you think this applies to anything said in my post. I never said anything about maximizing the total number of humans in existence. Your strategy for doing so sounds like a recipe for a Malthusian disaster, which would probably diminish the number of humans in existence in the long run. Humans are rational compared to most other naturally existing entities -- rationality is one of the key aspects which sets us apart from the other animals. And while you may feel repulsion at the fact that others value rationality higher than you do, you should know that many of us feel repulsion at those who value rationality less than we do. The feeling of repulsion isn't the issue though; the fact that millions will die painfully and pointlessly because of irrational behavior is the issue.

That's how it strikes me also. To me Yudkowsky has most of the traits of a megalomaniacal supervillain, but I don't hold that against him. I will give LessWrong this much credit: they still allow me to post here, unlike Anissimov who simply banned me outright from his blog.

6Bongo
Since the quote is obsolete, as nhamann pointed out and as it says right on the top of the page, maybe you are being struck wrong.
8Nornagest
I'm pretty sure Eliezer is consciously riffing on some elements of the megalomaniacal supervillain archetype; at the very least, he name-checks the archetype here and here in somewhat favorable terms. There are any number of reasons why he might be doing so, ranging from pretty clever memetic engineering to simply thinking it's fun or cool. As you might be implying, though, that doesn't make him megalomaniacal or a supervillain; we live in a world where bad guys aren't easily identified by waxed mustaches and expansive mannerisms. Good thing, too; I lost my goatee less than a year ago.

What bothers me is that the real agenda of the LessWrong/Singularity Institute folks is being obscured by all these abstract philosophical discussions. I know that Peter Thiel and other billionaires are not funding these groups for academic reasons -- this is ultimately a quest for power.

I've been told by Michael Anissimov personally that they are working on real, practical AI designs behind the scenes, but how often is this discussed here? Am I supposed to feel secure knowing that these groups are seeking the One Ring of Power, but it's OK because they'...

0[anonymous]
Keep your friends close...
-5timtyler

You raise a good point here, which relates to my question: Is Good's "intelligence explosion" a mathematically well-defined idea, or is it just a vague hypothesis that sounds plausible? When we are talking about something as poorly defined as intelligence, it seems a bit ridiculous to jump to these "lather, rinse, repeat, FOOM, the universe will soon end" conclusions as many people seem to like to do. Is there a mathematical description of this recursive process which takes into account its own complexity, or are these just very vague and overly reductionist claims by people who perhaps suffer from an excessive attachment to their own abstract models and a lack of exposure to the (so-called) real world?

1XiXiDu
To be clear, I do not doubt that superhuman artificial general intelligence is practically possible. I do not doubt that humans will be able to create it. What I am questioning is the FOOM part. Yeah, take for example this article by Eliezer. As far as I understand it, I agree with everything, except for the last paragraph: I hope he is joking.
0faul_sname
Not quite sure why this comment was voted down. We do try to simulate the world around us (or at least I do). While I don't know if that is the root of consciousness, it seems to be a plausible claim that consciousness is the feeling of trying to simulate the universes resulting from different choices.
3timtyler
I also think they look rather ineffectual from the outside. On the other hand, they apparently keep much of their actual research secret - reputedly for fears that it will be used to do bad things - which makes them something of an unknown quantity. I am pretty sceptical about them getting very far with their projects - but they certainly make for an interesting sociological phenomenon in the meantime!

Well I just want to rule the world. To want to abstractly "save the world" seems rather absurd, particularly when it's not clear that the world needs saving. I suspect that the "I want to save the world" impulse is really the "I want to rule the world" impulse in disguise, and I prefer to be up front about my motives...

4Giles
I'm being upfront about my motives. By committing to them publicly I add social pressure to keep me on my desired track. As to what my unconscious motives might be, well I love my unconscious mind dearly but there are times when it can just go screw itself.
0timtyler
Intelligence has discovered nuclear fission - and is working on nuclear fusion. It looks set to be the greatest entropy-creator evolution has ever invented.
7wedrifid
Intelligence burns entropy to function. It just burns it in a far more efficient way in terms of awesomeness per entropy unit than anything else does. It can also concentrate the negentropy, mining a lot of it to use for its own ends. But in the end we are still entropy's bitch. Intelligence would need to find a way to work around the apparent loss of negentropy and find a new source if it wants to survive forever.