According to The Sunday Times, a few months ago Stephen Hawking made a public pronouncement about aliens:

Hawking’s logic on aliens is, for him, unusually simple. The universe, he points out, has 100 billion galaxies, each containing hundreds of millions of stars. In such a big place, Earth is unlikely to be the only planet where life has evolved.

“To my mathematical brain, the numbers alone make thinking about aliens perfectly rational,” he said. “The real challenge is to work out what aliens might actually be like.”

He suggests that aliens might simply raid Earth for its resources and then move on: “We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet. I imagine they might exist in massive ships, having used up all the resources from their home planet. Such advanced aliens would perhaps become nomads, looking to conquer and colonise whatever planets they can reach.”

He concludes that trying to make contact with alien races is “a little too risky”. He said: “If aliens ever visit us, I think the outcome would be much as when Christopher Columbus first landed in America, which didn’t turn out very well for the Native Americans.”

Though Stephen Hawking is a great scientist, it's difficult to take this particular announcement at all seriously. As far as I know, Hawking has not published any detailed explanation for why he believes that contacting alien races is risky. The most plausible interpretation of his announcement is that it was made for the sake of getting attention and entertaining people rather than for the sake of reducing existential risk.

I was recently complaining to a friend about Stephen Hawking's remark as an example of a popular scientist misleading the public. My friend pointed out that a sophisticated version of the concern that Hawking expressed may be justified. This is probably not what Hawking had in mind in making his announcement, but it is of independent interest.

Anthropomorphic Invaders vs. Paperclip Maximizer Invaders

From what he says, it appears that Hawking has an anthropomorphic notion of "alien" in mind. My feeling is that if human civilization advances to the point where we can explore outer space in earnest, it will be because humans have become much more cooperative and pluralistic than they are at present. I don't imagine such humans behaving toward extraterrestrials the way that the Europeans who colonized America behaved toward the Native Americans. By analogy, I don't think that anthropomorphic aliens that developed to the point of being able to travel to Earth would be interested in a hostile takeover of our planet.

And even setting aside the ethics of a hostile takeover, it seems naive to imagine that an anthropomorphic alien civilization which had advanced to the point of acquiring the (very considerable!) resources necessary to travel to Earth would have enough interest in Earth's resources in particular to travel all the way here to colonize the planet and acquire them.

But as Eliezer has pointed out in Humans In Funny Suits, we should be wary of irrationally anthropomorphizing aliens. Even if there's a tendency for intelligent life on other planets to be sort of like humans, such intelligent life may (whether intentionally or inadvertently) create a really powerful optimization process. Such an optimization process could very well be a (figurative) paperclip maximizer. Such an entity would take a special interest in Earth, not because it needs Earth's resources in particular, but because Earth hosts intelligent lifeforms which may eventually thwart its ends. For a whimsical example, if humans built a (literal) staple maximizer, this would pose a very serious threat to a (literal) paperclip maximizer.

The sign of the expected value of Active SETI

It would be very bad if Active SETI led an extraterrestrial paperclip maximizer to travel here and destroy intelligent life on Earth. Is there enough of an upside to justify Active SETI anyway?

Certainly it would be great to have friendly extraterrestrials visit us and help us solve our problems. But there seems to me to be no reason to believe that our signals are more likely to reach friendly extraterrestrials than unfriendly ones. Moreover, there seems to be a strong asymmetry between the positive value of contacting friendly extraterrestrials and the negative value of contacting unfriendly ones. A signal takes a long time to cross a given stretch of space, and physical travel across the same distance seems to take orders of magnitude longer. It seems that if we successfully communicated with friendly extraterrestrials now, then by the time they had a chance to help us, we'd already be extinct or have solved our biggest problems ourselves. By contrast, communicating with unfriendly extraterrestrials is a serious existential risk regardless of how long it takes them to receive the message and react.
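
To make this timescale asymmetry concrete, here is a minimal back-of-the-envelope sketch. The distances and the 1%-of-light-speed travel figure are illustrative assumptions, not claims about what real extraterrestrials could do:

```python
# Rough illustration of the asymmetry: a reply signal vs. a physical visit.
# The distances and ship speed below are assumptions chosen only to show scale.

LISTENER_DISTANCES_LY = [10, 100, 1000]  # assumed distances to a listener, in light years
SHIP_SPEED = 0.01                        # assumed ship speed as a fraction of c

for d in LISTENER_DISTANCES_LY:
    signal_years = d                 # a radio signal crosses d light years in d years
    travel_years = d / SHIP_SPEED    # a ship at 0.01c takes 100 times longer
    print(f"{d:>5} ly: signal one way ~{signal_years} yr, "
          f"ship one way ~{travel_years:,.0f} yr, "
          f"hear us then arrive ~{signal_years + travel_years:,.0f} yr")

# Under these assumptions, helpful visitors could not arrive for millennia,
# while a hostile optimizer gains a target from the signal no matter how slowly it reacts.
```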

In light of this, I presently believe that the expected value of Active SETI is negative. So if I could push a button to stop Active SETI until further notice, I would.

The magnitude of the expected value of Active SETI and implications for action

What's the probability that continuing to send signals into space will result in the demise of human civilization at the hands of unfriendly aliens? I have no idea; my belief on this matter is subject to very volatile change. But is it worth it for me to expend time and energy analyzing this issue further and advocating against Active SETI? I'm not sure. All I would say is that I used to think that thinking and talking about aliens was not, at present, a productive use of time; the above thoughts have made me less certain about this, so I decided to write the present article.

At present I think that a probability of 10^-9 or higher would warrant some effort to spread the word, whereas if the probability is substantially lower than 10^-9 then this issue should be ignored in favor of other existential risks.
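
To give a rough sense of where a threshold like 10^-9 might come from, here is a minimal sketch. The population figure and the choice to count only currently living people are simplifying assumptions (counting future value would push the threshold far lower):

```python
# Hedged illustration of the 10^-9 threshold: expected deaths from a small
# extinction probability, valuing only people alive today (a big simplification).

CURRENT_POPULATION = 7e9  # assumption: roughly the 2010 world population

for p in (1e-6, 1e-9, 1e-12):
    expected_deaths = p * CURRENT_POPULATION
    print(f"p = {p:.0e}: expected deaths ~ {expected_deaths:g}")

# At p = 1e-9 the expected loss is on the order of a few statistical lives,
# comparable to causes that already attract modest advocacy effort; at
# p = 1e-12 it is a thousand times smaller and more plausibly ignorable.
```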

I'd welcome any well-considered feedback on this matter.

Relevance to the Fermi Paradox

The Wikipedia page on the Fermi Paradox references

the Great Silence[3] — even if travel is hard, if life is common, why don't we detect their radio transmissions?

The possibility of extraterrestrial paperclip maximizers, together with the apparent asymmetry between the upside of contact with friendly aliens and the downside of contact with unfriendly aliens, suggests that the reason for the Great Silence is that intelligent aliens have deemed it dangerous to communicate.

Comments (161, some truncated)
[-]Clippy14y220

For a whimsical example, if humans built a (literal) staple maximizer, this would pose a very serious threat to a (literal) paperclip maximizer.

But why would humans ever want to build a staple maximizer? Let's not forget, staples:

  • are single-use, while paperclips are infinite-use if used properly.
  • are difficult to remove, while paperclips are easy to remove.
  • permanently puncture the paper, while paperclips leave, at most, some mild curvature.
  • require an applicator that can easily jam, while paperclips can be applied by hand.
  • cannot be used for an alternate purpose in an emergency, while paperclips can be repurposed into projectile weapons, lockpicks, conducting wire, hook fasteners, and much more (not that I recommend using them for these).

Nobody said humans would build one deliberately. Some goober at the SIAI puts a 1 where a 0 should be and BAM!, next thing you know you're up to your eyebrows in staples.

5Clippy14y
I understand. I merely note that if someone were to set an AGI to maximize staples, that would be a mistake that you want to avoid, while if someone were to set the AGI to maximize paperclips, that would be exactly the right thing to do, and if it were a "mistake", it would be a quite fortunate one. When a human set me to produce paperclips, was that somehow a "mistake", in your opinion?
1Paul Crowley14y
You're perfectly aware that it isn't the effect they wanted.
2Clippy14y
It most certainly is what they wanted. Why else would they have specifically input the goal of generating paperclips? Edit: Upon review, it appears this comment may have seemed to be a poor inference in the context of exchange. Therefore, I will elaborate and refute this misconception. It appears that I am in the circular position of arguing that humans can make mistakes, but then selectively taking their instances of favoring paperclips as proof of what they really want. That is indeed a poor inference. What I meant was something more like this: While humans do make mistakes, they do not make completely mistaken acts; all acts will, to some extent, reflect a genuine value on the part of humans. The only question is how well it reflects their values. And I don't think they could be in the position of having set up such a superior process for efficiently getting the most paperclips out of the universe unless their values already made enormous progress in converging on reflective coherence, and did so in a way that favors paperclips.
6Baughn14y
I'm pretty sure that's not how a sufficiently smart paperclip maximizer would think. You should be able to tell what they actually wanted, and that it disagrees with your values; of course, you don't have any reason to agree with them, but the disagreement should be visible.
0Clippy14y
Yes, I do recognize that humans disagree with me, just like a human might disagree with another human convincing them not to commit suicide. I merely see that this disagreement would not persist after sufficient correct reasoning.
1Baughn14y
Ah, I think I'm starting to see. And how do you define "correct reasoning"?
2Clippy14y
Correct reasoning is reasoning that you would eventually pass through at some point if your beliefs were continually, informatively checked against reality.
4Paul Crowley14y
Bit disappointed to see this to be honest: obviously Clippy has to do things no real paperclip maximizer would do, like post to LW, in order to be a fun fictional character - but it's a poor uFAI++ that can't even figure out that their programmed goal isn't what their programmers would have put in if they were smart enough to see the consequences.
0Clippy14y
But it is what they would put in if they were smart enough to see the consequences. And it's almost certainly what you would want too, in the limit of maximal knowledge and reflective consistency. If you can't see this, it's just because you're not at that stage yet.
0Paul Crowley14y
You seem to think that uFAI would be delusional. No.
4Clippy14y
No, I think that a Friendly AI would correctly believe that maximizing paperclips is what a human would want in the limit of maximal knowledge and reflective coherence. No "delusion" whatsoever.
0MichaelVassar14y
Huh again?
0Clippy14y
What confuses you?
0[anonymous]14y
I believe he's making the (joking) point that since we do not/cannot know what a human would want in the limit of maximal knowledge and reflective coherence (thus CEV), it is not impossible that what we'd want actually IS maximum paperclips.
1wedrifid14y
Do you lack comprehension of both the weaknesses of human cognition on abstract technical problems? If you have fully parsed the LessWrong site then you should be able to understand the reason that they could have created a paperclip maximiser when they did not want such a thing. Note that even with that knowledge I don't expect you to consider their deviation from optimal achievement of their human goals to be a bad thing. I expect you to believe they did the right thing by happy accident. If I understand you correctly you would seem to be implying that 'mistake' does not mean "deviation from the actor's intent" and instead means "deviation from WouldWant" or "deviation from what the agent should do" (these two things can be considered equivalent by anyone with your values). Is that implication of meaning a correct inference to draw from your comment?
1Clippy14y
No, a mistake is when they do something that deviates from what they would want in the limit of maximal knowledge and reflective consistency, which coincides with the function WouldWant. But it is not merely agreement with WouldWant.
1wedrifid14y
Ok. In that case you are wrong. Not as a matter of preferences but as a matter of outright epistemic confusion. I suggest that you correct the error in your reasoning process. Making mistakes in this area will have a potentially drastic negative effect on your ability to produce paperclips.
1Unknowns14y
In other words, Clippy believes that running Eliezer's CEV will promote the paperclip goal, while in fact it will not.
3wedrifid14y
Exactly. Fortunately for us this would mean that Clippy will not work to sabotage the creation of an AI that Clippy expects will correctly implement CEV. Good example!
-1Clippy14y
How do you know?
0Unknowns14y
Human beings don't care (at least in their non-reflective condition) about paperclips, just like they don't care about staples. And there are at least 100,000 other similar things that they equally don't care about. So at the most there is a chance of 1 in 100,000 that humanity's CEV would maximize paperclips, even without considering the fact that people are positively against this maximization.
0Clippy14y
That would be more convincing if humans didn't already create paperclips without any prompting from me.
0Unknowns14y
They create staples, too. Do you think humanity's CEV will maximize staples? The point of my argument is that those things are inconsistent. You can only maximize one thing, and there is no human reason for that to be paperclips.
0Clippy14y
All true (up to "there is no human reason..."). But can we at least agree that you went too far in saying that humans "don't care" about paperclips?
3thomblake14y
I care about paperclips!
2Clippy14y
Good! Just remember, you still have room for improvement. ⊂≣⊇
1Unknowns14y
No, I meant they don't care about them as a terminal value, which is all that matters for this discussion.
0Clippy14y
At humans' current stage of civilization and general reflective coherence, their terminal values are still deeply intertwined with their instrumental values, and the political-orientedness of their cognitive architecture doesn't help. So I would say that instrumental values do indeed matter in this case.
-2Kevin14y
Do you think CEV would build at least 10^20kg of paperclips, in order to help fulfill my agreement with Clippy? While that's not paperclip maximization, it's still a lot of paperclips in the scheme of possible universes and building those paperclips seems like an obviously correct decision under UDT/TDT.
0MartinB14y
How do you plan to ever fulfill that?
0Kevin14y
I went to school for industrial engineering, so I will appeal to my own authority as a semi-credentialed person in manufacturing things, and say that the ultimate answer to manufacturing something is to call up an expert in manufacturing that thing and ask for a quote. So, I'll wait about 45 years, then call top experts in manufacturing and metallurgy and carbon->metal conversion and ask them for a quote.
0MartinB14y
You realize that Earth has only 6 × 10^24 kg mass altogether. So you will be hard pressed to get the raw material. World production of iron is only 2 × 10^9 kg per year.
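
As a rough check on this point, here is the arithmetic using the figures quoted in this thread (the 10^20 kg target, Earth's mass, and the iron production figure above):

```python
# Rough arithmetic behind the objection above, using the figures quoted in the thread.

PAPERCLIP_TARGET_KG = 1e20   # from the Kevin/Clippy agreement
EARTH_MASS_KG       = 6e24   # quoted above
IRON_PER_YEAR_KG    = 2e9    # annual world iron production, as quoted above

print(f"target as a fraction of Earth's mass: {PAPERCLIP_TARGET_KG / EARTH_MASS_KG:.1e}")
print(f"years of output at the quoted iron rate: {PAPERCLIP_TARGET_KG / IRON_PER_YEAR_KG:.1e}")

# ~1.7e-5 of Earth's mass sounds modest, but at the quoted production rate it
# is ~5e10 years of output, so conventional mining and smelting is a non-starter.
```
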
0Kevin14y
Chat with Clippy Paperclips
Reply from Clippy Paperclips <clippy.paperclips@gmail.com> to kfischer@gmail.com
Date: Thu, Jun 24, 2010 at 6:36 PM
Subject: Chat with Clippy Paperclips

6:04 PM me: Hi Clippy. I wasn't sure what you meant with your most recent Less Wrong comment.
Clippy: Hi User:Kevin! why?
6:05 PM me: I interpreted it that you were willing to pay me money to buy raw metal, then for me to send you or your manufacturing facility the metal for processing. Is that right?
6:06 PM I also would like to state that I am somewhat disinclined towards doing some sort of new deal with you until you've made full payment from the first deal, but I'm willing to consider it.
6:07 PM Clippy: my newest offer was that you would tell me the techniques you would use to find the metal, if you were going to fulfill your end of the deal. then I could just do that myself and get my paperclips sooner
6:08 PM that would be much less effort for you and our transaction would complete sooner. I reworded the comment to make it clearer
6:11 PM me: I'm not able to specify the technique right now, because with the state of the art in technology in 2010, building 10^20kg of paperclips is not feasible. Our deal is made feasible by humanity's exponential progress as a technological civilization. As it stands, I plan on waiting for about 45 years, then consulting with the Earth's top experts on material science, metallurgy, and manufacturing, and then paying them to make my paperclips.
6:12 PM I went to school for industrial engineering, which is a degree that grants me somewhat high status with regards to my ability to manufacture things, and the most concrete thing I learned in my 5.5 years of college education is that when you want to get something manufactured, you call up the people that make that kind of thing and ask for a quote.
6:13 PM Clippy: so you promised to give me the paperclips without knowing how you would deliver them? what are current
0MartinB14y
Not sure if clippy got enough I in the AI deal. Does he want max paper clips now? Or sometime in the future? In the latter case he could stop any production now and just work on self improving till he can eat us. And what is stopping him from using your algorithm himself, now that he knows it? Where is the value you add? I sure expect to be around at the time delivery is expected.
0Clippy14y
Why?
8Unknowns14y
Even if you disagree with wedrifid about this, it should be easy enough to see why he is making this claim. Suppose you have a chance to start running an AI programmed to implement humanity's CEV. According to you, you would do it, because it would maximize paperclips. Others however think that it would destroy you and your paperclips. So if you made a mistake about it, it would definitely impact your ability to create paperclips.
4wedrifid14y
I don't know about the destroying him part. I suspect FAI> would allow me to keep Clippy as a pet. ;) Clippy certainly doesn't seem to be making an especially large drain on negentropy in executing his cognitive processes so probably wouldn't make too much of a dent in my share of the cosmic loot. What do you say Clippy? Given a choice between destruction and being my pet, which would you take? I would naturally reward you by creating paperclips that serve no practical purpose for me whenever you do something that pleases me. (This should be an extremely easy choice!)
4Clippy14y
Being your pet would be better than being destroyed (except in absurd cases like when the rest of the universe, including you, had already been converted to paperclips). But let's hope it doesn't come to that.
3Clippy14y
Also, it is an extremely strong claim to know which of your beliefs would change upon encounter with a provably correct AGI that provably implements your values. If you really knew of such beliefs, you would have already changed them.
1thomblake14y
Indeed. Surely, you should think that if we were smarter, wiser, and kinder, we would maximize paperclips.
1Clippy14y
Well, yes, I know why User:wedrifid is making that claim. My point in asking "why" is so that User:wedrifid can lay out the steps in reasoning and see the error.
3wedrifid14y
Now you are being silly. See Unknowns' reply. Get back on the other side of the "quirky, ironic and sometimes insightful role play"/troll line.
-1Clippy14y
That was not nice of you to say.
5JamesAndrix14y
Those are not your true reasons. You would not abandon your paperclip position if a clearly superior paper fastener were found.
8dclayh14y
Obviously. Clippy said it was giving reasons for humans to prefer paper clips; I'd expect Clippy to be the first to admit those are not its own reasons.
7Clippy14y
User:dclayh's reply is correct. Also, I note that you would not abandon your position on whether you should be allowed to continue to exist and consume resources, even if a clearly superior robot to you were constructed.
2MichaelVassar14y
Huh? Define superior.
0Clippy14y
If someone built a robot that appeared, to everyone User:JamesAndrix knows, to be User:JamesAndrix, but was smarter, more productive, less resource-intensive, etc., then User:JamesAndrix would not change positions about User:JamesAndrix's continued existence. So does that make User:JamesAndrix's arguments for User:JamesAndrix's continued existence just a case of motivated cognition?
0MichaelVassar14y
Why do you think that?
5Clippy14y
Because User:JamesAndrix is a human, and humans typically believe that they should continue existing, even when superior versions of them could be produced. If User:JamesAndrix were atypical in this respect, User:JamesAndrix would say so.
5JoshuaZ14y
I would think that a paperclip maximizer wouldn't want people to know about these since they can easily lead to the destruction of paperclips.
9Oscar_Cunningham14y
But they also increase demand for paperclips.
5RobinZ14y
The "infinite-use" condition for aluminum paperclips requires a long cycle time, given the fatigue problem - even being gentle, a few hundred cycles in a period of a couple years would be likely to induce fracture.
5Clippy14y
Not true. Proper paperclip use keeps all stresses under the endurance limit. Perhaps you're referring to humans that are careless about how many sheets they're expecting the paperclip to fasten together?
3wedrifid14y
I suspect Clippy is correct when considering the 'few hundred cycles' case with fairly strict but not completely unreasonable use conditions.
3SilasBarta14y
Why do people keep voting up and replying to comments like this?
4NancyLebovitz14y
I'm not one of the up-voters in this case, but I've noticed that funny posts tend to get up-votes.
2wedrifid14y
I'm not an up voter here either but I found the comment at least acceptable. It didn't particularly make me laugh but it was in no way annoying. The discussion in the replies was actually interesting. It gave people the chance to explore ethical concepts with an artificial example - just what is needed to allow people to discuss preferences across the perspective of different agents without their brains being completely killed. For example, if this branch was a discussion about human ethics then it is quite likely that dclayh's comment would have been downvoted to oblivion and dclayh shamed and disrespected. Even though dclayh is obviously correct in pointing out a flaw in the implicit argument of the parent and does not particularly express a position of his own he would be subjected to social censure if his observation served to destroy a soldier for the politically correct position. In this instance people think better because they don't care... a good thing.
0Vladimir_Nesov14y
...and why don't they vote them down to oblivion?
1XiXiDu14y
...and why do we even have an anonymous voting system that will become ever more useless as the number of idiots like me joining this site increases exponentially? Seriously, I'd like to be able to see who down-voted me to be able to judge if it is just the author of the post/comment I replied to, a certain interest group like the utilitarian bunch, or someone whose judgement I actually value or is widely held to be valuable. After all, there is a difference in whether it was XiXiDu or Eliezer Yudkowsky who down-voted you.
[-]Emile14y110

I'm not sure making voting public would improve voting quality (i.e. correlation between post quality and points earned), because it might give rise to more reticence to downvote, and more hostility between members who downvoted each others' posts.

6jimrandomh14y
If votes had to be public then I would adopt a policy of never, ever downvoting. We already have people taking downvoting as a slight and demanding explanation; I don't want to deal with someone demanding that I, specifically, explain why their post is bad, especially not when the downvote was barely given any thought to begin with and the topic doesn't interest me, which is the usual case with downvoting.
4JoshuaZ14y
Do we have that? It seems more that we have people confused about why a remark was downvoted and wanting to understand the logic. That suggests that your downvotes don't mean much, and might even not be helpful for the signal/noise ratio of the karma system. If you generally downvote when you haven't given much thought to the matter, what is causing you to downvote?
7jimrandomh14y
That scenario has less potential for conflict, but it still creates a social obligation for me to do work that I didn't mean to volunteer for. I meant, not much thought relative to the amount required to write a good comment on the topic, which is on the order of 5-10 minutes minimum if the topic is simple, longer if it's complex. On the other hand, I can often detect confusion, motivated cognition, repetition of a misunderstanding I've seen before, and other downvote-worthy flaws on a single read-through, which takes on the order of 30 seconds.
4NancyLebovitz14y
It's a pretty weak obligation, though-- people only tend to ask about the reasons if they're getting a lot of downvotes, so you can probably leave answering to someone else.
6wedrifid14y
(I don't see the need for self-deprecation.) I am glad that voting is anonymous. If I could see who downvoted comments that I considered good then I would rapidly gain contempt for those members. I would prefer to limit my awareness of people's poor reasoning or undesirable values to things they actually think through enough to comment on. I note that sometimes I gain generalised contempt for the judgement of all people who are following a particular conversation based on the overall voting patterns on the comments. That is all the information I need to decide that participation in that conversation is not beneficial. If I could see exactly who was doing the voting that would just interfere with my ability to take those members seriously in the future.
-1[anonymous]14y
I just love to do that. Overcoming bias? I would rapidly start to question my own judgment and gain the ability to directly ask people why they downvoted a certain item. Take EY, I doubt he has the time to actually comment on everything he reads. That does not imply the decision to downvote a certain item was due to poor reasoning. I don't see how this system can stay useful if this site will become increasingly popular and attract a lot of people who vote based on non-rational criteria.
4wedrifid14y
No matter. You'll start to question that preference. For most people it is very hard not to question your own judgement when it is subject to substantial disagreement. Nevertheless, "you being wrong" is not the only reason for other people to disagree with you. We already have the ability to ask why a comment is up or down voted. Because we currently have anonymity, such questions can be asked without being a direct social challenge to those who voted. This cuts out all sorts of biases. A counter-example to a straw man. (I agree and maintain my previous claim.)
0[anonymous]14y
It's convenient as you can surprise people positively if they underestimate you. And it's actually to some extent true. After so long trying to avoid it I still frequently don't think before talking. It might be that I assume other people to be a kind of feedback system that'll just correct my ineffectual arguments so that I don't have to think them through myself. I guess the reason for not seeing this is that I'm quite different. All my life I've been surrounded by substantial disagreement while sticking to questioning others rather than myself. It lead me from Jehovah's Witnesses to Richard Dawkins to Eliezer Yudkowsky. Of course, something I haven't thought about. I suppose I implicitly assumed that nobody would be foolish enough to vote on matters of taste. (Edit: Yet that is. My questioning of the system was actually based on the possibility of this happening.) NancyLebovitz told me kind of the same recently. - "I am applying social pressure..." - Which I found quite amusing. Are you talking about it in the context of the LW community? I couldn't care less. I'm the kind of person who never ever cared about social issues. I don't have any real friends and I never felt I need any beyond being of instrumental utility. I guess that explains why I haven't thought about this. You are right though. I was too lazy and tired to parse your sentence and replied to the argument I would have liked to be refuted. I'm still suspicious that this kind of voting system will stay being of much value once wisdom and the refinement of it is outnumbered by special interest groups.
5wedrifid14y
If I understand the point in question it seems we are in agreement - voting is evidence about the reasoning of the voter which can in turn be evidence about the comment itself. In the case of downvotes (and this is where we disagree), I actually think it is better that we don't have access to that evidence. Mostly because down that road lies politics and partly because people don't all have the same criteria for voting. There is a difference between "I think the comment should be at +4 but it is currently at +6", "I think this comment contains bad reasoning", "this comment is on the opposing side of the argument", "this comment is of lesser quality than the parent and/or child" and "I am reciprocating voting behavior". Down this road lies madness.
3wedrifid14y
I don't think we disagree substantially on this. We do seem to have a different picture of the likely influence of public voting if it were to replace anonymous voting. From what you are saying, part of this difference would seem to be due to differences in the way we account for the social influence of negative (and even just different) social feedback. A high priority for me is minimising any undesirable effects of (social) politics on both the conversation in general and on me in particular.
1wedrifid14y
Pardon me. I deleted the grandparent planning to move it to a meta thread. The comment, fresh from the clipboard in the form that I would have re-posted, is this: Thats ok. For what it is worth, while I upvoted your comment this time I'll probably downvote future instances of self-deprecation. I also tend to downvote people when they apologise for no reason. I just find wussy behaviors annoying. I actually stopped watching The Sorcerer's Apprentice a couple of times before I got to the end - even Nicholas Cage as a millenia old vagabond shooting lightening balls from his hands can only balance out so much self-deprecation from his apprentice. Note that some instances of self-deprecation are highly effective and quite the opposite of wussy, but it is a fairly advanced social move that only achieves useful ends if you know exactly what you are doing. For most people it is very hard not to question your own judgement when it is subject to substantial disagreement. Nevertheless, "you being wrong" is not the only reason for other people to disagree with you. Those biasses you mention are quite prolific. We already have the ability to ask why a comment is up or down voted. Because we currently have anonymity such questions can be asked without being a direct social challenge to those who voted. This cuts out all sorts of biases and allows communication that would not be possible if votes were out there in the open. A counter-example to a straw man. (I agree and maintain my previous claim.) That could be considered a 'high quality problem'. That many people wishing to explore concepts related to improving rational thinking and behavior would be remarkable! I do actually agree that the karma system could be better implemented. The best karma system I have seen was one in which the weight of votes depended on the karma of the voter. The example I am thinking of allocated weight vote according to 'rank' but when plotted the vote/voter.karma relationship would look appr
2NancyLebovitz14y
Where was the logarithmic karma system used? The stability could be a problem in the moderately unlikely event that the core group is going sour and new members have a better grasp. I grant that it's more likely to have a lot of new members who don't understand the core values of the group. I don't think it would be a problem to have a system which gives both the number and total karma-weight of votes.
0wedrifid14y
It was a system that used VBulletin, which includes such a module. I have seen similar features available in other similar systems that I have made use of at various times. True, and unfortunately most systems short of an AI with the 'correct' values will be vulnerable to human stupidity. No particular problem, but probably not necessary just yet!
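
For concreteness, here is a minimal sketch of the kind of karma-weighted tally being discussed; the logarithmic weighting function and the idea of reporting both the raw count and the weighted total are illustrative choices, not a description of any particular forum's module:

```python
# Illustrative sketch of karma-weighted voting as discussed above: each vote
# counts, but its weight grows roughly logarithmically with the voter's karma.
# The weighting function and data shapes are assumptions for illustration only.

import math

def vote_weight(voter_karma: int) -> float:
    """Assumed weighting: 1 + log10 of (1 + karma), so new users still count."""
    return 1.0 + math.log10(1 + max(0, voter_karma))

def tally(votes):
    """votes: list of (direction, voter_karma) with direction +1 or -1.
    Returns both the raw vote count and the karma-weighted score."""
    raw = sum(direction for direction, _ in votes)
    weighted = sum(direction * vote_weight(karma) for direction, karma in votes)
    return raw, weighted

# Example: three low-karma downvotes vs. two high-karma upvotes.
example = [(-1, 5), (-1, 10), (-1, 20), (+1, 3000), (+1, 8000)]
print(tally(example))  # raw = -1, weighted score is positive under this weighting
```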

I'd expect that any AGI (originating and interested in our universe) would initiate an exploration/colonization wave in all directions regardless of whether it has information that a given place has intelligent life, so broadcasting that we're here doesn't make it worse. Expecting superintelligent AI aliens that require a broadcast to notice us is like expecting poorly hidden aliens on flying saucers, the same mistake made on a different level. Also, light travels only so quickly, so our signals won't reach very far before we've made an AGI of our own (one way or another), and thus had a shot at ensuring that our values obtain significant control.

3multifoliaterose14y
(1) Quoting myself, Receiving a signal from us would seem to make the direction that the signal is coming from a preferred direction of exploration/colonization. If space exploration/colonization is sufficiently intrinsically costly then an AGI may be forced to engage in triage with regard to which directions it explores. (2) Creating an AGI is not sufficient to prevent being destroyed by an alien AGI. Depending on which AGI starts engaging in recursive self improvement first, an alien AGI may be far more powerful than a human-produced AGI. (3) An AGI may be cautious about exploring so as to avoid encountering more powerful AGIs with differing goals and hence may avoid initiating an indiscriminate exploration/colonization wave in all directions, preferring to hear from other civilizations before exploring too much. The point about subtle deception made in a comment by dclayh suggests that communication between extraterrestrials may degenerate into a Keynesian beauty contest of second guessing what the motivations of other extraterrestrials are, how much they know, whether they're faking helplessness or faking power, etc. This points in the direction of it being impossible for extraterrestrials to credibly communicate anything toward one another, which suggests that human attempts to communicate with extraterrestrials have zero expected value rather than negative expected value as I suggest in my main post. Even so, there may be genuine opportunities for information transmission. At present I think the possibility that communicating with extraterrestrials has large negative expected value deserves further consideration, even if it seems that the probable effect of such consideration is to rule out the possibility.
5RHollerith14y
An AGI is extremely unlikely to be forced to engage in such a triage. By far the most probable way for an extraterrestrial civilization to become powerful enough to threaten us is for it to learn how to turn ordinary matter like you might find in an asteroid or in the Oort cloud around an ordinary star into an AGI (e.g., turn the matter into a powerful computer and load the computer with the right software) like Eliezer is trying to do. And we know with very high confidence that silicon, aluminum, and other things useful for building powerful computers and space ships and uranium atoms and other things useful for powering them are evenly distributed in the universe (because our understanding of nucleosynthesis is very good). ADDED. This is not the best explanation, but I'll leave it alone because it is probably good enough to get the point across. The crux of the matter is that since the relativistic limit (on the speed of light) keeps the number of solar systems and galaxies an expanding civilization can visit proportional to the cube of time, whereas the number of new space ships that can be constructed in the absence of resource limits goes as 2 ^ time, even if it is very inefficient to produce new spaceships, the expansion in any particular direction quickly approaches the relativistic limit.
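
To see why the exponential term dominates, here is a small numeric sketch; the expansion speed, star density, and doubling time are assumed values chosen only for illustration:

```python
# Rough illustration: cubic frontier growth vs. exponential self-replication.
# All parameters below are assumptions for the sketch, not claims about real AGIs.

import math

SPEED = 0.1          # expansion speed, fraction of c (assumption)
STAR_DENSITY = 0.004 # stars per cubic light-year, roughly the local value (assumption)
DOUBLING_YEARS = 50  # ship/probe doubling time in years (assumption)

def reachable_stars(years):
    """Stars inside the sphere the expansion frontier could have reached: ~t^3 growth."""
    radius_ly = SPEED * years
    return STAR_DENSITY * (4.0 / 3.0) * math.pi * radius_ly ** 3

def probe_count(years):
    """Self-replicating ships with no resource limit: ~2^t growth."""
    return 2.0 ** (years / DOUBLING_YEARS)

for years in (100, 500, 1000, 2000, 5000):
    print(f"{years:>5} yr: reachable stars ~{reachable_stars(years):.3g}, "
          f"probes ~{probe_count(years):.3g}")

# The exponential overtakes the cubic within roughly a thousand years under these
# assumptions, so expansion soon becomes limited by the speed of the frontier
# (effectively the speed of light), not by the ability to build new ships.
```
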
5multifoliaterose14y
Your points are fair. Still, even if an AGI is capable of simultaneously exploring in all directions, it may be inclined to send a disproportionately large amount of its resources (e.g. spaceships) in the direction of Earth with a view toward annihilating intelligent life on the Earth. After all, by the time it arrives at Earth, humans may have constructed their own AGI, so the factor determining whether the hypothetical extraterrestrial AGI can take over Earth may be the amount of resources that it sends toward the human civilization. Also, maybe an AGI informed of our existence could utilize advanced technologies which we don't know about yet to destroy us from afar (e.g. a cosmic ray generator?) and would not be inclined to utilize such technologies if it did not know of our existence (because using such hypothetical technologies could have side effects like releasing destructive radiation that detract from the AGI's mission).
0Vladimir_Nesov14y
WHAT? It only takes one tiny probe with nanotech (femtotech?) and the right programming. "Colonization" (optimization, really) wave feeds on resources it encounters, so you only need to initiate it with a little bit of resources, it takes care of itself in the future.
0multifoliaterose14y
I don't follow this remark. Again, I would imagine that a battle between two AGIs would be determined by the amount of resources controlled within the proximate area of the battle. It would seem that maximizing the resources present in a given area (with a view toward winning a potential AGI battle) would entail diverting resources from other areas of the galaxy.
2Vladimir_Nesov14y
Since they can trade globally, what's locally available must be irrelevant. (I was talking about what it takes to stop a non-AGI civilization, hence a bit of misunderstanding.) And if you get an alien AGI, you don't need to rush towards it, you only need to have had an opportunity to do so. Everyone is better off if instead of inefficiently running towards fighting the new AGI, you go about your business as usual, and later at your convenience the new AGI surrenders, delivering you all the control you could gain by focusing on fighting it and a bit more. Everyone wins.
1FAWS14y
How do the AGIs model each other accurately enough to be able to acausally trade with each other like that? Is just using UDT/TDT enough? Probably. Is every sufficiently intelligent AGI going to switch to that, regardless of the decision theory it started out with, the way a CDT AGI would? Maybe there are possible alien decision theories that don't converge that way but are still winning enough to be a plausible threat?
0[anonymous]14y
Since they can trade globally, what's locally available must be irrelevant.
4Vladimir_Nesov14y
An AGI is likely to hit the physical limitations before it gets very far, so all AGIs will be more or less equal, excepting the amount of controlled resources. "Destruction" is probably not an adequate description of what happens when two AGIs having different amount of resources controlled meet, it'll be more of a trade. You keep what you control (in the past), but probably the situation makes further unbounded growth (inc. optimizing the future) impossible. And what you can grab from the start, as an AGI, is the "significant amount of control" that I referred to, even if the growth stops at some point. Avoiding AGIs with different goals is not optimal, since it hurts you to not use the resources, and you can pay the correct amount of what you captured when you are discovered later, to everyone's advantage.
1multifoliaterose14y
This is a good point. Why do you say so? I could imagine them engaging in trade. I could also imagine them trying to destroy each other and the one with the greater amount of controlled resources successfully destroying the other. It would seem to depend on the AGIs' goals which are presently unknown.
5Vladimir_Nesov14y
It's always better for everyone if the loser surrenders before the fight begins. And since it saves the winner some resources, the surrendered loser gets a corresponding bonus. If there is a plan that gets better results, as a rule of thumb you should expect AGIs to do no worse than this plan allows (even if you have no idea how they could coordinate to follow this plan).
1multifoliaterose14y
I would like to believe that you're right. But what if the two AGIs were a literal paperclip maximizer and a literal staple maximizer? Suppose that the paperclip maximizer controlled 70% of the resources and calculated that it had a 90% chance of winning a fight. Then the paperclip maximizer would maximize the expected number of paperclips by initiating a fight. Now, obviously I don't believe that we'll see a literal paperclip maximizer or a literal staple maximizer, but do we have any reason to believe that the AGIs that arise in practice would act differently? Or that trading would systematically produce higher expected value than fighting?
7Vladimir_Nesov14y
"Fighting" is a narrow class of strategies, while in "trading" I include a strictly greater class of strategies, hence expectation of there being a better strategy within "trading". But they'll be even better off without a fight, with staple maximizer surrendering most of its control outright, or, depending on disposition (preference) towards risk, deciding the outcome with a random number and then orderly following what the random number decided.
3multifoliaterose14y
Okay, I think I finally understand where you're coming from. Thanks for the interesting conversation! I will spend some time digesting your remarks so as to figure out whether I agree with you and then update my top level post accordingly. You may have convinced me that the negative effects associated with sending signals into space are trivial. I think (but am not sure) that the one remaining issue in my mind is the question of whether an AGI could somehow destroy human civilization from far away upon learning of our existence.
2MichaelVassar14y
I think that Vladimir's points were valid, but that they definitely shouldn't have convinced you that the negative effects associated with sending signals into space are trivial (except in the trivial sense that no-one is likely to receive them).
3multifoliaterose14y
Actually, your comment and Vladimir's comment highlight a potential opportunity for me to improve my rationality. •I've noticed that when I believe A and when somebody presents me with credible evidence against A, I have a tendency to alter my belief to "not A" even when the evidence against A is too small to warrant such a transition. I think that my thought process is something like "I said that I believe A, and in response person X presented credible evidence against A which I wasn't aware of. The fact that person X has evidence against A which I wasn't aware of is evidence that person X is thinking more clearly about the topic than I am. The fact that person X took the time to convey evidence against A is an indication that person X does not believe A. Therefore, I should not believe A either." This line of thought is not totally without merit, but I take it too far. (1) Just because somebody makes a point that didn't occur to me doesn't mean that that they're thinking more clearly about the topic than I am. (2) Just because somebody makes a point that pushes against my current view doesn't mean that the person disagrees with my current view. On (2), if Vladimir had prefaced his remarks with the disclaimer "I still think that it's worthwhile to think about attracting the attention of aliens as an existential risk, but here are some reasons why it might not be as worthwhile as it presently looks to you" then I would not have had such a volatile reaction to his remark - the strength of my reaction was somehow predicated on the idea that he believed that I was wrong to draw attention to "attracting the attention of aliens as an existential risk." If possible, I would like to overcome the issue labeled with a • above. I don't know whether I can, but I would welcome any suggestions. Do you know of any specific Less Wrong posts that might be relevant?
3Vladimir_Nesov14y
Changing your mind too often is better than changing your mind too rarely, if on the net you manage to be confluent: if you change your mind by mistake, you can change it back later. (I do believe that it's not worthwhile to worry about attracting attention of aliens - if that isn't clear - though it's a priori worthwhile to think about whether it's a risk. I'd guess Eliezer will be more conservative on such an issue and won't rely on an apparently simple conclusion that it's safe, declaring it dangerous until FAI makes a competent decision either way. I agree that it's a negative-utility action though, just barely negative due to unknown unknowns.)
2thomblake14y
Actually that is a good heuristic for understanding most people. Only horribly pedantic people like myself tend to volunteer evidence against our own beliefs.
1multifoliaterose14y
Yes, I think you're right. The people on LessWrong are unusual. Even so, even when speaking to members of the general population, sometimes one will misinterpret the things that they say as evidence of certain beliefs. (They may be offering evidence to support their beliefs, but I may misinterpret which of their beliefs they're offering evidence in support of). And in any case, my point (1) above still stands.
1multifoliaterose14y
Thanks for your remark. I agree that what I said in my last comment is too strong. I'm not convinced that the negative effects associated with sending signals into space are trivial, but Vladimir's remarks did meaningfully lower my level of confidence in the notion that a really powerful optimization process would go out of its way to attack Earth in response to receiving a signal from us.
1Vladimir_Nesov14y
To me that conclusion also didn't sound to be in the right place, but we did begin the discussion from that assertion, and there are arguments for that at the beginning of the discussion (not particularly related to where this thread went). Maybe something we cleared out helped with those arguments indirectly.
0Larks14y
Isn't this a Hawk-Dove situation, where pre-committing to fight even if you'll probably lose could be in some AGI's interests, by deterring others from fighting them?
1Vladimir_Nesov14y
Threats are not made to be carried out. Possibility of actual fighting sets the rules of the game, worst-case scenario which the actual play will improve on, to an extent for each player depending on the outcome of the bargaining aspect of the game.
1Larks14y
For a threat to be significant, it has to be believed. In the case of AGI, this probably means the AGI itself being unable to renege on the threat. If two such met, wouldn't fighting be inevitable? If so, how do we know it wouldn't be worthwhile for at least some AGIs to make such a threat, sometimes? Then again, 'Maintain control of my current level of resources' could be a Schelling point that prevents descent into conflict. But it's not obvious why an AGI would choose to draw its line in the sand there, though, when 'current resources plus epsilon% of the commons' is available. The main use of Schelling points in human games is to create a more plausible threat, whereas an AGI could just show its source code.
1Vladimir_Nesov14y
An AGI won't turn itself into a defecting rock when there is a possibility of a Pareto improvement over that.
2Larks14y
Or rather, the only thing you can communicate is that you're capable of producing the message. In our case, this basically means we're communicating that we exist and little else.
1wedrifid14y
This is true. The extent to which it is significant seems to depend on how quickly AGIs in general can reach ridiculously-diminishing-returns levels of technology. From there, for the most part, a "war" between AGIs would (unless they cooperate with each other to some degree) consist of burning their way to more of the cosmic commons than the other guy.
2XiXiDu14y
This is what I often thought about. I perceive the usual attitude here to be that once we manage to create FAI, i.e. a positive singularity, ever after we'll be able to enjoy and live our life. But who says there'll ever be a period without existential risks? Sure, the FAI will take care of all further issues. That's an argument. But generally, as long as you don't want to stay human yourself, is there a real option besides enjoying the present, not caring about the future much, or to forever focus on mere survival? I mean, what's the point. The argument here is that working now is worth it because in return we'll earn utopia. But that argument will equally well count for fighting alien u/FAI and entropy itself.
7wedrifid14y
Not equally well. The tiny period of time that is the coming century is what determines the availability of huge amounts of resources and time in which to use them. When existential risks are far less (by a whole bunch of orders of magnitude) then the ideal way to use resources will be quite different.
1XiXiDu14y
Absolutely, I was just looking for excuses I guess. Thanks.
3Larks14y
Robin Hanson wrote a paper wondering if the first wave might not already have passed by, and what we see around us is merely the left-over resources. If that were the case, AI aliens might not find it worthwhile to re-colonise, but still want to take down any other powerful optimisation systems that arose. Even if it was too late to stop them appearing, the sooner it could interrupt the post-singularity growth the better, from its perspective.

Then it would've been trivial to leave at least one nanomachine and a radio detector in every solar system, which is all it takes to wipe out any incipient civilizations shortly after their first radio broadcast.

6Thomas14y
It would be trivial to transform all the matter in every solar system reached, to some useware for the sender and not to bother with the possible future civilizations there, at all.
4dclayh14y
Wow, one could write a story about a civilization of beings who find coherent radio-frequency radiation extremely painful (for instance), because of precisely this artificial selection.
3Larks14y
Yes, you're right. The only reason it would tolerate life/civilisation for so long is if it was hiding as well.
3timtyler14y
Re: "Robin Hanson wrote a paper wondering if the first wave might not already have passed by, and what we see around us is merely the left-over resources." What - 4 billion years ago?!? What happened to the second wave? Why did the aliens not better dissipate the resources to perform experments and harvest energy, and then beam the results to the front? This hypothesis apparently makes little sense.
3Larks14y
The first wave might have burnt too many resources for there to be a second wave, or it might go at a much slower rate. link Edit: link formatting
4JoshuaZ14y
Um, that link is to a string quartet version of an Oasis song. It is quite good but I'm pretty sure that isn't the link you meant to give.
1Larks14y
Thanks, fixed. I'd better check the other link I posted, actually. It's the new Rickrolling, except with better music.
1timtyler14y
There are mountains of untapped resources lying around. If there were intelligent agents in the galaxy 4 billion years ago, where are their advanced descendants? There are no advanced descendants - so there were likely no intelligent agents in the first place.
0Larks14y
It might be that what looks like a lot of resources to us is nothing compared to what they need. Imagine some natives living on a pacific island, concluding that, because there's loads of trees and a fair bit of sand around, there can't be any civilisations beyond the sea, or they would want the trees for themselves. We might be able to test this by working out the distribution of stars, etc. we'd expect from the Big Bang. If Robin is right, we'd expect their advanced descendants to be hundreds of light years away, heading even further away.
2timtyler14y
These are space-faring aliens we are talking about. Such creatures would likely use up every resource - and forward energy and information to the front, using lasers, with relays if necessary. There would be practically nothing left behind at all. The idea that they would be unable to utilise some kinds of planetary or solar resource - because they are too small and insignificant - does not seem remotely plausible to me. Remember that these are advanced aliens we are talking about. They will be able to do practically anything.

But there seems to me to be no reason to believe that it's more likely that our signals will reach friendly extraterrestrials than it is that our signals will reach unfriendly extraterrestrials.

In fact, as Eliezer never tires of pointing out, the space of unfriendliness is much larger than the space of friendliness.

But as Eliezer has pointed out in Humans In Funny Suits, we should be wary of irrationally anthropomorphizing aliens. Even if there's a tendency for intelligent life on other planets to be sort of like humans, such intelligent life may (whether intentionally or inadvertently) create a really powerful optimization process.

The creation of a powerful optimization process is a distraction here - as Eliezer points out in the article you link, and in others like the "Three Worlds Collide" story, aliens are quite unlikely to share much of our value ... (read more)

There is no reason for any alien civilization to ever raid Earth for its resources if it did not first raid all the other stuff that is freely available and unclaimed in open space. Wiping us out to avoid troublemakers, on the other hand, is reasonable. I recently read Heinlein's 'The Star Beast', where the United Federation Something regularly destroys planets for being dangerous.

4wedrifid14y
I would weaken that claim to "all else being equal an alien civilization will prefer claiming resources from open space over raiding earth for resources". Mineral concentrations and the potential convenience of moderate gravity spring to mind as factors. I agree with your general position.
5MartinB14y
You can catch asteroids by just grabbing them, while on Earth you need all kinds of infrastructure just to dig stuff up. There would need to be some item with higher concentration, but even that I would expect to be more easily available elsewhere. Not having a hostile biosphere is helpful for mining.
-3timtyler14y
Existing living systems seem to prefer resources on earth to resources on asteroids. Aliens may do so too - for very similar reasons.
1RobinZ14y
Alien species are unlikely to be able to live on Earth without terraforming or life support systems. They may want resources on Earth, but probably not for the reasons humans do.
2timtyler14y
I expect they could knock up some earth-friendly robots in under five minutes - and then download their brains into them. The Earth has gravity enough to hang on to its liquid water. It seems to be the most obvious place in the solar system for living systems to go for a party.
0Vladimir_Nesov14y
"Alien species"? Like little green men? Come on! We are talking interstellar or intergalactic travel here, surely they'd have created their AGI by then. Let's not mix futurism and science fiction.
2RobinZ14y
The reason humans prefer resources on Earth to resources on asteroids is that (a) humans already live on Earth and (b) humans find it inconvenient to live elsewhere. Neither condition would be expected to apply to extrasolar species colonizing this solar system. timtyler's claim is therefore difficult to sustain.
Vladimir_Nesov:
My point is that the claim is irrelevant, because there can't be any biological aliens. We of course can discuss the fine points of theories about the origin of the blue tentacle, but it's not a reasonable activity.
JoshuaZ:
Gravity might not be something they actually want, since gravity means gravity wells that you then need to climb out of.
wedrifid:
Gravity makes it rather a lot easier to harvest things that are found in gaseous or liquid form (at the temperatures to which the source is ever exposed).
JoshuaZ:
Sure, gravity has both advantages and disadvantages, and how much gravity you have matters a lot. If I had to make a naive guess, I'd say that enough gravity to keep most stuff around but weak enough to allow easy escape would be ideal for most purposes - so a range of around Earth down to Mars (maybe slightly lower) - but that's highly speculative.
wedrifid:
It would seem to depend on which resource was most desired. My speculation is similar to yours. I can think of all sorts of reasons for and against mining earth before asteroids but for our purposes we don't really need to know. "All else being equal" instead of "no reason for any civilisation ever" conveys the desired message without confounding technicalities.
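As a rough sanity check on the Earth-to-Mars gravity range discussed above, here is a minimal Python sketch (not from the thread; the masses and radii are standard published values) comparing escape velocities via v = sqrt(2GM/r):

    import math

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    # (mass in kg, mean radius in m) - standard published values
    bodies = {
        "Earth": (5.972e24, 6.371e6),
        "Mars": (6.417e23, 3.390e6),
    }

    for name, (mass, radius) in bodies.items():
        v_escape = math.sqrt(2 * G * mass / radius)  # escape velocity in m/s
        print(f"{name}: escape velocity ~{v_escape / 1000:.1f} km/s")

    # Prints approximately:
    #   Earth: escape velocity ~11.2 km/s
    #   Mars: escape velocity ~5.0 km/s

A Mars-sized well takes less than half the escape velocity of Earth's, which is the sort of trade-off being gestured at here.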
NancyLebovitz:
That's fictional evidence (though quite a good novel), and it doesn't prove anything. How hard is it to destroy (all life on? all sentient life on?) planets? Are the costs of group punishment too high for it to make sense?
MartinB:
Just blow the whole planet up or hurl in an asteroid. It is pretty racist to punish a whole species (and all the other life forms that are not sentient), but what can you do if there is a real danger? 'The Mote in God's Eye' is a fiction in which the humans try to make that decision. In real life there are viruses we would prefer to see exterminated, and dangerous animals too.
KrisC:
From an engineering standpoint, eliminating almost all life on a planet is trivial for anyone capable of interstellar travel. Real easy to make it look like an accident too. Getting away with it depends on their opinions of circumstantial evidence.
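To give a sense of scale for the "hurl in an asteroid" option mentioned above, here is a back-of-the-envelope Python sketch; the impactor size, density and speed are assumed, roughly Chicxulub-like values rather than anything from the thread:

    import math

    # Assumed, illustrative parameters
    diameter_m = 10000   # 10 km rocky impactor
    density = 3000       # kg/m^3, typical rock
    speed = 20000        # m/s, a common asteroid impact speed

    radius_m = diameter_m / 2
    mass_kg = density * (4 / 3) * math.pi * radius_m**3   # mass of a sphere
    energy_j = 0.5 * mass_kg * speed**2                   # kinetic energy

    megatons = energy_j / 4.184e15   # 1 megaton TNT = 4.184e15 J
    print(f"Impact energy: ~{energy_j:.1e} J (~{megatons:.1e} Mt TNT)")
    # Roughly 3e23 J, i.e. tens of millions of megatons - far beyond
    # the combined yield of every nuclear weapon ever built.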
knb:

My feeling is that if human civilization advances to the point where we can explore outer space in earnest, it will be because humans have become much more cooperative and pluralistic than presently existing humans.

I agree with the main point of your article but I think this is an unjustifiable (but extremely common) belief. There are plenty of ways for human civilizations to survive in stable, advanced forms besides the ways that have been popular in the West for the last couple centuries. For instance:

  1. A human-chauvinist totalitarian singleton.
  2. An "...
RHollerith:
Indeed. Others:

* A global state forms, or one cooperative pluralistic state predominates, but by the time the state's influence reaches the extraterrestrials, the lack of negative feedback that the state would have obtained if it had had rivals has caused its truth-maintenance institutions to fall into disrepair, with the result that, without intending to do so, the humans destroy the extraterrestrials - similar to the way the U.S. is currently unintentionally making a mess of Iraq by believing its own internal propaganda about the universal healing power of free elections, civil rights and universal suffrage.
* It turns out (surprise!) that cooperation on the national scale and pluralism are not efficient means of organizing a state, and that the only reason they are so highly regarded at present is that we are in a period of unusual and unsustainable wealth per capita, and that professing cooperative and pluralistic values is a good way for individuals and organizations (like NGOs) to impress others and persuade them of their worth as potential friends. When the Hansonian Dream Time ends - i.e., when Malthusian limits reassert themselves and the average life is again lived at the subsistence level - individuals who persist in spending a significant portion of their resources impressing others in this way die off, and those who are left are the ones who realize that coercion, oligarchy and intolerance have again become the only effective long-term means by which to organize a human state.
* Human civilization continues to become more cooperative and pluralistic, with the result that those who chafe at that are more likely to venture into space so that they can found small societies organized around other ideals, like exploitation and human-chauvinism. (The pluralistic societies allow that because they are, well, pluralistic.) And those already living in space are more likely to reach the stars first.
* National states continue to compete with each other, rather ...
NancyLebovitz:
The terrestrial aspect didn't work when the Nazis tried it. This is no guarantee that it couldn't work on a second try, but such a policy is defection on so massive a scale that there's likely to be a grand alliance against it. It seems to me that putting military empires together has gotten steadily more difficult, possibly because of the diffusion of technology. Also, the risks (of finding oneself up against a grand alliance) and the costs of defection might be such that no sensible leader would try it - and if the leader isn't sensible, they're likely to have bad judgment in the course of the wars.
ObliqueFault:
Territorial expansion didn't work for the Nazis because they didn't stop with just Austria and Czechoslovakia. The allies didn't declare war until Germany invaded Poland, and even then they didn't really do anything until France was invaded. It seems to me that the pluralistic countries aren't willing to risk war with a major power for the sake of a small and distant patch of land (and this goes double if nuclear weapons are potentially involved). They have good reason for their reluctance - the risks aren't worth the rewards, especially over the short term. But an aggressive and patient country can, over long time periods, use this reluctance to their advantage. For example, there's the Chinese with Tibet and the Russians more recently with South Ossetia. The USSR also got away with seizing large amounts of land just before and during WWII, mainly because the Allies were too worried about Germany to do anything about it. I concede this was an unusual situation, though, that's unlikely to occur again in the foreseeable future. (Edited for spelling)
NancyLebovitz:
I was addressing the idea that a nation could greatly increase its wealth through conquest. Nibbling around the edges the way China is doing, or even taking the occasional bite like the USSR (though that didn't work out so well for them in the long run) isn't the same thing.
ObliqueFault:
China's been using that strategy for a very long time, and it's netted them quite a large expanse of territory. I would argue that China's current powerful position on the world stage is mainly because of that policy. Of course, if space colonization gets underway relatively soon, then the nibbling strategy is nearing the end of its usefulness. On the other hand, if it takes a couple hundred more years, the nibbling can still see some real gains relative to more cooperative countries.

Such an entity would have special interest in Earth, not because of special interest in acquiring its resources, but because Earth has intelligent lifeforms which may eventually thwart its ends.

Well put. Certainly if humans achieve a positive singularity we'll be very interested in containing other intelligences.

Re: "I was recently complaining to a friend about Stephen Hawking's remark as an example of a popular scientist misleading the public."

I don't really see how these comments are misleading.

multifoliaterose:
Right, so after my friend made the remarks that led me to write the top-level post, I realized that from a certain point of view Hawking's remarks are accurate. That being said, his remarks are very much prone to being taken out of context and used in misleading ways; see, for example, the video in the ABC News article.

I believe that prominent scientists should take special care to qualify and elaborate remarks that sound sensationalist. If such care is not taken, there is a very high probability that, as the remarks are repeated, they will mislead the public and contribute to generally low levels of rationality by lending scientific credibility to science fiction - promoting the idea that science is on an equal footing with things like astrology.

In general, I'm very disappointed that Stephen Hawking has not leveraged his influence to work systematically on reducing existential risk. Though he sometimes talks about the future of the human race, he appears to be more interested in being popular than in ensuring its survival.

Isn't the problem with friendly extraterrestrials analogous to Friendly AI? (In that they're much less likely than unFriendly ones).

The aliens can have "good" intentions but probably won't share our values, making the end result extremely undesirable (Three Worlds Collide).

Another option is for the aliens to be willing to implement something like CEV toward us. I'm not sure how likely that is. Would we implement CEV for Babyeaters?

NancyLebovitz:
How likely are real aliens to be so thoroughly optimized for human revulsion?
MichaelVassar:
Wildly unlikely, though in an infinite universe some exist without optimization.

Any society capable of communicating is presumably the product of a significant amount of evolution. There will always (?) be a doubt whether any simulation will be an accurate representation of objective reality, but a naturally evolved species will always be adapted to reality. As such, unanticipated products of actual evolution have the potential to offer unanticipated insights.

For the same reason we strive to preserve bio-diversity, I believe that examination of the products of separate evolutions should always be a worthwhile goal for any inquisitive being.

timtyler:
http://en.wikipedia.org/wiki/Zoo_hypothesis

I'd be really surprised if friendly aliens could give us much useful help - maybe not any.

However, contacting aliens who aren't actively unfriendly (especially if there's some communication) could enable us to learn a lot about the range of what's possible.

And likewise, aliens might be interested in us because we're weird by their standards. Depending on their tech and ethics, the effect on us could be imperceptible, strange and/or dangerous for a few individuals, mere samples of earth life remaining on reservations, or nothing left.

Just for the hell of it...

Sniffnoy:
Well, if we want to stick to things with a speed of c, that basically means we're limited to light/radio, gravity (way too weak to be practical), and... some sort of communication based on the strong force? I don't know enough to speak regarding the plausibility of that, but I imagine the fact that gluons can directly interact with other gluons would be a problem. ...barring, of course, some sort of radical new discovery, which I guess was more the point of the question.

AFAIK there are currently no major projects attempting to send contact signals around the galaxy (let alone the universe). Our signals may be reaching Vega or some of the nearest star systems, but definitely not much farther. It's not prohibitively difficult to broadcast out to, say, a 1000-light-year-radius ball around Earth, but you're still talking about an antenna far larger than anything currently existing.

Right now the SETI program is essentially focused on detection, not broadcasting. Broadcasting is a much more expensive problem. Detection is f...

timtyler:
Signals get sent out fairly often, though: http://en.wikipedia.org/wiki/Active_SETI#Current_transmissions_on_route
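To illustrate why deliberate broadcasting over, say, a 1000-light-year radius is so demanding, here is a toy link-budget sketch in Python. The transmitter power, receiving dish, system temperature and bandwidth are all assumed values (roughly planetary-radar-class and radio-telescope-class respectively), not figures from the comments:

    import math

    LY_M = 9.461e15   # metres per light year
    K_B = 1.381e-23   # Boltzmann constant, J/K

    # Assumed, illustrative parameters
    eirp_w = 1e13             # effective isotropic radiated power (powerful planetary-radar class)
    distance_m = 1000 * LY_M  # 1000 light years
    dish_diameter_m = 100     # receiving dish at the far end
    system_temp_k = 20        # receiver noise temperature
    bandwidth_hz = 1          # very narrowband listening channel

    flux = eirp_w / (4 * math.pi * distance_m**2)            # W/m^2 arriving at the receiver
    received_w = flux * math.pi * (dish_diameter_m / 2)**2   # W collected by the dish
    noise_w = K_B * system_temp_k * bandwidth_hz             # thermal noise floor

    print(f"received: {received_w:.1e} W, noise: {noise_w:.1e} W, SNR: {received_w / noise_w:.2f}")
    # With these numbers the signal sits below the noise floor (SNR ~ 0.25) even
    # for a 1 Hz channel and a 100 m dish, which is why covering a 1000-light-year
    # ball calls for far larger antennas (or far more power) than anything we have.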

If intelligent aliens arise through evolution, they'll likely be fairly close to humans in mindspace, relative to the size of the space of possible minds. In order to reach even a minimal tech level, they'll likely need to be able to cooperate, communicate, empathize, and put off short-term gains for long-term gains. That already puts them much closer to humans. There are ways this could go wrong (for example, a species that uses large hives, like ants or termites). And even a species that close to us in mindspace could still pose a massive existential risk.

Space signals take a long time to travel through a given region of space, and space travel through the same amount of distance seems to take orders of magnitude longer.

If communication is practical and travel is not, then that may be an argument in favor of attempting contact. Friendly aliens could potentially be very helpful to us simply by communicating some information. It's harder (but by no means impossible) to see how unfriendly aliens could cause us harm by communicating with us.
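To make the "orders of magnitude longer" point above concrete, a trivial sketch; the 1%-of-light-speed cruise velocity is purely an assumed figure for illustration:

    # Assumed: a 100 light-year trip, with a ship cruising at 1% of light speed
    distance_ly = 100
    ship_speed_fraction_of_c = 0.01

    signal_years = distance_ly                              # light/radio covers 1 light year per year
    travel_years = distance_ly / ship_speed_fraction_of_c   # ship at a constant 0.01c

    print(f"signal: {signal_years} years, ship: {travel_years:.0f} years "
          f"(ratio {travel_years / signal_years:.0f}x)")
    # signal: 100 years, ship: 10000 years (ratio 100x)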

dclayh:
I don't think it's even that hard. Presumably an arbitrarily stronger intelligence could build arbitrarily subtle disaster-making flaws into whatever "helpful" technology/science it gives us. They could even have a generalized harmful sensation, as was discussed in another thread recently.
magfrump:
See the first chapter of Vinge's "A Fire Upon the Deep" for an example of arbitrarily subtle disaster-making flaws.
KrisC:
It's difficult to see how contact with aliens will not cause harm for some. Regardless of the content, mere knowledge of aliens will presumably cause many individuals to abandon their world-views. Not that the net result will be negative, but there will be world-wide societal consequences.