Kawoomba comments on Risks of downloading alien AI via SETI search - Less Wrong

9 Post author: turchin 15 March 2013 10:25AM

Comment author: Kawoomba 15 March 2013 11:46:50AM 13 points [-]

A sufficiently advanced AI should already be propagating at near the speed of light, which is why we needn't fear mere radio signals: if there's such an entity in the neighborhood, its von Neumann probes will be the first sign we get.

Comment author: Thomas 15 March 2013 03:43:06PM 13 points [-]

The difference between near light speed and actual light speed may be significant when universal dominance is the prize.

Comment author: Kawoomba 15 March 2013 04:14:56PM 10 points [-]

Which is a good argument for why a smart AI wouldn't announce its malicious intentions by sending some sort of universal computer code - which would telegraph its intentions, yet have a significant chance of failure - and would instead just straight send its little optimizing cloud of nanomagic.

The first indication that something's wrong would be your legs turning into paperclips (The tickets are now diamonds - style).

Comment author: Thomas 15 March 2013 07:32:00PM 9 points [-]

Agree.

It may also be that a well-designed radio wave front colliding with a planet or a gas cloud can produce some artifacts, so a SETI-capable civilisation isn't even necessary.

Comment author: Will_Newsome 20 March 2013 09:24:22PM 4 points [-]

The optimizer your optimizer could optimize like.

Comment author: Kawoomba 20 March 2013 09:27:33PM 3 points [-]

Talking about triple-O, go continue your computational theology blog o.O

Comment author: Will_Newsome 20 March 2013 10:26:35PM *  5 points [-]

I will when I figure out how to solve this problem: I'm trying to accomplish two major objectives.

The more important objective is to explain to people how we can use concepts from mathematical fields, especially algorithmic information theory and reflective decision theory, to elucidate the fundamental nature of justification, especially any fundamental similarities or relations between epistemic and moral justification. (The motivation for this approach comes from formal epistemology; I'm not sure if I'll have to spend a whole post on the motivations or not.)

The less important objective is to show that theology, or more precisely theological intuitions, are a similar approach to the same problem, and it makes sense and isn't just syncretism to interpret theology in light of (say) algorithmic information theory and vice versa. But to motivate this would require many posts on hermeneutics; without sufficient justification, readers could reasonably conclude that bringing in "God" (an unfortunately political concept) is at best syncretism and at worst an attempt to force through various connotations. I'm more confident when it comes to explaining the math---even if I can be accused of overreaching with the concepts, at least it's admitted that the concepts themselves have a very solid foundation. When it comes to hermeneutics, though, I inevitably have to make various qualitative arguments and judgment calls about how to make judgment calls, and I'm afraid of messing it up; also I'm just more likely to be wrong.

So I have to think about whether to try to tackle both problems at once, which I would like to do but would be quite difficult, or to just jump into the mathematics without worrying so much about tying it back to the philosophical tradition. I'd really prefer the former but I haven't yet figured out how to make the presentation (e.g., the order of ideas to be introduced) work.

Comment author: [deleted] 24 March 2013 03:32:18PM *  1 point [-]

especially any fundamental similarities or relations between epistemic and moral justification

So, the fact that in natural languages it's easy to be ambiguous between epistemic and moral modality (e.g. should in English can mean either ‘had better’ or ‘is most likely to’) may be a Feature Not A Bug? (Well, I think that that is due to a quirk of human psychology¹, but if humans have that quirk, it must have been adaptive (or a by-product of something adaptive), in the EEA at least.)


  1. How common is this among the world's languages? The more common it is, the more likely my hypothesis, I'd guess.
Comment author: turchin 15 March 2013 09:40:38PM *  5 points [-]

We should not think about AI as an omnipotent God - if it were, it could travel even faster than light and even back in time. But we don't see it around us (if we are not in a simulation). So it is not omnipotent. So we should assume that a nanobot wave is slower than the speed of light. Let's give it 0.8 light speed. The main problem with a nanobot wave is slowing down after it reaches its destination. We could accelerate nanobots in accelerators, but slowing down could be complicated. So, if the nanobots' speed is 0.8c, then the volume of the sphere they could reach is only 0.512 of that of the SETI attack. That means that a SETI attack is about 2 times more effective as a way to conquer space. Also, observer selection is working here. All civilizations inside the nanobot wave are probably destroyed. So we could only find ourselves outside it.
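
(The 0.512 figure is just the cube of the speed ratio: after any given time, an expansion front moving at v covers a sphere of volume proportional to v³. A quick sketch of the arithmetic, in Python for illustration:)

```python
# Volume reached by an expansion front after time t scales as (v*t)**3,
# so the ratio of reachable volumes depends only on the speed ratio.
def volume_fraction(probe_speed_c):
    """Fraction of the light-speed sphere's volume reachable by a
    slower front moving at probe_speed_c (in units of c)."""
    return probe_speed_c ** 3

frac = volume_fraction(0.8)   # 0.8**3 = 0.512
advantage = 1 / frac          # roughly 2x for the light-speed signal
```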

Comment author: Tenoke 15 March 2013 10:43:55PM *  1 point [-]

As gwern pointed out, SETI attacks only target worlds with tech-savvy intelligent life (we so far know about one of those), while a von Neumann probe can likely target pretty much all systems we've observed so far (and we've observed a bit more than one).

A SETI attack being twice as effective as a von Neumann probe is quite the overstatement (even discounting the fact that the probes might be able to travel at a speed much closer to c).

Comment author: turchin 16 March 2013 07:28:51AM *  0 points [-]

A SETI attack could happen in any medium where only information transfer is possible. If in the future we could contact parallel worlds, it would again be the case. As we don't now know the exact limitations of interstellar travel, we may think that a SETI attack could have happened. Or we should conclude that any search for alien radio signals is useless, as the aliens should approach us physically at the speed of light.

And again, we could exist only in those regions of the Universe which are not conquered by alien nanobots. Or they are conquered but the nanobots lie dormant somewhere, and in that case a SETI attack is still possible.

Comment author: Kawoomba 16 March 2013 07:45:49AM 2 points [-]

And again, we could exist only in those regions of the Universe which are not conquered by alien nanobots. Or they are conquered but the nanobots lie dormant somewhere, and in that case a SETI attack is still possible.

It seems a bit like you're grasping at straws to keep the SETI threat viable. I realize you're attached to it; I saw the website. Still, allow yourself to follow the arguments wherever they may lead.

Comment author: turchin 16 March 2013 08:02:50AM 0 points [-]

I know that nano von Neumann probes are the strongest argument against the theory, and I knew it even before I published it here. Moreover, I have a shorter article about possible alien nanobots in the Solar system which I will eventually publish here - if it is not too much off-topic.

But from an epistemic point of view, we can't close one unknown case with another big unknown with 100 percent certainty.

Anyway, it will not change the conclusion: SETI search is either useless or dangerous, and should be stopped.

Comment author: Kawoomba 16 March 2013 08:28:29AM *  4 points [-]

Useless? I don't think so.

There's nothing this ragtag horde of competing special interests (humanity) needs more than the uniting force of "we received signals from other civilizations" - something to unite us and usher in a new era of a redefined in-group ("us") versus the new out-group ("them" - the aliens).

As the old adage goes, me against my brother, my brother and I against our cousins, my cousins and I against strangers.

What we need is an "all of humanity versus some unspecified aliens" scenario to save us. Even if we have to make them up ourselves; there should be an astrophysicists' conspiracy to fake such signals. I imagine something like "Ok Earth-guys, whoever gets to Epsilon Eridani first owns it! Also, we demand a new season of Firefly." (This would be troublesome, because it would mean they are very close already.)

Comment author: Multiheaded 16 March 2013 05:29:18PM 2 points [-]

[Obligatory Watchmen reference]

Comment author: Kawoomba 16 March 2013 06:21:39PM 0 points [-]

That's not exactly how I remember the movie, but it was still entertaining. I liked that big guy. Klaatu barada nikto!

. . .

(Sorry, just stirring the pot.)

Comment author: Multiheaded 16 March 2013 06:30:22PM 2 points [-]

Vg jnf gung jnl va gur pbzvp obbx; Bmlznaqvnf unq n grnz bs fpvragvfgf ovb-ratvarre n uhtr cflpuvp fdhvq gung jbhyq qvr hcba ovegu/npgvingvba naq xvyy n ybg bs crbcyr jvgu vgf cflpuvp "fpernz". Vg'q znc avpryl gb crbcyr'f rkcrpgngvbaf bs na "nyvra vainqre" naq uhznavgl jbhyq havgr ntnvafg cbgragvny shegure gerngf.

Comment author: Decius 16 March 2013 05:49:10AM -2 points [-]

Against someone with an AI, are we really tech-savvy? Is the Carnot engine turning chemical energy into rotary mechanical energy into electromagnetic energy really the best way to listen for radio signals?

Comment author: Tenoke 16 March 2013 10:54:11AM 1 point [-]

You missed the point.

Comment author: Eliezer_Yudkowsky 16 March 2013 05:24:15AM 2 points [-]

(Agreed.)

Comment author: Pfft 15 March 2013 04:40:39PM 2 points [-]

The scheme described in the article seems like one of the most efficient ways to propagate near the speed of light. Why bother sending material von Neumann probes if mere radio signals are sufficient?

Comment author: gwern 15 March 2013 05:25:49PM 14 points [-]

The scheme requires reception by an advanced civilization during a narrow window of opportunity; the radio waves have no effect on the billions of dead planets all around. A probe, on the other hand, presumably would be able to affect any system.

Since we observe so few life-filled planets or signals out there...

Comment author: Kawoomba 15 March 2013 06:23:51PM *  5 points [-]

Doesn't seem very effective to me.

The civilizational window in which a target would be susceptible to such tactics is very small: cavemen don't notice, and superintelligences are thankful for you announcing your hostile intentions. And that's not even taking into account the small fraction of inhabited planets (via the Drake equation) to begin with.

Compare that to a wave of self-replicating probes at near lightspeed reconfiguring all secured matter into computronium performing the desired operations. Seems like no contest. I'd rather rebuild Jupiter too, for a loss of just a few percent in propagation speed.

Comment author: Elithrion 15 March 2013 08:13:57PM 3 points [-]

Compare that to a wave of self-replicating probes at near lightspeed reconfiguring all secured matter into computronium performing the desired operations. Seems like no contest.

I think the best argument in favour of this SETI virus is that you can really just do both. Nearly all the useful stuff will come from the self-replicating probes, but you might get a little extra out of the virus as well.

Comment author: Kawoomba 15 March 2013 08:22:45PM 2 points [-]

Not that it's an important point of contention, but I don't think so. If there are any other superintelligences out there (other than the sender) - even if fewer than there are civilizations in their vulnerable phase - they would still pose a serious threat to the signal-sending agent:

A signal travelling slightly ahead of the cavalry would be like a trumpet call announcing "here come the nanobots!", giving the adversary time to prepare.

(Interestingly, our position in the outskirts of a galaxy / the less densely populated regions can count as weak evidence that such a cosmic chess game exists, since otherwise, due to the SSA, we'd expect to find our home star cluster somewhere in the more densely packed areas.)

God I hate it when my comments become needlessly verbose, sorry ... argh, and isn't verbosity needless by definition?

Comment author: Thomas 16 March 2013 07:25:47AM 1 point [-]

A signal travelling slightly ahead of the cavalry would be like a trumpet call announcing "here come the nanobots!", giving the adversary time to prepare.

Yes, but we'd better prepare for nanobots anyway. If they don't come, it's just a bonus. It is wise to be prepared for an intergalactic war in any case: for the robots, for small kinetic projectiles at near light speed, for artificial gamma ray bursts, for SETI attacks, and for many more.

Then we should strike in all directions in the best tradition of a very benevolent colonist, to end all the space wars even before they really start. As much as we can.

The aliens, who are extremely rare (as I think), had, have, or will have the same dilemma, which may be another opportunity. Game-theoretically speaking, we must do some calculations right now; it is already late, and the OP's article is a good one.

Comment author: Elithrion 15 March 2013 09:00:03PM *  0 points [-]

I actually don't mind this length of comments (less is okay, but sometimes too vague, and starting at double that length it definitely feels like too much).

Overall, I see your point, but I think it depends on what kind of strategy the spreading superintelligence is using and on what wars would look like in general. For example, the universe probably mostly doesn't resist, so it might be sending small "conversion" probes everywhere to expand as fast as possible. In that case, any actual opponent might be able to easily repel them and start getting ready to present a serious defence by the time any dedicated offensive force is sent, so the additional forewarning of having a signal travel slightly further ahead wouldn't really change anything, and might prevent an opponent from emerging in the first place.

(On the other hand, maybe the conversion probes it sends are smart enough to detect any signal originating from their destination and stop flying/change course if it looks like it might resist. But maybe any superintelligence is on the lookout for extremely fast-travelling objects that behave like this and would notice anyway.)