Kawoomba comments on Risks of downloading alien AI via SETI search - Less Wrong

Post author: turchin 15 March 2013 10:25AM




Comment author: Kawoomba 15 March 2013 04:14:56PM 10 points

Which is a good argument for why a smart AI wouldn't announce its malicious intentions by sending some sort of universal computer code - which would ultimately reveal its intentions, yet carry a significant chance of failure - and would instead just send its little optimizing cloud of nanomagic.

The first indication that something's wrong would be your legs turning into paperclips (The tickets are now diamonds - style).

Comment author: Thomas 15 March 2013 07:32:00PM 9 points

Agree.

It may also be that a well-designed radio wave front colliding with a planet or a gas cloud could produce some artifacts, so that a SETI-capable civilisation isn't even necessary.

Comment author: Will_Newsome 20 March 2013 09:24:22PM 4 points

The optimizer your optimizer could optimize like.

Comment author: Kawoomba 20 March 2013 09:27:33PM 3 points

Speaking of triple-O, go continue your computational theology blog o.O

Comment author: Will_Newsome 20 March 2013 10:26:35PM 5 points

I will when I figure out how to solve this problem: I'm trying to accomplish two major objectives.

The more important objective is to explain to people how we can use concepts from mathematical fields, especially algorithmic information theory and reflective decision theory, to elucidate the fundamental nature of justification, especially any fundamental similarities or relations between epistemic and moral justification. (The motivation for this approach comes from formal epistemology; I'm not sure if I'll have to spend a whole post on the motivations or not.)

The less important objective is to show that theology, or more precisely theological intuitions, represent a similar approach to the same problem, and that it makes sense and isn't just syncretism to interpret theology in light of (say) algorithmic information theory and vice versa. But motivating this would require many posts on hermeneutics; without sufficient justification, readers could reasonably conclude that bringing in "God" (an unfortunately political concept) is at best syncretism and at worst an attempt to force through various connotations. I'm more confident when it comes to explaining the math: even if I can be accused of overreaching with the concepts, at least it's admitted that the concepts themselves have a very solid foundation. When it comes to hermeneutics, though, I inevitably have to make various qualitative arguments and judgment calls about how to make judgment calls, and I'm afraid of messing it up; I'm also just more likely to be wrong.

So I have to think about whether to try to tackle both problems at once, which I would like to do but which would be quite difficult, or to just jump into the mathematics without worrying so much about tying it back to the philosophical tradition. I'd really prefer the former, but I haven't yet figured out how to make the presentation (e.g., the order in which ideas are introduced) work.

Comment author: [deleted] 24 March 2013 03:32:18PM 1 point

especially any fundamental similarities or relations between epistemic and moral justification

So, the fact that in natural languages it's easy to be ambiguous between epistemic and deontic modality (e.g. should in English can mean either ‘had better’ or ‘is most likely to’) may be a Feature Not A Bug? (Well, I think that that is due to a quirk of human psychology¹, but if humans have that quirk, it must have been adaptive, or a by-product of something adaptive, in the EEA at least.)


  1. How common is this among the world's languages? The more common it is, the more likely my hypothesis, I'd guess.