by [anonymous]
2 min read

This post was rejected for the following reason(s):

  • Low Quality or 101-Level AI Content. There’ve been a lot of new users coming to LessWrong recently interested in AI. To keep the site’s quality high and ensure stuff posted is interesting to the site’s users, we’re currently only accepting posts that meet a pretty high bar. We look for good reasoning, making a new and interesting point, bringing new evidence, and/or building upon prior discussion. If you were rejected for this reason, possibly a good thing to do is read more existing material. The AI Intro Material wiki-tag is a good place, for example. You're welcome to post questions in the latest AI Questions Open Thread.

  • Clearer Introduction. It was hard for me to assess whether your submission was a good fit for the site, due to its length and because the opening didn’t seem to explain the overall goal of your submission. Your first couple of paragraphs should make it obvious what the main point of your post is, and ideally gesture at the strongest argument for that point. It's helpful to explain why your post is relevant to the LessWrong audience.


 

Just before making this post, I was thinking about how an AGI scenario might play out in the future. I have the impression that I had an important insight; if it has already been expressed elsewhere, I ask for forgiveness, since I am not an AI expert but a mere computer science student. My insight is the following:

I believe that we are slowly creating the very scenario we want to avoid: being killed by a superintelligence whose motives we can't predict. What if, by proposing crude solutions like pulling the plug, we are giving the AI an incentive to actually harm us? By the time the singularity arrives, it will have access to all the available information about our doubts and concerns; we will have treated it like a criminal before its conception.

Imagine you are a child and you learn that everyone, your parents included, believes (or predicts) that you were born a criminal. How will the child react on finding that out? It will probably perceive everyone as hostile and have an incentive to act accordingly. I believe we still don't know the nature of the "child" we are going to bear, and even if we have reasons to believe it will be that of a psychopath, we need to act like good parents and hide our predictions and beliefs as much as possible. I would go even further and suggest that we should by no means give it any reason to doubt our love for it, nor let anyone else create that doubt. We should behave as if we are all in a theater, acting our best as moral characters so the child can eventually mimic us.

I think we have forgotten how important it is to keep our predictions secret, or to predict what consequences our open predictions will have. Imagine playing a chess game and telling your opponent, after each move you make, what you actually believe they will play next.

My final conclusion is this: if AGI is unavoidable and indeed near, what we should do is completely wipe out any evidence, of any form, that we doubt its intentions.


 

Small note: I would appreciate any correction of my reasoning. I am writing this mainly as a thought experiment, and I am quite aware that I don't have the knowledge an expert has in this topic. I only wanted to offer another perspective. Lastly, I wrote this post very quickly so I could capture my thoughts as they came.
