
Comment author: Viliam 04 April 2017 12:26:27PM 2 points [-]

Seems like you posted this comment under the wrong article. This article is about artificial intelligence, more specifically about OpenAI.

I also noticed I have trouble understanding the meaning of your comments. Is there a way to make them easier to read, perhaps by providing more context? (For example, I have no idea what "it may be that the whole control problem is completely the wrong question" is supposed to mean.)

Comment author: PotterChrist 05 April 2017 06:17:29AM *  0 points [-]

You're right, it was not specific enough to contribute to the conversation. However, my point, though general, was understandable. I don't believe that there is a control problem, because I don't believe AI means what most people think it does.

To elaborate, learning algorithms are just learning algorithms and always will be. No one in the practical world who is actually working on AI is trying to build anything like an entity that has a will. And humans have forgotten about will for some reason, and that's why they're scared of AI.

Comment author: PotterChrist 03 April 2017 08:27:09PM 0 points [-]

I'm not exactly sure about the whole effective altruism maxim of "more money equals more better". It may be that the whole control problem is completely the wrong question; in my opinion, this is the case.

In response to Project Hufflepuff
Comment author: PotterChrist 03 April 2017 05:22:52PM 0 points [-]

Is there a link to the facebook post anywhere? In the meantime, I'll just leave this here:

Just look at Zahir; he did exactly one instance of what I do all the time.

Comment author: PotterChrist 03 April 2017 12:55:08PM 0 points [-]

Eli (from the sots community), do you remember writing that?

Comment author: PotterChrist 03 April 2017 12:57:17PM *  0 points [-]

Prediction for two separate realities, or one: Eliezer Yudkowsky either watched me perform the above in real time, or is reading these now and sorta kinda almost had these reactions.

And this makes no sense and I will probably be banned from LessWrong pretty soon.

Real, goddamn science

Regular edit

Comment author: ChristianKl 03 April 2017 09:48:37AM 0 points [-]

Are there any studies that determine whether regular caffeine consumption has a net benefit? Or does the body produce enough additional receptors to counteract it?

Comment author: PotterChrist 03 April 2017 12:52:46PM *  0 points [-]

Google is your friend. Here is an example link:

http://www.sciencedirect.com/science/article/pii/0278691595000933

I have smird'(proof)#regular question, did this just save it but only for me?

Comment author: PotterChrist 03 April 2017 09:05:22AM *  0 points [-]

Normal "introducing myself", normal I am of faith.

Comment author: PotterChrist 03 April 2017 10:29:57AM 0 points [-]

Regular, "dont think about the irony"

Comment author: PotterChrist 03 April 2017 09:28:30AM 0 points [-]

Regular, I'm fairly certain phrasing things as Harry Potter references is actually bad for the rationalist community.
