
Vladimir_Nesov comments on Singularity FAQ - Less Wrong Discussion

16 points · Post author: lukeprog · 19 April 2011 05:27PM




Comment author: Vladimir_Nesov · 20 April 2011 07:31:45PM · 4 points

> No, the point of that section is that there are many AI designs in which we can't explicitly make goals.

I know, but you use the word "predict", which is what I was pointing out.

> I disagree. A textbook error in machine learning that has not yet been solved is a good match for a fundamental problem.

What do you mean, "has not yet been solved"? This kind of error is routinely being solved in practice, which is why it's a textbook example.

> Again, I'm not claiming that these aren't also problems elsewhere.

Yes, but that makes it a bad illustration.

> Why? I've already varied the wording.

Because it's bad prose: it sounds unnatural (YMMV).

> Hence, the link, for people who don't know.

This doesn't address my argument. I know the link is there and that people could click on it; that's not what I meant.

(More later, maybe.)