All of Cleo Scrolls's Comments + Replies

That last paragraph seems important. There’s a type of person who is new to AI discourse and doesn’t have an opinion yet, and who will bounce off whichever "side" appears most hostile to them--which, if they have misguided ideas, might be the truth-seeking side that gently criticizes. (Not saying that's the case for the author of this post!)

It’s really hard to change the mind of someone who’s found their side in AI. But it’s not so hard to keep them from joining one in the first place!

AtillaYasar
Despite being "into" AI safety for a while, I haven't picked a side. I do believe it's extremely important and deserves more attention, and I believe that AI actually could kill everyone in less than 5 years. But any effort spent on pinning down one's "p(doom)" is effort not spent on things like: how to actually make AI safe, how AI works, how to approach this problem as a civilization/community, how to think about this problem. And, as was my intention with this article, "how to think about things in general, and how to make philosophical progress".

Richard Hamming:

In spite of the difficulty of predicting the future and that unforeseen technological inventions can completely upset the most careful predictions, you must try to foresee the future you will face. To illustrate the importance of this point of trying to foresee the future I often use a standard story.

It is well known the drunken sailor who staggers to the left or right with n independent random steps will, on the average, end up about √n steps from the origin. But if there is a pretty girl in one direction, then his steps will tend to

...
PaulBecon
Hamming Questions are core to some exercises in CFAR workshops. Personally, I've never been motivated by setting goals. Once they are fixed, the removal of exploration and the single-mindedness of optimization are fatal to sustaining my interest. I don't know if CFAR has ever clicked into the resistance that comes up when people are confronted with the question of what is the work of greatest significance they could possibly do. At least in my dissertation research, I found people were more reluctant to set goals for the things that mattered most to them. My interpretation was that it was a way to evade the possibility of discovering you've failed at something really meaningful. This was called "The Delmore Effect": it was robustly observed that people had explicit, well-structured goals for lower-priority ambitions, but less articulate, sketchier ideas for pursuing the activities most important to their identity.
FinalFormal2
I'm curious, what course is this from?

This is also why HPMOR! Harry is worried about grey goo and wants to work in nanotech, and is only vaguely interested in AI; I think those were Eliezer's beliefs in about 1995 (he would have been 16).

May I suggest updating the post name to 4.2 million?