All of Mader_Levap's Comments + Replies

"You can update by posting a header to all of your blog posts saying, "I wrote this blog during a dark period of my life. I now realize that Eliezer Yudkowsky is a decent and honest person with no ill intent, and that anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret this page, and leave it here as an archive to that regret.""

Wow, just wow. The cult leader demands a Stalin-style self-critique on every page (no sane person would consider that reasonable) and censorship of all posts related to Less Wrong, after a campaign of harassment.

"I don't trust my ability to set limits on the abilities of Bayesian superintelligences."

Limits? I can think up a few on the spot already.

Environment: CPU power, RAM capacity, etc. I don't think even you guys claim something as blatant as "AI can break the laws of physics when convenient".

Feats:

  • Win this kind of situation in chess. Sure, the AI would not allow that situation to occur in the first place during a game, but that's not my point.

  • Make a human understand the AI. Note: uplifting does not count, since the human then ceases to be human. As a practic

... (read more)
4hairyfigment
As far as environment goes, the context says exactly the opposite of what you suggest it does. Among your bullet points, only the first seems well-defined. I could try to discuss them anyway, but I suggest you just read up on the subject and come back. Eliezer's organization has a great deal of research on self-understanding and theoretical limits; it's the middle icon at the top right of the page.

I have never seen anyone brag so much about defeating strawmen. Hell, in one place he explicitly said that he made up the "Soul Swap World" on the spot just to happily destroy it.

And I still do not know what I am supposed to think about personal identity. I happen to think that ME is generated by the brain: a brain that works so well it can generate a mind despite all of those changes in atoms meticulously described by Yudkowsky.

3Lumifer
And how do you know that?

"Real, authentic Nazis were proud of their achievement"

Not publicly. Holocaust denial exists because it (the mass murder of certain groups of humans) makes them look bad. Of course, that is Insane Troll Logic, but I do not think anyone expects sane logic from Nazis.

"Which transhumanist ideas are "not even wrong"?"

The Technological Singularity, for example (as defined on Wikipedia). In my view, it is just an atheistic version of the Rapture or The End Of The World As We Know It, endemic in various cults and equally likely.

The reason is that recursive self-improvement is not possible, since it requires perfect self-knowledge and self-understanding. In reality, an AI will be a black box to itself, just as our brains are black boxes to ourselves.

More precisely, my claim is that any mind, at any level of complexity, is insuf... (read more)

Our brains are mysterious to us not simply because they're our brains and no one can fully understand themselves, but because our brains are the result of millions of years of evolutionary kludges and because they're made out of hard-to-probe meat. We are baffled by chimpanzee brains or even rabbit brains in many of the same ways as we're baffled by human brains.

Imagine an intelligent agent whose thinking machinery is designed differently from ours. It's cleanly and explicitly divided into modules. It comes with source code and comments and documentation a... (read more)

That does not matter. If you think it is a scam, then the size of the promised reward does not matter. 100? A googol? A googolplex? 3^^^3? Infinite? It just does not enter the calculation in the first place, since it is made up anyway.

Determining "is this a scam?" would probably have to rely on things other than the size of the reward. That avoids the whole "but but there is no 1 in 3^^^3 probability because I say so" nonsense.

5Jiro
There's a probability of a scam; you're not certain that it is a scam. The small probability that you are wrong about it being a scam is multiplied by the large amount.
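Jiro's point can be sketched as a quick expected-value calculation. This is a minimal illustration with made-up numbers; the probability, reward, and cost values below are assumptions for the sake of the sketch, not figures from the thread:

```python
# Expected-value sketch of Jiro's point: even a tiny probability that
# the offer is genuine can dominate the calculation once the promised
# reward is large enough. All numbers are illustrative assumptions.

p_genuine = 1e-9      # assumed tiny chance you are wrong and it is not a scam
reward = 10**100      # a googol units of utility, as in the comment above
cost = 5              # assumed cost of taking the offer

expected_gain = p_genuine * reward - cost
print(expected_gain > 0)  # the huge reward swamps the tiny probability
```

The disagreement in the thread is over whether the reward size should be allowed into this multiplication at all, or whether "this is a scam" should screen it off before any expected-value arithmetic happens.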