Comment author: Halfwit 13 June 2013 07:27:41PM *  1 point [-]

I think we're past the point where it matters. If we had had a few lost decades in the mid-twentieth century, maybe (and, just to be cognitively polite here, this is just my intuition talking) the intelligence explosion could have been delayed significantly. We are just a decade off from home computers with >100 teraflops, not to mention the distressing trend toward neuromorphic hardware (here's Ben Chandler of the SyNAPSE project talking about his work on Hacker News). With all this inertia, it would take an extremely large downturn to slow us now. Engineering a new AI winter seems like a better idea, though I'm confused about how this could be done. Perceptrons discredited connectionist approaches for a surprisingly long time; perhaps a similar book could discredit (and indirectly defund) dangerous branches of AI which aren't useful for FAI research. But this seems unlikely, though less so than the OP significantly altering economic growth either way.

In response to AGI Quotes
Comment author: Halfwit 10 June 2013 08:47:46PM *  4 points [-]

The mathematician John von Neumann, born Neumann Janos in Budapest in 1903, was incomparably intelligent, so bright that, the Nobel Prize-winning physicist Eugene Wigner would say, "only he was fully awake." One night in early 1945, von Neumann woke up and told his wife, Klari, that "what we are creating now is a monster whose influence is going to change history, provided there is any history left. Yet it would be impossible not to see it through." Von Neumann was creating one of the first computers, in order to build nuclear weapons. But, Klari said, it was the computers that scared him the most.

Konstantin Kakaes

Comment author: Halfwit 06 June 2013 05:00:41AM *  23 points [-]

The fact that MIRI is finally publishing technical research has impressed me. A year ago it seemed, to put it bluntly, that your organization was stalling, spending its funds on the full-time development of Harry Potter fanfiction and popular science books. Perhaps my intuition there was uncharitable, perhaps not. I don't know how much of your lead researcher's time was spent on said publications, but it certainly seemed, from the outside, that it was the majority. Regardless, I'm very glad MIRI is focusing on technical research. I don't know how much farther you have to walk, but it's clear you're headed in the right direction.

Comment author: XiXiDu 24 March 2012 04:01:33PM 31 points [-]

I'm intrigued as to the thought processes and motivations which lead to this article in light of your previous two weeks of comments and posts.

  • I realized that I might have entered some sort of vicious circle of motivated skepticism.
  • I can't ask other people to explore both sides of an argument if I don't do so either.
  • Someone wrote that I shouldn't ask AI researchers about risks from AI if I don't understand the basic arguments underlying the possibility.
  • I was curious whether my perception of the arguments in favor of risks from AI is flawed and whether I am missing important points, since I haven't read the Sequences.
  • I recently wrote that I agree with 99.99% of what Eliezer Yudkowsky writes. The number was wrong, but I wanted to show that it isn't just made up.
  • I don't perceive myself to be a troll at all, although some unthoughtful comments might have given that impression.

Although it looks as though everyone hates me now, I still don't want to be wrong.

I know that not having read the Sequences is received badly, especially since I posted a lot in the past. But that's not some incredibly evil plan or anything. I am unable to play games I want to play for longer than 20 minutes either, yet I have to do physical exercises every day for about 2 hours, even though I don't really want to. It sometimes takes me months to read a single book. I think some here underestimate how people can act in a weird way without being evil. I have been in psychiatric therapy for 3 years now (yes, I can prove this).

I can neither get myself to read the Sequences nor am I able to ignore risks from AI. But I am trying.

Comment author: Halfwit 04 June 2013 06:54:26PM *  1 point [-]

I think you're an important guy to have around for reasons of evaporative cooling.

Comment author: Halfwit 03 June 2013 03:31:55AM 0 points [-]

The line I came up with, when asking the question to myself, was this: If the singularity is a religion, it is the only religion with a plausible mechanism of action.

Comment author: Halfwit 02 June 2013 10:03:14PM *  6 points [-]

"Why do people worry about mad scientists? It's the mad engineers you have to watch out for." - Lochmon

Comment author: Eliezer_Yudkowsky 22 May 2013 12:01:07AM 3 points [-]

It'll be healthier and more enjoyable just to eat actual food

I tried that. It didn't work. If you have something specific to recommend that can replace meals instead of Soylent, speak up.

Comment author: Halfwit 22 May 2013 02:31:39AM *  1 point [-]

I believe you can live off Boost for an indefinite period of time.

Comment author: Halfwit 21 May 2013 03:33:07PM 1 point [-]

"I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive." - Randall Munroe

Comment author: lukeprog 18 May 2013 08:33:48PM 3 points [-]

I think I'd put something like 5% on AI in the next 15 years. Your estimate is higher, I imagine.

Comment author: Halfwit 18 May 2013 11:58:50PM 1 point [-]

5% is pretty high considering the purported stakes.

Comment author: Halfwit 16 April 2013 03:18:55AM *  12 points [-]

Untangling the Knot: A User's Guide to the Human Mind

Your Brain, an Owner's Manual

Less than One, Greater than Zero: The Sequences, 2006–2009

Approximating Omega (badly, of course)

Sharpening the Mace

Uncountable Infinite Shades of Grey (my apologies)

Stop Tripping Yourself: A User's Guide to the Human Mind

Marshalling the Mind: An Introduction to the Informed Art of Rationality

Motes and Meaning: The Less Wrong Archives

Of Motes and Meaning

Theory, in Practice

Thinking, in Practice

Thinking in Circles: Avoiding the Known Bugs in Human Reasoning
