
Comment author: private_messaging 24 January 2014 11:30:54AM 1 point

We don't think it has a probability of exactly 0, do we? Or that it's totally impossible that the universe is infinite, or that it's truly non-discrete, and so on.

A lot of conceptually simple things have no exact representation on a Turing machine, only unduly complicated approximate representations.

edit: also, it strikes me as dumb that the Turing machine has an infinite tape, yet it is not possible to make an infinite universe on it with a finite amount of code.
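For instance, a toy sketch of the first point (illustrative names; Newton's method over exact rationals): sqrt(2) is about as conceptually simple as numbers get, yet a program can only ever emit approximations of it.

```python
from fractions import Fraction

def sqrt2_approx(iterations):
    """Approximate sqrt(2) via Newton's method on x^2 - 2 = 0.
    Exact rational arithmetic throughout: every value returned is
    an approximation; none is ever sqrt(2) itself."""
    x = Fraction(3, 2)
    for _ in range(iterations):
        x = (x + 2 / x) / 2
    return x

print(float(sqrt2_approx(5)))  # 1.4142135623730951 - still not sqrt(2)
```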

Comment author: timtyler 24 January 2014 10:12:05PM -1 points

We don't think it has a probability of exactly 0, do we?

It isn't a testable hypothesis. Why would anyone attempt to assign probabilities to it?

Comment author: timtyler 23 January 2014 02:08:54AM 0 points

Hypercomputation doesn't exist. There's no evidence for it - nor will there ever be. It's an irrelevance that few care about. Solomonoff induction is right about this.

Comment author: ChristianKl 08 January 2014 12:11:05PM 0 points

Eliezer's posts do a great job of explaining the actual dangers of unfriendly AI, more along the lines of "the AI neither loves you, nor hates you, but you are composed of matter it can use for other things".

I'm not sure that's true. In the beginning stages, where an AI is vulnerable, it might very well use violence to prevent itself from being destroyed.

Comment author: timtyler 10 January 2014 12:18:08AM -1 points

Also, competition between humans (with machines as tools) seems far more likely to kill people than a superintelligent runaway. However, it's (arguably) not so likely to kill everybody. MIRI appears to be focusing on the "killing everybody" case. That is because - according to them - that is a really, really bad outcome.

The idea that losing 99% of humans would be an acceptable loss may strike laymen as crazy. However, it might appeal to some of those in the top 1%. People like Peter Thiel, maybe.

Comment author: RobbBB 09 January 2014 03:14:10AM 1 point

Friendliness is an extremely high bar. Humans are not Friendly, in the FAI sense. Yet humans are mutualists and can cooperate with each other.

Comment author: timtyler 09 January 2014 11:25:00AM 0 points

Right. So, if we are playing the game of giving counter-intuitive technical meanings to ordinary English words, humans have thrived for millions of years - with their "UnFriendly" peers and their "UnFriendly" institutions. Evidently, "Friendliness" is not necessary for human flourishing.

Comment author: lukeprog 05 January 2014 05:47:45PM 8 points

The history of effective altruism is littered with overconfident claims, many of which have later turned out to be false. In 2009, Peter Singer claimed that you could save a life for $200 (and many others repeated his claim). While the number was already questionable at the time, by 2011 we discovered that it was completely off. New numbers were then thrown around: from figures still in the hundreds of dollars (GWWC's estimate for SCI, which was later shown to be flawed) up to $1600 (GiveWell's estimate for AMF, which GiveWell itself expected to go up, and which indeed did go up).

Another good example is GiveWell's 2009 estimate that "Because [our] estimate makes so many conservative assumptions, we feel it is overall reasonable to expect [Village Reach's] future activities to result in lives saved for under $1000 each."

Comment author: timtyler 09 January 2014 03:01:20AM 4 points

"8 lives saved per dollar donated to the Machine Intelligence Research Institute. — Anna Salamon"

Comment author: RobbBB 05 September 2013 03:59:17PM 3 points

Nor does the fact that evolution 'failed' in its goals in all the people who voluntarily abstain from reproducing (and didn't, e.g., hugely benefit their siblings' reproductive chances in the process) imply that evolution is too weak and stupid to produce anything interesting or dangerous. We can't confidently conclude from one failure that evolution fails at everything; analogously, we can't infer from the fact that a programmer failed to make an AI Friendly that they almost certainly also failed at making the AI superintelligent. (Though we may be able to infer both from base rates.)

Comment author: timtyler 09 January 2014 02:41:30AM -1 points

Nor does the fact that evolution 'failed' in its goals in all the people who voluntarily abstain from reproducing (and didn't, e.g., hugely benefit their siblings' reproductive chances in the process) imply that evolution is too weak and stupid to produce anything interesting or dangerous.

Failure is a necessary part of mapping out the area where success is possible.

Comment author: timtyler 09 January 2014 02:36:47AM -1 points

Being Friendly is of instrumental value to barely any goals. [...]

This is not really true. See Kropotkin and Margulis on the value of mutualism and cooperation.

Comment author: timtyler 08 January 2014 11:27:32AM 1 point

Uploads first? It just seems silly to me.

The movie features a Luddite group assassinating machine learning researchers - not a great meme to spread around, IMHO :-(

Slightly interestingly, their actions backfire, and they accelerate what they seek to prevent.

Overall, I think I would have preferred Robopocalypse.

Comment author: ChrisHallquist 31 December 2013 04:56:24AM 24 points

One other point I should make: this isn't just about "someone" being wrong. It's about an author frequently cited by people in the LessWrong community on an important issue being wrong.

Indeed, I'm not sure I'd know about Taubes at all if not for the LessWrong community.

I've already mentioned Eliezer's "Correct Contrarian Cluster" as an example in another thread, but perhaps it would be helpful to mention other examples:

  • In a thread where someone asked what the evidence in favor of paleo was, Taubes was the main concrete source that came up. Specifically, Luke mentioned Taubes as the source he "usually" refers people to on this question, without taking a stand himself, saying he didn't have time to evaluate the evidence personally.
  • Sarah Constantin (commenter at Yvain's blog, author of a reply to Yvain's non-libertarian FAQ, and, I just learned, a MetaMed VP) has cited Taubes a couple of times, partly to make a libertarian point.
  • Jack brought up Taubes in offline conversation.
  • Yvain's old blog had a review of Taubes which doesn't seem to be public right now, but which I remember as partly criticizing Taubes while also lauding him for things that I now don't think he deserves credit for.

So Taubes was someone I could expect to see cited in the future when the issue of expert consensus gets discussed on LessWrong. In spite of all the people who didn't like these posts, I think I may have accomplished the goal of getting people to stop citing Taubes.

Comment author: timtyler 05 January 2014 11:15:30PM 0 points

One other point I should make: this isn't just about "someone" being wrong. It's about an author frequently cited by people in the LessWrong community on an important issue being wrong.

Not experts on the topic of diet, though. I associated with members of the Calorie Restriction Society some time ago; many of them were experts on diet. IIRC, Taubes was generally treated by those folk as a low-grade crackpot: barely better than Atkins.

Comment author: timtyler 02 January 2014 01:28:33PM -2 points

To learn more about this, see "Scientific Induction in Probabilistic Mathematics", written up by Jeremy Hahn

This line:

Choose a random sentence from S, with the probability that O is chosen proportional to u(O) - 2^-length(O).

...looks like a subtraction operation to the reader. Perhaps use "i.e." instead.
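On the charitable reading - the dash as apposition, i.e. u(O) = 2^-length(O) - a sketch of that sampling step could look like this (the names are mine, not the paper's):

```python
import random

def sample_sentence(sentences):
    """Pick a sentence O with probability proportional to
    u(O) = 2**-length(O), so shorter sentences are more likely."""
    weights = [2.0 ** -len(s) for s in sentences]
    return random.choices(sentences, weights=weights, k=1)[0]

print(sample_sentence(["0=0", "0=0 and 1=1", "forall x: x=x"]))
```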

The paper appears to be arguing against the applicability of the universal prior to mathematics.

However, why not just accept the universal prior - and then update on learning the laws of mathematics?
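Concretely, "accept, then update" could look like this toy model - hypotheses as bit-string programs with prior weight 2^-length, where learning a law just means discarding the hypotheses it rules out (all names illustrative):

```python
def update(prior, consistent):
    """Bayesian update: drop hypotheses ruled out by what we've
    learned, then renormalize the surviving weights."""
    kept = {h: w for h, w in prior.items() if consistent(h)}
    total = sum(kept.values())
    return {h: w / total for h, w in kept.items()}

# Universal-style prior over three toy "programs", weight 2**-length:
prior = {h: 2.0 ** -len(h) for h in ["0", "10", "110"]}

# Learning a "law": only hypotheses starting with "1" survive.
print(update(prior, lambda h: h.startswith("1")))
```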
