In response to comment by Jack on The Last Number
Comment author: Stuart_Armstrong 10 April 2010 09:24:24PM 1 point

Some very large integer.

Comment author: AllanCrossman 22 September 2013 01:31:40AM 1 point

Huh, integer. I don't know how that got past me when I wrote that.

Comment author: AllanCrossman 04 May 2010 09:18:29PM 3 points

Is Eliezer alive and well? He's not said anything here (or on Hacker News, for that matter) for a month...

Comment author: Roko 14 April 2010 09:46:29PM *  3 points

I have given David an abstract, which goes as follows:

The Singularity Institute for Artificial Intelligence has, in conjunction with Oxford University's Future of Humanity Institute, pioneered the application of debiasing to predicting the future and making policy suggestions for technology-related issues.

The mental skills and traits that everyday folk live their lives with are very different from the skills required to accurately predict the future of technology and human civilization; most importantly, prediction of complex future scenarios requires debiasing -- realizing that our brains have in-built weaknesses that prevent us from forming accurate beliefs about the world. Thirty years' worth of work on human cognitive biases has been examined and explained on the Overcoming Bias and Less Wrong blogs.

Academics from SIAI and FHI have done a significant amount of original work applying our knowledge of human cognitive biases to the issues that futurists and visionaries have traditionally thought about. In particular, Eliezer Yudkowsky and Marcello Herreshoff, Singularity Institute researchers, have outlined the risks that smarter-than-human AI systems pose, and have proposed a research paradigm to counter those risks. SIAI researchers Anna Salamon and Steve Rayhawk have developed computer models, including The Uncertain Future (a web application, www.theuncertainfuture.com), to combine the various opinions and beliefs that we have about the future. Often, simply taking existing beliefs and showing that they are probabilistically inconsistent can generate insight; the inability to check one's beliefs for global consistency is a fearsome human cognitive bias as far as predicting the future goes.
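
As a toy illustration of that last point about consistency checking, here is a minimal Python sketch (the events and the probabilities are hypothetical, chosen purely for illustration):

    # Hypothetical beliefs about nested events: "AGI by 2040" logically
    # implies "AGI by 2050", so coherence requires
    # P(AGI by 2040) <= P(AGI by 2050).
    beliefs = {
        "AGI by 2040": 0.9,  # hypothetical stated probability
        "AGI by 2050": 0.8,  # hypothetical stated probability
    }

    if beliefs["AGI by 2040"] > beliefs["AGI by 2050"]:
        print("Inconsistent: an event cannot be more probable than "
              "another event that it logically implies.")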

Comment author: AllanCrossman 14 April 2010 10:04:42PM 1 point

I think it's Herreshoff.

Comment author: Stuart_Armstrong 10 April 2010 07:38:30PM 1 point

That there is no integer that, when added to one, produces 4.2... (or alternatively, that arithmetic is consistent).

Comment author: AllanCrossman 10 April 2010 08:00:47PM -2 points

4.2 - 1 = 3.2. Simples.

Comment author: Psychohistorian 27 March 2010 09:23:15PM *  0 points

Assume three possible worlds, for simplicity:

A: 1 billion humans. No ETs.

B: 1 billion humans, 1 million ETs.

C: 1 billion humans, 1 billion billion billion ETs.

If I am using the anthropic principle and the observation that I am human, these together provide very strong evidence that we are in either world A or world B, with a slightly stronger nudge towards world A. Where we end up after this observation depends on our priors. I fully agree that additional inferences (such as our own existence raising the probability of other sentient beings, or the sheer size of the universe lowering the odds that we are alone) affect the final probability.
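
To make that concrete, here is a quick Python sketch of the update those three worlds imply (a uniform prior over worlds is assumed purely for illustration):

    # Posterior over worlds A, B and C after observing "I am human",
    # treating yourself as a random sample from all observers.
    populations = {
        "A": {"humans": 1e9, "ets": 0.0},
        "B": {"humans": 1e9, "ets": 1e6},
        "C": {"humans": 1e9, "ets": 1e27},  # 1 billion billion billion
    }

    prior = {world: 1.0 / 3.0 for world in populations}  # illustrative only

    # Likelihood of the observation "I am human" in each world.
    likelihood = {
        world: p["humans"] / (p["humans"] + p["ets"])
        for world, p in populations.items()
    }

    unnormalised = {w: prior[w] * likelihood[w] for w in populations}
    total = sum(unnormalised.values())
    posterior = {w: v / total for w, v in unnormalised.items()}

    for world, p in sorted(posterior.items()):
        print(world, p)
    # A and B end up nearly equal (A slightly ahead, ~0.5002 vs ~0.4998);
    # C is crushed to roughly 1e-18.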

The inference I described may be unduly restricted, but that is my exact point. The original post made an anthropic inference in isolation - it simply used the fact that there are more animals than humans, and that the author is a human, to infer that animals do not have experiences. The form of the argument would not have changed significantly were it used to argue that rocks lack experience. Thus, while the argument is legitimate, it is easily overwhelmed by additional evidence, such as the fact that humans and animals have somewhat similar brains. That was my point: the anthropic principle is easily swamped by additional evidence (as in the ET issue) and so is being overextended here.

Comment author: AllanCrossman 27 March 2010 10:32:37PM *  3 points

If the various species of ET are such that no particular species makes up the bulk of sentient life, then there's no reason to be surprised at belonging to one species rather than another. You had to be some species, and human is just as likely as klingon or wookie.

Comment author: AllanCrossman 27 March 2010 03:49:41PM 9 points

"why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?".

Well, quite. Both are absurd.

Comment author: Eliezer_Yudkowsky 01 March 2010 09:03:31PM 1 point

So rot13?

Comment author: AllanCrossman 01 March 2010 09:28:14PM *  2 points

I suppose. The comment could be:

"Also Crystal nights is a good story about a topic of some interest to the futurist/transhumanist element on LW, namely rfpncr sebz n fvzhyngvba."

Comment author: RichardKennaway 01 March 2010 08:52:17PM *  2 points

It isn't, at least, not in the sense of being a story whose punchline is "...naq vg jnf nyy n fvzhyngvba". You would already be foreseeing what Roko has mentioned by the end of the second screenful (and crying out, "Ab! Ab! Lbh znq sbbyf, unir V gnhtug lbh abguvat?").

Comment author: AllanCrossman 01 March 2010 08:58:03PM 5 points

Reading through it now. There are two relevant words in Roko's description, only one of which is obvious from the outset.

Still, I'm not sure I fully agree with LW's spoiler policy. I wouldn't be reading this piece at all if not for Roko's description of it. When the spoiler is that the text is relevant to an issue that's actually discussed on Less Wrong (rather than mere story details, e.g. C-3PO is R2-D2's father), then telling people about the spoiler is necessary...

Comment author: CronoDAS 16 February 2010 07:30:37AM 0 points

A thought on nanotechnology: considering that biological cells already have most of the capabilities of molecular nanotechnology, and that said cells have been undergoing natural selection for over a billion years, if something better were possible, it probably would have evolved by now. For example, I'd be very surprised if somebody one day makes a machine that's significantly better at protein synthesis than a ribosome is. I suspect that future nanotechnology will look a lot like today's biological systems.

Comment author: AllanCrossman 16 February 2010 08:43:07PM *  4 points

if something better were possible, it probably would have evolved by now

I don't think this argument works. Adaptive evolution has mostly been driven by DNA mutations and natural selection. DNA is transcribed to RNA and then translated into proteins. I'm not sure evolution (of Earth's cell-based life) could produce something radically different, because this central mechanism is so fundamental and so entrenched.
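
To spell out how rigid that pipeline is, here is a deliberately simplified Python sketch of it (only a handful of codons from the standard table are included, and transcription is modelled as reading the coding strand directly):

    # Toy model of the central dogma: DNA -> mRNA -> protein.
    CODON_TABLE = {
        "AUG": "Met",   # also the start codon
        "UUU": "Phe",
        "GGC": "Gly",
        "UAA": "STOP",  # one of the three stop codons
    }

    def transcribe(dna):
        # Simplified: the mRNA mirrors the coding strand, with U for T.
        return dna.replace("T", "U")

    def translate(mrna):
        # Read three-base codons until a stop codon appears.
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            amino = CODON_TABLE.get(mrna[i:i + 3], "?")
            if amino == "STOP":
                break
            protein.append(amino)
        return protein

    print(translate(transcribe("ATGTTTGGCTAA")))  # ['Met', 'Phe', 'Gly']

Every adaptation has to be expressible as a change somewhere in that one pipeline, which is why I doubt evolution could wander far from it.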

Comment author: Eliezer_Yudkowsky 16 February 2010 07:48:07AM 10 points

Um... that's a rather odd argument to make, considering steel, wheels, nuclear power, transistors, radio, lasers, books, LEDs...

Proteins are held together by van der Waals forces, which are much weaker than covalent bonds. Preliminary calculations show gargantuan opportunities for improvement (see Drexler's Nanosystems).

Comment author: AllanCrossman 16 February 2010 08:18:58PM 2 points

Proteins are held together by van der Waals forces, which are much weaker than covalent bonds

I'm not sure how this affects the argument, but the very flexibility of proteins is one of the things that makes them work. A whole bunch of biological reactions involve enzymes changing shape in response to some substance.
