Today's post, That Alien Message, was originally published on 22 May 2008. A summary (taken from the LW wiki):

 

Einstein used evidence more efficiently than other physicists, but he was still extremely inefficient in an absolute sense. If a huge team of cryptographers and physicists were examining an interstellar transmission, going over it bit by bit, they could deduce principles on the order of Galilean gravity just from seeing one or two frames of a picture. As if the very first human to see an apple fall had, on the instant, realized that its position went as the square of the time and that this implied constant acceleration.
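To spell out the inference attributed to that hypothetical first observer: if position grows as the square of the elapsed time, differentiating twice leaves an acceleration with no time dependence at all, which is exactly Galileo's law of free fall. A minimal sketch, with k standing for an arbitrary constant of proportionality (half the gravitational acceleration):

```latex
% Position proportional to the square of elapsed time:
s(t) = k\,t^{2}
% First derivative: velocity grows linearly with time,
\frac{ds}{dt} = 2k\,t
% Second derivative: acceleration is constant, independent of t,
\frac{d^{2}s}{dt^{2}} = 2k
% i.e. constant acceleration g = 2k -- Galileo's law of free fall.
```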


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Einstein's Speed, and you can use the sequence_reruns tag or the RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

3 comments
[anonymous]

A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple.

Why would it? General Relativity is so much more complicated than Newtonian mechanics, and it requires extra empirical data (the finite speed of light, the mass-independence of free fall, and probably more) that is not accessible from just three frames.

[This comment is no longer endorsed by its author]
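The "more complicated" objection can be made precise with the universal prior that Solomonoff induction (invoked in the next comment) is built on: every hypothesis is weighted by the lengths of the programs that reproduce the observations, so each extra bit of description costs a factor of two in prior probability. A sketch of the standard definition, with U a universal prefix machine and x the observed data:

```latex
% Universal (Solomonoff) prior of an observed string x:
% sum over all programs p whose output on U starts with x.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
% A hypothesis whose shortest program is k bits longer therefore starts
% out roughly 2^{-k} times less probable, until further data favour it.
```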
Shmi

A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.

Given the lack of the necessary experimental data (the constancy of the speed of light and the mass-independence of free fall), I suspect that a "Bayesian superintelligence" powerful enough to throw that many oodles of Solomonoff induction at the problem would end up spawning a UFAI more powerful than itself and being killed by it in the process.

To quote EY:

Solomonoff induction, taken literally, would create countably infinitely many sentient beings, trapped inside the computations. All possible computable sentient beings, in fact. Which scarcely seems ethical. So let us be glad this is only a formalism.

What are the odds of them staying inside the computation?

Thomas

What are the odds of them staying inside the computation?

Meta-computations would arise inside, into which some of those beings would upload themselves; then meta-meta-computations, and so on, at the expense of slowing down.

Can they come here? That depends entirely on whether we have a hardware connection to the machine that runs said process.

Anyway, some of us are already here, whether this SI runs or not.