In response to That Alien Message
Comment author: Marcello 22 May 2008 03:30:13PM 23 points [-]

> Bravo.

> It doesn't seem (ha!) that an AI could deduce our psychology from a video of a falling rock, not because of information bounds but because of uncorrelation - that video seems (ha!) equally likely to be from any number of alien species as from humans.

You're not being creative enough. Think what the AI could figure out from a video of a falling rock. It could learn something about:

* The strength of the gravitational field on our planet
* The density of our atmosphere (from any error terms in the square law for the falling rock)
* The chemical composition of our planet (from the appearance of the rock)
* The structure of our cameras (from things like lens flares and any other artifacts)
* The chemical composition of whatever is illuminating the rock (from the spectrum of the light)
* The colors that we see in (our color cameras record things in RGB)
* For that matter, the fact that we see at all, instead of using sonar, etc.
* And that's just what I can think of with a mere human brain in five minutes

These would tell the AI a lot about our psychology.
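The first of those inferences, at least, is mechanically simple: given the rock's tracked position in each frame, the gravitational acceleration falls out of a quadratic fit. A minimal sketch with synthetic data (the frame rate, noise level, and metric scale are all assumptions for illustration):

```python
import numpy as np

# Hypothetical setup: the rock's vertical position has been tracked in
# each frame and converted to meters. We fit y(t) = y0 + v0*t + (g/2)*t^2;
# twice the leading coefficient recovers g.
fps = 30.0                        # assumed camera frame rate
g_true = 9.81                     # m/s^2, used only to synthesize the data
t = np.arange(0, 1.0, 1 / fps)    # one second of footage
y = 0.5 * g_true * t**2           # ideal drag-free drop
y += np.random.default_rng(0).normal(0, 0.002, t.size)  # ~2 mm tracking noise

coeffs = np.polyfit(t, y, 2)      # quadratic least-squares fit
g_est = 2 * coeffs[0]
print(f"estimated g = {g_est:.2f} m/s^2")
```

In practice the fit only recovers g in pixels per second squared; turning that into m/s² requires an independent length scale, such as the apparent size of the rock itself.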

> Still, I really wouldn't try it, unless I'd proven this (fat chance), or it was the only way to stop the world from blowing up tomorrow anyway.

Aren't you glad you added that disclaimer?

Comment author: unconscious 28 February 2015 05:01:42PM 2 points [-]

I'm really late here, but a few problems:

- Time and space resolution might be too low to allow a meaningful estimate of air resistance, especially if the camera angle doesn't let you accurately determine the rock's 3D shape.
- Encoding the color in RGB discards the spectral information.
- If the AI didn't already have knowledge of the properties of minerals and elements, it would need to calculate them from first principles. Without looking into this specifically, I'd be surprised if that was computationally tractable, especially since the AI doesn't know our fundamental physics or the values of the relevant constants beforehand.
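The resolution worry can be made concrete with a back-of-the-envelope estimate (all numbers here are hypothetical, for a fist-sized roughly spherical rock): the position offset that quadratic drag produces over a short drop is centimeter-scale, so it disappears into a few pixels of tracking error at modest video resolution.

```python
import math

# Hypothetical parameters for a fist-sized stone falling for one second.
rho_air = 1.2        # kg/m^3, air density near sea level
cd = 0.5             # drag coefficient, assumed roughly spherical
radius = 0.05        # m
mass = 1.4           # kg (rock-density sphere of that radius)
g = 9.81             # m/s^2
area = math.pi * radius**2

t = 1.0              # seconds of free fall
v = g * t            # ~9.8 m/s, ignoring drag for this estimate
drag_accel = 0.5 * rho_air * cd * area * v**2 / mass   # ~0.17 m/s^2
offset = 0.5 * drag_accel * t**2                       # position shift from drag
drop = 0.5 * g * t**2                                  # ~4.9 m total drop

print(f"drag deceleration ~ {drag_accel:.2f} m/s^2, "
      f"offset ~ {offset*100:.0f} cm over a {drop:.1f} m drop")
```

A few centimeters against a five-meter drop is a ~2% effect: recoverable with sharp footage and good tracking, but easily lost to motion blur or coarse resolution.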

Comment author: Epictetus 28 February 2015 03:32:29AM 1 point [-]

If people see you as an authority and you make a mistake, they can accept that no one is perfect and mistakes happen. If they doubt the legitimacy of your authority, any mistakes will be taken as evidence of hubris and incompetence.

I think part of it is that the general population just isn't used to algorithms on a conceptual level. One can understand the methods used and so accept the algorithm, or one can get used to such algorithms over a period of time and come to accept them.

> Besides, a human's "expert judgment" on a subject you know little about is just as much of a black box.

And such experts are routinely denounced by people who know little about the subject in question. I leave examples as an exercise for the reader.

Comment author: unconscious 28 February 2015 03:53:24AM 1 point [-]

> And such experts are routinely denounced by people who know little about the subject in question. I leave examples as an exercise for the reader.

True, but that seems inconsistent with taking human experts but not algorithms as authorities. Maybe these tend to be different people, or they're just inconsistent about judging human experts.

Comment author: imuli 27 February 2015 03:20:07PM 1 point [-]

Different methods are more and less likely to lead one to the truth (in a given universe). I see little harm in calling those less likely arts dark. Rhetoric is surely grey at the lightest.

Comment author: unconscious 28 February 2015 03:12:45AM 5 points [-]

Presentation will influence how people receive your ideas no matter what. If you present good ideas badly, you'll bias people away from the truth just as much as if you presented bad ideas cleverly.

Comment author: Epictetus 28 February 2015 02:14:39AM *  4 points [-]

Probably because humans who don't know much about algorithms basically have no way to observe or verify the procedure. The result of an algorithm has all the force of an appeal to authority, and we're far more comfortable granting authority to humans.

I think people have also had plenty of experience with machines that malfunction and have objections on those grounds. We can tell when a human goes crazy if his arguments turn into gibberish, but it's a bit harder to do with computers. If an algorithm outputs gibberish that's one thing, but there are cases when the algorithm produces a seemingly reasonable number that ends up being completely false.

It's a question of whether to trust a transparent process with a higher risk of error or a black box with a lower, but still non-negligible risk of error.

Comment author: unconscious 28 February 2015 03:03:56AM 2 points [-]

I'm not sure that explains why they judge the algorithm's mistakes more harshly even after seeing the algorithm perform better. If you hadn't seen the algorithm perform and didn't know it had been rigorously tested, you could justify being skeptical about how it works, but seeing its performance should answer that. Besides, a human's "expert judgment" on a subject you know little about is just as much of a black box.