Comment author: malthrin 23 December 2011 07:19:00PM *  -1 points [-]

Voted you down. This is deontologist thought in transhumanist wrapping paper.

Ignoring the debate concerning the merits of eternal paradise itself and the question of Heaven's existence, I would like to question the assumption that every soul is worth preserving for posterity.

Consider those who have demonstrated through their actions that they are best kept excluded from society at large. John Wayne Gacy and Jeffrey Dahmer would be prime examples. Many people write these villains off as evil and give their condition not a second thought. But it is quite possible that they actually suffer from some sort of neurological damage and are thus not fully responsible for their crimes. In fact, there is evidence that the brains of serial killers are measurably different from those of normal people. Far enough in the future, it might be possible to "cure" them. However, they would still possess toxic memories and thoughts that would greatly distress them once they were normal. To truly save them, they would likely need to have many or all of their memories erased. At that point, with an amnesic brain and a cloned body, are they even really the same person, and if not, what was the point of saving them?

Forming a robust theory of mind and realizing that not everyone thinks or sees the world the same way you do is actually quite difficult. Consider the immense complexity of the world we live in and the staggering scope of thoughts that can possibly be thought as a result. If eternal salvation means first and foremost soul preservation, maybe there are some souls that just shouldn't be saved. Maybe Heaven would be a better, happier place without certain thoughts, feelings and memories--and without the minds that harbor them.

Comment author: malthrin 21 December 2011 10:21:51PM *  11 points [-]

Make sure you know which "SOPA" you're referring to. This piece of legislation has undergone significant change from the version that sparked popular outrage.

Added after reading some other comments: if you've made cynical predictions about SOPA's progress through Congress or its effects in the real world, write those predictions down somewhere now, and don't forget to update your beliefs once the eventual outcome is known.

Comment author: MixedNuts 20 December 2011 10:29:08PM 28 points [-]

I strongly suspect that what's going on with "people who talk to children like they're adults" is that they talk to children like they're people.

The morality of convincing children of arbitrary stuff is questionable. Though less than usual, because children are designed to work that way (also, changing their preferences in cartoons isn't the end of the world). Do you know if the liking is sincere - i.e., whether they actually enjoy cooking, or only believe they do and are surprised each time to find they didn't?

Comment author: malthrin 21 December 2011 10:17:13PM *  12 points [-]

Regarding "convincing" children of things: this AI koan is relevant.

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

“What are you doing?”, asked Minsky.

“I am training a randomly wired neural net to play Tic-Tac-Toe,” Sussman replied.

“Why is the net wired randomly?”, asked Minsky.

“I do not want it to have any preconceptions of how to play”, Sussman said.

Minsky then shut his eyes.

“Why do you close your eyes?”, Sussman asked his teacher.

“So that the room will be empty.”

At that moment, Sussman was enlightened.

Comment author: malthrin 21 December 2011 10:12:36PM 3 points [-]

Alcohol.

Comment author: malthrin 20 December 2011 06:06:22AM 0 points [-]

So, I missed my goal of scoring 100% in the Stanford AI class. Time to do better - to do what others can't, or just haven't thought of yet.

In response to comment by malthrin on Uncertainty
Comment author: cadac 11 December 2011 11:29:32PM 0 points [-]

Maybe I'm missing something obvious here, but I'm unsure how to calculate P(S). I'd appreciate it if someone could post an explanation.

In response to comment by cadac on Uncertainty
Comment author: malthrin 12 December 2011 05:00:30PM *  2 points [-]

Sure. S results from HH or from TT, so we'll calculate those posteriors independently and combine them at the end, weighted by how likely each outcome is given S: P(p=x|S) = P(p=x|HH) * P(HH|S) + P(p=x|TT) * P(TT|S). By symmetry, P(HH|S) = P(TT|S) = 1/2.

We start out with a uniform prior: P(p=x) = 1. After observing one H, by Bayes' rule, P(p=x|H) = P(H|p=x) * P(p=x) / P(H). P(H|p=x) is just x. Our prior is 1. P(H) is our prior, multiplied by x, integrated from 0 to 1. That's 1/2. So P(p=x|H) = x*1/(1/2) = 2x.

Apply the same process again for the second H. Bayes' rule: P(p=x|HH) = P(H|p=x,H) * P(p=x|H) / P(H|H). The first term is still just x. The second term is our updated belief, 2x. The denominator is our updated belief, multiplied by x, integrated from 0 to 1. That's 2/3 this time. So P(p=x|HH) = x*2x/(2/3) = 3x^2.

Calculating tails is similar, except we update with 1-x instead of x. So our belief goes from 1, to 2-2x, to 3(1-x)^2 = 3x^2-6x+3. Then substitute both posteriors into the original equation, weighting each by 1/2: (3/2)(x^2) + (3/2)(x^2 - 2x + 1). From there it's just a bit of algebra to get it into the form I linked to.
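The derivation above is easy to check numerically: under a uniform prior, the posterior after HH normalizes to 3x^2, after TT to 3(1-x)^2, and the equal-weight mixture matches the closed form (3/2)x^2 + (3/2)(x^2 - 2x + 1). Here's a minimal sketch (function names are mine, not from the thread):

```python
# Numerical check of P(p=x | S), where S = "both flips gave the same result",
# starting from a uniform prior on the coin's bias p.

def unnormalized_posterior(x, heads, tails):
    """Prior density (= 1) times the Bernoulli likelihood at p = x."""
    return (x ** heads) * ((1 - x) ** tails)

def normalize(density, n=100_000):
    """Normalize a density on [0, 1] by midpoint-rule integration."""
    dx = 1.0 / n
    total = sum(density((i + 0.5) * dx) for i in range(n)) * dx
    return lambda x: density(x) / total

post_hh = normalize(lambda x: unnormalized_posterior(x, 2, 0))  # -> 3x^2
post_tt = normalize(lambda x: unnormalized_posterior(x, 0, 2))  # -> 3(1-x)^2

def post_s(x):
    # By symmetry, P(HH|S) = P(TT|S) = 1/2.
    return 0.5 * post_hh(x) + 0.5 * post_tt(x)

x = 0.3
closed_form = 1.5 * x**2 + 1.5 * (x**2 - 2 * x + 1)
print(post_s(x), closed_form)  # the two should agree closely
```

This is just the Beta-Bernoulli conjugate update done by brute force: each observed head multiplies the density by x, each tail by 1-x, and normalization supplies the 2 and 3 factors from the comment.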

Comment author: malthrin 12 December 2011 03:46:43AM 2 points [-]

Why is your name Miley Cyrus?

Comment author: lukeprog 03 December 2011 07:25:28PM 0 points [-]

Link is broken.

Comment author: malthrin 03 December 2011 08:05:01PM 0 points [-]

Whoops, fixed.

[LINK] Fermi Paradox paper touching on FAI

2 malthrin 03 December 2011 07:22PM

This paper discusses the Fermi Paradox in the context of civilizations that can build self-replicating probes (SRPs) to explore/exploit the galaxy. In passing, it discusses some FAI-related objections to self-replicating machine intelligence.

One popular argument against SRPs comes from Sagan and Newman (1983). They argue that any presumably wise and cautious civilization would never develop SRPs, because such machines would pose an existential risk to the original civilization. The concern is that the probes might undergo a mutation that permits and motivates them either to wipe out the homeworld or to overcome any reasonable limit on their reproduction rate, in effect becoming a technological cancer that converts every last ounce of matter in the galaxy into SRPs.

Bad Clippy.

Comment author: JenniferRM 02 December 2011 03:14:36AM *  2 points [-]

Upvote :-)

Followup questions spring to mind... Is there standard software for managing large trees of this sort? Is any of it open source? Are there file formats that are standard in this area? Do any posters (or lurkers who could be induced to start an account to respond) personally prefer particular tools in real life?

Actionable advice would be appreciated!

Comment author: malthrin 02 December 2011 03:27:55PM 1 point [-]

There's a Stanford online course next semester called Probabilistic Graphical Models that will cover different ways of representing this sort of problem. I'm enrolled.
