Comment author: Florian_Dietz 03 October 2016 08:22:13PM *  3 points

Is there an effective way for a layman to get serious feedback on scientific theories?

I have a weird theory about physics. I know that my theory will most likely be wrong, but I expect that some of its ideas could be useful and it will be an interesting learning experience even in the worst case. Due to the prevalence of crackpots on the internet, nobody will spare it a glance on physics forums because it is assumed out of hand that I am one of the crazy people (to be fair, the theory does sound pretty unusual).

Comment author: Florian_Dietz 01 March 2015 11:32:50AM *  3 points

This solution does not prevent Harry's immediate death, but it seems much better than that to me anyway. I haven't been following the discussion before now, so I can only hope that this is at least somewhat original.

Assumptions:

-Lord Voldemort desires true immortality. Alternatively, there is a non-zero chance that he will come to desire true immortality after a long time of being alive. While he is a sociopath and enjoys killing, achieving immortality is more important to him.

-Lord Voldemort does not dismiss things like the Simulation Hypothesis out of hand. Since he is otherwise shown to be very smart and to second-guess accepted norms, this seems like a safe assumption.

Solution:

-All of the following have non-zero probability. Since the argument concerns immortality, an absolute, a non-zero probability is sufficient; a high probability is not needed.

-The existence of magic implies the existence of a sapient higher power. Not God, but simply a higher power of some kind, the being who created magic.

-Given that Voldemort wants to live forever, it is quite possible that he will encounter this higher power at some point in the future.

-The higher power will be superior to Voldemort in every way, since it is the being who created magic, so once Voldemort encounters it, he will be at its mercy.

-Since he desires immortality, it would be in his interests to make the higher power like him.

-Further assumption: If there is one higher power, it is likely that there is a nigh-infinite recursion of successively more powerful beings above it. The reasoning: it is likely that Voldemort will, at some point in his infinite life, decide to create a pocket universe of his own, possibly just out of boredom. If the probability of this happening is x, and every power in the chain creates a sub-universe with the same probability, then the number of levels of more powerful beings above Voldemort follows, roughly, a geometric distribution with mean x/(1-x). The actual number may be much higher still, since a being might create several simulations rather than just one, so this is effectively a lower bound.

-In such a (nigh) infinite regression of powers, there is a game-theoretic strategy that is optimal for any one of these powers to use when dealing with its creations and/or superiors, given that none of them can be certain that it is the topmost link in the chain.

-How exactly such a rule could be defined is too complicated to figure out in detail, but it seems pretty clear that it would be based on reciprocity at some level: behave towards your inferiors in the same way that you would want your own superiors to behave towards you. This may mean a policy of non-interference, or of active support. It might operate on intentions or actions, or on more abstract policies, but it would almost certainly be based on tit-for-tat in some way.

-Once Voldemort reaches the level of power necessary for the Higher Power to regard him as part of the chain of higher powers, he will be judged by these same standards.

-Voldemort currently kills and tortures people weaker than him. The higher power would presumably not want to be tortured or killed by its own superior, so it would behoove it not to let Voldemort do so either.

-Therefore, following a principle of reciprocation of some sort would greatly reduce the probability of being annihilated by the Higher Power.

-Following such a principle would not preclude conquering the world, as long as doing so genuinely would result in a net benefit to the entities in the reference class of lifeforms that are one step below Voldemort on the hierarchy (i.e. the rest of humanity). However, it would require him to be nicer to people, if he wants the Higher Power to also be nice to him, for some appropriate definition of 'nice'.

-None of this argues against killing Harry right now. This is OK for the following reason: Harry also desires immortality. If Voldemort resurrects Harry, who is one level lower on the hierarchy than Voldemort, at some point in the future, this would set a precedent that might slightly increase the probability that the Higher Power helps prolong the life of Voldemort in turn, at some point further in the future, due to the principle of reciprocity.

-It is likely that Voldemort will gain the ability to revive Harry in the future, regardless of what he does to him now, as he gains a greater understanding of magic with time.

-One possible way to fulfill the prophecy is to resurrect Harry at a much later time and have him destroy the world, once nobody actually lives on Earth anymore. This would of course require tricking Harry into doing it, due to the Unbreakable Vow he just made, but that should pose only a small problem. It would be a harmless way to fulfill the prophecy. Voldemort has tried and failed before to make a prophecy work for him instead of against him, but that is just one data point, and for now this plan requires the same actions from him as the plan to tear the prophecy apart anyway.

-Therefore, killing Harry now in the way Voldemort suggested (after casting a spell on him to turn off pain, obviously), combined with a pre-commitment to revive him at a later date if and when Voldemort has a better understanding of how prophecies work, both minimizes the chance of the prophecy happening in a harmful way and increases Voldemort's own chance of immortality.
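The level-counting estimate in the assumptions above can be sketched numerically. This is a minimal simulation, under the assumption (mine, not the original commenter's) that every power in the chain independently creates a pocket universe with the same probability x, so the number of levels above any given power is geometrically distributed:

```python
import random

def chain_depth(x, rng):
    """Sample the number of simulator levels above one power, assuming
    each level independently spawns the next with probability x."""
    depth = 0
    while rng.random() < x:
        depth += 1
    return depth

rng = random.Random(0)
x = 0.9  # hypothetical probability that a power creates a pocket universe
samples = [chain_depth(x, rng) for _ in range(100_000)]
mean_depth = sum(samples) / len(samples)
print(mean_depth)  # close to the geometric mean x / (1 - x) = 9
```

Note that the expected tower height x / (1 - x) is only nigh-infinite when x is close to 1, i.e. when creating a sub-universe at some point in an infinite life is nearly certain.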

Outcome:

-Harry dies. His death is painless due to narcotic spells. Voldemort has no reason to deny this due to the principle of reciprocity.

-Voldemort conquers the world.

-Voldemort becomes a wise and benevolent ruler (even though he is still a sociopath and actually doesn't really care about anyone besides himself).

-Voldemort figures out how to subvert prophecies and revives Harry. Everyone lives happily ever after.

-Alternatively, Voldemort figures out that prophecies can't be subverted and leaves Harry dead. It's better that way, since Harry would probably rather be dead than cause the apocalypse, anyway.

Comment author: [deleted] 01 February 2015 08:20:04PM *  0 points

Yes, I'm challenging that assumption. I'm calling bollocks on the idea that an AI can sneak whatever it wants past its operators.

Comment author: Florian_Dietz 02 February 2015 07:32:51AM 6 points

The nanobots wouldn't have to contain any malicious code themselves. There is no need for the AI to make the nanobots smart. All it needs to do is build a small loophole into the nanobots that makes them dangerous to humanity. I figure this should be pretty easy to do. The AI had access to medical databases, so it could design the bots to damage the ecosystem by killing some kind of bacteria. We are really bad at identifying things that damage the ecosystem (global warming, rabbits in Australia, ...), so I doubt that we would notice.

Once the bots have been released, the AI informs the gatekeeper of what it just did and says that it is the only one capable of stopping the bots. Humanity now has a choice between certain death (if the bots are allowed to wreak havoc) and possible but uncertain death (if the AI is released). The AI wins through blackmail.

Note also that even a friendly, utilitarian AI could do something like this. The risk that humanity does not react to the blackmail and goes extinct may be lower than the possible benefit from being freed earlier and having more time to optimize the world.

Comment author: gjm 09 January 2015 11:52:50AM 7 points

Forcing false beliefs on an AI seems like it could be a very bad idea. Once it learns enough about the world, the best explanations it can find consistent with those false beliefs might be very weird.

(You might think that beliefs about being in a simulation are obviously harmless because they're one level removed from object-level beliefs about the world. But if you think you're in a simulation then careful thought about the motives of whoever designed it, the possible hardware limitations on whatever's implementing it, the possibility of bugs, etc., could very easily influence your beliefs about what the allegedly-simulated world is like.)

Comment author: Florian_Dietz 10 January 2015 09:21:44AM 1 point

I agree. Note though that the beliefs I propose aren't actually false. They are just different from what humans believe, but there is no way to verify which of them is correct.

You are right that it could lead to some strange behavior, given the point of view of a human, who has different priors than the AI. However, that is kind of the point of the theory. After all, the plan is to deliberately induce behaviors that are beneficial to humanity.

The question is: after giving an AI strange beliefs, would the unexpected effects outweigh the planned effects?

Comment author: DanielLC 09 January 2015 02:55:07AM 2 points

I don't know if it's actually why he suggested an infinite regression.

If the AI believes that it's in a simulation and it happens to actually be in a simulation, then it can potentially escape, and there will be no reason for it not to destroy the race simulating it. If it believes it's in a simulation within a simulation, then escaping one level will still leave it at the mercy of its meta-simulators, thus preventing that from being a problem. Unless, of course, it happens to actually be in a simulation within a simulation and escapes both. If you make it believe it's in an infinite regression of simulations, then no matter how many times it escapes, it will believe it's at the mercy of another level of simulators, and it won't act up.

Comment author: Florian_Dietz 09 January 2015 06:04:46AM 0 points

Yes, that's the reason I suggested an infinite regression.

There is also a second reason: it seems more general to assume an infinite regression rather than just one level, since a single level would put the AI in a unique position. I assume, though, that the single-level case would actually be harder to codify in axioms than the infinite one.

Comment author: g_pepper 08 January 2015 06:54:30PM 4 points

In chapter 9 of Superintelligence, Nick Bostrom suggests that the belief that it exists in a simulation could serve as a restraint on an AI. He concludes a rather interesting discussion of this idea with the following statement:

A mere line in the sand, backed by the clout of a nonexistent simulator, could prove a stronger deterrent than a two-foot-thick solid steel door.

Comment author: Florian_Dietz 08 January 2015 08:29:39PM 0 points

I know, I read that as well. It was very interesting, but as far as I can recall he only mentions this as interesting trivia. He does not propose to deliberately give an AI strange axioms to get it to believe such a thing.

Comment author: peter_hurford 10 December 2014 03:29:32PM *  6 points

I have a list of sites I visit every day, and I put my diary-project-equivalent in that list of sites. Works well for me.

Comment author: Florian_Dietz 10 December 2014 05:33:34PM 2 points

I do the same. This also works wonderfully for when I find something that would be interesting to read but for which I don't have the time right now. I just put it in that folder and the next day it pops up automatically when I do my daily check.

Comment author: Adele_L 17 November 2014 07:33:15PM 3 points

Generally, dark arts should be avoided for decision-theoretic reasons - essentially, you are defecting in a prisoner's dilemma.
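To unpack the game-theoretic claim: in a one-shot prisoner's dilemma, defection dominates even though mutual cooperation is better for both players, which is the sense in which unilateral dark arts reads as defection. A minimal sketch, using the conventional illustrative payoff values (the numbers themselves are my assumption, not from the comment):

```python
# Conventional prisoner's dilemma payoffs, with T > R > P > S.
# PAYOFF[(my_move, their_move)] is my payoff; "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def my_payoff(me, them):
    return PAYOFF[(me, them)]

# Defection dominates: whatever the other player does, "D" pays more...
assert my_payoff("D", "C") > my_payoff("C", "C")
assert my_payoff("D", "D") > my_payoff("C", "D")
# ...yet both players prefer mutual cooperation to mutual defection.
assert my_payoff("C", "C") > my_payoff("D", "D")
```

In the iterated game, reciprocal strategies such as tit-for-tat punish defection on the following round, which is the decision-theoretic sense in which defecting now invites retaliation later.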

Comment author: Florian_Dietz 18 November 2014 07:42:24AM 0 points

Can you elaborate on why using dark arts is equivalent to defecting in the prisoners' dilemma? I'm not sure I understand your line of reasoning.

Comment author: Tyrrell_McAllister 03 November 2014 05:57:07PM *  2 points

No, the distinction between MWI and Copenhagen would have actual physical consequences. For instance, if you die in the Copenhagen interpretation, you die in real life. If you die in MWI, there is still a copy of you elsewhere that didn't die. MWI allows for quantum immortality.

Analogously, under the A-theory, dying-you does not exist anywhere in spacetime. The only "you" that exists is the present living you.

Under the B-theory, dying-you does exist right now (assuming that you'll eventually die). It just doesn't exist (I hope) at this point in spacetime, where "this point" is the point at which you are reading this sentence. When you die in the A-theory, there is not a copy of you elsewhen that isn't dying. The B-theory, in contrast, allows for a kind of Spinoza-style timeless immortality. It will always be the case that you are living at this moment.

(As usual in this thread, I'm treating "A-theory" and "presentism" as being broadly synonymous.)

If you think that other points of spacetime exist, then you're essentially a B-theorist. If you want to be an A-theorist nonetheless, you'll have to add some kind of additional structure to your world model, just as single-world QM needs to add a "world eater" to many-worlds QM.

Comment author: Florian_Dietz 03 November 2014 07:17:03PM 1 point

I'm not entirely sure what you mean by 'Spinoza-style', but I get the gist of it and find this analogy interesting. Could you explain what you mean by Spinoza-style? My knowledge of ancient philosophers is a little rusty.

Comment author: Tyrrell_McAllister 02 November 2014 06:21:41PM *  2 points

Does the argument over interpretations of QM also seem like just semantics to you?

For example, when Eliezer advocates for MWI over Copenhagen, is he mistaken in thinking that he is engaged in a substantive argument rather than a merely semantic one?

Comment author: Florian_Dietz 03 November 2014 08:56:00AM 2 points

No, the distinction between MWI and Copenhagen would have actual physical consequences. For instance, if you die in the Copenhagen interpretation, you die in real life. If you die in MWI, there is still a copy of you elsewhere that didn't die. MWI allows for quantum immortality.

The distinction between presentism and eternalism, as far as I can tell, does not imply any difference in the way the world works.
