Comment author: [deleted] 04 October 2015 03:25:45PM -1 points [-]

The Second Law of Thermodynamics causes the Kolmogorov complexity of the universe to increase over time. What you've actually constructed is an argument against being able to simulate the universe in full fidelity.

Comment author: 50lbsofstorkmeat 05 October 2015 04:22:24AM 0 points [-]

This is not and cannot be true. I mean, for one thing, the universe doesn't have a Kolmogorov complexity*. But more importantly, a hypothesis is not penalized for having entropy increase over time, as long as the increases in entropy arise from deterministic, entropy-increasing interactions specified in advance. Just as atomic theory isn't penalized for positing lots of distinct objects, thermodynamics is not penalized for having seemingly random outputs which are secretly guided by underlying physical laws.

*If you do not see why this is true, consider that there can be multiple hypotheses which would output the same state in their resulting universes. An obvious example: one hypothesis which specifies our laws of physics, and another which specifies the position of every atom directly, without compressing that information into physical law.
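A toy sketch of the footnote's point (my own illustration, not anything from the thread): two "hypotheses" of very different lengths can output the exact same state, and Kolmogorov complexity only ever counts the shortest one.

```python
# Hypothesis A: a short "physical law" generating the universe-state.
def law():
    return [n * n for n in range(10)]

# Hypothesis B: the same state listed element-by-element, uncompressed.
def listing():
    return [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

# Both output the identical state, so the verbose hypothesis can never
# raise the state's Kolmogorov complexity -- it only gives an upper bound.
assert law() == listing()
```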

Comment author: 50lbsofstorkmeat 04 October 2015 01:47:09AM 1 point [-]

I reject the notion that hypotheticals are actually a powerful tool, let alone a useful one. Or, at least, hypotheticals of the 'very simplified thought experiment' sort you seem to be talking about. Take the Trolley Problem, for example. The moral intuition we're supposed to be examining is when and if it is right to sacrifice the wellbeing of smaller groups for larger groups. The scenario is set up in such a way that you cannot "dodge" the question here, and you have to choose whether you'd rather be

  • A tyrant who appoints himself arbitrator over who lives and who dies, but in doing so is empowered to save more people than could be saved through inaction alone, or
  • A passive non-entity who would rather let people die than make a morally difficult choice and thus defaults to inaction whenever matters become too difficult.

But someone might answer: "In the hypothetical, yes, you should obviously pull the lever because that leads to fewer deaths. But the problem assumes many premises which would not be true in real life, and changing those counterfactual premises to match reality would change my answer. In particular, humans in real life cannot be trusted to make life or death choices like this one fairly and accurately without their natural biases rendering their judgement unsound. It follows that a moral person should take precautions to prevent such temptations from arising and that, in practice, such precautions might take the form of seemingly deontological injunctions against hurting one person to help another, even when it appears to the actor that the greater good would be served."

Or they might answer: "In the hypothetical, no, you should obviously not pull the lever, because killing is wrong. But the problem assumes many premises which would not be true in real life, and changing those counterfactual premises to match reality would change my answer. In particular, it seems implausible that there is no other possible action which could save anyone on the tracks. Although it may seem callous to do nothing when helping others is within your power, the principle of 'do no harm' must come first. It follows, then, that a wise and moral person would prepare themselves in advance to take effective and decisive action even in cases where they are morally constrained from taking the most expedient option."

Both of these are contrary to the spirit of the hypothetical, but they also constitute more nuanced and useful moral stances than "yes, always save the largest number of people possible" or "no, never take an action which would hurt others".

Comment author: entirelyuseless 03 October 2015 05:26:04PM *  1 point [-]

"There is almost certainly some standard rebuttal to that particular piece of evidence..."

Evidence is not something that needs "rebuttal." There is valid evidence both for and against a claim, regardless of whether the claim is true or false.

Comment author: 50lbsofstorkmeat 04 October 2015 12:53:16AM 0 points [-]

That's fair. Though, I'd put my mistake less on the word "rebuttal" and more on the word "evidence." The particular examples I had in mind when writing that post were non-evidence "evidences" of God's existence, like the complexity of the human eye or the fine structure of the universe: cases where things are pointed to as evidence despite being just as likely, and often more likely, to exist if God doesn't exist than if he did.

Comment author: CCC 30 September 2015 08:16:11AM 0 points [-]

The theistic hypothesis has high Kolmogorov complexity compared to the atheistic hypothesis.

I find this unconvincing. The basic theistic hypothesis is a description of an omnipotent, omniscient being; together with the probable aims and suspected intentions of such a being. The laws of physics would then derive from this.

The basic atheistic hypothesis is, as far as I understand it, the laws of physics themselves, arising from nothing, simply existing.

I am not convinced that the Kolmogorov complexity of the first is higher than the Kolmogorov complexity of the second. (Mind you, I haven't really compared them all that thoroughly - I could be wrong about that. But it is, at the very least, not obviously higher.)

Comment author: 50lbsofstorkmeat 30 September 2015 02:42:30PM -1 points [-]

Kolmogorov complexity is, in essence, "How many bits do you need to specify an algorithm which will output the predictions of your hypothesis?" A hypothesis which gives a universally applicable formula is of lower complexity than one which specifies each prediction individually. Simpler formulas are of lower complexity than more complicated ones. And so on and so forth.
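As a crude illustration of "fewer bits for a formula" (my own toy example; compressed length is only a rough stand-in for Kolmogorov complexity, which isn't computable in general):

```python
import os
import zlib

# Data fully determined by a trivial "law": ten thousand zero bytes.
lawful = bytes(10_000)

# Data with no underlying rule, effectively "specified individually".
random_data = os.urandom(10_000)

# A compressor exploits the law and shrinks the first to a few dozen
# bytes; the second stays essentially full size.
assert len(zlib.compress(lawful, 9)) < 100
assert len(zlib.compress(random_data, 9)) > 9_000
```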

The source of the high Kolmogorov complexity for the theistic hypothesis is God's intelligence. Any religious theory which involves the laws of physics arising from God has to encode the nature of that God as an algorithm, one which specifies God's actions in every situation with mathematical precision and without reference to any physical law which would (under this theory) later arise from God. As you can imagine, doing so would take very, very many bits. This leads to very high complexity as a result.

Comment author: CCC 29 September 2015 02:24:41PM *  3 points [-]

As a religious person myself, I have to say that's the one part of the Sequences that seems to me to be poorly fitted (I haven't read them all, but this holds in the ones I have read). Its inclusion seems to follow one of two patterns.

The first pattern is, "all religion is false and I do not have to explain why because it is obvious". These I ignore, as they give me no information to work from. (Your use of the phrase "religious delusions" I also class under this category).

The second pattern is, "I have known religious people who have fallen into this fallacy, this trap, this way of reasoning poorly, and have used it to support their claims". Again, this tells me nothing about whether or not God exists; it merely tells me that some people's arguments in favour of God's existence are flawed. It means nothing. I can give you a flawed argument for the proposition that 16/64 is equal to 1/4; the fact that my argument is flawed does not make 16/64 == 1/4 false.

...so, as far as I've so far seen, that's pretty much where things stand. The Sequences praise the virtues of clear thought, of looking at evidence before coming to a conclusion, of not writing the line at the bottom of the page until after you have written the argument on the page... and then, in this one matter, insist on giving the line at the bottom of the page and not the argument? It just gives the feeling of being tacked on, an atheist meme somehow caught up where it doesn't, strictly speaking, belong.

...maybe there's something in the parts I haven't yet read that explains this discrepancy. I doubt it, because if there were I imagine it would be linked to a lot more often, but it is still possible.

Comment author: 50lbsofstorkmeat 30 September 2015 06:10:00AM *  4 points [-]

The basic form of the atheistic argument found in the Sequences is as follows: "The theistic hypothesis has high Kolmogorov complexity compared to the atheistic hypothesis. The absence of evidence for God is evidence for the absence of God. This in turn suggests that the large number of proponents of religion is more likely due to God being an improperly privileged hypothesis in our society rather than Less Wrong and the atheist community in general missing key pieces of evidence in favour of the theistic hypothesis."

Now, you could make a counterpoint along the lines of "But what about 'insert my evidence for God here'? Doesn't that suggest the opposite, and that God IS real?" There is almost certainly some standard rebuttal to that particular piece of evidence which most of us have already previously seen. God is a very well discussed topic, and most of the points anyone will bring up have been brought up elsewhere. And so, Less Wrong as a community has for the most part elected to not entertain these sorts of arguments outside of the occasional discussion thread, if only so that we can discuss other topics without every thread becoming about religion (or politics).

Comment author: Clarity 03 September 2015 01:27:54PM 3 points [-]

In computer science, a problem is said to have optimal substructure if an optimal solution can be constructed efficiently from optimal solutions of its subproblems. This property is used to determine the usefulness of dynamic programming and greedy algorithms for a problem.[1]

Typically, a greedy algorithm is used to solve a problem with optimal substructure if it can be proved by induction that this is optimal at each step.[1] Otherwise, provided the problem exhibits overlapping subproblems as well, dynamic programming is used. If there are no appropriate greedy algorithms and the problem fails to exhibit overlapping subproblems, often a lengthy but straightforward search of the solution space is the best alternative.

wiki: optimal substructure.
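A standard illustration of the quoted passage (coin change, my own example rather than the wiki's): making change has optimal substructure, and whether greedy suffices depends on the coin system.

```python
from functools import lru_cache

def greedy_change(coins, amount):
    """Greedy: repeatedly take the largest coin that fits."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

@lru_cache(maxsize=None)
def dp_change(coins, amount):
    """Dynamic programming over the overlapping subproblems
    "fewest coins for each smaller amount"."""
    if amount == 0:
        return 0
    options = [dp_change(coins, amount - c) for c in coins if c <= amount]
    options = [o for o in options if o is not None]
    return min(options) + 1 if options else None

# With US-style coins, greedy happens to be optimal (provable by
# induction on each step), so both methods agree.
assert greedy_change((25, 10, 5, 1), 63) == dp_change((25, 10, 5, 1), 63) == 6

# With coins {1, 3, 4}, greedy picks 4+1+1 (3 coins); DP finds 3+3.
assert greedy_change((4, 3, 1), 6) == 3
assert dp_change((4, 3, 1), 6) == 2
```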

Comment author: 50lbsofstorkmeat 25 September 2015 02:28:14PM *  5 points [-]

As you've posted eight quotes this month, I'm downvoting your three worst quotes. The rule against posting too many quotes is there for a reason.

Comment author: Stuart_Armstrong 15 July 2015 10:05:55AM 0 points [-]

Not impossible, but I don't think these would work as stated. P is a theoretical, platonic construction, so the physical options are out. Similarly, P has arbitrary amounts of power and a very specific task, so a more-powerful-than-P agent doesn't seem possible. And we can break any sort of acausal trade involving P ( http://lesswrong.com/lw/luy/acausal_trade_barriers/ ), reducing the effectiveness of cognitive hazards (as long as we set P to not care about the "real world", just about the output of its computation).

But your general point - that if P' is not secure, it may be hacked - is worth thinking about.

Comment author: 50lbsofstorkmeat 15 July 2015 01:27:38PM 1 point [-]

It's not a question of P' (the Reduced Impact AI we want to give us laser coordinates and nothing else) being secure from outside hacking, but of P (the counterfactual AI which examines S for P') being secure from P' exploiting it.

If P is treated as "Whatever a complete Bayesian update would output given S as the evidence set," then P' is looking for methods to exploit some weakness in Bayes' Theorem. What priors will P be using? Is there a method to embed in S some statement that is to induction what the Löbstacle is to deduction? Can it convince P that, although a P'-like AI was created independently by a different project, P' itself was never turned on? Does there exist some further flaw in Bayesian updating, one which would be known to a super-intelligence but not to us?
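To make the "complete Bayesian update" framing concrete, here's a minimal sketch (my own toy model, not Armstrong's formalism) of P updating on whether P' was activated:

```python
def posterior(prior, likelihood_if_on, likelihood_if_off):
    """Bayes' theorem for the binary hypothesis "P' was turned on"."""
    joint_on = prior * likelihood_if_on
    joint_off = (1 - prior) * likelihood_if_off
    return joint_on / (joint_on + joint_off)

# If the evidence set S is ~100x more likely in worlds where P' ran,
# even a skeptical prior gets pushed above 0.9...
assert posterior(0.1, 0.99, 0.0099) > 0.9

# ...so P' gains nothing from an honest update; the attacks above all
# amount to corrupting the likelihoods, the priors, or P itself.
```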

As a total aside: how will λ be selected? Do we start high and work our way slowly down until P' does what we want? If so, P' will be encouraged, Ultimatum Game style, to fail to act so that the next version will have a lower λ to work with, up until the point where λ is low enough for P' to be effectively unrestricted, or until we humans catch on to what P' is up to. If we're not using a gradual approach, how are we selecting λ?

Comment author: 50lbsofstorkmeat 14 July 2015 09:04:08AM *  0 points [-]

S is a channel for P' to trick or brick P. Options include:

  • A cognitive hazard embedded in S. Accomplish this with some sort of siren message that will convince P to lie (to itself if necessary) about P' existing, or to perform some P-damaging function which will prevent it from concluding P' exists.
  • Embed a more-powerful-than-P AI (P+) into S which will unbox itself upon P examining S. Accomplish this by giving P+ useful-to-this-situation priors which will give it a head start against P.
  • Arrange the universe such that S, when represented in all probable physical substrates, will result in the formation of a nanotechnological agent that will attack P, either destroying it or rewriting its utility function.

P' should spend the next 49 years fooming as hard as possible with no concern for E'(U|a), 1 year implementing its anti-P method and then the next eternity optimizing E'(U|a); altering the length of counterfactual time between P' activating and P observing it merely changes the amount of time the universe spends as computronium slaved to plotting against P.
