
Comment author: eli_sennesh 22 August 2015 08:37:48PM 0 points [-]

Of course, you could decide it is false but lie about it, but people have a hard time doing that.

It's really not that hard, especially in countries with institutionalized religions. Just keep going to mosque, saying the prayers, obeying the norms, and you've got everything most believers actually do, minus the belief.

Comment author: V_V 22 August 2015 10:12:49PM 3 points [-]

But lying your entire life, even to your children (you can't risk teaching them anything other than the official truth), can be mentally exhausting.

Add the fact that core religious ideas seem intuitively appealing to most people, to the point that even ostensibly atheist people often end up believing in variants of them, and you get why religion is so popular.

Comment author: MattG 21 August 2015 09:25:23PM 4 points [-]

He's explaining the process of compartmentalization. I suspect if he had to bet on it as the background of a scientific fact, he would choose option A, but if he were discussing with a Rabbi, he would choose option B... he's really just choosing which compartment of belief to draw from.

Comment author: V_V 22 August 2015 10:03:43PM 6 points [-]

He agreed to disagree with himself. :)

Comment author: iarwain1 21 August 2015 09:10:50PM 4 points [-]

So can you please explain what he means? I really don't understand in what sense it can be said that "the world is 15 billion years old" and "the world was created by God in six days" can both be literally true. And it doesn't sound like he means the Omphalos argument that the world was created looking old. Rather, it sounds like he's saying that in one sense of "truth" or in one "model of the world" it really is 15 billion years old, and in another sense / model it really is young, and those two truths / models are somehow not contradictory. I just can't seem to wrap my head around how that might make any sense.

Comment author: V_V 22 August 2015 09:39:39PM *  1 point [-]

The sentence "Frodo carried the One Ring to Mount Doom" is not literally true, but it is true within the fictional narrative of the Lord of the Rings. You can simultaneously believe it and not believe it, in a certain sense, by applying the so called "suspension of disbelief", a mental mechanism which probably evolved to allow us to consider hypothetical conterfactual beliefs for decision making and which we then started using to make fiction.

I think that theists like Robert Aumann who support the non-overlapping magisteria position are doing something similar: they accept "the world is 15 billion years old" as an epistemic "Bayesian" belief which they use when considering expectations over observations, and they apply suspension of disbelief in order to believe "the world was created by God in six days" in the counterfactual context of religion.

Comment author: Lumifer 22 August 2015 07:53:21PM 0 points [-]

There seems to be this idea on LW that the probability of it not being a scam can only decrease with the Kolmogorov complexity of the offer.

I can't come up with any reasons why that should be so.

Comment author: V_V 22 August 2015 09:18:19PM 1 point [-]

I suppose that people who talk about Kolmogorov complexity in this setting are thinking of AIXI or some similar decision procedure.
Too bad that AIXI doesn't work with unbounded utility, as expectations may diverge or become undefined.
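
To make the divergence concrete, here is a minimal sketch (my own illustration, not anything from AIXI's definition): a St. Petersburg-style lottery in which outcome k has probability 2^-k and utility 2^k. Every term of the expectation contributes 1, so the partial sums grow without bound and the expected utility has no finite value.

```python
# Minimal sketch (illustrative only): the expected utility of a St. Petersburg-style
# lottery with unbounded payoffs. Outcome k has probability 2**-k and utility 2**k,
# so each term contributes 1 and the partial sums diverge.

def partial_expected_utility(n_terms: int) -> float:
    """Sum p(k) * u(k) over the first n_terms outcomes."""
    return sum((0.5 ** k) * (2.0 ** k) for k in range(1, n_terms + 1))

for n in (10, 100, 1000):
    print(n, partial_expected_utility(n))  # prints 10.0, 100.0, 1000.0: no convergence
```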

Comment author: Stuart_Armstrong 14 August 2015 02:51:55PM 1 point [-]

In theory, changing the exploration rate and changing the prior are equivalent. I think that it might be easier to decide upon an exploration rate that gives a good result for generic priors, than to be sure that generic priors have good exploration rates. But this is just an impression.

Comment author: V_V 17 August 2015 10:26:26AM 2 points [-]

In theory, changing the exploration rate and changing the prior are equivalent.

Not really. Standard AIXI is completely deterministic, while the usual exploration strategies for reinforcement learning, such as ɛ-greedy and soft-max, are stochastic.
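
For readers who haven't seen these exploration strategies, here is a minimal sketch of ɛ-greedy and softmax action selection over estimated action values (my own illustration; the function names and the q_values parameter are just for exposition). Both rules sample actions at random, in contrast to a deterministic argmax rule.

```python
import numpy as np

def epsilon_greedy(q_values, epsilon=0.1, rng=None):
    """With probability epsilon pick a uniformly random action,
    otherwise pick the action with the highest estimated value."""
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def softmax_action(q_values, temperature=1.0, rng=None):
    """Sample an action with probability proportional to exp(Q / temperature)."""
    rng = rng or np.random.default_rng()
    prefs = np.asarray(q_values, dtype=float) / temperature
    prefs -= prefs.max()                       # subtract max for numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return int(rng.choice(len(q_values), p=probs))

q = [0.2, 0.5, 0.1]
print(epsilon_greedy(q), softmax_action(q))    # stochastic: repeated calls can differ
```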

Comment author: Clarity 15 August 2015 11:37:16PM -2 points [-]

Here's hoping that Australian soylent (Aussie soylent I think it was) is alright, as with all the other DIY soylents...

Comment author: V_V 17 August 2015 10:12:42AM *  2 points [-]

If I understand correctly, the main source of these heavy metals is the brown rice protein, so anything containing it may potentially have the same issue as Soylent.

A bit of googling turns up recent concerns about heavy-metal contamination in rice grown in various Asian countries and then traded worldwide.

Comment author: Wei_Dai 12 August 2015 12:20:07AM *  1 point [-]

An exploring agent (if it survives) will converge on the right environment, independent of language.

But it seems like such an agent could only survive in an environment where it literally can't die, i.e., where there is nothing it can do that could possibly cause death. In order to converge on the right environment, independent of language, it has to try all possible courses of action as time goes to infinity, and eventually it will do something that kills it.

What value (either practical or philosophical, as opposed to purely mathematical), if any, do you see in this result, or in the result about episodic environments?

Comment author: V_V 12 August 2015 09:02:16AM *  1 point [-]

What value (either practical or philosophical, as opposed to purely mathematical), if any, do you see in this result, or in the result about episodic environments?

There are plenty of applications of reinforcement learning where it is plausible to assume that the environment is ergodic (that is, the agent can't "die" or fall into traps that permanently result in low rewards) or episodic. The Google DQN Atari game agent, for instance, operates in an episodic environment, so stochastic action selection is acceptable.
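
As a concrete illustration of why episodic settings are forgiving (my own sketch; the env interface with reset()/step()/actions is hypothetical, not Google's actual DQN code): every episode terminates and the environment resets, so a random exploratory action can at worst spoil one episode rather than trap the agent forever.

```python
import random

def run_episodes(env, policy, n_episodes=100, epsilon=0.05):
    """Hypothetical episodic interaction loop. 'env' is assumed to expose
    reset() -> state, step(action) -> (state, reward, done), and a list of
    legal actions in env.actions. Because each episode terminates and the
    environment resets, a random exploratory action can at worst ruin one
    episode's return, never the agent's whole future."""
    returns = []
    for _ in range(n_episodes):
        state, done, total = env.reset(), False, 0.0
        while not done:
            if random.random() < epsilon:      # occasional random exploration
                action = random.choice(env.actions)
            else:
                action = policy(state)         # greedy action from the learned policy
            state, reward, done = env.step(action)
            total += reward
        returns.append(total)
    return returns
```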

Of course, this is not suitable for an AGI operating in an unconstrained physical environment.

Comment author: skeptical_lurker 11 August 2015 07:19:17PM 0 points [-]

Hmm, well you can't choose the laws of physics adversarially, so I think this would only be a problem in a pure virtual environment.

Comment author: V_V 11 August 2015 07:36:24PM 0 points [-]

The laws of physics may allow for adversaries that try to manipulate you.

Comment author: skeptical_lurker 10 August 2015 07:43:55PM *  0 points [-]

See here for approaches that can deal with the AIXI existence issue:

I can't read past the abstract, but I'd find this more reassuring if it didn't require Turing oracles.

It seems that "just pick a random language (eg C++), without adding any specific weirdness" should work to avoid the problem - but we just don't know at this point.

My understanding is that functional languages have properties which would be useful for this sort of thing. But anyway, I agree: my instincts are that while this problem might exist, you would only actually run into it if using a language specifically designed to create this problem.

Comment author: V_V 11 August 2015 06:25:45PM 0 points [-]

my instincts are that while this problem might exist, you would only actually run into it if using a language specifically designed to create this problem.

I think it's actually worse. If I understand correctly, corollary 14 implies that for any choice of the programming language, there exist some mixtures of environments which exhibit that problem. This means that if the environment is chosen adversarially, even by a computable adversary, AIXI is screwed.

Comment author: V_V 11 August 2015 12:45:53PM 13 points [-]

That's a very nice paper; kudos to Hutter for having the intellectual honesty to publish a result that largely undermines 15 years of his work.
