Tyrin

> the numbers you extract will be badly inaccurate most of the time

As is the case with a myopic view of any Bayesian inference process that involves a lot of noise. The question is just whether rationality is about removing the noise, or whether it is about something else; whether "rationality is more than 'Bayesian updating'". I do not think we can answer this question very satisfyingly yet.

I tend to think what Cumming says is akin to saying something like: "Optimal evolution is not about adapting according to Bayes' rule, because look at just how complicated gene expression is! See, evolution works by stories encoded in G, A, C, and T, and most of them get passed on even though they do not immediately help the individual!"

Tyrin

I didn't mean 'similar'. I meant that it is equivalent to Bayesian updating with a lot of noise. The great thing about recursive Bayesian state estimation is that it can recover from noise by processing more data. In that sense noisy Bayes is subsumed by noise-free Bayes, which would make pure rationality essentially noise-free Bayesian updating. That contradicts the linked article's claim that rationality is somehow more than that.
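To make "recover from noise by processing more data" concrete, here is a minimal sketch of my own (not from the thread; all constants are made up): each coin flip is corrupted by a known amount of observation noise, yet a posterior that models the noise still concentrates on the true bias as data accumulates.

```python
# Minimal sketch of recursive Bayesian estimation recovering from noise.
# We estimate a coin's bias theta from flips that get inverted with known
# probability EPS; modelling the noise in the likelihood lets the posterior
# concentrate on the true bias anyway. All constants are hypothetical.
import random

random.seed(0)
TRUE_THETA = 0.7   # true coin bias
EPS = 0.2          # probability that an observation is flipped by noise

grid = [i / 100 for i in range(101)]        # discretised hypotheses for theta
posterior = [1 / len(grid)] * len(grid)     # uniform prior

def observe():
    """Draw one flip from the coin, then corrupt it with probability EPS."""
    x = 1 if random.random() < TRUE_THETA else 0
    return 1 - x if random.random() < EPS else x

for _ in range(5000):
    y = observe()
    # Likelihood of the *noisy* observation under each hypothesis theta:
    # P(y=1 | theta) = theta*(1-EPS) + (1-theta)*EPS.
    like = [th * (1 - EPS) + (1 - th) * EPS if y == 1
            else th * EPS + (1 - th) * (1 - EPS)
            for th in grid]
    posterior = [l * p for l, p in zip(like, posterior)]
    z = sum(posterior)
    posterior = [p / z for p in posterior]

mean = sum(th * p for th, p in zip(grid, posterior))
print(f"posterior mean after 5000 noisy flips: {mean:.3f}")  # close to 0.7
```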

> There is no plausible way in which the process by which this meme has propagated can be explained by Bayesian updating on truth value.

An approximate Bayesian algorithm can temporarily get stuck in local minima like that. Remember also that the underlying criterion for updating is not truth but reward maximization; it just happens to be the case that truth is extremely useful for reward maximization. Evolution did not manage to structure our species in a way that makes it obvious to us how to balance social, aesthetic, …, near-term, and long-term rewards into a really good overall policy for our modern lives (or really for any human life beyond multiplying our genes in small groups in the wilderness). Because of this, people get stuck all the time in conformity, envy, fear, etc., when there are actually ways of suppressing ancient reflexes and emotions to achieve much higher levels of overall and lasting happiness.
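As a toy illustration of the "stuck in local minima" point (my own sketch, with made-up numbers): gradient-based MAP estimation on a bimodal posterior climbs to whichever mode is nearest, even when the other mode carries almost all of the probability mass.

```python
# Toy illustration (assumed setup, not from the thread) of an approximate
# Bayesian procedure getting stuck: gradient-ascent MAP estimation on a
# bimodal posterior climbs to the nearest mode, even though the other mode
# carries far more probability mass.
import math

def posterior(x):
    """Unnormalised bimodal density: minor mode at 0, dominant mode at 6."""
    return 0.1 * math.exp(-0.5 * x ** 2) + 0.9 * math.exp(-0.5 * (x - 6) ** 2)

def grad_log_posterior(x, h=1e-5):
    """Numerical gradient of the log-density."""
    return (math.log(posterior(x + h)) - math.log(posterior(x - h))) / (2 * h)

x = -1.0                      # start in the basin of the minor mode
for _ in range(2000):
    x += 0.01 * grad_log_posterior(x)

print(f"MAP estimate found: {x:.3f}")  # settles near 0 (the minor mode), not 6
```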

Tyrin

Even if stories are selected for plausibility, truth, and whatever else leads most directly to maximal reward only once in a while, that would probably still be equivalent to Bayesian updating, just corrupted by an enormous amount of noise.

Natural selection is Bayesian updating too: http://math.ucr.edu/home/baez/information/information_geometry_8.html
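The core of the correspondence in that link can be stated compactly: discrete-time replicator dynamics reweights type frequencies by relative fitness, which is exactly the shape of Bayes' rule with fitness in the role of the likelihood (a compressed restatement; the post develops the continuous-time, information-geometric version).

```latex
% Replicator dynamics vs. Bayes' rule: fitness f(i) plays the role of
% the likelihood P(d | h), and mean fitness normalises like the evidence.
\[
  p_{t+1}(i) \;=\; \frac{f(i)\, p_t(i)}{\sum_j f(j)\, p_t(j)}
  \qquad\longleftrightarrow\qquad
  P(h \mid d) \;=\; \frac{P(d \mid h)\, P(h)}{\sum_{h'} P(d \mid h')\, P(h')}
\]
```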

Tyrin

But it is not at all clear why stories should not approximate Bayesian updating. Stories let us reach far into the parts of the world that cannot be mapped directly from sensory data, but stories also mutate and get forgotten based on how useful they are, which at least resembles Bayesian updating. The question is whether this kind of filtering throws the approximation off so far that it becomes a qualitatively different computation.
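One way to cash out "stories mutate and get forgotten based on how useful they are" (my analogy, not the commenter's): a population of stories behaves like a particle filter, with usefulness acting as the likelihood, forgetting as resampling, and retelling errors as jitter. A sketch with hypothetical numbers:

```python
# Sketch: a population of "stories" filtered like particles. Each story
# carries a guess about a hidden quantity; stories that predict observations
# well ("useful" ones) get retold, the rest are forgotten, and retelling
# adds small mutations. The population drifts toward the truth, i.e. it
# approximates Bayesian updating. All numbers are made up.
import math
import random

random.seed(1)
TRUTH = 3.0                                            # hidden quantity
N = 500                                                # circulating stories
stories = [random.uniform(-10, 10) for _ in range(N)]  # initial guesses

for _ in range(50):
    obs = TRUTH + random.gauss(0, 1.0)                 # noisy evidence
    # Usefulness ~ likelihood of the observation under each story.
    weights = [math.exp(-0.5 * (s - obs) ** 2) for s in stories]
    # Forgetting/retelling = resampling in proportion to usefulness.
    stories = random.choices(stories, weights=weights, k=N)
    # Mutation in retelling = small random jitter.
    stories = [s + random.gauss(0, 0.1) for s in stories]

print(f"mean story after filtering: {sum(stories) / N:.2f}")  # near 3.0
```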

Tyrin

> But if you consider only US' well being, things might be a net positive.

If the actions can be traced back as the cause of a whole lot of suffering, then a net positive outcome becomes less certain (for example because empathetic people revolt against those actions, because feelings of guilt harm education and innovation, or because professionals emigrate to metropolitan regions in Europe, Asia, etc.).

Tyrin

Isn't the idea more that the neural network just learns rough subgraphs of the underlying DAG that captures the causal structure up to quantum detail? Whole-part relationships are such subgraphs: a person being present causes a face to be present, which causes eyes to be present, etc.
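A tiny generative sketch of that whole-part subgraph (with illustrative probabilities of my own choosing): sampling through the chain person → face → eyes shows why a detector that has learned the subgraph gains information by propagating along it.

```python
# Toy causal DAG for the whole-part chain: person -> face -> eyes.
# The conditional probabilities are made up for illustration.
import random

def sample_scene():
    person = random.random() < 0.3                        # P(person)
    face = random.random() < (0.95 if person else 0.02)   # P(face | person)
    eyes = random.random() < (0.90 if face else 0.01)     # P(eyes | face)
    return person, face, eyes

random.seed(0)
scenes = [sample_scene() for _ in range(100_000)]

# Knowing the subgraph lets a detector invert it: seeing eyes raises
# P(person) far above its base rate.
p_person = sum(p for p, _, _ in scenes) / len(scenes)
with_eyes = [p for p, _, e in scenes if e]
print(f"P(person)        ~ {p_person:.2f}")                          # ~0.30
print(f"P(person | eyes) ~ {sum(with_eyes) / len(with_eyes):.2f}")   # ~0.93
```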

Tyrin

I had exactly the same insight as James_Miller a couple of days ago. Are you sure this is Grace's Doomsday argument? Her reasoning seems to run rather along the lines that we are more likely to be facing a late Great Filter (argued via SIA, which I'm not familiar with). The idea here is rather that for life to be likely to exist for a prolonged time, there has to be a late Great Filter (such as space travel being extremely difficult, or UFAI), because otherwise Paperclippers would quickly conquer all of space (at least in universes like ours, where every point in space can in principle be travelled to).