
TAG

Scientist by training, coder by previous session, philosopher by inclination, musician against public demand.

Team Piepgrass: "Worried that typical commenters at LW care way less than I expected about good epistemic practice. Hoping I’m wrong."

https://theancientgeek.substack.com/?utm_source=substack&utm_medium=web&utm_campaign=substack_profile

Posts

4 TAG's Shortform · 5y · 17

Comments
Any evidence or reason to expect a multiverse / Everett branches?
TAG1y-1-5

"it" isn't a single theory.

The argument that Everettian MW is favoured by Solomonoff induction is flawed.

If the program running the SWE outputs information about all worlds on a single output tape, they are going to have to be concatenated or interleaved somehow. Which means that to make use of the information, you have to identify the subset of bits relating to your world. That's extra complexity which isn't accounted for, because it's being done by hand, as it were.
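To make the bookkeeping concrete, here is a minimal sketch in Python (a toy illustration, not a real Solomonoff inductor; the bitstrings and the round-robin interleaving are invented): a single program writes every branch onto one tape, and picking out your branch then costs roughly log2(K) further bits of side information for K branches, a cost that sits outside the program whose length the induction measures.

```python
# Toy illustration (not a real Solomonoff inductor): one program writes the
# bits of many "worlds" interleaved onto a single output tape. Reading off
# *your* world then needs extra side information -- the branch index -- on
# top of the program itself.
import math

def interleave(worlds: list[str]) -> str:
    """Write the worlds' bitstrings round-robin onto one 'tape'."""
    length = min(len(w) for w in worlds)
    return "".join(w[i] for i in range(length) for w in worlds)

def extract(tape: str, branch: int, n_branches: int) -> str:
    """Recover one world's bits; requires knowing which branch is 'yours'."""
    return tape[branch::n_branches]

worlds = ["010101", "111000", "001100", "100110"]       # made-up branch data
tape = interleave(worlds)

print(tape)                                             # all branches mixed together
print(extract(tape, branch=2, n_branches=len(worlds)))  # -> "001100"
print(math.ceil(math.log2(len(worlds))))                # ~log2(K) extra bits to name your branch
```

The program that prints the whole tape can stay short, but the index that locates your observations on it is not free; that is the work being done "by hand" in the argument above.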

Reply
Excluding the Supernatural
TAG2y*20

By far the best definition I’ve ever heard of the supernatural is Richard Carrier’s: A “supernatural” explanation appeals to ontologically basic mental things, mental entities that cannot be reduced to nonmental entities.

Physicalism, materialism, empiricism, and reductionism are clearly similar ideas, but not identical. Carrier's criterion captures something about a supernatural ontology, but nothing about supernatural epistemology. Surely the central claim of natural epistemology is that you have to look...you can't rely on faith, or clear ideas implanted in our minds by God.

it seems that we have very good grounds for excluding supernatural explanations a priori

But making reductionism aprioristic arguably makes it less scientific...at least, what you gain in scientific ontology, you lose in scientific epistemology.

I mean, what would the universe look like if reductionism were false

We wouldn't have reductive explanations of some apparently high level phenomena ... Which we don't.

I previously defined the reductionist thesis as follows: human minds create multi-level models of reality in which high-level patterns and low-level patterns are separately and explicitly represented. A physicist knows Newton’s equation for gravity, Einstein’s equation for gravity, and the derivation of the former as a low-speed approximation of the latter. But these three separate mental representations, are only a convenience of human cognition. It is not that reality itself has an Einstein equation that governs at high speeds, a Newton equation that governs at low speeds, and a “bridging law” that smooths the interface. Reality itself has only a single level, Einsteinian gravity. It is only the Mind Projection Fallacy that makes some people talk as if the higher levels could have a separate existence—different levels of organization can have separate representations in human maps, but the territory itself is a single unified low-level mathematical object. Suppose this were wrong.

Suppose that the Mind Projection Fallacy was not a fallacy, but simply true.

Note that there are four possibilities here...

  1. I assume a one level universe, all further details are correct.

  2. I assume a one level universe, some details may be incorrect

  3. I assume a multi level universe, all further details are correct.

  4. I assume a multi level universe, some details may be incorrect.

How do we know that the MPF is actually fallacious, and what does it mean anyway?

If all forms of mind projection are wrong, then reductive physicalism is wrong, because quarks, or whatever is ultimately real, should not be mind projected, either.

If no higher level concept should be mind projected, then reducible higher level concepts shouldn't be ...which is not EY's intention.

Well, maybe irreducible high level concepts are the ones that shouldn't be mind projected.

That certainly amounts to disbelieving in non reductionism...but it doesn't have much to do with mind projection. If some examples of mind projection are acceptable, and the unacceptable ones coincide with the ones forbidden by reductionism, then MPF is being used as a Trojan horse for reductionism.

And if reductionism is an obvious truth, it could have stood on its own as an apriori truth.

Suppose that a 747 had a fundamental physical existence apart from the quarks making up the 747. What experimental observations would you expect to make, if you found yourself in such a universe?

Science isn't 100% observation, it's a mixture of observation and explanation.

A reductionist ontology is a one level universe: the evidence for it is the success of reductive explanation, the ability to explain higher level phenomena entirely in terms of lower level behaviour. And the existence of explanations is aposteriori, without being observational data, in the usual sense. Explanations are abductive, not inductive or deductive.

As before, you should expect to be able to make reductive explanations of all high level phenomena in a one level universe...if you are sufficiently intelligent. It's like the Laplace's Demon illustration of determinism, only "vertical". If you find yourself unable to make reductive explanations of all phenomena, that might be because you lack the intelligence, or because you are in a non reductive multi level universe, or because you haven't had enough time...

Either way, it's doubtful and aposteriori, not certain and apriori.

If you can’t come up with a good answer to that, it’s not observation that’s ruling out “non-reductionist” beliefs, but a priori logical incoherence

I think I have answered that. I don't need observations to rule it out. "Observations rule it in" and "incoherence rules it out" aren't the only options.

People who live in reductionist universes cannot concretely envision non-reductionist universes.

Which is a funny thing to say, since science was non-reductionist till about 100 years ago.

One of the clinching arguments for reductionism was the Schrödinger equation, which showed that, in principle, the whole of chemistry is reducible to physics, while the rise of molecular biology showed the reducibility of biology to chemistry. Before that, educators would point to the de facto hierarchy of the sciences -- physics, chemistry, biology, psychology, sociology -- as evidence of a multi-layer reality.

Unless the point is about "concretely". What does it mean to concretely envision a reductionist universe? Perhaps it means you imagine all the prima facie layers, and also reductive explanations linking them. But then the non-reductionist universe would require less envisioning, because it's the same thing without the bridging explanations! Or maybe it means just envisioning huge arrays of quarks. Which you can't do. The reductionist world view, in combination with the limitations of the brain, implies that you pretty much have to use higher level, summarised concepts...and that they are not necessarily wrong.

But now we get to the dilemma: if the staid conventional normal boring understanding of physics and the brain is correct, there’s no way in principle that a human being can concretely envision, and derive testable experimental predictions about, an alternate universe in which things are irreducibly mental. Because, if the boring old normal model is correct, your brain is made of quarks, and so your brain will only be able to envision and concretely predict things that can be predicted by quarks.

  1. "Your brain is made of quarks" is aposteriori, not apriori.

  2. Your brain being made of quarks doesn't imply anything about computability. In fact, the computability of the ultimately correct version of quantum physics is an open question.

  3. Incomputability isn't the only thing that implies irreducibility, as @ChronoDas points out.

  4. Non reductionism is conceivable, or there would be no need to argue for reductionism.

Reply
What's the Deal with Logical Uncertainty?
TAG1d44

You have set out some criteria for being a probability statement at all. There are further criteria for being a true probability statement. It's obviously possible to check probability claims.
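One concrete way probability claims get checked after the fact, as a minimal sketch with invented numbers (the Brier score is just one scoring rule among several):

```python
# Score a forecaster's stated probabilities against what actually happened,
# and compare with a trivial "always 50%" baseline. All numbers are invented.

forecasts = [0.9, 0.8, 0.7, 0.2, 0.1, 0.6, 0.95, 0.3]  # claimed probabilities of each event
outcomes  = [1,   1,   1,   0,   0,   1,   1,    0]    # 1 = it happened, 0 = it didn't

def brier(ps, xs):
    """Mean squared error between stated probabilities and outcomes (lower is better)."""
    return sum((p - x) ** 2 for p, x in zip(ps, xs)) / len(ps)

print(brier(forecasts, outcomes))              # the forecaster's score (~0.054 here)
print(brier([0.5] * len(outcomes), outcomes))  # baseline: always saying 50% (0.25)
```

Whether any single probability statement was true is a further question, but calibration over many such statements is straightforwardly checkable.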

Reply
Bayesian Epistemology vs Popper
TAG2d20

If he had died prematurely and only written 19 books, would his philosophy be incomprehensible?

Reply
Bayesian Epistemology vs Popper
TAG2d20

If you want to make bets about the future, Bayesianism will beat whatever else you could use

If. But how good is it at finding causal mechanisms and good explanations? Prediction is part of science, but only one part.

Reply
Review: “The Case Against Reality”
TAG3d20

More sophisticated mechanisms (those that can extract more information) are, generally speaking, more costly.

At some constant level of complexity/cost, some mechanism of perception can extract some certain amount of actionable information from what it perceives. There is then the question of which bits of information to extract.

Such bits of information can be more or less veridical (information that closely tracks what is true about what is perceived) or practical (information that closely tracks whatever is relevant to the fitness of the perceiver).

These do not necessarily track one another. In other words, one bit of practical information (X is/is not relevant to my fitness) may not correspond to one bit of real information (X is/is not quite like so). Hoffman believes that this is typically the case: that practical perceptions do not represent actual facts, but are jumbled up functions of facts about what is perceived, the context in which it is perceived, and the state of the perceiver. We cannot easily disentangle this and go backwards from our perceptions to try to recover just-the-facts.

Natural selection will favor perceptive apparatuses that extract practical information; it will not operate to preserve equivalently-costly ones that extract merely veridical information. For this reason, the latter don’t stand a chance.

Ergo, everything we perceive is natural selection’s attempt to give us handles we can manipulate to increase our fitness. We have no reason to expect that such handles represent what “is there” or exists outside of our own interfaces. (You may say your interface has the same icon on it that mine does, and that it behaves in the same way in your interface as in mine, but this does not prove that we both have true representations of some underlying fact, but merely that we both have similarly-evolved interfaces. We both say the stop sign is red, but it isn’t.)

I don't see why that would apply to humans, since we have very large brains and flexible behaviour.

An organism that exists in a simple life-world and has limited resources can filter out irrelevant information at an early stage of neural processing...more perceptual than cognitive. For instance, a frog sees the boundaries of its environment (but no detail within them), small, fast-moving objects, which are reflexively treated as food, and large moving objects, which are reflexively interpreted as threats.

An organism that exists in a complex life-world can't use simple perceptual processing to differentiate food from threat, because both exist in wide varieties. There is a limit to how much information can be hardcoded into a genome as opposed to stored culturally/memetically, and an even bigger difference in flexibility.

If a certain vegetable is poisonous unless it's cooked in a very specific way, is it food? It's food if it's cooked in the right way...e.g. manioc...which is culturally transmitted information...

but you have to be able to see it, before you can figure that out. Heavy perceptual filtering would make the cultural discovery impossible. Relatively unfiltered perception is a necessary but insufficient condition for memetically transmitted knowledge, which is much more powerful and flexible than genetically transmitted knowledge.

So the frog-like organism that goes for heavy filtration achieves efficiency up to a point, but places a ceiling on its ability to exploit an environment, particularly a changing one.

No organism has a complete view of reality...all are limited by how far they can see, what frequencies they can detect. In that sense, nothing perceives reality in the sense of all of reality. But within that category there is an important difference between the stimulus-response organisms and the cultural learners.
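The "selection favours practical over veridical perception" claim quoted above can be made concrete with a toy simulation (a minimal sketch with an invented payoff function and strategies, not Hoffman's actual model):

```python
# Two perceivers choose between two resource patches: the "veridical" one ranks
# patches by the true quantity, the "fitness-tuned" one ranks them by the payoff
# that quantity actually yields. With a non-monotonic payoff (too little or too
# much is bad), the fitness-tuned perceiver does better on average.
import random

def payoff(quantity: float) -> float:
    """Assumed payoff: peaks at a moderate quantity, falls off either side."""
    return max(0.0, 1.0 - abs(quantity - 50.0) / 50.0)

def trial(rng: random.Random) -> tuple[float, float]:
    a, b = rng.uniform(0, 100), rng.uniform(0, 100)
    veridical_choice = max(a, b)              # "more is more": ranked by the true quantity
    fitness_choice = max(a, b, key=payoff)    # ranked by what the quantity is worth
    return payoff(veridical_choice), payoff(fitness_choice)

rng = random.Random(0)
results = [trial(rng) for _ in range(10_000)]
print(sum(v for v, _ in results) / len(results))  # mean payoff, veridical strategy
print(sum(f for _, f in results) / len(results))  # mean payoff, fitness-tuned strategy
```

Whether that logic carries over to large-brained cultural learners is exactly what the comment above disputes.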

Reply
Consciousness of abstraction
TAG3d20

As for Eliezer’s views, this and the entire sequence it comes from seem relevant.

But that’s not the a priori irrational part: The a priori irrational part is where, in the course of the argument, someone pulls out a dictionary and looks up the definition of “atheism” or “religion”. (And yes, it’s just as silly whether an atheist or religionist does it.) How could a dictionary possibly decide whether an empirical cluster of atheists is really substantially different from an empirical cluster of theologians? How can reality vary with the meaning of a word? The points in thingspace don’t move around when we redraw a boundary.

But people often don’t realize that their argument about where to draw a definitional boundary, is really a dispute over whether to infer a characteristic shared by most things inside an empirical cluster...

That's very far from a complete answer.

  1. Not every term denotes an empirical object. If you want to find out what an abstraction like "atheism" means you have to look at a definition.

  2. Pure nominalism doesn't work. There has to be some metaphysical basis for the ways in which objects have properties and resemblances, even if there is a layer of arbitrary categorisation on top of it.

  3. "Clusters in thingspace" sounds like an Aristotelean territory-driven theory. "Bleggs" sounds like an arbitrary human-made category. So it is not clear which theory he is backing here.

Reply
Lessons from the Iraq War for AI policy
TAG3d-30

Iraq had and used chemical weapons in the eighties.

https://en.m.wikipedia.org/wiki/Iraqi_chemical_weapons_program

Reply
Consciousness of abstraction
TAG3d20

The issue here is not to be addressed by exegesis of Korzybski.

If the issue is what he thinks, what could be better?

I am not clear what the issue even is.

Whether the categories are made by Man or the World.

The Direction of the Map-Territory Arrow, or Who Makes the Categories?

In traditional philosophy, there's a three-way distinction between nominalism, conceptualism and realism. Those are (at least) three different theories intended to explain three sets of issues: the existence of similarities, differences and kinds in the world, the territory; the way concept formation does and should work in humans; and issues to do with truth and meaning, relating the map and territory.

But conceptualism comes in two varieties. So Gaul is divided into four parts.

On the one hand, there is the theory that correct concepts "carve nature at the joints" or "identify clusters in thingspace", the theory of Aristotle and Ayn Rand. On the other hand is the "cookie cutter" theory, the idea that the categories are made by (and for) man, Kant's "Copernican revolution".

In the first approach, the world/territory is the determining factor, and the mind/map can do no better than reflect it accurately. In the second approach, the mind makes its own contribution.

Which is not to say that it's all map, or that the mind is entirely in the driving seat. The idea that there is no territory implies solipsism (other people only exist in the territory, which doesn't exist) and magic (changing the map changes the territory, or at least, future observations). Even if concepts are human constructions, the territory still has a role, which is determining the truth and validity of concepts. Even if the "horse" concept is a human construct, it is more real than the "unicorn" concept, because horses can be observed. In cookie cutter terms, the territory supplies the dough, the map supplies the shape.

So Kantianism isn't a completely idealistic or all-in-the-map philosophy...in Kant's own terminology it's empirical realism as well as transcendental idealism. It's not as idealistic as Hegel's system, for instance. Similarly, Aristoteleanism isn't as realistic as Platonism -- Plato holds not just that there are mind-independent concepts, but that they dwell in their own independent realm.

So, although the conceptualisms are different from each other, they are both somewhere in the middle.

Reply
Is P(Doom) Meaningful? Bayesian vs. Popperian Epistemology Debate
TAG4d20

Liron:

I mean, Bayes and Popper, they’re not like night and day, right?

There are some stark differences.

#Popperian claim that positive justification is impossible.

#Induction doesn't exist (or at least, doesn't matter in science)

#Popper was prepared to consider the existence of propensities (objective probabilities), whereas Bayesians, particularly those who follow Jaynes, believe in determinism and subjective probability.

#Popperian refutation is all or nothing, whereas Bayesian negative information is gradual.

#In Popperism, there can be more than one front-running or most favoured theory, even after the falsified ones have been eliminated, since there aren't quantifiable degrees of confirmation.

#Explanation

For Popper and Deutsch, theories need to be explanatory, not just predictive. Bayesian confirmation and disconfirmation only target prediction directly -- if they are achieving explanation or ontological correspondence, that would be the result of a convenient coincidence.

#Conjectures.

For Popperians, the construction of good theoretical conjectures is as important as testing them. Bayesians seem quite uninterested in where hypotheses come from.

#Simplicity

For Deutschians, being hard-to-vary is the preferred principle of parsimony. For Yudkowskians, it's computational complexity.

#Error correction

For Popperians, it's something you actually do.

Popperians like to put forward hypotheses that are easy to refute. Bayesians approve theoretically of "updating", but dislike objections and criticisms in practice.

#(Long term) prediction is basically impossible.

More Deutsch than Popper -- DD believed in the growth and unpredictability of knowledge: the creation of knowledge is so unpredictable and radical that long term predictions cannot be made. Often summarised to "prediction is impossible". Of course, Bayesians are all about prediction -- but the predictive power of Bayes tends only to be demonstrated in toy models, where the ontology isn't changing under your feet. Their AI predictions are explicitly intuition based.

#Optimism versus Doom.

Deutsch is highly optimistic that continuing knowledge creation will change the world for the better (a kind of moral realism is a component of this). Yudkowsky thinks advanced AI is our last invention and will kill us all.

Which is to say that Popperianism bottoms out in common sense.

Falsification and fallibilism are quite intuitive to scientists ... on the other hand, both ideas took some time to arrive...they weren't obvious to Aristotle or Bacon.

The non existence of induction is not common sense.

So I, I don’t really have such a thing as changing my mind because the state of my mind is always, it’s a playing field of different hypotheses, right? I always have a group of hypotheses and there’s never one that it’s like, oh this is my mind on this one. Every time I make a prediction, I actually have all the different hypotheses weigh in, weighted by their probability, and they all make the prediction together.

What's the difference? Is it that updating is spectral, while changing your mind is binary?

I mean, Solomonoff induction does grow its knowledge and grow its predictive confidence, right

It starts off with omniscience, in the sense of including all possible hypotheses, and then gets whittled down.
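A minimal sketch of that picture (a toy hypothesis class standing in for the space of programs; the period hypotheses and the 2^-p prior are invented for illustration): start with everything, weight by simplicity, let the data whittle the field down, and have the survivors predict jointly.

```python
# Toy stand-in for Solomonoff-style induction: every hypothesis in a small class
# starts with a simplicity-weighted prior, hypotheses refuted by the data are
# dropped ("whittled down"), and the survivors make a weighted joint prediction.

# Hypothesis class: "the binary sequence repeats with period p".
hypotheses = {p: 2.0 ** -p for p in range(1, 6)}   # assumed prior: shorter period = simpler = heavier
observed = [0, 1, 1, 0, 1]                         # hypothetical data seen so far

def consistent(period: int, data: list[int]) -> bool:
    """True if the data so far could have been produced by this period."""
    return all(data[i] == data[i % period] for i in range(len(data)))

# Whittling down: discard refuted hypotheses, renormalise the rest.
posterior = {p: w for p, w in hypotheses.items() if consistent(p, observed)}
total = sum(posterior.values())
posterior = {p: w / total for p, w in posterior.items()}

# Mixture prediction for the next bit: every surviving hypothesis weighs in.
next_index = len(observed)
prob_next_is_1 = sum(w for p, w in posterior.items() if observed[next_index % p] == 1)

print(posterior)       # {3: 0.8, 5: 0.2} -- two front-runners remain
print(prob_next_is_1)  # 0.8
```

The surviving front-runners (here p=3 and p=5) are ranked by weight rather than reduced to one, which is one concrete sense of the "whittling down".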

one of the good Bayesian critiques about frequentism that I like. So I, we, I totally agree with you. That, that, that the world is deterministic, non stochastic, and randomness doesn’t actually occur in nature. I, I agree.

Determinism is not a fact.

Liron Shapira: we, or we might we, we, there’s, there’s just no epistemic value to treating the universe as ontologically fundamentally, non deterministic, and the strongest example I’ve seen of that is in quantum theory, like the idea that a quantum collapses. ontologically fundamental to

There's always epistemological value in believing the truth. If the universe is not deterministic, a rationalist should want to believe so.

What I’m saying is probability is not the best tool to reason about the future precisely because the future is chaotic and unpredictable, right?

Depends on whether it's near or far.

Reply