All of TAG's Comments + Replies

TAG*20

Again, those are theories of consciousness, not definitions of consciousness.

I would agree that people who use consciousness to denote the computational process vs. the fundamental aspect generally have different theories of consciousness, but they’re also using the term to denote two different things.

But that doesn't imply that they disagree about (all of) the meaning of the term "qualia", since denotation (extension, reference) doesn't exhaust meaning. The other thing is connotation, AKA intension, AKA sense.

https://en.m.wikipedia.org/wiki/Sense_a... (read more)

TAG*30

If it was that easy to understand, we wouldn’t be here arguing about it.

Definitions are not theories

Even if there is agreement about the meaning of the word, there can still be disagreement about the correct theory of qualia. Definitions always precede theories -- we could define "Sun" for thousands of years before we understood its nature as a fusion reactor. Shared definitions are a prerequisite of genuine disagreement, rather than just talking past each other.

The problem of defining qualia -- itself the first stage in specifying the problem -- can be much ea... (read more)

2Rafael Harth
I would agree that people who use consciousness to denote the computational process vs. the fundamental aspect generally have different theories of consciousness, but they're also using the term to denote two different things. (I think this is because consciousness is notably different from other phenomena -- e.g., fiber decreasing risk of heart disease -- where the phenomenon is relatively uncontroversial and only the theory about how the phenomenon is explained is up for debate. With consciousness, there are a bunch of "problems" about which people debate whether they're even real problems at all (e.g., binding problem, hard problem). Those kinds of disagreements are likely causally upstream of inconsistent terminology.)
TAG*20

Free will in the general context means that you are in complete control of the decisions you make, that is farthest from the truth. Sure you can hack your body and brain ...

Why "complete" control? You can disprove anything, in a fake sort of way, by setting the bar high enough -- if you define memory as Total Recall, it turns out no-one has a memory.

Who's this "you" who's separate from both brain and body? Shouldn't you be asking how the machine works? A machine doesn't have to be deterministic, and can be self-modifying.

When Robert Sapolsky says there is

... (read more)
1asksathvik
What I meant was the consciousness part of your brain, the "you" who wants to do something. It's your ego. A machine can be both deterministic and self-modifying: it's deterministic when doing inference and modifying itself during training, although models can do test-time RL as well to update their weights on the fly. Also, of course, I am very interested in learning how it works, and that's why I am reading multiple books in neuroscience and psychology (Jung and Jeff Hawkins). I just want to be in touch with the ground reality, and I believe that there has to be a set of algorithms we are running, some of which the conscious mind controls and some the autonomic nervous system. It can't be purely random, else we wouldn't be functional; there has to be some sort of error correction happening as well. If I ask you to do 2+2 a hundred times, you would always respond 4, unless you are pissed off at the mundaneness of the task. So even if at the quantum level everything is probabilistic, it's somehow leading to some sort of determinism at the end.

You are confusing what we can do now vs. what we can do with the relevant understanding. I said that if we do have the full body state + the algorithm then we can predict.

No, I meant that if you observe closely for enough time, you can predict the actions of others; that amount of time observing everyday activities is possible only in the case of a partner. You might spend time with friends or family in a few contexts only.

We don't have perfect predictability in psychology because we don't understand it yet, just like we couldn't predict planetary motion with reasonable accuracy until we had the right models. We are fairly predictable in the short run, and with sufficient observation predictable in the medium term as well; if we weren't, any long-term contract would be bound to be void.

Yes!
TAG20

Possibly we are just in one of the mathematical universes that happens to have an arrow of time—the arrow seems to arise from fairly simple assumptions, mainly an initial condition and coarse graining

You are misunderstanding the objection. It's not just an arrow of time in the sense of order, such as increasing entropy, it's the passingness of time. An arrow can exist statically, but that's not how we experience time. We don't experience it as a simultaneous group of moments that happen to be ordered, we experience one moment at a time. A row of ho... (read more)

4Cole Wyeth
I already explained one reason why we should experience time passing - we have memories of the past but not the future. This is because of the arrow of time. Cognition is a computational process that runs forward in time; the explanation is probably related to the fact that computers create heat, which means increasing entropy, and the forward direction of time is the direction in which entropy increases - but I think Aram has a better explanation. I am aware that this will not address the objection as it exists in your mind - you're imagining that all of our qualia should somehow exist outside of time at the same instant - but I think this is just confused. How would you know if they did? What would that mean? You certainly can't experience the future as you experience the past in any causally detectable way. Actually, I suppose that such a strange state of affairs is discussed in "Story of Your Life," the inspiration for the movie Arrival.

I don't have a complete theory of qualia, but this seems like an unreasonable demand from the level 4 multiverse theory in itself. The level 4 multiverse explains why thinking beings like us could find themselves in our situation. Why that "feels like" something in a first-person way is a problem for any materialist theory, and the discussion of that problem is not new. Instead of getting into this, I addressed directly what the post actually claims, which is that the level 4 multiverse theory does not explain why pleasure and suffering have different valences, when they should be symmetric - the flaw in that reasoning is that there is no need for them to be symmetric.
TAG*20

(Extensively revised and edited).

Reductionism

Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

Things like airplane wings actually are, at least as approximations. I don't see why you are approvingly quoting this: it conflates reduction and elimination.

But the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces.

If that's a scientific claim, it needs to be... (read more)

2Seth Herd
I'm puzzled by your quotes. Was this supposed to be replying to another thread? I see it as a top-level comment. Because you tagged me, it looks like you're quoting me below, but most of that isn't my writing. In any case, this topic can eat unlimited amounts of time with no clear payoff, so I'm not going to get in any deeper right now.
TAG20

This Cartesian dualism in various disguises is at the heart of most “paradoxes” of consciousness. P-zombies are beings materially identical to humans but lacking this special res cogitans sauce, and their conceivability requires accepting substance dualism.

Only their physical possibility requires some kind of nonphysicality. Physically impossible things can be conceivable if you don't know why they are physically impossible, if you can't see the contradiction between their existence and the laws of physics. The conceivability of zombies is therefore evi... (read more)

TAG20

First, computationalism doesn’t automatically imply that, without other assumptions, and indeed there are situations where you can’t clone data perfectly,

That's a rather small nit. The vast majority of computationalists are talking about classical computation.

Indeed, I was basically trying to say that computationalism is so general that it cannot predict any result that doesn’t follow from pure logic/tautologies,

That's not much of a boast: pure logic can't solve metaphysical problems about consciousness, time, space, identity, and so on. That's why t... (read more)

TAG20

We’re talking about “physical processes”

We are talking about functionalism -- it's in the title. I am contrasting physical processes with abstract functions.

In ordinary parlance, the function of a physical thing is itself a physical effect...toasters toast, kettles boil, planes fly.

In the philosophy of mind, a function is an abstraction, more like the mathematical sense of a function. In maths, a function takes some inputs and/or produces some outputs. Well known examples are familiar arithmetic operations like addition, multiplication, squaring, and s... (read more)

1James Diacoumis
I understand that there's a difference between abstract functions and physical functions. For example, abstractly we could imagine a NAND gate as a truth table - not specifying real voltages and hardware. But in a real system we'd need to implement the NAND gate on a circuit board with specific voltage thresholds, wires, etc.

Functionalism is obviously a broad church, but it is not true that a functionalist needs to be tied to the idea that abstract functions alone are sufficient for consciousness. Indeed, I'd argue that this isn't a common position among functionalists at all. Rather, they'd typically say something like: a physically realised functional process, described at a certain level of abstraction, is sufficient for consciousness. To be clear, by "function" I don't mean some purely mathematical mapping divorced from any physical realisation. I'm talking about physically instantiated causal/functional roles. I'm not claiming that a simulation would do the job.

This is trivially true; there is a hard problem of consciousness that is, well, hard. I don't think I've said that computational functions are known to be sufficient for generating qualia. I've said that if you already believe this then you should take the possibility of AI consciousness more seriously.

Makes sense, thanks for engaging with the question.

It's an opinion. I'm obviously not going to be able to solve the Hard Problem of Consciousness in a comment section. In any case, I appreciate the exchange. I'm aware that neither of us can solve the Hard Problem here, but hopefully this clarifies the spirit of my position.
TAG*20

I don’t find this position compelling for several reasons:

First, if consciousness really required extremely precise physical conditions—so precise that we’d need atom-by-atom level duplication to preserve it—we’d expect it to be very fragile.

Don't assume that, then. Minimally, non-computational physicalism only requires that the physical substrate makes some sort of difference. Maybe approximate physical resemblance results in approximate qualia.

Yet consciousness is actually remarkably robust: it persists through significant brain damage, chemical al

... (read more)
1James Diacoumis
I think we might actually be agreeing (or ~90% overlapping) and just using different terminology.

Right. We're talking about "physical processes" rather than static physical properties, i.e. which processes are important for consciousness to be implemented, and can the physics support these processes?

The flight simulator doesn't implement actual aerodynamics (it's not implementing the required functions to generate lift), but this isn't what we're arguing. A better analogy might be to compare a bird's wing to a steel aeroplane wing: both implement the actual physical process required for flight (generating lift through certain airflow patterns), just with different materials. Similarly, a wooden rod can burn in fire whereas a steel rod can't. This is because the physics of the material is preventing a certain function (oxidation) from being implemented. So when we're imagining a functional isomorph of the brain which has been built using silicon, this presupposes that silicon can actually replicate all of the required functions with its specific physics. As you've pointed out, this is a big if! There are physical processes (such as nitric oxide diffusion across cell membranes) which might be impossible to implement in silicon and fundamentally important for consciousness.

I don't disagree! The point is that the functions which this physical process is implementing are what's required for consciousness, not the actual physical properties themselves. I think I'm more optimistic than you that a moderately accurate functional isomorph of the brain could be built which preserves consciousness (largely due to the reasons I mentioned in my previous comment around robustness). But putting this aside for a second, would you agree that if all the relevant functions could be implemented in silicon then a functional isomorph would be conscious? Or is the contention that this is like trying to make steel burn, i.e. we're just never going to be able to replicate the fu
3rife
Yes, I'm specifically focused on the behaviour of an honest self-report; fine-grained information becomes irrelevant implementation detail. If the neuron still fires, or doesn't, smaller noise doesn't matter. The only reason I point this out is specifically as it applies to the behaviour of a self-report (which we will circle back to in a moment). If it doesn't materially affect the output so powerfully that it alters that final outcome, then it is not responsible for outward behaviour.

I'm saying that we have ruled out that a functional duplicate could lack conscious experience, because we have established conscious experience as part of the causal chain: to be able to feel something and then output a description, through voice or typing, that is based on that feeling. If conscious experience is part of that causal chain, and the causal chain consists purely of neuron firings, then conscious experience is contained in that functionality. We can't invoke the idea that smaller details (than neuron firings) are where consciousness manifests, because unless those smaller details affect neuronal firing patterns enough to cause the subject to speak about what it feels like to be sentient, they are not part of that causal chain, which sentience must be a part of.
TAG20

Imagine that we could successfully implement a functional isomorph of the human brain in silicon. A proponent of 2) would need to explain why this functional isomorph of the human brain which has all the same functional properties as an actual brain does not, in fact, have consciousness.

Physicalism can do that easily, because it implies that there can be something special about running unsimulated, on bare metal.

Computationalism, even very fine grained computationalism, isn't a direct consequence of physicalism. Physicalism has it that an exact... (read more)

3James Diacoumis
Ok, I think I can see where we're diverging a little clearer now. The non-computational physicalist position seems to postulate that consciousness requires a physical property X, and the presence or absence of this physical property is what determines consciousness - i.e. it's what the system is that is important for consciousness, not what the system does. I don't find this position compelling for several reasons:

First, if consciousness really required extremely precise physical conditions - so precise that we'd need atom-by-atom level duplication to preserve it - we'd expect it to be very fragile. Yet consciousness is actually remarkably robust: it persists through significant brain damage, chemical alterations (drugs and hallucinogens) and even as neurons die and are replaced. We also see consciousness in different species with very different neural architectures. Given this robustness, it seems more natural to assume that consciousness is about what the system is doing (implementing feedback loops, self-models, integrating information etc.) rather than its exact physical state.

Second, consider what happens during sleep or under anaesthesia. The physical properties of our brains remain largely unchanged, yet consciousness is dramatically altered or absent. Immediately after death (before decay sets in), most physical properties of the brain are still present, yet consciousness is gone. This suggests consciousness tracks what the brain is doing (its functions) rather than what it physically is. The physical structure has not changed, but the functional patterns have changed or ceased.

I acknowledge that functionalism struggles with the hard problem of consciousness - it's difficult to explain how subjective experience could emerge from abstract computational processes. However, non-computationalist physicalism faces exactly the same challenge. Simply identifying a physical property common to all conscious systems doesn't explain why that property
TAG*70

Whether computational functionalism is true or not depends on the nature of consciousness as well as the nature of computation.

While embracing computational functionalism and rejecting supernatural or dualist views of mind

As before, they also reject non-computationalist physicalism, e.g. biological essentialism, whether they realise it or not.

It seems to privilege biology without clear justification. If a silicon system can implement the same information processing as a biological system, what principled reason is there to deny it could be conscio

... (read more)
1rife
Functionalism doesn't require giving up on qualia, but only acknowledging physics. If neuron firing behavior is preserved, the exact same outcome is preserved, whether you replace neurons with silicon or software or anything else. If I say "It's difficult to describe what it feels like to taste wine, or even what it feels like to read the label, but it's definitely like something" -- there are two options, either:

* it's perpetual coincidence that my experience of attempting to translate the feeling of qualia into words always aligns with words that actually come out of my mouth
* or it is not

Since perpetual coincidence is statistically impossible, we know that experience had some type of causal effect. The binary conclusion of whether a neuron fires or not encapsulates any lower level details, from the quantum scale to the micro-biological scale -- this means that the causal effect experience has is somehow contained in the actual firing patterns.

We have already eliminated the possibility of happenstance or some parallel non-causal experience, but no matter how you replicated the firing patterns, I would still claim the difficulty in describing the taste of wine. So -- this doesn't solve the hard problem. I have no idea how emergent pattern dynamics causes qualia to manifest, but it's not as if qualia has given us any reason to believe that it would be explicable through current frameworks of science. There is an entire uncharted country we have yet to reach the shoreline of.
3James Diacoumis
Thank you for the comment. I take your point around substrate independence being a conclusion of computationalism rather than independent evidence for it -- this is a fair criticism. If I'm interpreting your argument correctly, there are two possibilities:

1. Biological structures happen to implement some function which produces consciousness. [Functionalism]
2. Biological structures have some physical property X which produces consciousness. [Biological Essentialism or non-Computationalist Physicalism]

Your argument seems to be that 2) has more explanatory power because it has access to all of the potential physical processes underlying biology to try to explain consciousness, whereas 1) is restricted to the functions that the biological systems implement. Have I captured the argument correctly? (Please let me know if I haven't.) Imagine that we could successfully implement a functional isomorph of the human brain in silicon. A proponent of 2) would need to explain why this functional isomorph, which has all the same functional properties as an actual brain, does not, in fact, have consciousness. A proponent of 1) could simply assert that it does. It's not clear to me what property X the biological brain has which induces consciousness which couldn't be captured by a functional isomorph in silicon. I know there's been some recent work by Anil Seth where he tries to pin down the properties X which biological systems may require for consciousness: https://osf.io/preprints/psyarxiv/tz6an. His argument suggests that extremely complex biological systems may implement functions which are non-Turing computable. However, while he identifies biological properties that would be difficult to implement in silicon, I didn't find this sufficient evidence for the claim that brains perform non-Turing computable functions. Do you have any ideas? I'll admit that modern day LLM's are nowhere near functional isomorphs of the human brain so it could be that there'
TAG*10

We de-emphasized QM in the post

You did a bit more than de-emphasize it in the title!

Also:

Like latitude and longitude, chances are helpful coordinates on our mental map, not fundamental properties of reality.

"Are"?

**Insofar as we assign positive probability to such theories, we should not rule out chance as being part of the world in a fundamental way.** Indeed, we tried to point out in the post that the de Finetti theorem doesn't rule out chances, it just shows we don't need them in order to apply our standard statistical reasoning. In many contex

... (read more)
TAG*40

Computationalism is a bad theory of synchronic non-identity (in the sense of "why am I a unique individual, even though I have an identical twin"), because computations are so easy to clone -- computational states are more cloneable than physical states.

Computationalism might be a better theory of diachronic identity (in the sense of "why am I still the same person, even though I have physically changed"), since it's abstract, and so avoids the "one atom has changed" problem of naive physicalism. Other abstractions are available, though. "Having the same ... (read more)

4Noosphere89
First, computationalism doesn't automatically imply that, without other assumptions, and indeed there are situations where you can't clone data perfectly, like conventional quantum computers (the no-cloning theorem breaks if we allow closed timelike curves ala Deutschian CTCs, but we won't focus on that), so this is more or less a non-issue. Indeed, I was basically trying to say that computationalism is so general that it cannot predict any result that doesn't follow from pure logic/tautologies, so computationalism doesn't matter that much in the general case, and thus you need to focus on more specific classes of computations. More below:

https://en.wikipedia.org/wiki/No-cloning_theorem

https://en.wikipedia.org/wiki/No-broadcasting_theorem

Secondly, one could semi-reasonably argue that the inability to clone physical states is an artifact of our technological immaturity, and that in the far future, it will be way easier to clone physical states to a level of fidelity that is way closer to the level of copyability of computer programs.

Third, I gave a somewhat more specific theory of identity in my linked answer, and it's compatible with both computationalism and physicalism as presented; I just prefer the computationalist account for the general case and the physicalist answer for specialized questions. My main non-trivial claim here is that the sense of phenomenal experience/awareness fundamentally comes down to the fact that the brain needs to control the body, and vice versa, so you need a self-model of yourself, which becomes a big part of why we say we have consciousness, because we are referring to our self-models when we do that.
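For reference, the no-cloning theorem linked in the comment above has a compact standard statement; the following sketch is an editorial addition, not part of the original comment:

```latex
% Standard statement of the no-cloning theorem (added for reference).
% Suppose a unitary $U$ and fixed blank state $|e\rangle$ satisfied
%   $U(|\psi\rangle \otimes |e\rangle) = |\psi\rangle \otimes |\psi\rangle$
% for every state $|\psi\rangle$. Taking inner products of the cloned
% outputs for two states $|\psi\rangle$, $|\phi\rangle$ and using
% unitarity ($U^\dagger U = I$) gives
\langle\psi|\phi\rangle = \langle\psi|\phi\rangle^{2},
% so $\langle\psi|\phi\rangle \in \{0, 1\}$: only identical or mutually
% orthogonal states can be cloned, never arbitrary unknown states.
```

This is the sense in which quantum states resist copying while classical computational states do not.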
TAG*103

Others say chance is a physical property – a “propensity” of systems to produce certain outcomes. But this feels suspiciously like adding a mysterious force to our physics.[4] When we look closely at physical systems (leaving quantum mechanics aside for now), they often seem deterministic: if you could flip a coin exactly the same way twice, it would land the same way both times.

Don't sideline QM: it's highly relevant. If there are propensities, real probabilities, then they are not mysterious, they are just the way reality works. They might be unnecess... (read more)

4quiet_NaN
Agreed. If the authors claim that adding randomness in the territory in classical mechanics requires making it more complex, they should also notice that for quantum mechanics, removing the probability from the territory (as in Bohmian mechanics) tends to make the theories more complex.

Also, QM is not a weird edge case to be discarded at leisure; it is, to the best of our knowledge, a fundamental aspect of what we call reality. Sidelining it is like arguing "any substance can be divided into arbitrarily small portions" -- sure, as far as everyday objects such as a bottle of water are concerned, this is true to some very good approximation, but it will not convince anyone.

Also, I am not sure that for the many-worlds interpretation, the probability of observing spin-up when looking at a mixed state is something which firmly lies in the map. From what I can tell, what is happening in MWI is that the observer will become entangled with the mixed state. From the point of view of the observer, they find themselves either in the state where they observed spin-up or spin-down, but describing their world model before observation as "I will find myself either in spin-up-world or spin-down-world, and my uncertainty about which of these it will be is subjective" seems to grossly misrepresent that model. They would say "One copy of myself will find itself in spin-down-world, and one in spin-up-world, and if I were to repeat this experiment to establish a frequentist probability, I would find that the probability of each outcome is given by the coefficient of that part of the wave function to the power of two."

So, in my opinion:

* If a blackjack player wonders if a card placed face-down on the table is an ace, that is uncertainty in their map.
* If someone wonders how a deterministic but chaotic physical system will evolve over time, that is also uncertainty in the map.
* If someone wonders what outcome they are likely to measure in QM, that is (without adding extra
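The "coefficient of that part of the wave function to the power of two" rule described above is the Born rule; spelled out (an editorial addition for reference, not part of the original comment):

```latex
% Born rule: expand the state in the measurement basis,
%   |\psi\rangle = \sum_i c_i |i\rangle,
% then the long-run frequency of outcome $i$ over repeated
% measurements is the squared amplitude, with weights normalised:
P(i) = |c_i|^{2}, \qquad \sum_i |c_i|^{2} = 1.
```

Whether these weights live in the map or the territory is exactly the point under dispute in this thread.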
TAG*20

ethical, political and religious differences (which i’d mostly not place in the category of ‘priors’, e.g. at least ‘ethics’ is totally separate from priors aka beliefs about what is)

That's rather what I am saying. Although I would include "what is" as opposed to "what appears to be". There may well be a fact/value gap, but there's also an appearance/reality gap. The epistemology you get from the evolutionary argument only goes as far as the apparent. You are not going to die if you have interpreted the underlying nature or reality of a dangerous thing incorr... (read more)

TAG*40

A) If priors are formed by an evolutionary process common to all humans, why do they differ so much? Why are there deep ethical, political and religious divides?

B) how can a process tuned to achieving directly observable practical results allow different agents to converge on non-obvious theoretical truth?

These questions answer each other, to a large extent. B -- they can't; A -- that's where the divides come from. Values aren't dictated by facts, and neither are interpretations-of-facts.

@quila

The already-in-motion argument is even weaker than the evolution... (read more)

1[anonymous]
ethical, political and religious differences (which i'd mostly not place in the category of 'priors', e.g. at least 'ethics' is totally separate from priors aka beliefs about what is) are explained by different reasons (some also evolutionary, e.g. i guess it increased survival for not all humans to be the same), so this question is mostly orthogonal / not contradicting that human starting beliefs came from evolution. i don't understand the next three lines in your comment.
TAG40

you can only care about what you fully understand

I think I need an operational definition of “care about” to process this

If you define "care about" as "put resources into trying to achieve", there's plenty of evidence that people care about things they can't fully define, and don't fully understand, not least the truth-seeking that happens here.

TAG*10

You can only get from the premise "we can only know our own maps" to the conclusion "we can only care about our own maps" via the minor premise "you can only care about what you fully understand". That premise is clearly wrong: one can care about unknown reality, just as one can care about the result of a football match that hasn't happened yet. A lot of people do care about reality directionally.

@Dagon

Embedded agents are in the territory. How helpful that is depends on the territory

@Noosphere89

you can model the territory under consideration well enough

... (read more)
3Dagon
I think I need an operational definition of "care about" to process this.  Presumably, you can care about anything you can imagine, whether you perceive it or not, whether it exists or not, whether it corresponds to other maps or not.  Caring about something does not make it territory.  It's just another map. Kind of.  Identification of agency is map, not territory.  Processing within an agent happens (presumably) in a territory, but the higher-level modeling and output of that processing is purely about maps.  The agent is a subset of the territory, but doesn't have access at the agent level to the territory.
TAG30

To specify the Universe, you only have to specify enough information to pick it out from the landscape of all possible Universes

Of course not. You have to specify the landscape itself; otherwise it's like saying "page 273 of [unspecified book]".

According to string theory (which is a Universal theory in the sense that it is Turing-complete)

As far as I can see, that is only true in that ST allows Turing machines to exist physically. That's not the kind of Turing completeness you want. You want to know that String Theory is itself Turing computable,... (read more)

TAG31

They are not the same things though. Quantum mechanical measure isn’t actually a head count, like classical measure. The theory doesn’t say that—it’s an extraneous assumption. It might be convenient if it worked that way, but that would be assuming your conclusion.

QM measure isn’t probability—the probability of something occurring or not—because all possible branches occur in MWI.

Another part of the problem stems from the fact that what other people experience is relevant to them, whereas for a probability calculation, I only need to be able to statistica... (read more)

2Richard_Kennaway
So whence the Born probabilities that underlie the predictions of QM? I am not well versed in QM, but what is meant by quantum mechanical measure, if not those probabilities?
TAG*20

@Dagon

This comes down to a HUGE unknown - what features of reality need to be replicated in another medium in order to result in sufficiently-close results

That's at least two unknowns: what needs to be replicated in order to get the objective functioning; and what needs to be replicated to get the subjective awareness as well.

Which is all just to say -- isn't it much more likely that the problem has been solved, and there are people who are highly confident in the solution because they have verified all the steps that led them there, and they know wit

... (read more)
TAG*30

Physicalist epiphenomenalism is the only philosophy that is compatible with the autonomy of matter and my experience of consciousness, so it has not competitors as a cosmovision

No, identity theory and illusionism are competitors. And epiphenomenalism is dualism, not physicalism. As I have pointed out before.

1Arturo Macias
Illusionism is not a competitor, because consciousness is obviously an illusion. That is immediate since Descartes. That is why you cannot distinguish between "the true reality" and "matrix": both produce a legitimate stream of illusory experience ("you").  Epiphenomenalism is physicalist in the sense that it respects the autonomy and closeness of the physical world. Given that we are not p-zombies (because there is an "illusory" but immediate difference between real humans and p-zombies), that difference is precisely what we call “consciousness”.   Descartes+Laplace=Chalmers.  In fact, there is only one escape: consciousness could play an active role in the fundamental Laws of Physics. That would break the Descartes/Laplace orthogonality, making philosophy interesting again.
TAG*31

And one of Wallace’s axioms, which he calls ‘branching indifference’, essentially says that it doesn’t matter how many branches there are, since macroscopic differences are all that we care about for decisions.

The macroscopically different branches and their weights?

Focussing on the weight isn't obviously correct, ethically. You can't assume that the answer to "what do I expect to see" will work the same as the answer to "what should I do". Is-ought gap and all that.

It's tempting to think that you can apply a standard decision theory in terms of expecte... (read more)

3Richard_Kennaway
For the same reason that they decline with classical measure. Two people are worth more than one. And with classical probability measure. A 100% chance of someone surviving something is better than a 50% chance.
TAG*0-3

According the many-worlds interpretation (MWI) of quantum mechanics, the universe is constantly splitting into a staggeringly large number of decoherent branches containing galaxies, civilizations, and people exactly like you and me

There is more than one many worlds interpretation. The version stated above is not known to be true.

There is an approach to MWI based on coherent superpositions, and a version based on decoherence. These are (for all practical purposes) incompatible. Coherent splitting gives you the very large numbers of "worlds"... except that... (read more)

TAG*20

Every quantum event splits the multiverse, so my measure should decline by 20 orders of magnitude every second.

There isn’t the slightest evidence that irrevocable splitting, splitting into decoherent branches, occurs at every microscopic event -- that would be combining the frequency of coherent-style splitting with the finality of decoherent splitting. As well as the conceptual incoherence, there is in fact plenty of evidence -- e.g., the existence of quantum computing -- that it doesn't work that way.

"David Deutsch, one of the founders of quantum computing i... (read more)

2avturchin
But if I use quantum coin to make a life choice, there will be splitting, right?
TAG4-2

It seems common for people trying to talk about AI extinction to get hung up on whether statements derived from abstract theories containing mentalistic atoms can have objective truth or falsity values. They can. And if we can first agree on such basic elements of our ontology/epistemology as that one agent can be objectively smarter than another, that we can know whether something that lives in a physical substrate that is unlike ours is conscious, and that there can be some degree of objective truth as to what is valuable [not that all beings that are m

... (read more)
1Lorec
I'll address each of your 4 critiques: The point I'm making in the post is that no matter whether you have to treat the preferences as objective, there is an objective fact of the matter about what someone's preferences are, in the real world [ real, even if not physical ]. Whether or not an AI "only needs some potentially dangerous capabilities" for your local PR purposes, the global truth of the matter is that "randomly-rolled" superintelligences will have convergent instrumental desires that have to do with making use of the resources we are currently using [like the negentropy that would make Earth's oceans a great sink for 3 x 10^27 joules], but not desires that tightly converge with our terminal desires that make boiling the oceans without evacuating all the humans first a Bad Idea. My intent is not to say "I/we understand consciousness, therefore we can derive objectively sound-valid-and-therefore-true statements from theories with mentalistic atoms". The arguments I actually give for why it's true that we can derive objective abstract facts about the mental world, begin at "So why am I saying this premise is false?", and end at ". . . and agree that the results came out favoring one theory or another." If we can derive objectively true abstract statements about the mental world, the same way we can derive such statements about the physical world [e.g. "the force experienced by a moving charge in a magnetic field is orthogonal both to the direction of the field and to the direction of its motion"] this implies that we can understand consciousness well, whether or not we already do. My point, again, isn't that there needs to be, for whatever local practical purpose. My point is that there is.
2Nathan Helm-Burger
I think AI safety isn't as much a matter of government policy as you seem to think. Currently, sure. Frontier models are so expensive to train only the big labs can do it. Models have limited agentic capabilities, even at the frontier. But we are rushing towards a point where science makes intelligence and learning better understood. Open source models are getting rapidly more powerful and cheap. In a few years, the trend suggests that any individual could create a dangerously powerful AI using a personal computer. Any law which fails to protect society if even a single individual chooses to violate it once... Is not a very protective law. Historical evidence suggests that occasionally some people break laws. Especially when there's a lot of money and power on offer in exchange for the risk. What happens at that point depends a lot on the details of the lawbreaker's creation. With what probability will it end up agentic, coherent, conscious, self-improvement capable, escape and self-replication capable, Omohundro goal driven (survival focused, resource and power hungry), etc... The probability seems unlikely to me to be zero for the sorts of qualities which would make such an AI agent dangerous. Then we must ask questions about the efficacy of governments in detecting and stopping such AI agents before they become catastrophically powerful.
TAG20

Arguably, “basic logical principles” are those that are true in natural language.

That's where the problem starts, not where it stops. Natural language supports a bunch of assumptions that are hard to formally reconcile: if you want your strict PNC, you have to give up on something else. The whole 2500-year history of logic has been a history of trying to come up with formal systems that fulfil various desiderata. It is now formally proven that you can't have all of them at once, and it's not obvious what to keep and what to ditch. (Godelian problems can... (read more)

Answer by TAG20

However, I find myself appealing to basic logical principles like the law of non-contradiction.

The law of non-contradiction isn't true in all "universes", either. It's not true in paraconsistent logic, specifically.
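For concreteness, here is a toy model of one standard paraconsistent system, Priest's Logic of Paradox (LP), with a third "glut" value; the encoding and names below are my own illustrative sketch, not anyone's library. In LP a contradiction A ∧ ¬A can itself hold (when A takes the glut value) without everything following from it: explosion fails, which is the sense in which non-contradiction loses its classical force.

```python
# Three truth values: false, "both" (glut), true. Glut and true both
# count as designated, i.e. as "holding".
F, B, T = 0.0, 0.5, 1.0
DESIGNATED = {B, T}

def neg(x):
    return 1.0 - x   # negation maps T<->F and fixes the glut B

def conj(x, y):
    return min(x, y)  # conjunction takes the "worse" value

# With A a glut, the contradiction A & ~A itself holds in LP ...
A = B
assert conj(A, neg(A)) in DESIGNATED

# ... yet explosion fails: an unrelated false Q still does not hold.
Q = F
assert Q not in DESIGNATED
print("A & ~A holds while Q does not: no explosion")
```

(Strictly, in LP the classical tautology ¬(A ∧ ¬A) remains a theorem; what the sketch shows is that contradictions are satisfiable without triviality, which is the property at issue here.)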

2cubefox
Arguably, "basic logical principles" are those that are true in natural language. Otherwise nothing stops us from considering absurd logical systems where "true and true" is false, or the like. Likewise, "one plus one is two" seems to be a "basic mathematical principle" in natural language. Any axiomatization which produces "one plus one is three" can be dismissed on grounds of contradicting the meanings of terms like "one" or "plus" in natural language. The trouble with set theory is that, unlike logic or arithmetic, it often doesn't involve strong intuitions from natural language. Sets are a fairly artificial concept compared to natural language collections (empty sets, for example, can produce arbitrary nestings), especially when it comes to infinite sets.
TAG*-30

Yes, and Logan is claiming that arguments which cannot be communicated to him in no more than two sentences suffer from a conjunctive complexity burden that renders them “weak”.

@Logan Zoellner being wrong doesn't make anyone else right. If the actual argument is conjunctive and complex, then all the component claims need to be high probability. That is not the case. So Logan is right for not quite the right reasons -- it's not length alone.

That’s not trivial. There’s no proof that there is such a coherent entity as “human values”, there is no proof t

... (read more)
4avturchin
In general, I agree with you: we can't prove with certainty that AI will kill everyone. We can only establish a significant probability (which we also can't measure precisely). My point is that some AI catastrophe scenarios don't require AI motivation. For example: - A human could use narrow AI to develop a biological virus - An Earth-scale singleton AI could suffer from a catastrophic error - An AI arms race could lead to a world war
TAG*10

As other people have said, this is a known argument; specifically, it’s in The Generalized Anti-Zombie Principle in the Physicalism 201 series. From the very early days of LessWrong

Albert: “Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules.”

I think this proof relies on three assumptions. The first (which you address in the post) is that consciousness must happen within physics. (The opposing view wou

... (read more)
TAG*42

Argument length is substantially a function of shared premises

A stated argument could have a short length if it's communicated between two individuals who have common knowledge of each other's premises, as opposed to the "Platonic" form, where every load-bearing component is made explicit, and there is nothing extraneous.

But that's a communication issue....not a truth issue. A conjunctive argument doesn't become likelier because you don't state some of the premises. The length of the stated argument has little to do with its likelihood.

How true an argume... (read more)

3RobertM
Yes, and Logan is claiming that arguments which cannot be communicated to him in no more than two sentences suffer from a conjunctive complexity burden that renders them "weak". Many possible objections here, but of course spelling everything out would violate Logan's request for a short argument.  Needless to say, that request does not have anything to do with effectively tracking reality, where there is no "platonic" argument for any non-trivial claim describable in only two sentences, and yet things continue to be true in the world anyways, so reductio ad absurdum: there are no valid or useful arguments which can be made for any interesting claims.  Let's all go home now!
TAG20

I mean that if turing machine is computing universe according to the laws of quantum mechanics,

I assume you mean the laws of QM except the collapse postulate.

observers in such universe would be distributed uniformly,

Not at all. The problem is that their observations would mostly not be in a classical basis.

not by Born probability.

Born probability relates to observations, not observers.

So you either need some modification to current physics, such as mangled worlds,

Or collapse. Mangled worlds is kind of a nothing burger -- it's a variation on the... (read more)

1Signer
I phrased it badly, but what I mean is that there is a simulation of Hilbert space, where some regions contain patterns that can be interpreted as observers observing something, and if you count them by similarity, you won't get counts consistent with Born measure of these patterns. I don't think basis matters in this model, if you change basis for observer, observations and similarity threshold simultaneously? Change of basis would just rotate or scale patterns, without changing how many distinct observers you can interpret them as, right? Collapse or reality fluid. The point of mangled worlds or some other modification is to evade postulating probabilities on the level of physics.
TAG20

One might be determined to throw in the towel on cognitive effort if they were to take a particular interpretation of determinism, and they, and the rest of us, would be worse off for it.

Determinists are always telling each other to act like libertarians. That's a clue that libertarianism is worth wanting. @James Stephen Brown

Compatibilist free will has all the properties worth wanting: your values and beliefs determine the future, to the extent you exert the effort to make good decisions.

No it doesn't, because it doesn't have the property of being ... (read more)

1James Stephen Brown
This is really interesting, because I agree with this, but also agree with what Seth's saying. I think this disagreement might actually be largely a semantic one. As such, I'm going to (try to) avoid using the terms 'libertarian' or 'compatibilist' free will. First of all I agree with the use of "indeterminism" to mean non-uniform randomness. I agree that there is a way that determinism and indeterminism can be mixed in such a way as give rise to an emergent property that is not present in either purely determined or purely random systems. I understand this in relation to the idea of evolutionary "design" which emerges from a system that necessarily has both determined and indeterminate properties (indeterminate at least at the level of the genes, they might not be ultimately indeterminate). I'm going to employ a decision-making map that seeks to clarify my understanding of the how we make decisions and where we might get "what we want" from. As I see it, the items in white are largely set, and change only gradually, and with no sense of control involved. I don't believe we have any control over our genes, our intentions or desires, what results our actions will have, of the world—I also don't think we have any control over our model of ourselves or the world, those are formed subconsciously. But our effort (in the green areas) allows for deliberative decision making, following an evolutionary selection process, in which our conscious awareness is involved. In this way we are not beholden to the first action available to us, we can, instead of taking an action in the world, make a series of simulated actions in our head, consciously experiencing the predicted outcome of those actions, until we find a satisfactory one. So, you don't end up with a determined or a random solution, you end up with an option based on your conscious experience of your simulated options. This process satisfies my wants in terms of my sense that I have some control (when I make the effor
TAG20

I noticed the same thing -- even Scott Alexander dropped a reference to it without explaining it. Anyway, here's what I came up with:-

https://www.reddit.com/r/askphilosophy/s/lVNnjhTurI

(That's me done for another two days)

TAG41

You are a subject, and you determine your own future

Not so much, given determinism.

Determinism allows you to cause the future in a limited sense. Under determinism, events still need to be caused, and your (determined) actions can be part of the cause of a future state that is itself determined, that has probability 1.0. Determinism allows you to cause the future, but it doesn't allow you to control the future in any sense other than causing it. (and the sense in which you are causing the future is just the sense in which any future state depends on caus... (read more)

2Seth Herd
Libertarian free will is a contradiction in terms. Randomness is not what we want. Compatibilist free will has all the properties worth wanting: your values and beliefs determine the future, to the extent you exert the effort to make good decisions. Whether you do that is also determined, but it is determined by meaningful things like how you react to this statement. Determinism has no actionable consequences if it's true. The main conclusion people draw, my efforts at making decisions don't matter, is dreadfully wrong.
3James Stephen Brown
I think Seth is not so much contradicting you here but using a deterministic definition of "self" as that which we are referring to as a particular categorisation of the deterministic process, the one experienced as "making decisions", and importantly "deliberating over decisions". Whether we are determined or not, the effort one puts into their choices is not wasted, it is data-processing that produces better outcomes in general. One might be determined to throw in the towel on cognitive effort if they were to take a particular interpretation of determinism, and they, and the rest of us, would be worse off for it. So, the more of us who expend the effort to convince others of the benefits of continuing cognitive effort in spite of a deterministic universe are doing a service to the future, determined or otherwise.
TAG20

Your model of muon decay doesn't conserve charge -- you start with -1e, then have -2e, and finally have zero. Also, the second electron is never observed.
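The bookkeeping behind that objection can be made explicit. In the Standard Model the muon decays as mu- -> e- + anti-nu_e + nu_mu, and the charge balances with exactly one electron on the right-hand side (a minimal sketch using textbook charge assignments, in units of e):

```python
# Electric charge (in units of e) for the particles in standard muon decay.
charge = {"mu-": -1, "e-": -1, "anti-nu_e": 0, "nu_mu": 0}

initial = charge["mu-"]
final = charge["e-"] + charge["anti-nu_e"] + charge["nu_mu"]

# -1 on both sides: charge is conserved, with no second electron needed.
assert initial == final == -1
print("charge conserved:", initial, "->", final)
```

A scheme that produces two electrons from one muon would instead give a final charge of -2, which is the inconsistency being pointed out.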

-1[anonymous]
You made a couple of interesting points. First, the Γ Framework doesn't use charge. I know, that's so radical, right!?! Instead, it uses the oscillatory resonances of coupled photons (1Γ gluons) to form, stabilize and polarize particles.  In addition, like lepton numbers, charge is a construct - a useful tool but not necessarily the endgame in physics.  Also, keep in mind the volumes of interactors. Until there's a way to actually count both muons and electrons, then all we can honestly say is x muons decay into y electrons (and, possibly, unconfined gluons that disintegrate into photons). We don't know the actual ratio because, currently, there's no way to know.  Besides, there's another model that addresses the dynamics from a level below the SM that suggests it's not the ground floor. :) Models evolve. Like any fleeting zeitgeist, consensuses change. 
TAG31

What I have noticed is that while there are cogent overviews of AI safety that don't come to the extreme conclusion that we are all going to be killed by AI with high probability....and there are articles that do come to that conclusion without being at all rigorous or cogent....there aren't any that do both. From that I conclude there aren't any good reasons to believe in extreme AI doom scenarios, and you should disbelieve them. Others use more complicated reasoning, like "Yudkowsky is too intelligent to communicate his ideas to lesser mortals, but household... (read more)

TAG*187

Large: economies of scale; need to coordinate many specialised skills. (Factories were developed before automation)

Hierarchical: Needed because Large. It's how you co-ordinate a much greater than Dunbar number of people. (Complex software is also hierarchical).

Bureaucratic: Hierarchical subdivision by itself is necessary but insufficient...it makes organisations manageable but not managed. Reports create legibility and Rules ensure that units are contributing to the whole, not pursuing their own ends.


I don't see what Wentworld is:

Are you giving up on s... (read more)

TAG*30

I really don’t understand what “best explanation”, “true”, or “exist” mean, as stand-alone words divorced from predictions about observations we might ultimately make about them.

Nobody is saying that anything has to be divorced from prediction, in the sense that empirical evidence is ignored: the realist claim is that empirical evidence should be supplemented by other epistemic considerations.

Best explanation:- I already pointed out that EY is not an instrumentalist. For instance, he supports the MWI over the CI, although they make identical prediction... (read more)

TAG*20

Is there anything different about the world that I should expect to observe depending on whether Platonic math "exists" in some ideal realm? If not, why would I care about this topic once I have already dissolved my confusion about what beliefs are meant to refer to?

Word of Yud is that beliefs aren't just about predicting experience. While he wrote Beliefs Must Pay Rent, he also wrote No Logical Positivist I.

(Another thing that has been going on for years is people quoting Beliefs Must Pay Rent as though it's the whole story).

Maybe you are a logical pos... (read more)

[anonymous]1310

you are interested in finding the best explanation for your observations -- that's metaphysics. Shminux seems sure that certain negative metaphysical claims are true -- there are no Platonic numbers, objective laws, nor real probabilities

I really don't understand what "best explanation", "true", or "exist" mean, as stand-alone words divorced from predictions about observations we might ultimately make about them.

This isn't just a semantic point, I think. If there are no observations we can make that ultimately reflect whether something exists in this (seem... (read more)

TAG32

the ‘instantaneous’ mind (with its preferences etc., see post) is—if we look closely and don’t forget to keep a healthy dose of skepticism about our intuitions about our own mind/self—sufficient to make sense of what we actually observe

Huh? If you mean my future observations, then you are assuming a future self, and therefore temporally extended self. If you mean my present observations, then they include memories of past observations.

in fact I’ve defended some strong computationalist position in the past

But a computation is a series of steps over time, so it is temporally extended.

TAG*31

I think it’s fair to say that the most relevant objection to valid circular arguments is that they are not very good at convincing someone who does not already accept the conclusion.

I think the most relevant objection is quodlibet. Simple circular arguments can be generated for any conclusion. Since they are formally equivalent, they must have equal justificatory (probability-raising) power, which must be zero. That doesn't quite mean they are invalid... it could mean there are valid arguments with no justificatory force.

@Seed Using something like empiricism... (read more)

TAG01

Yes, but here the right belief is the realization that what connects you to what we traditionally called your future “self”, is nothing supernatural

As before, merely rejecting the supernatural doesn't give you a single correct theory, mainly because it doesn't give you a single theory. There are many more than two non-soul theories of personal identity (and the one Bensinger was assuming isn't the one you are assuming).

i.e. no super-material unified continuous self of extra value:

That's a flurry of claims. One of the alternatives to the momentary theory... (read more)

0FlorianH
Core claim in my post is that the 'instantaneous' mind (with its preferences etc., see post) is - if we look closely and don't forget to keep a healthy dose of skepticism about our intuitions about our own mind/self - sufficient to make sense of what we actually observe. And given this instantaneous mind with its memories and preferences is stuff we can most directly observe without much surprise in it, I struggle to find any competing theories as simple or 'simpler' and therefore more compelling (Occam's razor), as I meant to explain in the post. As I make very clear in the post, nothing in this suggests other theories are impossible. For everything there can of course be (infinitely) many alternative theories available to explain it. I maintain the one I propose has a particular virtue of simplicity. Regarding computationalism: I'm not sure whether you meant a very specific 'flavor' of computationalism in your comment; but for sure I did not mean to exclude computationalist explanations in general; in fact I've defended some strong computationalist position in the past and see what I propose here to be readily applicable to it.
TAG*32

Care however it occurs to you!

Good decisions need to be based on correct beliefs as well as values.

Well, what do you anticipate experiencing? Something or nothing? You anticipate whatever you do anticipate and that’s all there is to know—there’s no “should” here

Why not? If there is some discernible fact of the matter about how personal continuity works, that epistemically-should constrain your expectations. Aside from any ethically-should issues.

What we must not do, is insist on reaching a universal, ‘objective’ truth about it.

Why not?

The curr

... (read more)
0FlorianH
Yes, but here the right belief is the realization that what connects you to what we traditionally called your future "self", is nothing supernatural i.e. no super-material unified continuous self of extra value: we don't have any hint at such stuff; too well we can explain your feeling about such things as fancy brain instincts akin to seeing the objects in the 24FPS movie as 'moving' (not to say 'alive'); and too well we know we could theoretically make you feel you've experienced your past as a continuous self while you were just nano-assembled a micro-second ago with exactly the right memory inducing this belief/'feeling'. So due to the absence of this extra "self": "You" are simply this instant's mind we currently observe from you. Now, crucially, this mind has, obviously, a certain regard, hopes, plans, for, in essence, what happens with your natural successor. In the natural world, it turns out to be perfectly predictable from the outside, who this natural successor is: your own body.  In situations like those imagined with cloning thought experiments instead, it suddenly is less obvious from the outside, whom you'll consider your most dearly cared for 'natural' (or now less obviously 'natural') successor. But as the only thing that in reality connects you with what we traditionally would have called "your future self", is your own particular preferences/hopes/cares to that elected future mind, there is no objective rule to tell you from outside, which one you have to consider the relevant future mind. The relevant is the one you find relevant. This is very analogous for, say, when you're in love, the one 'relevant' person in a room for you to save first in a fire (if you're egoistic about you and your loved one) is the one you (your brain instinct, your hormones, or whatever) picked; you don't have to ask anyone outside about whom that should be. The traditional notion of "survival" as in invoking a continuous integrated "self" over and above the successio
TAG*20


The Olson twins do not at all have qualitative identity.

Not 100%, but enough to illustrate the concept.

So I just don’t know what your position is.

I didn't have to have a solution to point out the flaws in other solutions. My main point is that a no to soul-theory isn't a yes to computationalism. Computationalism isn't the only alternative, or the best.

You claim that there doesn’t need to be an answer;

Some problems are insoluble.

that seems false, as you could have to make decisions informed by your belief.

My belief isn't necessarily t... (read more)

TAG61

You’ve got a lot of questions to raise, but no apparent alternative.

Non-computationalist physicalism is an alternative to either or both of the computationalist theories. (That performing a certain class of computations is sufficient to be conscious in general, or that performing a specific one is sufficient to be a particular conscious individual. Computation as a theory of consciousness qua awareness isn't known to be true, and even if it is assumed, it doesn't directly give you a theory of personal identity).

The non existence, or incoherence, of persona... (read more)

2Seth Herd
If you're not arguing against a perfect copy being you, then I don't understand your position, so much of what follows will probably miss the mark. I had written more but have to cut myself off since this discussion is taking time without having much odds of improving anyone's epistemics noticeably. The Olson twins do not at all have qualitative identity. They have different minds: sets of memories, beliefs, and values. So I just don't know what your position is. You claim that there doesn't need to be an answer; that seems false, as you could have to make decisions informed by your belief. You currently value your future self more than other people, so you act like you believe that's you in a functional sense. Are you the same person tomorrow? It's not an identical pattern, but a continuation. I'm saying it's pretty-much you because the elements you wouldn't want changed about yourself are there. If you value your body or your continuity over the continuity of your memories, beliefs, values, and the rest of your mind that's fine, but the vast majority will disagree with you on consideration. Those things are what we mean by "me". I certainly do believe in the plural I (under the special circumstance I discussed); we must be understanding something differently in the torture question. I don't have a preference pre-copy for who gets tortured; both identical future copies are me from my perspective before copying. Maybe you're agreeing with that? After copying, we're immediately starting to diverge into two variants of me, and future experiences will not be shared between them. I was addressing a perfect computational copy. An imperfect but good computational copy is higher resolution, not lower, than a biological twin. It is orders of magnitude more similar to the pattern that makes your mind, even though it is less similar to the pattern that makes your body. What is writing your words is your mind, not your body, so when it says "I" it means the mind. Non
TAG13

Both determinism and free will are metaphysical assumptions. In other words, they are presuppositions of thought.

Neither is a presupposition of thought. You don't have to presume free will, beyond some general decision making ability, and you don't have to presume strict determinism beyond some good-enough causal reliability. Moreover, both are potentially discoverable as facts.

A choice must be determined by your mental processes, knowledge and desires. If choices arose out of nowhere, as uncaused causes, they would not be choices.

False dichotomy. ... (read more)

Answer by TAG3-2

You don’t have to be a substance dualist to believe a sim (something computationally or functionally isomorphic to a person) could be a zombie. It's a common error, that because dualism is a reason to reject something as being genuinely conscious, it is the only reason -- there is also an argument based on physicalism.

There are three things that can defeat the multiple realisability of consciousness:-

  1. Computationalism is true, and the physical basis makes a difference to the kinds of computations that are possible.

  2. Physicalism is true, but computational

... (read more)
TAG20

None of these are free will (as commonly understood)

Some believe that free will must be a tertium datur, a third thing fundamentally different to both determinism and indeterminism. This argument has the advantage that it makes free will logically impossible, and the disadvantage that hardly anyone who believes in free will defines it that way. In particular, naturalistic libertarians are happy to base free will on a mere mixture of determinism and indeterminism.

Another concern about naturalistic libertarianism is that determinism is needed to put a decisio... (read more)

TAG20

everything seems to collapse to tautology

Successful explanation makes things seem less arbitrary, more predictable, more obvious. A tautology is the ultimate in non arbitrary obviousness.
