James Camacho


For humans from our world, these questions do have answers—complicated answers having to do with things like map–territory confusions that make receiving bad news seem like a bad event (rather than the good event of learning information about how things were already bad, whether or not you knew it), and how it's advantageous for others to have positive-valence false beliefs about oneself.


If you have bad characteristics (e.g. you steal from your acquaintances), isn't it in your best interest to make sure this doesn't become common knowledge? You don't want to normalize people pointing out your flaws, so you get mad at people for gossiping behind your back, or saying rude things in front of you.

If you're not already aware of the information bottleneck, I'd recommend The Information Bottleneck Method, Efficient Compression in Color Naming and its Evolution, and Direct Validation of the Information Bottleneck Principle for Deep Nets. You can use this with routing for forward training.
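For reference, the objective from The Information Bottleneck Method is a trade-off (stated here in its standard form, not quoted from the paper): find an encoding $Z$ of the input $X$ that solves

$$\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta\, I(Z; Y),$$

compressing $X$ as much as possible while keeping however much information about the relevant variable $Y$ the multiplier $\beta$ demands.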

EDIT: Probably wasn't super clear why you should look into this. An optimal autoencoder should try to maximize the mutual information between the encoding and the original image. You wouldn't even need to train a decoder at the same time as the encoder! But, unfortunately, it's pretty expensive to even approximate the mutual information. Maybe, if you route to different neurons based on image captions, you could significantly decrease this cost.
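For concreteness, here's a minimal sketch (mine, not from the linked papers) of the InfoNCE bound, one common way to approximate the mutual information between paired encodings without training a decoder; the function name and shapes are illustrative, and it assumes PyTorch:

```python
import math

import torch
import torch.nn.functional as F

def infonce_lower_bound(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """InfoNCE lower bound on I(z1; z2) for a batch of paired encodings.

    z1, z2: (batch, dim) encodings of paired views of the same images.
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.T  # (batch, batch) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    # Classify which row of z2 matches each row of z1.
    loss = F.cross_entropy(logits, labels)
    # I(z1; z2) >= log(batch) - loss, so the estimate is capped at log(batch).
    return math.log(z1.size(0)) - loss
```

The log(batch) cap is exactly the expense alluded to above: tightly estimating a large mutual information requires enormous batches. Routing on captions could plausibly cut this down by only contrasting images within a cluster.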

And I migrated my comment.


and that's because I think you don't understand them either.

What am I supposed to do with this? The one effect this has is to piss me off and make me less interested in engaging with anything you've said.

Why is that the one effect? Jordan Peterson says that the one answer he routinely gives to Christians and atheists that pisses them off is, "what do you mean by that?" In an interview with Alex O'Connor, he says:

So people will say, well, do you believe that happened literally, historically? It's like, well, yes, I believe that it's okay. Okay. What do you mean by that? That you believe that exactly. Yeah. So, so you tell me you're there in the way that you describe it.

Right, right. What do you see? What are the fish doing exactly? And the answer is you don't know. You have no notion about it at all. You have no theory about it. Sure. You have no theory about it. So your belief is, what's your belief exactly?

(25:19–25:36, The Jordan B. Peterson Podcast - 451. Navigating Belief, Skepticism, and the Afterlife w/ Alex O'Connor)

Sure, this pisses off a lot of people, but it also gets some people thinking about what they actually mean. So, there's your answer: you're supposed to go back and figure out what you mean. A side benefit is that if it pisses you off, maybe I won't see your writing anymore. I'm pretty annoyed at how the quality of posts has gone down on this website in the past few years.

But my view is that maths and computation are not the only symbols upon which constructive discussion can be built.

I find it useful to take an axiom of extensionality—if I cannot distinguish between two things in any way, I may as well consider them the same thing for all that it could affect me. Given that maths/computation/logic is the process of asserting things are the same or different, it seems to me tautologically true that maths and computation are the only symbols upon which useful discussion can be built.

I'm not arguing against the claim that you could "define consciousness with a computation". I am arguing against the claim that "consciousness is computation". These are distinct claims.

Maybe you want to include some undefinable aspect in consciousness. But any time that aspect makes the system function differently, you can use the difference to refine your definition. I don't think the adherents of computational functionalism, or even of a computational universe, need to claim it encapsulates everything there could possibly be in the territory, only that it encapsulates anything you can perceive in the territory.

There is an objective fact-of-the-matter whether a conscious experience is occurring, and what that experience is. It is not observer-dependent. It is not down to interpretation. It is an intrinsic property of a system.

I believe this is your definition of real consciousness? It tells me some properties of consciousness, but it doesn't really help me define consciousness. It's intrinsic and objective, but what is it? For example, if I told you that the Sierpinski triangle is created by combining three copies of itself, you still wouldn't know what it actually looks like. To work with it, you need to know how the base case is defined. Once you have a definition, you've invented computational functionalism (for the Sierpinski triangle, for consciousness, for the universe at large).
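To make the base-case point concrete, here's a minimal sketch (purely illustrative, mine) of the recursive definition; nothing drawable comes out until `depth == 0` pins down a base case:

```python
def sierpinski(vertices, depth):
    """Triangles of a depth-limited Sierpinski triangle.

    vertices: three (x, y) corner points.
    """
    if depth == 0:
        return [vertices]  # base case: a plain filled triangle
    (ax, ay), (bx, by), (cx, cy) = vertices
    # Midpoints of the three edges.
    ab = ((ax + bx) / 2, (ay + by) / 2)
    bc = ((bx + cx) / 2, (by + cy) / 2)
    ca = ((cx + ax) / 2, (cy + ay) / 2)
    # Three half-scale copies of the whole figure.
    return (sierpinski([(ax, ay), ab, ca], depth - 1)
            + sierpinski([ab, (bx, by), bc], depth - 1)
            + sierpinski([ca, bc, (cx, cy)], depth - 1))

triangles = sierpinski([(0, 0), (1, 0), (0.5, 3 ** 0.5 / 2)], depth=5)
```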

I think I have a sense of what's happening here. You don't consider an argument precise enough unless I define things in more mathematical terms.

Yes, exactly! To be precise, I don't consider an argument useful unless it is defined through a constructive logic (e.g. mathematics through ZF set theory).

If you actually want to know the answer: when you define the terms properly (i.e. KL-divergence from the firings that would have happened), the entire paradox goes away.

I'd be excited to actually see this counterargument. Is it written down anywhere that you can link to?

Note: this assumes computational functionalism.

I haven't seen it written down explicitly anywhere, but I've seen echoes of it here and there. Essentially, in RL, agents are defined via their policies. If you want to modify the agent to be good at a particular task, while still being pretty much the "same agent", you add a KL-divergence anchor term:
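$$\pi^* \;=\; \arg\max_{\pi}\; \mathbb{E}_{\pi}[R] \;-\; \lambda\, D_{\mathrm{KL}}\!\left(\pi \,\|\, \pi_{\text{anchor}}\right)$$

(Standard form of such an objective; $\lambda$ trades off task reward against staying close to the anchor policy.)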

This is known as piKL and was used for Diplomacy, where it's important to act similarly to humans. When we think of consciousness or the mind, we can divide thoughts into two categories: the self-sustaining (memes/particles/holonomies) and noise (temperature). Temperature just makes things fuzzy, while memes prescribe specific actions. On a broad scale, maybe they tell your body to take specific actions, like jumping in front of a trolley. Let's call these "macrostates". Since many different memes can produce the same macrostate, let's call the memes themselves "microstates". When comparing two consciousnesses, we want to see how well the microstates match up.

The only way we can distinguish between microstates is by increasing the number of macrostates—maybe looking at neuron firings rather than body movements. So, using our axiom of extensionality, the best we can do to determine how "different" two things are is count the difference in the number of microstates filling each macrostate. Actually, we could scale the microstate counts and temperature by some constant factor and end up with the same distribution, so it's better to look at the difference in their logarithms. This is exactly the cross-entropy. The KL-divergence subtracts off the entropy of the anchor policy (the thing you're comparing to), but that's just a constant.
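Spelled out (a standard identity, with $p$ the anchor policy and $q$ the one being compared):

$$D_{\mathrm{KL}}(p \,\|\, q) \;=\; \underbrace{-\sum_x p(x)\log q(x)}_{\text{cross-entropy}\; H(p,\,q)} \;-\; \underbrace{\Big(-\sum_x p(x)\log p(x)\Big)}_{\text{entropy}\; H(p)}$$

so with the anchor fixed, minimizing the KL-divergence and minimizing the cross-entropy are the same thing.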

So, let's apply this to the paradox. Suppose my brain is slowly being replaced by silicon, and I'm worried about losing consciousness. I acknowledge there are impossible-to-determine properties that I could be losing; maybe the gods do not let cyborgs into heaven. However, that isn't useful to include in my definition of consciousness. All the useful properties can be observed, and I can measure how much they are changing with a KL-divergence.
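As a toy version of that measurement (my own sketch; the distributions and numbers are made up), assuming NumPy:

```python
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """D_KL(p || q) for two discrete distributions over the same outcomes."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical firing-pattern frequencies on the same stimuli, before and
# after part of the brain is replaced by silicon.
original = np.array([0.50, 0.30, 0.15, 0.05])
replaced = np.array([0.48, 0.31, 0.16, 0.05])
drift = kl_divergence(original, replaced)  # small => still "the same agent"
```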

When it comes to other people, I pretty much don't care if they're p-zombies, only how their actions affect me. So a very good definition for their consciousness is simply the equivalence class of programs that would produce the actions I see them taking. If they start acting radically differently, I would expect this class to have changed, i.e. their consciousness is different. I've heard some people care about the substrate their program runs on. "It wouldn't be me if the program was run by a bunch of aliens waving yellow and blue flags around." I think that's fine. They've merely committed suicide in all the worlds where their substrate didn't align with their preferences. They could similarly play the quantum lottery for a billion dollars, though this isn't a great way to ensure your program's proliferation.

In response to the two reactions:

  1. Why do you say, "Besides, most people actually take the opposite approach: computation is the most "real" thing out there, and the universe—and any consciousnesses therein—arise from it."

Euan McLean said at the top of his post he was assuming a materialist perspective. If you believe there exists "a map between the third-person properties of a physical system and whether or not it has phenomenal consciousness" you believe you can define consciousness with a computation. In fact, anytime you believe something can be explicitly defined and manipulated, you've invented a logic and computer. So, most people who take the materialist perspective believe the material world comes from a sort of "computational universe", e.g. Tegmark IV.

  2. Soldier mindset.

Here's a soldier mindset: you're wrong, and I'm much more confident on this than you are. This person's thinking is very loosey-goosey and someone needed to point it out. His posts are mostly fluff with paradoxes and questions that would be completely answerable (or at least interesting) if he deleted half the paragraphs and tried to pin down definitions before running rampant with them.

Also, I think I can point to specific things that you might consider soldier mindset. For example,

It's such a loose idea, which makes it harder to look at it critically. I don't really understand the point of this thought experiment, because if it wasn't phrased in such a mysterious manner, it wouldn't seem relevant to computational functionalism.

If you actually want to know the answer: when you define the terms properly (i.e. KL-divergence from the firings that would have happened), the entire paradox goes away. I wasn't giving him the answer, because his entire post is full of this same error: not defining his terms, running rampant with them, and then being shocked when things don't make sense.
