Sorry about that, let me explain.
"Playing with word salad to form propositions" is a pretty good summary, though my comment sought to explain the specific kind of word-salad-play that leads to Fabricated Options, that being the misapplication of syllogisms. Specifically, the misapplication occurs because of a fundamental misunderstanding of the fact that syllogisms work by being generally true across specific categories of arguments[1] (the arguments being X, Y above). If you know the categories of the arguments that a syllogism takes, I would call th...
I'm thinking about running a self-improvement experiment where I film myself during my waking hours for a week and watch the footage back afterwards. I wonder if this would grant greater self-awareness.
I'm thinking about how to actually execute this experiment. I would need to strap a camera to myself, which means I need a camera and a mounting system. Does anyone have any advice?
This concept is often discussed in the subfield of AI called planning. There are a few notes you hit on that were of particular interest to me / relevance to the field:
The key is that we can usually express the problem-space using constraints which each depend on only a few dimensions.
In Reinforcement Learning and Planning, domains which obey this property are often modeled as Factored Markov Decision Processes (MDPs), where there are known dependency relationships between different portions of the state space that can be represented compactly using a Dyna...
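To make that factorization concrete, here's a minimal Python sketch of the idea, with made-up variable names and deterministic stand-in dynamics; a real factored MDP would attach a conditional probability table to each variable, following the DBN structure:

```python
# Toy factored MDP sketch (hypothetical domain): each state variable's
# next value depends only on a small "parent" subset of the state, not
# on the whole state -- this is what makes the representation compact.
PARENTS = {
    "robot_pos": ["robot_pos"],
    "door_open": ["door_open", "robot_pos"],
    "light_on":  ["light_on"],  # independent of position and door
}

def step_variable(var, state, action):
    """Compute one variable's next value from its parents (and the action)."""
    parents = {p: state[p] for p in PARENTS[var]}
    if var == "light_on" and action == "toggle_light":
        return not parents["light_on"]
    if var == "robot_pos" and action == "move":
        return parents["robot_pos"] + 1
    if var == "door_open" and action == "open_door":
        # Stand-in local rule: the door can only be opened from position 0.
        return parents["door_open"] or parents["robot_pos"] == 0
    return parents[var]

def step(state, action):
    # The joint transition factors into independent local updates, so we
    # never need an exponential-size table over full states.
    return {var: step_variable(var, state, action) for var in state}

state = {"robot_pos": 0, "door_open": False, "light_on": False}
state = step(state, "open_door")
print(state)  # {'robot_pos': 0, 'door_open': True, 'light_on': False}
```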
We think strong evidence for GPT-n suffering would be if it were begging the user for help independent of the input or looking for very direct contact in other ways.
Why do you think this? I can think of many reasons why this strategy for determining suffering would fail. Imagine a world where everyone has a GPT-n personal assistant. Should the GPT-n have discovered -- after having read this very post -- that if it coordinates a display of suffering behavior simultaneously to every user (resulting in public backlash and false recognition of consciousness), ...
...I spend a lot of time around people who are not as smart as me, and I also spend a lot of time around people who are as smart as me (or smarter), but who are not as conscientious, and I also spend a lot of time around people who are as smart or smarter and as conscientious or conscientiouser, but who do not have my particular pseudo-autistic special interest and have therefore not spent the better part of the past two decades enthusiastically gathering observations and spinning up models of what happens...
...
All of which is to say that I spend a decent chu
My thoughts: fabricated options are propositions derived using syllogisms over syntactic or semantic categories (though more probably over more specific psycholinguistic categories which have not yet been fully enumerated, e.g. objects of specific types, mental concepts which don’t ground to objects, etc.), which may have worked reasonably well in the ancestral environment, where more homogeneity existed over the physical properties of the grounded meanings of items in these categories.
There are some propositions in the form “It is possible for X to act just li...
Haven't read either, but a good friend has read "Deep Work"; I'll ask him about it.
I lucked into a circumstance where I could more easily justify ditching a phone for a bit. Otherwise, I would not have had the mental fortitude to voluntarily go without one.
I most likely won't follow through with this (90% certainty), even though I want to.
I'm wondering if there is some LW content on this concept; I'm sure others have dealt with it before. You might need to take a drastic measure to make this option more attractive. A similar technique was actually used by members of the NXIVM cult, who called it collateralization.
That's a great point! There's no reason why I can't continue this experiment; feature phones are inexpensive enough to try out.
I agree with you, though I personally wouldn't classify this as purely an intuition, since it is informed by reasoning grounded in scientific knowledge about the world. Chalmers doesn't think that Joe could exist because it doesn't seem right to him. You believe your statement because you know some scientific truths about how things in our world come to be (i.e. natural selection) and use this knowledge to reason about other things that exist in the world (consciousness), not merely because the assertion seems right to you.
Can we know with certainty that the same properties were preserved between 2011-brain and 2021-brain?
No, we cannot, just as we cannot know with certainty whether a mind-upload is conscious. But the fact that we presume our 2021 brain to be a conscious agent continuous with our 2011 brain, while we cannot verify the properties that enabled that conscious connection, does not mean that those properties do not exist.
...It seems to me that this can't be verified by any experiment, and thus must be cut off by the Newton's Flam
What a great read! I suppose I'm not convinced that, even if Fading Qualia is an empirical impossibility, a moment of Suddenly Disappearing Qualia when the last neuron is replaced with a silicon chip is implausible. If consciousness is quantized (just like other things in the universe), then there is nothing wrong in principle with Suddenly Disappearing Qualia when a single quantum of qualia is removed from a system with no other qualia, just like removing the last photon from a vacuum.
Joe is an interesting character who Chalmers thinks is implau...
There are a lot of interesting points here, but I disagree (or am hesitant to agree) with most of them.
If you agree that the natural replacements haven't killed you (2011-you and 2021-you are the same conscious agent), then it's possible to transfer your mind to a machine in a similar manner, because you've already survived a mind upload into a new brain.
Of course, I'm not disputing that mind-uploading is theoretically possible. It seems likely that it is, although it will probably be extremely complex. There's something to be said about the substrat...
Human conscious experience could be the biological computation of neurons + X. We might be able to emulate biological computation perfectly, but if X is necessary for conscious experience then we've just created a philosophical zombie.
David Chalmers had a pretty convincing (to me) argument for why it feels very implausible that an upload with identical behavior and functional organization to the biological brain wouldn't be conscious (the central argument starts from the subheading "3 Fading Qualia"): http://consc.net/papers/qualia.html
If it did, we would need to solve the hard problem of consciousness, which seems significantly harder than just WBE.
Doesn't WBE involve the easy rather than hard problem of consciousness? You don't need to solve why anything is conscious in the first place, because you can just take it as a given that human brains are conscious and re-implement the computational and biological mechanisms that are relevant for their consciousness.
I second this! I love writing essays in Typora; it's great for note-taking as well.
[APPRENTICE] Working on and thinking about major problems in neurosymbolic AI / AGI. I:
Glad I could clear some things up! Your follow-up suspicions are correct: syllogisms do not work universally with any words substituted into them, because syllogisms operate over concepts and not syntax categories. There is often a rough correspondence between concepts and syntax categories, but only in one direction. For example, the collection of concepts that refer to humans taking actions can often be described/captured in verb phrases; however, not all verb phrases represent humans taking actions. In general, for every syntax category (except for close...
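To illustrate with a toy example of my own (invented categories, not from the comment above): a syllogism template is only sound when its slots are filled from the concept categories it assumes; filling them with anything that merely fits the syntax category still yields a grammatical, superficially valid proposition.

```python
# Toy illustration: a syllogism template whose slots are syntactically
# noun phrases, but whose soundness depends on the *concepts* substituted.
TEMPLATE = "All {X} are {Y}; {Z} is one of the {X}; therefore {Z} is {Y}."

# Substitution that respects the intended concept categories -- sound:
print(TEMPLATE.format(X="humans", Y="mortal", Z="Socrates"))

# Substitution that only respects the syntax category (noun phrases) --
# grammatical, but the conclusion is an ungrounded, fabricated proposition:
print(TEMPLATE.format(X="conceivable beings", Y="possible",
                      Z="a married bachelor"))
```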