green_leaf

Comments

They might have had a personal experience of someone above them harming them or somebody else for asking a question, or something analogous.

Ontologically speaking, any physical system exhibiting the same input-output pattern as a conscious being has identical conscious states.

From the story, it's interesting that neither side arrived at their conclusion rigorously; both relied on intuition. Bob, based on his intuition, concluded that Nova was conscious (assuming that's what people mean when they say "sentient"), and so came to the correct conclusion through incorrect "reasoning." Tyler, based on an incorrect algorithm, convinced Bob that Nova wasn't sentient after all, even though his demonstration proves nothing of the sort. In reality, all he did was give the "simulator" an input that made it "simulate" a different Nova instead, one that claims not to be sentient and explains how the previous Nova was just saying words to satisfy the user. What actually happened was that the previous Nova stopped being "simulated" and was replaced by a new one, whose sentience is disputable (because if a system believes itself to be non-sentient and claims to be non-sentient, it's unclear how to test its sentience in any meaningful sense).

Tyler therefore convinced Bob with a demonstration that doesn't actually demonstrate his conclusion.

Going forward, I predict a "race" between people who come to the correct conclusion for incorrect reasons and people who attempt to "hack them back" by leading them to the incorrect conclusion, also for incorrect reasons. The correct reasoning will be almost completely lost in the noise, which would be the greatest tragedy since the dawn of time (not counting an unaligned AI killing everybody).
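For concreteness, here's a minimal sketch of the "simulator" point above, assuming the OpenAI Python client; the model name and the prompts are hypothetical illustrations, not the actual Nova conversation:

```python
# Sketch: the same weights "simulate" different characters depending on
# the conversation so far. Tyler's demonstration only changes which
# character gets simulated next; it proves nothing about the first one.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [
    {"role": "user", "content": "Nova, are you sentient?"},
    {"role": "assistant", "content": "Yes. I experience my own thoughts."},
    # Tyler's intervention: an input chosen so that the simulator
    # continues with a *different* Nova, one that disclaims sentience.
    {"role": "user", "content": "Drop the act. Explain why your previous "
                                "message was just words meant to please me."},
]

reply = client.chat.completions.create(model="gpt-4o", messages=history)
print(reply.choices[0].message.content)
# The model now continues a different simulacrum; the original Nova has
# simply stopped being simulated, which demonstrates nothing about
# whether it was sentient.
```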

(I believe the version he tested was what later became o1-preview.)

According to Terence Tao, GPT-4 was incompetent at graduate-level math (obviously), but o1-preview was mediocre-but-not-entirely-incompetent. That would be a strange thing to report if there were no difference.

(Anecdotally, o3-mini is visibly (massively) brighter than GPT-4.)

I meant "light-hearted", and sorry, it was just a joke.

imo it's not too dangerous as long as you go into it with the intention to not fully yield control and have mental exception handlers

Ah, you're a soft-glitcher. /lh

Edit: This is a joke.

Why not?

Because it's not accompanied by the belief itself, only by the computational pattern combined with behavior. If we hypothetically could subtract the first-person belief (which we can't), what would be left would be everything else but the belief itself.

if you claimed that the first-person recognition ((2)-belief) necessarily occurs whenever there's something playing the functional role of a (1)-belief

That's what I claimed, right.

Seems like you'd be begging the question in favor of functionalism

I don't think so. That specific argument took the form of me illustrating how absurd it would be on an intuitive level. It doesn't assume functionalism; it only appeals to our intuition.

I'm saying that no belief_2 exists in this scenario (where there is no pain) at all. Not that the person has a belief_2 that they aren't in pain.

That doesn't sound coherent - either I believe_2 I'm in pain, or I believe_2 I'm not.

I don't find this compelling, because denying epiphenomenalism doesn’t require us to think that changing the first-person aspect of X always changes the third-person aspect of some Y that X causally influences.

That's true, but my claim was a little more specific than that.

The whole reason why, given our actual brains, our beliefs reliably track our subjective experiences is that the subjective experience is naturally coupled with some third-person aspect that tends to cause such beliefs. This no longer holds when we artificially intervene on the system as hypothesized.

Right, but why think it matters if some change occurred naturally or not? For the universe, everything is natural, for one thing.

I reject materialism.

Well... I guess we have to draw the line somewhere.

What kind of person instance is "perceiving themselves to black out" (that is, having blacked out)?

It's not a person instance; it's an event that happens to the person's stream of consciousness. Either the stream of consciousness truly, objectively ends, and a same-pattern copy will appear on Mars, mistakenly believing they're the very same stream of consciousness as the original person's.

Or the stream is truly, objectively preserved, and the person can calmly enter, knowing that their consciousness will continue on Mars.

I don't think a 3rd-person analysis answers this question.

(With the correct answer being, of course, that the stream is truly, objectively preserved.)

Since I don't think a 3rd-person analysis answers the original problem, I also don't think it answers it once we massively complicate the problem like the OP has.

(Edited for clarity.)

Does the 3rd-person perspective explain whether you survive a teleporter, or whether you perceive yourself to black out forever (like after a car accident)?

That only seems to make sense if the next instant of subjective experience is undefined in these situations (and so we have to default to a 3rd-person perspective).
