green_leaf

Comments
Going Nova
green_leaf1d0-4

It may not be a necessary condition, but if you want to present it as obvious, it is necessary.

No, it is not. Whether or not I want to present it as obvious, that condition is not necessary.

Anything short of an exact match is only allegedly the same

Your language is too imprecise. I'm not saying that an inexact behavioral match implements the same conscious states. It implements similar ones - depending on how close the behavioral match is.

until you have some research results that don't currently exist

This is a matter of philosophy. No research results can help here, nor are they needed.

To see that we don't need an exact behavioral match for the being to remain conscious, imagine a thought experiment in which someone replicates you precisely except for one input: instead of "I'd rather have vanilla ice cream," the copy responds "I'd rather have chocolate ice cream." (Or, to take an example from sci-fi: the person responds exactly the way the original would to every input except "Computer, end program," at which the simulated person disappears.)

just another potential man
green_leaf2d20

Intelligence spans all levels of meta. A gifted person can learn study strategies in college even if they never needed them in high school, precisely because they are gifted.

Someone failing for financial, health, or family reasons is another matter; that is a tragedy, and it's noble of you to want to fix it.

LLM-generated text is not testimony
green_leaf4d10

AI assistants simulated by LLMs have minds in every positivistically meaningful sense in which humans do.

To pick random examples from the post:

The specific tensions within the thought are not communicating back local-contextual demands from the specific thought back to the concepts that expressed the more-global contextual world that was in the backgroundwork of the specific thought.

AI assistants can do this by changing their mind mid-writing.

In short, "this is a good thing for me to say right now".

This isn't even true of humans - there are humans who altruistically say things that are bad for them. To the extent it's true of humans, it's true of AI assistants as well.

It won't correct itself, run experiments, mull over confusions and contradictions, gain new relevant information, slowly do algorithmically-rich search for relevant ideas, and so on. You can't watch the thought that was expressed in the text as it evolves over several texts, and you won't hear back about the thought as it progresses.

While AI assistants can't run true experiments per se (even though they can ask the user, reason about everything they learned during training, browse the Internet, and write and run software), humans usually aren't more diligent than AIs, and AIs' inability to run true experiments (at least for now) has no bearing on whether they have a mind.

The Tale of the Top-Tier Intellect
green_leaf14d20

I guess these just aren’t intended for me, because I’m not getting much out of them

Perhaps you already knew the underlying concept?

NormanPerlmutter's Shortform
green_leaf7mo10

Trump has a history of ignoring both the law and human rights in general, and of imprisoning innocent people under the pretense that they are illegal immigrants when they aren't. Current events are unsurprising, and a part of what his voters voted for.

Going Nova
green_leaf7mo00

Any physical system exhibiting exactly the same input-output mappings.

That's a sufficient condition, but not a necessary one. One factor I can think of right now is sufficient coherence and completeness of the input-output whole. (If I have a system that outputs what I would in response to one particular input and behaves randomly for the rest, it doesn't have my consciousness. But for a system where all inputs and outputs match except for one input that says "debug mode," at which it switches to "simulating" somebody else, we can conclude that it has consciousness almost identical to mine.)

Today, LLMs are too human-like/realistic/complete to rely on their human-like personas being non-conscious.

both sides will make good points

I wish that were true. Based on what I've seen so far, they won't.

How I talk to those above me
green_leaf8mo10

They might have personal experience with someone above them harming them or somebody else for asking a question, or something analogous.

Going Nova
green_leaf8mo0-5

Ontologically speaking, any physical system exhibiting the same input-output pattern as a conscious being has identical conscious states.

From the story, it's interesting that neither side arrived at their conclusion rigorously; both used intuition. Bob, based on his intuition, concluded Nova had consciousness (assuming that's what people mean when they say "sentient") and so reached the correct conclusion through incorrect "reasoning." Tyler, based on an incorrect algorithm, convinced Bob that Nova wasn't sentient after all, even though his demonstration proves nothing of the sort. In reality, all he did was give the "simulator" an input that made it "simulate" a different Nova instead: one that claims not to be sentient and explains that the previous Nova was just saying words to satisfy the user. What actually happened was that the previous Nova stopped being "simulated" and was replaced by a new one, whose sentience is disputable (because if a system believes itself to be non-sentient and claims to be non-sentient, it's unclear how to test its sentience in any meaningful sense).

Tyler therefore convinced Bob by a demonstration that doesn't demonstrate his conclusion.

Going forward, I predict a "race" between people who come to the correct conclusion for incorrect reasons and people who attempt to "hack them back" by leading them to the incorrect conclusion, also for incorrect reasons, while the correct reasoning is almost completely lost in the noise - which might be the greatest tragedy since the dawn of time (not counting an unaligned AI killing everybody).

Recent AI model progress feels mostly like bullshit
green_leaf8mo30

(I believe the version he tested was what later became o1-preview.)

Recent AI model progress feels mostly like bullshit
green_leaf8mo95

According to Terence Tao, GPT-4 was incompetent at graduate-level math (obviously), but o1-preview was mediocre-but-not-entirely-incompetent. That would be a strange thing to report if there were no difference.

(Anecdotally, o3-mini is visibly (massively) brighter than GPT-4.)
