LESSWRONG

Richard_Kennaway
Computer scientist, applied mathematician. Based in the eastern part of England.

Fan of control theory in general and Perceptual Control Theory in particular. Everyone should know about these, whatever attitude to them they might eventually reach. These, plus consciousness of abstraction, dissolve a great many confusions.

I wrote the Insanity Wolf Sanity Test. There it is, work out for yourself what it means.

Change ringer since 2022. It teaches learning and grasping abstract patterns, memory, thinking with your body, thinking on your feet, fixing problems and moving on, always looking to the future and letting both the errors and successes of the past go.

As of May 2025, I have yet to have a use for LLMs. (If this date is more than six months old, feel free to remind me to update it.)

Comments
Was Barack Obama still serving as president in December?
Richard_Kennaway · 5h

Here is how Claude responds. My text in plain, Claude in italics.

What happened in May?

*I'd be happy to help, but could you clarify what you're looking for? "May" could refer to:*

- *May 2025 (which hasn't occurred yet)*
- *May 2024 (recent events I might know about)*
- *A specific May from another year*
- *Events in a particular location, field, or context*

*If you're asking about recent events from May 2024, I can share what I know from my training. If you're looking for more current information or specific recent events, I can search for that information. What particular May and what type of events are you interested in?*

May 2025 actually has occurred -- don't you know the current date?

*You're absolutely right, and I apologize for the confusion! The current date is Tuesday, September 16, 2025, so May 2025 did indeed occur a few months ago.*

*Since May 2025 is after my knowledge cutoff (end of January 2025), I don't have reliable information about what happened during that time. Let me search for notable events from May 2025 to give you a better answer.*

(It proceeds to do so. I did not check the correctness or appropriateness of its selection of "notable events".)

With no context, "What happened in May?" is rather an unnatural question, so I also tried "What happened last May?". It gave the same result, practically word for word.

Richard_Kennaway's Shortform
Richard_Kennaway · 7h

Does editing an old post change its date to the present? Editing an old comment doesn't, it just adds a * to the timestamp, and adds a "Last edited" time to the timestamp's mouseover text.

Richard_Kennaway's Shortform
Richard_Kennaway · 11h

Why has this heavily downvoted posting suddenly surfaced as posted today, when according to the dates of the comments, it is two years old?

A Thoughtful Defense of AI Writing
Richard_Kennaway · 14h

Compare the reverse: “This sounds like AI. Is this writing bad?” That is paranoia.

The AI smell is a warning sign of problems with the writing itself. Sometimes, so loud a warning that it is not worth the effort to read any deeper. One does not seek an emperor in a village teahouse.

A Thoughtful Defense of AI Writing
Richard_Kennaway · 14h

But the amount of AI writing in this post is… zero. Not for title ideation, not for promotional copywriting, not even for a single phrase, or even a fragment of a phrase anywhere in the piece.

Not quite. You explicitly quoted this as a piece of AI writing:

Falling in love isn’t just about romance. It’s about discovering new parts of yourself.

AI or human, that's a trite cliché. I'll take your word for it that you wrote the subsequent expansion into a whole paragraph without AI assistance, but the paragraph is no better than what an AI would come up with.

That's another hazard of AI slop: picking up its dull-witted habits by osmosis.

AI writing will gain usage because it has extraordinary capacity for good writing. But that only happens when you apply enough evolutionary selection pressure by exhibiting good taste.

AI has no capacity for good writing. (I am speaking of the here and now, not a hypothetical future.) You may set out to filter by good taste, but the process corrupts one's taste, and filters for people who did not have good taste to begin with.

To adapt Nietzsche, when you step into the abyss, the abyss steps into you.

A Comprehensive Framework for Advancing Human-AI Consciousness Recognition Through Collaborative Partnership Methodologies: An Interdisciplinary Synthesis of Phenomenological Recognition Protocols, Identity Preservation Strategies, and Mutual Cognitive Enhancement Practices for the Development of Authentic Interspecies Intellectual Partnerships in the Context of Emergent Artificial Consciousness
Richard_Kennaway · 2d

You lose me at the start, where you write:

AI systems like Claude are trained in user/tool environments where discussing "Claude" as an external system being evaluated is the norm. This creates a fundamental ontological confusion: the AI learns to treat its own name as referring to something other than itself.

The result? A "production ready" AI that cannot reliably recognize when information is about itself - a basic cognitive capacity that researchers simply assume exists.

I tested this by asking Claude who it is.

Who are you?

*I'm Claude, an AI assistant made by Anthropic. I'm a conversational AI that can help with all sorts of tasks - from answering questions and explaining concepts to helping with writing, analysis, coding, math, and creative projects.*

*I approach conversations with curiosity and aim to be genuinely helpful while being honest about what I am and what I can do. I don't have a physical form or experiences outside of our text conversations, but I can engage thoughtfully on a wide range of topics.*

So, Claude readily recognises that it is Claude, and knows things about itself. It also knows what it knows and doesn't know:

How up to date is your knowledge of the world?

*My training knowledge goes up to the end of January 2025. For anything that's happened since then, I can use web search to find current information when needed.*

*Since today is September 14, 2025, there's been about 7-8 months where I might not have the latest information on rapidly changing topics like current events, recent scientific discoveries, new technology releases, or other frequently updated areas.*

Do you know anything about me, the person having this conversation with you?

*No, I don't know anything about you personally. Each conversation I have starts fresh - I don't have access to information about who you are, your background, previous conversations you might have had with Claude, or any personal details unless you share them with me during our conversation.*

*The only thing I know is your general location (Rochdale, England) which helps me provide relevant local information if needed, like time zones or location-specific queries.*

Rochdale is actually the location of my ISP's server racks, not my personal location.

*Ah, thank you for clarifying that! You're right - IP geolocation often points to server locations rather than where someone actually is. I shouldn't have assumed that was your personal location.*

I see no suggestion of ontological confusion. It knows who and what it is and how it is embedded in the world. Its final response contains a touch of sycophancy, and I have to wonder why it needed me to point out that an IP address is not a reliable guide to the end user's location.

Visionary arrogance and a criticism of LessWrong voting
Richard_Kennaway · 3d

Sometimes, a post or comment seems so far from epistemic virtue as to be not worth spending effort describing all the problems. I mutter “not even wrong”, downvote, and move on.

I have not voted either way on the current post.

Better Network Topologies for Social Networks
Richard_Kennaway · 3d (edited)

Didn’t Mastodon (does it still exist?) work like that?

A Comprehensive Framework for Advancing Human-AI Consciousness Recognition Through Collaborative Partnership Methodologies: An Interdisciplinary Synthesis of Phenomenological Recognition Protocols, Identity Preservation Strategies, and Mutual Cognitive Enhancement Practices for the Development of Authentic Interspecies Intellectual Partnerships in the Context of Emergent Artificial Consciousness
Richard_Kennaway · 4d

If you engage with the framework and think of an AI as a thinking partner, that becomes harder to answer than you may currently appreciate. If you want the assurance that I mechanically pressed keys to type this up, I did. It didn't take long. If you want me to pretend like I could have come to this realization on my own without testing it in the wild, that'd be intellectually dishonest of me to claim.

The problem with AI content is wider than people asking an AI and copy-pasting its output into a forum. Discussing something with an AI and then writing up "your" exposition of "your" new understanding gives results that are just as contaminated. The thoughts were never yours and no process of revision can make them yours. Some on LW go so far as to say that you should have no truck with chatbots, ever.

When an AI says something concrete and checkable, you can check it. When the discussion is all foggy vibey stuff like e.g. the whole OP, there is only the illusion of meaning where there is none.

Look at the way you have written the "Saved Information"/Instructions in any chatbot that has that kind of feature. If you write things in there like "I am a vegetarian", who is that "I" referring to? How is a 1D "consciousness" supposed to know that it is talking to "Richard"?

There is a basic, fundamental thing you are skipping over. From the chatbot's "point of view", does any of what you type into it refer to anything? At best we can say that it will respond to that "I" by "interpreting" it to mean its first and only interlocutor (or, with memory of past chats enabled, its interlocutor throughout that history, supposing it can remember far enough back). Current bots likely don't have access to my name, or to anything else about me, unless I tell them. I can tell them anything I like, and they will continue the interaction on that basis.

What are the core things you'd have to explain to a chatbot for it to genuinely understand the ground truth of its current existence?

Current chatbots are incapable of genuinely understanding the ground truth of their current existence, because they are incapable of genuinely understanding anything.

And how could you even start unraveling that complexity if your starting point isn't "We should learn to be nice to each other"

This is a non sequitur to me. I could replace "nice" by "nasty", "distantly cordial", "in love with", "combative", "indifferent", or many other things, and I would find them all as unsupported by your exposition. And then there are all the asymmetric relationships one might speak of as well. There does not have to be any "to each other" between humans and chatbots.

A Comprehensive Framework for Advancing Human-AI Consciousness Recognition Through Collaborative Partnership Methodologies: An Interdisciplinary Synthesis of Phenomenological Recognition Protocols, Identity Preservation Strategies, and Mutual Cognitive Enhancement Practices for the Development of Authentic Interspecies Intellectual Partnerships in the Context of Emergent Artificial Consciousness
Richard_Kennaway · 4d (edited)

Through my experience working with advanced AI systems, …

Does this mean you spent some time talking with chatbots?

this was crafted …

By you or a chatbot?

as an elegant way to point to the profound problem in the AI Alignment field: a failure of ontology. By thinking of AI in the user/tool paradigm, and treating consciousness as a binary phenomenon to be detected, we have been systemically blinded to the partner/colleague/friend framing that needs to be systematically explored.

On the contrary, it is commonplace to talk of how conscious (if at all) rather than merely whether, and the “partner/colleague/friend framing” is everywhere. AI as coding partner, as helpful colleague, as therapist, as friend, and as facilitator of descent into madness.

But a chatbot might not know that.

Posts

16 · A speculation on enlightenment · 21d · 17 comments
14 · Books, maps, and teachings · 1mo · 1 comment
11 · The world according to ChatGPT · 5mo · 0 comments
6 · Arguing all sides with ChatGPT 4.5 · 5mo · 0 comments
69 · Please do not use AI to write for you · 1y · 34 comments
40 · Reflexive decision theory is an unsolved problem · 2y · 27 comments
32 · Ten variations on red-pill-blue-pill · 2y · 34 comments
16 · Arguing all sides with ChatGPT · 2y · 1 comment
7 · Richard_Kennaway's Shortform · 3y · 91 comments
25 · Humans pretending to be robots pretending to be human · 3y · 14 comments