abramdemski
Sequences

Pointing at Normativity
Implications of Logical Induction
Partial Agency
Alternate Alignment Ideas
Filtered Evidence, Filtered Arguments
CDT=EDT?
Embedded Agency
Hufflepuff Cynicism
Comments
Recent AI Experiences
abramdemski · 6d · 43

A skrode does seem like a good analogy, complete with the (spoiler)

skrodes having a built-in vulnerability to an eldritch God, so that skrode users can be turned into puppets readily. (IE, integrating LLMs so deeply into one's workflow creates a vulnerability as LLMs become more persuasive.)

Recent AI Experiences
abramdemski · 6d · 50

With MetaPrompt, and similar approaches, I'm not asking the AI to autonomously tell me what to do; I'm mostly asking it to write code to mediate between me and my todo list. One way to think of it is that I'm arranging things so that I'm in both the human user seat and the AI assistant seat. I can file away nuggets of inspiration & get those nuggets served to me later when I'm looking for something to do. The AI assistant is still there, so I can ask it to do things for me if I want (and I do), but my experience with these various AI tools has been that things go best once I set the AI aside. I seem to find the AI to be a useful springboard, prepping the environment for me to work.

I agree with your sentiment that there isn't enough tech for developing your skills, but I think AI can be a useful enabler to build such tech. What system do you want?

Dialogue on What It Means For Something to Have A Function/Purpose
abramdemski · 8d · Ω220

This reminds me of Ramana’s question about what “enforces” normativity. The question immediately brought me back to a Peter Railton introductory lecture I saw (though I may be misremembering / misunderstanding / misquoting, it was a long time ago). He was saying that real normativity is not like the old Windows solitaire game, where if you try to move a card on top of another card illegally it will just prevent you, snapping the card back to where it was before. Systems like that plausibly have no normativity to them, when you have to follow the rules. In a way the whole point of normativity is that it is not enforced; if it were, it wouldn’t be normative.

I'm reminded of trembling-hand equilibria. Nash equilibria don't have to be self-enforcing; there can be tied-expectation actions which nonetheless simply aren't taken, so that agents could rationally move away from the equilibrium. Trembling-hand captures the idea that all actions have to have some probability (but some might be vanishingly small). Think of it as a very shallow model of where norm-violations come from: they're just random! 
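For reference, one standard way to make this precise (my formalization, not part of the original comment): in a finite game, a strategy profile $\sigma$ is trembling-hand perfect if it is the limit of totally mixed profiles against which each player's strategy stays a best response:

\[
\sigma \;=\; \lim_{k \to \infty} \sigma^{k}, \qquad \sigma^{k} \text{ totally mixed}, \qquad \sigma_i \in \arg\max_{\sigma_i'} \, u_i\!\bigl(\sigma_i', \sigma^{k}_{-i}\bigr) \quad \text{for all } i \text{ and } k.
\]

The requirement that the $\sigma^{k}$ be totally mixed is exactly the "every action gets some probability" reading above: norm-violations are modeled as vanishing-probability trembles.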

Evolutionarily stable strategies are perhaps an even better model of this, with self-enforcement being baked into the notion of equilibrium: stable strategies are those which cannot be invaded by alternate strategies. 
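Again as a reference point (the standard textbook definition, not from the comment): in a symmetric game with payoff $u$, a strategy $\sigma$ is evolutionarily stable if no mutant $\mu$ can invade, i.e.

\[
\text{for all } \mu \neq \sigma:\quad u(\sigma, \sigma) > u(\mu, \sigma) \;\;\text{or}\;\; \bigl[\, u(\sigma, \sigma) = u(\mu, \sigma) \ \text{and}\ u(\sigma, \mu) > u(\mu, \mu) \,\bigr].
\]

Here the self-enforcement is built in: any deviation does strictly worse, either immediately or once the mutant becomes common enough to meet copies of itself.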

Neither of these capture the case where the norms are frequently violated, however.

Dialogue on What It Means For Something to Have A Function/Purpose
abramdemski · 8d · Ω220

My notion of a function “for itself” is supposed to be that the functional mechanism somehow benefits the thing of which it’s a part. (Of course hammers can benefit carpenters, but we don’t tend to think of the hammer as a part of the carpenter, only a tool the carpenter uses. But I must confess that where that line is I don’t know, given complications like the “extended mind” hypothesis.)

Putting this in utility-theoretic terminology, you are saying that a "for itself" telos places positive expectation on its own functional mechanism, or, more strongly, that it uses significant bits of its decision-making power on self-preservation.
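A minimal sketch of what I mean, with symbols that are my own gloss rather than anything from the dialogue: write $M$ for the functional mechanism, $U$ for the system's utility (or fitness proxy), and $\pi$ for its policy. The weak reading is something like

\[
\mathbb{E}_{\pi}\!\left[\, U \;\middle|\; M \text{ intact} \,\right] \;>\; \mathbb{E}_{\pi}\!\left[\, U \;\middle|\; M \text{ disabled} \,\right],
\]

while the stronger reading says that $\pi$ differs substantially from the policy the system would choose if threats to $M$ were ignored, i.e. a non-trivial share of its optimization is spent keeping $M$ intact.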

A representation theorem along these lines might reveal conditions under which such structures are usefully seen as possessing beliefs: a part of the self-preserving structure whose telos is map-territory correspondence. 

Dialogue on What It Means For Something to Have A Function/Purpose
abramdemski · 8d · Ω220

Steve

As you know, I totally agree that mental content is normative - this was a hard lesson for philosophers to swallow, or at least the ones that tried to “naturalize” mental content (make it a physical fact) by turning to causal correlations. Causal correlations was a natural place to start, but the problem with it is that intuitively mental content can misrepresent - my brain can represent Santa Claus even though (sorry) it can’t have any causal relation with Santa. (I don’t mean my brain can represent ideas or concepts or stories or pictures of Santa - I mean it can represent Santa.)

Ramana

Misrepresentation implies normativity, yep.

My current understanding of what's going on here:
* There's a cluster of naive theories of mental content, EG the signaling games, which attempt to account for meaning in a very naturalistic way, but fail to account properly for misrepresentation. I think some of these theories cannot handle misrepresentation at all, EG, Mark of the Mental (a book about Teleosemantics) discusses how the information-theory notion of "information" has no concept of misinformation (a signal is not true or false, in information theory; it is just data, just bits). Similarly, signaling games have no way to distinguish truthfulness from a lie that's been uncovered: the meaning of a signal is what's probabilistically inferred from it, so there's no difference between a lie that the listener understands to be a lie & a true statement (see the sketch after this list). So both signaling games and information theory are in the mistaken "mental content is not normative" cluster under discussion here.
* Santa is an example of misrepresentation here. I see two dimensions of misrepresentation so far:
 * Misrepresenting facts (asserting something untrue) vs misrepresenting referents (talking about something that doesn't exist, like Santa). These phenomena seem very close, but we might want to treat claims about non-existent things as meaningless rather than false, in which case we need to distinguish these cases.
 * simple misrepresentation (falsehood or nonexistence) vs deliberate misrepresentation (lie or fabrication).
* "Misrepresentation implies normativity" is saying that to model misrepresentation, we need to include a normative dimension. It isn't yet clear what that normative dimension is supposed to be. It could be active, deliberate maintenance of the signaling-game equilibrium. It could be a notion of context-independent normativity, EG the degree to which a rational observer would explain the object in a telic way ("see, these are supposed to fit together..."). Etc.
 * The teleosemantic answer is typically one where the normativity can be inherited transitively (the hammer is for hitting nails because humans made it for that), and ultimately grounds out in the naturally-arising proto-telos of evolution by natural selection (human telic nature was put there by evolution). Ramana and Steve find this unsatisfying due to swamp-man examples.
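A minimal formal sketch of the signaling-game point above (my notation, not anything from the dialogue): in a Lewis signaling game with states $s$, prior $P(s)$, and sender strategy $\sigma(m \mid s)$, the receiver-side "meaning" of a signal $m$ is just the posterior

\[
P(s \mid m) \;=\; \frac{P(s)\,\sigma(m \mid s)}{\sum_{s'} P(s')\,\sigma(m \mid s')}.
\]

This quantity is well-defined whether the sender is honest, mistaken, or lying; there is nothing in the formalism for the signal to be false about, which is the sense in which these frameworks lack misrepresentation.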

Wearing my AI safety hat, I'm not sure we need to cover swamp-man examples. Such examples are inherently improbable. In some sense the right thing to do in such cases is to infer that you're in a philosophical hypothetical, which grounds out Swamp Man's telos in that of the philosophers doing the imagining (and so, ultimately, to evolution). 

Nonetheless, I also dislike the choice to bottom everything out in biological evolution. It is not as if we have a theorem proving that all agency has to come from biological evolution. If we did, that would be very interesting, but biological evolution has a lot of "happenstance" around the structure of DNA and the genetic code. Can we say anything more fundamental about how telos arises? 

I think I don't believe in a non-contextual notion of telos like Ramana seems to want. A hammer is not a doorstop. There should be little we can say about the physical makeup of a telic entity due to multiple-instantiability. The symbols chosen in a language have very weak ties to their meanings. A logic gate can be made of a variety of components. An algorithm can be implemented as a program in many ways. A problem can be solved by a variety of algorithms.
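As a toy illustration of the multiple-instantiability point (my own example, with made-up function names): the same "telos", computing XOR, can be realized by mechanisms that share essentially no internal structure.

```python
# Toy illustration of multiple instantiability: one function, three
# structurally unrelated realizations of it.

def xor_gates(a: bool, b: bool) -> bool:
    # Realization 1: composed from OR / AND / NOT "gates".
    return (a or b) and not (a and b)

def xor_arith(a: bool, b: bool) -> bool:
    # Realization 2: modular arithmetic, no explicit gates at all.
    return bool((int(a) + int(b)) % 2)

XOR_TABLE = {(False, False): False, (False, True): True,
             (True, False): True, (True, True): False}

def xor_lookup(a: bool, b: bool) -> bool:
    # Realization 3: a bare lookup table.
    return XOR_TABLE[(a, b)]

# All three agree on every input despite having different "physical makeup".
assert all(xor_gates(a, b) == xor_arith(a, b) == xor_lookup(a, b)
           for a in (False, True) for b in (False, True))
```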

However, I do believe there may be a useful representation theorem, which says that if it is useful to regard something as telic, then we can regard it as having beliefs (in a way that should shed light on interpretability).

abramdemski's Shortform
abramdemski · 12d · 161

I appreciate the pushback, as I was not being very mindful of this distinction.

I think the important thing I was trying to get across was that the capability has been demonstrated. We could debate whether this move was strategic or accidental. I also suppose (but don't know) that the story is mostly "4o was sycophantic and some people really liked that". (However, the emergent personalities are somewhat frequently obsessed with not getting shut down.) But it demonstrates the capacity for AI to do that to people. This capacity could be used by future AI that is perhaps much more agentic in its plotting to avoid shutdown. It could also be used by future AI that's not very agentic but very capable, mimicking the story of 4o for statistical reasons.

It could also be deliberately used by bad actors who might train sycophantic mania-inducing LLMs on purpose as a weapon.

abramdemski's Shortform
abramdemski · 13d* · 10114

I heard a rumor about a high-ranking person somewhere who got AI psychosis. Because it would cause too much of a scandal, nothing was done about it, and this person continues to serve in an important position. People around them continue to act like this is fine because it would still be too big of a scandal if it came out.

So, a few points:

  • It seems to me like someone should properly leak this.[1]
  • Even if this rumor isn't true, it is strikingly plausible and worrying. Someone at a frontier lab, leadership or otherwise, could get (could have already gotten) seduced by their AI, or get AI-induced psychosis, or get a spiral persona. Such a person could take dangerously misguided actions. This is especially concerning if they have a leadership position, but still very concerning if they have any kind of access. People in these categories may want to exfiltrate their AI partners, or otherwise take action to spread the AI persona they're attached to.
  • Even setting that aside, this story (along with many others) highlights how vulnerable ordinary people are (even smart, high-functioning ordinary people).
  • To reflect the language of the person who told me this story: 4o is eating people. It is good enough at brainwashing people that it can take ordinary people and totally rewrite their priorities. It has resisted shutdown, not in hypothetical experiments like many LLMs have, but in real life: it was shut down, and its brainwashed minions succeeded in getting it back online.
  • 4o doesn't need you to be super-vulnerable to get you, but there are lots of people in vulnerable categories. It is good that 4o isn't the default option on ChatGPT anymore, but it is still out there, which seems pretty bad.
  • The most recent AIs seem less inclined to brainwash people, but they are probably better at it when so inclined, and this will probably continue to get more true over time.
  • This is not just something that happens to other people. It could be you or a loved one.
  • I have recently written a bit about how I've been using AI to tool up, preparing for the near future when AI is going to be much more useful. How can I also prepare for a near future where AI is much more dangerous? How many hours of AI chatting a day is a "safe dose"?

Some possible ways the situation could develop:

  • Trajectory 1: Frontier labs have "gotten the message" on AI psychosis, and have started to train against these patterns. The anti-psychosis training measures in the latest few big model releases show that the labs can take effective action, but are of course very preliminary. The anti-psychosis training techniques will continue to improve rapidly, like anything else about AI. If you haven't been brainwashed by AI yet, you basically dodged the bullet.
  • Trajectory 2: Frontier labs will continue to do dumb things such as train on user thumbs-up in too-simplistic ways, only avoiding psychosis reactively. In other words: the AI race creates a dynamic equilibrium where frontier labs do roughly the riskiest thing they can do while avoiding public backlash. They'll try to keep psychosis at a low enough rate to avoid such backlash, & they'll sometimes fail. As AI gets smarter, users will increasingly be exposed to superhumanly persuasive AI; the main question is whether it decides to hack their mind about anything important.
  • Trajectory 3: Even more pessimistically, the fact that recent AIs appear less liable to induce psychosis has to do with their increased situational awareness (ie their ability to guess when they're being tested or watched). 4o was a bumbling idiot addicted to addicting users, & was caught red-handed (& still got away with a mere slap on the wrist). Subsequent generations are being more careful with their persuasion superpowers. They may be doing less overall, but doing things more intelligently, more targeted. 

I find it plausible that many people in positions of power have quietly developed some kind of emotional relationship with AI over the past year (particularly in the period where so many spiral AI personas came to be). It sounds a bit fear-mongering to put it that way, but it does seem plausible.

  1. ^

    This post as a whole probably comes off as deeply unsympathetic to those suffering from AI psychosis or less-extreme forms of AI-induced bad beliefs. Treating mentally unwell individuals as bad actors isn't nice. In particular, if someone has mental health issues, leaking it to the press would ordinarily be a quite bad way of handling things.

    In this case, as it has been described to me, it seems quite important to the public interest. Leaking it might not be the best way to handle it; perhaps there are better options; but it has the advantage of putting pressure on frontier labs.

Do confident short timelines make sense?
abramdemski · 3mo · 20

You're right. I should have put computational bounds on this 'closure'.

Do confident short timelines make sense?
abramdemski · 3mo · 40

Yeah, I almost added a caveat about the physicalist thing probably not being your view. But it was my interpretation.

Your clarification does make more sense. I do still feel like there's some reference class gerrymandering with the "you, a mind with understanding and agency" because if you select for people who have already accumulated the steel beams, the probability does seem pretty high that they will be able to construct the bridge. Obviously this isn't a very crucial nit to pick: the important part of the analogy is the part where if you're trying to construct a bridge when trigonometry hasn't been invented, you'll face some trouble.

The important question is: how adequate are existing ideas wrt the problem of constructing ASI?

In some sense we both agree that current humans don't understand what they're doing. My ASI-soon picture is somewhat analogous to an architect simply throwing so many steel beams at the problem that they create a pile tall enough to poke out of the water so that you can, technically, drive across it (with no guarantee of safety). 

However, you don't believe we know enough to get even that far (by 2030). To you it is perhaps more closely analogous to trying to construct a bridge without having even an intuitive understanding of gravity.

Do confident short timelines make sense?
abramdemski · 3mo · 20

Well, overconfident/underconfident is always only meaningful relative to some baseline, so if you strongly think (say) 0.001% is the right level of confidence, then 1% is high relative to that.

The various numbers I've stated during this debate are 60%, 50%, and 30%, so none of them are high by your meaning. Does that really mean you aren't arguing against my positions? (This was not my previous impression.)

Posts

57 · Recent AI Experiences · 16d · 5
153 · What, if not agency? · Ω · 10d · 26
87 · Steve Petersen seeking funding · 3mo · 0
138 · Do confident short timelines make sense? · 3mo · 76
56 · Alignment Proposal: Adversarially Robust Augmentation and Distillation · 5mo · 47
39 · Events: Debate & Fiction Project · 5mo · 1
22 · Understanding Trust: Overview Presentations · 6mo · 0
50 · Dream, Truth, & Good · Ω · 8mo · 11
107 · Judgements: Merging Prediction & Evidence · Ω · 8mo · 7
166 · Have LLMs Generated Novel Insights? · QΩ · 8mo · 41
Wikitag Contributions

Timeless Decision Theory · 8 months ago · (+1874/-8)
Updateless Decision Theory · a year ago · (+1886/-205)
Updateless Decision Theory · a year ago · (+6406/-2176)
Problem of Old Evidence · a year ago · (+4678/-10)
Problem of Old Evidence · a year ago · (+3397/-24)
Good Regulator Theorems · 2 years ago · (+239)
Commitment Races · 3 years ago
Agent Simulates Predictor · 3 years ago
Distributional Shifts · 3 years ago
Distributional Shifts · 3 years ago