This is a special post for quick takes by J Bostock. Only they can create top-level comments.

From Rethink Priorities:

  1. We used Monte Carlo simulations to estimate, for various sentience models and across eighteen organisms, the distribution of plausible probabilities of sentience.
  2. We used a similar simulation procedure to estimate the distribution of welfare ranges for eleven of these eighteen organisms, taking into account uncertainty in model choice, the presence of proxies relevant to welfare capacity, and the organisms’ probabilities of sentience (equating this probability with the probability of moral patienthood)
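
For intuition, here is a minimal sketch of the kind of Monte Carlo procedure being described. The sentience models, credences, and distributions below are invented purely for illustration; this is not RP's actual methodology or code.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # Monte Carlo draws

# Hypothetical sentience models and illustrative credences in each.
models = ["nociception-only", "global-workspace", "higher-order"]
model_weights = [0.3, 0.5, 0.2]

# For each model, a made-up Beta distribution over P(sentience) for one organism.
p_sentience_given_model = {
    "nociception-only": (8, 2),
    "global-workspace": (2, 5),
    "higher-order": (1, 10),
}

# Sample a model, then a probability of sentience under that model.
chosen = rng.choice(models, size=N, p=model_weights)
p_sent = np.array([rng.beta(*p_sentience_given_model[m]) for m in chosen])

# A made-up welfare-range distribution conditional on sentience, scaled by the
# sampled probability of sentience (treated as probability of moral patienthood).
welfare_range = rng.lognormal(mean=-2.0, sigma=1.0, size=N) * p_sent

print("P(sentience): median %.2f, 90%% interval [%.2f, %.2f]"
      % (np.median(p_sent), *np.percentile(p_sent, [5, 95])))
print("Welfare range: median %.3f, 90%% interval [%.3f, %.3f]"
      % (np.median(welfare_range), *np.percentile(welfare_range, [5, 95])))
```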

Now with the disclaimer that I do think that RP are doing good and important work and are one of the few organizations seriously thinking about animal welfare priorities...

Their epistemics led them to run a Monte Carlo simulation to determine whether organisms are capable of suffering (and if so, how much), arrive at a value of 5 shrimp = 1 human, and then not bat an eye at this number.

Neither a physicalist nor a functionalist theory of consciousness can reasonably justify a number like this. Shrimp have 5 orders of magnitude fewer neurons than humans, so whether suffering is the result of a physical process or an information processing one, this implies that shrimp neur... (read more)
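
Spelling out the implied arithmetic as a quick sketch (using the rounded figures above):

```python
# Rounded figures from the argument above.
neuron_gap = 1e5        # shrimp have ~5 orders of magnitude fewer neurons than humans
welfare_ratio = 1 / 5   # the "5 shrimp = 1 human" equivalence, i.e. shrimp:human welfare

# If suffering ~ (number of neurons) x (amount of the relevant process per neuron),
# then matching the welfare ratio requires each shrimp neuron to do roughly this
# much more of that process than a human neuron:
per_neuron_factor = welfare_ratio * neuron_gap
print(f"{per_neuron_factor:.0e}")  # 2e+04, i.e. ~4 orders of magnitude
```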

Their epistemics led them to run a Monte Carlo simulation to determine whether organisms are capable of suffering (and if so, how much), arrive at a value of 5 shrimp = 1 human, and then not bat an eye at this number.

Neither a physicalist nor a functionalist theory of consciousness can reasonably justify a number like this. Shrimp have 5 orders of magnitude fewer neurons than humans, so whether suffering is the result of a physical process or an information processing one, this implies that shrimp neurons do 4 orders of magnitude more of this process per second than human neurons.

epistemic status: Disagreeing on object-level topic, not the topic of EA epistemics.

I disagree; functionalism especially can justify a number like this. Here's an example of how such reasoning might go:

  1. Suffering is the structure of some computation, and different levels of suffering correspond to different variants of that computation.
  2. What matters is whether that computation is happening.
  3. The structure of suffering is simple enough to be represented in the neurons of a shrimp.

Under that view, shrimp can absolutely suffer in the same range as humans, and the amount of suffering is dependent on crossing some thresh... (read more)

6J Bostock
I have added a link to the report now. As to your point: this is one of the better arguments I've heard that welfare ranges might be similar between animals. Still, I don't think it squares well with the actual nature of the brain. Saying there's a single suffering computation would make sense if the brain were like a CPU, where one core did the thinking, but actually all of the neurons in the brain are firing at once and doing computations at the same time. So it makes much more sense to me to think that the more neurons are computing some sort of suffering, the greater the intensity of suffering.
3Kaj_Sotala
Can you elaborate how "all of the neurons in the brain are firing at once and doing computations at the same time" leads to "the more neurons are computing some sort of suffering, the greater the intensity of suffering"?
1nielsrolf
One intuition against this comes from an analogy to LLMs: the residual stream represents many features, and all neurons participate in the representation of a feature. But the difference between a larger and a smaller model is mostly that the larger model can represent more features, not that it represents features with greater magnitude.

In humans, consciousness seems to be most strongly connected to processes in the brain stem rather than the neocortex. Here is a great talk about the topic; the main points are (writing from memory, might not be entirely accurate):

  • Humans can lose consciousness or produce intense emotions (good and bad) through interventions on a very small area of the brain stem. When other, much larger parts of the brain are damaged or missing, humans continue to behave in a way such that one would ascribe emotions to them from interactions; for example, they show affection.
  • Dopamine, serotonin, and other chemicals that alter consciousness act in the brain stem.

If we consider the question from an evolutionary angle, I'd also argue that emotions are more important when an organism has fewer alternatives (like a large brain that does fancy computations). Once better reasoning skills become available, it makes sense to reduce the impact that emotions have on behavior and instead trust the abstract reasoning. In my own experience, the intensity with which I feel emotions is strongly correlated with how action-guiding they are, and I think as a child I felt emotions more intensely than I do now, which also fits the hypothesis that more ability to think abstractly reduces the intensity of emotions.
3kairos_
I agree with you that the "structure of suffering" is likely to be represented in the neurons of shrimp. I think it's clear that shrimp may "suffer" in the sense that they react to pain, move away from sources of pain, would prefer to be in a painless state rather than a painful state, etc.

But where I diverge from the conclusions drawn by Rethink Priorities is that I believe shrimp are less "conscious" (for lack of a better word) than humans, and thus their suffering matters less. Though shrimp show outward signs of pain, I sincerely doubt that with just 100,000 neurons there's much of a subjective experience going on there. This is purely intuitive, and I'm not sure of the specific neuroscience of shrimp brains or of Rethink Priorities' arguments against this. But it seems to me that the "level of consciousness" animals have sits on an axis that's roughly correlated with neuron count, with humans and elephants at the top and C. elegans at the bottom.

Another analogy I'll throw out is that humans can react to pain unconsciously. If you put your hand on a hot stove, you will reflexively pull your hand away before the feeling of pain enters your conscious perception. I'd guess shrimp pain response works a similar way: largely unconscious processing, due to their very low neuron count.
5Jeremy Gillen
Can you link to where RP says that?
4J Bostock
Good point, edited a link to the Google Doc into the post.
1CB
Your disagreement, from what I understand, seems mostly to stem from the fact that shrimp have far fewer neurons than humans. Did you check RP's piece on that topic, "Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight"? https://forum.effectivealtruism.org/posts/Mfq7KxQRvkeLnJvoB/why-neuron-counts-shouldn-t-be-used-as-proxies-for-moral

They say this:

"In regards to intelligence, we can question both the extent to which more neurons are correlated with intelligence and whether more intelligence in fact predicts greater moral weight;

Many ways of arguing that more neurons results in more valenced consciousness seem incompatible with our current understanding of how the brain is likely to work; and

There is no straightforward empirical evidence or compelling conceptual arguments indicating that relative differences in neuron counts within or between species reliably predicts welfare relevant functional capacities.

Overall, we suggest that neuron counts should not be used as a sole proxy for moral weight, but cannot be dismissed entirely"
1Garrett Baker
This hardly seems an argument against the one in the shortform. If the original authors never thought of this, that seems on them.

Are there any high p(doom) orgs who are focused on the following:

  1. Pick an alignment "plan" from a frontier lab (or org like AISI)
  2. Demonstrate how the plan breaks or doesn't work
  3. Present this clearly and legibly for policymakers

Seems like this is a good way for people to deploy technical talent in a way which is tractable. There are a lot of people who are smart but not alignment-solving levels of smart who are currently not really able to help.

I'd say that work like our Alignment Faking in Large Language Models paper (and the model organisms/alignment stress-testing field more generally) is pretty similar to this (including the "present this clearly to policymakers" part).

A few issues:

  • AI companies don't actually have specific plans; they mostly just hope that they'll be able to iterate. (See Sam Bowman's bumper post for an articulation of a plan like this.) I think this is a reasonable approach in principle: this is how progress happens in a lot of fields. For example, the AI companies don't have plans for all kinds of problems that will arise with their capabilities research in the next few years; they just hope to figure it out as they get there. But the lack of specific proposals makes it harder to demonstrate particular flaws.
  • A lot of my concerns about alignment proposals are that when AIs are sufficiently smart, the plan won't work anymore. But in many cases, the plan does actually work fine right now at ensuring particular alignment properties. (Most obviously, right now, AIs are so bad at reasoning about training processes that scheming isn't that much of an active concern.) So you can't directly demonstrate that
... (read more)
5Oliver Sourbut
(Psst: a lot of AISI's work is this, and they have sufficient independence and expertise credentials to be quite credible; this doesn't go for all of their work, some of which is indeed 'try for a better plan')
2Thane Ruthenis
That seems like a pretty good idea! (There are projects that stress-test the assumptions behind AGI labs' plans, of course, but I don't think anyone is (1) deliberately picking at the plans AGI labs claim, in a basically adversarial manner, (2) optimizing experimental setups and results for legibility to policymakers, rather than for convincingness to other AI researchers. Explicitly setting those priorities might be useful.)
6Buck
People who do research like this are definitely optimizing for legibility to policymakers (always at least a bit, and usually a lot). One problem is that if AI researchers think your work is misleading/scientifically suspect, they get annoyed at you and tell people that your research sucks and you're a dishonest ideologue. This is IMO often a healthy immune response, though it's a bummer when you think that the researchers are wrong and your work is fine. So I think it's pretty costly to give up on convincingness to AI researchers.
1Thane Ruthenis
"Not optimized to be convincing to AI researchers" ≠ "looks like fraud". "Optimized to be convincing to policymakers" might involve research that clearly demonstrates some property of AIs/ML models which is basic knowledge for capability researchers (and for which they already came up with rationalizations why it's totally fine) but isn't well-known outside specialist circles. E. g., the basic example is the fact that ML models are black boxes trained by an autonomous process which we don't understand, instead of manually coded symbolic programs. This isn't as well-known outside ML communities as one might think, and non-specialists are frequently shocked when they properly understand that fact.
3Guive
What kind of "research" would demonstrate that ML models are not the same as manually coded programs? Why not just link to the Wikipedia article for "machine learning"? 
3Kabir Kumar
AI Plans does this
1Kabir Kumar
yes. AI Plans

My impression is that the current Real Actual Alignment Plan For Real This Time amongst medium p(Doom) people looks something like this:

  1. Advance AI control, evals, and monitoring as much as possible now
  2. Try and catch an AI doing a maximally-incriminating thing at roughly human level
  3. This causes [something something better governance to buy time]
  4. Use the almost-world-ending AI to "automate alignment research"

(Ignoring the possibility of a pivotal act to shut down AI research. Most people I talk to don't think this is reasonable.)

I'll ignore the practicality of 3. What do people expect 4 to look like? What does an AI assisted value alignment solution look like?

My rough guess of what it could be, i.e. the highest p(solution is this|AI gives us a real alignment solution) is something like the following. This tries to straddle the line between the helper AI being obviously powerful enough to kill us and obviously too dumb to solve alignment:

  1. Formalize the concept of "empowerment of an agent" as a property of causal networks with the help of theorem-proving AI.
  2. Modify today's autoregressive reasoning models into something more isomorphic to a symbolic causal network. Use some sort of minimal c
... (read more)
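
On point 1 above: as a toy illustration of what a formalized "empowerment" might look like, here is a sketch using the common simplification of empowerment as the log of the number of states reachable by n-step action sequences in a deterministic environment. This is my own illustrative example, not a proposal from any lab.

```python
import itertools
import math

# Toy deterministic gridworld: empowerment of a state = log2 of the number of
# distinct states reachable with n-step action sequences (a standard
# simplification of the channel-capacity definition for deterministic dynamics).

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]
SIZE = 5

def step(state, action):
    x, y = state
    dx, dy = action
    nx, ny = x + dx, y + dy
    # Walls: moves off the grid leave the state unchanged.
    if 0 <= nx < SIZE and 0 <= ny < SIZE:
        return (nx, ny)
    return (x, y)

def empowerment(state, n=3):
    reachable = set()
    for seq in itertools.product(ACTIONS, repeat=n):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return math.log2(len(reachable))

print(empowerment((2, 2)))  # centre of the grid: most reachable states
print(empowerment((0, 0)))  # corner: fewer reachable states, lower empowerment
```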
2Raemon
Is your question more about "what's the actual structure of the 'solve alignment' part", or "how are you supposed to use powerful AIs to help with this?"
4Raemon
I think there's one structure-of-plan that is sort of like your outline (I think it is similar to John Wentworth's plan but sort of skipping ahead past some steps and being more-specific-about-the-final-solution, which means more wrong).

(I don't think John self-identifies as particularly oriented around your "4 steps from AI control to automate alignment research". I haven't heard the people who say "let's automate alignment research" say anything that sounded very coherent. I think many people are thinking something like "what if we had a LOT of interpretability?" but IMO not really thinking through the next steps needed for that interpretability to be useful in the endgame.)

STEM AI -> Pivotal Act

I haven't heard anyone talk about this for a while, but a few years back I heard a cluster of plans that were something like "build STEM AI with very narrow ability to think, which you could be confident couldn't model humans at all, which would only think about resources inside a 10' by 10' cube, and then use that to invent the pre-requisites for uploading or biological intelligence enhancement, and then ??? -> very smart humans running at fast speeds figure out how to invent a pivotal technology."

I don't think the LLM-centric era lends itself well to this plan. But I could see a route where you get a less-robust-and-thus-necessarily-weaker STEM AI trained on a careful STEM corpus, with careful control and carefully scoped questions, which could maybe be more powerful than you could get away with for more generically competent LLMs.
2J Bostock
Yes, a human-uploading or human-enhancing pivotal act might actually be something people are thinking about. Yudkowsky gives his nanotech-GPU-melting pivotal act example, which---while he has stipulated that it's not his real plan---still anchors "pivotal act" on "build the most advanced weapon system of all time and carry out a first-strike". This is not something that governments (and especially companies) can or should really talk about as a plan, since threatening a first-strike on your geopolitical opponents does not a cooperative atmosphere make. (though I suppose a series of targeted, conventional strikes on data centers and chip factories across the world might be on the pareto-frontier of "good" vs "likely" outcomes)
2J Bostock
My question was an attempt to trigger a specific mental motion in a certain kind of individual. Specifically, I was hoping for someone who endorses that overall plan to envisage how it would work end-to-end, using their inner sim. My example was basically what I get when I query my inner sim, conditional on that plan going well. 

Too Early does not preclude Too Late

Thoughts on efforts to shift public (or elite, or political) opinion on AI doom.

Currently, it seems like we're in a state of being Too Early. AI is not yet scary enough to overcome peoples' biases against AI doom being real. The arguments are too abstract and the conclusions too unpleasant.

Currently, it seems like we're in a state of being Too Late. The incumbent players are already massively powerful and capable of driving opinion through power, politics, and money. Their products are already too useful and ubiquitous to be hated.

Unfortunately, these can both be true at the same time! This means that there will be no "good" time to play our cards. Superintelligence (2014) was Too Early but not Too Late. There may be opportunities which are Too Late but not Too Early, but (tautologically) these have not yet arrived. As it is, current efforts must fight on both fronts.

5Seth Herd
I like this framing; we're both too early and too late. But it might transition quite rapidly from too early to right on time. One idea is to prepare strategies and arguments, and perhaps prepare the soil of public discourse, for the time when it is no longer too early.

Job loss and actually harmful AI shenanigans are very likely before takeover-capable AGI. Preparing for the likely AI scares and negative press might help public opinion shift very rapidly, as it sometimes does (e.g., COVID opinions went from no concern to shutting down half the economy very quickly).

The average American, and probably the average global citizen, already dislikes AI. It's just the people benefitting from it that currently like it, and that's a minority. Whether that's enough is questionable, but it makes sense to try, and to hope that the likely backlash is at least useful in slowing progress or proliferation somewhat.

So Sonnet 3.6 can almost certainly speed up some quite obscure areas of biotech research. Over the past hour I've got it to:

  1. Estimate a rate, correct itself (although I did have to clock that its result was likely off by some OOMs, which turned out to be 7-8), request the right info, and then get a more reasonable answer.
  2. Come up with a better approach to a particular thing than I was able to, which I suspect has a meaningfully higher chance of working than what I was going to come up with.

Perhaps more importantly, it required almost no mental effort on my part to do this. Barely more than scrolling twitter or watching youtube videos. Actually solving the problems would have had to wait until tomorrow.

I will update in 3 months as to whether Sonnet's idea actually worked.

(in case anyone was wondering, it's not anything relating to protein design lol: Sonnet came up with a high-level strategy for approaching the problem)

1Qumeric
I think you might find this paper relevant/interesting: https://aidantr.github.io/files/AI_innovation.pdf

TL;DR: Research on LLM productivity impacts in materials discovery. Main takeaways:

  • Significant productivity improvement overall
  • Mostly at the idea generation phase
  • Top performers benefit much more (because they can evaluate the AI's ideas well)
  • Mild decrease in job satisfaction (the AI automates the most interesting parts; the impact is partly counterbalanced by improved productivity)

The latest recruitment ad from Aiden McLaughlin says a lot about OpenAI's internal views on model training:

[image of the recruitment ad]

My interpretation of OpenAI's worldview, as implied by this, is:

  1. Inner alignment is not really an issue. Training objectives (evals) relate to behaviour in a straightforward and predictable way.
  2. Outer alignment kinda matters, but it's not that hard. Deciding the parameters of desired behaviour is something that can be done without serious philosophical difficulties.
  3. Designing the right evals is hard: you need lots of technical skill and high taste to make good-enough evals to get the right behaviour.
  4. Oversight is important; in fact, oversight is the primary method for ensuring that the AIs are doing what we want. Oversight is tractable and doable.

None of this dramatically conflicts with what I already thought OpenAI believed, but it's interesting to get another angle on it.

It's quite possible that 1 is predicated on technical alignment work being done in other parts of the company (though their superalignment team no longer exists) and it's just not seen as the purview of the evals team. If so, it's still very optimistic. If there isn't such a team, then it's suicidally optimistic.

Fo... (read more)

8evhub
Link is here, if anyone else was wondering too.
1sjadler
Re: 1, during my time at OpenAI I also strongly got the impression that inner alignment was way underinvested. The Alignment team’s agenda seemed basically about better values/behavior specification IMO, not making the model want those things on the inside (though this is now 7 months out of date). (Also, there are at least a few folks within OAI I’m sure know and care about these issues)

Spoilers (I guess?) for HPMOR

HPMOR presents a protagonist who has a brain which is 90% that of a merely very smart child, but which is 10% filled with cached thought patterns taken directly from a smarter, more experienced adult. Part of the internal tension of Harry is between the un-integrated Dark Side thoughts and the rest of his brain.

Ironic, then, that the effect of reading HPMOR---and indeed a lot of Yudkowsky's work---was to imprint a bunch of un-integrated alien thought patterns onto my existing merely very smart brain. A lot of my development over the past few years has just been trying to integrate these things properly with the rest of my mind.

7Eli Tyre
You might want to note that these are spoilers for HP:MoR.
4J Bostock
Fair enough, done. This felt vaguely like tagging spoilers for Macbeth or the Bible, but then I remembered how annoyed I was to have Of Mice And Men spoiled for me at age fifteen. 
6Eli Tyre
There's always new people coming through, and I don't want to spoil the mysteries for them!

Steering as Dual to Learning

I've been a bit confused about "steering" as a concept. It seems kinda dual to learning, but why? It seems like things which are good at learning are very close to things which are good at steering, but they don't always end up steering. It also seems like steering requires learning. What's up here?

I think steering is basically learning, backwards, and maybe flipped sideways. In learning, you build up mutual information between yourself and the world; in steering, you spend that mutual information. You can have learning without ... (read more)
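
A toy numerical illustration of the "build up, then spend, mutual information" picture (my own sketch, not part of the truncated text): an agent noisily observes a random bit of the world, then acts to push that bit toward a target. The entropy it can remove from the world is capped by the information it picked up.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
NOISE = 0.1  # probability each observation is flipped

def entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# World starts as a uniform random bit: one bit of entropy.
world = rng.integers(0, 2, N)

# Learning: the agent observes the world through a noisy channel, building up
# I(memory; world) = 1 - H(NOISE) bits per bit observed.
memory = world ^ (rng.random(N) < NOISE)

# Steering: the agent acts to push the world toward 0, flipping the bit whenever
# its memory says the bit is 1; the world ends up 0 exactly when the observation was correct.
world_after = world ^ memory

print("H(world before):", entropy(world.mean()))        # ~1.0 bit
print("I(memory; world):", 1 - entropy(NOISE))          # analytic value, ~0.53 bits
print("H(world after): ", entropy(world_after.mean()))  # ~H(NOISE), ~0.47 bits
# Entropy removed from the world (~0.53 bits) matches the information the agent learned.
```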

4testingthewaters
See also this paper about plasticity as dual to empowerment https://arxiv.org/pdf/2505.10361v2
3Roman Malov
I'm just going from pure word vibes here, but I've read somewhere (to be precise, here) about Todorov’s duality between prediction and control: https://roboti.us/lab/papers/TodorovCDC08.pdf
1Daniel C
Alternatively, for learning your brain can start out in any given configuration, and it will end up in the same (small set of) final configuration (one that reflects the world); for steering the world can start out in any given configuration, and it will end up in the same set of target configurations It seems like some amount of steering without learning is possible (open-loop control), you can reduce entropy in a subsystem while increasing entropy elsewhere to maintain information conservation

Shrimp Interventions

The hypothetical ammonia-reduction-in-shrimp-farm intervention has been touted as 1-2 OOMs more effective than shrimp stunning.

I think this is probably an underestimate, because I think that the estimates of shrimp suffering during death are probably too high.

(While I'm very critical of all of RP's welfare range estimates, including shrimp, that's not my point here. This argument doesn't rely on any arguments about shrimp welfare ranges overall. I do compare humans and shrimp, but IIUC this sort of comparison is the thing you multiply b... (read more)

As awful as the amount of fraud (and its lesser cousins) in science is for a scientist, it must be so much worse for a layperson. For example, this is a paper I found today suggesting that cleaner wrasse, a type of finger-sized fish, can not only pass the mirror test, but are also able to remember their own face and later respond the same way to a photograph of themselves as to a mirror.

https://www.pnas.org/doi/10.1073/pnas.2208420120

Ok, but it was published in PNAS. As a researcher I happen to know that PNAS allows for special-track submissions from memb... (read more)

[This comment is no longer endorsed by its author]
2J Bostock
So I still don't know what's going on, but this probably mischaracterizes the situation. The original notification that Frans de Waal "edited" the paper actually means that he was the individual who coordinated the reviews of the paper at the journal's end, which was not made particularly clear. The lead authors do have other publications (mostly in the same field); it's just that the particular website I was using didn't show them. There's also a strongly skeptical response to the paper that's been written by ... Frans de Waal, so I don't know what's going on there! The thing about PNAS having a secret submission track is true as far as I know, though.
1idly
The editor of an article is the person who decides whether to desk-reject or seek reviewers, finds and coordinates the reviewers, communicates with the authors during the process, and so on. That's standard at all journals afaik. The editor decides on publication according to the journal's criteria. PNAS does have this special track, but one of the authors must be in the NAS, and as that author you can't just submit a bunch of papers in that track; you can use it once a year or something. And most readers of PNAS know this and are suitably sceptical of those papers (it's written on the paper if it used that track). The journal started out only accepting papers from NAS members and opened to everyone in the 90s, so it's partly a historical quirk.

https://threadreaderapp.com/thread/1925593359374328272.html

Reading between the lines here, Opus 4 was RLed by repeated iteration and testing. Seems like they had to hit it fairly hard (for Anthropic) with the "Identify specific bad behaviors and stop them" technique.

Relatedly: Opus 4 doesn't seem to have the "good vibes" that Opus 3 had.

Furthermore, this (to me) indicates that Anthropic's techniques for model "alignment" are getting less elegant and sophisticated over time, since the models are getting smarter---and thus harder to "align"---faster than Ant... (read more)

There's a court at my university accommodation that people who aren't Fellows of the college aren't allowed on, it's a pretty medium-sized square of mown grass. One of my friends said she was "morally opposed" to this (on biodiversity grounds, if the space wasn't being used for people it should be used for nature).

And I couldn't help but think how tiring it would be to have a moral-feeling-detector this strong. How could one possibly cope with hearing about burglaries, or North Korea, or astronomical waste?

I've been aware of scope insensitivity for a long time now but, this just really put things in perspective in a visceral way for me.

6Dagon
For many who talk about "moral opposition", talk is cheap, and the cause of such a statement may be in-group or virtue signaling rather than an indicator of intensity of moral-feeling-detector.
2mako yass
You haven't really stated that she's putting all that much energy into this (implied, I guess), but I'd see nothing wrong with having a moral stance about literally everything but still prioritizing your activity in healthy ways, judging this, maybe even arguing vociferously for it, for about 10 minutes, before getting back to work and never thinking about it again.
1JBlack
To me it seems more likely that this person is misreporting their motive than that they really oppose this allocation of a patch of grass on biodiversity grounds. I would expect grounds like "I want to use it myself", or the slightly more general "it should be available for a wider group", to be very much more common. For example, if I had to rank the likelihood of motives after hearing that someone objects, but before hearing their reasons, I'd end up with more weight on "playing social games" than on "earnestly believes this". On the other hand, it would not surprise me very much that at least one person somewhere truly holds this position; just my weight for any particular person would be very low.

Seems like if you're working with neural networks there's not a simple map from an efficient (in terms of program size, working memory, and speed) optimizer which maximizes X to an equivalent optimizer which maximizes -X. If we consider that an efficient optimizer does something like tree search, then it would be easy to flip the sign of the node-evaluating "prune" module. But the "babble" module is likely to select promising actions based on a big bag of heuristics which aren't easily flipped. Moreover, flipping a heuristic which upweights a small subset ... (read more)

5habryka
True if you don't count the training process as part of the optimizer (which is a choice that sometimes makes sense and sometimes doesn't). If you count the training process as part of the optimizer, then you can of course just flip your loss function or RL signal most of the time.
2JBlack
How do you construct a maximizer for 0.3X+0.6Y+0.1Z from three maximizers for X, Y, and Z? It certainly isn't true in general for black box optimizers, so presumably this is something specific to a certain class of neural networks.
2J Bostock
My model: suppose we have a DeepDreamer-style architecture, where (given a history of sensory inputs) the babbler module produces a distribution over actions, a world model predicts subsequent sensory inputs, and an evaluator predicts expected future X. If we run a tree-search over some weighted combination of the X, Y, and Z maximizers' predicted actions, then run each of the X, Y, and Z maximizers' evaluators, we'd get a reasonable approximation of a weighted maximizer. This wouldn't be true if we gave negative weights to the maximizers, because while the evaluator module would still make sense, the action distributions we'd get would probably be incoherent, e.g. the model just running into walls or jumping off cliffs. My conjecture is that, if a large black box model is doing something like modelling X, Y, and Z maximizers acting in the world, that large black box model might be close in model-space to itself being a maximizer which maximizes 0.3X + 0.6Y + 0.1Z, but it's far in model-space from being a maximizer which maximizes 0.3X - 0.6Y - 0.1Z due to the above problem.
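
As a toy sketch of that asymmetry (illustrative code, not any real architecture): flipping the evaluator's sign is trivial, but the search can still only pick among actions the babbler proposes, and the babbler's heuristics remain tuned for maximization.

```python
import random

random.seed(0)

def babble(state, k=4):
    # Heuristic proposal: suggests actions already biased toward states
    # the X-maximizer found promising (here: mostly rightward moves).
    return [random.choice([+1, +1, +1, -1]) for _ in range(k)]

def evaluate(state):
    # Learned value estimate of future X (here X is just the position).
    return state

def search(state, steps=20, sign=+1):
    # Greedy tree-search-lite: pick the babbled action whose successor
    # the (possibly sign-flipped) evaluator likes best.
    for _ in range(steps):
        actions = babble(state)
        state = max((state + a for a in actions), key=lambda s: sign * evaluate(s))
    return state

print("maximizer reaches:", search(0, sign=+1))    # marches steadily to the right
print("sign-flipped agent:", search(0, sign=-1))   # drifts left only when the babbler
                                                   # happens to propose a leftward move;
                                                   # it is a much weaker X-minimizer
```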

I have no evidence for this but I have a vibe that if you build a proper mathematical model of agency/co-agency, then prediction and steering will end up being dual to one another.

My intuition why:

A strong agent can easily steer a lot of different co-agents; those different co-agents will be steered towards the same goals of the agent.

A strong co-agent is easily predictable by a lot of different agents; those different agents will all converge on a common map of the co-agent.

Also, category theory tells us that there is normally only one kind of thing, but ... (read more)

Simplified Logical Inductors

Logical inductors consider belief-states as prices over logical sentences in some language, with the belief-states decided by different computable "traders", and also some deductive process which continually churns out proofs of logical statements in that language. This is a bit unsatisfying, since it contains several different kinds of things.

What if, instead of buying shares in logical sentences, the traders bought shares in each other? Then we only need one kind of thing.

Let's make this a bit more precise:

  • Each trad
... (read more)
1Daniel C
Neat idea; I've thought about similar directions in the context of traders betting on traders in decision markets.

A complication might be that a regular deductive process doesn't discount the "reward" of a proposition based on its complexity, whereas your model does, so it might have a different notion of the logical induction criterion. For instance, you could have an inductor that's exploitable, but only for propositions with larger and larger complexities over time, such that with the complexity discounting the cash loss is still finite (but the regular LI loss would be infinite, so it wouldn't satisfy the regular LI criterion).

(Note that betting on "earlier propositions" already seems beneficial in regular LI, since if you can receive payouts earlier you can use them to place larger bets earlier.)

There's also some redundancy where each proposition can be encoded by many different Turing machines, whereas a deductive process can guarantee uniqueness in its ordering and be more efficient that way.

Are prices still determined using Brouwer's fixed point theorem? Or do you have a more auction-based mechanism in mind?

Thinking back to the various rationalist attempts to make a vaccine (https://www.lesswrong.com/posts/niQ3heWwF6SydhS7R/making-vaccine), for bird-flu-related reasons. Since then, we've seen mRNA vaccines arise as a new vaccination method. mRNA vaccines have been used intranasally for COVID with success in hamsters. If one can order mRNA for a flu protein, it would only take mixing that with some sort of delivery mechanism (such as Lipofectamine, which is commercially available) and snorting it to get what could actually be a pretty good vaccine. Has RaDVaC or similar looked at this?

Seems like there's a potential solution to ELK-like problems, if you can force the information to move from the AI's ontology to (its model of) a human's ontology and then force it to move back again.

This gets around "basic" deception since we can always compare the AI's ontology before and after the translation.

The question is how do we force the knowledge to go through the (modeled) human's ontology, and how do we know the forward and backward translators aren't behaving badly in some way.
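
A very stripped-down sketch of what the round-trip check could look like (toy linear "translators" standing in for whatever learned maps the real scheme would use):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ontologies: the AI's latent state is 6-dimensional, the (modelled)
# human ontology is a lossy 3-dimensional summary of it.
A_TO_H = rng.normal(size=(3, 6))   # "translator" into the human ontology
H_TO_A = np.linalg.pinv(A_TO_H)    # approximate inverse translator back out

def round_trip_gap(z_ai):
    """How much of the AI-ontology state survives a trip through the human ontology?"""
    z_human = A_TO_H @ z_ai
    z_back = H_TO_A @ z_human
    return np.linalg.norm(z_ai - z_back) / np.linalg.norm(z_ai)

z = rng.normal(size=6)
print(f"relative round-trip loss: {round_trip_gap(z):.2f}")
# A large gap flags knowledge the AI holds that cannot be expressed in (its model
# of) the human ontology, which is the kind of thing the comparison is meant to surface.
```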

Rather than using Bayesian reasoning to estimate P(A|B=b), it seems like most people use the following heuristic:

  • Condition on A=a and B=b for different values of a
  • For each a, estimate the remaining uncertainty given A=a and B=b
  • Choose the a with the lowest remaining uncertainty from step 2

This is how you get "Saint Austacious could levitate, therefore God", since given [levitating saint] AND [God exists] there is very little uncertainty over what happened, whereas given [levitating saint] AND [no God] there's still a lot left to wonder about regarding who made up the story, and at what point.
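
A toy numerical contrast between the two procedures (all numbers invented for illustration):

```python
# Hypotheses A and made-up priors.
prior = {"god": 0.01, "no_god": 0.99}

# Evidence B: "there is a story that Saint Austacious levitated".
# Made-up likelihoods: such stories arise fairly easily either way.
likelihood = {"god": 0.5, "no_god": 0.3}

# Bayesian answer: P(A | B) via Bayes' rule.
unnorm = {a: prior[a] * likelihood[a] for a in prior}
z = sum(unnorm.values())
posterior = {a: p / z for a, p in unnorm.items()}
print("posterior:", posterior)  # still heavily favours "no_god"

# The heuristic from the list above: pick the A that leaves the least residual
# uncertainty about the details, ignoring the priors entirely. (Entropies in
# bits, invented: given "god", little is left to explain; given "no_god", there
# are many possible authors, dates, and motives.)
remaining_uncertainty = {"god": 1.0, "no_god": 6.0}
print("heuristic picks:", min(remaining_uncertainty, key=remaining_uncertainty.get))
```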

1ProgramCrafter
If so, they must be committing a 'disjunction fallacy', grading the second option as less likely than the first, disregarding that it could be true in more ways!

Getting rid of guilt and shame as motivators of people is definitely admirable, but still leaves a moral/social question. Goodness or Badness of a person isn't just an internal concept for people to judge themselves by; it's also a handle by which social reward or punishment gets doled out.

I wouldn't want to be friends with Saddam Hussein, or even a deadbeat parent who neglects the things they "should" do for their family. This also seems to be true regardless of whether my social punishment or reward has the ability to change these people's behaviour. B... (read more)

3Pattern
I think there's a bit of a jump from 'social norm' to 'how our government deals with murders'. Referring to the latter as 'social' doesn't make a lot of sense.
1J Bostock
I think I've explained myself poorly. I meant to use the phrase social reward/punishment to refer exclusively to things like forming friendships and giving people status, which are doled out differently from "physical government punishment". Saddam Hussein was probably a bad example, as he is also someone who would clearly receive the latter.

Alright so we have: 

- Bayesian Influence Functions allow us to find a training data:output loss correspondence
- Maybe the eigenvalues of the eNTK (very similar to the influence function) correspond to features in the data
- Maybe the features in the dataset can be found with an SAE

Therefore (will test this later today) maybe we can use SAE features to predict the influence function.
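
A sketch of how simple that test could be: a linear probe from SAE feature activations to influence scores, checked on held-out examples. The arrays below are random placeholders standing in for whatever the real SAE run and influence-function estimates produce, so the printed R² here is meaningless; with real data it would be the quantity of interest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: in the real experiment these would come from an SAE run over
# the training examples and from the (Bayesian) influence-function estimates.
n_examples, n_features = 1000, 64
sae_features = rng.normal(size=(n_examples, n_features))
true_weights = rng.normal(size=n_features)
influence_scores = sae_features @ true_weights + 0.1 * rng.normal(size=n_examples)

# Fit a ridge-regularised linear map from features to influence, check held-out R^2.
train, test = slice(0, 800), slice(800, None)
X, y = sae_features[train], influence_scores[train]
w = np.linalg.solve(X.T @ X + 1e-2 * np.eye(n_features), X.T @ y)

pred = sae_features[test] @ w
resid = influence_scores[test] - pred
r2 = 1 - resid.var() / influence_scores[test].var()
print(f"held-out R^2: {r2:.3f}")  # with real data, a high value would support the conjecture
```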

An early draft of a paper I'm writing went like this:

In the absence of sufficient sanity, it is highly likely that at least one AI developer will deploy an untrusted model: the developers do not know whether the model will take strategic, harmful actions if deployed. In the presence of a smaller amount of sanity, they might deploy it within a control protocol which attempts to prevent it from causing harm.

I had to edit it slightly. But I kept the spirit.

Arguments From Intelligence Explosions (FOOMs)

There's lots of discourse around at the moment about

  • Will AI go FOOM? With what probability?
  • Will we die if AI goes FOOM?
  • Will we die even if AI doesn't go FOOM?
  • Does the Halt AI Now case rest on FOOM?

I present a synthesis:

  • AI might FOOM. If it does, we go from a world much like today's, straight to dead, with no warning.
  • If AI doesn't foom, we go from the AI 2027 scary automation world to dead. Misalignment isn't solved in slow takeoff worlds.

If you disagree with either of these, you might not want to halt now:

  • If yo
... (read more)

The constant hazard rate model probably predicts exponentially growing inference compute requirements for agentic RL with a given model (i.e. the inference done during guess-and-check RL training), because as the hazard rate decreases exponentially, we'll need to sample exponentially more tokens to see an error, and we need to see an error to get any signal.
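
Rough arithmetic behind that (illustrative numbers): with a roughly constant per-token hazard rate h, the expected number of tokens sampled before seeing one error is about 1/h, so each halving of the hazard rate doubles the compute needed per error signal.

```python
# Illustrative numbers only.
hazard_per_token = 1e-3        # current per-token chance of a catchable error
for k in range(6):             # successive capability improvements, each halving the hazard
    h = hazard_per_token / 2**k
    tokens_per_error = 1 / h   # mean of a geometric distribution with success probability h
    print(f"hazard {h:.1e} -> ~{tokens_per_error:,.0f} tokens sampled per error signal")
```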

Hypothesis: one type of valenced experience---specifically valenced experience, as opposed to conscious experience in general, which I make no claims about here---is likely to only exist in organisms with the capability for planning. We can analogize with deep reinforcement learning: it seems like humans have a rapid action-taking system 1 which is kind of like Q-learning, in that it just selects actions, and a slower planning-based system 2, which is more like value learning. There's no reason to assign valence to a particular mental state if you're not able to imagine your own future mental states. There is of course moment-to-moment reward-like information coming in, but that seems to be a distinct thing to me.

Heuristic explanation for why MoE gets better at higher model size:

The input/output width of a feedforward layer is equal to the model width, but the total size of its weights grows as the model width squared. Superposition helps explain how a model component can make the most use of its input/output space (and presumably its parameters) using sparse, overcomplete features, but in the limit, the amount of information accessed by the feedforward call scales with the number of active parameters. Therefore at some point, more active parameters won't scale so well, since you're "accessing" too much "memory" in the form of weights and overwhelming your input/output channels.
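
Concretely, with toy numbers and standard FFN shapes assumed for illustration (two matrices of size d_model × 4·d_model): the weights accessed per token grow as d_model², while the activations read from and written to the residual stream grow only as d_model, so the active-parameters-to-I/O ratio grows linearly with width. An MoE layer adds total parameters without changing either the active parameters or the I/O.

```python
def ffn_stats(d_model, expansion=4, n_experts=1, top_k=1):
    # A standard FFN block has two matrices of shape (d_model, expansion * d_model).
    params_per_expert = 2 * d_model * (expansion * d_model)
    total_params = n_experts * params_per_expert
    active_params = top_k * params_per_expert
    io_width = 2 * d_model   # activations read from + written to the residual stream
    return total_params, active_params, active_params / io_width

for d in (1024, 4096, 16384):
    total, active, ratio = ffn_stats(d)
    print(f"dense d={d:>6}: active={active:.2e}, active/IO={ratio:.1e}")

# An 8-expert, top-1 MoE at the same width: 8x the stored parameters,
# same active parameters and same I/O per token.
total, active, ratio = ffn_stats(4096, n_experts=8, top_k=1)
print(f"MoE   d=  4096: total={total:.2e}, active={active:.2e}, active/IO={ratio:.1e}")
```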

If we approximate an MLP layer with a bilinear layer, then the effect of residual stream features on the MLP output can be expressed as a second order polynomial over the feature coefficients $f_i$. This will contain, for each feature, an $f_i^2 v_i+ f_i w_i$ term, which is "baked into" the residual stream after the MLP acts. Just looking at the linear term, this could be the source of Anthropic's observations of features growing, shrinking, and rotating in their original crosscoder paper. https://transformer-circuits.pub/2024/crosscoders/index.html
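
For reference, my sketch of the expansion being described: write the residual stream as $x = r + \sum_i f_i d_i$ with feature directions $d_i$, and let $B(\cdot,\cdot)$ be the bilinear map approximating the MLP. Then

$$B(x,x) = B(r,r) + \sum_i f_i \underbrace{\left(B(r,d_i) + B(d_i,r)\right)}_{w_i} + \sum_i f_i^2 \underbrace{B(d_i,d_i)}_{v_i} + \sum_{i\neq j} f_i f_j\, B(d_i,d_j),$$

so each feature contributes an $f_i^2 v_i + f_i w_i$ term (plus cross terms with other active features), and this is what gets baked into the residual stream after the MLP acts.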

I think you should pay in Counterfactual Mugging, and this is one of the newcomblike problem classes that is most common in real life.

Example: you find a wallet on the ground. You can, from least to most pro-social:

  1. Take it and steal the money from it
  2. Leave it where it is
  3. Take it and make an effort to return it to its owner

Let's ignore the first option (suppose we're not THAT evil). The universe has randomly selected you today to be in the position where your only options are to spend some resources for no personal gain, or not. In a parallel universe, perhaps... (read more)

The UK has just switched their available rapid Covid tests from a moderately unpleasant one to an almost unbearable one. Lots of places require them for entry. I think the cost/benefit makes sense even with the new kind, but I'm becoming concerned we'll eventually reach the "imagine a society where everyone hits themselves on the head every day with a baseball bat" situation if cases approach zero.

Just realized I'm probably feeling much worse than I ought to on days when I fast because I've not been taking sodium. I really should have checked this sooner. If you're planning to do long (I do a day, which definitely feels long) fasts, take sodium! 
