"cannot imagine a study that would convince me that it "didn't work" for me, in the ways that actually matter. The effects on my mind kick in sharply, scale smoothly with dose, decay right in sync with half-life in the body, and are clearly noticeable not just internally for my mood but externally in my speech patterns, reaction speeds, ability to notice things in my surroundings, short term memory, and facial expressions."
The drug actually working would mean that your life is better after 6 years of taking the drug compared to the counterfactual where you took a placebo.
The observations you describe are explained by you simply having a chemical dependency on a drug that you have been on for 6 years.
"In an argument between a specialist and a generalist, the expert usually wins by simply (1) using unintelligible jargon, and (2) citing their specialist results, which are often completely irrelevant to the discussion. The expert is, therefore, a potent factor to be reckoned with in our society. Since experts both are necessary and also at times do great harm in blocking significant progress, they need to be examined closely. All too often the expert misunderstands the problem at hand, but the generalist cannot carry though their side to completion. The person who thinks they understand the problem and does not is usually more of a curse (blockage) than the person who knows they do not understand the problem.’
—Richard W. Hamming, “The Art of Doing Science and Engineering”
***
(Side note:
I think there's at least a 10% chance that a randomly selected LessWrong user thinks it was worth their time to read at least some of the chapters in this book. Significantly more users would agree that it was a good use of their time (in expectation) to skim the contents and introduction before deciding if they're in that 10%.
That is to say, I recommend this book.)
Robin Hanson recently wrote about two dynamics that can emerge among individuals within an organization when working as a group to reach decisions. These are the "outcome game" and the "consensus game."
In the outcome game, individuals aim to be seen as advocating for decisions that are later proven correct. In contrast, the consensus game focuses on advocating for decisions that are most immediately popular within the organization. When most participants play the consensus game, the quality of decision-making suffers.
The incentive structure within an organization influences which game people play. When feedback on decisions is immediate and substantial, individuals are more likely to engage in the outcome game. Hanson argues that capitalism's key strength is its ability to make outcome games more relevant.
However, if an organization is insulated from the consequences of its decisions or feedback is delayed, playing the consensus game becomes the best strategy for gaining resources and influence.
This dynamic is particularly relevant in the field of (existential) AI Safety, which needs to develop strategies to mitigate risks before AGI is developed. Currently, we have zero concrete feedback about which strategies can effectively align complex systems of intelligence equal to or greater than our own.
As a result, it is unsurprising that most alignment efforts avoid tackling seemingly intractable problems. The incentive structures in the field encourage individuals to play the consensus game instead.
I'm not saying that this would necessarily be a step in the wrong direction, but I don't think a Discord server is capable of fixing a deeply entrenched cultural problem among safety researchers.
If moderating the server takes up a few hours of John's time per week, the opportunity cost probably isn't worth it.
Worth emphasizing that cognitive work is more than just a parallel to physical work, it is literally Work in the physical sense.
The reduction in entropy required to train a model means that there is a minimum amount of work required to do it.
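To make the claim concrete (this is my own back-of-the-envelope illustration via Landauer's principle, not an argument from the linked post): irreversibly erasing one bit of information costs at least

$$W_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693) \approx 2.9 \times 10^{-21}\,\mathrm{J}$$

at room temperature. So any training run that irreversibly discards bits has a hard thermodynamic floor on the work it must dissipate, although that floor sits astronomically below what current hardware actually uses.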
I think this is a very important research direction, not merely as an avenue for communicating and understanding AI Safety concerns, but potentially as a framework for developing AI Safety techniques.
There is some minimum amount of cognitive work required to pose an existential threat, perhaps it is much higher than the amount of cognitive work required to perform economically useful tasks.
Can you expect the applications to interpretability to hold for inputs radically outside the training distribution?
My naive intuition is that by taking derivatives you are only describing local behaviour.
(I am "shooting from the hip" epistemically)
A loss of this type of (very weak) interpretability would be quite unfortunate from a practical safety perspective.
This is bad, but perhaps there is a silver lining.
If internal communication within the scaffold appears to be in plain English, it will tempt humans to assume the meaning coincides precisely with the semantic content of the message.
If the chain of thought contains seemingly nonsensical content, it will be impossible to make this assumption.
"I think that overall it's good on the margin for staff at companies risking human extinction to be sharing their perspectives on criticisms and moving towards having dialogue at all"
No disagreement.
"your implicit demand for Evan Hubinger to do more work here is marginally unhelpful"
The community seems to be quite receptive to the opinion, so it doesn't seem unreasonable to voice an objection. If you're saying it is primarily the way I've written it that makes it unhelpful, that seems fair.
I originally felt that either question I asked would be reasonably easy to answer, if time was given to evaluating the potential for harm.
However, given that Hubinger might have to run any reply by Anthropic staff, I understand that demanding further work might be net negative. This is pretty obvious, but didn't occur to me earlier.
"I will add: It's odd to me, Stephen, that this is your line for (what I read as) disgust at Anthropic staff espousing extremely convenient positions while doing things that seem to you to be causing massive harm."
Ultimately, the original quicktake was only justifying one facet of Anthropic's work so that's all I've engaged with. It would seem less helpful to bring up my wider objections.
"I wouldn't expect them or their employees to have high standards for public defenses of far less risky behavior"
I don't expect them to have a high standard for defending Anthropic's behavior, but I do expect the LessWrong community to have a high standard for arguments.
Highly Expected Events Provide Little Information and The Value of PR Statements
Entropy for a discrete random variable $X$ is given by $H(X) = -\sum_{x} p(x) \log_2 p(x)$. This quantifies the amount of information that you gain on average by observing the value of the variable.
It is maximized when every possible outcome is equally likely. It gets smaller as the variable becomes more predictable and is zero when the "random" variable is 100% guaranteed to have a specific value.
You've learnt 1 bit of information when you learn that the outcome of a fair coin toss was heads. But you learn 0 bits when you learn the outcome was heads after tossing a coin with heads on both sides.
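As a quick sanity check of these coin cases (a minimal sketch in plain Python; the `entropy` helper is my own, not from the original post):

```python
import math

def entropy(probs):
    # Shannon entropy in bits: H(X) = -sum_x p(x) * log2(p(x)),
    # using the convention 0 * log2(0) = 0 (terms with p = 0 are skipped).
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # fair coin -> 1.0 bit
print(entropy([1.0, 0.0]))   # double-headed coin -> 0.0 bits
print(entropy([0.25] * 4))   # uniform over 4 outcomes -> 2.0 bits (the maximum for 4 outcomes)
```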
On your desk is a sealed envelope that you've been told contains a transcript of a speech that President Elect Trump gave on the campaign trail. You are told that it discusses the impact that his policies will have on the financial position of the average American.
How much additional information do you gain if I tell you that the statement says his policies will have a positive impact on the financial position of the average American?
The answer is very little. You know ahead of time that it is exceptionally unlikely for any politician to talk negatively about their own policies.
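To attach a rough number (the 99% figure is purely an illustrative assumption of mine): if you already assigned probability $0.99$ to the statement being positive, then confirming it delivers only $-\log_2(0.99) \approx 0.014$ bits, compared with the full $1$ bit you would get from resolving a genuinely 50/50 question.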
There is still plenty of information in the details Trump mentions: how exactly he plans to improve the economy.
Both Altman and Amodei have recently put out personal blog posts in which they present a vision of the future after AGI is safely developed.
How much additional information do you gain from learning that they present a positive view of this future?
I would argue simply learning that they're optimistic tells you almost zero useful information about what such a future looks like.
There is plenty of useful information, particularly in Amodei's essay, in how they justify this optimism and what topics they choose to discuss. But their optimism alone shouldn't be used as evidence to update your beliefs.
Edit:
Fixed pretty major terminology blunder.
(This observation is not original, and a similar idea appears in The Sequences.)
Signalling that I do not like linkposts to personal blogs.