More generally, you can use the following typology to inspire creating more interventions.
Intervention points for changing/forming an AGI company and its surroundings toward safer x-risk outcomes (I've used this in advising startups on AI safety; it is also related to my post on positions where people can be in the loop):
Thanks for asking the question!
Some things I'd especially like to see change (in as much as I know what is happening) are:
Gotcha. What determines the "ratios" is some sort of underlying causal structure of which some aspects can be summarized by a tech tree. For thinking about the causal structure you may also like this post: https://forum.effectivealtruism.org/posts/TfRexamDYBqSwg7er/causal-diagrams-of-the-paths-to-existential-catastrophe
Complementary ideas to this article:
Relatedly, here is a post going beyond the framework of a ratio of progress to the effect on the ratio of research that still needs to be done for various outcomes: https://www.lesswrong.com/posts/BfKQGYJBwdHfik4Kd/fai-research-constraints-and-agi-side-effects
Extending further one can examine higher order derivatives and curvature in a space of existential risk trajectories: https://forum.effectivealtruism.org/posts/TCxik4KvTgGzMowP9/state-space-of-x-risk-trajectories
Roughly speaking, in terms of the actions you take, various timelines should be weighted as P(AGI in year t) * DifferenceYouCanProduceInAGIAlignmentAt(t). This produces a new, non-normalized distribution of how much to prioritize each time period (you can renormalize it if you wish to make it more like a "probability").
Note that this is just a first approximation and there are additional subtleties.
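As a first-approximation sketch of the weighting above (all numbers below are made-up placeholders, not real forecasts):

```python
# P(AGI in year t): an illustrative, made-up timeline distribution.
p_agi = {2030: 0.1, 2040: 0.3, 2050: 0.4, 2060: 0.2}

# DifferenceYouCanProduceInAGIAlignmentAt(t): also illustrative; earlier
# years here are assumed to offer more marginal leverage.
marginal_difference = {2030: 1.0, 2040: 0.7, 2050: 0.4, 2060: 0.2}

# The non-normalized priority weights from the formula above.
priority = {t: p_agi[t] * marginal_difference[t] for t in p_agi}

# Optionally renormalize so the weights sum to 1, making the result read
# more like a probability distribution over where to focus.
total = sum(priority.values())
normalized = {t: w / total for t, w in priority.items()}
```

This is just the first-order version; the subtleties mentioned above (e.g. correlations between timelines and tractability) aren't captured here.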
I think causal diagrams naturally emerge when thinking about Goodhart's law and its implications.
I came up with the concept of Goodhart's law causal graphs above because of a presentation on Scott's Goodhart Taxonomy that someone gave at the EA Hotel in late 2019. I thought causal diagrams were a clearer way to describe some parts of the taxonomy, but their relationship to the taxonomy is complex. I also just encountered the paper you and Scott wrote a couple of weeks ago when getting ready to write this Good Heart Week prompted post, and I was planning in th...
I like the distinction that you're making and that you gave it a clear name.
Relatedly, there is the method of Lagrangian multipliers for solving things in the subspace.
On a side note: there is a way to partially unify the subspace optimum and the local optimum by saying that the subspace optimum is a local optimum with respect to the local set of parameters you're using to define the subspace. You're at a local optimum with respect to how you define the underlying space to optimize over (the subspace), and at a local optimum within that space. (Relatedly, moduli spaces.)
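To make the Lagrange-multiplier connection concrete, here is a toy example of my own choosing (not from the original discussion): maximizing f(x, y) = x*y on the subspace defined by the constraint x + y = 1. The Lagrange condition grad f = λ grad g gives (y, x) = λ(1, 1), so x = y = 1/2, and a brute-force scan of the constraint line agrees:

```python
def f(x, y):
    """Objective to maximize on the constraint line x + y = 1."""
    return x * y

# Lagrange multipliers predict the subspace optimum at x = y = 0.5,
# where f = 0.25. Check numerically by scanning the constraint line.
best_value, best_x = max(
    (f(x, 1 - x), x) for x in (i / 1000 for i in range(1001))
)
```

The scan finds the optimum at x = 0.5 with value 0.25, matching the analytic Lagrange solution: the "subspace optimum" is exactly a local optimum within the constrained space.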
Nice deduction about the relationship between this and conflict vs mistake theory! Similar and complementary to this post is the one I wrote on Moloch and the Pareto optimal frontier.
Are you trying to express the idea of adding new fundamental "terms" to your language describing things like halting oracles and such? And then discounting their weight by the shortest statement of said term's properties expressed in the language that existed prior to including this additional "term"? If so, I agree that this is the natural way to extend priors to handle arbitrary describable objects such as halting oracles.
Stated another way: you start with a language L. Let the definition of an esoteric mathematical object (...
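A toy sketch of the discounting scheme described above: a new primitive (e.g. a halting-oracle symbol) is added to the language, and its prior weight is penalized by the length of the shortest statement of its properties in the pre-existing language L. The specific bit counts below are invented for illustration:

```python
def prior_weight(description_length_bits):
    """Solomonoff-style prior: weight 2^-K for a K-bit description
    of the new term's properties in the pre-existing language."""
    return 2.0 ** -description_length_bits

# Suppose (purely hypothetically) the halting oracle's defining
# properties take 120 bits to state in the old language, while an
# ordinary computable object takes 20 bits.
ordinary = prior_weight(20)
oracle = prior_weight(120)

# Esoteric objects get heavily discounted but still receive
# nonzero weight, extending the prior beyond computable hypotheses.
```

The point is only that the extension is graded: uncomputable objects aren't excluded outright, they just pay a complexity penalty measured in the old language.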
Interesting idea.
I agree that trusting newly formed ideas is risky, but there are several reasons to convey them anyway (non-comprehensive listing):
To recruit assistance in developing and verifying them
To convey an idea that is obvious in retrospect, an idea you can be confident in immediately
To signal cleverness and ability to think on one's feet
To socially play with the ideas
What we are really after, though, is to assess how much weight to assign to an idea off the bat so we can calculate the opportunity costs of thinking about the idea in great...
I'm thinking of writing up a post clearly explaining updateless decision theory. I have a somewhat different way of looking at things than Wei Dai and will give my interpretation of his idea if there is demand. I might also need to do this anyway in preparation for some additional decision theory I plan to post to LessWrong. Is there demand?
Closely related to your point is the paper, "The Epistemic Benefit of Transient Diversity"
It describes and models the costs and benefits of independent invention and transient disagreement.
What keeps the AI from immediately changing itself to care only about the people's current utility function? That's a change with very high expected utility as measured by their current utility function, and one with little tendency to change that utility function.
Will you believe that a simple hack will work with lower confidence next time?
Hmm, darn. When I write, I do have a tendency to see the ideas I meant to describe instead of my actual exposition; I don't like grammar-checking my writing until I've had some time to forget the details, since otherwise I read right over my errors unless I pay special attention.
I did have three LWers look over the article before I posted it; the general criticism was that it was a bit obscure and dense, but understandable and interesting. I was probably too ambitious in trying to include everything within one post though, a length vs. clarity tradeoff.
To address you...
I think he meant that even if we are not religious, society tends to pull us into moral realism even though of course moral realism is an illusion.
You are correct, though I don't go as far as calling moral realism an illusion because of unknown unknowns (though I would be very surprised to find it isn't illusory).
Addressing your reification point:
"By means of reification something that was previously implicit, unexpressed and possibly unexpressible is explicitly formulated and made available to conceptual (logical or computational) manipulation." - Reification (computer science), Wikipedia.
I don't think I did abuse vocabulary outside of possibly generalizing meanings in straightforward ways and taking words and meanings common in one topic and using them in a context where they are rather uncommon (e.g. computer science to philosophy). I rely on contex...
Some things I use to test mental ability as well as train it are: Brain Workshop (a free dual n-back program), cognitivefun.net (a site with assorted tests and profiles covering everything from reaction time to subitizing to visual backward digit span), Posit Science's Jewel Diver demo (a multi-object tracking test), and Lumosity.com (Brain Shift, Memory Matrix, Speed Match, Top Chimp; versions of all of these tests can be found for free on the internet).
Subjectively the regular use of these tests has increased my metacognitive and self monitoring ability. Anyone have ...
I do not agree with all interpretations of the quote but primed by:
That's not right. It's not even wrong. -- Wolfgang Pauli
I interpreted it charitably, with "critical" loosely implying "worth thinking about", in contrast to vague ideas that are not even wrong. Furthermore, from thefreedictionary.com's definition of "critical" ("1. Inclined to judge severely and find fault."), vague statements may be considered useless and so judged severely, but much of the time they are also slippery in that they must be broken down into precis...
From pwno: "Aren't true theories defined by how useful they are in some application?"
My definition of "usefulness" was built with the express purpose of relating the truth of theories to how useful they are, and is very much a context-specific, temporary definition (hence "define:"). If I had tried to deal with it directly, I would have had something uselessly messy and incomplete, or I could have used a true but also uninformative expectation approach and hidden all of the complexity. Instead, I experimented and tried to force the ...
define: A theory's "truthfulness" as how much probability mass it has after appropriate selection of a prior and applications of Bayes' theorem. It works as a good measure of a theory's "usefulness" as long as resource limitations and psychological side effects aren't important.
define: A theory's "usefulness" as a function of resources needed to calculate its predictions to a certain degree of accuracy, the "truthfulness" of the theory itself, and side effects. Squinting at it, I get something roughly like: usefulne...
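A minimal sketch of "truthfulness" as posterior probability mass: two competing theories updated on an observation via Bayes' theorem. The prior and likelihood numbers are illustrative only:

```python
# Prior probability mass over two rival theories.
priors = {"theory_A": 0.5, "theory_B": 0.5}

# P(observation | theory): invented likelihoods for illustration.
likelihoods = {"theory_A": 0.8, "theory_B": 0.2}

# Bayes' theorem: posterior mass proportional to prior * likelihood.
evidence = sum(priors[t] * likelihoods[t] for t in priors)
posteriors = {t: priors[t] * likelihoods[t] / evidence for t in priors}
```

After the update, theory_A holds 0.8 of the probability mass; on the definition above, it is the more "truthful" theory, and (absent resource limits and psychological side effects) the more "useful" one.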
I agree that it may plausibly be argued that the difference should rarely fall into the small margin: U(good name) - U(bad name) (up to varying priors, utility functions, ...). However, should people calculate to the point that they can resolve differences of that order of magnitude? A fast and dirty heuristic may be the way to go practically speaking; the difference in utility would be less than the utility lost in calculating it.
I agree.
Anthropic's marginal contribution to safety (compared to what we would have in a world without Anthropic) probably doesn't offset Anthropic's contribution to the AI race.
I think there are more worlds where Anthropic is contributing to the race in a negative fashion than there are worlds where Anthropic's marginal safety improvement over OpenAI/DeepMind-ish orgs is critical for securing a good future with AGI (weighing things according to the impact sizes and probabilities).