I cobbled together a compartmental fitting model for Omicron to try to get a handle on some viral characteristics empirically. It's not completely polished yet, but this is late enough already that I figured the speed premium justified sharing this version in a comment before writing up a full-length explanation of some of the choices made (e.g. whether to treat vaccination as some chance of removal or as decreased risk by interaction).
You can find the code here in an interactive environment.
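For readers who don't want to open the notebook, here is a minimal sketch of the structural choice in question, assuming a simple SEIR backbone; every name and number below is an illustrative placeholder, not the actual fitted model.

```python
# Minimal SEIR sketch of the vaccination-modeling choice. All names and
# numbers here are illustrative placeholders, not the fitted model.
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, vax_rate, vax_eff, mode):
    S, E, I, R = y
    N = S + E + I + R
    force = beta * S * I / N  # force of infection on susceptibles
    if mode == "removal":
        # Vaccination as some chance of removal: move a slice of S into R.
        dS = -force - vax_rate * vax_eff * S
        dR = gamma * I + vax_rate * vax_eff * S
    else:
        # Vaccination as decreased risk by interaction: keep people in S but
        # scale the force of infection down by accumulated coverage.
        coverage = min(vax_rate * t, 1.0)
        force *= 1.0 - vax_eff * coverage
        dS = -force
        dR = gamma * I
    dE = force - sigma * E
    dI = sigma * E - gamma * I
    return [dS, dE, dI, dR]

t = np.linspace(0, 120, 121)       # days
y0 = [0.99, 0.0, 0.01, 0.0]        # population fractions: S, E, I, R
traj = odeint(seir, y0, t, args=(0.4, 1 / 3, 1 / 5, 0.005, 0.7, "removal"))
print(traj[-1])                    # compartment sizes on the final day
```

The "removal" framing is simpler to fit but treats vaccination as all-or-nothing; the per-interaction framing keeps vaccinated people susceptible at reduced risk, which is the tradeoff the full write-up needs to justify.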
Ok, I think I understand our crux here. In the fields of math I'm talking about, 3^(-1) is a far better way to express the multiplicative inverse of 3, simply because it's not dependent on any specific representation scheme and immediately carries the relevant meaning. I don't know enough about the pedagogy of elementary school math to opine on that.
Sorry for the lack of clarity: I'm not talking about high school algebra, I'm talking about abstract algebra. I guess if we're writing -2 as a simplification, that's fine, but it seems to introduce a kind of meaningless extra step. I don't quite understand the "special cases" you're talking about, because it seems to me that you can eliminate subtraction without doing this? In fact, for anything more abstract than calculus, that's standard: groups, for example, don't (usually) have subtraction defined other than as the addition of the inverse.
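To make "addition of the inverse" concrete (this is just the standard definition, nothing exotic):

\[ a - b \;:=\; a + (-b), \qquad \text{where } -b \text{ is the unique element with } b + (-b) = 0, \]

and likewise 3^(-1) is pinned down purely by \(3 \cdot 3^{-1} = 1\), with no reference to how 3 is represented.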
This seems super annoying when you start dealing with more abstract math: while it's plausibly more intuitive as a transition into finite fields (thinking specifically of quadratic residues, for example), it would really really suck for graphing, functions, calculus, or any sort of coefficient-based work. It also sounds tremendously annoying for conceptualizing bases/field-adjoins/sigma notation.
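To gesture at what I mean by the finite-field transition (a toy illustration, not anything from the proposal under discussion):

```python
# Mod a prime, "negatives" are just other residues: -2 and 5 are the same
# element of the field with 7 elements.
p = 7
print((-2) % p)  # 5

# Quadratic residues mod p: the values that squares can take.
print(sorted({(x * x) % p for x in range(1, p)}))  # [1, 2, 4]
```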
I'm trying to figure out what you mean; my current interpretation is that my post is an example of reasoning that will lead us astray. I could be wrong about this, and would appreciate correction, as the analogy isn't quite "clicking" for me. If I'm right, I think it's generally a good norm to provide some warrant for these types of claims: I can vaguely see what you might mean, but it's not obvious enough for me to engage in productive discourse or to change my current endorsement of my opinion. I'm open to the possibility that you might be right, but I don't know what you're saying. This might just be an understanding failure on my part, in which case I'd appreciate any guidance/correction/clarification.
This post seems excellent overall, and makes several arguments that I think represent the best of LessWrong self-reflection about rationality. It also spurred an interesting ongoing conversation about what integrity means, and how it interacts with updating.
The first part of the post is dedicated to discussions of misaligned incentives, and makes the claim that poorly aligned incentives are primarily to blame for irrational or incorrect decisions. I'm a little bit confused about this, specifically that nobody has pointed out the obvious corollary: the peop...
That's a fair point; see my comment to Raemon. The way I read it, the mod consensus was that we can't just curate the post, meaning that comments are essentially the only option. To me, this means an incorrect/low-quality post isn't disqualifying, which doesn't decrease the utility of the review, just the frame under which it should be interpreted.
That's fair; I wasn't disparaging the usefulness of the comment, just pointing out that the post itself is not actually what's being reviewed, which is important, because it means that a low-quality post that sparks high-quality discussion isn't disqualifying.
Note that this review is not of the content that was nominated; the nomination justifications strongly suggest that the comment section, not the linkpost, was nominated.
(Epistemic status: I don't have much background in this. Not particularly confident, and attempting to avoid making statements that don't seem strongly supported.)
I found this post interesting and useful, because it brought a clear unexpected result to the fore, and proposed a potential model that seems not incongruent with reality. On a meta-level, I think supporting these types of posts is quite good, especially because this one has a clear distinction between the "hard thing to explain" and the "potential explanation," which seems very important to allo...
I strongly oppose collation of this post, despite thinking that it is an extremely well-written summary of an interesting argument on an interesting topic. The reason is that I believe it represents a substantial epistemic hazard, both because of the way it was written and because of the source material it comes from. I think this is particularly harmful because both justifications for nominations amount to "this post was key in allowing percolation of a new thesis unaligned with the goals of the community into community knowledge," which is a justificatio...
This seems to me like a valuable post, both on the object level, and as a particularly emblematic example of a category ("Just-so-story debunkers") that would be good to broadly encourage.
The tradeoff view of manioc production is an excellent insight, and an important objection to encourage: the original post and book (which I haven't read in its entirety) appear to have leaned too heavily on what might be described as a special case of a just-so story: the phenomenon, a behavioral difference, is explained as an absolute using a post-hoc framework, and then doe...
I think this post significantly benefits in popularity, and lacks in rigor and epistemic value, from being written in English. The assumptions the post makes in some parts contradict the judgements reached in others, and the entire post, in my eyes, does not support its conclusion. I have two main issues with the post, neither of which involve the title or the concept, which I find excellent:
First, the concrete examples presented in the article point towards a different definition of optimal takeover than is eventually reached. All of the p...
Oops, you're correct.
This review is more broadly of the first several posts of the sequence, and discusses the entire sequence.
Epistemic Status: The thesis of this review feels highly unoriginal, but I can't find where anyone else discusses it. I'm also very worried about proving too much. At minimum, I think this is an interesting exploration of some abstract ideas. Considering posting as a top-level post. I DO NOT ENDORSE THE POSITION IMPLIED BY THIS REVIEW (that leaving immoral mazes is bad), AND AM FAIRLY SURE I'M INCORRECT.
The rough thesis of "Meditations on Moloch"...
Thanks! I'm obviously not saying I want to remove this post, I enjoyed it. I'm mostly wondering how we want to norm-set going forwards.
I think you're mostly right. To be clear, I think that there's a lot of value in unfiltered information, but I mostly worry about other topics being drowned out by unfiltered information on a forum like this. My personal preference is to link out or do independent research to acquire unfiltered information, because in a community with specific views/frames of reference I think it's always going to be skewed by that community's thought, and I don't find research onerous.
I'd support either the creation of a separate [Briefs] tag that can be filtered like oth...
To effectively extend on Raemon's commentary:
I think this post is quite good, overall, and adequately elaborates on the disadvantages and insufficiencies of the Wizard's Code of Honesty beyond the irritatingly pedantic idiomatic example. However, I find the implicit thesis of the post deeply confusing (that EY's post is less "broadly useful" than it initially appears). As I understand them, the two posts are saying basically identical things, but are focused in slightly different areas, and draw very different conclusions. EY's notes the issues with the wi...
I think my comment in response to Raemon is applicable here as well. I found your argument as to why progress studies writ large is important persuasive. However, I do not feel as though this post is the correct way to go about that. Updating towards believing that progress studies are important has actually increased my conviction that this post should not be collated: important areas of study deserve good models, and given the diversity of posts in progress studies, the exact direction is still very nebulous and susceptible to influences like collation.
T...
I'm a bit confused: I thought that this was what I was trying to say. I don't think this is a broadly accurate portrayal of reasons for action as discussed elsewhere in the story; see great-grandparent for why. Separately, I think it's a really bad idea to be implicitly tying harm done by AI (hard sci-fi) to a prerequisite of anthropomorphized consciousness (fantasy). Maybe we agree, and are miscommunicating?
(strong-upvoted, I think this discussion is productive and fruitful)
I think this is an interesting distinction. I think I'm probably interpreting the goals of a review as more of a "Let's create a body of gold-standard work," whereas it seems as though you're interpreting it more through a lens of "Let's showcase interesting work." I think the central question where these two differ is exemplified by this post: what happens when we get a post that is nice to have in small quantities? In the review-as-goal world, that's not a super helpful post to curate. I...
I notice I am confused.
I feel as though these types of posts add relatively little value to LessWrong; however, this post has quite a few upvotes. I don't think novelty is a prerequisite for a high-quality post, but I feel as though this post was both not novel and not relevant, which worries me. I think that most of the information presented in this article is a. Not actionable, b. Not related to LessWrong, and c. Easily replaceable with a Wikipedia or similar search. This would be my totally spitballed test for a topical post: at least one of these 3 must...
I strongly believe all of the following:
I'll walk through each of those one-by-one.
First, progress studies. The cu...
I agree that it's narratively exciting; I worry that it makes the story counterproductive in its current form (i.e., computer people thinking "computers don't think like that, so this is irrelevant").
I'm pretty impressed by this post overall, not necessarily because of the object-level arguments (though those are good as well), but because I think it's emblematic of a very good epistemic habit that is unfortunately rare. The debate between Hanson and Zvi over this, like habryka noted, is an excellent example of how to do good object-level debate that reveals details of shared models over text. I suspect that this is the best post to canonize to reward that, but I'm not convinced of this. On the meta-level, the one major improvement/further work I'd lik...
I think Raemon's comments accurately describe my general feeling about this post: intriguing, but not well-optimized for a post.
However, I also think that this post may be the source of a subtle misconception in simulacra levels that the broader LessWrong community has adopted. Specifically, I think the post blurs the distinction between 3 and 4, and tries to draw the false analogy that 1:2::3:4. Going from 3 (masks the absence of a profound reality) to 4 (no profound reality) is more clearly described not as a "widespread understanding" that they...
I think this post is incredibly useful as a concrete example of the challenges of seemingly benign powerful AI, and makes a compelling case for serious AI safety research being a prerequisite to any safe further AI development. I strongly dislike part 9, as painting the Predict-o-matic as consciously influencing others' personalities at the expense of short-term prediction error seems contradictory to the point of the rest of the story. I suspect I would dislike part 9 significantly less if it were framed in terms of a strategy to maximize predictive accuracy....
I think that the main thing that confuses me is the nuance of SL4, and I also think that's the main place where the rationalist community's understanding/use of simulacra levels breaks down on the abstract level.
One of the original posts bringing simulacra to LessWrong explicitly described the effort to disentangle simulacra from Marxist European philosophers. I think that this was entirely successful, and intuitive for the first 3 levels, but I think that the fourth simulacra level is significantly more challenging to disentangle from the ideological thes...
I'm really curious to see some of the raw output (not curated) to try to get an estimate of how many oysters you have to pick through to find the pearls. (I'm especially interested w.r.t. the essay-like things: the extension of the essay on assertions was by far the scariest and most impressive thing I've seen from GPT-3, because the majority of its examples were completely correct, and it held a thesis for the majority of the piece.)
On a similar note, I know there have been experiments using either a differently-trained GPT or other text-prediction models
...I think this might just be a rephrasal of what several other commenters have said, but I found this conception somewhat helpful.
Based on intuitive modeling of this scenario and several others like it, I found that I ran into the expected "paradox" in the original statement of the problem, but not in the statement where you roll one die to determine the 1/3 chance of my being offered the wager, and then run the original wager. I suspect that the reason why is something like this:
Losing 1B is a uniquely bad outcome, worse than its monetary utility would imply,
...
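For reference, plugging in the standard textbook Allais payoffs (an assumption on my part; the post's exact numbers may differ): 1A is $1M with certainty, while 1B is an 89% chance of $1M, a 10% chance of $5M, and a 1% chance of nothing. Raw expected value favors 1B,

\[ \mathbb{E}[1A] = \$1\text{M}, \qquad \mathbb{E}[1B] = 0.89(\$1\text{M}) + 0.10(\$5\text{M}) = \$1.39\text{M}, \]

so a strict preference for 1A has to come from the 1% chance of nothing carrying disutility beyond its monetary stake, which is the asymmetry the point above gestures at.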
See my above comment, where I was trying to get a handle on this. It increasingly seems like the answer is that most of it comes from breakthroughs plus serial intervals.
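To spell out the serial-interval part of that (a standard back-of-the-envelope relation, not a claim about the fitted values): for reproduction number \(R\) and mean serial interval \(T\), the exponential growth rate is roughly

\[ r \approx \frac{\ln R}{T}, \]

so shortening \(T\) at fixed \(R\) inflates the observed growth rate, and a variant can look far more transmissible than it is if breakthrough infections also shorten the serial interval.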