All of fiddler's Comments + Replies

See my above comment, where I was trying to get a handle on this. It increasingly seems like the answer is that most of it comes from breakthrough infections + serial intervals.

I cobbled together a compartmental fitting model for Omicron to try to get a handle on some viral characteristics empirically. It's not completely polished yet, but this is late enough already, so I figured the speed premium justified sharing this version in a comment before writing up a full-length explanation of some of the choices made (e.g. whether to treat vaccination as some chance of removal or as decreased risk per interaction).

You can find the code here in an interactive environment. 

https://mybinder.org/v2/gh/pasdesc/Omicron-Model-Fittin... (read more)
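For a rough sense of the shape of such a fit, here's a minimal SEIR sketch in Python. Everything in it is illustrative: the compartment structure, the vaccine parameters (`vax_efficacy`, `vax_frac`), and the synthetic "data" are my own assumptions for the sketch, and the actual notebook at the link may make different choices (e.g. treating vaccination as removal rather than as reduced risk).

```python
# A minimal sketch of a compartmental (SEIR) fit; illustrative only, the
# linked notebook may structure things differently.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def seir(t, y, beta, sigma, gamma, vax_efficacy, vax_frac):
    S, E, I, R = y
    # Vaccination modeled here as decreased risk per interaction (scaling
    # the transmission rate) rather than as outright removal into R.
    eff_beta = beta * (1 - vax_efficacy * vax_frac)
    N = S + E + I + R
    dS = -eff_beta * S * I / N
    dE = eff_beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

def model_cases(params, t, y0):
    beta, sigma, gamma = params
    sol = solve_ivp(seir, (t[0], t[-1]), y0, t_eval=t,
                    args=(beta, sigma, gamma, 0.7, 0.6))  # assumed vaccine values
    return sol.y[2]  # infectious compartment over time

t = np.arange(0, 60)
y0 = [1e6 - 100, 50, 50, 0]
# Synthetic "observed" data generated from known parameters, standing in
# for real case counts.
observed = model_cases([0.9, 0.25, 0.2], t, y0)

fit = least_squares(lambda p: model_cases(p, t, y0) - observed,
                    x0=[0.5, 0.2, 0.1], bounds=(0, 2))
print("fitted beta, sigma, gamma:", fit.x)
```

The design question flagged above shows up as a single line here: modeling vaccination as decreased risk per interaction scales the transmission rate, whereas modeling it as removal would instead move a fraction of S directly into R.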

Ok, I think I understand our crux here. In the fields of math I’m talking about, 3^(-1) is a far better way to express the multiplicative inverse of 3, simply because it’s not dependent on any specific representation scheme and immediately carries the relevant meaning. I don’t know enough about the pedagogy of elementary school math to opine on that.

Sorry for the lack of clarity: I'm not talking about high school algebra, I'm talking about abstract algebra. I guess if we're writing -2 as a simplification, that's fine, but it seems to introduce a kind of meaningless extra step. I don't quite understand the "special cases" you're talking about, because it seems to me that you can eliminate subtraction without doing this. In fact, for anything more abstract than calculus, that's standard: groups, for example, don't (usually) have subtraction defined other than as addition of the inverse.

2gilch
...9997 (the 9s repeating forever) is the simplified form of −3, not the other way around, in the same sense that 0.333... is the simplified form of 3^(−1). Why do we think it's consistent that we can express a multiplicative inverse without an operator, but we can't do the same for an additive inverse? A number system with complements can express a negative number on its own, rather than requiring you to express it in terms of a positive number and an inversion operator, but you still need the operator for other reasons. ^8 seems no more superfluous as an additive inverse of 2 than 0.5 is as its multiplicative inverse. Either both are superfluous, or neither is.

That was kind of my point, as far as the algebra is concerned: subtraction, fundamentally, is a negate-and-add, not a primitive. But I was talking about children doing arithmetic, and they can do it the same way. Teach them how to do negation (using complements, not tacking on a sign) instead of subtraction, and you're done. You never have to memorize the subtraction table.

This seems super annoying when you start dealing with more abstract math: while it's plausibly more intuitive as a transition into finite fields (thinking specifically of quadratic residues, for example), it would really, really suck for graphing, functions, calculus, or any sort of coefficient-based work. It also sounds tremendously annoying for conceptualizing bases/field-adjoins/sigma notation.

2gilch
Maybe it's not better. I could be wrong. My opinion is weakly held. But I'm talking about eliminating the arithmetic of subtraction, not eliminating the algebra of negation. You'd still have a minus sign you can do algebra with, but it would be strictly unary. I don't see high-school algebra changing very much with that.

We'd have some more compact notation to represent the repeating-9 prefix (...999); maybe I'll use ^ for now. So you can still write −2 for algebra; it just simplifies to ^8 for when you need to do arithmetic on it. And instead of writing x−y, you write x + −y. Algebra seems about the same to me. Maybe a little easier, since we lost a superfluous non-commutative operator.

In base six, in complement form, the ^ now represents a repeating-5 prefix (...555), so a number line would look like

... ^43 ^44 ^45 ^0 ^1 ^2 ^3 ^4 ^5 0 1 2 3 4 5 10 ...

i.e. all the numbers increment forwards instead of flipping at zero. You can plug these x-values into a y= formula for graphing, and it would seem to work the same. Multiplication still works on complements: computers do integer multiplies using two's complement binary numbers. Maybe a concrete example of why you think graphing would be more difficult would help me understand where you're coming from.
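To make gilch's negate-and-add arithmetic concrete, here is a minimal sketch in Python. A fixed width of D digits stands in for the infinite repeating-9 prefix (the ^ above), exactly as a fixed word size does for two's complement binary; the width and names are my own choices for illustration.

```python
# A minimal sketch of "negate and add" arithmetic via ten's complement.
# A fixed width of D digits stands in for the infinite repeating-9 prefix
# (written ^ above): with D = 4, -3 is represented as 9997.
D = 4
MOD = 10 ** D

def negate(x):
    # Nines' complement of each digit, then add 1. No borrowing, and no
    # subtraction table -- each digit maps independently to 9 minus itself.
    comp = int("".join(str(9 - int(c)) for c in f"{x:0{D}d}"))
    return (comp + 1) % MOD

def add(a, b):
    return (a + b) % MOD

def to_int(x):
    # Read values with a leading 9 (the ^ prefix) as negative.
    return x - MOD if x >= MOD // 2 else x

a, b = 12, 15
diff = add(a, negate(b))          # 12 - 15, computed as 12 + 9985
print(diff, "->", to_int(diff))   # prints: 9997 -> -3
```

The point of the sketch is that the only per-digit fact `negate` needs is 9 minus each digit; everything else is the ordinary addition table.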

I'm trying to figure out what you mean. My current interpretation is that my post is an example of reasoning that will lead us astray. I could be wrong about this, and would appreciate correction, as the analogy isn't quite "clicking" for me. If I'm right, I think it's generally a good norm to provide some warrant for these types of claims: I can vaguely see what you might mean, but it's not obvious enough for me to engage in productive discourse or change my current endorsement of my opinion. I'm open to the possibility that you might be right, but I don't know what you're saying. This might just be an understanding failure on my part, in which case I'd appreciate any guidance/correction/clarification.

This post seems excellent overall, and makes several arguments that I think represent the best of LessWrong self-reflection about rationality. It also spurred an interesting ongoing conversation about what integrity means, and how it interacts with updating.

The first part of the post is dedicated to discussions of misaligned incentives, and makes the claim that poorly aligned incentives are primarily to blame for irrational or incorrect decisions. I'm a little bit confused about this, specifically because nobody has pointed out the obvious corollary: the peop... (read more)

5Raemon
Minor note: the large paragraph blocks make this hard to read.

That's a fair point; see my comment to Raemon. The way I read it, the mod consensus was that we can't just curate the post, meaning that comments are essentially the only option. To me, this means an incorrect/low-quality post isn't disqualifying, which doesn't decrease the utility of the review, just changes the frame under which it should be interpreted.

That's fair. I wasn't disparaging the usefulness of the comment, just pointing out that the post itself is not actually what's being reviewed, which is important, because it means that a low-quality post that sparks high-quality discussion isn't disqualifying.

Note that this review is not of the content that was nominated; the nomination justifications strongly suggest that the comment section, not the linkpost, was nominated.

4Rohin Shah
As I read it, two of the nominations are for the post itself, and one is for the comments...

...is what I was going to say until I checked and saw that this comment is a review, not a nomination. So one is for the post, and one for the comments.

----

I agree with Raemon that even if the nomination is for the comments, evaluating the post is important. I actually started writing a section on the comments, but didn't have that much to say, because they all seem predicated on the post stating something true about the world.

The highest-voted top-level comment, as well as Zvi's position in this comment thread, seem to basically be considering the case where academia as a whole is net negative. I broadly agree with Zvi that it is not acceptable for an academic to go around faking data; if that were the norm in academia I expect I would think that academia was net negative and one could not justify joining it (unless you were going to buck the incentives). But... that isn't the norm in academia. I feel like these comments are only making an important point if you actually believe the original post, which I don't.

The other comments seem to have only a little content, or to be on relatively tangential topics.
7Raemon
I think the comments are in large part about the post, though, and it matters a lot whether the post is wrong or misleading. I also think that, while this post wouldn't be eligible for the 2019 Review, an important point of the overall review process is still to have a coordinated time where everyone evaluates posts that have permeated the culture. I think this review is quite valuable along those lines.

(Epistemic status: I don’t have much background in this. Not particularly confident, and attempting to avoid making statements that don’t seem strongly supported.)

I found this post interesting and useful, because it brought a clear unexpected result to the fore, and proposed a potential model that seems not incongruent with reality. On a meta-level, I think supporting these types of posts is quite good, especially because this one has a clear distinction between the "hard thing to explain" and the "potential explanation," which seems very important to allo... (read more)

I strongly oppose collation of this post, despite thinking that it is an extremely well-written summary of an interesting argument on an interesting topic. I do so because I believe it represents a substantial epistemic hazard, because of the way it was written and the source material it comes from. I think this is particularly harmful because both justifications for nominations amount to "this post was key in allowing percolation of a new thesis unaligned with the goals of the community into community knowledge," which is a justificatio... (read more)

This seems to me like a valuable post, both on the object level, and as a particularly emblematic example of a category ("Just-so-story debunkers") that would be good to broadly encourage.

The tradeoff view of manioc production is an excellent insight, and an important objection to encourage: the original post and book (which I haven't read in their entirety) appear to have leaned too heavily on what might be described as a special case of a just-so story: the phenomenon (a behavior difference) is explained as an absolute by using a post-hoc framework, and then doe... (read more)

I think this post significantly benefits in popularity, and lacks in rigor and epistemic value, from being written in English. The assumptions made in some parts of the post contradict the judgements reached in others, and the entire post, in my eyes, does not support its conclusion. I have two main issues with the post, neither of which involves the title or the concept, which I find excellent:

First, the concrete examples presented in the article point towards a different definition of optimal takeover than is eventually reached. All of the p... (read more)

Oops, you're correct. 

1Yoav Ravid
Nice, much clearer now :)

This review covers, more broadly, the first several posts of the sequence, and discusses the sequence as a whole.

Epistemic Status: The thesis of this review feels highly unoriginal, but I can't find where anyone else discusses it. I'm also very worried about proving too much. At minimum, I think this is an interesting exploration of some abstract ideas. I'm considering posting this as a top-level post. I DO NOT ENDORSE THE POSITION IMPLIED BY THIS REVIEW (that leaving immoral mazes is bad), AND AM FAIRLY SURE I'M INCORRECT.

The rough thesis of "Meditations on Moloch"... (read more)

2supposedlyfun
I would be very interested in your proposed follow-up but don't have enough game theory to say whether the idea has obvious flaws.
3Yoav Ravid
I found your epistemic status confusing. It reads like it's about Zvi's post, but I assume it's supposed to be about your review. (Perhaps because you referenced your review as a post/article.)

Thanks! I'm obviously not saying I want to remove this post; I enjoyed it. I'm mostly wondering how we want to set norms going forward.

I think you're mostly right. To be clear, I think that there's a lot of value in unfiltered information, but I mostly worry about other topics being drowned out by unfiltered information on a forum like this. My personal preference is to link out or do independent research to acquire unfiltered information in a community with specific views/frames of reference, because I think it's always going to be skewed by that community's thought, and I don't find research onerous.

I’d support either the creation of a separate [Briefs] tag that can be filtered like oth... (read more)

5johnswentworth
Good suggestion, and I expect some mechanism along these lines will show up if and when it becomes significant.

To extend Raemon's commentary:

I think this post is quite good, overall, and adequately elaborates on the disadvantages and insufficiencies of the Wizard's Code of Honesty beyond the irritatingly pedantic idiomatic example. However, I find the implicit thesis of the post (that EY's post is less "broadly useful" than it initially appears) deeply confusing. As I understand them, the two posts are saying basically identical things, but are focused in slightly different areas, and draw very different conclusions. EY's post notes the issues with the wi... (read more)

I think my comment in response to Raemon is applicable here as well. I found your argument for why progress studies writ large are important persuasive. However, I do not feel as though this post is the correct way to go about that. Updating towards believing that progress studies are important has actually increased my conviction that this post should not be collated: important areas of study deserve good models, and given the diversity of posts in progress studies, the exact direction is still very nebulous and susceptible to influences like collation.

T... (read more)

4johnswentworth
Ok, I think our crux here is about how much posts should explicitly point out how their material connects to everything else. Personally, I think there's a lot of value in posts which explicitly do not explain how they connect, because explaining connections usually means pulling in a particular framing and suggesting particular questions/frameworks. In a pre-paradigmatic field, we don't know what the right questions or frames are, so there's a lot of value in just presenting the information without much framing. It's the "go out and look at the world" part of rationality.

Now, a downside of this sort of post is that many people will come along who don't have any idea how the material relates to anything. There's no hand-holding in the interpretation/connections, so readers have to handle that part on their own, and not everyone is going to have enough prior scaffolding to see why the material matters at all. (I've definitely seen this on many of my own posts, when I present a result without explaining how it fits in with everything else.)

I think the best way to handle this sort of trade-off is to have some posts which present information (especially concrete examples) without much framing, and then separately have posts which try to frame that information and explain how things fit together (which usually also means positing hypotheses/theories). It's very similar to the separation of empirical vs theoretical work we see in a lot of the sciences. We already have a lot of the latter sort of post, but could use a lot more of the former. So e.g. "this type of historical brief could easily proliferate to an excessive extent" is something I'd consider a very positive outcome.

I'm a bit confused: I thought that this was what I was trying to say. I don't think this is a broadly accurate portrayal of its reasons for action as discussed elsewhere in the story; see great-grandparent for why. Separately, I think it's a really bad idea to implicitly tie harm done by AI (hard sci-fi) to a prerequisite of anthropomorphized consciousness (fantasy). Maybe we agree, and are miscommunicating?

4Ben Pace
Yeah, my bad, I didn't read your initial review properly (I saw John's comment in Recent Discussion and made some fast inferences about what you originally said). Sorry about that! Thx for the review :)

(strong-upvoted, I think this discussion is productive and fruitful)

I think this is an interesting distinction. I think I'm probably interpreting the goals of a review as more of a "Let's create a body of gold-standard work," whereas it seems as though you're interpreting it more through a lens of "Let's showcase interesting work." I think the central question where these two differ is exemplified by this post: what happens when we get a post that is nice to have in small quantities? In the review-as-goal world, that's not a super helpful post to curate. I... (read more)

I notice I am confused.

I feel as though these types of posts add relatively little value to LessWrong; however, this post has quite a few upvotes. I don't think novelty is a prerequisite for a high-quality post, but I feel as though this post was both not novel and not relevant, which worries me. I think that most of the information presented in this article is a. not actionable, b. not related to LessWrong, and c. easily replaceable with a Wikipedia or similar search. This would be my totally spitballed test for a topical post: at least one of these 3 must... (read more)

4jasoncrawford
OP here. I will recuse myself from the conversation about whether this deserves to be in any list or collection. However, on the topic of whether it belongs on LW at all, I'll just note that I was specifically invited by LW admins to cross-post my blog here.

I strongly believe all of the following:

  • Progress Studies should be a core topic on LessWrong, and is directly relevant to LessWrong's central mission
  • On most of LessWrong's core topics, discussion is usually too abstract, and would benefit from more concreteness/object-level discussion
  • Expanding the set of topics regularly discussed on LessWrong to include more object-level science/history/economics would dramatically improve the quality of discussion on topics which are already common

I'll walk through each of those one-by-one.

First, progress studies. The cu... (read more)

2magfrump
I think this comment convinces me that the post should NOT be curated. I upvoted primarily for H1, because I enjoyed reading it, and partly for H2. I think reading more gears-level descriptions of things from day-to-day life is helpful for keeping an accurate reductionist picture of reality. In particular, I want to reinforce in myself the idea that mundane inventions (1) have a long history with many steps, (2) solve specific problems, and (3) are part of an ongoing process that contains problems yet to be solved. That makes this post nice for me to read day to day, but it makes it definitively NOT a post that I care about revisiting or that I think expands the type of thinking that the curation is trying to build.
7Raemon
(I disagree with this comment, but upvoted it because I think it does a good job exploring the question "how do we evaluate blogposts of varying types?", which I still feel pretty confused about overall.)

There are maybe two separate questions here: "does this deserve a bunch of upvotes?" and "does this deserve to be in the 2019 Review Book(s)?" I didn't upvote this post, but I might have, for a couple reasons. One major one is novelty. Right now there aren't that many LessWrong posts that explore object-level worldmodeling. Rather than ask "is this a fit for LessWrong's main topics?", I think it's actually often useful to ask "does this expand on LessWrong's main topics in a way that is potentially fruitful?". I think intellectual progress depends in part on people curiously exploring and writing up things that they are interested in, even if we don't have a clear picture of how they fit together.

Separately, I do think Progress Studies are (probably) particularly important to what I think of as one of LessWrong's central goals: using applied rationality to put a dent in the universe. I'm not sure this particular piece was crucial (I haven't re-read it recently). But I think understanding how human progress works, in the general sense, is disproportionately likely to yield insight into how to cause more progress to happen in important domains. I think that's valuable enough to consider upvoting, and valuable enough to consider it for a retrospective best-of Review.

I think in both cases it depends more on the specifics of the post, and whether it, in fact, led to some kind of later insight. (Part of the point of a retrospective review is you don't have to guess whether something would provide useful insight – you know whether it actually helped you in the past 1.5 years.)

I agree that it's narratively exciting; I worry that it makes the story counterproductive in its current form (i.e. computer people thinking "computers don't think like that, so this is irrelevant").

-1Ben Pace
Either it's a broadly accurate portrayal of its reasons for action or it isn't, just because people find hard sci-fi weird doesn't mean you should make it into a fantasy. Don't dilute art for people who don't get it.

I'm pretty impressed by this post overall, not necessarily because of the object-level arguments (though those are good as well), but because I think it's emblematic of a very good epistemic habit that is unfortunately rare. The debate between Hanson and Zvi over this, as habryka noted, is an excellent example of how to do good object-level debate that reveals details of shared models over text. I suspect that this is the best post to canonize to reward that, but I'm not convinced of this. On the meta-level, the one major improvement/further work I'd lik... (read more)

I think Raemon's comments accurately describe my general feeling about this post: intriguing, but not well-optimized for a post.

However, I also think that this post may be the source of a subtle misconception in simulacra levels that the broader LessWrong community has adopted. Specifically, I think the post blurs the distinction between 3 and 4, and tries to draw the false analogy that 1:2::3:4. Going from 3 (masks the absence of a profound reality) to 4 (no profound reality) is more clearly described not as a "widespread understanding" that they... (read more)

4jimrandomh
I think this points to a mismatch between Benquo and Baudrillard, but not to a problem with the version of the concept Benquo uses. Given how successful the (modified, slightly different) concept has been, I consider this more of a problem with Baudrillard's book than a problem with Benquo's post.

I think this post is incredibly useful as a concrete example of the challenges of seemingly benign powerful AI, and makes a compelling case for serious AI safety research being a prerequisite to any safe further AI development. I strongly dislike part 9, as painting the Predict-o-matic as consciously influencing others' personalities at the expense of short-term prediction accuracy seems contradictory to the point of the rest of the story. I suspect I would dislike part 9 significantly less if it were framed in terms of a strategy to maximize predictive accuracy.... (read more)

6abramdemski
I share a feeling that part 9 is somehow bad, and I think your points are fair.
6johnswentworth
I don't think I'm the target audience for this story so I'm not leaving a full review, but +1 to this. Part 9 seems to be trying to display another possible failure mode (specifically inner misalignment), but it severely undercuts the core message from the rest of the post: that a predictive accuracy optimizer is dangerous even if that's all it optimizes for. I do think an analogous story which focused specifically on inner optimization would be great, but mixing it in here dilutes the main message.
Answer by fiddler50

I think that the main thing that confuses me is the nuance of SL4, and I also think that's the main place where the rationalist community's understanding/use of simulacra levels breaks down on the abstract level.

One of the original posts bringing simulacra to LessWrong explicitly described the effort to disentangle simulacra from Marxist European philosophers. I think that this was entirely successful, and intuitive for the first 3 levels, but I think that the fourth simulacra level is significantly more challenging to disentangle from the ideological thes... (read more)

1Luke Allen
I define SL4 in terms of a description I heard once of a summary of Baudrillard's work: a simulacrum is when a simulation breaks off and becomes its own thing, but still connected to the original. And whether or not that's how Baudrillard thought of SL4, it's a useful concept on its own. (My simulacrum of "simulacrum" as it were.) For example, a smartphone is a miniature computer and video game console that also has telephone capabilities; it's a simulacrum of Bell's talk-over-telegraph-wires device. The iPod Video is an almost identical piece of hardware and software minus the telephony, and even that can be simulated with the right VOIP app. I can imagine someone saying, "Well, it's still essentially a smartphone." But we don't say the same of a laptop computer using a VOIP app, or even a jailbroken Nintendo Switch or DSi. We've reached the edge of the simulacrum.

I'm really curious to see some of the raw output (not curated), to try to get an estimate of how many oysters you have to pick through to find the pearls. (I'm especially interested w.r.t. the essay-like things: the extension of the essay on assertions was by far the scariest and most impressive thing I've seen from GPT-3, because the majority of its examples were completely correct, and it held a thesis for the majority of the piece.)

On a similar note, I know there have been experiments using either a differently-trained GPT or other text-prediction models

... (read more)
3gwern
You can read the random sample dump to get an idea of that, or Max Woolf's repo (both of which I link around the beginning). I'm not doing that for any of my prompts because right now the Playground is just way too much of a pain and errors out too regularly to make it feasible to generate, say, 100 1024-token completions for a specific prompt. I would need to get set up with the Python library for the API, and I've been busy exploring prompts & writing them up rather than programming.

Yes, best-of rankers like Meena are basically just a ranker which happens to use the same model to estimate & score by total likelihood of the final sample completion. It works because the final sample may have a different total and better likelihood than the partial completions would indicate, and if you greedily maximized, you would immediately fall into repetition traps, while quasi-random (but still local) samples of the tree appear to avoid those very-high-likelihood traps in favor of sensible but still high-likelihood completions.

Preference learning would be nice, but at least for GPT-2 it didn't work too well for me. I don't know if you could finetune a sanity-checking GPT-3 by doing something like flipping texts to generate logical vs illogical completions.
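For readers unfamiliar with best-of ranking, here's a minimal sketch of the Meena-style scheme gwern describes: sample several completions, then re-rank the finished samples by total likelihood under the same model. The `sample`/`logprob` callables and the toy stand-in "model" are my own placeholders, not any particular API.

```python
import random

def best_of(prompt, sample, logprob, n=20):
    # sample(prompt) -> one sampled completion
    # logprob(prompt, completion) -> total log P(completion | prompt),
    # scored by the same model that generated the samples.
    candidates = [sample(prompt) for _ in range(n)]
    # Greedy argmax decoding falls straight into repetition traps; sampling
    # quasi-randomly first and then re-ranking *finished* completions by
    # total likelihood avoids them, since a full sample can end up scoring
    # better than its partial prefixes suggested.
    return max(candidates, key=lambda c: logprob(prompt, c))

# Stand-in "model" so the sketch runs end to end; swap in real API calls.
completions = ["a plausible ending.", "word word word word", "an odd ending?"]
scores = {c: random.uniform(-60.0, -20.0) for c in completions}
pick = best_of("Prompt text:",
               lambda p: random.choice(completions),
               lambda p, c: scores[c],
               n=10)
print(pick)
```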

I think this might just be a rephrasal of what several other commenters have said, but I found this conception somewhat helpful.

Based on intuitive modeling of this scenario and several others like it, I found that I ran into the expected "paradox" in the original statement of the problem, but not in the statement where you roll one die to determine the 1/3 chance of me being offered the wager, followed by the original wager. I suspect that the reason why is something like this:

Losing 1B is a uniquely bad outcome, worse than its monetary utility would imply,

... (read more)
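For concreteness, here is a sketch of the two framings' arithmetic. I'm assuming classic Allais-style payoffs for 1A and 1B; the post's exact numbers may differ, but the structural point (a common filter rescales every outcome's probability equally) doesn't depend on them.

```python
from fractions import Fraction as F

# Assumed payoffs in the style of the classic Allais problem (illustrative):
# 1A: $1M with certainty; 1B: 89% $1M, 10% $5M, 1% nothing.
lottery_1A = [(F(1), 1_000_000)]
lottery_1B = [(F(89, 100), 1_000_000), (F(10, 100), 5_000_000), (F(1, 100), 0)]

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

def filtered(lottery, p_offer):
    # The restated version: first roll a die, and with probability p_offer
    # you are offered the original wager; otherwise you get nothing.
    return [(p_offer * p, x) for p, x in lottery] + [(1 - p_offer, 0)]

for name, lot in [("1A", lottery_1A), ("1B", lottery_1B)]:
    print(name,
          float(expected_value(lot)),
          float(expected_value(filtered(lot, F(1, 3)))))
# The 1/3 filter rescales every expected value by the same factor, so it
# cannot change which wager is better -- yet intuitions often flip anyway.
```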