Ruby

LessWrong Team


I have signed no contracts or agreements whose existence I cannot mention.

Sequences

LW Team Updates & Announcements
Novum Organum

Comments

Ruby42

Errors are my own

At first blush, I find this caveat amusing.

1. If there are errors, we can infer that those providing feedback were unable to identify them.
2. If the author was fallible enough to have made errors, perhaps they are fallible enough to miss errors in input sourced from others.

What purpose does it serve? Given it's often paired with "credit goes to...<list of names>", it seems like an attempt to ensure that people providing feedback/input on a post are exposed only to upside from doing so, while the author takes all the downside reputational risk if the post is received poorly or exposed as flawed.

Maybe this works? It seems that as a capable reviewer/feedback-haver, I might agree to offer feedback on a poor post written by a poor author, perhaps pointing out flaws, and my having given feedback on it might reflect poorly on my time allocation, but the bad output shouldn't be assigned to me. Whereas if my name is attached to something quite good, it's plausible that I contributed to that. I think because it's easier to help a good post be great than to save a bad post.

But these inferences seem like they're there to be made and aren't changed by what an author might caveat at the start. I suppose the author might want to remind the reader of them rather than make them true through an utterance.

Upon reflection, I think (1) doesn't hold. The reviewers/input makers might be aware of the errors but be unable to save the author from them. (2) That the reviewers made mistakes that have flowed into the piece seems all the more likely the worse the piece is overall, since we can update that the author wasn't likely to catch them.

On the whole, I think I buy the premise that we can't update too negatively on reviewers and feedback-givers from their having deigned to give feedback on something bad, though their time allocation is suspect. Maybe they're bad at saying no, maybe they're bad at dismissing people whose ideas aren't that good, maybe they have hope for this person. Unclear. Upside I'm more willing to attribute.

Perhaps I would replace the "errors are my own[, credit goes to]" with a reminder or pointer that these are the correct inferences to make. The words themselves don't change them? Not sure, just musing here.

Edited To Add: I do think "errors are my own" is a very weird kind of social move that's being performed in an epistemic context, and I don't like it.

Ruby169

This post is comprehensive, but I think "safetywashing" and "AGI is inherently risky" are placed far too close to the end and get too little treatment, as I think they're the most significant reasons against.

This post also makes no mention of race dynamics and how contributing to them might outweigh the rest, and, as RyanCarey says elsethread, doesn't talk about other temptations and biases that push people towards working at labs and would apply even if doing so were on net bad.

Ruby7-7

Curated. Insurance is a routine part of life, whether it be the car and home insurance we necessarily buy, the Amazon-offered protection one reflexively declines, or the insurance we know doctors must have, businesses must have, and so on.

So it's pretty neat when someone comes along and (compellingly) says "hey guys, you (or at least most people) are wrong about when insurance makes sense to buy, the reasons you have are wrong, here's the formula".

While assumptions can be questioned, e.g. the infinite badness of going bankrupt, and other factors can be raised, this is just a neat technical treatment of a very practical, everyday question. I expect that I'll be thinking in terms of this myself when making various insurance choices. Kudos!
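(For my own future reference, here's a minimal sketch of the kind of expected-log-wealth comparison I take the post to be pointing at. The function name and the numbers are mine, not the author's, so treat this as an illustration of the idea rather than the post's exact formula.)

```python
import math

def insurance_worth_it(wealth, premium, loss_probability, loss_amount):
    # Kelly-style comparison (my paraphrase, not necessarily the post's exact formula):
    # buy insurance when expected log-wealth with the premium paid beats
    # expected log-wealth while bearing the risk yourself.
    log_wealth_insured = math.log(wealth - premium)
    log_wealth_uninsured = (
        loss_probability * math.log(wealth - loss_amount)
        + (1 - loss_probability) * math.log(wealth)
    )
    return log_wealth_insured > log_wealth_uninsured

# Hypothetical numbers: $50k wealth, $500 premium against a 1% chance of a $40k loss.
print(insurance_worth_it(50_000, 500, 0.01, 40_000))  # True: the large loss is worth insuring
# A small loss with a premium above its expected cost usually isn't:
print(insurance_worth_it(50_000, 100, 0.20, 400))     # False
```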

Ruby146

Curated. This is a good post and in some ways ambitious as it tries to make two different but related points. One point – that AIs are going to increasingly commit shenanigans – is in the title. The other is a point regarding the recurring patterns of discussion whenever AIs are reported to have committed shenanigans. I reckon those patterns are going to be tough to beat, as strong forces (e.g. strong pre-existing conviction) cause people to take up the stances they do, but if there's hope for doing better, I think it comes from understanding the patterns.

There's a good round-up of recent results in here that's valuable on its own, but the post goes further and sets out to do something pretty hard in advocating for the correct interpretation of the results. This is hard because I think the correct interpretation is legitimately subtle and nuanced, with the correct update depending on your starting position (as Zvi explains). It sets out to do this and succeeds.

Lastly, I want to express my gratitude for Zvi's hyperlinks to lighter material, e.g. "Not great, Bob" and "Stop it!" It's a heavy world with these topics of AI, and the lightness makes the pill go down easier. Thanks!

Ruby20

Yes, true, fixed, thanks!

Ruby20

Dog: "Oh ho ho, I've played imaginary fetch before, don't you worry."

Ruby113

My regular policy is to not frontpage newsletters, however I frontpaged this one as it's the first in the series and I think it's neat for more people to know this is a series Zvi intends to write.

Ruby51

Curated! I think it's generally great when people explain what they're doing and why in a way legible to those not working on it. Great because it lets others potentially get involved, build on it, expose flaws or omissions, etc. This one seems particularly clear and well written. While I haven't read all of the research, nor am I particularly qualified to comment on it, I like the idea of a principled/systematic approach behind it, in comparison to a lot of work that isn't coming from a deeper, bigger framework.

(While I'm here though, I'll add a link to Dmitry Vaintrob's comment that Jacob Hilton described as the "best critique of ARC's research agenda that I have read since we started working on heuristic explanations". Eliciting such feedback is the kind of good thing that comes out of writing up agendas – it's possible or likely Dmitry was already tracking the work and already had these critiques, but a post like this seems like a good way to propagate them and have a public back and forth.)

Roughly speaking, if the scalability of an algorithm depends on unknown empirical contingencies (such as how advanced AI systems generalize), then we try to make worst-case assumptions instead of attempting to extrapolate from today's systems.

I like this attitude. The human default, I think often in alignment work too, is to argue why one's plan will work and find stories for that; adopting the opposite methodology, especially given the unknowns, is much needed in alignment work.

Overall, this is neat. Kudos to Jacob (and the rest of the team) for taking the time to put this all together. It doesn't seem all that quick to write, and I think it'd be easy to think they ought not to take time off from further object-level research to write it. Thanks!

Ruby72

Curated. I really like that even though LessWrong is 1.5 decades old now and has Bayesianism assumed as the background paradigm while people discuss everything else, nonetheless we can have good exploration of our fundamental epistemological beliefs.

The descriptions of unsolved problems, or at least of the incompleteness of Bayesianism, strike me as technically correct. Like others, I'm not convinced of Richard's favored approach, but it's interesting. In practice, I don't think these problems undermine the use of Bayesianism in typical LessWrong thought. For example, I never thought of credences as being applied rigorously to "propositions", but more to "hypotheses" or possibilities for how things are, which could already be framed as models too. Context-dependent terms like "large", or quantities without explicit tolerances like "500ft", are the kind of thing you taboo or reduce if necessary, whether for your own reasoning or for a bet.

That said, I think the claims about mistakes and downstream consequences of the way people do Bayesianism are interesting. I'm reading a claim here I don't recall seeing before. Although we already knew that bounded reasoners aren't logically omniscient, Richard is adding a claim (if I'm understanding correctly) that this means that no matter how much strong evidence we technically have, we shouldn't have really high confidence in any domain that requires heavy processing of that evidence, because we're not that good at processing. I do think that leaves us with a question of judging when there's enough evidence to be conclusive without complicated processing and when there isn't.

Something I might like factored out a bit more is the distinction between the rigorous gold-standard epistemological framework and the manner in which we apply our epistemology day to day.

I fear this curation notice would be better if I'd read all the cited sources on critical rationalism, Knightian uncertainty, etc., and I've added them to my reading list. All in all, kudos for putting some attention on the fundamentals.
