The thought saver at the end of the heritability section asks you to remember some strategies for self-control, but they haven't been introduced yet
Presumably the welfare premium is reduced if the ethical egg providers can recoup some costs from a quality premium
Not sure at all! It still seems like the ordering is tricky. They don't know how many ethical eggs they've sold when selling to the consumer. There's no guarantee of future ethical eggs when buying the certificate.
Maybe it works out OK, and they can sell 873,551 eggs at a regular price after that many certificates were bought, and the rest at the higher price. I know very little about how the food supply chain works
IIUC, this exposes the high-welfare egg co to more risk. It's hard to sell 1 million eggs for one price, and 1 million for another price. So they probably have to choose to sell at the low welfare price constantly. But this means they build up a negative balance that they're hoping ethical consumers will buy them out of.
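A toy cash-flow sketch of the risk described above. Every number here is made up (prices, cost premium, and certificate uptake are all hypothetical); only the shape of the problem matters.

```python
# Toy model of the certificate scheme discussed above. All numbers are hypothetical.

eggs_produced = 1_000_000      # high-welfare eggs produced this period
commodity_price = 0.10         # $/egg when sold as ordinary eggs
welfare_cost_premium = 0.04    # extra production cost per high-welfare egg
certificate_price = 0.05       # $/certificate paid by ethical consumers
certificates_sold = 600_000    # certificate uptake (hypothetical)

# The producer sells every egg at the commodity price (running two prices is hard),
# and hopes certificate revenue covers the welfare premium after the fact.
extra_cost = eggs_produced * welfare_cost_premium
certificate_revenue = certificates_sold * certificate_price
balance = certificate_revenue - extra_cost

print(f"Welfare premium outlay: ${extra_cost:,.0f}")
print(f"Certificate revenue:    ${certificate_revenue:,.0f}")
print(f"Net position:           ${balance:,.0f}")  # negative = waiting for ethical consumers to buy them out
```

A negative net position here is exactly the "building up a negative balance" state the comment describes.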
Was this actually cross-posted by EY, or by Rob or Ben? If the latter, I'd prefer that to be mentioned
I posted this, and I'll make a note that I did so for any future Eliezer content where I hit the 'submit' button.
The causal process for this article looked like this:
To add more color to the inadequate equilibrium: I didn’t want to hang out with people with a lot of risk, not because of how bad COVID would be for me, but because of how it would limit which community members would interact with me. But this also meant I was a community member who was causing other people to take less risk.
I didn't mean to predict on this; I was just trying to see the number of predictions on the first one. It turns out that doing so registers a prediction on mobile
Hoping, I guess, that the name was bad enough that others would call it an Uhlmann Filter
Oh, and I also notice that a social manoeuvring game (the game that governs who is admitted) is a task where performance is correlated with performance on (1) and (2)
First time I’ve seen a highlighted mod comment. I like it!
Most of the Inner Rings I've observed are primarily selected on (1) being able to skilfully violate the explicit local rules to get things done without degrading the structure the rules hold up, and (2) being fun to be around, even over long periods and through hard work.
Lewis acknowledges that Inner Rings aren't necessarily bad, and I think the above is a reason why.
Making correct decisions is hard. Sharing more data tends to make them easier. Whether you'll thrive or fail in a job may well depend on parts you are inclined to hide. Also, though we may be unwilling to change many parts of ourselves, other times we are getting in our own way, and it can help to have more eyes on the part of the territory that's you
All uses of the second person "you" and "your" in this post are in fact me talking to myself
I wanted to upvote just for this note, but I decided it's not good to upvote things based on the first sentence or so. So I read the post, and it's good, so now I can upvote guilt-free!
I would also be salty
I think this tag should cover, for example, auction mechanics. Auctions don't seem much like institutions to me
Note also Odin was "Woden" in Old English
90% autocorrect of "Chauna"
Should I be reading all the openings of transparent envelopes as actual openings, or are they sometimes looking at the sealed envelope and seeing what it contains (the burnings incline me towards the second interpretation, but I'm not sure)?
EDIT: Oh, I think I understand better now
Made a quick neural network (reaching about 70% accuracy), and checked all available scores.
Its favorite result was: +2 Cha, +8 Wis. It would have liked +10 Wis if it were possible.
For at least the top few results, it wanted to (a) apportion as much to Wis as possible, then (b) as much to Cha, then (c) as much to Con. So we have, for (Wis, Cha, Con):
1. (8, 2, 0)
2. (8, 1, 1)
3. (8, 0, 2)
4. (7, 3, 0)
5. (7, 2, 1)
...
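A minimal sketch of what the "checked all available scores" step could look like, assuming 10 points to distribute, a per-stat cap of 8, and a hypothetical score() function standing in for the trained network (the real model, caps, and totals aren't given above):

```python
from itertools import product

def score(wis, cha, con):
    """Hypothetical stand-in for the trained network's predicted success chance."""
    return 0.5 + 0.05 * wis + 0.01 * cha + 0.002 * con  # placeholder weights

TOTAL = 10  # assumed total bonus points, inferred from the results above
CAP = 8     # assumed per-stat cap ("it would have liked +10 Wis if possible")

# Enumerate every legal (Wis, Cha, Con) split and rank by the model's score.
splits = [(w, c, TOTAL - w - c)
          for w, c in product(range(CAP + 1), repeat=2)
          if 0 <= TOTAL - w - c <= CAP]
ranked = sorted(splits, key=lambda s: score(*s), reverse=True)

for wis, cha, con in ranked[:5]:
    print(f"+{wis} Wis, +{cha} Cha, +{con} Con -> {score(wis, cha, con):.3f}")
```

With lexicographic-ish placeholder weights (Wis >> Cha >> Con), this enumeration reproduces the ordering listed above.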
Any favored resources on metta?
If we solve the problem normally thought of as "misalignment", it seems like this scenario would now go well. If we solve the problem normally thought of as "misuse", it seems like this scenario would now go well. This argues for continuing to use these categories, as fixing the problems they name is still sufficient to solve problems that do not cleanly fit in one bucket or another
Sure!
I see people on Twitter, for example, doing things like having GPT-3 provide autocomplete or suggestions while they're writing, or do the grunt work of producing web apps. Plausibly, figuring out how to get the most value out of future AI developments for improving productivity is important.
There's an issue that it's not very obvious exactly how to prepare for various AI tools in the future. One piece of work could be thinking more about how to flexibly prepare for AI tools with unknown capabilities, or predicting what the capabilities will be.
Other th...
Now, it's still possible that accumulation of slow-turnover senescent cells could cause the increased production rate of fast-turnover senescent cells.
Reminds me of this paper, in which they replaced the blood of old rats with a neutral solution (not the blood of young rats) and found large rejuvenative effects. IIRC, they attributed it to knocking the old rats out of some sort of "senescent equilibrium"
If timelines are short, where does the remaining value live? Some fairly Babble-ish ideas:
Even more so, I would love to see your unjustifiable stab-in-the-dark intuitions as to where the center of all this is
Curious why this in particular? (Not trying to take issue with wanting this info; I agree that there’s a lot of useful data here. It’s something I’d also want to ask for, but wouldn’t have prioritised.)
Good question. I'm not sure if this will make sense, but: this is somehow the sort of place where I would expect peoples' stab-in-the-dark faculties ("blindsight", some of us call it at CFAR) to have some shot at seeing the center of the thing, and where by contrast I would expect that trying to analyze it with explicit concepts that we already know how to point to would... find us some very interesting details, but nonetheless risk having us miss the "main point," whatever that is.
Differently put: "what is up with institutional cultures lately?" is a que...
Seems like you’re missing an end to the paragraph that starts “Related argument”
I liked your example of being uncertain of your probabilities. I note that if you are trying to make an even-money bet with a friend (as this is a simple Schelling point), you should never Kelly bet if your discounted probabilities are 2/3 or less of your naïve probabilities.
The maximum bet for a given discount factor comes when the naïve probability is 1, and that maximum, viewed as a function of the discount factor, crosses below 0 at 2/3.
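For reference, and independently of whatever discounting scheme the parent comment has in mind: the standard Kelly fraction for an even-money bet, given an effective win probability p', is

$$ f^{*} = p' - (1 - p') = 2p' - 1, $$

so any adjustment that pushes the effective probability to 1/2 or below makes the optimal bet zero, however high the naïve probability was.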
In the pie chart in the Teams section, you can see "CooperateBot [Larks]" and "CooperateBot [Insub]"
Yeah, that's what my parenthetical was supposed to address
(particularly because there is large interpersonal variation in the strength of hedging a given qualifier is supposed to convey)
Perhaps you are able to get more reliable information out of such statements than I am.
I like qualifiers that give information on the person's epistemic state or, even better, process. For example:
Given that I don't start thinking that anyone can report directly the state of the world (rather than their beliefs and understanding of it), "From ...
You can steer a bit away from catastrophe today. Tomorrow you will be able to do less. After years and decades go by, you will have to be miraculously lucky or good to do something that helps. At some point, it's not the kind of "miraculous" you hope for, it's the kind you don't bother to model.
Today you are blind, and are trying to shape outcomes you can't see. Tomorrow you will know more, and be able to do more. After years and decades, you might know enough about the task you are trying to accomplish to really help. Hopefully the task you find yourself faced with is the kind you can solve in time.
I think you're right. I think inline comments are a good personal workflow when engaging with a post (when there's a post I want to properly understand, I copy it to a google doc and comment it), but not for communicating the engagement
My understanding is that the first thing is what you get with UDASSA, and the second thing is what you get if you think the Solomonoff prior is useful for predicting your universe for some other reason (i.e. not because you think the likelihood of finding yourself in some situation covaries with the Solomonoff prior's weight on that situation)
It is at least the case that OpenAI has sponsored H1Bs before: https://www.myvisajobs.com/Visa-Sponsor/Openai/1304955.htm
He treated it like a game, even though he was given the ability to destroy a non-trivial communal resource.
I want to resist this a bit. I actually got more value out of the "blown up" frontpage than I would have from a normal frontpage that day. A bit of "cool to see what the LW devs prepared for this case", a bit of "cool, something changed!", and some excitement about learning something.
That’s a very broad definition of ‘long haul’ on duration and on severity, and I’m guessing this is a large underestimate of the number of actual cases in the United Kingdom
If the definition is broad, shouldn't it be an overestimate?
My attempt to summarize the alignment concern here. Does this seem a reasonable gloss?
It seems plausible that competitive models will not be transparent or introspectable. If you can't see how the model is making decisions, you can't tell how it will generalize, and so you don't get very good safety guarantees. Or to put it another way, if you can't interact with the way the model is thinking, then you can't give a rich enough reward signal to guide it to the region of model space that you want
Most importantly, the success of the scheme relies on the correctness of the prior over helper models (or else the helper could just be another copy of GPT-Klingon)
I'm not sure I understand this. My understanding of the worry: what if there's some equilibrium where the model gives wrong explanations of meanings, but I can't tell using just the model to give me meanings.
But it seems to me that having the human in the loop doing prediction helps a lot, even with the same prior. Like, if the meanings are wrong, then the user will just not predict the correct word. But maybe this is not enough corrective data?
As someone who didn't receive the codes, but read the email on Honoring Petrov Day, I also got the sense it wasn't too serious. The thing that would most give me pause is "a resource thousands of people view every day".
I'm not sure I can say exactly what seems lighthearted about the email to me. Perhaps I just assumed it would be, and so read it that way. If I were to pick a few concrete things, I would say the phrase "with our honor intact" seems like a joke, and also "the opportunity to not destroy LessWrong" seems like a silly phrase (kind of similar to...
Suppose phishing attacks do have an 80%+ success rate. I have been the target of phishing attempts tens of times and never fallen for one (and I imagine this is not unusual on LW). This suggests the average LWer should not expect to fall victim to a phishing attempt with 80% probability, even if that is the global average
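A rough sanity check on that inference, treating attempts as independent and using 30 as a stand-in for "tens of times":

$$ P(\text{never falling for one}) = (1 - 0.8)^{30} = 0.2^{30} \approx 10^{-21}, $$

so even a short track record of resisted attempts is overwhelming evidence that one's personal per-attempt rate is far below the headline 80%.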
This summary was helpful for me, thanks! I was sad cos I could tell there was something I wanted to know from the post but couldn't quite get it
In a Stag Hunt, the hunters can punish defection and reward cooperation
This seems wrong. I think the argument goes "the essential difference between a one-off Prisoner's Dilemma and an IPD is that players can punish and reward each other in-band (by future behavior). In the real world, they can also reward and punish out-of-band (in other games). Both these forces help create another equilibrium where people cooper...
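For reference, a standard illustrative Stag Hunt payoff matrix (textbook numbers, not taken from the post; row player's payoff listed first):

$$ \begin{array}{c|cc} & \text{Stag} & \text{Hare} \\ \hline \text{Stag} & (4,\,4) & (0,\,3) \\ \text{Hare} & (3,\,0) & (3,\,3) \end{array} $$

Both (Stag, Stag) and (Hare, Hare) are Nash equilibria, which is the sense in which adding rewards and punishments to a Prisoner's Dilemma can create a second, cooperative equilibrium.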
I think it's probably true that the Litany of Gendlin is irrecoverably false, but I feel drawn to apologia anyway.
I think the central point of the litany is its equivocation between "you can stand what is true (because, whether you know it or not, you already are standing what is true)" and "you can stand to know what is true".
When someone thinks, "I can't have wasted my time on this startup. If I have I'll just die", they must really mean "If I find out I have I'll just die". Otherwise presumably they can conclude from their continued aliveness that they ...
I wonder why it seems like it suggests dispassion to you, but to me it suggests grace in the presence of pain. The grace for me I think comes from the outward- and upward-reaching (to me) "to be interacted with" and "to be lived", and grace with acknowledgement of pain comes from "they are already enduring it"
Wondering if these weekly talks should be listed in the Community Events section?
I like this claim about the nature of communities. One way people can Really Try in a community is by taking stands against the way the community does things while remaining part of the community. I can’t think of any good solutions for encouraging this without assuming closed membership (or other cures worse than the disease)
I vote for GWP or your favorite timelines model
LW must really need the money, having decided to destroy a non-trivial communal resource