This is addressed in the FAQ linked at the top of the page. TL;DR: The author insists that the gist of the story is true, but acknowledges that he glossed over a lot of intermediate debugging steps, including accounting for the return time.
Does that logic apply to crawlers that don't try to post or vote, as in the public-opinion-research use case? The reason to block those is just that they drain your resources, so sophisticated measures to feed them fake data would be counterproductive.
I didn't downvote (I'm just now seeing this for the first time), but the above comment left me confused about why you believe a number of things:
For people in Boston, I made a straw poll to gauge community sentiment on this question: https://forms.gle/5BJEG5fJWTza14eL9
I assume this is referring to the ancient fable "The Ant and the Grasshopper", which is about what we would today call time preference. In the original, the high-time-preference grasshopper starves because it didn't spend the summer stockpiling food for winter, while the low-time-preference ant survives because it did. Of course, alternate interpretations have been common since then.
Boston
Saturday, December 17; doors open at 6:30, Solstice starts at 7:15
69 Morrison Ave., Somerville, MA 02144
RSVPs appreciated for planning purposes: https://www.facebook.com/events/3403227779922411
Let us know in advance if you need to park onsite (it's accessible by public transportation). We're up a flight of stairs.
As someone who was very unhappy with last year's implementation and said so (though not in the public thread), I think this is an improvement and I'm happy to see it. In previous years, I didn't get a code, but if I'd had one I would have very seriously considered using it; this year, I see no reason to do that.
I do think that, if real value gets destroyed as a result of this, then the ethical responsibility for that loss of value lies primarily with the LW team, and only secondarily with whoever actually pushed the button. So if the button got pushed and ...
I'm glad you're happier with this year's version!
I'm not sure I'd say primarily/secondarily; I'd probably guess more like 50-50 (that might be the Shapley attribution?) between LessWrong and the pusher, if someone pushes the button. But overall I agree LW gets a bunch of culpability.
So this wound up going poorly for me for various reasons. I ultimately ended up not doing the fast, and have been convinced that I’m not going to be able to in the future either, barring unanticipated changes in my mental-health situation. Other people are going to be in a different situation and that seems fine. But there are a couple community-level things that I feel ought to be expressed publicly somewhere, and this is where they're apparently allowed, so:
First, it's not a great situation if there are like three rationalist holidays and one of them is ...
This strikes me as a purely semantic question regarding what goals are consistent with an agent qualifying as "friendly".
Correction: The annual Petrov Day celebration in Boston has never used the button.
I've talked to some people who locked down pretty hard pretty early; I'm not confident in my understanding but this is what I currently believe.
I think characterizing the initial response as over-the-top, as opposed to sensible in the face of uncertainty, is somewhat the product of hindsight bias. In the early days of the pandemic, nobody knew how bad it was going to be. It was not implausible that the official case fatality rate for healthy young people was a massive underestimate.
I don't think our community is "hyper-altruistic" in the Strangers Drowning...
Docker is not a security boundary.
Eh, if you read the raw results most are pretty innocuous.
Not at the scale that would be required to power the entire grid that way. At least, not yet. This is of course just one study (h/t Vox via Robert Wiblin) but provides at least a rough picture of the scale of the problem.
Cross-posting from Facebook:
Any policy goal that is obviously part of BLM's platform, or that you can convince me is, counts. Police reform is the obvious one but I'm open to other possibilities.
It's fine for "heretics" to make suggestions, at least here on LW where they're somewhat less likely to attract unwanted attention. Efficacy is the thing I'm interested in, with the understanding that the results are ultimately to be judged according to the BLM moral framework, not the EA/utilitarian one.
Small/limited returns are okay if they're the best that can be...
...It is easy to get the impression that the concerns raised in this post are not being seen, or are being seen from inside the framework of people making those same mistakes.
I don't have a strong opinion about the CFAR case in particular, but in general, I think this impression is pretty much what happens by default in organizations, even when the people running them are smart and competent and well-meaning and want to earn the community's trust. Transparency is really hard, harder than I think anyone expects until they try to do it, and to do it we...
The organizer wound up posting their own event: https://www.lesswrong.com/events/ndqcNdvDRkqZSYGj6/ssc-meetups-everywhere-1
This looks like a duplicate.
Nit: I think this game is more standardly referred to in the literature as the "traveler's dilemma" (Google seems to return no relevant hits for "almost free lunches" apart from this post).
Irresponsible and probably wrong narrative: Ptolemy and Simplicius and other pre-modern scientists generally believed in something like naive realism, i.e., that the models (as we now call them) that they were building were supposed to be the way things really worked, because this is the normal way for humans to think about things when they aren't suffering from hypoxia from going up too many meta-levels, so to speak. Then Copernicus came along, kickstarting the Scientific Revolution and with it the beginnings of science-vs.-religion conflict, spurrin...
Oh, I totally buy that it was relevant in the Galileo affair; indeed, the post does discuss Copernicus. But that was after the controversy had become politicized and so people had incentives to come up with weird forms of anti-epistemology. Absent that, I would not expect such a distinction to come up.
This essay argues against the idea of "saving the phenomenon", and suggests that the early astronomers mostly did believe that their models were literally true. Which rings true to me; the idea of "it doesn't matter if it's real or not" comes across as suspiciously modern.
For EAs and people interested in discussing EA, I recommend the EA Corner Discord server, which I moderate along with several other community members. For a while there was a proliferation of several different EA Discords, but the community has now essentially standardized on EA Corner and the other servers are no longer very active. Nor is there an open EA chatroom with comparable levels of activity on any other platform, to the best of my knowledge.
I feel that we've generally done a good job of balancing access needs associated with different levels...
The Slate Star Codex sidebar is now using localStartTime to display upcoming meetups, fixing a longstanding off-by-one bug affecting displayed dates.
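For context on the bug: a meetup at, say, 9 PM Eastern has a UTC timestamp that already falls on the next calendar day, so formatting the date straight from the UTC value shows the wrong day. A minimal sketch of the fix, with the field shapes assumed rather than taken from the actual sidebar code:

```typescript
// Sketch only: these field shapes are assumptions, not the real sidebar code.
interface MeetupEvent {
  title: string;
  startTime: string;      // UTC instant, e.g. "2018-09-27T01:00:00Z"
  localStartTime: string; // same moment in the event's own time zone,
                          // e.g. "2018-09-26T21:00:00"
}

// Buggy: taking the calendar date from the UTC string shows Sept 27
// for a meetup that attendees experience as the evening of Sept 26.
function displayDateUtc(event: MeetupEvent): string {
  return event.startTime.slice(0, 10);
}

// Fixed: use the event's local start time, so the displayed date matches
// the attendees' calendars.
function displayDateLocal(event: MeetupEvent): string {
  return event.localStartTime.slice(0, 10);
}
```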
You probably want to configure this such that anyone can read and subscribe but only you can post.
I don't feel like much has changed in terms of evaluating it. Except that the silliness of the part about cryptocurrency is harder to deny now that the bubble has popped.
I linked this article in the EA Discord that I moderate, and made the following comments:
Posting this in #server-meta because it helps clarify a lot of what I, at least, have struggled to express about how I see this server as being supposed to work.
Specifically, I feel pretty strongly that it should be run on civic/public norms. This is a contrast to a lot of other rationalsphere Discords, which I think often at least claim to be running on guest norms, though I don’t have a super-solid understanding of the social dynamics involved.
The standard failure mo...
I fear that this system doesn't actually provide the benefits of a breadth-first search, because you can't really read half a comment. If I scroll down a comment page without uncollapsing anything, I don't feel like I got much of a picture of what anyone actually said, and repeatedly seeing people's comments cut off midsentence is really cognitively distracting.
Reddit (and I think other sites, but on Reddit I know I've experienced this) makes threads skimmable by showing a relatively small number of comments, rather than a small sni...
You don't currently expand comments that are positioned below the clicked comment but not descendants of it.
Idea: If somebody has expanded several comments, there's a good chance they want to read the whole thread, so maybe expand all of them.
Would you mind saying in non-metaphorical terms what you thought the point was? I think this would help produce a better picture of how hard it would have been to make the same point in a less inflammatory way.
Ecosystems, and organisms in them, generally don't care about stuff that can't be turned into power-within-the-ecosystem. Box two exists, but unless the members of box one can utilize box two for e.g. information/computation/communication, it doesn't matter to anyone in box one.
Other places where this applies:
There's an argument to be made that even if you're not an altruist, that "societal default" only works if the next fifty years play out more-or-less the same way the last fifty years did; if things change radically (e.g., if most jobs are automated away), then following the default path might leave you badly screwed. Of course, people are likely to have differing opinions on how likely that is.
No, we didn't participate in this in Boston. Our Petrov Day is this Wednesday, the actual anniversary of the Petrov incident.
Some disconnected thoughts:
In Boston we're planning Normal Mode. (We rejected Hardcore Mode in previous years, in part because it would have been a serious problem for people who had gone to significant inconvenience to be able to attend.)
I'm good at DevOps and might be able to help the Seattle folks make their app more available if they need it.
I happened to give a eulogy of sorts for Stanislav Petrov last year.
I'm currently going through the latest version of the ritual book and looking for things to nitpick, since I know that a few points (notably the ...
Thanks for this update!
I have a question as a donor that I regret not thinking of during the fundraising push. Could you identify a few possible future outcomes, ones whose success or failure could be measured within a year, that if achieved would indicate that REACH was probably producing significant value from an EA perspective (as opposed to from a community-having-nice-things perspective)? And could you offer probability estimates on those outcomes being achieved?
I certainly understand if this would be overly time-consuming, but I'd feel comfortable...
I am not very good at making up numbers in this way and have stopped trying. I am not a superforecaster :) So I'm not going to make any actual predictions, but I'll give some categories where I see potential for impact.
First, let me give an overview of what has been achieved so far based on the metrics I have access to:
Then I think the post should have waited until those arguments were up, so that the discussion could be about their merits. The problem is the "hyping it up to Be An Internet Event", as Ray put it in a different subthread; since the thing you're hyping up is so inflammatory, we're left in the position of having arguments about it without knowing what the real case for it is.
… since the thing you’re hyping up is so inflammatory, we’re left in the position of having arguments about it without knowing what the real case for it is.
Are we, though? Must we have arguments about it? What reason is there for us not to say something like, “this raises red flags but we’ll consider and discuss it properly after it takes place; make sure to document it properly and exhaustively, to signal to us all that you are acting in good faith”, and then say no more for now?
I think it's an antisocial move to put forth a predictably inflammatory thesis (e.g., that an esteemed community member is a pseudo-intellectual not worth reading) and then preemptively refuse to defend it. If the thesis is right, then it would be good for us to be convinced of it, but that won't happen if we don't get to hear the real arguments in favor. And if it's wrong, then it should be put to bed before it creates a lot of unproductive social conflict, but that also won't happen as long as people can claim that we haven'...
Unless a comment was edited or deleted before I got the chance to read it, nobody but you has used the word "violence" in this thread. So I don't understand how an argument about the definition of "violence" is in any way relevant.
Hmmm. Do you think that's a bug, or a feature?
LessWrong seems like a bit of a weird example since CFAR's senior leadership were among the people pushing for it in the first place. IIRC even people working at EA meta-orgs have encountered difficulties and uncertainty trying to personally fund projects through the org.
I've just pledged $40 per month.
I could afford to pay more. I'd do so if I ever actually visited REACH, but I live thousands of miles away (and did give a small donation when I visited for the pre-EA Global party, and will continue to do so if I ever come back). I'd also pay more if I were more convinced that it was a good EA cause, but the path from ingroup reinforcement to global impact is speculative and full of moral hazard and I'm still thinking about it.
My pledge represents a bet that REACH will ultimately make a difference in my ...
This is a problem I've been thinking about for awhile in a broader EA context.
It's claimed fairly widely that EA needs a lot more smallish projects, including ones that aren't immediately legible enough to be fundable by large institutional donors (e.g., because the expected value depends on the competence and value alignment of the person running the project, which large institutional funders can't assess). It's also claimed (e.g., by Nick Beckstead of OpenPhil at EA Global San Francisco 2017) that smallish earn...
Re: local events: Although I haven't checked this with Scott, my default assumption for the SSC sidebar is that keeping it free of clutter and noise is of the highest importance. As such, I'm only including individual events that a human actually took explicit action to advertise, to prevent the inclusion of "weekly" events from groups that have since flaked or died out.
(This is also why the displayed text only includes the date and Google-normalized location, to prevent users from defacing the sidebar with arbitrary text.)
LW proper may have different priorities. Might be worth considering design options here for indicating how active a group is.
So correct me if I'm wrong here, but the way timezones seem to work is that, when creating an event, you specify a "local" time, then the app translates that time from whatever it thinks your browser's time zone is into UTC and saves it in the database. When somebody else views the event, the app translates the time in the database from UTC to whatever it thinks their browser's time zone is and displays that.
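If I have that right, the round-trip looks roughly like the following (a sketch of my understanding in ordinary browser TypeScript, not the actual LW code):

```typescript
// Hypothetical sketch of the round-trip described above, not the real implementation.

// On the creator's machine: interpret the wall-clock time they typed using
// whatever time zone the browser reports, then store UTC in the database.
function saveEventTime(year: number, month: number, day: number,
                       hour: number, minute: number): string {
  const local = new Date(year, month - 1, day, hour, minute); // creator's browser TZ
  return local.toISOString();                                 // UTC string for the DB
}

// On a viewer's machine: translate the stored UTC time back into whatever
// time zone *their* browser reports.
function displayEventTime(utcIso: string): string {
  return new Date(utcIso).toLocaleString();
}

// An event entered as 7:00 PM by a creator whose browser is set to US/Eastern
// renders as 4:00 PM for a viewer whose browser is set to US/Pacific.
const stored = saveEventTime(2018, 9, 26, 19, 0);
console.log(displayEventTime(stored));
```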
I suppose this will at least sometimes work okay in practice, but if somebody creates an event in a time zone other than the ...
Also, two other questions:
Thanks. I'd originally written up a wishlist of server-side functionality here, but at this point I'm thinking maybe I'll just do the sorting and filtering on the client, since this endpoint seems able to provide a superset of what I'm looking for. It's less efficient and definitely an evil hack, but it means not needing server-side code changes.
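Roughly what I have in mind (sketch only; the query shape and field names here are guesses on my part rather than anything documented):

```typescript
// Sketch of doing the filtering/sorting client-side instead of asking for
// server-side changes. The query shape and field names are assumptions.
interface LwEvent {
  title: string;
  startTime: string; // assumed ISO-8601 UTC
  types?: string[];  // assumed tag field distinguishing e.g. SSC meetups
}

async function fetchUpcomingSscEvents(): Promise<LwEvent[]> {
  const query = `
    {
      events(limit: 500) {
        title
        startTime
        types
      }
    }`;
  const resp = await fetch("https://www.lesswrong.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const events: LwEvent[] = (await resp.json()).data.events;

  // Do on the client what a server-side filter would otherwise do:
  // keep only SSC events that haven't happened yet, soonest first.
  const now = Date.now();
  return events
    .filter((e) => e.types?.includes("SSC"))
    .filter((e) => Date.parse(e.startTime) > now)
    .sort((a, b) => Date.parse(a.startTime) - Date.parse(b.startTime));
}
```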
I'll note that filter: "SSC" doesn't work in the GraphiQL page; events that don't match the filter still get returned.
More generally, the way the API works now basi...
I think I agree that if you see the development of explicit new norms as the primary point, then Facebook doesn't really work and you need something like this. I guess I got excited because I was hoping that you'd solved the "audience is inclined towards nitpicking" and "the people I most want to hear from will have been prefiltered out" problems, and now it looks more like those aren't going to change.
I guess there's an inherent tradeoff between archipelago and the ability to shape the culture of the community. The status quo on LW 2.0 leans too far towards the latter for my tastes; the rationalist community is big and diverse and different people want different things, and the culture of LW 2.0 feels optimized for what you and Ben want, which diverges often enough from what I want that I'd rather post on Facebook to avoid dealing with that set of selection effects. Whether you should care about this depends on how many other people are in a s...
You don't think the GitHub thing is about reducing server load? That would be my guess.