Consider the following bot:
NQCoopBot := defect against X if X is DefectBot, else cooperate with X
FairBot will cooperate with NQCoopBot, even though FairBot could fairly trivially defect without risk.
PrudentBot will likewise cooperate with NQCoopBot, even though the same safe defection is available to it.
However, the entire reason PrudentBot exists is to overcome FairBot's shortcoming of cooperating even in situations where the opponent would fairly trivially cooperate regardless.
Is there a PrudentBot variant that doesn't cooperate with NQCoopBot? (Aside from brittle approaches like 'defect against X if either X == NQCoopBot or PrudentBot would defect against X', that is.)
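To make the setup above concrete, here is a toy, simulation-based sketch of the bots in question. The original setting uses provability logic; bounded mutual simulation with a default action on recursion exhaustion is only a loose stand-in for the Löbian machinery, and all of the definitions below are my own illustrative assumptions, not the canonical formalizations.

```python
# Toy prisoner's-dilemma bots, simulated with bounded recursion.
# This is a sketch, not the proof-based (modal) formulation.

COOPERATE, DEFECT = "C", "D"
MAX_DEPTH = 3  # recursion budget standing in for proof search

def defect_bot(opponent, depth=0):
    """Always defects."""
    return DEFECT

def fair_bot(opponent, depth=0):
    """Cooperates iff the opponent (appears to) cooperate with FairBot."""
    if depth >= MAX_DEPTH:
        return COOPERATE  # crude stand-in for the Löbian shortcut
    return COOPERATE if opponent(fair_bot, depth + 1) == COOPERATE else DEFECT

def nq_coop_bot(opponent, depth=0):
    """Defects against DefectBot (checked syntactically), else cooperates."""
    return DEFECT if opponent is defect_bot else COOPERATE

def prudent_bot(opponent, depth=0):
    """Cooperates iff the opponent cooperates with PrudentBot
    AND defects against DefectBot."""
    if depth >= MAX_DEPTH:
        return COOPERATE
    coop_with_me = opponent(prudent_bot, depth + 1) == COOPERATE
    punishes_dbot = opponent(defect_bot, depth + 1) == DEFECT
    return COOPERATE if (coop_with_me and punishes_dbot) else DEFECT

print(fair_bot(nq_coop_bot))     # C: FairBot cooperates with NQCoopBot
print(prudent_bot(nq_coop_bot))  # C: PrudentBot also cooperates,
                                 #    since NQCoopBot passes both checks
print(nq_coop_bot(defect_bot))   # D: NQCoopBot defects against DefectBot
```

NQCoopBot passes both of PrudentBot's tests (it cooperates with PrudentBot and defects against DefectBot), which is exactly why PrudentBot cooperates with it despite the safe defection being available.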
One of the problems with simulating cells is that cells are rather complex. To help with this, researchers optimized cells towards simplicity (by, among other things, successively removing genes and seeing if the cell still functioned). They then took said cells and analyzed them as best they could.
Today[1] in 'incremental advances in bottom-up simulation research' we have this.
As it turns out, these cells are simple enough (and well-understood enough) that they are feasible to simulate over an entire cell division cycle[2]. It's not a perfect simulation for various reasons, but still is progress[3].
(It's when simulating and explaining a human neuron in this fashion becomes feasible that things become interesting.)
January 20th, really, but it seems like much of LW was focused on Omicron at that time.
At a higher level of detail than before.
There's kind of a hierarchy of models here, where the more complex low-level (and slower / smaller scale) models ultimately serve as sources of parameters and insights for higher-level models.
Is there a way to get a link to a straight chronological view of all posts on LW?
The closest I've found is going to All Posts, selecting sorted by New with a timeframe of Daily, selecting Low Karma, then scrolling down and hitting 'load more'[1], then scrolling back to the top.
However, this has to be done for each day, potentially repeatedly.
This entire site is laggy[5], to the point where it honestly pushes me away from interacting with it. Then again, I'm one of those people who notices the difference between 144Hz monitors with different response times[6], so it's entirely possible that I'm just on the low-latency-tolerance tail.
Part of this is that I'm also the sort of person who likes to start by context-switching to "selecting which headlines are interesting", opening tabs for any headlines that are interesting, then context-switching to "reading/evaluating/responding to content" mode and going through reading content & commenting.
Yes, the settings for New / All Time / etc are saved, assuming you keep cookies, but it still requires manual steps for load more / etc.
Looking at Firefox's performance monitor, there are random 200ms pauses/jitter while scrolling, for instance[7]. Most frames arriving on time, with occasional 200ms frames, is kind of terrible.
Admittedly, I've never done a formal trial on this.
(And for reference / sanity-checking, this doesn't happen when scrolling on a non-web2.0 site. Looking at the performance trace, the jitter is mainly due to (blocking) layouts triggered by JS, although the JS is minified so I'm not going to take the time to reverse-engineer exactly where/why it's causing layout updates.)
Interesting! That doesn't appear to show shortform posts, unfortunately.
(The themes on that website are also kind of terrible, though that is a lesser issue.)
My comments on LessWrong, and indeed most sites, tend to skew negative, especially for toplevel comments.
My votes on LessWrong, and indeed most sites, tend to skew positive, especially for toplevel posts and comments.
I have come to a realization as to why this is. Take two hypothetical posts:
Post A is a perfect post[1]. It coherently and completely describes an obviously-correct-only-in-hindsight argument far more eloquently than I ever could, from premises that I assign a very high probability to.
Post B is a terrible post. It incoherently and incompletely describes an obviously-incorrect argument in an ineloquent fashion, from premises that I assign a very low probability to.
My typical reaction upon seeing these two posts is as follows:
For post A, I'll upvote[2]. I am unlikely to leave a comment, as I am unable to constructively add to the discussion, and generally unconstructive comments-as-upvotes are discouraged[3].
For post B, I will leave a negative comment, either describing a flaw in the argument, or why a premise appears to be unlikely. (Or some combination of these). I may downvote, but am relatively unlikely to, at least initially and assuming that the post/argument appears to be made in good faith. (This is because, among other reasons[4], it is entirely possible that I am misconstruing the argument or am otherwise mistaken myself. I skew heavily towards not censoring information, lest I get trapped in information cascades.)
The overall result of this is that my comments, especially the initial toplevel comments on posts, skew heavily negative, whereas my votes skew positive, somewhat less heavily.
I find myself thinking that this state of affairs is undesirable; I cannot articulate why, precisely[5]. Is there a better approach or flaw in premises or reasoning here?
From my perspective. Don't treat this as "ideal agent", just "far better than I am".
Likely a strong upvote, depending.
Sometimes explicitly, sometimes implicitly.
One other reason is that a lot of sites assign a cost to downvotes. Insofar as "number go up" is an incentive, there is a disincentive to downvote.
I suspect that at least some of this is due to upvotes being anonymous whereas comments are tied to my username. Someone looking at my user page, for example, just sees me being negative all the time.