Also I somehow keep not giving holidays proper respect.
I thought you were an advocate of the Sabbath? 😉
"Free Day", while perhaps not the best option overall, has the merit that these days involve freeing the part of you that communicates through your gut (and through what you feel like doing). During much of our working (and non-working) week, that part is overridden by our mind's sense of what we have to do.
By contrast, in OP's Recovery Days this part is either:
(a) doing the most basic recharging before it can do things it positively feels like and enjoys, or
(b) overridden or hijacked by addictive behaviours that it doesn't find as roundly rewarding as Free Day activities.
Addiction can also be seen as a lack of freedom.
They say they haven't accounted for sampling bias, though, which makes me doubt the methodology overall, as sampling bias could be huge over 90-day timespans.
Yes, the article doesn't describe the exact methodology, but they could well be deriving the percentages from people who choose to self-report how they're doing after 30 and 90 days. These would be far more likely to be people who still feel unwell.
As a separate point, and I'm skirting around using the word "hypochondria" here, asking people if they still feel unwell or have symptoms a mon...
For reference, here are the results from last year's survey, along with Peter's analysis of them. This includes a link to a Github repository including the raw data, with names and email addresses removed.
Notable findings included:
I'd like to draw your attention to this year's Effective Altruism Survey, which was recently released and which Peter Hurford linked to on LessWrong Main. As he says there:
...This is a survey of all EAs to learn about the movement and how it can improve. The data collected in the survey is used to help EA groups improve and grow EA. Data is also used to populate the map of EAs, create new EA meetup groups, and create EA Profiles and the EA Donation Registry.
If you are an EA or otherwise familiar with the community, we hope you will take it using this link. Al
You're conflating something here. The statement only refers to "what is true", not your situation; each pronoun refers only to "what is true".
In that case saying "Owning up to the truth doesn't make the truth any worse" is correct, but doesn't settle the issue at hand as much as people tend to think it does. We don't just care about whether someone owning up to the truth makes the truth itself worse, which it obviously doesn't. We also care about whether it makes their or other people's situation worse, which it sometimes does.
I asked for a good general guide to IQ (and in particular its objectivity and importance) on the LW FB group a while back. I got a bunch of answers, including these standouts:
http://www.psych.utoronto.ca/users/reingold/courses/intelligence/cache/1198gottfred.html
http://www.newscientist.com/data/doc/article/dn19554/instant_expert_13_-_intelligence.pdf
But there's still plenty of room for improvement on those so I'd be curious to hear others' suggestions.
That being said, my sole aim is not to be maximally effective at doing good. I'm more interested in expressing my values in as large and impactful a way as possible - and in allowing others to do the same. This happens to coincide with doing lots and lots of good, but it definitely doesn't mean that I would begin to sacrifice my other values (e.g. fun, peace, expression) to maximize good.
It's interesting to ask to what extent this is true of everyone - I think we've discussed this before, Matt.
Your version and phrasing of what you'...
People's expectation clock starts running from the time they hit send. More importantly, deadlines related to the email content really set the agenda for how often to check your email.
Then change people's expectations, including those of the deadlines appropriate for tasks communicated by emails that people may not see for a while! (Partly a tongue-in-cheek answer - I know this may not be feasible, and you make a fair point.)
Effective altruism ≠ utilitarianism
Here's the thread on this at the EA Forum: Effective Altruism and Utilitarianism
Potentially worth actually doing - what'd be the next step in terms of making that a possibility?
Relevant: a bunch of us are coordinating improvements to the identical EA Forum codebase at https://github.com/tog22/eaforum and https://github.com/tog22/eaforum/issues
For my part, I'm interested in the connection to GiveWell's powerful advocacy of "cluster thinking". I'll think about this some more and post thoughts if I have time.
Shop for Charity is much better - 5%+ directly to GiveWell-recommended charities, plus browser plugins people have made that apply this every time you buy from Amazon.
Some people offer arguments - eg http://philpapers.org/archive/SINTEA-3.pdf - and for some people it's a basic belief or value not based on argument.
If C doesn't want A to play music so loud, but it's A's right to do so, why should A oblige? What is in it for A?
Some (myself included) would say that A should oblige if doing so would increase total utility, even if there's nothing in it for A self-interestedly. (I'm assuming your saying A had a right to play loud music wasn't meant to exclude this.)
"Tit-for-tat is a better strategy than Cooperate-Bot."
Can you use this premise in an explicit argument that expected reciprocation should be a factor in your decision to be nice toward others? How big a factor, relative to others (e.g. what maximises utility)? If there's an easy link to such an argument, all the better!
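For what it's worth, the quoted premise can be made concrete with a small simulation. This is just an illustrative sketch (the payoff values are the standard textbook ones, not anything from the original comment): Tit-for-Tat matches Cooperate-Bot's score against cooperators, but is exploited only once by a defector rather than every round.

```python
# Minimal iterated prisoner's dilemma sketch with hypothetical standard
# payoffs (T=5, R=3, P=1, S=0). 'C' = cooperate, 'D' = defect.

PAYOFF = {  # (my_move, their_move) -> my score
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def cooperate_bot(opponent_history):
    return 'C'  # always cooperates, regardless of history

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Return (score_a, score_b) over an iterated game."""
    hist_a, hist_b = [], []  # moves each player's *opponent* has made
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(cooperate_bot, always_defect))  # (0, 50): exploited every round
print(play(tit_for_tat, always_defect))    # (9, 14): exploited only in round 1
print(play(tit_for_tat, cooperate_bot))    # (30, 30): full mutual cooperation
```

This only shows that Tit-for-Tat scores better in mixed populations; turning that into an argument that expected reciprocation should weigh in everyday decisions is exactly the further step the question asks about.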
How about moral realist consequentialism? Or a moral realist deontology with defeasible rules like a prohibition on murdering? These can certainly be coherent. I'm not sure what you require them to be non-arbitrary, but one case for consequentialism's being non-arbitrary would be that it is based on a direct acquaintance with or perception of the badness of pain and goodness of happiness. (I find this case plausible.) For a paper on this, see http://philpapers.org/archive/SINTEA-3.pdf
I largely agree with the post. Saying Robertson's thought experiment was off limits and he was fantasising about beheading and raping atheists is silly. I think many people's reaction was explained by their being frustrated with his faulty assumption that all atheists are necessarily (implicitly or explicitly) nihilists of the sort who'd say there's nothing wrong with murder.
One amendment I'd make to the post is that many error theorists and non-cognitivists wouldn't be on board with what the murderer is saying in the thought experiment. For example, they ...
The latest from Scott:
I'm fine with anyone who wants reposting things for comments on LW, except for posts where I specifically say otherwise or tag them with "things i will regret writing"
In this thread some have also argued for not posting the most hot-button political writings.
Would anyone be up for doing this? Ataxerxes started with "Extremism in Thought Experiments is No Vice"
On fragmentation, I find Raemon's comment fairly convincing:
2) Maybe it'll split the comments? Sure, but the comments there are already huge and unwieldy (possibly more-than-dunbar's number worth of commenters) so I'm actually fine with that. Discussion over there is already pretty split up among comment threads in a hard to follow fashion.
I would be more in favour of pushing SSC to have up/downvotes.
That doesn't look like a goer given Scott's response that I quoted.
I would certainly be against linking every single post here, given that some of them would be decidedly off topic.
Noting that it may be best to exclude some posts as off topic.
I'm not sure those topics are outside the norms of LW, puns aside. Cf. this discussion: http://lesswrong.com/r/discussion/lw/lj4/what_topics_are_appropriate_for_lesswrong/
There's discussion of this on the LW Facebook group: https://www.facebook.com/groups/144017955332/permalink/10155300261480333/
It includes this comment from Scott:
I've unofficially polled readers about upvotes for comments and there's been what looks like a strong consensus against it on some of the grounds Benjamin brings up. I'm willing to listen to other proposals for changing the comments, although if it's not do-able via an easy WordPress plugin someone else will have to do it for me.
An underrated and little understood virtue in our culture.
And a nice summary with many good, non-obvious and practical points. I've done a lot of what you describe in the section on process, and can testify to its effectiveness.
I'd be curious to hear any examples you have of integrity-maintaining ways of playing a role (ones which are non-obvious, and where a simpler high-integrity approach might naively suggest one simply shouldn't play the role).