The article isn't so much about Reiki as about intentionally using the placebo effect in medicine. It also suggests there is some evidence that, for people who currently believe (medicine X) is effective, the placebo effect of fake (medicine X) may be stronger than that of fake (medicine Y), and (medicine X) may have fewer medically significant side effects than (medicine Y).
We're a long way from having any semblance of a complete art of rationality, and I think that holding on to even the names used in the greater Less Wrong community is a mistake. Good names for concepts are important, and while it may be confusing in the short term while we're still developing the art, we will be able to do better if we don't tie ourselves to the past. Put the old names at the end of the entry, or under a history heading, but pushing the innovation of jargon forward is valuable.
There are a significant number of people who judge themselves harshly. Too harshly. It's not fun and not productive; see Ozy's post on scrupulosity. It might be helpful for the unscrupulous to judge themselves with a bit more rigor, but leniency has a lot to recommend it as viewed from over here.
Basic version debug APK here, (more recent) source on GitHub, and Google Play.
The most notable missing feature is locking the phone when the start time arrives. PM me if you run into problems. Don't set the end time to one minute before the start time, or you'll only be able to unlock the phone during that minute.
A more advanced version of this would be to lock the phone into "emergency calls only" mode within a specific time window. I don't know how hard that would be to pull off.
This appears to be possible with the Device Administration API: relock the screen upon receiving an ACTION_USER_PRESENT intent. Neither of these requires a rooted phone.
Probably because they have been dead for forty or fifty years.
The best example still living might be Robert Aumann, though his field (economics) is a less central science than that of anyone on your list. Find a well-known modern scientist who is doing impressive work and believes in any reasonably traditional sense of God! It's not interesting to show a bunch of people who believed in God when >99% of the rest of their society did.
I'm talking about things on the level of selecting which concepts are necessary and useful to implement in a system or higher. At the simplest that's recognizing that you have three types of things that have arbitrary attributes attached and implementing an underlying thing-with-arbitrary-attributes type instead of three special cases. You tend to get that kind of review from people with whom you share a project and a social relationship such that they can tell you what you're doing wrong without offense.
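A toy sketch of that simplest case (the names and attributes below are mine, purely illustrative): three near-identical special-case types collapse into one underlying thing-with-arbitrary-attributes type.

```python
# Instead of writing three special-case classes (say Monster, Item, Room),
# each with its own ad-hoc attribute handling, implement one underlying
# thing-with-arbitrary-attributes type and treat the three as plain data.

class Thing:
    """A kind plus an open-ended bag of attributes."""
    def __init__(self, kind, **attributes):
        self.kind = kind
        self.attributes = dict(attributes)

    def get(self, name, default=None):
        return self.attributes.get(name, default)

# The three former special cases become data, not code:
goblin = Thing("monster", hit_points=7, hostile=True)
sword = Thing("item", weight=3.0, damage=5)
cave = Thing("room", dark=True, exits=["north"])

print(goblin.get("hit_points"))  # → 7
print(cave.get("weight", 0))     # → 0 (rooms simply lack that attribute)
```

Seeing that the three cases are one case is exactly the kind of design-level observation a good reviewer hands you.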
I think the "learn to program by programming" adage came from a lack of places teaching the stuff that makes people good programmers. I've never worked with someone who has gone through one of the new programming schools, but I don't think they purport to turn out senior-level programmers, much less 99th-percentile programmers. As far as I can tell, folks either learn everything beyond the mechanics and algorithms of programming from their seniors in the workplace or discover it for themselves.
So I'd say that there are nodes on the graph that I don't have label...
I think you understand the concept that I was trying to convey, and are trying to say that 'humble' and 'humility' are the wrong labels for that concept. Right? I basically agree with the OED's definition of humility: “The quality of being humble or having a lowly opinion of oneself; meekness, lowliness, humbleness: the opposite of pride or haughtiness.” Note the use of the word opposite, not absence.
...Besides, shouldn't a person who believes himself unworthy tend to accept ideas that contradict his original beliefs more easily? E.g. Oh, Dr. Kopernikus c
There are other calculations to consider too (edit: and they almost certainly outweigh the torture possibilities)! For instance:
Suppose that you can give one year of life by giving $25 to AMF (GiveWell says $3,340 to save a child's life, not counting the other benefits).
If all MIRI does is delay the development of any type of Unfriendly AI, your $25 would need to let MIRI delay that by, ah, 4.3 milliseconds (139 picoyears). With 10% a year exponential future discounting and 100 years before you expect Unfriendly AI to be created if you don't hel...
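The back-of-the-envelope arithmetic behind that figure, assuming a world population of about 7.2 billion and ignoring the discounting for the moment, goes roughly like this:

```python
# Assumed inputs: ~7.2 billion people alive, and $25 buys one
# life-year via AMF. A delay of t years to extinction-level Unfriendly
# AI buys roughly t * population extra life-years, so the break-even
# delay per $25 donation is:

population = 7.2e9            # people who benefit from the delay
life_years_per_donation = 1   # what $25 buys through AMF

delay_years = life_years_per_donation / population

seconds_per_year = 365.25 * 24 * 3600
print(delay_years * 1e12)              # ≈ 139 picoyears
print(delay_years * seconds_per_year)  # ≈ 0.0044 s, i.e. a few milliseconds
```

The discounting only raises the bar further, since the averted catastrophe sits a century out.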
But what does one maximize?
We cannot maximize more than one thing (except in trivial cases). It's natural enough to call the thing that we want to maximize our utility, and the balance of priorities and desires our utility function. I imagine that most of the components of that function are subject to diminishing returns, and I would satisfice such components. So I understand this whole thing as saying that these things have the potential for unbounded linear or superlinear utility?
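A toy illustration of the distinction I mean (the functions and numbers are made up): a component with diminishing returns eventually pays almost nothing per extra unit, so a satisficing threshold is reasonable there, while a linear component keeps paying the same rate however much you already have.

```python
import math

def diminishing(x):
    """A component with diminishing returns, e.g. log utility."""
    return math.log(1 + x)

def linear(x):
    """A component with no saturation."""
    return 0.1 * x

# Marginal value of one more unit, at low vs. high levels:
print(diminishing(11) - diminishing(10))    # already small...
print(diminishing(101) - diminishing(100))  # ...and shrinking fast
print(linear(101) - linear(100))            # constant 0.1, forever
```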
I'm not sure if I'm confused.
Posit a world where sustenance, shelter, and well-being are magically provided - nobody actually needs to do anything to continue existing. This would be an instance of what is colloquially, and perhaps to an economist incorrectly, termed a post-scarcity society.
I'm less certain about this phrasing; I'm not yet comfortable with the semantics of the economic definition of scarce, but one could try: a society where only time and some luxuries are (economically) scarce.
This is why I don't take promises of a post-scarcity society very seriously. Their proponents seem to think in terms of leaps in production technology, as if the key to ending scarcity is producing lots and lots of stuff.
Is this simply a matter of people using the word scarcity differently?
When someone talks about a post-scarcity future, I doubt that they are thinking about a future without choice between alternatives, but rather a future without unmet needs of one sort or another. Indeed, such futures tend to have a bewildering amount of choice and alternative uses of time.
I could imagine calling all the changes that take place in one's mind due to an event the memory of that event - not just the ones that involve conscious recall. Still, to be a little more general, I would maybe frame it as process vs. consequences.
Though honestly I'm more interested in understanding the different types of mind-changes it is useful to have names for.
If the snitch is both the trigger and the epicenter of this spell in progress, then this would explain how the three wishes will be granted by "a single plot". The game is played/watched by mostly Slytherin/Ravenclaw students, so mostly Slytherin/Ravenclaw students would die. I can see a school like Hogwarts then giving both these houses the House Cup as a way to deal with the trauma of surviving students and honor the lost children. So that's all three wishes: both houses win the House Cup, and the snitch is removed from Quidditch, all using &qu...
There is a dependency tree for Eliezer Yudkowsky's early posts. It's not terribly pretty, but with a couple hours and a decent data presentation toolkit someone could probably make a pretty graphical version. It doesn't include a lot of later contributions by other people, but it'd be a start.
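If someone does take that on, a minimal sketch might just emit Graphviz DOT text from a scraped edge list, which `dot -Tsvg` can then render. The edges below are stand-ins I made up for illustration, not the real dependency data:

```python
# Emit the post-dependency tree as Graphviz DOT. The real edge list
# would be scraped from the dependency page; this one is illustrative.
dependencies = {  # post -> posts it depends on
    "Making Beliefs Pay Rent": [],
    "Belief in Belief": ["Making Beliefs Pay Rent"],
    "Religion's Claim to be Non-Disprovable": ["Belief in Belief"],
}

def to_dot(deps):
    lines = ["digraph posts {"]
    for post, parents in deps.items():
        for parent in parents:
            lines.append('  "{}" -> "{}";'.format(parent, post))
    return "\n".join(lines) + "\n}"

print(to_dot(dependencies))
```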
My thoughts are that you probably haven't read Malcolm's post on communication cultures, or you disagree.
Roughly, different styles of communication cultures (guess, ask, tell) are supported by mutual assumptions of trust in different things (and produce hurt and confusion in the absence of that trust). Telling someone you would enjoy a hug is likely to harm a relationship where the other person's assumptions are aligned with ask or guess, even if you don't expect the other person to automatically hug you!
You need to coordinate with people on what type of an...