All of imuli's Comments + Replies

My thought is that you probably haven't read Malcolm's post on communication cultures, or you disagree with it.

Roughly, the different communication cultures (guess, ask, tell) are supported by mutual assumptions of trust in different things (and produce hurt and confusion in the absence of that trust). Telling someone you would enjoy a hug is likely to harm a relationship where the other person's assumptions are aligned with ask or guess, even if you don't expect the other person to automatically hug you!

You need to coordinate with people on what type of an... (read more)

1Gleb_Tsipursky
Thanks for linking me to that post. I didn't read it before, so I learned something new - appreciate it! Yup, good point about the hug, this was written with the idea that the other would be committed to Tell Culture as well. Agreed on the key importance of trust, which is point 7 of the list I made.

The article isn't so much about Reiki as about intentionally utilizing the placebo effect in medicine, and about evidence that, for the group of people who currently believe (medicine x) is effective, the placebo effect of fake (medicine x) may be stronger than that of fake (medicine y), while (medicine x) has fewer medically significant side effects than (medicine y).

3Marlon
Placebo doesn't affect objective outcomes anyway. See Orac for a bitter "discussion" about this article. http://scienceblogs.com/insolence/2015/10/15/in-the-pages-of-nature-a-full-throated-defense-of-integrating-quackery-into-medicine/
0BiasedBayes
Sorry about the misleading title, and thanks for downvoting :D. The author goes much further than just "utilizing the placebo effect". The article is basically about endorsing alternative medicine, as you can easily see from the following quotes. There are many shady arguments in the article: "Conventional medicine, with its squeezed appointment times and overworked staff, often struggles to provide such human aspects of care. One answer is to hire alternative therapists." --> Just because there are challenges in medicine, like overworked staff, does not mean alternative medicine practitioners should be hired. "Critics say that this is dangerous quackery. Endorsing therapies that incorporate unscientific principles such as auras and energy fields encourages magical thinking, they argue, and undermines faith in conventional drugs and vaccines. That is a legitimate concern, but dismissing alternative approaches is not evidence-based either, and leaves patients in need." --> Dismissing alternative approaches does not mean that the patient is left "in need". If the patient is in need, the answer is not necessarily alternative medicine. I have trouble seeing the problem with utilizing placebo within evidence-based medicine, while acknowledging its limits, and at the same time NOT "hiring alternative therapists".
-1ChristianKl
The question is whether the phrase "placebo effect" is a good way to think about the issue, or whether it's more sensible to think in terms of maximizing the healing of sick people.

Thinking Fast and Slow references studies of disbelief requiring attention - which is what I assume you mean by "easier".

0ScottL
Yes. That's what I mean. Thanks. I added a link to this paper: Gilbert, D.T., Tafarodi, R.W. and Malone, P.S. (1993) You can't not believe everything you read. Journal of Personality and Social Psychology, 65, 221-233. This is the quote from Thinking Fast and Slow:

We're a long way from having any semblance of a complete art of rationality, and I think that holding on to even the names used in the greater Less Wrong community is a mistake. Good names for concepts are important, and though it may be confusing in the short term while we're still developing the art, we can do better if we don't tie ourselves to the past. Put the old names at the end of the entry, or under a history heading, but pushing the innovation of jargon forward is valuable.

I've been introducing rationality not by name, but by description. As in, “I've been working on forming more accurate beliefs and taking more effective action.”

  • Ionizing Radiation - preferably expressed as synthetic heat or pain with a tolerable cap. The various types could be differentiated by location or flavor, but mostly it's the warning that matters.

There are a significant number of people who judge themselves harshly. Too harshly. It's not fun and not productive; see Ozy's post on scrupulosity. It might be helpful for the unscrupulous to judge themselves with a bit more rigor, but leniency has a lot to recommend it, as viewed from over here.

0[anonymous]
Thanks for the link. Yes, it is possible to inflict much pain on ourselves over our perceived faults, but interestingly these tend to be different things from what others would judge us over. I get the most criticism for not listening to people and ignoring what I am told, while I mainly beat myself up over lack of willpower.

Basic version debug apk here, (more recent) source on GitHub, and Google Play.

The most notable feature lacking is locking the phone when the start time arrives. PM me if you run into problems. Don't set the end time one minute before the start time, or you'll only be able to unlock the phone in that minute.

A more advanced version of this would be to lock the phone into "emergency calls only" mode within a specific time window. I don't know how hard that would be to pull off.

This appears to be possible with the Device Administration API: lock the device, and relock the screen whenever an ACTION_USER_PRESENT intent arrives during the lock window. Neither part requires a rooted phone.
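
A minimal sketch of how that could look (untested, and not necessarily how the app above does it): it assumes the app is already an active device admin with the force-lock policy, that this receiver gets registered for ACTION_USER_PRESENT, and that the hard-coded LockWindow object below stands in for the user's configured window.

```kotlin
import android.app.admin.DevicePolicyManager
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import java.util.Calendar

// Hypothetical lock window, hard-coded for illustration: 22:00 to 07:00.
object LockWindow {
    fun contains(nowMillis: Long): Boolean {
        val hour = Calendar.getInstance()
            .apply { timeInMillis = nowMillis }
            .get(Calendar.HOUR_OF_DAY)
        return hour >= 22 || hour < 7
    }
}

// Fires each time the user unlocks the device; if the lock window is
// still active, immediately send them back to the lock screen.
class RelockReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        if (intent.action != Intent.ACTION_USER_PRESENT) return
        val dpm = context.getSystemService(Context.DEVICE_POLICY_SERVICE)
                as DevicePolicyManager
        if (LockWindow.contains(System.currentTimeMillis())) {
            dpm.lockNow()
        }
    }
}
```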

0The_Jaded_One
If you can write an app that kicks the user back to the lock screen within a specific time window, that would be fantastic.

Probably because they have been dead for forty or fifty years.

The best example still living might be Robert Aumann, though his field (economics) is less central than that of anyone on your list. Find a well-known modern scientist who is doing impressive work and believes in God in any reasonably traditional sense! It's not interesting to show a bunch of people who believed in God when >99% of the rest of their society did.

I'm talking about things on the level of selecting which concepts are necessary and useful to implement in a system, or higher. At the simplest, that's recognizing that you have three types of things that have arbitrary attributes attached and implementing an underlying thing-with-arbitrary-attributes type instead of three special cases. You tend to get that kind of review from people with whom you share a project and a social relationship such that they can tell you what you're doing wrong without giving offense.
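
A toy sketch of that kind of refactor, with an invented domain and invented names purely for illustration:

```kotlin
// Before: three near-identical special cases, each carrying its own
// ad-hoc bag of attributes. (Domain and names invented for illustration.)
class Monster(val name: String, val attributes: MutableMap<String, String> = mutableMapOf())
class Item(val name: String, val attributes: MutableMap<String, String> = mutableMapOf())
class Location(val name: String, val attributes: MutableMap<String, String> = mutableMapOf())

// After: recognize the shared shape and implement a single underlying
// thing-with-arbitrary-attributes type, tagged with its kind.
enum class Kind { MONSTER, ITEM, LOCATION }

class Thing(
    val kind: Kind,
    val name: String,
    val attributes: MutableMap<String, String> = mutableMapOf()
)

fun main() {
    val sword = Thing(Kind.ITEM, "sword", mutableMapOf("damage" to "1d8"))
    println("${sword.kind} ${sword.name}: ${sword.attributes}")
}
```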

I think the 'learn to program by programming' adage came from a lack of places teaching the stuff that makes people good programmers. I've never worked with someone who has gone through one of the new programming schools, but I don't think they purport to turn out senior-level programmers, much less 99th-percentile programmers. As far as I can tell, folks either learn everything beyond the mechanics and algorithms of programming from their seniors in the workplace or discover it for themselves.

So I'd say that there are nodes on the graph that I don't have label... (read more)

0Emile
... or from Stack Overflow / Wikipedia, no? When encountering a difficult problem, one can either ask someone more knowledgeable, figure it out himself, or look it up on the internet.

Ok, then, humble from the OED: "Having a low estimate of one's importance, worthiness, or merits; marked by the absence of self-assertion or self-exaltation; lowly: the opposite of proud."

Clicking out.

I think you understand the concept that I was trying to convey, and are trying to say that 'humble' and 'humility' are the wrong labels for that concept. Right? I basically agree with the OED's definition of humility: “The quality of being humble or having a lowly opinion of oneself; meekness, lowliness, humbleness: the opposite of pride or haughtiness.” Note the use of the word opposite, not absence.

Besides, shouldn't a person who believes himself unworthy tend to accept ideas that contradict his own original beliefs more easily? E.g. Oh, Dr. Kopernikus c

... (read more)
0[anonymous]
The Oxford Dictionary defines humility as being humble OR having a lowly opinion of oneself. Well, if one is meek, that could be a problem. I agree. My theory is that if you are modest, you have a superior advantage in critical thinking compared to an arrogant person :) Let us say that an arrogant scientist and a modest scientist are doing research. A modest person will be more open to hypotheses that seem unlikely. If the evidence is later updated, I think that a humble scientist will have an easier time coping with it and maintaining his critical thinking, while an arrogant person will be more likely to try to find evidence supporting his own claims.

I was speaking more to how someone acts inside than how someone presents themself. If they believe themself unworthy or unimportant or without merit, they tend not to reject ideas very well and do a lot of equivocating. (Though, I think, all my evidence for that is anecdotal.)

0[anonymous]
Yes, and that is what I mean when I say you confuse the concepts :) Modesty is perhaps the better term here. Humility is modesty in all aspects of life. Compare it with piety. Humility means that you don't overstate your own importance even when you are successful, and that you respect others even if they are less intelligent/successful. It is the opposite of arrogance. If you are successful and brag much you are arrogant; if you are successful and modest/humble, you don't brag. Besides, shouldn't a person who believes himself unworthy tend to accept ideas that contradict his own original beliefs more easily? E.g. Oh, Dr. Kopernikus claims that the earth ISN'T flat? Well, who am I to come and believe otherwise?

You might say that they are both traps, at least from a truth seeker's perspective. The arrogant will not question their belief sufficiently; the humble will not sufficiently believe.

0[anonymous]
I disagree. Why would a humble person have a problem believing evidence? I think you confuse the concepts.

There're other calculations to consider too (edit: and they almost certainly outweigh the torture possibilities)! For instance:

Suppose that you can give one year of life this year by giving $25 to AMF (GiveWell says $3340 to save a child's life, not counting the other benefits).

If all MIRI does is delay the development of any type of Unfriendly AI, your $25 would need to let MIRI delay that by, ah, 4.3 milliseconds (139 picoyears). With 10% a year exponential future discounting and 100 years before you expect Unfriendly AI to be created if you don't hel... (read more)
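
For what it's worth, a rough sanity check of those figures; the ~7.2 billion world population is my own assumption, and the discounting is left out of this step:

```latex
% Rough sanity check of the 4.3 ms / 139 picoyear figure.
% Assumption (mine): world population of roughly 7.2 billion people.
\[
  4.3\ \text{ms} \;=\; 4.3\times 10^{-3}\ \text{s}
  \;\approx\; \frac{4.3\times 10^{-3}}{3.15\times 10^{7}}\ \text{yr}
  \;\approx\; 1.4\times 10^{-10}\ \text{yr} \;\approx\; 140\ \text{picoyears}
\]
\[
  7.2\times 10^{9}\ \text{people} \times 1.4\times 10^{-10}\ \text{yr}
  \;\approx\; 1\ \text{life-year}
\]
% That is about what $25 to AMF buys above. With 10%/yr discounting
% over 100 years, the relevant discount factor would be
% 1.1^{100} \approx 1.4\times 10^{4}.
```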

5dxu
Yes, but that was when the tension was still high, because the story was incomplete. Now that it is complete, the desire for closure won't be as strong, and so it's questionable if any recursive stories will be spawned.

But what does one maximize?

We cannot maximize more than one thing (except in trivial cases). It's not too hard to call the thing that we want to maximize our utility, and the balance of priorities and desires our utility function. I imagine that most of the components of that function are subject to diminishing returns, and such components I would satisfice (a rough sketch of that distinction follows the list below). So I understand this whole thing as saying that these things have the potential for unbounded linear or superlinear utility?

  • epistemic rationality
  • ethics
  • social interaction
  • existence

I'm not sure if I'm confused.
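
And to pin down the satisfice-versus-maximize distinction I'm leaning on above, a rough formalization of my own (not something from the original post):

```latex
% My own rough formalization: utility as a weighted sum of components,
% each with its own returns curve.
\[
  U(x) \;=\; \sum_i w_i \, f_i(x_i)
\]
% A concave component, e.g. f_i(x) = \log(1+x), has diminishing
% returns; past some point extra effort barely helps, so I satisfice it.
% A linear or superlinear component, e.g. f_i(x) = x or f_i(x) = x^2,
% keeps paying off at least proportionally; those are the components
% the post seems to claim are worth maximizing without bound.
```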

3David Althaus
Expected utility :) I guess I have to disagree. Sure, in any given moment you can maximize only one thing, but this is simply not true for larger time horizons. Let's illustrate this with a typical day of Imaginary John: He wakes up and goes to work at an investment bank to earn money (money maximizing) in order to donate it later to GiveWell (ethical maximizing). Later at night he goes on OKCupid, or to a party, to find his true soulmate (romantic maximizing). He maximized three different things in just one day. But I agree that there are always trade-offs. John could have worked all day instead of going to the party. I think that some components of my utility function are not subject to diminishing returns. Let's use your first example, "epistemic rationality". Epistemic rationality is basically about acquiring true beliefs or new (true) information. But sometimes learning new information can radically change your whole life and thus is not subject to diminishing marginal returns. To use an example: Let's imagine you are a consequentialist and donate to charities to help blind people in the USA. Then you learn about effective altruism and cost-effectiveness and decide to donate to the most effective charities. Reading such arguments has just increased your positive impact on the world by a hundredfold! (Btw, Bostrom uses the term "crucial consideration" exactly for such things.) But sure, at some point you're gonna hit diminishing returns. On to the next issue – Ethics: Let's say one value of mine is to reduce suffering (what could be called non-suffering maximizing). This value is also not subject to diminishing marginal returns. For example, imagine 10,000 people getting tortured (sorry). Saving the first 100 people from getting tortured is as valuable to me as saving the last 100 people. Admittedly, with regard to social interactions there is an upper bound. But this upper bound is probably higher than most seem to assume. Also, it occurred to me that one has to distingu

The zip file has some extra Apple metadata files included. Nothing too revealing, just Dropbox bits.

As is Tom Riddle. I imagine the point of divergence is in Tom Riddle's childhood somewhere, which pushed Albus into consulting the maze of the future, which...

Alastor Moody went to Minerva's right and sat down.

Amelia Bones sat down in a chair, taking Minerva's right. Mad-Eye Moody took the chair to her own right.

Oops!

I had always modeled part of the appeal of working out at the gym as being that one doesn't need to coordinate with other people.

0Curiouskid
Pickup basketball games require some coordination once you get to the gym (getting a game going can be somewhat difficult, but is usually pretty easy), but you can just go whenever you want.

Timing note: While this update was at 12pm Pacific, that is no longer the same as 8pm UTC, due to daylight saving time beginning in the US. I'm assuming tomorrow will be the same (at 19:00/7pm UTC)?

0Gondolinian
He just fixed it. I've updated the note in the OP.

Your question is: after an airliner accident, how often do any of the next n flights following the same route also have an accident?

Guessing (2/3 confidence) lower than the base rate.

5buybuydandavis
It wasn't "the same" route, but the 9/11 attacks have skewed the coincidence rates. You'd expect intelligent adversaries to hit and hit again quickly, before the means of their attack were found out and countermeasures were implemented.
2michael_b
Close. If the accident is completely unexplained, as it often is immediately following an accident, shouldn't the risk be substantially higher immediately following the accident and then rapidly decay back to baseline as more information becomes available?
3Emile
Yeah, that was my thought too - after an accident, everyone is more careful and diligent, because there will be a search for someone to blame, and that's really not a good time to be asleep at the wheel, whatever your level of responsibility.

Nicholas Flamel is dead, at least according to Dumbledore. (Or tucked away for later secret extraction?)

3Izeinwinter
Yeah, two problems with that: 1: I really don't put it past Dumbledore to just lie about everything to Voldemort, and 2: Flamel had access to the stunt Voldemort pulled on Hermione for a minimum of 500 years, and potentially more like a thousand. I figure good odds killing Flamel just gets you a rebirth in fire, phoenix-style, and an annoyed arch-wizard.
0Val
Does Dumbledore know about Perenelle? Maybe I just don't remember.

Posit a world where sustenance, shelter, and well-being are magically provided - nobody actually needs to do anything to continue existing. This would be an instance of what is colloquially, and perhaps to an economist incorrectly, termed a post-scarcity society.

I'm less certain about this phrasing, since I'm not yet comfortable with the semantics of the economic definition of scarce, but one could try: a society where only time and some luxuries are (economically) scarce.

-2[anonymous]
You don't need to do anything to continue existing already. Someone will find you, put you in a hospital. Your life will be sustained regardless of your wishes. See, aren't definitions tricky? Isn't it nice that we're spending 3 whole articles just nailing down the most basic concept? Concepts which are basic, but not easy? =)

This is why I don't take promises of a post-scarcity society very seriously. They seem to think in terms of leaps in production technology, as if the key to ending scarcity is producing lots and lots of stuff.

Is this simply a matter of people using the word scarcity differently?

When someone talks about a post-scarcity future, I doubt that they are thinking about a future without choice between alternatives, but rather a future without unmet needs of one sort or another. Indeed, such futures tend to have a bewildering amount of choice and alternative uses of time.

0[anonymous]
Haven't faintest idea what they really mean, frankly. Usually too fuzzy, vague; using technical terms in odd ways. "Post-scarcity" and "economics" or "economy" should occupy same sentence in only same way that "inorganic" and "biology" should.

I wonder if this (distrusting imperfect algorithms more than imperfect people) holds for programmers and mathematicians. Indeed, the popular perception seems to be that such folks overly trust algorithms...

7TheMajor
I was under the impression that mathematicians are actually too distrusting of imperfect algorithms (compared to their actual error rates). The three examples I ran into myself were:
* In analysis, in particular in bifurcation analysis, a (small) parameter epsilon is introduced which determines the size of the perturbation. Analysts always loudly proclaim that 'there exists an epsilon small enough' such that their analysis holds (example values are often around 1/1000), but frequently the techniques are valid for values as large as epsilon = 1/2 (for example). Analysts who are unwilling to make statements about such large values of epsilon seem to be too mistrusting of their own techniques/algorithms.
* Whether or not pi and e are normal are open questions in mathematics, but statistical analysis of the first couple of billion digits (if I am not mistaken) suggests that pi might be normal whereas e is probably not. Still, many mathematicians seem to be agnostic about these questions, as only a few billion data points have been obtained.
* In the study of number fields, probabilistic algorithms are implemented to compute certain interesting properties such as the class group (algorithms that are guaranteed to give the right answer exist, but are too slow to be used in anything other than a few test cases). These algorithms generally have a guaranteed error rate of about 0.01% (sometimes this is a tunable parameter), but I know of a few mathematicians in this field (which makes it a high percentage, since I only know a few mathematicians in this field) who will frequently doubt the outcome of such an algorithm.
Of course these are only my personal experiences, but I'd guess that mathematicians are on the whole too fond of certainty and trust imperfect algorithms too little rather than too much.

Different methods are more or less likely to lead one to the truth (in a given universe). I see little harm in calling the less likely arts dark. Rhetoric is surely grey at the lightest.

7unconscious
Presentation will influence how people receive your ideas no matter what. If you present good ideas badly, you'll bias people away from the truth just as much as if you presented bad ideas cleverly.

Adapting the Horcrux (2.0 in HPMoR) spell to make Amulets of Life Saving was the very first thing I thought of when considering ethical immortality in HPverse.

Hermione can always transfigure herself older - possibly with help from the stone - if that becomes a problem.

Voldemort believes that Harry “WILL TEAR APART THE VERY STARS IN HEAVEN” without Hermione. What wouldn't you do to protect the person preventing that, given that you are willing to murder unknown hundreds for Horcruxes?

0linkhyrule5
Be stupid? There's no excuse for letting Harry have his stuff back, after all.

One does not get set back 49 years of hard work toward immortality every day.

And it might possibly have prompted Harry to insist on hearing about Bellatrix in Parseltongue.

3ChristianKl
Quirrell didn't reveal that he's a Parselmouth, but instead went the route of transforming into a snake, which might not be bound by the Parselmouth truth-telling bind.

You cannot transfigure from air; it's a hard physical limit. Harry tested this.

1TryingToThink
Can't you? If I recall correctly, Harry tried and failed before succeeding with Partial Transfiguration, while he was still thinking in terms of atoms. Also, you could consider transfiguring air to be a partial transfiguration unless it's within a closed system, since you can't think of air as a whole or defined object. So it's worth considering that Harry might be able to do it using the model of timeless physics, like he did to achieve partial transfiguration, since we have no proof that he tried again afterwards (that I recall). Still, this wouldn't be the best moment to try it. Besides, he could've just tried to levitate the gun, or levitate something towards the gun to use as a shield, and run away, or retrieve his invisibility cloak, or use the spare turn on the Time-Turner... There are countless possibilities. Even if he did appear in the corridor one hour before, when only Snape was there, he could probably have convinced him to act as he did towards him... Which would've opened the possibility of another Harry hidden somewhere, ready to try to take down Professor Quirrell, or rather a Harry who had already sent a messenger Patronus to Dumbledore so he could quietly, and without disrupting the Quidditch match, take down the very charming Defence professor. Actually, this could still happen if Harry somehow retrieves his Time-Turner, or even if Cedric is hidden in Harry's pouch and uses it (though I'm not sure how this would turn out...)

I mean, he just forged a note "from yourself"

Or Harry just wrote a note that looked like Quirrell had forged it, to help his past-self figure it out at the appropriate time.

I could imagine calling all the changes that take place in one's mind due to an event the memory of that event - not just the ones that involve conscious recall. Still, to be a little more general, I would maybe frame it as process vs. consequences.

Though honestly I'm more interested in understanding the different types of mind-changes it is useful to have names for.

The spell in progress that may kill hundreds of students, and that the stone can fix — that sounds like something transfigured into a gas.

If the snitch is both the trigger and the epicenter of this spell in progress, then this would explain how the three wishes will be granted by "a single plot". The game is played/watched by mostly Slytherin/Ravenclaw students, so mostly Slytherin/Ravenclaw students would die. I can see a school like Hogwarts then giving both these houses the House Cup as a way to deal with the trauma for surviving students and honor the lost children. So that's all three wishes: both houses win the House Cup, and the snitch is removed from Quidditch, all using "... (read more)

1solipsist
Ooooh, I like it! But Harry was out watching the Quidditch game breathing non-doomy breaths.

They want them frozen immediately, shipped in an insulated box with an ice pack, and then they extract cells and store the cells cryogenically. So that's probably not sufficient.

The two tooth storage services I looked at both cost US $120/year. One-time fees were in the $600-1800 range. Both prices cover up to four teeth extracted simultaneously.

2Vaniver
Do you have to do anything fancy for tooth storage? As I recall, the dentist managed to extract my wisdom teeth intact and so I think they're sitting in a box somewhere (but have been for ~five years). Given that people get useful DNA out of prehistoric specimens, that makes me not immediately dismiss the possibility (but I expect that freezing them or something similar is better).
5skeptical_lurker
Is it possible to freeze e.g. blood instead? I'd rather not have my teeth extracted if possible.

This is not a test of whether we should judge the truth by what the church condemns, but rather a test of the OP's thesis that they are/were not specifically opposing the progress of truth on an object level.

3Gondolinian
I think we might both be misunderstanding each other? I thought your post was implying that the most important thing was that Galileo's theory was empirically confirmed and the church's falsified, I then intended to imply that someone could have a correct theory through blind luck while still being unscientific/irrational. (I don't know enough relevant history to have much of an opinion on the specific case of Galileo, I'm just pointing out the meta-level rule.)

Galileo was eventually demonstrated correct. Were there trials where the church was eventually demonstrated correct?

-2bobfrank
That's just the thing, though. For all the hoopla people have made over it, Galileo was not eventually demonstrated correct, for the very simple reason that he was factually wrong. Yes, the Sun is the center of the solar system, not the Earth. But that's not what he was teaching. He was teaching that the Copernican model of orbital dynamics was literally correct, and it isn't. It got heliocentrism right, but a lot of other important details wrong, and Galileo was teaching that not only was this incorrect model literally true, but that it also had scriptural support. (Which means he truly was a heretic, by the definition being applied there.) At his trial, he was offered the chance to produce scientific evidence to prove his position, but he couldn't actually do that, because his position could not be proven by scientific evidence due to being factually incorrect.
4alienist
Yes, not quite a trial, but the condemnations at the University of Paris. In every falsifiable statement among them, the church was eventually correct, e.g., the universe isn't infinitely old, vacuum exists, astrology is bunk.
7Gondolinian
A broken clock may be right twice a day, but you still shouldn't use it to tell time.
7ChristianKl
If the OP is right then Galileo was wrong in a lot of arguments he made.

I would hazard that cloning comes a lot closer to 100% fidelity than a child comes to 50% fidelity. In any case, one cannot transfer one's self to clones or children with our current means - I doubt one can even convey 1%.

0Gunnar_Zarncke
That entirely depends on how you measure this.

Upvoted for cuteness.

However, my understanding is that technology has already reached the level of making copies with ~100% hardware fidelity.

0Gunnar_Zarncke
We know how to make ~100% copies of software, sure, but hardware? I don't think we can do single-material solid copies with an accuracy of much better than µm resolution. We can 'copy' (clone) a lot of life-forms, so you might mean that kind of hardware copy. I don't know the mutation rate of animal cloning, and it is probably good enough to call it ~100% on the DNA level. But the resulting phenotype often contains errors that make it questionable to call the result a 100% copy.

Note - images and links are broken.

Noticing when you're confused and confidence calibration are two rationality skills that are necessary to have in your system 1 in order to progress as a rationalist… and much of instrumental rationality can be construed as retraining system 1.

There is a dependency tree for Eliezer Yudkowsky's early posts. It's not terribly pretty, but with a couple of hours and a decent data presentation toolkit someone could probably make a pretty graphical version. It doesn't include a lot of later contributions by other people, but it'd be a start.
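
As a rough illustration of how little tooling that would take, here is a toy sketch of my own (not from the thread): it turns dependency pairs into Graphviz DOT text, which something like `dot -Tsvg` could then render. The example edges are illustrative placeholders, not taken from the actual dependency tree.

```kotlin
// Toy sketch: emit Graphviz DOT text from a list of (prerequisite -> post)
// pairs. The example edges below are illustrative placeholders.
fun toDot(edges: List<Pair<String, String>>): String = buildString {
    appendLine("digraph sequences {")
    appendLine("  rankdir=LR;")
    for ((prereq, post) in edges) {
        appendLine("  \"$prereq\" -> \"$post\";")
    }
    appendLine("}")
}

fun main() {
    val edges = listOf(
        "Post A" to "Post B",
        "Post B" to "Post C"
    )
    print(toDot(edges))  // pipe into `dot -Tsvg` to get a graph image
}
```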

0fowlertm
I thought of that as well; it does need some work done in terms of presentation. It'd be a good place to start, yes.
3[anonymous]
I'm not sure that's the same as a skill tree.

Consider it to be public domain.

If you pull the image from its current location and message me when you add more folks, I might even update it. Or I can send you my data if you want to go for more consistency.

Birth Year vs Foom:

A bit less striking for the subset who are famous enough to have Google pop up their birth year (green).

1Brian_Tomasik
This is awesome! Thank you. :) I'd be glad to copy it into my piece if I have your permission. For now I've just linked to it.