It's a comment on one of Eliezer Yudkowsky's Facebook posts. I got permission to post it here, as I thought it was worth posting.
The Courage Wolf looked long and slow at the Weasley twins. At length he spoke, "I see that you possess half of courage. That is good. Few achieve that."
"Half?" Fred asked, too awed to be truly offended.
"Yes," said the Wolf, "You know how to heroically defy, but you do not know how to heroically submit. How to say to another, 'You are wiser than I; tell me what to do and I will do it. I do not need to understand; I will not cost you the time to explain.' And there are those in your lives wiser than you, to whom you could ...
Welcome to Less Wrong!
This is an old topic. Note the title: Welcome to Less Wrong! (2012). I'm not sure where the new topic is, or even if it exists, but you should be able to search for it.
I recommend starting with the Sequences: http://wiki.lesswrong.com/wiki/Sequences
The sequence you are looking for in regards to "right" and "should" is likely the Metaethics Sequence, but said sequence assumes you've read a lot of other stuff first. I suggest starting with Mysterious Answers to Mysterious Questions, and if you enjoy that, move on to How to Actually Change Your Mind.
In that case, I pre-commit that if I win, I'll spend it on something leisure-related or some treat that I otherwise wouldn't be able to justify the money to purchase.
I co-operated; I'd already committed myself to co-operating on any Prisoner's Dilemma involving people I believed to be rational. I'd like to say it was easy, but I did have to think about it. However, I stuck to my guns and obeyed the original logic that got me to pre-commit in the first place.
If I assume other people are about as rational as me, then a substantial majority of people should t...
I wanted to thank you for this. I read this post a few weeks ago, and while it was probably a matter of like two minutes for you to type it up, it was extremely valuable to me.
Specifically a paraphrase of point B, "The point where you feel like you should give up is way before the point at which you should ACTUALLY give up" has become my new mantra in learning maths, and since I do math tutoring when the work's there, I'm passing this message on to my students as well.
So, thank you very much for this advice.
The main technique I used was bypassing the "trying to try" fallacy, as well as some HPMOR-style thinking: obstacles mean you get creative, rather than give up. The most important thing was just not giving up upon finding the first reasonable-sounding solution, even if its chances of success weren't particularly high.
As to how I applied it, that was the best part, and what the second paragraph alluded to; it was my default response, to the point where I was briefly stunned when my friend was throwing up easily circumventible roadblocks to my idea...
I got to use rationality techniques to not only solve a friend's problem that had been ongoing for months, but also managed to completely change the way he thought about problem-solving in general. Not sure if that second part will actually stick.
On a related note, that was when I found out that I've internalised the basics of how to REALLY approach a problem with the intent of solving it, to such a degree that I'd forgotten that my thought process was unusual.
How'd it go?
EDIT: My bad, I thought this was posted on 22 January 2013, not 22 January 2012. I'll leave this up just in case though.
What I've found is that the spoilt version of Nethack tests, more than anything else, patience. Nethack spoilt isn't about scholarship, really. You don't study. You have a situation, and you look up things that are relevant to that situation. There is a small bit of study at the beginning, generally when you look up stuff like how to begin, what a newbie-friendly class/race is, and how to not die on the second floor.
But really, it's patience. I once did an experiment where players who were relatively new to Nethack were encouraged to spoil themselves as ea...
Oh, no, I have no problems with people spoiling themselves for Nethack. That's pretty much the only way to actually win. But if your aim is to improve rationality, rather than to do as well as possible within the game, it might be better to play it unspoiled. After all, Morendil mentioned "hypothesis testing" as something that was taught by Nethack: The spoilt version doesn't really test that.
I'm assuming this only applies if you aren't using spoilers for NetHack?
I'm not sure about its rationality testing or improving abilities, but I find it very fun :)
But this is a rather interesting example of rationality at work. It's useful for a couple of reasons.
1) There's a clear indication here of incorrect beliefs leading to unwanted consequences. In this case, downplaying the importance of cup holders is causing a loss of profit that could otherwise be gained.
2) It's fairly trivial and simple, which is actually a good thing in its favor. It's not technical, meaning we can all understand what's going on, and it's extremely unlikely anyone is going to have an entrenched belief about cup holders already th...
Thank you. I apologise for not asking you for verification sooner. My downvote is revoked and I've upvoted your post.
I learnt that I should have asked for verification sooner, either immediately, or as soon as you informed me you had reasons for wishing to keep said verification private. I also learnt that I should assign a higher initial probability to claims made by LessWrong members I don't know, which is a lesson I'm very glad to have learnt, since I do enjoy trusting people.
You're right.
In this case, assume immortals have perfect memories and would eventually work out that you don't, and assume you are an immortal who can't remember whether you've played a particular opponent before (but can vaguely remember how often you get defected on vs. co-operated with by the entire field). What do you think your optimal strategy would be?
Okay, I've sent a PM asking you for verification.
I never actually claimed you were making this up, merely that the likelihood of your story being true was low. You inventing the story is only one possible reason why your story might be false. You could also simply be mistaken, have witnessed actions that looked much worse out of context (For example, maybe your friends did something to deserve their treatment, but didn't tell you because it would make them look bad) or some other reason I haven't thought of.
In addition, you ask why I care so much about lack of transparency when I can think of reasons why...
That's how it looks from your perspective. From a reader's perspective, it looks like someone who isn't a notable community figure on LessWrong (At least, I assume this, based on your karma scores and the fact that I have never heard of you. If I'm wrong, I apologise.) has suddenly made a claim with a significant burden of proof on it, and not provided any concrete evidence, despite apparently sitting on some. "I have evidence but am not going to include this in this post, nor will I explain why I cannot include the evidence in this post." i...
If you have references, and you want to get potentially helpful information to rationalists, why on earth would you not just post these references to begin with? If you have a good reason for not making the references public, why didn't you say so in your initial post?
If you have an imperfect memory and you think they don't, wouldn't you want to pre-commit to attempting co-operation with any immortal entities you face, given they are very likely to remember you, even if you don't remember them? This is of course assuming that most or all other immortal entities you're likely to face in the Dilemma do in fact have perfect memories.
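The expected-value logic behind that precommitment can be sketched as a toy calculation. Everything here is invented for illustration: the payoff matrix is the standard Prisoner's Dilemma one, and the key assumption is the one from the comment above, namely that perfect-memory opponents co-operate with a known co-operator and mirror a known defector after the first meeting:

```python
# Toy sketch: an imperfect-memory immortal facing perfect-memory immortals.
# Assumption (from the comment above): opponents remember you even when you
# don't remember them, so after the first round they mirror your known policy.
# Payoff values are the conventional PD numbers, chosen purely for illustration.

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def lifetime_payoff(my_policy, rounds=1000):
    """Total payoff over many encounters against perfect-memory opponents."""
    total = PAYOFF[(my_policy, "C")]  # first meeting: they co-operate by default
    total += (rounds - 1) * PAYOFF[(my_policy, my_policy)]  # then they mirror me
    return total

print(lifetime_payoff("C"))  # 3 + 999 * 3 = 3000
print(lifetime_payoff("D"))  # 5 + 999 * 1 = 1004
```

Under these assumptions the precommitted co-operator comes out far ahead, because the one-time gain from defection is swamped by a lifetime of retaliation from opponents who do remember.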
As far as I understand it, causality is just the relationship between cause and effect. If I'm right, saying it tries to avoid paradoxes is like saying gravity acts whenever someone falls off a cliff to prevent them from flying.
If I really needed to explain away time travel in this fic, I'd probably have a future Twilight show up and say "Whatever you do, do NOT use time travel. I don't care how bad it is. Even if Equestria is going to be destroyed if you don't. DO. NOT. MESS. WITH. TIME."
Fortunately, I don't see any situation in this fic where T...
It does, in fact, weaken the anti-alicorn argument (Different from the pro-death argument, even though they still wind up the same) but with the amount of ammunition I've gotten from LessWrong, the anti-alicorn side is no longer weak in the slightest.
Good point. Shining would be a good one as well, because I already figured out he'd probably be the next alicorn if alicornism won.
1) He's a very skilled unicorn, so he can transform other alicorns. 2) He has a strong relationship with not one, but two of the royal alicorns. 3) He's very important in the defense of the realm.
Hell, I'm pretty sure Shining is technically a prince now anyway. It wouldn't be much of a stretch, and he could certainly appear in the same settings as the other four where other potential characters can't. (Say, eating at the royal dining room at Canterlot Castle.)
...I hadn't thought of that. Congratulations. You win. No, seriously. In the event that ponies can become immortal WITHOUT being alicorns, there simply isn't a good enough argument for deathism, period.
For the sake of the story, however, when the argument gets brought up by Twilight, it'll have to be shown to be magically impossible to do it. I'm going to have to make something up. Because the argument is literally too good. It actually makes the story a lot worse, because there's no longer a meaningful conflict between the two ideologies.
Since it's a stor...
Your assumption is correct. The alicorn transformation can only be granted to ponies.
I'm not sure what you mean by the danger, in point 2. I can't think of a danger that fits all the criteria you mentioned. Military threat wouldn't affect other ponies, and envy would affect other races regardless of alicorn rule or not.
Point 4 is good, though it has a fairly easy answer: Ponies would have to be approved by someone (or multiple someones) trustworthy in order to be upraised, not merely by any alicorn. So, you would need to trust the pony to adhere to the law...
For what it's worth, I'm now taking pro-alicornism arguments, having strengthened the anti-alicornism side significantly. Anti-alicornism arguments are still acceptable.
That's an excellent backup plan. Fortunately, with all the other replies in this thread, I'm unlikely to need a backup plan. That said, for the purposes of strengthening both sides, I'm likely to look for arguments to strengthen alicornism at some stage, and if that makes alicornism too powerful, I shall consider your idea as a way to bring parity back to the sides.
"If you are Christian, then you probably know the Bible in detail, you are probably familiar with a range of theological and apologetic texts"
I'll admit I don't have any statistics here, but from what I've seen and heard, both first-hand and second-hand, Christians tend to be quite poor on average at knowing the Bible. I've never heard any evidence suggesting the average Christian has a detailed knowledge of the contents of the Bible, even if the kind of Christians who like to argue Christianity are more informed than most. (Similarly, argumentative atheists tend to have a better knowledge of the atheistic arguments than the average atheist.)
It's not even really about magical power. Within the world, it's about political power, and the fact that the alicorns are royalty. In reality, it's about the nature of the fanfiction. Much of the fanfiction is about the discussion and debate between the four princesses of Equestria. Therefore, any pony that isn't an alicorn tends to fade into the background a bit, taking the role of a driving force on the main characters. The main power that the alicorns have is the literary device of being major characters.
I spent two minutes arguing about why Discord w...
That...is actually pretty brilliant. I was originally going to have Celestia be opposed to the idea of alicornification, but I may have Celestia change her mind to this. Cadence has the view of "We should make absolutely sure we've concluded things will work before proceeding", which is likely to take decades, but not millennia. Twilight starts out with the view of "We should start right now, why the hell are we even hesitating?"
This is partly because of the big red flag of having the protagonist share my personal beliefs. In this fanfic...
Fortunately, I now have enough arguments against alicornification to turn the fanfic into a good fight while still having the world the way I originally envisioned it. I doubt many people are going to say I'm making it too easy, what with all the arguments about social pressure, overpopulation, and potential for magical abuse. Plus, I'm adding something that we don't see often enough: At the beginning of the fic, the protagonist is simply WRONG. Twilight's belief is "We should charge ahead and turn everypony into alicorns as quickly as possible" ...
Definitely possible. After all, I'm not going to ignore technical constraints. I just don't intend to invent them. Hell, I don't have to. The problem is hard enough as it is. (For example, overpopulation is a very difficult technical constraint, and it arises naturally from the logic inherent to the canon setting.)
I don't intend to write it in the fashion described (i.e., a largely linear story where Twilight and friends solve various technical constraints of alicornification in turn, being rewarded with immortality each time, until there aren't any left) b...
For the purpose of this fanfiction, Celestia is able to uplift ponies to alicornhood at a significantly higher rate than she currently does, and other alicorns can either cast the spell or learn to cast it. So logistically, it's possible to increase alicornification at an exponential rate. Call it somewhere between 6 and 12 casts a year, for now: the exact rate isn't all that important. What's important is that it can be done, which means the arguments then shift to "Should it be done?"
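The exponential claim is easy to check with a back-of-the-envelope sketch. The figures here are assumptions for illustration only: a starting population of four alicorns (as in the show's fourth season) and the lower bound of 6 casts per alicorn per year mentioned above:

```python
# Illustrative sketch of compound alicorn growth, assuming every alicorn
# can cast the transformation. The starting count (4) and per-alicorn rate
# (6 casts/year) are assumptions for illustration, not canon figures.

def alicorn_population(start=4, casts_per_year=6, years=5):
    """Yearly population counts if each alicorn uplifts `casts_per_year` ponies."""
    population = start
    history = [population]
    for _ in range(years):
        population += population * casts_per_year  # every new alicorn also casts
        history.append(population)
    return history

print(alicorn_population())  # [4, 28, 196, 1372, 9604, 67228]
```

At that rate the population multiplies sevenfold each year, so even the low-end casting rate turns a handful of alicorns into tens of thousands within five years, which is why the question shifts from "can it be done?" to "should it be done?".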
As for the power vs. safety thing, I agree, that's definitely true, but what I was aski...
I'll have to check out that comic if the Chrysalis argument comes up, I suppose.
I'm not sure what you mean by trying to exploit closed-loop time travel through travelling to the future. Do you mean using future sight to see a desirable future and then trying to get there?
As for time travel, in this particular fic, my best answer is simply "Hell no." If it comes up, Twilight and the alicorns can simply decide it's a Really Bad Idea to use it, and they're right, since nobody actually understands how the hell it works, because it violates causality ...
Well, this argument of mine was made before you pointed out the priority-based nature of magic in the show, based on the idea that more alicorns actually equals reduction of existential risk via the villain of the week. That particular argument is much weaker now.
If one doesn't have a need to increase the alicorn numbers in order to protect Equestria, then you're right. The bar should, in fact, be set extremely high. Even Cadence, the alicorn of love of all things, has tremendous power. She basically has the ability to mind-control ponies, and she can send...
Well that's a relief, considering that neither of them overlaps with what I want to do by as much as I feared. There's definitely room for this fanfiction to be unique. I hadn't thought of Friendship is Optimal as being about transpony/transhumanism, and being more of an AI story, but the theme is definitely there, I agree.
So, given that we have two stories currently, and three constitutes a genre, that means that the entire existence of a genre is now dependent on me writing this :P But no pressure, right?
How do we decide who to give mansions? Especially if the "finite" is real and it never comes back. Then you can virtually always make an argument for waiting. When you're literally immortal, there's no such thing as the perfect time to use an irreplaceable resource. If you wait long enough, it's basically a lock that somepony better will come along, if not this millennium then maybe the next one.
As for eugenics: Dragons take up a hell of a lot more space than ponies.
Yeah, I'd rather not add hard technical constraints. Simply put, it ruins the entire story I have in mind. A story about the emotions of accepting the mortality of one's friends isn't a bad idea for a fanfiction, and I'm sure there'll be plenty of them, but it's simply not what I want to write.
Interestingly, the destiny thing has been something I'd thought about in the past. I thought about an idea for a short fanfiction designed to teach some of the basics of rationality, wherein Twilight was totally clueless about how to fix Starswirl's spell in the Season 3 finale. Twilight would be forced to learn the basics of rationality in some fashion, specifically the portion about mysterious answers, noticing that "destiny" didn't actually ANSWER anything, forcing her to clarify her true answers. By working on that, she discovers the true nat...
I'd say you've got two out of three there. Based on lines Chrysalis says (When she beats Celestia, she says "Ah! Shining Armor's love for you is even stronger than I thought! Consuming it has made me even more powerful than Celestia!"), her power doesn't depend on the magical strength of the pony she's feeding off, it's all about love, and alicorns don't necessarily love any more intensely than other ponies. The changelings would have been more powerful taking on alicorn forms, but it's clear that that isn't enough to win in one-on-one combat: Th...
Thanks for the inspiration for this idea, by the way :) I might not have thought of it if not for Luminosity and Radiance.
And, speaking of which, something I was wondering about: Is your name actually inspired by the alicorns from MLP? Believe it or not, I only thought of the association a few weeks ago, but I wasn't curious enough to PM you about it.
Wait, this is a thing? I've only ever seen one small one-shot that had a transhumanist vibe to it. (Mortality Report) All the other "Reactions to immortality" ones I've seen have been all about how terrible it was. If there's already a few well-written explorations of this exact concept, is there even a good reason to write this one?
Also, does anyone have some links to these, or at least names/authors? Whether my writing this fanfiction is still worth doing or not, getting more ideas is unlikely to be a bad thing.
I was referring to the concept as...
A large amount of the things you mention become less dangerous in the event of greater alicorn presence in Equestria, not more. Nightmare Moon, Discord and Chrysalis ALL almost won, and if even just a few dozen alicorns had existed, they wouldn't have stood a chance in hell.
Now, the whole existential risk angle...is a very interesting point, since based on what I've just argued, the logical meeting-ground between the two would be to have a task force of alicorns, say, at least a dozen, but no more than a hundred, all comprised of ponies Celestia trusted su...
all comprised of ponies Celestia trusted sufficiently
You're a god. You've got the ability to make other gods. You've got literally a million years to find people trustworthy enough. A single failure is a possible extinction event, and that nearly happened once already. How high do you set the bar for 'trust sufficiently'?
She's already working on the problem (and communicating with other alicorns about it, as seen at the end of S3Ep2). She's increased the number by two within the last couple decades or so (Twilight, and I assume Cadance is young). ...
Good point. It also makes Celestia look like a much more credible character. One of my biggest problems was "Why the hell hasn't Celestia come up with this solution a thousand years ago?" and by making it genuinely really difficult to make the mass alicornification work properly, I can come up with a plausible answer for this that isn't "Celestia isn't rational."
For what it's worth, I think I'm going to keep the particular thing you quoted, because I think it makes significantly more logical sense for alicorns, which are supposed to em...
That's...an interesting point. I never actually thought that Celestia and Luna could move the celestial bodies because they were alicorns. I always just thought they could move them because it was their special talents, and it was unique magic they could do because of their knowledge or talent, not because only they had the brute force to do it. After all, the only fight Celestia was ever in canonically, she lost, and it wasn't even all that climactic either.
In the event that all alicorns have royal-sister levels of power (Again, my assumptions have blinde...
Then I choose the torture. I've grown a bit more comfortable with overriding intuition in regards to extremely large numbers since my original reply 3 months ago.
You might be right. I'll have to think about this, and reconsider my stance. One billion is obviously far less than 3^^^3, but you are right in that the 10 million dollars stolen by you would be preferable to me over the 100,000 dollars stolen by Eliezer. I also consider losing 100,000 dollars less than or equal to 100,000 times as bad as losing one dollar. This indicates one of two things:
A) My utility system is deeply flawed. B) My utility system includes some sort of 'diffusion factor' wherein a disutility of X becomes <X when divided among several ...
Actually, I ended up resolving this at some point. I would in fact pick the dust specks in this case, because the situations aren't identical. I'd spend a lot of time in my 3^^^3 lives worrying if I'm going to start being tortured for 50 years, but I wouldn't worry about the dust specks. Technically, the disutility of the dust specks is worse, but my brain can't comprehend the number "3^^^3", so it would worry more about the torture happening to me. Adding in the disutility of worrying about the torture, even a small amount, across 3^^^3 / 2 lives, makes it clear that I should pick the dust specks for myself in this situation, regardless of whether or not I choose torture in the original problem.
Ben Jones didn't recognise the dust speck as "trivial" on his torture scale, he identified it as "zero". There is a difference: if dust speck disutility is equal to zero, you shouldn't pay one cent to save 3^^^3 people from it. 0 × 3^^^3 = 0, and the disutility of losing one cent is non-zero. If you assign an epsilon of disutility to a dust speck, then 3^^^3 × epsilon is way more than 1 person suffering 50 years of torture. For all intents and purposes, 3^^^3 = infinity. The only way that infinity × X can be preferable to a finite disutility is if X is equal to 0. If X = 0.00000001, then torture is preferable to dust specks.
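The zero-vs-epsilon arithmetic can be sketched directly. 3^^^3 itself is far too large to represent, so a stand-in huge number is used, and the disutility figures are made up purely for illustration:

```python
# Illustrative sketch of the zero-vs-epsilon argument. 3^^^3 cannot be
# computed, so N is a stand-in huge number; the disutility figures are
# invented for illustration only.

N = 10 ** 100                  # stand-in for 3^^^3
torture_disutility = 10 ** 12  # arbitrary large-but-finite figure

# If a dust speck has exactly zero disutility, no multiplier matters:
speck_total = 0 * N
print(speck_total < torture_disutility)  # True: prefer the specks

# If a dust speck has any nonzero epsilon of disutility, N swamps torture:
epsilon = 10 ** -8
speck_total = epsilon * N
print(speck_total > torture_disutility)  # True: prefer the torture
```

The point is that the choice flips entirely on whether speck disutility is exactly zero or merely tiny; no finite torture figure can outweigh a nonzero epsilon multiplied by a sufficiently vast N.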
Well, he didn't actually identify dust mote disutility as zero; he says that dust motes register as zero on his torture scale. He goes on to mention that torture isn't on his dust-mote scale, so he isn't just using "torture scale" as a synonym for "disutility scale"; rather, he is emphasizing that there is more than just a single "(dis)utility scale" involved. I believe his contention is that the events (torture and dust-mote-in-the-eye) are fundamentally different in terms of "how the mind experiences and deals with [the...
I believe this lesson is designed for crisis situations where the wiser person taking the time to explain could be detrimental. For example, a soldier believes his commander is smarter than him and possesses more information than he does. The commander orders him to do something in an emergency situation that appears stupid from his perspective, but he does it anyway, because he chooses to trust his commander's judgement over his own.
Under normal circumstances, there is of course no reason why a subordinate shouldn't be encouraged to ask why they're doing something.
I'm not sure that's the real reason a soldier, or someone in a similar position, should obey their leader. In circumstances that rely on a group of individuals behaving coherently, it is often more important that they work together than that they work in the optimal way. That is, action is coordinated by assigning one person to make the decision. Even if this person is not the smartest or best informed in the situation, the results achieved by following orders are likely to be better than by each individual doing what they personally think is best.
In less ...