All of Gondolinian's Comments + Replies

I knew about melatonin and red/blue light from reading people in this community. I also had a vague understanding that circadian rhythms controlled falling asleep and were based on light, but I don't think I'd seen things spelled out as clearly as they are here. Thank you for putting this together and I do look forward to the rest of your series.

Didn't see the original, but it looks good now.

(The line spacing got all wacko, sorry about that)

That prompted me to look up how to make line breaks in Markdown syntax, which I'd been wondering about myself for a while.

Try typing two or more spaces and then hitting enter
whenever you want a new line.
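For example (an illustration, with "·" standing in for the trailing spaces, which are otherwise invisible):

```
Roses are red··
Violets are blue
```

With the two trailing spaces, Markdown renders this as two lines; without them, both lines collapse into a single paragraph.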

7Alex_Miller
Thanks; I fixed it up now!

Thanks for responding. Unfortunately I think that guide only works for comments? Or at least, it only works for Markdown syntax. Do you know of any way to put Markdown syntax in a Main or Discussion post?

0Elo
I was going to say try linking to the pollid. After making a poll, its syntax is changed to pollid:number, but I just tried that on a draft post and it doesn't seem to want to do it. Maybe knowing the HTML for it will allow it... I am out of ideas. Sorry.

Sorry for the late reply, and thanks for the offer! Unfortunately I wasn't actually talking about doing it myself, just putting it out there as an idea. Good luck though; it sounds like a valuable thing for the rationality community to have.

[This comment is no longer endorsed by its author]

Improbable idea for surviving heat death: computers made from time crystals. (h/t Nostalgebraist and Scott)

0Lumifer
From that set of solutions I prefer the good old-fashioned elven magic X-D
2turchin
I have another roadmap, "how to survive the end of the universe", and one of the ideas there is geometric computers. But thanks for the links. The map in the OP is about x-risks in the approximately near future, like the next 100 years.

Keep the same upvote/downvote system for individual comments and posts, but don't keep a total karma score for each user on their user page. Alternatively, keep a total karma score, but don't keep a total percent-positive score. (I believe the EA forum uses the former system, and Reddit the latter, but please correct me if I'm wrong about this.) [pollid:1006]

True, but that's only from a very limited number of sources (~4?); it doesn't include the dozens of smaller blogs. It's also a straight feed--no filtering out of housekeeping, meta posts, etc.--and it only shows 5 links which are quickly pushed aside by newer ones, while a section for links would keep all of them accessible and searchable.

Ideas from recent discussion regarding changes to the Promotion system:

[in progress]

Have Promotion be based on some kind of popular vote (not necessarily karma) or some other kind of community decision, instead of an Editor's decision. [pollid:1004]

Allow posts from Discussion to be Promoted without having to first be moved to Main.

This is already possible by logging into the Username account and sending a message or reply from there, but we could do something to make it more convenient. Thanks for the idea.

ETA: One possible issue I see with this is that the anonymity might encourage people to be meaner than they would be when posting/messaging under their main account, but perhaps there are ways around this?

[pollid:1000]

Ideas from recent discussion regarding changes to the karma system and/or addition of new sections:

[in progress]

Make downvotes cost some of the downvoter's karma. (h/t RichardKennaway and Houshalter) [pollid:997]

Only allow users with a certain amount of karma to downvote. (The actual amount can be worked out later.) (h/t ete and Houshalter) [pollid:998]

Create a new section, separate from Main and Discussion, with either no karma, like the SSC discussions, or only upvotes, like Tumblr, Facebook, and other social media services used by rationalists. [... (read more)

0Gondolinian
Keep the same upvote/downvote system for individual comments and posts, but don't keep a total karma score for each user on their user page. Alternatively, keep a total karma score, but don't keep a total percent-positive score. (I believe the EA forum uses the former system, and Reddit the latter, but please correct me if I'm wrong about this.) [pollid:1006]
0RyanCarey
There already is 'recently on rationality blogs'

On second thought, I'll risk it. (I might post a comment to it with a compilation of my ideas and my favorites of others' ideas, but it might take me a while.)

Good point, thanks. I was already not a fan of the way the polls made the post look, so I went ahead and took them down. I could replace them with something better, but I think this thread has already gotten most of the attention it's going to get, so I might as well just leave the post as it is.

Would you be willing to run a survey on Discussion also about Main being based on upvotes instead of a mix of self-selection and moderation? As well as all ideas that seem interesting to you that people suggest here?

I'd rather not expose myself to the potential downvotes of a full Discussion post, and I also don't know how to put polls in full posts, only in comments. Nonetheless I am pretty pro-poll in general and I'll try to include more of them with my ideas.

Perhaps official downvote policies messaged to a user the first time they pass that threshold would help too.

Anything with messages could be implemented by a bot account, right? That could be made without having to change the Less Wrong code itself.

Maybe we could send a message to users with guidelines on downvoting every time they downvote something? This would gently discourage heavy and/or poorly reasoned downvoting, likely without doing too much damage to the kind of downvoting we want. One issue with this is it would likely be very difficult or practica... (read more)

0plex
Every time someone downvotes would probably be too much, but maybe the first time? Or, if we restrict downvotes to users with some amount of karma, then when they hit that level of karma?

I do not have the time to engage in the social interactions required to even be aware of where all this posting elsewhere is going on, but I want to read it.

There's a Masterlist for rational Tumblr, but I'm not aware of a complete list of all rationalist blogs across platforms.

Perhaps the Less Wrong community might find it useful to start one? If it were hosted here on LW, it might also reinforce LW's position as a central hub of the rationality community, which is relevant to the OP.

4Evan_Gaensbauer
I have already thought of doing this, and want to do it. I've been neglecting this goal, and I've got lots of other priorities on my plate right now, so I'm not likely to do it alone soon (i.e., by the end of June). If you want me to help you, I will. I may have an "ugh field" around starting this project. Suggestions for undoing any trivial inconveniences you perceive therein are welcome.

It doesn't help that even the most offhand posting is generally treated as if it were an academic paper and reviewed and skewered accordingly :-p.

I agree. There are definitely times for unfiltered criticism, but most people require a feeling of security to be their most creative.

2John_Maxwell
I believe this is referred to as "psychological safety" in the brainstorming literature, for whatever that's worth.

Is anyone in favor of creating a new upvote-only section of LW?

[pollid:988]

0Richard_Kennaway
Another suggestion. Every downvote costs a point of your own karma. You must have positive karma to downvote.
0[anonymous]
Another suggestion: Every downvote costs a point of your own karma.
3Sarunas
Other. I do not think there is a need for a new section. Instead, we could encourage people to use tags (e.g. something like these belief tags) and put disclaimers at the top of their posts. Even though actual tags aren't very easy to notice, we can use "informal tags", such as putting a tag in square brackets. For example, if you want to post your unpolished idea, your post could be titled something like: "A statement of idea [Epistemic state: speculation] [Topic: Something]", "A statement of idea [Epistemic state: possible] [Topic: Something]", or "A statement of idea [Epistemic state: a very rough draft] [Topic: Something]". In addition to that, you could put a disclaimer at the top of your post.

Perhaps such clarity would make it somewhat easier to be more lenient on unpolished ideas. At present, even if a reader can see that the poster intended their post as a rough draft with many flaws, they cannot be sure that the draft being highly upvoted won't be taken by another reader as a sign that the post is correct and flawless (or at least thought to be such by a lot of LWers), thus sending the wrong message. If a poster made it clear that they were merely exploring a curious idea, an interesting untested model, or something that has only a remote possibility of not being not even wrong, a reader would be able to upvote or downvote the post based on what it was trying to achieve, since there would be less need to signal other readers that a post has serious flaws, and therefore should not be believed, if it was already tagged as "unlikely" or something like that.

Perhaps numerical values could be used instead of words to indicate belief status (e.g. [0.3]). There would still be an incentive to tag your posts as "certain" or "highly likely", because they would most likely be treated as having more credibility and thus attract more readers.
2plex
Another approach would be not to allow downvoting to be open to all users. On the Stack Exchange network, for example, you need a certain amount of reputation to downvote someone. I'd bet that a very large majority of the discouraging/unnecessary/harmful downvotes come from users who don't have above, say, 5-15 karma in the last month. Perhaps official downvote policies messaged to a user the first time they pass that threshold would help too. This way involved users can still downvote bad posts, and the bulk of the problem is solved. But it requires technical work, which may be an issue.
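A minimal sketch of how plex's threshold-plus-policy-message proposal might look, in Python. Every name and number here is hypothetical (this is not actual LessWrong code):

```python
DOWNVOTE_KARMA_THRESHOLD = 10  # plex suggests somewhere in the 5-15 range

class User:
    def __init__(self, name, karma_last_month=0):
        self.name = name
        self.karma_last_month = karma_last_month
        self.got_policy_message = False

def can_downvote(user):
    """Only users with enough karma in the last month may downvote."""
    return user.karma_last_month >= DOWNVOTE_KARMA_THRESHOLD

def maybe_send_policy(user, send_message):
    """Send the official downvote policy the first time a user qualifies."""
    if can_downvote(user) and not user.got_policy_message:
        send_message(user, "Official downvote guidelines: ...")
        user.got_policy_message = True

if __name__ == "__main__":
    alice = User("alice", karma_last_month=12)
    maybe_send_policy(alice, lambda u, text: print(f"to {u.name}: {text}"))
```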
Nornagest100

Proposals for making LW upvote-only emerge every few months, most recently during the retributive downvoting fiasco. I said then, and I continue to believe now, that it's a terrible idea.

JMIV is right to say in the ancestor that subtle features of moderation mechanics have outsized effects on community culture; I even agree with him that Eliezer voiced an unrealistically rosy view of the downvote in "Well-Kept Gardens". But upvote-only systems have their own pitfalls, and quite severe ones. The reasons behind them are somewhat complex, but boi... (read more)

2diegocaleiro
Would you be willing to run a survey on Discussion also about Main being based on upvotes instead of a mix of self-selection and moderation? As well as all ideas that seem interesting to you that people suggest here? There could be a research section, an Upvoted section and a discussion section, where the research section is also displayed within the upvoted, trending one.

A few tangential ideas off the top of my head:

If the moderation and self-selection of Main were changed into something that attracts those who have been on LW for a long time, and Discussion was changed to something like a Newcomers' Discussion, LW could go back to being the main space, with a two-tier system (maybe one modulated by karma as well).

  1. People have been proposing for a while that we create a third section of LW for open threads and similar content.

  2. We could have a section without any karma scores for posts/upvote only, though we could still ke

... (read more)
2Jiro
Those are all phrased as "do you agree that people are saying X" or "do you agree that we could X" rather than "is X a good idea".

I don't think this is how it works with people. Especially ones with full 'net access.

You're right; that was poorly phrased. I meant that they would have a lot less tying them down to the mainstream, like heavy schoolwork, expectations to get a good job, etc. Speaking from my own experience, not having those makes a huge difference in what ideas you're able to take seriously.

The Internet exposes one to many ideas, but 99% of them are nonsense, and smart people with the freedom to think about the things they want to think about eventually become pretty... (read more)

0Lumifer
I am confused as to why you think it's a good thing. You're basically trying to increase the variance of outcomes. I have no idea why you think this variance will go precisely in the direction you want. For all I know you'll grow a collection of very very smart sociopaths. Or maybe wireheads. Or prophets of a new religion. Or something else entirely.

If we have 200-300 years before a well-proven catastrophe, this technique may work.

If you're talking about significant population changes in IQ, then I agree, it would take a while to make that happen with only reproduction incentives. However, I was thinking more along the lines of just having a few thousand or tens of thousands more >145-IQ people than we would have otherwise, and that could be achieved in as little as one or two generations (< 50 years) if the program were successful enough.

Now for a slightly crazier idea. (Again, I'm just thinkin... (read more)

2Lumifer
I don't think this is how it works with people. Especially smart ones with full 'net access.

Do martial arts training until you get the falling more or less right. While this might be helpful against muggers, the main benefit is the reduced probability of injury in various unfortunate situations.

As someone with ~3 years of aikido experience, I second this.

What's the easiest way to put a poll in a top-level article?

0Elo
click the "poll help" in "show help" guide is there I think. http://wiki.lesswrong.com/wiki/Comment_formatting#Polls

Thanks for taking the time to put all that together! I'll keep it in mind.

In the interest of helping to bridge the inferential distance of others reading this, here's a link to the wiki page for Oracle AI.

Thanks; I've put a library request in for it, though it'll probably be a few months until I get it.

What Is Mathematics? was the only one I was able to find from a local library. I've put a request in for it and I should be getting it soon. Thanks for the recommendation; if it helps me to not hate math then I might be able to do something actually useful for existential risk reduction.

2Vladimir_Nesov
These are available on Library Genesis. Also, "What is Mathematics?" is more serious than the other two. "The Shape of Space" is probably the easiest and most fun, and "The Enjoyment of Math" is a collection of almost completely independent small pieces that don't assume any background, but some of them are a bit involved for something that doesn't assume any background.

(I know there are almost certainly problems with what I'm about to suggest, but I just thought I'd put it out there. I welcome corrections and constructive criticisms.)

You mention gene therapy to produce high-IQ people, but if that turns out not to be practical, or if we want to get started before we have the technology, couldn't we achieve the same through reproduction incentives? For example, paying and encouraging male geniuses to donate lots of sperm, and paying and encouraging lots of gifted-level or higher women to donate eggs (men can donate sper... (read more)

4turchin
If we have 200-300 years before a well-proven catastrophe, this technique may work. But on a 10-50 year timescale, it is better to search for good, clever students and pay them to work on x-risks.

I shouldn't have phrased that so confidently; I was essentially just thinking out loud. Would anyone who knows more about decision theory mind explaining where I went wrong?

2Gram_Stone
I don't know a lot about decision theory, and I have to guess at an unidentified person's thought process, but I'll take a swing at this:

* They might have downvoted you because real-world deterrence involves causal influence. Usually where there's talk of precommitments, there's also talk of acausal trade, so I think your brain lumped them together, but agents can precommit in ordinary trade as well. However, it is true that you can analyze ordinary trade in acausal terms, and it seems that you have done this. So your words implied that deterrence doesn't involve any causal effects, which is false, but you really just wanted to point out that you can analyze ordinary trade acausally, which is true.
* They might have downvoted you because they think it's silly to talk about provably cooperative humans, or further, because they find it objectionable that your ideas about decision theory and provable cooperation would lead you to what they consider a morally repugnant conclusion in your counterfactual (i.e. retaliate anyway); I think I've seen some people who think things like that. I do consider this a lot less likely than the first possibility.
* They might have downvoted you because the agent in the downvoter's simulation of your counterfactual was using causal decision theory.

Also, I don't really get what the point would be if deterrence were acausal. Were you thinking something like, "Deterrence is acausal, therefore maybe tense is not a concept that we can even apply to deterrence."?
5Epictetus
It's not wrong. In many contexts such a strategy is advisable. It's the theory behind mutually assured destruction: you personally aren't going to benefit from launching a retaliatory nuclear strike, but the knowledge that you'd do it anyway might just keep your enemy from launching a first strike. On a smaller scale, you can see this sort of thing going on in prisons and criminal organizations, where appearing weak can turn you into a target.

One drawback is that while a reputation for retaliating against every wrong will make people less likely to wrong you, those who decide to wrong you anyway will make sure to leave you in no position to retaliate. Another drawback occurs if you allow for miscommunication: retaliation against something you wrongfully thought was a defection can lead your opponent to retaliate against what he perceives as an unprovoked attack.

It all depends on the situation. Sometimes it's better to be more forgiving and sometimes it's better to be more vindictive.
5VoiceOfRa
You weren't. A single downvote doesn't mean you're wrong.
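Since this thread turns on deterrence and precommitment, here is a toy expected-value version of that logic, with purely illustrative numbers (nothing in the thread specifies a model):

```python
def defection_pays(gain, retaliation_loss, p_retaliate):
    """The would-be defector defects iff their expected payoff is positive."""
    return gain - p_retaliate * retaliation_loss > 0

# If retaliation looks unlikely, defection pays:
print(defection_pays(gain=5, retaliation_loss=10, p_retaliate=0.2))   # True

# A credible precommitment to retaliate -- even when retaliating no longer
# benefits the retaliator -- raises p_retaliate and deters the defection:
print(defection_pays(gain=5, retaliation_loss=10, p_retaliate=0.9))   # False
```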

Deterrent effects would fall under "things present and to come".

Fair enough, but there's also a sense in which deterrence is acausal. In order to make a truly credible threat of retaliation for defection, you have to be completely willing to follow through with the retaliation if they defect, even if, after the defection, following through does not seem to have any future benefits.

[This comment is no longer endorsed by its author]
2Gondolinian
I shouldn't have phrased that so confidently; I was essentially just thinking out loud. Would anyone who knows more about decision theory mind explaining where I went wrong?

I have found that when you are like 16, you often want everything to be super logical and everything that is not feels stupid. And growing up largely means accepting "common sense", which at the end of the day means relying more on pattern recognition.

For a counterexample, I am 16 and almost all my decisions/perceptions are based on implicit pattern recognition more than explicit reasoning.

ETA: I think I missed your point.

[This comment is no longer endorsed by its author]
2[anonymous]
My point is that I was like this guy; you probably aren't.

That may be the case now, but a part of my brain is certain that in the past downvotes did have a significant effect on ordering. Like, if a 10-point comment got one downvote, it would fall below a 6-point comment without any downvotes. Feelings of certainty are of course very unreliable, but I don't see any obvious reasons why this one is wrong.

0ChristianKl
Maybe you used "Popular" or "Top" as the ordering criterion in the past?

Is it just me/my browser, or has something changed in the Less Wrong code regarding the "Best" comment ordering? For example, it seemed like before if there were a bunch of 0% positive comments and a 50% positive comment, then the latter would almost always be at the bottom, but now I'm seeing them and even negative karma posts above or between neutral or positive karma posts. Has anyone else noticed this?

0OrphanWilde
AFAICT, "Best" is ordered strictly by the number of upvotes, and isn't tempered by the number of downvotes. What shows up seems to vary by what's currently trending (varying by your configured comment window), rather than changes to the logic.

Good luck and I wish you the best! You're one of the people I most aspire to be like.

Seconded! I've read through your posts on how you got to where you are today, and it's very inspiring.

On that note, I wish that more successful people broke down how they got to where they are today (or maybe they do and I don't know about it? or maybe I just particularly identified with your story and some sort of affect heuristic is influencing me here?).

You don't have to be angry (and it is probably better if you aren't), but deterrents are still a thing.

4Epictetus
Deterrent effects would fall under "things present and to come". If you expect some kind of future benefit from a retaliatory act, that's one thing. On the other hand, if you seek vengeance because you're outraged that someone would dare wrong you, then you're mentally living in the past.

I wonder if a movie with an AI box-based story would have any potential? Perhaps something treated as more of a psychological horror/thriller than as a guns-and-explosions action movie might help to distance people's intuitions from "AI is 'The Terminator', right?"

2Evan_Gaensbauer
Re: Sly's suggestion of 'Ex Machina': Rob Bensinger, who works for MIRI, apparently saw it with the MIRI staff, and they gave it a stamp of approval for being "pretty good", if their opinions on the subject are worth something to you.
Sly100

Watch Ex Machina. This is pretty close to what you are talking about, and I thought it was well done.

It appears that MetaMed has gone out of business. Wikipedia uses the past tense "was" in their page for MetaMed, and provides this as a source for it.

Key quote from the article:

Tallinn learned the importance of feedback loops himself the hard way, after seeing the demise of one of his startups, medical consulting firm Metamed.

9Larks
It would be nice if people were open when their startups close, especially when previously advertised on LW, so we can learn from mistakes. Or is there some reason to not admit a startup has failed?

Oh wow, I tried logging in to it and the reply buttons on existing comments weren't even there. Thanks for providing the account and pointing this out.

Two hypotheses come to mind: firstly, as estimator said, perhaps you missed something about the email confirmation when you set up the account (same for Halfwitz), and secondly, maybe it's an IP address thing intended to discourage throwaway accounts?

I'm not sure I accept your premises? I could certainly be wrong, but I have not gotten the impression that comments can be prevented by low karma, only posts to Discussion or Main. (And I recall the minimum as 20, not 2.*) The most obvious way to get the karma needed to post is by commenting on existing posts (including open threads and welcome threads), and new users with zero initial karma regularly do this without any apparent difficulty, so unless I'm missing something, I don't think it's a problem?

*ETA: It seems that 2 is the minimum for Discussion, while 20 is the minimum for Main.

6zedzed
Created throwaway, couldn't comment. (So as to not propagate throwaways testing this, account is less_than_2, and the password is 123456)

Well, for a start there's the Rationalist Masterlist currently hosted by Yxoque (MathiasZaman here on LW). You could announce your presence there and ask to be added to the list, or just lurk around some of the blogs for a while and send anonymous asks to people to get a feel for the community before you set up an account.

2[anonymous]
Thanks!

If he could establish such a reputation as a consultant for commercial organizations, I imagine he could make quite a lot of money to do EA with.

When I try to convince people like Scott that they're actually very good at math, they often say "No, you don't understand, I'm really bad at math, you're overestimating my mathematical ability because of my writing ability." To which my response is "I know you think that, I've seen many people in your rough direction who think that they're really bad at math, and say that I don't understand how bad they are, and they're almost always wrong: they almost never know that what they were having trouble with wasn't representative of math."

... (read more)
7Vladimir_Nesov
I would recommend trying these books (at high school level or earlier, depending on when it becomes possible to follow them):

* H. Rademacher & O. Toeplitz (1967). The Enjoyment of Math.
* J. R. Weeks (2001). The Shape of Space.
* R. Courant & H. Robbins (1996). What Is Mathematics?

There is a not necessarily large, but definitely significant, chance that developing machine intelligence compatible with human values may very well be the single most important thing that humans have ever done or will ever do, and it seems very likely that economic forces will make strong machine intelligence happen soon, even if we're not ready for it.

So I have two questions about this: firstly, and this is probably my youthful inexperience talking (a big part of why I'm posting this here), but I see so many rationalists do so much awesome work on things like socia... (read more)

2Gram_Stone
To elaborate on existing comments, a fourth alternative to FAI theory, Earning To Give, and popularization is strategy research. (That could include research on other risks besides AI.) I find that the fruit in this area is not merely low-hanging but rotting on the ground. I've read in old comment threads that Eliezer and Carl Shulman in particular have done a lot of thinking about strategy but very little of it has been written down, and they are very busy people. Circumstances may well dictate retracing a lot of their steps.

You've said elsewhere that you have a low estimate of your innate mathematical ability, which would preclude FAI research, but presumably strategy research would require lower aptitude. Things like statistics would be invaluable, but strategy research would also involve a lot of comparatively less technical work, like historical and philosophical analysis, experiments and surveys, literature reviews, lots and lots of reading, etc. Also, you've already done a bit of strategizing; if you are fulfilled by thinking about those things and you think your abilities meet the task, then it might be a good alternative.

Some strategy research resources:

* Luke Muehlhauser's How to study superintelligence strategy.
* Luke's AI Risk and Opportunity: A Strategic Analysis sequence.
* The Analysis archives of the MIRI blog.
* The AI Impacts blog, particularly the Possible Empirical Investigations post and links therein.
* The Future of Life Institute's A survey of research questions for robust and beneficial AI.
* Naturally, Bostrom's Superintelligence.
2Capla
The first thing you should do is talk to the people who are already involved in this. CFAR seems to be the gateway for many people (at least, it was for me).
6MarsColony_in10years
I don't agree with this particular argument, but I'll mention it anyway for the sake of having a devil's advocate: The number of lives lost to an extinction event is arguably capped at ~10 billion, or whatever Earth's carrying capacity is. If you think the AI risk is enough generations out, then it may well be possible to do more good by, say, eliminating poverty faster. A simple mathematical model would suggest that if the singularity is 10 generations away, and Earth will have a constant population of ~10 billion, then 100 billion lives will pass between now and the singularity. A 10% increase in humanity's average quality of life over that period would be morally equivalent to stopping the singularity.

Now, there are a host of problems with the above argument:

* First, it is trying to minimize death rather than maximize life. If you set out to maximize the number of Quality Adjusted Life Years that intelligent life accumulates before its extinction, then you should also take into account all of the potential future lives which would be extinguished by an extinction event, rather than just the lives taken by the event itself.
* Second, the Future of Humanity Institute has conducted an informal survey of existential risk researchers, asking for estimates of the probability of human extinction in the next 100 years. The median result (not mean, so as to minimize the impact of outliers) was ~19%. If that's a ~20% chance each century, then we can expect humanity to last perhaps 2 or 3 centuries (i.e., that's the half-life of a technological civilization). Even 300 years is only maybe 4 or 5 generations, so perhaps 50 billion lives could be affected by eliminating poverty now. Using the same simplistic model as before, that would require a 20% increase in humanity's average quality of life to be morally equivalent to ~10 billion deaths. That's a harder target to hit, but it may be even harder still if you consider that poverty is likely to be nearly eliminated in
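Checking the arithmetic of that toy model explicitly (all numbers are the comment's own assumptions, not data):

```python
population = 10e9  # assumed constant at ~10 billion

def equivalent_qol_gain(generations):
    """Fractional quality-of-life increase, spread over every life lived
    before the singularity, that equals extinction's ~10 billion deaths
    in life-equivalents under this admittedly simplistic model."""
    lives_until_singularity = generations * population
    return population / lives_until_singularity

print(equivalent_qol_gain(10))  # 0.1 -> 10% if the singularity is 10 generations out
print(equivalent_qol_gain(5))   # 0.2 -> 20% under the ~4-5 generation survival estimate
```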
9Viliam
Despite all the talking about rationality, we are still humans with all the typical human flaws. Also, it is not obvious which way the research needs to go. Even if we had unlimited and infinitely fast processing power, and could solve mathematically all kinds of problems related to Löb's theorem, I still would have no idea how we could start transferring human values to the AI, considering that even humans don't understand themselves, and ideas like "AI should find a way to make humans smile" can lead to horrible outcomes. So maybe the first step would be to upload some humans and give them more processing power, but humans can also be horrible (and the horrible ones are actually more likely to seize such power), and the changes caused by uploading could make even nice people go insane.

So, what is the obvious next step, other than donating some money to the research, which will most likely conclude that further research is needed? I don't want to discourage anyone who donates or does the research; I'm just saying that the situation with the research is frustrating in its lack of feedback. On a scale where 0 is the first electronic computer and 100 is the Friendly AI, are we at least at point 1? If we happen to be there, how would we know it?
dxu170

how can it be that the ultimate fate of humanity, to either thrive beyond imagination or perish utterly, may rest on our actions in this century, and yet people who recognize this possibility don't do everything they can to make it go the way we need it to?

Well, ChaosMote already gave part of the answer, but another reason is the idea of comparative advantage. Normally I'd bring up someone like Scott Alexander/Yvain as an example (since he's repeatedly claimed he's not good at math and blogs more about politics/general rationality than about AI), but this... (read more)

ChaosMote130

To address your first question: this has to do with scope insensitivity, hyperbolic discounting, and other related biases. To put it bluntly, most humans are actually pretty bad at maximizing expected utility. For example, when I first heard about x-risk, my thought process was definitely not "humanity might be wiped out - that's IMPORTANT. I need to devote energy to this." It was more along the lines of "huh; that's interesting. Tragic, even. Oh well; moving on..."

Basically, we don't care much about what happens in the distant future, ... (read more)

[META]

MrMind, I'm just letting you know that I'd be happy to take over posting the majority of OTs in case you ever want a break. To be honest, I'm interested in the karma, but I can get karma elsewhere, and I definitely don't want to feel like I'm taking yours. (Also, it was rude of me before to post OTs without checking with you.) If you think our preferences are approximately equal, perhaps we could try posting on alternate weeks or something like that?

I know a lot of you probably aren't all that interested in mainstream television, but I've noticed something in the 8th series of Doctor Who which might be somewhat relevant here. It seems the new Twelfth Doctor has a sort of Shut Up and Multiply utilitarian attitude. There have been several instances in the 8th series where he is faced with something like the fat man variation of the trolley problem and actually pushes the metaphorical fat man, even in situations that are less clear cut than the original problem. This might represent a step in the righ... (read more)

0[anonymous]
Perhaps he just manipulates Clara into being a person who cares a lot about the living beings she happens to interact with, but can still make uncomfortable choices. This would be useful for him, since she is supposed to save his lives over and over somewhere in time. He could easily "cheat" and look at what consequences a given choice would have, since he has a time machine and a lot of spare time. Or his basic values are alien to some of us.
Jiro110

I'd argue the opposite. The writer is so opposed to the idea of moral reasoning that he thinks that no normal human being would ever use it. However, he's trying to make the Doctor look alien. Something that nobody would ever do, but has a plausible-sounding justification, is ideal to show that the Doctor is an alien.

Also, this explains why the show is so inconsistent on such things. The right thing to do when the moon is a giant egg and hatching has a chance of destroying the Earth is to kill it. It's one life against (billions * probability of the w... (read more)

When reading this chapter I couldn't help but think of another parallel in that Hermione is going to be the one-person special ops division of a world-saving conspiracy.

Well, hopefully this conspiracy will be a lot more effective and ethical.

1hairyfigment
Is it comforting to note that, due to future knowledge in the hands of the wrong entity, that other one was forbidden a certain level of competency? (Something far above that level might have been possible.)

I hope the Epilogue will feature Hermione in action. It sounds like she'll be perfect as a light side Dragon (TV Tropes) for Harry, as well as simply awesome in her own right.

I'm struck with Dumbledore's ruthlessness.

Pretend to kill someone to keep your enemies in line, but really just stash them away to be used as a trump card again later, whether as a hostage or a way to reconcile with your enemy. That's good.

I'm not sure I'd call Dumbledore "ruthless" just for this. While there might very well have been pragmatic benefits to hiding Narcissa instead of actually killing her that Dumbledore took into account, that's not at all incompatible with a simple desire to not cause an unnecessary death.

-6polymathwannabe