If you have "something to protect", if your desire to be rational is driven by something outside of itself, what is the point of having a secret identity? If each student has that something, each student has a reason to learn to be rational -- outside of having their own rationality dojo someday -- and we manage to dodge that particular failure mode. Is having a secret identity a particular way we could guarantee that each rationality instructor has "something to protect"?
It seems that in order to get Archimedes to make a discovery that won't be widely accepted for hundreds of years, you yourself have to make a discovery that won't be widely accepted for hundreds of years; you have to be just as far in the dark as you want Archimedes to be. So talking about plant rights would probably produce something useful on the other end, but only if what you say is honestly new and difficult to think about. If I wanted Archimedes to discover Bayes' theorem, I would need to put someone on the line who is doing mathematics that is hundreds of years ahead of their time, and hope they have a breakthrough.
I think cryonics is a terrible idea, not because I don't want to preserve my brain until the tech required to recreate it digitally or physically is present, but because I don't think cryonics will do the job well. Cremation does the job very, very badly, like trying to preserve data on a hard drive by melting it down with thermite.
Oh, hello. I've posted a couple of times, in a couple of places, and those of you who have spoken with me probably know that I am one: a novice, and two: a bit of a jerk.
I'm trying to work on that last one.
I think cryonics, in its current form, is a terrible idea; I am a (future) mathematician; and I am otherwise divergent from the dominant paradigm here. But I think the rest of that is for me to know, and you to find out.
Bugmaster, I call down hurricanes every day. It never gets boring. Meteorites are a little harder, but I do those on occasion. They aren't quite as fun.
But the angry frogs?
The angry frogs?
Those don't leave a shattered wasteland behind, so you can terrorize people over and over again with them. Just wonderful.
Note: All of the above is complete bull-honkey. I want this to be absolutely clear. 100%, fertilizer-grade, bull-honkey.
That's alright. My humor, in real life, is based entirely on the fact that only I know I'm joking at the time, and the other person won't realize it until three days later, when they spontaneously start laughing for no reason they can safely explain. Is that asinine? Yes. Is it hilarious? Hell, yes. So I apologize. I'll try not to do that.
I think it is important to make a distinction between what our choice is now, while we are here, sitting at a computer screen, unconfronted by Omega, and our choice when actually confronted by Omega. When actually confronted by Omega, your choice has been determined. Take both boxes, take all the money. Right now, sitting in your comfy chair? Take the million-dollar box. In the comfy chair, the counterfactual nature of the thought experiment basically gives you an Outcome Pump. So take the million-dollar box, because if you take the million-dollar box, it's full of a million dollars. But when it actually happens, the situation is different. You aren't in your comfy chair anymore.
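One rough way to see why the comfy-chair answer comes out to one-boxing: assume the usual payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and a predictor that guesses your choice correctly with probability p. The sketch below is just an illustration of that expected-value arithmetic; the function name and the accuracy figures are my own assumptions, not part of the original problem statement.

```python
# Expected-value sketch for Newcomb's problem, assuming the standard payoffs
# and a predictor that foresees your choice correctly with probability p.
# Names and accuracy values here are illustrative assumptions.

def expected_value(one_box: bool, p: float) -> float:
    big, small = 1_000_000, 1_000
    if one_box:
        # The opaque box is full only when the predictor correctly foresaw one-boxing.
        return p * big
    # Two-boxing: you always get the small box; the big box is full only
    # when the predictor wrongly expected you to one-box.
    return small + (1 - p) * big

for p in (0.5, 0.9, 0.99):
    print(p, expected_value(True, p), expected_value(False, p))
```

For any predictor accuracy much above 50%, the pre-committed one-boxer comes out far ahead, which is roughly the "comfy chair" intuition above.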
Okay, so where did those arrows come from? I see how the second graph from the top corresponds to the amount of time a particle would take to bounce off a specific point on the mirror (granting that particles don't really exist and nothing actually bounces). But how does one pull the arrows out of that graph?
I believe I suggested earlier that I don't know what moral theory I hold, because I am not sure of the terminology. So I may, in fact, be a utilitarian, and not know it, because I have not the vocabulary to say so. I asked "At what point is utilitarianism not completely arbitrary?" because I wanted to know more about utilitarianism. That's all.
No-one asked for a general explanation.
The best term I have found, the one that seems to describe the way I evaluate situations the most accurately, is consequentialism. However, that may still be inaccurate. I don't have a fully reliable way to determine what consequentialism entails; all I have is Wikipedia, at the moment.
I tend to just use cost-benefit analysis. I also have a mental, and quite arbitrary, scale of what things I do and don't value, and to what degree, to avoid situations where I am presented with multiple, equally beneficial choices. I al...
I don't agree. The existence of 3^^^3 people, or of 3^^^3 dust specks, is impossible because there isn't enough matter, as you said. The existence of an event whose only effects are tailored to fit a particular person's idea of 'bad' does not fit my model of how causality works. That seems like a worse infraction, to me.
However, all of that is irrelevant, because I answered the more "interesting question" in the comment you quoted. To be blunt, why are we still talking about this?
That is in no way what was said. Also, the idea of an event that somehow manages to have no effect aside from being bad is... insanely contrived. More contrived than the dilemma itself.
However, let's say that instead of 3^^^3 people getting dust in their eye, 3^^^3 people experience a single nanosecond of despair, which is immediately erased from their memory to prevent any psychological damage. If I had a choice between that and torturing a person for 50 years, then I would probably choose the former.
No, I'm pretty sure it makes you notice. It's "enough": "barely enough", but still "enough". However, that doesn't seem to be what's really important. If I consider you to be correct in your interpretation of the dilemma, in that there are no other side effects, then yes, the 3^^^3 people getting dust in their eyes is a much better choice.
I think you enormously overstate the difficulty of lying well, as well as the advantages of honesty.