All of Insert_Idionym_Here's Comments + Replies

I think you enormously overstate the difficulty of lying well, as well as the advantages of honesty.

Already done this to myself -- it lowers your self-esteem enormously.

I used to do exactly this, but I created whole backstories and personalities for my "hats" so that they would be more realistic to other people.

It might be more accurate to say that pretty much everything, including what we call biology and physics -- humans are the ones codifying it -- is memetically selected to be learnable by humans. Not that it all develops towards being easier to learn.

May I ask how many people any of you have seen walking around entirely barefoot, as opposed to wearing minimalist footwear of any kind?

To be perfectly honest, at the time I simply planted my face on the table in front of me a few times. I was at a dinner party with friends of my mother's; I would have sounded extremely condescending otherwise.

1Shmi
Ah yes, status mismatch in a not very rational crowd. Not much you can do there.

The lack of this knowledge got me a nice big "most condescending statement of the day award" in lab a year ago.

I have attempted using this in more casual decision making situations, and the response I get is nearly always something along the lines of "Okay, just let me propose this one solution, we won't get attached to it or anything, just hear me out..."

5Shmi
What do you do in this situation? Let them speak? Ask them to write down their solution, to be discussed later? Oops... Couldn't resist proposing solutions.

One could attempt to fight that by reducing the number or frequency of M&Ms eaten over a long period of time, essentially weaning oneself off of extrinsic rewards.

0Arkanj3l
I think it's still hard to privilege if that kind of effect exists in the first place.

I agree. I think that failure mode might then be better avoided by restricting possible "somethings", as opposed to adding another requirement on to one's reasons for wanting to be rational.

2DaFranker
Yes, but that's an exercise implicitly left to the reader. Formulating it this way is somewhat intuitively easier to understand, and if you've read the other sequences this should be simple enough to reduce to something that pretty much fits (restriction of "things to protect") in beliefspace. Essentially, this article, the way I understand it, mostly points at an "empirical cluster in conceptspace" of possible failure modes, and proposes possible solutions to some of them, so that the reader can deduce and infer the empirical cluster of solutions to those failure modes. The general rule could be put as "Make rationality your best means, but never let it become an end in any way." - though I suspect that I'm making a generalization that's a bit too simplistic here. I've been reading the sequences in jumbled order, and I'm particularly bad at reduction, which is one of the Sequences I haven't finished reading yet.

If you have "something to protect", if your desire to be rational is driven by something outside of itself, what is the point of having a secret identity? If each student has that something, each student has a reason to learn to be rational -- outside of having their own rationality dojo someday -- and we manage to dodge that particular failure mode. Is having a secret identity a particular way we could guarantee that each rationality instructor has "something to protect"?

3Nick_Tarleton
It's very easy to believe that you're being driven by something outside yourself, while primarily being driven by self-image. It's also very easy to incorrectly believe this about someone else.
3DaFranker
Failure mode: My "something to protect" is to spread rationality throughout the world and to raise the sanity waterline, which is best achieved by having my own rationality dojo. Beware the meta.

But don't you want to understand the underlying principles?

Not necessarily; your brain might have the annoying property that understanding a moral principle changes it in such a way that it no longer cares about that principle.

It seems that in order to get Archimedes to make a discovery that won't be widely accepted for hundreds of years, you yourself have to make a discovery that won't be widely accepted for hundreds of years; you have to be just as far in the dark as you want Archimedes to be. So talking about plant rights would probably produce something useful on the other end, but only if what you say is honestly new and difficult to think about. If I wanted Archimedes to discover Bayes' theorem, I would need to put someone on the line who is doing mathematics that is hundreds of years ahead of their time, and hope they have a breakthrough.

6johnlawrenceaspden
I think probability theory would have been very accessible to the Greeks, had they only thought to think about games of chance, which they certainly played. I bet if you'd asked Archimedes 'What odds should you offer on a bet that two dice get seven?', then the whole thing would have come crashing out within a hundred years or so. So you might want to put him in touch with a modern philosopher trying to take a mathematical approach to something mysterious, say Dennett.
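For concreteness, the arithmetic behind that bet is easy to check by brute force. A minimal sketch (my own illustration, not part of the original exchange):

```python
# Enumerate all 36 equally likely outcomes of two fair dice and
# count how many sum to seven (illustration only).
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))
sevens = [pair for pair in outcomes if sum(pair) == 7]

p = len(sevens) / len(outcomes)  # 6/36 = 1/6
print(f"P(sum is 7) = {len(sevens)}/{len(outcomes)} = {p:.4f}")
print(f"Fair odds against: {len(outcomes) - len(sevens)}:{len(sevens)}")  # 30:6, i.e. 5:1
```

Six of the thirty-six outcomes sum to seven, so fair odds against are 5:1; anyone quoting longer odds is losing money in expectation.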

I applaud your fourth paragraph.

I think that perhaps you may be missing the point.

I'm thinking about why I care about why I care about what I'm thinking, and I'm realizing that I have other things that I need to do, and that realization is not helping me get past this moment.

One: I support the above post. I've seen quite a few communities die for that very reason.

Two: Gurren Lagann? (pause) Gurren Lagann? Who the h*ll do you think I am?

I used to live in Ann Arbor, rather recently. I live in Saginaw now.

I believe the point is that we do not know how much more is possible, or what circumstances make that so. As such, we must check, as often as we can, to make absolutely sure that we are still held by our chains.

Feet are for standing, not hands, but that doesn't keep us from admiring the gymnast.

Ah, I see. I just don't think that cryonics significantly improves the chances of actually extending one's life span, which would be similar to saying that democracy is not significantly better than most other political systems.

2soreff
What do you see as the limiting factors?
* The technical ability of current best-case cryonics practice to preserve brain structure?
* The ability of average-case cryonics to do the same?
* The risk of organizational failure?
* The risk of larger scale societal failure?
* Insufficient technical progress?
* Runaway unfriendly AI?
* Something else?

Are you saying that cryonics is not perfect, but it is the best alternative?

I'm not sure I understand your point. I'll read your link a few more times, just to see if I'm missing something, but I don't quite get it now.

2wedrifid
Just referring to the quote:

Ah. Wrong referent. It's hilarious for me, and it may, at some point, be hilarious for them. But it's mostly funny for me. That would be why I took time to mention that it was also, in fact, asinine.

I think cryonics is a terrible idea, not because I don't want to preserve my brain until the tech required to recreate it digitally or physically is present, but because I don't think cryonics will do the job well. Cremation does the job very, very badly, like trying to preserve data on a hard drive by melting it down with thermite.

8wedrifid
This obviously invites the conclusion that cryonics is a terrible idea in the same sense that democracy is the worst form of government.

Oh, hello. I've posted a couple of times, in a couple of places, and those of you who have spoken with me probably know that I am one: a novice, and two: a bit of a jerk.

I'm trying to work on that last one.

I think cryonics, in its current form, is a terrible idea. I am a (future) mathematician, and am otherwise divergent from the dominant paradigm here, but I think the rest of that is for me to know, and you to find out.

3wedrifid
What do you think of cremation in its current form?

Bugmaster, I call down hurricanes every day. It never gets boring. Meteorites are a little harder, but I do those on occasion. They aren't quite as fun.

But the angry frogs?

The angry frogs?

Those don't leave a shattered wasteland behind, so you can just terrorize people over and over again with those. Just wonderful.

Note: All of the above is complete bull-honkey. I want this to be absolutely clear. 100%, fertilizer-grade, bull-honkey.

4Kaj_Sotala
If I had a smartphone, I could call down Angry Birds on people. Well, on pigs at least.

That's alright. My humor, in real life, is based entirely on the fact that only I know I'm joking at the time, and the other person won't realize it until three days later, when they spontaneously start laughing for no reason they can safely explain. Is that asinine? Yes. Is it hilarious? Hell, yes. So I apologize. I'll try not to do that.

0wedrifid
Not especially, unfortunately. There is something to be said for appearing as though you don't give a @#%! whether other people get your humor in real time, but it works best if you care a whole lot about making your humor funny to your audience at the time and then just act like you don't care about the response you get. Even if people get your joke three days later, you still typically end up slightly worse off for the failed transaction.

I am being somewhat ... absurd, and on purpose, at that. But I have enough arrogance lying around in my brain to believe that I can trick the super-intelligence.

0APMason
Sorry - I'm always inclined to take people on the internet literally. I used to mess with my friends using the same kind of ow-my-brain Prisoner's-dilemma somersaults, and still I couldn't recognise a joke.

You aren't doublethinking hard enough, then.

1APMason
I don't know if this is a joke - I have a poor sense of humour - but you do know Omega predicts your actual behaviour, right? As in, all things taken into account, what you will actually do.

Because the million is already there, along with the thousand. Why not get all of it?

2APMason
Because I'd end up with only a thousand, as opposed to a million. And I want the million.
3DSimon
The million isn't there, because Omega's simulation was of you confronting Omega, not of you sitting in a comfy chair.

I think it is important to make a distinction between what our choice is now, while we are here, sitting at a computer screen, unconfronted by Omega, and our choice when actually confronted by Omega. When actually confronted by Omega, your choice has been determined. Take both boxes, take all the money. Right now, sitting in your comfy chair? Take the million-dollar box. In the comfy chair, the counterfactual nature of the experiment basically gives you an Outcome Pump. So take the million-dollar box, because if you take the million-dollar box, it's full of a million dollars. But when it actually happens, the situation is different. You aren't in your comfy chair anymore.

0APMason
I'm not in my comfy chair any more, and I still take the million. Why wouldn't I?
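For what it's worth, the payoff structure being argued over here can be written down directly. A minimal sketch (my own construction, assuming a perfectly accurate Omega that predicts by running the agent's own decision procedure):

```python
# Minimal Newcomb sketch (illustration only; assumes a perfectly
# accurate Omega that runs the agent's actual decision procedure).

def omega_fills_boxes(agent):
    """Omega predicts by running the agent's own policy."""
    predicted_one_box = agent() == "one-box"
    box_a = 1_000                                    # transparent box, always $1,000
    box_b = 1_000_000 if predicted_one_box else 0    # opaque box
    return box_a, box_b

def payoff(agent):
    box_a, box_b = omega_fills_boxes(agent)
    return box_b if agent() == "one-box" else box_a + box_b

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(payoff(one_boxer))  # 1000000
print(payoff(two_boxer))  # 1000
```

Under that assumption, the "take all of it" branch never sees a full opaque box, which is the crux of the disagreement above.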

How would reality go about being not normal? Or more specifically, what is normal, if not reality?

2Manfred
Well, I suppose reality could get pretty abnormal. And yet, it would still all add up to normality - that is, my model of reality should explain my observations, even if that model was "it's all a big acid trip." Getting around that would need something like a violation of causality.

Thank you very much.

Okay, so where did those arrows come from? I see how the graph second from the top corresponds to the amount of time a particle, were particles to exist, would take if it bounced, if it could bounce, because it's not actually a particle, off of a specific point on the mirror. But how does one pull the arrows out of that graph?

4arundelo
Feynman talks about this between 59:33 and 60:32 of part one of his 1979 Douglas Robb lectures. Between 29:41 and 36:27 of part two, he draws the "arrows" diagram on the chalkboard. If you find this topic interesting, you'll enjoy all four parts of the lecture series. See also 63:26 to 63:35 of part one, which is relevant to your other question. Edit: To explicitly answer your question, the angle of each arrow is proportional to the height of the graph above that arrow. Note that different heights on the graph can correspond to identical angles, since (for example) 0 radians, 2pi radians, and 4pi radians are all the same angle.
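For anyone who wants to see the arrows add up numerically, here is a rough sketch (my own construction, not from the lectures; the geometry and wavelength are arbitrary assumptions). Each reflection point on the mirror contributes a unit arrow in the complex plane whose angle is set by the path's travel time, and the arrows are summed:

```python
# Rough numerical sketch of the mirror "arrows" (illustration only).
# Each reflection point gets a unit arrow in the complex plane; the
# arrow's angle turns at a constant rate with the path's travel time
# (proportional to path length), and the total amplitude is the sum.
import numpy as np

wavelength = 1.0                       # arbitrary units (assumed)
source = np.array([-50.0, 10.0])       # source above the mirror
detector = np.array([50.0, 10.0])      # detector above the mirror
mirror_x = np.linspace(-40, 40, 2001)  # reflection points along the mirror (y = 0)

# Path length source -> mirror point -> detector; time is proportional
# to length, so length serves directly as the phase variable.
path_len = (np.hypot(mirror_x - source[0], source[1])
            + np.hypot(detector[0] - mirror_x, detector[1]))

arrows = np.exp(2j * np.pi * path_len / wavelength)  # one unit arrow per path
total_amplitude = arrows.sum()

print(abs(total_amplitude) ** 2)  # probability ~ squared length of the summed arrow
```

The arrows from the middle of the mirror, where the travel time varies slowest, all point roughly the same way and dominate the sum, which is why the reflection effectively comes from the classical bounce point.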

I... Er... What. Where did the whole 'amplitude' thing come from? I mean, it looks a lot like they are vectors in the complex plane, but why are they two dimensional? Why not three? Or one? I just don't get the idea of what amplitude is supposed to describe.

0Amanojack
For that matter, amplitude of a wave...but what is waving? Where's the realism?

I believe I suggested earlier that I don't know what moral theory I hold, because I am not sure of the terminology. So I may, in fact, be a utilitarian, and not know it, because I have not the vocabulary to say so. I asked "At what point is utilitarianism not completely arbitrary?" because I wanted to know more about utilitarianism. That's all.

0Nornagest
Ah. Well, informally, if you're interested in pissing the fewest people off, which as best I can tell is the main point where moral abstractions intersect with physical reality, then it makes sense to evaluate the moral value of actions you're considering according to the degree to which they piss people off. That loosely corresponds to preference utilitarianism: specifically negative preference utilitarianism, but extending it to the general version isn't too tricky. I'm not a perfect preference utilitarian either (people are rather bad at knowing what they want; I think there are situations where what they actually want trumps their stated preference; but correspondence with stated preference is itself a preference and I'm not sure exactly where the inflection points lie), but that ought to suffice as an outline of motivations.

At what point is utilitarianism not completely arbitrary?

0Nornagest
I'm not a moral realist. At some point it is completely arbitrary. The meta-ethics here are way outside the scope of this discussion; suffice it to say that I find it attractive as a first approximation of ethical behavior anyway, because it's a simple way of satisfying some basic axioms without going completely off the rails in situations that don't require Knuth up-arrow notation to describe. But that's all a sideline: if the choice of moral theory is arbitrary, then arguing about the consequences of one you don't actually hold makes less sense than it otherwise would, not more.

No-one asked for a general explanation.

The best term I have found, the one that seems to describe the way I evaluate situations the most accurately, is consequentialism. However, that may still be inaccurate. I don't have a fully reliable way to determine what consequentialism entails; all I have is Wikipedia, at the moment.

I tend to just use cost-benefit analysis. I also have a mental, and quite arbitrary, scale of what things I do and don't value, and to what degree, to avoid situations where I am presented with multiple, equally beneficial choices. I al...

0Nornagest
It helps me understand your reasoning, yes. But if you aren't arguing within a fairly consistent utilitarian framework, there's not much point in trying to convince others that the intuitive option is correct in a dilemma designed to illustrate counterintuitive consequences of utilitarianism. So far it sounds like you're telling us that Specks is intuitively more reasonable than Torture, because the losses are so small and so widely distributed. Well, yes, it is. That's the point.

I don't agree. The existence of 3^^^3 people, or 3^^^3 dust specks, is impossible because there isn't enough matter, as you said. The existence of an event whose only effects are tailored to fit a particular person's idea of 'bad' does not fit my model of how causality works. That seems like a worse infraction, to me.

However, all of that is irrelevant, because I answered the more "interesting question" in the comment you quoted. To be blunt, why are we still talking about this?

0dlthomas
I'm not sure I agree, but "which impossible thing is more impossible" does seem an odd thing to be arguing about, so I'll not go into the reasons unless someone asks for them. I meant a more generalized you, in my last sentence. You in particular did indeed answer the more interesting question.

Yes. I believe that because any suffering caused by the 3^^^3 dust specks is spread across 3^^^3 people, it is a lesser evil than torturing a man for 50 years, assuming there to be no side effects to the dust specks.

0TimS
When I participated in this debate, this post convinced me that a utilitarian must believe that dust specks cause more overall suffering (or whatever badness measure you prefer). Since I already wasn't a utilitarian, this didn't bother me.
0dlthomas
That's not quite what I meant by "explain" - I had understood that to be your position, and was trying to get insight into your reasoning. Drawing an analogy to mathematics, would you say that this is an axiom, or a theorem? If an axiom, it clearly must be produced by a schema of some sort (as you clearly don't have 3^^^3 incompressible rules in your head). Can you explore somewhat the nature of that schema? If a theorem, what sort of axioms, and how arranged, produce it?
-1Nornagest
That's not general enough to mean very much: it fits a number of deontological moral theories and a few utilitarian ones (what the right answer within virtue ethics is is far too dependent on assumptions to mean much), and seems to fit a number of others if you don't look too closely. Its validity depends greatly on which you've picked. As best I can tell the most common utilitarian objection to TvDS is to deny that Specks are individually of moral significance, which seems to me to miss the point rather badly. Another is to treat various kinds of disutility as incommensurate with each other, which is at least consistent with the spirit of the argument but leads to some rather weird consequences around the edge cases.

That is in no way what was said. Also, the idea of an event that somehow manages to have no effect aside from being bad is... insanely contrived. More contrived than the dilemma itself.

However, let's say that instead of 3^^^3 people getting dust in their eye, 3^^^3 people experience a single nanosecond of despair, which is immediately erased from their memory to prevent any psychological damage. If I had a choice between that and torturing a person for 50 years, then I would probably choose the former.

2dlthomas
The notion of 3^^^3 events of any sort is far more contrived than the elimination of knock-on effects of an event. There isn't enough matter in the universe to make that many dust specks, let alone the eyes to be hit and nervous systems to experience it. Of course it's contrived. It's a thought experiment. I don't assert that the original formulation makes it entirely clear; my point is to keep the focus on the actual relevant bit of the experiment - if you wander, you're answering a less interesting question.
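As a sense of scale for that claim, the recursion behind Knuth's up-arrow notation is short enough to write down (a standard definition; the Python below is only a sketch, since actually evaluating 3^^^3 is hopeless):

```python
# Knuth up-arrow recursion (standard definition; sketch only --
# calling it with the thought experiment's arguments will never finish).
def up(a, n, b):
    """a followed by n up-arrows and then b; n=1 is plain exponentiation."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3^^3 = 3^(3^3) = 3^27 = 7625597484987
# 3^^^3 = up(3, 3, 3) = 3^^7625597484987: a power tower of 3s roughly
# 7.6 trillion levels high -- far beyond anything the universe's
# roughly 10^80 atoms could instantiate.
```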

No, I'm pretty sure it makes you notice. It's "enough". "barely enough", but still "enough". However, that doesn't seem to be what's really important. If I consider you to be correct in your interpretation of the dilemma, in that there are no other side effects, then yes, the 3^^^3 people getting dust in their eyes is a much better choice.

1dlthomas
Can you explain a bit about your moral or decision theory that would lead you to conclude that?
2dlthomas
The thought experiment is, 3^^^3 bad events, each just so bad that you notice their badness. Considering consequences of the particular bad thing means that in fact there are other things as well that are depending on your choice, and that's a different thought experiment.

Better late than never.

You haven't said anything. Make a relevant point.

thomblake120

You responded to an anonymous comment from nearly 4 years ago. I don't think they're going to take your advice.

... What is it that frequentists do, again? I'm a little out of touch.

I missed Newton by over 150 years. Pray for a curve.
