Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: wedrifid 30 October 2014 11:51:12AM 0 points [-]

Anyway, isn't that an ad hominem argument?

No. It is an argument which happens to use the perceived negative consequences of an individual's actions as a premise. Rejecting a claim with 'ad hominem!' is only legitimate when there is a fallacy of relevance: a personal attack that does not actually support the conclusion. It does not apply whenever an argument happens to contain content that reflects badly on an individual.

Comment author: AnnaLeptikon 30 October 2014 10:02:14AM 1 point [-]

Me - and many others from the meetup in Vienna - already signed up. This - probably - will be super awesome. Looking forward to it!

Comment author: Stuart_Armstrong 30 October 2014 10:01:46AM *  0 points [-]

"In any possible world I value only my own current and future subjective happiness, averaged over all of the subjectively indistinguishable people who could equally be "me" right now."

Oh. I see. The problem is that that utility function takes a "halfer" position on combining utilities (averaging) and a "thirder" position on counterfactual worlds where the agent doesn't exist (removing them from consideration). I'm not even sure it's a valid utility function - it seems to mix utility and probability.

For example, in the heads world, it values "50% Roger vs 50% Jack" at the full utility amount, yet values only one of "Roger" and "Jack" at full utility. The correct way of doing this would be to value "50% Roger vs 50% Jack" at 50% - and then you just have a rescaled version of the thirder utility.
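Made explicit with the linearity of expected utility (a sketch; $U$ is the candidate utility function and $u$ the "full utility amount" — both symbols are mine, not from the comment):

```latex
U\bigl(\tfrac12\,\text{Roger} + \tfrac12\,\text{Jack}\bigr)
  = \tfrac12\,U(\text{Roger}) + \tfrac12\,U(\text{Jack})
```

If only one of $U(\text{Roger})$, $U(\text{Jack})$ equals $u$ and the other is $0$, the mixture is worth $u/2$; valuing it at the full $u$ therefore contradicts linearity.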

I think I see the idea you're getting at, but I suspect that the real lesson of your example is that that mixed halfer/thirder idea cannot be made coherent in terms of utilities over worlds.

Comment author: Stuart_Armstrong 30 October 2014 09:44:28AM 0 points [-]

The divergence between reference class (of identical people) and reference class (of agents with the same decision) is why I advocate for ADT (which is essentially UDT in an anthropic setting).

Comment author: Stuart_Armstrong 30 October 2014 09:39:01AM 0 points [-]

Conceivable. But it doesn't seem to me that such a theory is necessary, as its role seems merely to be to state probabilities that don't influence actions.

Comment author: Jackercrack 30 October 2014 08:34:18AM -1 points [-]

You say he's not-mad, but isn't he the spitting image of the revolutionary that power corrupts? Wasn't Communism the archetype of the affective death spiral? It would appear he was likely suffering from syphilis, a disease that can cause confusion, dementia and memory problems. Anyway, isn't that an ad hominem argument?

Comment author: eli_sennesh 30 October 2014 08:25:04AM 0 points [-]

Why not start with a probability distribution over (the finite list of) objects of size at most N, and see what happens when N becomes large?

Because there is no defined "size N", except perhaps for nodes in the tree representation of the inductive type.
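One way to make this concrete is to take "size" to be exactly that node count; a minimal Haskell sketch for unlabelled binary trees (the type and function names are illustrative, not from the thread):

```haskell
-- "Size" as the number of Node constructors in the tree representation
-- of an inductive type; here, unlabelled binary trees.
data Tree = Leaf | Node Tree Tree deriving (Eq, Show)

size :: Tree -> Int
size Leaf       = 0
size (Node l r) = 1 + size l + size r

-- All trees with exactly n nodes (there are Catalan-number many).
treesOfSize :: Int -> [Tree]
treesOfSize 0 = [Leaf]
treesOfSize n =
  [ Node l r
  | k <- [0 .. n - 1]
  , l <- treesOfSize k
  , r <- treesOfSize (n - 1 - k)
  ]

-- Objects of size at most n form a finite list, so a uniform
-- distribution over them is well defined for each n.
treesUpTo :: Int -> [Tree]
treesUpTo n = concatMap treesOfSize [0 .. n]
```

For n = 0, 1, 2, 3 this gives 1, 1, 2, and 5 trees respectively; the open question in the thread is what happens to the induced distribution as n grows.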

Comment author: Creutzer 30 October 2014 08:24:18AM *  1 point [-]

You can define a notion of logical consequence that isn't preservation of truth and is therefore applicable to sentences that have no truth-values. For example, define a state as some sort of thing, define what it means for a sentence to be accepted in a state, and then define consequence as preservation of acceptance. But you still can't identify acceptance with truth because you'll have a separate notion of the truth which, in turn, is used in the definition of acceptance. It's just that this notion of truth is only defined for some sentences of the language. (As a very simple case, say a state is a set of worlds, and a non-modal sentence φ is accepted in a state s iff φ is true in all worlds w in s.)
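The toy case in the parenthesis can be spelled out directly (a Haskell sketch; `World`, `accepted`, and `entails` are illustrative names of my choosing):

```haskell
-- A state is a set of worlds; a (non-modal) sentence is accepted in a
-- state iff it is true at every world in the state.
type World    = Int
type State    = [World]
type Sentence = World -> Bool  -- truth is defined world by world

accepted :: State -> Sentence -> Bool
accepted s phi = all phi s

-- Consequence as preservation of acceptance (checked over given states):
-- whenever every premise is accepted in a state, so is the conclusion.
entails :: [State] -> [Sentence] -> Sentence -> Bool
entails states premises conclusion =
  and [ accepted s conclusion
      | s <- states
      , all (accepted s) premises
      ]
```

Note that `accepted` is defined from truth at worlds but is not itself truth: a sentence can fail to be accepted in a state without being false at every world in it.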

Mark Schröder and Seth Yalcin are two people on the philosophical side who defend modal expressivism with a semantics of that sort. On the more logico-linguistic side, there are lots of Dutch people, for example Frank Veltman and Jeroen Groenendijk.

Comment author: eli_sennesh 30 October 2014 08:13:12AM 1 point [-]

Well, I can't answer for Eliezer's intentions, but I can repeat something he has often said about HPMoR: the only statements in HPMoR he is guaranteed to endorse with a straight face and high probability are those made about science/rationality, preferably in an expo-speak section, or those made by Godric Gryffindor, his author-avatar. Harry, Dumbledore, Hermione, and Quirrell are fictional characters: you are not necessarily meant to emulate them, though of course you can if you independently arrive at the conclusion that doing so is a Good Idea.

Is this really supposed to be one of the HPMOR passages which is solely about the fictional character and is not meant to have any application to the real world except as an example of something not to do?

I personally think it is one of the passages in which the unavoidable conceits of literature (i.e., that the protagonist's actions actually matter on a local-world-historical scale) overcome the standard operation of real life. Eliezer might have a totally different view, but of course, he keeps info about HPMoR close to his chest for maximum Fun.

Comment author: Creutzer 30 October 2014 08:01:03AM 0 points [-]

The formulation of the question didn't quite make it clear that emotivism was just intended as an example for one possible non-cognitivist position. That's what I objected to. As an example, it's fine of course - it is, after all, the most well-known such position.

Comment author: RichardKennaway 30 October 2014 06:57:19AM 2 points [-]

arguably, ‘x = 3’ and ‘x² = 9’ do not have truth values, but ‘if x = 3, then x² = 9’ does.

I would say that "x=3" denotes a function from values of x to truth values, as does "if x = 3, then x² = 9" (a constant function to the value "true").
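That reading can be written out directly (a sketch; Haskell functions standing in for the open sentences, with names of my choosing):

```haskell
-- Open sentences read as functions from values of x to truth values.
eqThree :: Integer -> Bool
eqThree x = x == 3

squareNine :: Integer -> Bool
squareNine x = x * x == 9

-- The conditional, read pointwise, is the constant function to True:
-- at every x, either x /= 3 or x^2 = 9.
conditional :: Integer -> Bool
conditional x = not (eqThree x) || squareNine x
```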

Comment author: undermind 30 October 2014 06:35:54AM 0 points [-]

No, you didn't.

And kudos (in the form of an upvote) to you for suggesting something to improve the niceness of rationalists -- as has been pointed out many times, that's something we should work on.

Yeah, instrumental rationality is (epistemically) easier -- on the writer as well as on the reader. Epistemic rationality requires rigor, which usually implies a lot of math. Instrumental rationality can be pretty successful with a few examples and a moderately useful analogy.

Comment author: JoachimSchipper 30 October 2014 06:27:05AM 1 point [-]

I didn't exactly disagree with the content, right?

Part of the problem is just that writing something good about epistemic rationality is really hard, even if you stick to the 101 level - and, well, I don't really care about 101 anymore. But I have plenty of sympathy for those writing more practical posts.

Comment author: jpaulson 30 October 2014 05:57:29AM 0 points [-]

Why not start with a probability distribution over (the finite list of) objects of size at most N, and see what happens when N becomes large?

It really depends on what distribution you want to define though. I don't think there's an obvious "correct" answer.

Here is the Haskell typeclass for doing this, if it helps: https://hackage.haskell.org/package/QuickCheck-
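A sketch of why there's no obvious "correct" answer: two perfectly reasonable distributions over the same finite list of binary trees disagree (illustrative only — this is not the QuickCheck API, which instead works with sized generators):

```haskell
data Tree = Leaf | Node Tree Tree deriving (Eq, Show)

-- All trees with exactly n nodes.
treesOfSize :: Int -> [Tree]
treesOfSize 0 = [Leaf]
treesOfSize n =
  [ Node l r | k <- [0 .. n - 1]
             , l <- treesOfSize k, r <- treesOfSize (n - 1 - k) ]

-- Uniform: every tree of size <= n is equally likely.
uniform :: Int -> [(Tree, Double)]
uniform n = [ (t, 1 / fromIntegral total) | t <- ts ]
  where ts    = concatMap treesOfSize [0 .. n]
        total = length ts

-- Size-stratified: each size class gets equal mass, uniform within it.
sizeStratified :: Int -> [(Tree, Double)]
sizeStratified n =
  [ (t, 1 / fromIntegral (n + 1) / fromIntegral (length (treesOfSize k)))
  | k <- [0 .. n], t <- treesOfSize k ]
```

Already at n = 2 they assign `Leaf` probability 1/4 and 1/3 respectively, so the limit as n grows depends on which choice you start from.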

Comment author: NancyLebovitz 30 October 2014 05:45:27AM 0 points [-]

From what I've seen on the SJ side, they've done a lot to make white into a marked state (in other words, white people being referred to as white) rather than whiteness being an implied default.

Comment author: TobyBartels 30 October 2014 05:44:06AM 0 points [-]

It has been reported here that largest volume, longest length, and largest mass all give the same result.

Comment author: Username 30 October 2014 05:43:30AM *  1 point [-]

I see the advent of modern corporations as the start of independent agents competing for resources and striving for their own goals. It also is when we started seeing the exponential growth that defines our current age. The standard thought is that the singularity is the moment when the speed of exponential growth outpaces the human ability to process that information in real time. I think that definition is too human-centric, and I'd rather refer to the phenomenon of exponential growth as a longer continuous process.

So the formation of LLCs was the start of the Singularity, and we haven't seen the end yet. Like I said, non-standard and weird.

Comment author: jpaulson 30 October 2014 05:42:42AM 0 points [-]

Unfortunately, it seems much easier to list particularly inefficient uses of time than particularly efficient uses of time :P I guess it all depends on your zero point.

Comment author: TobyBartels 30 October 2014 05:41:20AM 1 point [-]

Maybe there should be an ‘extended family’ option.

Comment author: TobyBartels 30 October 2014 05:40:10AM 0 points [-]

There were ‘left-libertarian’ and ‘anarchist’.
