All of crazy88's Comments + Replies

I think each year's flu vaccine is a slight modification on an existing vaccine. This may well (read: I have no idea, but it sounds plausible) make it faster to safety test the flu vaccine than a vaccine for a novel disease.

3brianwang712
This is correct. We have lots of infrastructure and expertise for making new flu vaccines every year. It's not a good model for how long we should expect safety testing to take for a vaccine for a new virus. We don't have any licensed vaccines for any coronavirus, for example.

I think this is an uncharitable reading of the purpose of Gaiman's quote. His quote isn't really meant to be a factual claim but an inspirational one.

Now obviously some people will find more inspiration from quotes that express a truth as compared with those that don't. Perhaps you're such a person (I suspect that many people on LW are). At risk of irony, however, it's best not to assume that everyone else is the same as you in that regard.

Evaluating something with an emotional purpose in accordance with its epistemic accuracy (instead of its psychological or poetic force) is likely to lead to an uncharitable reading of many quotes (and rather reinforces the straw vulcan stereotype of rationality).

Yes, philosophers tend to be interested in the issue of conceptual analysis. Different philosophers will have a different understanding of what conceptual analysis is but one story goes something like the following. First, we start out with a rough, intuitive sense of the concepts that we use and this gives us a series of criteria for each concept (perhaps with free will one criterion would be that it relates to moral responsibility in some way and another would be that it relates to the ability to do otherwise in some way). Then we try to find a more precise ac... (read more)

Exactly what information CDT allows you to update your beliefs on is a matter for some debate. You might be interested in a paper by James Joyce (http://www-personal.umich.edu/~jjoyce/papers/rscdt.pdf) on the issue (which was written in response to Egan's paper).

Then if the uncompressed program running had consciousness and the compressed program running did not, you have either proved or defined consciousness as something which is not an output. If it is possible to do what you are suggesting then consciousness has no effect on behavior, which is the presumption one must make in order to conclude that p-zombies are possible.

I haven't thought about this stuff for a while and my memory is a bit hazy in relation to it so I could be getting things wrong here but this comment doesn't seem right to me.

First, my p-zo... (read more)

0mwengler
I think formally you are right. But if consciousness is essential to how we get important aspects of our input-output map, then I think the chances of there being another mechanism that works to get the same input-output map are equal to the chances that you could program a car to drive from here to Los Angeles without using any feedback mechanisms, by just dialing in all the stops and starts and turns and so on that it would need ahead of time. Formally possible, but bearing absolutely no real relationship to how anything that works has ever been built. I am not a mathematician about these things; I am an engineer or a physicist in the sense of Feynman.

It depends on what you're looking for. If you're looking for Drescher style stuff then you're looking for a very specific type of contemporary, analytic philosophy. Straight off the top of my head: Daniel Dennett, Nick Bostrom and some stuff by David Chalmers as well as decision and game theory (good free introduction here).

If you're interested in contemporary, analytic philosophy generally then I can't really make suggestions because the list is too broad (what are your interests? Ethics? Aesthetics? Metaphysics? Epistemology? Logic?). Good general resour... (read more)

I don't think further conversation on this topic is going to be useful for either of us. I presume we both accept that we have some responsibilities for the welfare of others and that sometimes we can consider the welfare of others without being infantilising (for example, I presume we both agree that shooting someone for fun would be in violation of these responsibilities).

Clearly, you draw the line at a very different place to me but beyond that I'm not sure there's much productive to be said.

I will note, however, that my claim is not about doubting th... (read more)

If you are a car salesman and have a button you can legally press which makes your customer buy a car, you'd press it. Instrumental rationality, no?

Instrumental rationality doesn't get you this far. It gets you this far only if you assume that you care only about selling cars and legality. If you also care about the welfare of others then instrumental rationality will not necessarily tell you to push the button (instrumental rationality isn't the same thing as not caring about others).

Of course, I don't expect anyone who doesn't care about the welfare o... (read more)

2Kawoomba
I find your comment to be quixotic. I live in a sheltered bubble, but apparently not yet so far up the ivory tower. Whenever you walk into any department store, or get a loan to buy a car, a new stereo, or whatever, no one there who's trying to sell to you is going to care whether that purchase is in your self-interest, or whether you can afford it (other than your ability to pay), other than to make you happy so that you become a repeat customer, which also isn't a function of the customer's self-interest; just think about tobacco companies. Whether it's the educational sector signing you up for non-dischargeable student loans, car loans, new credit cards offered in the mail, or just buying a PC game, no one will inquire as to your actual self-interest. They'll assume you're an adult and can do what you darn well please, and your self-interest is your business, not theirs. They can pitch you, and if you listen, it's your decision and responsibility. Would you say that the overwhelming majority of modern-day society does then not care at all about the welfare of others, just because they allow others to make their own choices, and let them be autonomous regarding their own self-interest? The infantilizing part is saying "I don't think women are capable of disengaging from a negative conversation, therefore they ought to be protected since their own agency doesn't suffice. There must be rules protecting them since they apparently cannot be trusted to make their own correct choices."

Thanks for the reply. I did take a look at your post but I don't think it really engages with the points that I make (it engages with arguments that are perhaps superficially similar but importantly distinct).

In general a PUA should always make a woman feel good, otherwise why should she choose to stay with him? Probably women suffer much more through awkward interactions, stalkers, etc...

I have no problems with certain things that one might describe as pick up artistry. My comments are reserved for the things that don't involve respect for a woman's welf... (read more)

Hi Roland,

I replied to you in the other thread and I'd be interested to know what you think about my comment (I'm not really making the sort of claim you dismiss in this post so I'm curious as to whether you agree with what I'm saying or whether my comments are problematic for other reasons). Comments quoted below for ease of access:

If the sole determining factor of whether an interaction with a woman is desirable is whether she ends up attracted to you then, yes, even the most extreme sort of pick up artistry would be unproblematic.

However, if you think

... (read more)
-3roland
This is one example of countless other objections that are leveled against PU that fall into the same pattern: an elaborate argument is presented that illustrates a problem with PU and yet at the same time it is overlooked that the same argument could be applied (yet rarely or never is) against women or against dating/mating in general. Specifically: […] and: […] The name of the game is mating, not altruism! In mating we are generally not concerned primarily about the welfare of the object of our desire. It doesn't matter if you are human, non-human, male, female, homosexual, heterosexual, PUA or not PUA. Is a woman who expects the man to pay for her drinks, or the boyfriend to help her pay the rent, really concerned about the welfare of the other? Doesn't sound nice, does it? But I didn't write the rules of the game. Actually it should be a big surprise if the mating game that came about through evolution conformed to our expectations of fairness or niceness. PUAs didn't invent the game; they analyzed it and figured out what the winning moves are. Don't blame them. This is actually what all the PUA hate is about: rationalized or cleverly packaged envy. Those guys who figured out how to hack the system imposed onto them and gain an "unfair" advantage — we can't let them get away with it, can we?

That's the issue. Some people have an ideology that some women's tastes are distasteful.

It's a clever line but doesn't really interact with what I said (which may perhaps have been because I was unclear: I don't intend to suggest this fact is your fault).

We can think of it another way: what do we think constitutes the welfare of a woman? Presumably we don't think that it is just that she is attracted to the person she is currently conversing with.

However, if this is the case and if we care about how our interactions with people affect their welfare then ... (read more)

If the sole determining factor of whether an interaction with a woman is desirable is whether she ends up attracted to you then, yes, even the most extreme sort of pick up artistry would be unproblematic.

However, if you think that there are other factors that determine whether such an interaction is desirable (such as whether the woman is treated with respect, is not made to feel unpleasant etc) then certain sorts of pick up artistry are extremely distasteful.

For example, let's hypothetically imagine that women are more attracted to people who make them fe... (read more)

-3Kawoomba
If you are a car salesman and have a button you can legally press which makes your customer buy a car, you'd press it. Instrumental rationality, no? If you are a researcher who has a button he can legally press to make that reviewer look upon his submission more favorably, you'd probably press it. If you are a guy and have a button you can legally press that makes the woman you're trying to woo fall in love with you, pressing that button would be ... bad? I find it extremely condescending to say you're responsible for how a woman you just met feels; it's treating her like a child, not like an adult who can darn well be expected to make her own choices, and turn away from you if she so desires. This of course only applies with the male staying in the legal framework and not exhibiting e.g. stalking behavior (i.e. accepting when the woman turns away). Of course women have a right to demand respect and to be treated in whatever manner they as individuals desire, just as males have a right to provide or not to provide that sort of interaction. Externally imposing unwritten rules (other than a legal framework) is infantilizing adult agents.
1roland
In general a PUA should always make a woman feel good, otherwise why should she choose to stay with him? Probably women suffer much more through awkward interactions, stalkers, etc... Making a woman feel insecure might work; so does a movie that makes people feel scared (ever enjoyed a good horror movie?). Should we blame a PUA if that works for him? Beautiful women will have an edge when negotiating with a man; should we blame her for using this as a tactic? I've decided to write my own post on the subject, feel free to take a look: http://lesswrong.com/r/discussion/lw/h6l/pick_up_artistspuas_my_view/
0buybuydandavis
That's the issue. Some people have an ideology that some women's tastes are distasteful.

There have been attempts to create derivatives of CDT that work like that. That replace the "C" from conventional CDT with a type of causality that runs about in time as you mention. Such decision theories do seem to handle most of the problems that CDT fails at. Unfortunately I cannot recall the reference.

You may be thinking of Huw Price's paper available here

Thanks Pinyaka, changed for next edit (and glad to hear you're finding it useful).

Okay, well I've rewritten this for the next update in a way that hopefully resolves the issues.

If you have time, once the update is posted I'd love to know whether you think the rewrite is successful. In any case, thanks for taking the time to comment so far.

Some quotes might help.

Peterson defines an act "as a function from a set of states to a set of outcomes"

The rest of the details are contained in this quote: "The key idea in von Neumann and Morgenstern's theory is to ask the decision maker to state a set of preferences over risky acts. These acts are called lotteries, because the outcome of each act is assumed to be randomly determined by events (with known probabilities) that cannot be controlled by the decision maker".

The terminology of risky acts is more widespread than Peterson: htt... (read more)
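These definitions can be made concrete with a small sketch (the scenario, names, and numbers below are my own illustration, not Peterson's): an act is a function from world states to outcomes, and a "risky act" or lottery arises when the states have known probabilities, so each act induces a distribution over outcomes.

```python
def expected_utility(act, state_probs, utility):
    """Expected utility of an act, given known probabilities over world states.

    act: mapping from world state -> outcome (an act as a function of states)
    state_probs: mapping from world state -> probability (what makes it "risky")
    utility: mapping from outcome -> utility
    """
    return sum(p * utility[act[state]] for state, p in state_probs.items())


# Two world states with known probabilities.
state_probs = {"rain": 0.3, "sun": 0.7}

# Acts represented as functions (here, dicts) from states to outcomes.
take_umbrella = {"rain": "dry_but_encumbered", "sun": "encumbered"}
leave_umbrella = {"rain": "wet", "sun": "unencumbered"}

utility = {"dry_but_encumbered": 8, "encumbered": 6, "wet": 0, "unencumbered": 10}

print(expected_utility(take_umbrella, state_probs, utility))   # 0.3*8 + 0.7*6 = 6.6
print(expected_utility(leave_umbrella, state_probs, utility))  # 0.3*0 + 0.7*10 = 7.0
```

Note that the "risky" character lives entirely in the known probabilities over states; the act itself is just a deterministic mapping, which is why each act can be identified with a lottery over outcomes.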

0AlexMennen
If I understand correctly, Peterson is defining "acts" and "risky acts" as completely separate things (functions from states to outcomes, and lotteries over outcomes, respectively). If that's true, it clears up the confusion, but that seems like extraordinarily bad terminology.

From memory, Nozick explicitly disclaims the idea that his view might be a response to normative uncertainty. Rather, he claims that EDT and CDT both have normative force and so should both be taken into account. While this may appear to be window dressing, this will have fairly substantial impacts. In particular, no regress threatens Nozick but the regress issue is going to need to be responded to in the normative uncertainty case.

Okay, so I've been reading over Peterson's book An Introduction to Decision Theory and he uses much the same language as that used in the FAQ with one difference: he's careful to talk about risky acts rather than just acts (when he talks about VNM, that is; at some other points he does simply talk about acts). This seems to be a pretty common way of talking about it (people other than Peterson use this language).

Anyway, Peterson explicitly defines a "lottery" as an act (which he defines as a function from world states to outcomes) whose outcome is ... (read more)

0AlexMennen
Either Peterson does things wrong, you're misunderstanding Peterson, or I'm misunderstanding you. When I have time, I'll look at that book to try to figure out which, unless you manage to sort things out for me before I get to it.

Cool, thanks for letting me know.

Point conceded (both your point and shminux's). Edited for the next update.

Thanks for the clarification.

Perhaps worth noting that earlier in the document we defined acts as functions from world states to outcomes so this seems to resolve the second concern somewhat (if the context is different then presumably this is represented by the world states being different and so there will be different functions in play and hence different acts).

In terms of the first concern, while VNM may define preferences over all lotteries, there's a sense where in any specific decision scenario, VNM is only appealed to in order to rank the achievab... (read more)

2AlexMennen
What? That's what I thought "acts" meant the first time, before I read the document more thoroughly and decided that you must mean that acts are lotteries. If you are using "act" to refer to functions from world states to outcomes, then the statement that the VNM system only applies to acts is simply false, rather than misleading.
2Shmi
I could not find a definition of "world state" in the document. All you say is […] which is by no means a good definition. It tells you what a state is not, but not what it is. It even fails at that, given that it uses the term "part of the world" without it being previously defined.

My understanding is that in the VNM system, utility is defined over lotteries. Is this the point you're contesting or are you happy with that but unhappy with the use of the word "acts" to describe these lotteries. In other words, do you think the portrayal of the VNM system as involving preferences over lotteries is wrong or do you think that this is right but the way we describe it conflates two notions that should remain distinct.

1AlexMennen
The problem is with the word "acts". Some lotteries might not be achievable by any act, so this phrasing makes it sound like the VNM only applies to the subset of lotteries that is actually possible to achieve. And I realize that you're using the word "act" more specifically than this, but typically, people consider doing the same thing in a different context to be the same "act", even though its consequences may depend on the context. So when I first read the paragraph I quoted after only skimming the rest, it sounded like it was claiming that the VNM system can only describe deontological preferences over actions that don't take context into account, which is, of course, ridiculous. Also, while it is true that the VNM system defines utility over lotteries, it is fairly trivial to modify it to use utility over outcomes (see first section of this post)

My understanding is that in the VNM system, utility is defined over lotteries. Is this the point you're contesting or are you happy with that but unhappy with the use of the word "acts" to describe these lotteries. In other words, do you think the portrayal of the VNM system as involving preferences over lotteries is wrong or do you think that this is right but the way we describe it conflates two notions that should remain distinct.

[This comment is no longer endorsed by its author]

I think I'm missing the point of what you're saying here so I was hoping that if I explained why I don't understand, perhaps you could clarify.

VNM-utility is unique up to a positive linear transformation. When a utility function is unique up to a positive linear transformation, it is an interval (/cardinal scale). So VNM-utility is an interval scale.

This is the standard story about VNM-utility (which is to say, I'm not claiming this because it seems right to me but rather because this is the accepted mainstream view of VNM-utility). Given that this is a si... (read more)
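The invariance claim can be checked directly in a few lines (a hypothetical example of my own, not from this exchange): any positive linear transformation u ↦ a·u + b with a > 0 preserves the expected-utility ranking of lotteries, which is exactly what makes VNM-utility an interval scale — intervals are comparable, but ratios of utilities carry no information.

```python
def expected_utility(lottery, u):
    """lottery: dict mapping outcome -> probability."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

u = {"A": 0, "B": 5, "C": 10}
v = {k: 3 * x + 7 for k, x in u.items()}  # positive linear transform of u

lottery1 = {"A": 0.4, "C": 0.6}  # 40-60 gamble between A and C
lottery2 = {"B": 1.0}            # B for certain

# The ranking of lotteries agrees under u and v, as the VNM theorem guarantees:
# under u: 6.0 > 5.0; under v: 25.0 > 22.0.
assert (expected_utility(lottery1, u) > expected_utility(lottery2, u)) == \
       (expected_utility(lottery1, v) > expected_utility(lottery2, v))

# But utility ratios are not preserved (C/B is 10/5 = 2 under u, 37/22 under v),
# which is why VNM-utility is an interval scale rather than a ratio scale.
```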

0Sniffnoy
Oops, you are absolutely right. (a-b)/|c-d| is meaningful after all. Not sure why I failed to notice that. Thanks for pointing that out.

Does the horizontal axis of the decision tree in section 3 represent time?

Yes and no. Yes, because presumably the agent's end result re: house and money occurs after the fire and the fire will happen after the decision to take out insurance (otherwise, there's not much point taking out insurance). No, because the diagram isn't really about time, even if there is an accidental temporal component to it. Instead, the levels of the diagram correspond to different factors of the decision scenario: the first level is about the agent's choice, the second leve... (read more)

Will be fixed in the next update. Thanks for pointing it out.

Thanks. Will be fixed in next update. Thanks also for the positive comment.

Thanks, as you note, the linked comment is right.

Thanks, will be fixed in next update.

Fixed for next update. Thanks.

Thanks, fixed for the next update.

Thanks. Fixed for the next update of the FAQ.

0james_edwards
Typo at 11.4:
2pinyaka
Also, shouldn't independence have people who prefer (1A) to (1B) prefer (2A) to (2B)? EDIT: Either the word "because" or "and" is out of place here. I only notice these things because this FAQ is great and I'm trying to understand every detail that I can.

Thanks. I've fixed this up in the next update (though it won't appear on the LW version yet).

I think this conversation is now well into the territory of diminishing return so I'll leave it at that.

Okay, perhaps I can have another go at this.

First thing to note, possible worlds can't be specified at different levels of detail. When doing so we are either specifying partial possible worlds or sets of possible worlds. As rigid designation is a claim about worlds, it can't be relative to the level of detail utilised as it only applies to things specified at one level of detail.

Second, you still seem to be treating possible worlds as concrete things rather than something in the head (or, at least, making substantive assumptions about possible worlds and ... (read more)

2Qiaochu_Yuan
I think that these two desires are contradictory. Part of what I'm trying to say is that it's a highly nontrivial problem which propositions are even meaningful, let alone true, if you specify possible worlds at a sufficiently high level of detail. For example, at an extremely high level of detail, you might specify a possible world by specifying a set of laws of physics together with an initial condition for the universe. This kind of specification of a possible world doesn't automatically allow you to interpret intuitive referents like "I," so the meaning of a statement like "I am holding a glass of water" is extremely unclear. How do you know what things are rigid designators if you neither know how to specify possible worlds nor how to determine what's in them?

I think this is getting past the point where I can usefully contribute further, though I will note that the vast literature on the topic has dealt with this sort of issue in detail (though I don't know it well enough to comment in detail).

Saying that, I'll make one final contribution and then leave it at that: I suspect that you've misunderstood the idea of a rigid designator if you think it depends on the resolution at which you examine possible worlds. To say that something is a rigid designator is to say that it refers to the same thing in all possible worl... (read more)

I can't cite sources off-hand but this suggestion is reasonably standard, though taken to be a bit of a cheat (it dodges the difficult question). For this reason it is often stipulated that no objective chance device is available to the agent or that the predictor does something truly terrible if the agent decides by such a device (perhaps takes back all the money in the boxes and the money in the agent's bank account).

4CronoDAS
Usually, it's just "choosing using a randomizing device will be treated the same as two-boxing."
2faul_sname
In other words, the question becomes one of "Omega has two boxes box A and box B, which it fills based on what it thinks you will do. Box A has $1000 and box B has either $0 or $1000000 depending on whether Omega predicts you will take both boxes or only box B, respectively. If Omega predicts that you will do your best to be unpredictable, it will do something bad to you. Should you take box A, box B, or try to be unpredictable?" That question doesn't seem as interesting.
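Under the stipulation CronoDAS describes (randomizers are treated as two-boxers), the expected payoffs can be sketched as follows. The predictor accuracy and the coin-flip framing here are my own illustrative assumptions, not part of the canonical problem statement:

```python
def expected_payoff(strategy, accuracy=0.99):
    """Expected dollars in Newcomb's problem against a predictor of given accuracy.

    strategy: 'one_box', 'two_box', or 'randomize'. A randomizing agent is
    treated as a two-boxer by the predictor, so the opaque box stays empty.
    """
    if strategy == "one_box":
        # Opaque box holds $1M whenever the prediction (one-boxing) is correct.
        return accuracy * 1_000_000
    if strategy == "two_box":
        # $1000 for certain, plus $1M only when the predictor errs.
        return 1_000 + (1 - accuracy) * 1_000_000
    if strategy == "randomize":
        # Opaque box is empty; a fair coin picks one or two boxes.
        return 0.5 * 0 + 0.5 * 1_000
    raise ValueError(strategy)
```

With a 99%-accurate predictor this gives roughly $990,000 for one-boxing, $11,000 for two-boxing, and $500 for randomizing, so the stipulation makes the coin flip strictly worse than either pure strategy and the "cheat" disappears.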

As I said, these are complex issues.

possible worlds are things that live inside the minds of agents (e.g. humans).

Yes, but almost everyone agrees with this (or at least, almost all views on possible worlds can be interpreted this way even if they can also be interpreted as claims about the existence of abstract - non-concrete - objects). There are a variety of different things that possible worlds can be even given the assumption that they exist in people's heads (almost all the disagreement about what possible worlds are is disagreement within this ca... (read more)

2Qiaochu_Yuan
Okay. I think what I'm actually trying to say is that what constitutes a rigid designator, among other things, seems to depend very strongly on the resolution at which you examine possible worlds. When you say the phrase "imagine the possible world in which I have a glass of water in my hand" to a human, that human knows what you mean because by default humans only model the physical world at a resolution where it is easy to imagine making that intervention and only that intervention. When you say that phrase to an AI which is modeling the world at a much higher resolution, the AI does not know how to do what you ask because you haven't given it enough information. How did the glass of water get there? What happened to the air molecules that it displaced? Etc.

Okay, so three things are worth clarifying up front. First, this isn't my area of expertise so anything I have to say about the matter should be taken with a pinch of salt. Second, this is a complex issue and really would require 2 or 3 sequences of material to properly outline so I wouldn't read too much into the fact that my brief comment doesn't present a substantive outline of the issue. Third, I have no settled views on the issues of rigid designators, nor am I trying to argue for a substantive position on the matter so I'm not deliberately sweeping a... (read more)

4Qiaochu_Yuan
Thank you for the clarification. I agree that the question of what a possible world is is an important one, but the answer seems obvious to me: possible worlds are things that live inside the minds of agents (e.g. humans). Water is one of the examples I considered and found incoherent. Once you start considering possible worlds with different laws of physics, it's extremely unclear to me in what sense you can identify types of particles in one world with particles in another type of world. I could imagine doing this by making intuitive identifications step by step along "paths" in the space of possible worlds, but then it's unclear to me how you could guarantee that the identifications you get this way are independent of the choice of path (this idea is motivated by a basic phenomenon in algebraic topology and complex analysis).

You may have resolved this now by talking to Richard (who knows more about this than me) but, in case you haven't, I'll have a shot at it.

First, the distinction: Richard is using rigid designation to talk about how a single person evaluates counterfactual scenarios, whereas you seem to be taking it as a comment about how different people use the same word.

Second, relevance: Richard's usage allows you to respond to an objection. The objection asks you to consider the counterfactual situation where you desire to murder people and says murder must now be right... (read more)

Multiple philosophers have suggested that this stance seems similar to "rigid designation", i.e., when I say 'fair' it intrinsically, rigidly refers to something-to-do-with-equal-division. I confess I don't see it that way myself - if somebody thinks of Euclidean geometry when you utter the sound "num-berz" they're not doing anything false, they're associating the sound to a different logical thingy. It's not about words with intrinsically rigid referential power, it's that the words are window dressing on the underlying entities.

I j... (read more)

6Qiaochu_Yuan
Can you give an example of a rigid designator (edit: that isn't purely mathematical / logical)? I don't understand how the concept is even coherent right now. "Issues of transworld identity" seem to be central and I don't know why you're sweeping them under the rug. More precisely, I do not understand how one goes about identifying objects in different possible worlds even in principle. I think that intuitions about this procedure are likely to be flawed because people do not consider possible worlds that are sufficiently different.
8RichardChappell
Correct. Eliezer has misunderstood rigid designation here.

Given that this has no response to it, I'm curious as to whether Orthonormal has responded to you regarding this either off list or elsewhere?

0Wei Dai
We discussed it by email a bit more, but I don't think he came up with a very good answer. I'll forward you the exchange if you PM me your email address.

It still may be hard to resolve when something is as simple as possible.

So modal realism (the idea that possible worlds exist concretely) has been highlighted a few times in this thread as an unparsimonious theory but Lewis has two responses to this:

1.) This is (at least mostly) quantitative unparsimony not qualitative (lots of stuff, not lots of types of stuff). It's unclear how bad quantitative unparsimony is. Specifically, Lewis argues that there is no difference between possible worlds and actual worlds (actuality is indexical) so he argues that he doe... (read more)

In terms of Lewis, I don't know of someone criticising him for this off-hand but it's worth noting that Lewis himself (in his book On the Plurality of Worlds) recognises the parsimony objection and feels the need to defend himself against it. In other words, even those who introduce unparsimonious theories in philosophy are expected to at least defend the fact that they do so (of course, many people may fail to meet these standards but the expectation is there and theories regularly get dismissed and ignored if they don't give a good accounting of why we s... (read more)

Obviously and unfortunately, the idea that you are not supposed to end up with more and more ontologically fundamental stuff inside your philosophy is not mainstream.

I think I must be misunderstanding what you're saying here because something very similar to this is probably the principal accusation relied upon in metaphysical debates (if not the very top, certainly top 3). So let me outline what is standard in metaphysical discussions so that I can get clear on whether you're meaning something different.

In metaphysics, people distinguish between quanti... (read more)

5Eliezer Yudkowsky
The claim might just need correction to say, "Many philosophers say that simplicity is a good thing but the requirement is not enforced very well by philosophy journals" or something like that. I think I believe you, but do you have an example citation anyway? (SEP entries or other ungated papers are in general good; I'm looking for an example of an idea being criticized due to lack of metaphysical parsimony.) In particular, can we find e.g. anyone criticizing modal logic because possibility shouldn't be basic because metaphysical parsimony?

Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex (Schroeder et al. 2012).1

This isn't an area about which I know very much, but my understanding i... (read more)

I'm not convinced that Briggs' argument succeeds but I take it that the argument is meant to apply as long as the theory ranks decisions ordinally (rather than applying only if they do so and not if they utilise more information). See my response to manfred for a few more minor details.

Whoa, no. That's a bad mantra. Wireheading, quantum immortality, doing meth - these are bad things.

Briggs is here primarily considering cases where your preferences don't change as a result of your decision (but where your credences might). If we're interested in criticising the argument precisely as stated then perhaps this is a reasonable criticism but it's not an interesting criticism of Briggs' view which is to do with how we reason in cases where our decision gives us new information about the state of the world (ie. about changing credences not ch... (read more)

0Manfred
Yup, I missed that a year ago. I'm not sure where I was going with that either. True. Though on the other hand, the smoking lesion problem (and variants) is pretty much the credence-changing equivalent of doing meth :P I still think the requirements are akin to "let's find a decision theory that does meth but never has anything bad happen to it."

Egan's point is often taken to be similar to some earlier points including that made by Bostrom's meta-newcomb's problem (http://www.nickbostrom.com/papers/newcomb.html)

It's worth noting that not everyone agrees that these are problems for CDT:

See James Joyce (the philosopher): http://www-personal.umich.edu/~jjoyce/papers/rscdt.pdf

See Bob Stalnaker's comment here: http://tar.weatherson.org/2004/11/29/newcomb-and-mixed-strategies/ (the whole thread is pretty good)
