All of PhilosophyTutor's Comments + Replies

(EDIT: See below.) I'm afraid that I am now confused. I'm not clear on what you mean by "these traits", so I don't know what you think I am being confident about. You seem to think I'm arguing that AIs will converge on a safe design and I don't remember saying anything remotely resembling that.

EDIT: I think I figured it out on the second or third attempt. I'm not 100% committed to the proposition that if we make an AI and know how we did so that we can definitely make sure it's fun and friendly, as opposed to fundamentally uncontrollable and unkn... (read more)

0Stuart_Armstrong
Significant is not the same as sufficient. How low do you think the probability of negative AI outcomes is, and what are your reasons for being confident in that estimate?

I won't argue against the claim that we could conceivably create an AI without knowing anything about how to create an AI. It's trivially true in the same way that we could conceivably turn a monkey loose on a typewriter and get strong AI.

I also agree with you that if we got an AI that way we'd have no idea how to get it to do any one thing rather than another and no reason to trust it.

I don't currently agree that we could make such an AI using a non-functioning brain model plus "a bit of evolution". I am open to argument on the topic but currently it seems to me that you might as well say "magic" instead of "evolution" and it would be an equivalent claim.

0Stuart_Armstrong
Why are you confident that an AI that we do develop will not have these traits? You agree the mindspace is large, you agree we can develop some cognitive abilities without understanding them. If you add that most AI programmers don't take AI risk seriously and will only be testing their AIs in controlled environments, and that the AI will likely be developed for a military or commercial purpose, I don't see why you'd have high confidence that they will converge on a safe design.

A universal measure for anything is a big demand. Mostly we get by with some sort of somewhat-fuzzy "reasonable person" standard, which obviously we can't fully explicate in neurological terms either yet, but which is much more achievable.

Liberty isn't a one-dimensional quality either, since for example you might have a country with little real freedom of the press but lots of freedom to own guns, or vice versa.

What you would have to do to develop a measure with significant intersubjective validity is to ask a whole bunch of relevantly educated p... (read more)

I tend to think that you don't need to adopt any particular position on free will to observe that people in North Korea lack freedom from government intervention in their lives, access to communication and information, a genuine plurality of viable life choices and other objectively identifiable things humans value. We could agree for the sake of argument that "free will is an illusion" (for some definitions of free will and illusion) yet still think that people in New Zealand have more liberty than people in North Korea.

I think that you are basi... (read more)

2Shmi
Well, yes, it is hard to argue about NK vs West. But let's try to control for the "non-liberty" confounders, such as income, wealth, social status, etc. Say, we take some upper middle-class person from Iran, Russia or China. It is quite likely that, when comparing their life with that of a Westerner of similar means, they would not immediately state that the Western person has more "objectively identifiable things humans value". Obviously the sets of these valuable things are different, and the priorities different people assign to them would be different, but I am not sure that there is a universal measure everyone would agree upon as "more liberty".

I said earlier in this thread that we can't do this and that it is a hard problem, but also that I think it's a sub-problem of strong AI and we won't have strong AI until long after we've solved this problem.

I know that Word of Eliezer is that disciples won't find it productive to read philosophy, but what you are talking about here has been discussed by analytic philosophers and computer scientists as "the frame problem" since the eighties and it might be worth a read for you. Fodor argued that there are a class of "informationally unencaps... (read more)

0Stuart_Armstrong
It depends on how general or narrow you make the problem. Compare: is classical decision theory the heart of the AI problem? If you interpret this broadly, then yes; but the link from, say, car navigation to classical decision theory is tenuous when you're working on the first problem. The same thing for the frame problem.
0Stuart_Armstrong
You mean the frame problem that I talked about here? http://lesswrong.com/lw/gyt/thoughts_on_the_frame_problem_and_moral_symbol/ The issue can be talked about in terms of the frame problem, but I'm not sure it's useful. In the classical frame problem, we have a much clearer idea of what we want; the problem is specifying enough so that the AI does too (ie so that the token "loaded" corresponds to the gun being loaded). This is quite closely related to symbol grounding, in a way. When dealing with moral problems, we have the problem that we haven't properly defined the terms to ourselves. Across the span of possible futures, the term "loaded gun" is likely much more sharply defined than "living human being". And if it isn't - well, then we have even more problems if all our terms start becoming slippery, even the ones with no moral connotations. But in any case, saying the problem is akin to the frame problem... still doesn't solve it, alas!
0Shmi
Note that the relevance issue has been successfully solved in any number of complex practical applications, such as self-driving vehicles, which are able to filter out gobs of irrelevant data, or the LHC code, which filters out even more. I suspect that the Framing Problem is not some general problem that needs to be resolved for AI to work, but just one of many technical issues, just as the "computer scientists" suggest. On the other hand, it is likely to be a real problem for FAI design, where relying on heuristics providing, say, six-sigma certainty just isn't good enough. I think that the framing problem is distinct from the problem of defining and calculating mostly because attempting to define liberty objectively leads us to the discussion of free will, the latter being an illusion due to the human failure to introspect deeply enough.

I didn't think we needed to put the uploaded philosopher under billions of years of evolutionary pressure. We would put your hypothetical pre-God-like AI in one bin and update it under pressure until it becomes God-like, and then we upload the philosopher separately and use them as a consultant.

(As before I think that the evolutionary landscape is unlikely to allow a smooth upward path from modern primate to God-like AI, but I'm assuming such a path exists for the sake of the argument).

1Stuart_Armstrong
And then we have to ensure the AI follows the consultant (probably doable) and define what querying process is acceptable (very hard). But your solution (which is close to Paul Christiano's) works whatever the AI is, we just need to be able to upload a human. My point, that we could conceivably create an AI without understanding any of the hard problems, still stands. If you want I can refine it: allow partial uploads: we can upload brains, but they don't function as stable humans, as we haven't mapped all the fine details we need to. However, we can use these imperfect uploads, plus a bit of evolution, to produce AIs. And here we have no understanding of how to control their motivations at all.

I think there is insufficient information to answer the question as asked.

If I offer you the choice of a box with $5 in it, or a box with $500 000 in it, and I know that you are close enough to a rational utility-maximiser that you will take the $500 000, then I know what you will choose and I have set up various facts in the world to determine your choice. Yet it does not seem on the face of it as if you are not free.

On the other hand if you are trying to decide between being a plumber or a blogger and I use superhuman AI powers to subtly intervene in you... (read more)

0Stuart_Armstrong
Can you cash out the difference between those two cases in sufficient detail that we can use it to safely define what liberty means?

If I was unclear, I was intending that remark to apply to the original hypothetical scenario where we do have a strong AI and are trying to use it to find a critical path to a highly optimal world. In the real world we obviously have no such capability. I will edit my earlier remark for clarity.

The standard LW position (which I think is probably right) is that human brains can be modelled with Turing machines, and if that is so then a Turing machine can in theory do whatever it is we do when we decide that something is liberty, or pornography.

There is a degree of fuzziness in these words to be sure, but the fact we are having this discussion at all means that we think we understand to some extent what the term means and that we value whatever it is that it refers to. Hence we must in theory be able to get a Turing machine to make the same distinction although it's of course beyond our current computer science or philosophy to do so.

If you can do that, then you can just find someone who you think understands what we mean by "liberty" (ideally someone with a reasonable familiarity with Kant, Mill, Dworkin and other relevant writers), upload their brain without understanding it, and ask the uploaded brain to judge the matter.

(Off-topic: I suspect that you cannot actually get a markedly superhuman AI that way, because the human brain could well be at or near a peak in the evolutionary landscape so that there is no evolutionary pathway from a current human brain to a vastly supe... (read more)

0Stuart_Armstrong
And would their understanding of liberty remain stable under evolutionary pressure? That seems unlikely. Have not been downvoting it.

Why? Just because the problem is less complicated, does not mean it will be solved first. A more complicated problem can be solved before a less complicated problem, especially if there is more known about it.

To clarify, it seems to me that modelling hairyfigment's ability to decide whether people have liberty is not only simpler than modelling hairyfigment's whole brain, but that it is also a subset of that problem. It does seem to me that you have to solve all subsets of Problem B before you can be said to have solved Problem B, hence you have to have... (read more)

1CCC
Hmmm. That's presumably true of hairyfigment's brain; however, simulating a copy of any human brain would also be a solution to the strong AI problem. Some human brains are flawed in important ways (consider, for example, psychopaths) - given this, it is within the realm of possibility that there exists some human who has no conception of what 'liberty' means. Simulating his brain is also a solution to the Strong AI problem, but does not require solving the liberty-assessing problem.

We have identified the point on which we differ, which is excellent progress. I used fictional worlds as examples, but would it solve the problem if I used North Korea and New Zealand as examples instead, or the world in 1814 and the world in 2014? Those worlds or nations were not created to be transparent to human examination but I believe you do have the faculty to distinguish between them.

I don't see how this is harder than getting an AI to handle any other context-dependent, natural language descriptor, like "cold" or "heavy". "... (read more)

I'll try to lay out my reasoning in clear steps, and perhaps you will be able to tell me where we differ exactly.

  1. Hairyfigment is capable of reading Orwell's 1984, and Banks' Culture novels, and identifying that the people in the hypothetical 1984 world have less liberty than the people in the hypothetical Culture world.
  2. This task does not require the full capabilities of hairyfigment's brain, in fact it requires substantially less.
  3. A program that does A+B has to be more complicated than a program that does A alone, where A and B are two different, signi
... (read more)
0hairyfigment
It's the hidden step where you move from examining two fictions, worlds created to be transparent to human examination, to assuming I have some general "liberty-distinguishing faculty".
0CCC
Incorrect. I can write a horrendously complicated program to solve 1+1; and a far simpler program to add any two integers. Admittedly, neither of those are particularly significant problems; nonetheless, unnecessary complexity can be added to any program intended to do A alone. It would be true to say that the shortest possible program capable of solving A+B must be more complex than the shortest possible program to solve A alone, though, so this minor quibble does not affect your conclusion. Granted. Why? Just because the problem is less complicated, does not mean it will be solved first. A more complicated problem can be solved before a less complicated problem, especially if there is more known about it.
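To make CCC's quibble concrete, a toy sketch (my own illustration, not from the thread): an over-engineered program that only ever solves 1 + 1, next to a far simpler program that adds any two integers.

```python
# Toy illustration (not from the original comments) of CCC's point: a program
# that solves only "A" (here, computing 1 + 1) can be made arbitrarily more
# complicated than a clean program that solves the more general problem
# (adding any two integers).

def one_plus_one():
    """Needlessly complicated: can compute 1 + 1 and nothing else."""
    total = 0
    for operand in (1, 1):
        remaining = operand
        while remaining > 0:      # "add" by counting up one unit at a time
            total += 1
            remaining -= 1
    assert total == 2             # hard-coded expectation; no other input is possible
    return total

def add(a, b):
    """Far simpler, yet solves the more general problem."""
    return a + b

print(one_plus_one())  # 2
print(add(17, 25))     # 42
```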

I really am. I think a human brain could rule out superficially attractive dystopias and also do many, many other things as well. If you think you personally could distinguish between a utopia and a superficially attractive dystopia given enough relevant information (and logically you must think so, because you are using them as different terms) then it must be the case that a subset of your brain can perform that task, because it doesn't take the full capabilities of your brain to carry out that operation.

I think this subtopic is unproductive however, for... (read more)

-1hairyfigment
No, this seems trivially false. No subset of my brain can reliably tell when an arbitrary Turing machine halts and when it doesn't, no matter how meaningful I consider the distinction to be. I don't know why you would say this.

I could be wrong but I believe that this argument relies on an inconsistent assumption, where we assume we have solved the problem of creating an infinitely powerful AI, but we have not solved the problem of operationally defining commonplace English words which hundreds of millions of people successfully understand in such a way that a computer can perform operations using them.

It seems to me that the strong AI problem is many orders of magnitude more difficult than the problem of rigorously defining terms like "liberty". I imagine that a relat... (read more)

0[anonymous]
My mind is throwing a type-error on reading your comment. Liberty could well be like pornography: we know it when we see it, based on probabilistic classification. There might not actually be a formal definition of liberty that includes all actual humans' conceptions of such as special cases, but instead a broad range of classifier parameters defining the variation in where real human beings "draw the line".
0Stuart_Armstrong
Yes. Here's another brute force approach: upload a brain (without understanding it), run it very fast with simulated external memory, subject it to evolutionary pressure. All this can be done with little philosophical and conceptual understanding, and certainly without any understanding of something as complex as liberty.
0hairyfigment
While I don't know how much I believe the OP, remember that "liberty" is a hotly contested term. And that's without a superintelligence trying to create confusing cases. Are you really arguing that "a relatively small part of the processing power of one human brain" suffices to answer all questions that might arise in the future, well enough to rule out any superficially attractive dystopia?

The strong AI problem is much easier to solve than the problem of motivating an AI to respect liberty. For instance, the first one can be brute forced (eg AIXItl with vast resources), the second one can't.

I don't believe that strong AI is going to be as simple to brute force as a lot of LessWrongers believe, personally, but if you can brute force strong AI then you can just get it to run a neuron-by-neuron simulation of the brain of a reasonably intelligent first year philosophy student who understands the concept of liberty and tell the AI not to take... (read more)

-2[anonymous]
And therein lies the rub. Current research-grade AGI formalisms don't actually allow us to specifically program the agent for anything, not even paperclips.
0Stuart_Armstrong
How? "tell", "the simulated brain thinks" "offend": defining those incredibly complicated concepts contains nearly the entirety of the problem.
8Nornagest
I've met far too many first-year philosophy students to be comfortable with this program.

I think Asimov did this first with his Multivac stories, although rather than promptly destroy itself Multivac executed a long-term plan to phase itself out.

Precisely and exactly! That's the whole of the problem - optimising for one thing (appearance) results in the loss of other things we value.

This just isn't always so. If you instruct an AI to optimise a car for speed, efficiency and durability but forget to specify that it has to be aerodynamic, you aren't going to get a car shaped like a brick. You can't optimise for speed and efficiency without optimising for aerodynamics too. In the same way it seems highly unlikely to me that you could optimise a society for freedom, education, just distribution of ... (read more)

0Strange7
Unless you start by removing the air, in some way that doesn't count against the car's efficiency.
1Stuart_Armstrong
The strong AI problem is much easier to solve than the problem of motivating an AI to respect liberty. For instance, the first one can be brute forced (eg AIXItl with vast resources), the second one can't. Having the AI understand human concepts of liberty is pointless unless it's motivated to act on that understanding. An excess of anthropomorphisation is bad, but an analogy could be about creating new life (which humans can do) and motivating that new life to follow specific rules and requirements if they become powerful (which humans are pretty bad at).

I think this and the "finite resources therefore tradeoffs" argument both fail to take seriously the interconnectedness of the optimisation axes which we as humans care about.

They assume that every possible aspect of society is an independent slider which a sufficiently advanced AI can position at will, even though this society is still going to be made up of humans, will have to be brought about by or with the cooperation of humans and will take time to bring about. These all place constraints on what is possible because the laws of physics and... (read more)

0Stuart_Armstrong
Precisely and exactly! That's the whole of the problem - optimising for one thing (appearance) results in the loss of other things we value. Next challenge: define liberty in code. This seems extraordinarily difficult. So we do agree that there are problems with an all-powerful genie? Once we've agreed on that, we can scale back to lower AI power, and see how the problems change. (the risk is not so much that the AI would be an all-powerful genie, but that it could be an all-powerful genie compared with humans).

I don't think you have highlighted a fundamental problem since we can just specify that we mean a low percentage of conceptions being deliberately aborted in liberal societies where birth control and abortion are freely available to all at will.

My point, though, is that I don't think it is very plausible that "marketing worlds" will organically arise where there are no humans, or no conception, but which tick all the other boxes we might think to specify in our attempts to describe an ideal world. I don't see how there being no conception or no h... (read more)

1Stuart_Armstrong
The "no conception" example is just to illustrate that bad things happen when you ask an AI to optimise along a certain axis without fully specifying what we want (which is hard/impossible). A marketing world is fully optimised along the "convince us to choose this world" axis. If at any point, the AI in confronted with a choice along the lines of "remove genuine liberty to best give the appearance of liberty/happiness", it will choose to do so. That's actually the most likely way a marketing world could go wrong - the more control the AI has over people's appearance and behaviour, the more capable it is of making the world look good. So I feel we should presume that discrete-but-total AI control over the world's "inhabitants" would be the default in a marketing world.

It's a proposition with a truth value in a sense, but if we are disagreeing about the topic then it seems most likely that the term "one of the world's foremost intellectuals" is ambiguous enough that elucidating what we mean by the term is necessary before we can worry about the truth value.

Obviously I think that the truth value is false, and so obviously so that it needs little further argument to establish the implied claim that it is rational to think that calling Eliezer "one of the world's foremost intellectuals" is cult-like and... (read more)

In a world where Eliezer is by objective standards X, then in that world it is correct to say he is X, for any X. That X could be "one of the world's foremost intellectuals" or "a moose" and the argument still stands.

To establish whether it is objectively true that "his basic world view is fundamentally correct in important ways where the mainstream of intellectuals are wrong" would be beyond the scope of the thread, I think, but I think the mainstream has good grounds to question both those sub-claims. Worrying about steep-cu... (read more)

-2Anders_H
My point is that the statement "Eliezer is one of the world's foremost intellectuals" is a proposition with a truth value. We should argue about the truth value of that proposition, not about how our beliefs might affect our status in the eyes of another rationalist group, particularly if that "rationalist" group assigns status based on obvious fallacies. I assign a high prior belief to the statement. If I didn't, I wouldn't waste my time on Less Wrong. I believe this is also true for many of the other participants, who just don't want to say it out loud. You can argue that we should try to hide our true beliefs in order to avoid signaling low status, but given how seriously we take this website, it would be very difficult to send a credible signal. To most intelligent observers, it would be obvious that we are sending a false signal for status reason, which is inconsistent with our own basic standards for discussion

It seems based on your later comments that the premise of marketing worlds existing relies on there being trade-offs between our specified wants and our unspecified wants, so that the world optimised for our specified wants must necessarily be highly likely to be lacking in our unspecified ones ("A world with maximal bananas will likely have no apples at all").

I don't think this is necessarily the case. If I only specify that I want low rates of abortion, for example, then I think it highly likely that 'd get a world that also has low rates of ST... (read more)

0Stuart_Armstrong
Yes, certainly. That's a problem of optimisation with finite resources. If A is a specified want and B is an unspecified want, then we shouldn't confuse "there are worlds with high A and also high B" with "the world with the highest A will also have high B".
0Stuart_Armstrong
You would get a world with no conception, or possibly with no humans at all.

Calling Eliezer Yudkowsky one of the world's foremost intellects is the kind of cult-like behaviour that gives LW a bad reputation in some rationalist circles. He's one of the foremost Harry Potter fanfiction authors and a prolific blogger, who has also authored a very few minor papers. He's a smart guy but there are a lot of smart guys in the world.

He articulates very important ideas, but so do very many teachers of economics, ethics, philosophy and so on. That does not make them very important people (although the halo effect makes some students think so).

(Edited to spell Eliezer's name correctly, with thanks for the correction).

-2Anders_H
Consider a hypothetical world in which Eliezer Yudkowsky actually is, by objective standards, one of the world's foremost intellects. In such a hypothetical world, would it be "cult-like" behavior to make this claim? And again, in this hypothetical world, do you care about having a bad reputation in alleged "rationalist circles" that do not believe in the objective truth? The argument seems to be that some "rationalist circles" are so deeply affected by the non-central fallacy (excessive attention to one individual --> cult, cult--> kool aid) , that in order to avoid alienating them, we should refrain from saying certain things out loud. I will say this for the record: Eliezer Yudkowsky is sometimes wrong. I often disagree with him. But his basic world view is fundamentally correct in important ways where the mainstream of intellectuals are wrong. Eliezer has started a discussion that is at the cutting edge of current intellectual discourse. That makes him one of the world's foremost intellectuals.
0gjm
I agree with what you said, but I think you should do him the courtesy of spelling his name correctly. (Yudkowsky.)

"Cult" might not be a very useful term given the existing LW knowledge base, but it's a very useful term. I personally recommend Steve Hassan's book "Combating Cult Mind Control" as an excellent introduction to how some of the nastiest memetic viruses propagate and what little we can do about them.

He lists a lengthy set of characteristics which cults tend to have in common which go beyond the mind-controlling tactics of mainstream religions. My fuzzy recollection is that est/Landmark was considered a cult by the people who make it their... (read more)

2fubarobfusco
I hear some ambiguity there on the word "attempt". In the first case you're talking about the stated motives of the founders and high-status members, whereas in the second case you're talking about a behavior that arises from the social relations in a group. A group can become a cult even if its founders and leaders don't try to be a cult; cultishness is a mode of group behavior. I'd also caution that "the people who make it their area of interest to keep track of currently active cults" may have some difficulties as well — some are missionaries from larger cults (e.g. conservative Protestantism), for instance ....

A possible interpretation is that the "strength" of a belief reflects the importance one attaches to acting upon that belief. Two people might both believe with 99% confidence that a new nuclear power plant is a bad idea, yet one of the two might go to a protest about the power plant and the other might not, and you might try to express what is going on there by saying that one holds that belief strongly and the other weakly.

You could of course also try to express it in terms of the two people's confidence in related propositions like "prote... (read more)

-2Eugine_Nier
They might also differ in just how bad an idea they think it is.

It seems from my perspective that we are talking past each other and that your responses are no longer tracking the original point. I don't personally think that deserves upvotes, but others obviously differ.

Your original claim was that:

Said literature gives advice, reasoning and conclusions that is epistemically, instrumentally and normatively bad.

Now given that game theory is not making any normative claims, it can't be saying things which are normatively bad. Similarly since game theory does not say that you should either go out and act like a game... (read more)

-1wedrifid
Not true. The word 'connotations' comes to mind. As does "reframing to the extent of outright redefining a critical keyword". That is not a normatively neutral act. It is legitimate for me to judge it and I choose to do so---negatively.
8wedrifid
No, it just isn't. Game theory is completely agnostic about what the preferences of the players are based on. Game theory takes a payoff matrix and calculates things like Nash Equilibrium and Dominant Strategies. The verbal description of why the payoff matrix happens to be as it is is fluff. As soon as you allow altruistic interests the game ceases to be a Prisoner's Dilemma. The dilemma (and game theory in general) relies on the players being perfectly selfish in the sense that they ruthlessly maximise their own payoffs as they are defined, not in the sense that those payoffs must never refer to aspects of the universe that happen to include the physical state of the other agents. Consider the Codependent Prisoner's Dilemma. Romeo and Juliet have been captured and the guards are trying to extort confessions out of them. However Romeo and Juliet are both lovesick and infatuated and care only about what happens to their lover, not what happens to themselves. Naturally the guards offer Romeo the deal "If you confess we'll let Juliet go and you'll get 10 years but if you don't confess you'll both get 1 year" (and vice versa, with a both confess clause in there somewhere). Game theory is perfectly equipped at handling this game. In fact, so much so that it wouldn't even bother calling it a new name. It's just a Prisoner's Dilemma and the fact that the conflict of interests between Romeo and Juliet happens to be based on codependent altruism rather than narcissism is outside the scope of what game theorists care about.
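For concreteness, a minimal sketch (my own illustration; the sentence lengths of 1, 5 and 10 years are assumed) of the point that once the lovers' utilities are written into the payoff matrix, the Codependent Prisoner's Dilemma is analysed exactly like the ordinary one: confessing strictly dominates for both players and mutual confession is the unique Nash equilibrium.

```python
# Sketch with illustrative numbers: each lover's utility is minus the number of
# years the *other* lover serves. Guards' deal: if you confess (and your lover
# doesn't), your lover goes free and you serve 10 years; if neither confesses,
# both serve 1 year; if both confess, both serve 5 years.

MOVES = ("silent", "confess")

# payoffs[(romeo_move, juliet_move)] = (utility_to_romeo, utility_to_juliet)
payoffs = {
    ("silent", "silent"):   (-1, -1),   # both serve 1 year
    ("silent", "confess"):  (-10, 0),   # Juliet serves 10 (Romeo's utility -10); Romeo walks free (Juliet's utility 0)
    ("confess", "silent"):  (0, -10),   # Romeo serves 10 (Juliet's utility -10); Juliet walks free (Romeo's utility 0)
    ("confess", "confess"): (-5, -5),   # both serve 5 years
}

def best_responses(player):
    """Best response for `player` (0 = Romeo, 1 = Juliet) to each opposing move."""
    replies = {}
    for other in MOVES:
        def utility(mine):
            profile = (mine, other) if player == 0 else (other, mine)
            return payoffs[profile][player]
        replies[other] = max(MOVES, key=utility)
    return replies

print(best_responses(0))  # {'silent': 'confess', 'confess': 'confess'} -- confessing dominates for Romeo
print(best_responses(1))  # same for Juliet

# Pure-strategy Nash equilibria: each move is a best response to the other.
nash = [(r, j) for r in MOVES for j in MOVES
        if payoffs[(r, j)][0] == max(payoffs[(rr, j)][0] for rr in MOVES)
        and payoffs[(r, j)][1] == max(payoffs[(r, jj)][1] for jj in MOVES)]
print(nash)               # [('confess', 'confess')] -- the standard Prisoner's Dilemma outcome
```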

I would be interested in reading about the bases for your disagreement. Game theory is essentially the exploration of what happens if you postulate entities who are perfectly informed, personal utility-maximisers who do not care at all either way about other entities. There's no explicit or implicit claim that people ought to behave like those entities, thus no normative content whatsoever. So I can't see how the game theory literature could be said to give normatively bad advice, unless the speaker misunderstood the definition of rationality being used, a... (read more)

0wedrifid
Under this definition you can't claim epistemic accuracy either. In particular the 'perfectly informed' assumption when combined with the personal utility maximization leads to different behaviors to those described as 'rational'. (It needs to be weakened to "perfectly informed about everything except those parts of the universe that are the other agent".) This isn't about the agents having selfish desires (in fact, they don't even have to "not care at all about other entities"---altruism determines what the utility function is, not how to maximise it.) No, this is about shoddy claims about decision theory that are either connotatively misleading or erroneous depending on how they are framed. All those poor paperclip maximisers who read such sources and take them at face value will end up producing fewer paperclips than they could have if they knew the correct way to interact with the staples maximisers in contrived scenarios.

Said literature gives advice, reasoning and conclusions that is epistemically, instrumentally and normatively bad.

Said literature makes statements about what is game-theory-rational. Those statements are only epistemically, instrumentally or normatively bad if you take them to be statements about what is LW-rational or "rational" in the layperson's sense.

Ideally we'd use different terms for game-theory-rational and LW-rational, but in the meantime we just need to keep the distinction clear in our heads so that we don't accidentally equivocate between the two.

0wedrifid
Disagree on instrumentally and normatively. Agree regarding epistemically---at least when the works are careful with what claims are made. Also disagree with the "game-theory-rational", although I understand the principle you are trying to get at. A more limited claim needs to be made or more precise terminology.

The effort required may be much larger than you think. Eliezer finds it very difficult to do that kind of work, for example. (Which is why his papers still read like long blog posts, and include very few citations. CEV even contains zero citations, despite re-treading ground that has been discussed by philosophers for centuries, as "The Singularity and Machine Ethics" shows.)

If this is the case, then a significant benefit to Eliezer of trying to get papers published would be that it would be excellent discipline for Eliezer, and would make him... (read more)

0wedrifid
Luke (and his remote research assistants) have this angle covered.

I think you're probably right in general, but I wouldn't discount the possibility that, for example, a rumour could get around the ALS community that lithium was bad, and be believed by enough people for the lack of blinding to have an effect. There was plenty of paranoia in the gay community about AZT, for example, despite the fact that they had a real and life-threatening disease, so it just doesn't always follow that people with real and life-threatening diseases are universally reliable as personal judges of effective interventions.

Similarly if the wi-... (read more)

What is your evidence for the claim that the main thing powering the superior statistical strength of PatientsLikeMe is the fact that medical researchers have learned to game the system and use complicated ad-hoc frequentist statistics to get whatever answer they want or think they ought to get? What observations have you made that are more likely to be true given that hypothesis?

No. Lack of double-blinding will increase the false negative rate too, if the patients, doctors or examiners think that something shouldn't work or should be actively harmful. If you test a bunch of people who believe that aspartame gives them headaches or that wifi gives them nausea without blinding them you'll get garbage out as surely as if you test homeopathic remedies unblinded on a bunch of people who think homeopathic remedies cure all ills.

In this particular case I think it's likely the system worked because it's relatively hard to kid yourself abo... (read more)

2gwern
Fair enough. I don't think the biases are symmetrical though: these people have a real and life-threatening disease, so they approach any intervention hoping strongly that it will work; hence we should expect them to yield more false positives than false negatives compared to whatever an equal medical trial would yield. On the other hand, when we're looking at the chatrooms of hypochondriacs & aspartame sufferers, I think we can expect the bias to be reversed: if even crazy people find nothing to take offense to in something, that something may well be harmless. This yields the useful advice that when looking at any results, we should look at whether the participants have an objectively (or at least, third-party) validated problem. If they do, we should pay attention to their nulls but less attention to their claims about what helps. And vice versa. (Can we then apply this to self-experimentation? I think so, but there we already have selection bias telling us to pay little attention to exciting news like 'morning faces help my bipolar', and more attention to boring nulls like 'this did nothing for me'.) Kind of a moot point I guess, because the fakes do not seem to be well-organized at all.

If this is a problem for Rawls, then Bentham has exactly the same problem given that you can hypothesise the existence of a gizmo that creates 3^^^3 units of positive utility which is hidden in a different part of the multiverse. Or for that matter a gizmo which will inflict 3^^^3 dust specks on the eyes of the multiverse if we don't find it and stop it. Tell me that you think that's an unlikely hypothesis and I'll just raise the relevant utility or disutility to the power of 3^^^3 again as often as it takes to overcome the degree of improbability you plac... (read more)

It already is in Bayesian language, really, but to make it more explicit you could rephrase it as "Unless P(B|A) is 1, there's always some possibility that hypothesis A is true but you don't get to see observation B."
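Spelled out a little more formally (my notation; A is the hypothesis, B the observation, and we assume P(A) > 0):

```latex
% A is the hypothesis, B the observation; assumes P(A) > 0 and P(B|A) < 1.
\begin{align*}
P(A \wedge \neg B) &= \bigl(1 - P(B \mid A)\bigr)\,P(A) \;>\; 0
  && \text{whenever } P(B \mid A) < 1, \\
P(A \mid \neg B) &= \frac{\bigl(1 - P(B \mid A)\bigr)\,P(A)}{P(\neg B)} \;>\; 0
  && \text{(Bayes: not seeing $B$ lowers $P(A)$ but cannot drive it to zero).}
\end{align*}
```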

This "moral dilemma" only has force if you accept strict Bentham-style utilitarianism, which treats all benefits and harms as vectors on a one-dimensional line, and cares about nothing except the net total of benefits and harms. That was the state of the art of moral philosophy in the year 1800, but it's 2012 now.

There are published moral philosophies which handle the speck/torture scenario without undue problems. For example if you accepted Rawls-style, risk-averse choice from a position where you are unaware whether you will be one of the speck... (read more)

6steven0461
Rawls's Wager: the least well-off person lives in a different part of the multiverse than we do, so we should spend all our resources researching trans-multiverse travel in a hopeless attempt to rescue that person. Nobody else matters anyway.

If you have a result with a p value of p<0.05, the universe could be kidding you up to 5% of the time. You can reduce the probability that the universe is kidding you with bigger samples, but you never get it to 0%.

0RobinZ
How would you rephrase that using Bayesian language, I wonder?

We probably shouldn't leap to the assumption that Transfiguration Weekly is a peer-reviewed journal with a large staff publishing results from multiple large laboratories. For all we know it's churned out in a basement by an amateur enthusiast, is only eight pages long on a good week and mostly consists of photographs of people's cats transfigured into household objects.

In the real world, Eliezer's example simply doesn't work.

In the real world you only hear about the results when they are published. The prior probability of the biased researcher publishing a positive result is higher than the prior probability of the unbiased researcher publishing a positive result.

The example only works if you are an omniscient spy who spies on absolutely all treatments. It's true that an omniscient spy should just collate all the data regardless of the motivations of the researcher spied upon. However unless you are an omniscient spy yo... (read more)

That seems to be a poorly-chosen prior.

An obvious improvement would be to instead use "non-rationalists are dedicated to achieving a goal through training and practice, and find a system for doing so which is significantly superior to alternative, existing systems".

It is no great praise of an exercise regime, for example, to say that those who follow it get fitter. The interesting question is whether that particular regime is better or worse than alternative exercise regimes.

However the problem with that question is that there are multiple compet... (read more)

1taelor
I actually agree mainly with you, but am downvoting both sides on the principle that I'm tired of listening to people argue back and forth about PUAs/Seduction communities.

On lesswrong insisting a claim is unfalsifiable while simultaneously explaining how that claim can be falsified is more than sufficient cause to downvote.

That's rather sad, if the community here thinks that the word "unfalsifiable" only refers to beliefs which are unfalsifiable in principle from the perspective of a competent rationalist, and that the word is not also used to refer to belief systems held by irrational people which are unfalsifiable from the insider/irrational perspective.

The fundamental epistemological sin is the same in each ... (read more)

-1wedrifid
Then you should indeed be sad. An unfalsifiable claim is a claim that can not be falsified. Not only is it right there in the word it is a basic scientific principle. The people who present a claim happening to be irrational would be a separate issue. Just say that the seduction community is universally or overwhelmingly irrational when it comes to handling counterevidence to their claims - and we can merrily disagree about the state of the universe. But unfalsifiable things can't be falsified.
-1wedrifid
I would update only slightly from the prior for "non-rationalists are dedicated to achieving a goal through training and practice". EDIT: In case the meaning isn't clear - this translates to "They're probably about the same as most folks are when they do stuff. Haven't seen much to think they are better or worse."

It is a dramatically different thing to say "people who are in the seduction community are the kind of people who would make up excuses if their claims were falsified" than to say "the beliefs of those in the seduction community are unfalsifiable". While I may disagree mildly with the former claim, the latter I object to as an absurd straw man.

I'm content to use the term "unfalsifiable" to refer to the beliefs of homeopaths, for example, even though by conventional scientific standards their beliefs are both falsifiable and fa... (read more)

-1wedrifid
On lesswrong insisting a claim is unfalsifiable while simultaneously explaining how that claim can be falsified is more than sufficient cause to downvote. This is false even if - and especially obviously when - that claim is false. Further, in general downvotes of comments by the PhilosophyTutor account - at least those by myself - have usually been for the consistent use of straw men and the insulting misrepresentation of a group of people you are opposed to. Declaring downvotes of one's own comments to be evidence in favor of one's position is seldom a useful approach. They should not be persuasive and are not intended as such. Instead, in this case, it was an explicit rejection of the "My side is the default position and the burden of proof is on the other!" debating tactic. The subject of how to think correctly (vs debate effectively) is one of greater interest to me than seduction. I also reject the tactic used in the immediate parent. It seems to be of the form "You are trying to refute my arguments. You are being defensive. That means you must be wrong. I am right!". It is a tactic which, rather conveniently, becomes more effective the worse your arguments are!

Are you familiar with the technical meaning of 'unfalsifiable'? It does not mean 'have not done scientific tests'. It means 'cannot do scientific tests even in principle'. I would like it if scientists did do more study of this subject but that is not relevant to whether claims are falsifiable.

In the case of Sagan's Dragon, the dragon is unfalsifiable because there is always a way for the believer to explain away every possible experimental result.

My view is that the mythology of the seduction community functions similarly. You can't attack their theori... (read more)

2wedrifid
It is a dramatically different thing to say "people who are in the seduction community are the kind of people who would make up excuses if their claims were falsified" than to say "the beliefs of those in the seduction community are unfalsifiable". While I may disagree mildly with the former claim, the latter I object to as an absurd straw man. I don't accept the role of a skeptic. I take the role of someone who wishes to have correct beliefs, within the scope of rather dire human limitations. That means I must either look for and process the evidence to whatever extent possible or, if a field is considered of insufficient expected value, remain in a state of significant uncertainty to the extent determined by information I have picked up in passing. I reject the skeptic role of thrusting the burden of proof around, implying "You've got to prove it to me or it ain't so!" That's just the opposite stupidity to that of a true believer. It is a higher status role within intellectual communities but it is by no means rational. No, it's their job to go ahead and get laid and have fulfilling relationships. It is no skin off their nose if you don't agree with them. In fact, the more people who don't believe them the less competition they have. Unless they are teachers, people are not responsible for forcing correct epistemic states upon others. They are responsible for their beliefs, you are responsible for yours.

This is an absurd claim. Most of the claims can be presented in the form "If I do X I can expect to on average achieve a better outcome with women than if I do Y". Such claims are falsifiable. Some of them are even actually falsified. They call it "Field Testing".

If they conducted tests of X versus Y with large sample sizes and with blinded observers scoring the tests then they might have a basis to say "I know that if I do X I can expect to on average achieve a better outcome with women than if I do Y". They don't do such... (read more)

-2wedrifid
Your claim was: Are you familiar with the technical meaning of 'unfalsifiable'? It does not mean 'have not done scientific tests'. It means 'cannot do scientific tests even in principle'. I would like it if scientists did do more study of this subject but that is not relevant to whether claims are falsifiable. I'd be surprised. I've never heard such a reply, certainly not in response to subject matter which many wouldn't understand (unfalsifiability). I used that term 'shaming' because the inferred motive (and, regardless of motive, one of the practical social meanings) of falsely accusing the enemy of behavior that looks pathetic is to provide some small degree of humiliation. This can, the motive implicitly hopes, make people ashamed of doing the behaviors that have been misrepresented. I am happy to concede that this point is more distracting than useful. I would have been best served to stick purely to the (more conventional expression of) "NOT UNFALSIFIABLE! LIES!" I assert that the "act like JWs" approach is not taken by the seduction community in general either. For the most part they do present evidence. That evidence is seldom of the standard accepted in science except when they are presenting claims that are taken from scientific findings - usually popularizations thereof, Cialdini references abound. I again agree that the seduction community could use more scientific rigor. Shame on science for not engaging in (much) research in what is a rather important area! Yes, I agree that you didn't get into ethics and that your claim was epistemological in nature. I do believe that the act of making epistemological claims is not always neutral with respect to other kinds of implication. As another tangential aside I note that if an exemplar of the seduction community were to be said to be sensitive to public opinion he would be far more sensitive to things that make him look pathetic than things that make him look unethical!

I would say that it is largely the ostensible basis of the seduction community.

As you can see if you read this subthread, they've got a mythology going on that renders most of their claims unfalsifiable. If their theories are unsupported it doesn't matter, because they can disclaim the theories as just being a psychological trick to get you to take "correct" actions. However they've got no rigorous evidence that their "correct" actions actually lead to any more mating success than spending an equivalent amount of time on personal groomi... (read more)

1wedrifid
This is an absurd claim. Most of the claims can be presented in the form "If I do X I can expect to on average achieve a better outcome with women than if I do Y". Such claims are falsifiable. Some of them are even actually falsified. They call it "Field Testing". Your depiction of the seduction community is a ridiculous straw man and could legitimately be labelled offensive by members of the community that you are so set on disparaging. Mind you they probably wouldn't bother doing so: The usual recommended way to handle such shaming attempts is to completely ignore them and proceed to go get laid anyway.

That is indeed a valid argument-form, in basic classical logic. To illustrate this we can just change the labels to ones less likely to cause confusion:

  1. Person X is a Foffler with respect to Y.
  2. Things said about Y by persons who are Fofflers with respect to Y are Snarfly.
  3. Person X said Z about Y.
  4. Z is Snarfly.

The problem arises when instead of sticking a label on the set like "Snarfly" or "bulbous" or whatever you use a label such as "likely to be correct", and people start trying to pull meaning out of that label and apply... (read more)
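Rendered in predicate-logic notation (my own formalisation, with placeholder predicates F for "is a Foffler with respect to", S for "said ... about", and Sn for "is Snarfly"):

```latex
% Premises and conclusion match items 1-4 above; X, Y, Z are constants.
\begin{align*}
&1.\; F(X, Y) \\
&2.\; \forall x\,\forall z\,\bigl(F(x, Y) \wedge S(x, z, Y) \rightarrow Sn(z)\bigr) \\
&3.\; S(X, Z, Y) \\
&4.\; Sn(Z) \quad \text{(from 2 by universal instantiation, then modus ponens with 1 and 3)}
\end{align*}
```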

Here's a link:

http://rationalwiki.org/wiki/Astrology

In brief, there is no evidence from properly conducted trials that astrology can predict future events at a rate better than chance. In addition physics as we currently understand it precludes any possible effect on us from objects so far away.

Astrology can appear to work through a variety of cognitive biases or can be made to appear to work through various forms of trickery. For example when someone is majorly freaked out by the accuracy of a guess (and with a large enough population reading a guess it's... (read more)

1Incorrect
So our world would look exactly the same without astronomy? (I'm kidding of course but that statement should require further qualification)

It's sociopaths who should all be killed or otherwise removed from society.

Lots of sociopaths as the term is clinically defined live perfectly productive lives, often in high-stimulation, high-risk jobs that neurotypical people don't want to do like small aircraft piloting, serving in the special forces of their local military and so on. They don't learn well from bad experiences and they need a lot of stimulation to get a high, so those sorts of roles are ideal for them.

They don't need to be killed or removed from society, they need to be channelled into jobs where they can have fun and where their psychological resilience is an asset.

5AspiringKnitter
Huh, okay. Thanks.

If there was a real guy called Jesus of Nazareth around the early 1st century, who was crucified under Pontius Pilate, and whose disciples and followers formed the core of the religious movement later called Christianity, to argue that Jesus was nonetheless "completely fictional" becomes a mere twisting of words that miscommunicates its intent.

Isn't that just what I said? I contrasted such a Jesus-figure with one who did not do those things, and said that the Jesus-figure you describe would count as a historical Jesus and one that did not ... (read more)

-1ArisKatsaris
I don't understand. My version just has four elements: being an itinerant preacher, being called "Jesus of Nazareth", being crucified by the Romans, and having his followers begin the Christian movement. You already conceded there were many itinerant preachers, so that's nothing special that we'd expect documentary evidence about for any specific one of them. You already conceded that the name "Jesus" was commonplace, so there's nothing special about that either. We know as a matter of historical fact that the Christian movement thought of themselves as followers of Jesus of Nazareth. That's indisputable. So the only thing that's so extraordinary that you expect "documentary evidence" for you to believe it happened, was that there was a crucifixion of this person? You don't believe crucifixions happened in Judaea, is that it? What exactly is this extraordinary hypothesis that you disbelieve in without the presence of documentary evidence? And again you can't explain why those elements were inserted. You just don't have an explanation for them if they were fictional, you just call it a mistake on the part of the unknown authors and move on. Cult leaders don't make up stories about fictional people with their own divine missions, they make up stories about their own visions, their own supposed divine missions. Show me a cult leader that ever invented other fictional people to be the messiahs, instead of themselves. You aren't addressing any of my points, you have just written your bottom line. That's very simple. Besides all the arguments I've already given you about none of the story making any sense as fictional, and going against everything we know about how religious groups write their stories, there's the plain fact that when asking if a person that's supposed to have lived existed for real or not, I give significant weight to the beliefs on the subject of the people that lived in his/her time, or as near it as we can get. I haven't seen "documentary e