All of irrational's Comments + Replies

Maybe I am amoral, but I don't value myself the same as a random person even in a theoretical sense. What I do is recognize that in some sense I am no more valuable to humanity than any other person. But I am way more valuable to me: if I die, that brings my utility to 0, and in some circumstances it can even be negative (i.e. a life not worth living). Some random person's death clearly cannot do that - people are dying in huge numbers all the time, and while the cost of each death to me is non-zero, it must be relatively small, or else I would easily be in negative territory, and I am not.

That's interesting, but how much money is needed to solve "most of the world's current problems"?

To forestall an objection: I think investing with a goal of improving the world, as opposed to maximizing income, is basically the same as giving, so it falls under the category of how to spend, not how much money to allocate. If you were investing rather than giving, and had income from it, you'd simply allocate it back into the same category.

That's a very useful point. I do have employer match and it is likely to be an inflection point for effectiveness of any money I give.

I apologize for being unclear in my description. At the moment, after all my bills I have money left over. This implicitly goes toward retirement. So it wouldn't be slighting my family to give some more to charity. I also have enough saved to semi-retire today (e.g. if I chose to move to a cheap area I could live like a lower-middle class person on my savings alone), and my regular 401K contributions (assuming I don't retire) would mean that I'll have plenty of income if I retire at 65 or so.

I was hoping that answering "How did you decide how much of your income to give to charity?" is obviously one way of answering my original question, and so some people would answer that. But you may be right that it's too ambiguous.

I don't mean that I have one that's superior to anyone else's, but there are tools to deal with this problem, various numbers that indicate risk, waste level, impact, etc. I can also decide what areas to give in based on personal preferences/biases.

This thread is interesting, but off-topic. There is lots of useful discussion on the most effective ways to give, but that wasn't my question.

I see what you mean now, I think. I don't have a good model of dealing with a situation where someone can influence the actual updating process either. I was always thinking of a setup where the sorcerer affects something other than this.

By the way, I remember reading a book which had a game-theoretical analysis of games where one side had god-like powers (omniscience, etc), but I don't remember what it was called. Does anyone reading this by any chance know which book I mean?

0gjm
You might be thinking of Superior Beings by Steven Brams. (My favourite result of this kind is that if you play Chicken with God, then God loses.)

For this experiment, I don't want to get involved in the social aspect of this. Suppose they aren't aware of each other, or it's very impolite to talk about sorcerers, or whatever. I am curious about their individual minds, and about an outside observer that can observe both (i.e. me).

How about, if Bob has a sort of "sorcerous experience" which is kind of like an epiphany. I don't want to go off to Zombie-land with this, but let's say it could be caused by his brain doing its mysterious thing, or by a sorcerer. Does that still count as "moving things around in the world"?

3cousin_it
Well, it seems possible to set up an equivalent game (with the same probabilities etc) where the sorcerer is affecting a card deck that's shown to you. Maybe I should have drawn the distinction differently. If the sorcerer can only affect your experiences, that's basically the same as affecting a card deck. But if the sorcerer can affect the way you process these experiences, e.g. force you to not do a Bayesian update where you normally would, or reach into your mind and make you think you had a different prior all along, that's different because it makes you an imperfect reasoner. We know how to answer questions like "what should a perfect reasoner do?" but we don't know much about "what should such-and-such imperfect reasoner do?"

I am not certain that it's the same A. Suppose I say to you: here's a book that proves that P=NP. You go and read it, and it's full of Math, and you can't fully process it. Later, you come back and read it again, and this time you are actually able to fully comprehend it. Even later you come back again, and not only comprehend it, but are able to prove some new facts, using no external sources, just your mind. Those are not all the same "A". So, you may have some evidence for/against a sorcerer, but are not able to accurately estimate the probability. After some reflection, you derive new facts, and then update again. Upon further reflection, you derive more facts, and update. Why should this process stop?

-1MrMind
I think we are talking about different things. I proved only that Bob cannot update his belief in Bright on the sole evidence "Bob believes in Bright". This is a perfectly defined cognitive state, totally accessible to Bob, and unique. Therefore Bob cannot update on it. On the other hand, if from a belief Bob gathers new evidence, then this is clearly another cognitive state, well different from the previous, and so there's no trouble in assigning different probabilities (provided that "Bob believes in Bright" doesn't mean that he assigns to Bright probability 1).

It's not that different from saying "I believe it will rain tomorrow, and the fact that I believe that is evidence that it will rain tomorrow, so I'll increase my degree of belief. But wait, that makes the evidence even stronger!".

This is completely different. My belief about the rain tomorrow is in no way evidence for actual rain tomorrow, as you point out - it's already factored in. Tomorrow's rain is in no way able to affect my beliefs, whereas a sorcerer can, even without mind tampering. He can, for instance, manufacture evidence so as to mi... (read more)

0Emile
Seems you can calculate P(evidence | Dark) by taking Dark's tampering into account (basically he'll try to get that value as close as possible to P(evidence | no Dark) ), and update based on that. Your belief may not be reliable in that you may still be wrong, but it still already takes all the information you have (i.e. P(evidence | Dark) ) into account.
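
A minimal numeric sketch of that point (the numbers are made up, purely to show the mechanics): if Dark tampers so that P(evidence | Dark) is driven toward P(evidence | no Dark), Bayes' theorem leaves the posterior for Dark close to the prior, which is the sense in which the update already takes the tampering into account.

```python
# Toy illustration (hypothetical numbers): an adversary who controls the
# evidence channel can push P(evidence | Dark) toward P(evidence | no Dark),
# which makes the evidence nearly useless for detecting him.

def posterior_dark(prior_dark, p_e_given_dark, p_e_given_no_dark):
    """P(Dark | evidence) by Bayes' theorem."""
    joint_dark = prior_dark * p_e_given_dark
    joint_no_dark = (1 - prior_dark) * p_e_given_no_dark
    return joint_dark / (joint_dark + joint_no_dark)

prior = 0.1
# Untampered deck: the evidence is fairly diagnostic of Dark.
print(posterior_dark(prior, p_e_given_dark=0.9, p_e_given_no_dark=0.3))   # 0.25
# Tampered deck: Dark drives the likelihoods together, so the posterior
# barely moves from the prior.
print(posterior_dark(prior, p_e_given_dark=0.31, p_e_given_no_dark=0.3))  # ~0.10
```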

That's a very interesting analysis. I think you are taking the point of view that sorcerers are rational, or that they are optimizing solely for proving or disproving their existence. That wasn't my assumption. Sorcerers are mysterious, so people can't expect their cooperation in an experiment designed for this purpose. Even under your assumption you can never distinguish between Bright and Dark existing: they could behave identically, to convince you that Bright exists. Dark would sort the deck whenever you query for Bright, for instance.

The way I was th... (read more)

2cousin_it
The assumption of rationality is usually used to get a tractable game. That said, the assumption is not as restrictive as you seem to say. A rational sorcerer isn't obliged to cooperate with you, and can have other goals as well. For example, in my game we could give Dark a strong desire to move the ace of spades to the top of the deck, and that desire could have a certain weight compared to the desire to stay hidden. In the resulting game, Daisy would still use only the information from the deck, and wouldn't need to do Bayesian updates based on her own state of mind. Does that answer your question?

I don't think I completely follow everything you say, but let's take a concrete case. Suppose I believe that Dark is extremely powerful and clever and wishes to convince me he doesn't exist. I think you can conclude from this that if I believe he exists, he can't possibly exist (because he'd find a way to convince me otherwise), so I conclude he can't exist (or at least the probability is very low). Now I've convinced myself he doesn't exist. But maybe that's how he operates! So I have new evidence that he does in fact exist. I think there's some sort of p... (read more)

0VAuroch
The scenario you outlined is exactly the same as Daisy's half of the piece ending in A3. The result of your reasoning isn't further evidence, it's screened off by the fact that it's your reasoning, and not the actions of an outside force.
3Lumifer
LOL. The argument goes something like this: "I refuse to prove that I exist," says God, "for proof denies faith, and without faith I am nothing." "But," says Man, "The Babel fish is a dead giveaway, isn't it? It could not have evolved by chance. It proves you exist, and so therefore, by your own arguments, you don't. QED." "Oh dear," says God, "I hadn't thought of that," and promptly vanishes in a puff of logic. "Oh, that was easy," says Man, and for an encore goes on to prove that black is white and gets himself killed on the next zebra crossing.

I am not assuming they are Bayesians necessarily, but I think it's fine to take this case too. Let's suppose that Bob finds that whenever he calls upon Bright for help (in his head, so nobody can observe this), he gets unexpectedly high success rate in whatever he tries. Let's further suppose that it's believed that Dark hates kittens (and it's more important for him than trying to hide his existence), and Daisy is Faerie's chief veterinarian and is aware of a number of mysterious deaths of kittens that she can't rationally explain. She is afraid to discuss this with anyone, so it's private. For numeric probabilities you can take, say, 0.7, for each.

0Dagon
I think your degree of belief in their rationality (and their trustworthiness in terms of not trying to mislead you, and their sanity in terms of having priors at least mildly compatible with yours) should have a very large effect on how much you update based on the evidence that they claim a belief. The fact that they know of each other and still have wildly divergent beliefs indicates that they don't trust in each other's reasoning skills. Why would you give them much more weight than they gave each other?

Thanks. I am of course assuming they lack common knowledge. I understand what you are saying, but I am interested in a qualitative answer (for #2): does the fact they have updated their knowledge according to this meta-reasoning process affect my own update of the evidence, or not?

If you don't have a bachelor's degree, that makes it rather unlikely that you could get a PhD. I agree with folks that you shouldn't bother - if you are right, you'll get your honorary degrees and Nobel prizes, and if not, then not. (I know I am replying to a five-year-old comment).

I also think you are too quick to dismiss the point of getting these degrees, since you in fact have no experience in what that involves.

That's the standard scientific point of view, certainly. But would an Orthodox Bayesian agree?:) Isn't there a very strong prior?

if cognitive biases/sociology provide a substantial portion of or even all of the explanation for creationists talking about irreducible organs, then their actual counterarguments are screened off by your prior knowledge of what causes them to deploy those counterarguments; you should be less inclined to consider their arguments than a random string generator that happened to output a sentence that reads as a counterargument against natural selection.

I've just discovered Argument Screens Off Authority by EY, so it seems I've got an authority on my side... (read more)

2hyporational
Argument screens off authority only if you have already considered the argument. This doesn't mean you should consider all arguments by anyone.

It only goes to show how we are all susceptible to the power of stories, rather than able to examine them dispassionately, as a rationalist presumably should.

As the person who asked the question, I'd like to say that I don't particularly care about what creationists believe either.

Well then,

  1. you should want to be powerful yourself - so certainly go and exploit the society:)

  2. the powerful are really not paying for it, and if they are it's completely peanuts to them. If you are screwing up anyone by so-called leeching, it's the middle class:) You are not bad to "them", they don't care about you one way or another.

  3. I am rich and powerful (compared to you, at least), and I hereby command you to do it:)

2ialdabaoth
Heh. I'm coming back to this, now that I'm in a different mindset. Unfortunately, that leads to a "thrashing" unstable loop, because this: is cached shorthand for the actual system, which is "the powerful dictate morality".

In general, "the powerful dictate morality" can be easily cached into "the strong deserve to dominate and torment the weak", because most ways of gaining power over the weak involve dominating and tormenting them, so the people who have that mentality tend to get and keep power - hence a stable loop.

The problem is, when I find my own power rising, my external moral compass ("the powerful define morality") notices that I'm entering that "powerful" reference class, and thus my internal moral compass ("don't dominate others, and seek to distribute power fairly") gains more moral weight.

As I said before, this leads to an unstable loop: while I'm powerless, my own internal moral compass doesn't enter into the moral calculus, and therefore it is moral for me to dominate and torment others in order to gain power. But as that becomes successful, I become more powerful, and therefore my internal moral compass enters into the moral calculus - and suddenly, the actions I have taken to gain power are no longer morally justified.

I really don't see what your usefulness/uselessness to powerful people has to do with you being bad. I can't even imagine what premises you are relying on for such a statement.

-1A1987dM
Certain people here sometimes seem to measure someone's value by their income, i.e. by how much people with money are willing to pay them to do stuff.
6ialdabaoth
It's a modification of Hypercalvinism / Dispensationalism / Dominionism / Divine Command Theory that I was taught as a child. Essentially, power defines morality, because "fuck you, what are you going to do about it?". And (to quote the actual book Catch-22), "Catch-22 says they have a right to do anything we can't stop them from doing".

Basically, the strong are morally justified - in a sense, morally compelled - to dominate and torment the weak, because they can. And the weak deserve every minute of it, because fuck them.

I've spent... roughly four to five hours a day, every day, for 35+ years, trying to update out of that belief system, and yet I fundamentally still operate under it.

I think it had better be true that both of these are falsifiable (and they both are). I agree that the former is overwhelmingly likely and no one I'd care to talk to disputes it. In any event I am only talking about the latter. The fact that it completely explains the variety of life on Earth is the very thing I am accepting on faith, and that's what I don't like.

Essentially none. I have a lot of evidence of science being right (at least as far as I can reasonably tell) in some other subject areas such as parts of physics, chemistry, cognitive science, etc.

I've read some FAQs on both, but it doesn't count as verification. I suppose I can look at the map of S. America and Africa and see coastlines roughly match, that is some evidence for plate tectonics. Also, as I mentioned in reply to other comments, it seems correct that with genetics being right (that I strongly believe), natural selection would certainly work to cause some species to change. I think even creationists nowadays are forced to agree with this.

I think you are interpreting my comments with too much emphasis on the specific examples I give. Sure, Earth being 1 million years old is unlikely, but there could be some equally embarrassing artifact or contradictory evidence. I can't give a realistic example because I haven't studied the problem - that's my whole point. You seem to be saying that the Theory of Evolution is unfalsifiable, at least in practice. That would be a bad thing, not a good thing. Besides, surely, if someone runs cryptological analysis software on the DNA of E. coli, and gets back "... (read more)

1Lumifer
Let's be a bit more precise. Evolution is a mechanism. It works given certain well-known preconditions. The fact that it works is not contested by anyone sane. What actually is contested by creationists is that the mechanism of evolution is sufficient to generate all the variety of life we see on Earth and that it actually did, in fact, generate all that variety. *That* claim is falsifiable -- e.g. by showing that some cause/mechanism/agency other than evolution played an important part in the development of life on Earth.

I agree that certainly some evolution would follow from your premises (1) and (2). But imagine that we also have independent evidence that Earth is 1 million years old. In that case, I'd be forced to say that the Theory of Evolution can't account for the evidence of life we observe, given mutation rates, etc. This is the sort of thing I am worried about when I say I haven't looked at the evidence. As far as I know there isn't any contradictory evidence of this sort, but there may be specific challenges that aren't well-explained. Creationists like to cite ... (read more)

1KnaveOfAllTrades
Just realised you're the post author, so: Thanks for posting this, it's something I've wondered about in relation to myself, as well. :)

1: No tentacles

This reminds me of something Eliezer once said--"How would I explain the event of my left arm being replaced by a blue tentacle? The answer is that I wouldn't. It isn't going to happen."

We do not observe a young (even 10^6) Earth, and by suggesting the possibility of one as counterevidence against the strength of the 'a priori' reasoning I advocated, you must be smuggling in a circular assumption that young Earth models have significant probability. Your argument as I understand it is roughly that since my a priori reasoning would fail in young Earth scenarios, then that reasoning is unreliable. But if our prior for young Earth scenarios is extremely low, then it will only very rarely happen that my reasoning will fail in that particular way. Therefore for your argument to go through, you would have to place a high prior probability on young Earth scenarios.

To put it another way: If observing a young Earth would be evidence against my a priori reasoning, then by conservation of expected evidence, our actual observation of a non-young Earth must be evidence in favour of that reasoning.

People in a modern day situation, and LW'ers in particular, are better placed to understand that 'naturalistic' explanations are preferable, and that magic ones should incur huge complexity penalties. Therefore we should have low priors on young Earths, because most of our probability will be concentrated in models where intelligent life arises from nonintelligent (hence slow) processes as opposed to intelligent (e.g. God) processes. Moreover, the more intelligent the process that generated us, the more we push the explanatory buck back onto that process. God is an extreme case where the mystery of the apparent improbability of human intelligence is replaced with the mystery of the apparent improbability of divine intel
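
The conservation-of-expected-evidence step in that argument is just the law of total probability. Writing it out with H = "the a priori reasoning is reliable" and E = "we observe a young Earth" (my labels, not the commenter's):

```latex
P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)
```

Since the prior is a weighted average of the two posteriors, if P(H | E) < P(H) (a young Earth would count against the reasoning) and 0 < P(E) < 1, then necessarily P(H | not E) > P(H): the non-young Earth we actually observe must count in the reasoning's favour.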

That's very useful, actually. I think I have a tendency to just accept the latest medical theory/practice as being the best guess that the most qualified people made with the current state of evidence. Which may be really suboptimal if they don't have a lot of evidence for it, and perhaps it should be independently examined if it concerns you personally. I am not sure what degree of belief to assign such things, though, because I have no experience with them.

Do you, or anyone, have an idea of how trustworthy such things generally are, in the modern age? Are there statistics about how often mainstream approaches are later proven to be harmful (and how often merely suboptimal)?

If you'd deferred to the leading authorities over the past 100 years, you would have been an introspectionist, then a behaviourist, then a cognitive scientist and now you'd probably be a cognitive neuroscientist.

I think you are right, but is it so bad? If I were living at the time of the introspectionists, was there a better alternative for me? I suspect that unless I personally worked out some other theory (unlikely), I'd have to either take that one or something equally bad. Maybe it's slightly different around boundaries of these paradigm shifts whe... (read more)

1scientism
I'm not sure about introspectionism, but I'm sure you could find theories that have produced bad outcomes and had mainstream acceptance, particularly in medicine. I suppose the alternative is to remain noncommittal.
9gwern
Once, for a Wittgenstein course, I read through the entirety of William James's 1890 Principles of Psychology. It was of course absurdly outdated, but I learned a lot from it. One of the things was surprise at how much time James felt he had to spend in the book attacking theories involving souls. So yes, you could do much worse than being an introspectionist.

What you are talking about is a lay sense of evolution. Sure, things change, and the more adapted thing should survive with higher frequency, this much is obvious even to creationists. It is also obvious to me (as it was to Aristotle), that things which are in motion tend to come to rest. Turns out, it's not really true. Just because a theory is intuitive, doesn't mean that's how the world really works. You only need to think about Heliocentrism, let alone something like quantum physics.

One problem that Darwin had was the lack of mechanism for evolution (i... (read more)

3KnaveOfAllTrades
We should expect some amount of evolution by natural selection 'a priori', from various obvious premises such as (1) There is a reproduction process in which characteristics are inherited (2) Things with X characteristics in Y environment die/live etc. There seems to be an absence of similarly parsimonious explanations, and the account given by natural selection is compelling. I suspect that even a small amount of knowledge of the empirical evidence for natural selection would establish a lower bound on the share of evolution it causes, such that searching for equally significant factors for evolution of life in general should be expected to fail. If one set up a mathematical representation of a population that took into account characteristics, life, and death, etc. then natural selection would be the name for a provable behaviour of the system, even if the system were just axiomatised by more basic facts such as (1) and (2). I'm not convinced that the same is true of Aristotelian physics. I struggle far more to fabricate accounts of our observations without natural selection than I did to get to grips with Newtonian mechanics. As in, accounts that don't leave me more confused (e.g. 'God did it', which is a mysterious non-answer). Quantum mechanics I do not know well enough (and I'm not sure anyone does) at the level where mathematical reductionism meets theoretical physics, but I would not be surprised if it turned out to be extremely parsimonious given even a small number of our empirical observations. Heliocentrism also seems much more contingent than natural selection, although possibly less than one thinks, given how prevalent star-planet systems are.

I've been thinking about this sort of thing as well. There are lots of books published by creationists and I am sure they are quite compelling (I haven't actually read those either), otherwise they wouldn't write those. Essentially, reading someone's summary is again putting yourself into the hands of whoever wrote it. If they have an agenda, you'll likely end up believing it. So, really, you need to read both sides, compare their arguments, etc. Lots of work.

I don't think I can have "knowledge" in Science. It's done by humans, therefore it makes errors. For any given proposition, if I examine the evidence and find it compelling, sure. But my whole point is whether I can rely on it without specifically examining it.

That's a good point. I suppose it has no practical implications for me, except that I'd like to have an accurate model of how the Universe works. Although if I were a young-earth creationist, it would have mattered a lot.

But let's take global warming. That one does matter in a practical sense.

4ChristianKl
When it comes to global warming there are two separate issues. The first is talking about global warming. It's good for a society if the broad public debates important issues. On the personal level, however, the significance of being wrong about global warming is relatively low for most people. Given that it's low, you can just go with what your favorite authority says. If you work in a field where it's not low, however, I would again recommend that you get a better understanding of the subject.

I am not sure I got that. Is "the question I am asking now" referring to a theory whose truthfulness I am evaluating? And is "the asked in past" referring to the ones whose truthfulness I have verified? It's confusing because chronologically it's the other way around: most of these theories are old and were accepted by me on faith since school days, and I could only verify a few of them as I grew older.

2buybuydandavis
Yes and yes. Asking now = not yet verified. Asking in the past = already verified. It would have been better for me to say:

Thanks, that was interesting, although didn't specifically address my question.

I think the whole experience is also interesting on a meta-level. Since programming is essentially the same as logical reasoning, it goes to show that humans are very nearly incapable of creating long chains of reasoning without making mistakes, often extremely subtle ones. Sometimes finding them provides insight (especially in multi-threaded code or with memory manipulation), although most often it's just you failing to pay attention.

0lmm
Threading is not normally part of logical reasoning. Compare with mathematics, where even flawed proofs are usually (though not always) of correct results. I think a large part of the difficulty of correct programming is the immaturity of our tools.

I know this is not your main topic, but are you familiar with Good-Turing estimation? It's a way of assigning non-arbitrary probability to unobserved events.
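
For readers who haven't met it: the core of Good-Turing is to estimate the total probability of all never-yet-seen event types as N1/N, the fraction of the sample made up of types observed exactly once. A minimal sketch (the sample data is made up, and the full method also smooths the counts of observed types, which this omits):

```python
from collections import Counter

def good_turing_unseen_mass(observations):
    """Good-Turing estimate of the probability that the next observation
    is a type never seen before: N1 / N, where N1 is the number of types
    seen exactly once and N is the total number of observations."""
    counts = Counter(observations)
    n = len(observations)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / n

# Hypothetical sample: bird species spotted on a walk.
sample = ["sparrow", "sparrow", "crow", "pigeon", "pigeon", "pigeon", "robin"]
print(good_turing_unseen_mass(sample))  # 2/7 ~= 0.29 for a not-yet-seen species
```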

He probably is an INTP, although it's too early to tell. I am too. That doesn't really answer the question:)

0eugman
Which question? Whether or not to teach him to lie? For that one my answers are: I don't know, it's not my place to say, no, yes and I'm not sure how. In that order.

More concretely, I think it's important you teach him to be honest and teach him the social game because I personally benefited from learning both. As to how, it's hard to say, but if he's anything like me, he's a high-level thinker, so go meta. Talk about economics, talk about the difference between content and form, talk about communication and signaling... Explain that by responding in a limited way he is allowing another person to show interest without getting involved in a deep conversation. Treat this as a charitable service we can provide to one another.

As for why you personally feel it's wrong to lie, here's my take on it. I personally tend to be very rigid about rules and principles. I believe that if something is bad on average then it's bad as a whole. I also believe that once you start making exceptions for rules like no lying, it's very easy to make exceptions for the wrong lies, thus defeating the purpose of the rule. So instead of drawing a line in the sand one expects to be wrong, one doesn't go in the sand at all.

Now arguably this is very rigid thinking and I've adapted to be more flexible as of late. But still, this sort of morality appeals to me on a fundamental level.

Since we are on the subject of quotes, here's one from C.S. Lewis, who I am not generally a fan of, but this is something that struck me when I read it for the first time:

“Oh, Piebald, Piebald,” she said, still laughing. “How often the people of your race speak!”

“I’m sorry,” said Ransom, a little put out. “What are you sorry for?”

“I am sorry if you think I talk too much.”

“Too much? How can I tell what would be too much for you to talk?”

“In our world when they say a man talks much they mean they wish him to be silent.”

“If that is what they mean, why do they not say it?”

“What made you laugh?” asked Ransom, finding her question too hard.

That specific thing is not a human universal. But the general behavior is, as far as I know. There are always little lies one is supposed to say. E.g. "no, that woman is not as beautiful as you", "he looks just like his dad", "nice to meet you", "please come again" (but I'll never invite you). In Russian, in particular, the very act of greeting is often a lie, since it means "be healthy" and there is effectively no way to "greet" an enemy without wishing him well.

5Richard_Kennaway
In Klingon (fiction alert) the nearest thing to "hello" is nuQneH, which literally means "what do you want?"

I am in fact not planning to interfere for now.

I don't disagree necessarily, but this is way too subtle for a kid, so it's not a practical answer.

Besides, as a semi-professional linguist, I must say you are confusing semantics (e.g. your boxes example) with pragmatics which is what we are talking about, where one uses words to mean something other than what the dictionary + propositional logic say they mean. These are often very confusing because they rely on cultural context and both kids and foreigners often screw up when they deal with them.

8Crux
Too subtle? This is just one tiny part of growing up and learning to interact with other people. If it's too subtle for him at this point in his development, then you'll just have to wait.

It's a practical answer in that it shows why you shouldn't encourage him to respond like that in those situations. I would have to know a lot more about your kid (and perhaps also way more about parenting) to know whether you should try to discourage it (and how to go about that), but at least we now know that it's not a virtue in itself, but merely a social misunderstanding. In other words, you were wondering whether to teach him to "lie" upon these occasions. I'm saying that you definitely shouldn't do the opposite (express affirmation to him about what he's doing). That's useful to know, right?

About whether you should go about trying to fix this social misunderstanding though, I don't know. Is this normal for his age? Is this part of a trend? Will he simply update later with no bumps in the road? Etc.

You could try just telling him that sometimes "how are you" means that they want a long response about whether he's happy or sad or whatever and why, but sometimes it's just to be friendly and they don't want anything more than a quick "good" or "fine" or whatever. In fact, that simple insight might well launch him into a long, fruitful path of social inquiry and analysis for many years to come. (If he asks how to know which is which and you don't think you could explain it or he wouldn't understand you, just say it's hard to tell but that he'll get it at some point if he keeps trying.)

How exactly am I confusing those?

Yes. Actual communication is quite difficult. (That's sort of sarcastic or something, but it's not supposed to convey bad will; I'm simply trying to clarify my position. The attempt is sort of vague though, so I don't necessarily expect you to know where I'm going with it.)

Well, it's one thing not to give details and another to misreport. Even now, as an adult, I say "I am OK" when I mean "things suck", and "I am great" when things are OK. I just shift them by a degree in the positive direction. Now, if he is unhappy, should he say "I am fine"? If he is not fine, he is lying.

7Crux
I think the same principle applies. If I'm checking out at a grocery store and the cashier asks me how I am, responding by producing the sequence of phonemes "good" doesn't really function as a way to put a map of my current well-being in their head; it's more just for signalling respect etc. In other words, saying "things suck" would be about as off-topic as if I started putting on a pair of boxing gloves in response to you saying, "Hey, could you help me move these boxes?"

I am not sure I completely follow, but I think the point is that you will in fact update the probability up if a new argument is more convincing than you expect. Since AI can better estimate what you expect than you can estimate how convincing AI will make the argument, it will be able to make all arguments more convincing than you expect.

0prase
I think you are adding further specifications to the original setting. Your original description assumed that AI is a very clever arguer who constructs very persuasive deceptive arguments. Now you assume that AI actively tries to make the arguments more persuasive than you expect. You can stipulate for argument's sake that AI can always make a more convincing argument than you expect, but 1) it's not clear whether that's even possible in realistic circumstances, 2) it obscures the (interesting and novel) original problem ("is evidence of evidence equally valuable as the evidence itself?") with a rather standard Newcomb-like mind-reading paradox.

I am not convinced that 1984-style persuasion really works. I don't think that one can really be persuaded to genuinely believe something by fear or torture. In the end you can get someone to respond as if they believe it, but probably not to actually do so. It might convince them to undergo something like what my experiment actually describes.

3Viliam_Bur
I don't think about persuasion like: "You have to believe this, under threat of pain, in 3... 2... 1... NOW!" It's more like this:

We have some rationalist tools -- methods of thinking which, when used properly, can improve our rationality. If some methods of thinking can increase rationality, then avoiding them, or intentionally using some contrary methods of thinking, could decrease rationality... could you agree with that?

Omega could scan your brain, and deliver you an electric shock whenever your "Bayesian reasoning circuit" is activated. So you would be conditioned to stop using it. On the other hand, Omega would reward you for using the "happy death spiral circuit", as long as the happy thought is related to Zoroastrianism. It could make rational reasoning painful, irrational reasoning pleasant, and this way prepare you for believing whatever you have to believe.

In real brainwashing there is no Omega and no brain scans, but a correct approach can trigger some evolutionarily built mechanisms that can reduce your rationality. (It is an evolutionary advantage to have a temporary rationality turn-off switch for situations when being rational is a great danger to your life. We are not perfect thinkers, we are social beings.) The correct approach is not based on fear only, but uses a "carrot and stick" strategy. Some people can resist a lot of torture, if in their minds they do not see any possibility to escape. For efficient brainwashing, they must be reminded that there is an escape, that it's kind of super easy, and it only involves going through the "happy death spiral"... which we all have a natural tendency to do, anyway. The correctly broken person is not only happy to have escaped physical pain, but also enjoys the new state of mind.

I think 1984 described this process pretty well, but I don't have it here to quote it. The brainwashed protagonist is not just happy to escape torture (he knows that soon... spoiler avoided), but he is happy to resolve his

There is some degree to which you should expect to be swayed by empty arguments, and yes, you should subtract that out if you anticipate it.

Right. I think my argument hinges on the fact that AI knows how much you intend to subtract before you read the book, and can make it be more convincing than this amount.

1Manfred
I don't think it's okay to have the AI's convincingness be truly infinite, in the full inf - inf = undefined sense. Your math will break down. Safer just to represent "suppose there's a super-good arguer" by having the convincingness be finite, but larger than every other scale in the problem.

So the person in the thought experiment doesn’t expect to agree with a book's conclusion, before reading it.

No, he expects that if he reads the book, his posterior belief in the proposition is likely going to be high. But his current prior belief in the truth of the proposition is low.

Also, as I made clear in my update, AI is not perfect, merely very good. I only need it to be good enough for the whole episode to go through, i.e. that you don't argue that a rational person will never believe in Z after reading the book and my story is implausible.

2[anonymous]
So in other words, the person is expecting to be persuaded by something other than the truth. Perhaps on the basis that the last N times he read one of these books, it changed his mind. In that case, it is no different than if the person were stepping into a brain modification booth, and having his mind altered directly.

Because a rational person would simply not be conned by this process. He would see that he currently believes in the existence of the flying spaghetti monster, and that he just read a book on the flying spaghetti monster prepared by a superintelligent AI which he had asked to prepare for him ultra-persuasive but entirely biased collections of evidence, and remember that he didn't formerly believe in the flying spaghetti monster. He would conclude on this basis that his belief probably has no basis in reality, i.e. is inaccurate, and stop believing (with such high probability) in it.

If we are to accept that the AI is good enough to prevent this happening - a necessary premise of the thought experiment - then it must be preventing the person from being rational in this way, perhaps by including statements in the book that in some extraordinary way reprogram his mind via some backdoor vulnerability. Let's say that perhaps the person is an android created by the AI for its own amusement, which responds to certain phrases with massive anomalous changes in its brain wiring. That is simply the only way I can accept the premises that:

a) the person applies Bayes's theorem properly (if this is not true, then he is simply not "mentally consistent" as you said)

b) he is aware that the books are designed to persuade him with high probability

c) he believes that the propositions to be proven in the books are untrue in general

d) he believes with high probability that the books will persuade him

which, unless I am very much mistaken, are equivalent to your statements of the problem. If reading a book is not basically equivalent to submitting knowingly t

I understand the principle, yes. But it means if your friend is a liar, no argument he gives needs to be examined on its own merits. But what if he is a liar and he saw a UFO? What if P(he is a liar) and P(there's a UFO) are not independent? I think if they are independent, your argument works. If they are not, it doesn't. If UFOs appear mostly to liars, you can't ignore his evidence. Do you agree? In my case, they are not independent: it's easier to argue for a true proposition, even for a very intelligent AI. Here I assume that P must be strictly less than 1 always.

We are running into meta issues that are really hard to wrap your head around. You believe that the book is likely to convince you, but it's not absolutely guaranteed to. Whether it will do so surely depends on the actual arguments used. You'd expect, a priori, that if it argues for X which is more likely, its arguments would also be more convincing. But until you actually see the arguments, you don't know that they will convince you. It depends on what they actually are. In your formulation, what happens if you read the book and the arguments do not convi... (read more)

0prase
I think I address some of these questions in another reply, but anyway, I will try a detailed description. Let's denote the following propositions:

* Z = "Zoroastrianism is true."
* B = Some particular, previously unknown, statement included in the book. It is supposed to be evidence for Z. Let this be in form of propositions so that I am able to assign it a probability (e.g. B shouldn't be a Pascal-wagerish extortion).
* C(r) = "B is compelling to such extent that it shifts odds for Z by ratio r". That is, C(r) = "P(B|Z) = r*P(B|not Z)".
* F = Unknown evidence against Z.
* D(r) = "F shifts odds against Z by ratio r."

Before reading the book:

1. p(Z) is low
2. I may have a probability distribution for "B = S" (that is, "the convincing argument contained in the book is S") over set of all possible S; but if I have it, it is implicit, in sense I have an algorithm which assigns p(B = S) for any given S, but haven't gone through the whole huge set of all possible S - else the evidence in the book wouldn't be new to me in any meaningful sense
3. I have p(S|Z) and p(S|not Z) for all S, implicitly like in the previous case
4. I can't calculate the distribution p(C(r)) from p(B = S), p(S|Z) and p(S|not Z), since that would require calculating explicitly p(B = S) for every S, which is out of reach; however
5. I have obtained p(C(r)) by another means - knowledge about how the book is constructed - and p(C(r)) has most of its mass at pretty high values of r
6. by the same means I have obtained p(D(r)), which is distributed at as high or even higher values of r

Can I update the prior p(Z)? If I knew for certain that C(1,000) is true, I should take it into account and multiply the odds for Z by 1,000. If I knew that D(10,000) is true, I should analogically divide the odds by 10,000. Having probability distributions instead of certainty changes little - calculate the expected value* E(r) for both C and D and use that. If the values for C and D are similar or only d
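
For concreteness, the update described there can be written in odds form (assuming, as the comment implicitly does, that B and F are conditionally independent given Z and given not-Z, and using the definitions of C(r) and D(r) above):

```latex
\frac{P(Z \mid B, F)}{P(\neg Z \mid B, F)}
  = \frac{P(Z)}{P(\neg Z)}
    \cdot \frac{P(B \mid Z)}{P(B \mid \neg Z)}
    \cdot \frac{P(F \mid Z)}{P(F \mid \neg Z)}
  = \frac{P(Z)}{P(\neg Z)} \cdot \frac{r_C}{r_D}
```

Here r_C is the ratio from C and r_D the ratio from D (so P(F|Z)/P(F|not Z) = 1/r_D). With certainty about both, the odds for Z get multiplied by r_C/r_D - the comment's "multiply by 1,000, divide by 10,000" step - and with only distributions over the ratios, the comment's suggestion is to plug in their expected values instead.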