All of Yvain2's Comments + Replies

Yvain220

One more thing: Eliezer, I'm surprised to be on the opposite side as you here, because it's your writings that convinced me a catastrophic singularity, even one from the small subset of catastrophic singularities that keep people alive, is so much more likely than a good singularity. If you tell me I'm misinterpreting you, and you assign high probability to the singularity going well, I'll update my opinion (also, would the high probability be solely due to the SIAI, or do you think there's a decent chance of things going well even if your own project fails?)

Yvain250

"I'm curious to know how you know that in advance? Isn't it like a kid making a binding decision on its future self? As Aubrey says, (I'm paraphrasing): "If I'm healthy today and enjoying my life, I'll want to wake up tomorrow. And so on." You live a very long time one day at a time."

Good point. I usually trust myself to make predictions of this sort. For example, I predict that I would not want to eat pizza every day in a row for a year, even though I currently like pizza, and this sort of prediction has worked in the past. But I shoul... (read more)

Yvain270

"There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities."

That doesn't seem at all obvious to me. First, our current society doesn't allow people to die, although today law enforcement is spotty enough that they can't really prevent it. I assume far future societies will have excellent law enforcement, including mind reading and total surveillance (unless libertarians seriously get their act together in the next hundred ye... (read more)

0Gurkenglas
There is a minuscule probability that during the next 10 seconds, nanomachines produced by a fresh GAI sweep in through your window and capture you for infinite life and thus, by your argument, infinite hell. Building on your argumentation, the case can be made that you should strive to minimize the probability of that outcome. Therefore, suicide. Edit: My point has already been made by Eliezer. Let's see how this retracting thingy works.
2Ulysses
The threat of dystopia stresses the importance of finding or making a trustworthy, durable institution that will relocate/destroy your body if the political system starts becoming grim. Of course there is no such thing. Boards can become infiltrated. Missions can drift. Hostile (or even well-intentioned) outside agents can act suddenly before your guardian institution can respond. But there may be measures you can take to reduce fell risk to acceptable levels (i.e., levels comparable to the current risk of exposure to, as Yudkowsky mentioned, a secret singularity-in-a-basement):

1. You could make contracts with (multiple) members of the younger generation of cryonicists, on condition that they contract with their younger generation, etc., to guard your body throughout the ages.
2. You can hide a very small bomb in your body that continues to count down slowly even while frozen (I don't know if we have the technology yet, but it doesn't sound too sophisticated) so as to limit the amount of divergence from now that you are willing to expose yourself to [explosion small enough to destroy your brain, but not the brain next to you].
3. You can have your body hidden and known only to cryonicist leaders.
4. You can have your body's destruction forged.

I don't think any combination of THESE suggestions will suffice. But it is worth very much effort inventing more (and not necessarily sharing them all online), and making them possible if you are considering freezing yourself.
0[anonymous]
The threat of dystopia stresses the importance of finding or making a trustworthy, durable institution that will relocate/destroy your body if the political system starts becoming grim. Of course there is no such thing. Boards can become infiltrated. Missions can drift. Hostile (or even well-intentioned) outside agents can act suddenly before your guardian institution can respond. But there may be measures you can take to reduce fell risk to acceptable levels.

You could make contracts with (multiple) members of the younger generation of cryonicists, on condition that they contract with their younger generation, etc., to guard your body throughout the ages. You can hide a very small bomb in your body that continues to count down slowly even while frozen (I don't know if we have the technology yet, but it doesn't sound too sophisticated) so as to limit the amount of divergence from now that you are willing to expose yourself to [explosion small enough to destroy your brain, but not the brain next to you]. You can have your body hidden and known only to cryonicist leaders. You can have your body's destruction forged.

No matter what arrangements you make, if you choose to freeze yourself you can never get the probability of being indefinitely tortured upon reanimation down to zero. So what is an acceptable level of risk? I'll give you a lower bound: the probability that a terrorist group has already secretly figured out how to extend life indefinitely, and is en route to kidnap you now.

I don't think all the suggestions I made put together will suffice. But it is worth very much effort inventing more (and not necessarily sharing them all online), and making them possible if you are considering freezing yourself.
Yvain260

*facepalm* And I even read the Sundering series before I wrote that :(

Coming up with narratives that turn the Bad Guys into Good Guys could make good practice for rationalists, along the lines of Nick Bostrom's Apostasy post. Obviously I'm not very good at it.

GeorgeNYC, very good points.

4WhySpace_duplicate0.9261692129075527
For anyone who comes this way in the future, I found Nick Bostrom's post through a self-critique of Effective Altruism.
Yvain260

Wealth redistribution in this game wouldn't have to be communist. Depending on how you set up the analogy, it could also be capitalist.

Call JW the capitalist and AA the worker. JW is the one producing wealth, but he needs AA's help to do it. Call the under-the-table wealth redistribution deals AA's "salary".

The worker can always cooperate, in which case he makes some money but the capitalist makes more.

Or he can threaten to defect unless the capitalist raises his salary - he's quitting his job or going on strike for higher pay.

(To perfect the ana... (read more)

Yvain290

Darnit TGGP, you're right. Right. From now on I use Lord of the Rings for all "sometimes things really are black and white" examples. Unless anyone has some clever reason why elves are worse than Sauron.

1AnthonyC
Not worse than Sauron, no, but certainly culpable for a lot. For example, they refused to share their knowledge with humans or even with the part-elven Numenoreans Aragorn is descended from - and it was that refusal that got the Numenoreans listening to Sauron in the first place. Of course, the gods are on the elven side, which is even worse, though I fail to see how it absolves the elves.
8eshear
The elves are totally worse than Sauron. See http://dir.salon.com/story/ent/feature/2002/12/17/tolkien_brin/index3.html for the details.
Yvain2280

[sorry if this is a repost; my original attempt to post this was blocked as comment spam because it had too many links to other OB posts]

I've always hated that Dante quote. The hottest place in Hell is reserved for brutal dictators, mass murderers, torturers, and people who use flamethrowers on puppies - not for the Swiss.

I came to the exact opposite conclusion when pondering the Israel-Palestinian conflict. Most of the essays I've seen in newspapers and on bulletin boards are impassioned pleas to designate one side or the other as Evildoers and the other ... (read more)

0gwern
The Dante quote is particularly interesting in light of "James Burnham’s Dante: Politics as Wish".
0Nisan
I have become slightly involved in the Israeli-Palestinian activism sphere on campus. I tried to square the reasonable and obviously correct comment above with the reasonable and obviously correct position of the pro-Palestinian side that maintaining neutrality in the conflict is the same as supporting the status quo. In fact, most people don't realize that problem-solving is even an option. To them, the choice is between {pro-Israeli, pro-Palestinian, neutral}. Well, to be fair, I expect lots of people believe that they see both sides and just want to solve the problem. But parts of their minds just want to take sides. And I can't really blame those parts, because taking sides can have significant social payoffs to them, and insignificant expected payoffs in the Middle East, and if they have somewhat egoistical values then taking sides might be a somewhat good deal for them.
-4velisar
(thanks for the - negative - feedback; I edited my answer to make it clearer)

Biases are data-compression mistakes. Talking about a neutrality bias as a laziness of the rational mind, I think Eliezer hurried a bit when choosing the examples: he intended to point out bad consequences in situations with a high difference in complexity. The school director is tasteless in not putting in the smallest effort because "eh, they are just kids." For the kids it is important; injustice is intense at all ages, and primates feel it. So yes, the 'wise' school director is disgusting: an easy answer to a simple situation. A pacifist who dances for peace is also ridiculous. And Yvain's polarized example characters (those journalists who emotionally cherry-pick to convince you) are, well, negative: an easy answer to a complex situation. The common theme is that they are all simplistic.

We are biased when we feel disgust; a conclusion is strongly formed and maybe there to stay. Cherry-picking/selective observation is at work here: those images jump into your eyes or to the surface of your memory. So if the examples are a bit off, we may see distorted pictures of the idea - that is, we draw wrong conclusions from very partial examples. My point is that Yvain and Eliezer would probably have the same take on the concept if they were to judge the same example. There are lazy 'wise' people and lazy opinionated people.

I'd risk a rule of thumb for complex situations: is the situation really complex, with unknown unknowns? If yes, reset to the 'wise' posture (as if being profound were some sort of attitude). Otherwise you'll take one position too quickly, and there is a great chance of becoming opinionated. Those we legitimately call wise are always flirting with complexity. The wise equidistant attitude is like a joke, in the sense that it reveals the contrary: the imposture of the simplistic.
JohnH110

"how do we minimize suffering in the Middle East?" may be an easier question than "who's to blame?"

The quickest way to minimize suffering is to nuke the Middle East into a sea of glass with the nukes spaced such that every person is vaporized instantly without feeling a thing. As they feel nothing from their instant vaporization, they are no longer suffering and no longer are capable of suffering or causing suffering.

Somehow I don't see this as a viable solution.

Yvain2200

"To be concerned about being grown up, to admire the grown up because it is grown up, to blush at the suspicion of being childish; these things are the marks of childhood and adolescence. And in childhood and adolescence they are, in moderation, healthy symptoms. Young things ought to want to grow. But to carry on into middle life or even into early manhood this concern about being adult is a mark of really arrested development. When I was ten, I read fairy tales in secret and would have been ashamed if I had been found doing so. Now that I am fifty I read them openly. When I became a man I put away childish things, including the fear of childishness and the desire to be very grown up." - C.S. Lewis

Yvain2100

Bruce and Waldheri, you're being unfair.

You're interpreting this as "some scientists got together one day and asked Canadians about their grief just to see what would happen, then looked for things to correlate it with, and after a bunch of tries came across some numbers involving !Kung tribesmen reproductive potential that fit pretty closely, and then came up with a shaky story about why they might be linked and published it."

I interpret it as "some evolutionary psychologists were looking for a way to confirm evolutionary psychology, predic... (read more)

Yvain220

@Robin: Thank you. Somehow I missed that post, and it was exactly what I was looking for.

@Vladimir Nesov: I agree with everything you said except for your statement that fiction is a valid argument, and your supporting analogy to mathematical proof.

Maybe the problem is the two different meanings of "valid argument". First, the formal meaning where a valid argument is one in which premises are arranged correctly to prove a conclusion, e.g., mathematical proofs and Aristotelian syllogisms. Well-crafted policy arguments, cost-benefit analyses, and stati... (read more)

Yvain2110

Uncle Tom's Cabin is not a valid argument that slavery is wrong. "My mirror neurons make me sympathize with a person whose suffering is caused by Policy X" to "Policy X is immoral and must be stopped" is not a valid pattern of inference.

Consider a book about the life of a young girl who works in a sweatshop. She's plucked out of a carefree childhood, tyrannized and abused by greedy bosses, and eventually dies of work-related injuries incurred because it wasn't cost-effective to prevent them. I'm sure this book exists, though I haven't p... (read more)

Yvain2320

Assuming the Lord Pilot was correct in saying that, without the nova star, the Happy Fun People would never be able to reach the human starline network...

...and assuming it's literally impossible to travel FTL without a starline...

...and assuming the only starline to the nova star was the one they took...

...and assuming Huygens, described as a "colony world", is sparsely populated, and either can be evacuated or is considered "expendable" compared to the alternatives...

...then blow up Huygens' star. Without the Huygens-Nova starline, the Happy P... (read more)

Yvain2590

Political Weirdtopia: Citizens decide it is unfair for a democracy to count only the raw number of people who support a position without considering the intensity with which they believe it. Of course, one can't simply ask people to self-report the intensity with which they believe a position on their ballot, so stronger measures are required. Voting machines are redesigned to force voters to pull down a lever for each issue/candidate. The lever delivers a small electric shock, increasing in intensity each second the voter holds it down. The number of vote... (read more)

9ViEtArmis
Or excellent skin-conductivity!
6DanielLC
Wouldn't they get electrocuted before their vote counts for enough to take over?
Yvain210

"Though it's a side issue, what's even more... interesting.... is the way that our brains simply haven't updated to their diminished power in a super-Dunbarian world. We just go on debating politics, feverishly applying our valuable brain time to finding better ways to run the world, with just the same fervent intensity that would be appropriate if we were in a small tribe where we could persuade people to change things."

Thank you. That's one of those insights that makes this blog worth reading.

2rastilin
Or, looking at it another way: we can change politics but choose not to. For example, a researcher at TED explained that politicians are far more receptive to written letters than to any other method of communication, even to the point where a well-written letter was enough to change their vote on a topic. Failing that, we always joke about how special interest groups have enough money to get close to and negotiate with politicians. However, nothing stops any of us from starting our own group, taking donations, and having our hired employees go to the capital and get our word in. It sounds more and more like the monkeysphere is an argument for not bothering to do any of the things that could change the particular problems affecting us.
Yvain2150

"O changeless and aeternal physical constants, we give thanks to thee for existing at values such that the Universe, upon being set in motion and allowed to run for thirteen billion years, give or take an eon, naturally tends toward a state in which we are seated here tonight with turkey, mashed potatoes, and cranberry sauce in front of us."

Or "O natural selection, thou hast adapted turkeys to a mostly predation-free environment, making them slow, weak, and full of meat. In contrast, thou hast adapted us humans to an environment full of dang... (read more)

Yvain200

I don't know what's up with people who say they still haven't read the archives. When I discovered OB, I spent all my free time for two weeks reading the archives straight through :)

I support Roland's idea. A few Eliezer posts per week, plus an (official, well-publicized, Eliezer-and-Robin-supported) forum where the rest of us could discuss those posts and bring up issues of our own. Certain community leaders (hopefully Eliezer and Robin if they have time) picking out particularly interesting topics and comments on the board and telling the posters to writ... (read more)

Yvain2200

I don't know anything about the specific AI architectures in this post, but I'll defend non-apples. If one area of design-space is very high in search ordering but very low in preference ordering (i.e., a very attractive-looking but in fact useless idea), then telling people to avoid it is helpful beyond the seemingly low level of optimization power it gives.

A metaphor: religious beliefs constitute a very small and specific area of beliefspace, but that area originally looks very attractive. You could spend your whole life searching within that area and never... (read more)

Yvain220

Robin Gane-McCalla is an Overcoming Bias reader? I knew him back in college, but haven't talked to him in years. It really is a small world.

Yvain210

"Why do people, including you apparently, always hide the price for this kind of thing? Market segmentation? Trying to get people to mentally commit before they find out how expensive it is? Maintaining a veneer of upper-class distaste for the crassness of money (or similarly, a "if you have to ask how much it is, you can't afford it" type thing)?"

I agree with that, and I have a policy of never buying from anyone who does this.

Often I don't know how much something would cost even to an order of magnitude; for example, I have no clue whe... (read more)

Yvain220

Disappointing. I kept on waiting for Eliezer to say some sort of amazingly witty thing that would cause everything Jaron was saying to collapse like a house of cards, but either he was too polite to interrupt or the format wasn't his style.

At first I thought Jaron was talking nonsense, but after thinking it over for a while, I'm prepared to give him the benefit of the doubt. He said that whether a computer can be intelligent makes no difference and isn't worth talking about. That's obviously wrong if he's using a normal definition of intelligent, but if by... (read more)

Yvain280

This is a beautiful comment thread. Too rarely do I get to hear anything at all about people's inner lives, so too much of my theory of mind is generalizations from one example.

For example, I would never have guessed any of this about reflectivity. Before reading this post, I didn't think there was such a thing as people who hadn't "crossed the Rubicon", except young children. I guess I was completely wrong.

Either I feel reflective but there's a higher level of reflectivity I haven't reached and can't even imagine (which I consider unlikely but am ... (read more)

Yvain2270

I am glad Stanislav Petrov, contemplating his military oath to always obey his superiors and the appropriate guidelines, never read this post.

Yvain2120

"Historically speaking, it seems likely that, of those who set out to rob banks or murder opponents "in a good cause", those who managed to hurt themselves, mostly wouldn't make the history books. (Unless they got a second chance, like Hitler after the failed Beer Hall Putsch.) Of those cases we do read about in the history books, many people have done very well for themselves out of their plans to lie and rob and murder "for the greater good". But how many people cheated their way to actual huge altruistic benefits - cheated an... (read more)

Yvain2120

"I need to beat my competitors" could be used as a bad excuse for taking unnecessary risks. But it is pretty important. Given that an AI you coded right now with your current incomplete knowledge of Friendliness theory is already more likely to be Friendly than that of some competitor who's never really considered the matter, you only have an incentive to keep researching Friendliness until the last possible moment when you're confident that you could still beat your competitors.

The question then becomes: what is the minimum necessary amount of F... (read more)

Yvain270

Given that full-scale nuclear war would either destroy the world or vastly reduce the number of living people, Petrov, Arkhipov, and all the other "heroic officer makes unlikely decision to avert nuclear war" stories Recovering Irrationalist describes above make a more convincing test case for the anthropic principle than an LHC breakdown or two.

Yvain210

Just realized that several sentences in my previous post make no sense because they assume Everett branches were separate before they actually split, but think the general point still holds.

0AlexanderRM
Some of the factors leading to a terrorist attack succeeding or failing would be past the level of quantum uncertainty before the actual attack happens, so unless the terrorists are using bombs set up on the same principle as the trigger in Schrödinger's cat, the branches would have split already before the attack happened.
Yvain2170

Originally I was going to say yes to the last question, but after thinking over why a failure of the LHC now (before it would destroy Earth) doesn't let me conclude anything by the anthropic principle, I'm going to say no.

Imagine a world in which CERN promises to fire the Large Hadron Collider one week after a major terrorist attack. Consider ten representative Everett branches. All those branches will be terrorist-free for the next few years except number 10, which is destined to suffer a major terrorist attack on January 1, 2009.

On December 31, 2008, Yva... (read more)
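The rest of the argument is truncated above, but the setup is explicit enough to push numbers through. Here is a deliberately naive Bayesian sketch of it; the parameters are my assumptions, not Yvain's: a prior p that a fired LHC destroys the world, an independent breakdown chance f, and no observers left in a destroyed branch.

```python
# Toy Bayes on the ten-branch setup above (assumed numbers, not Yvain's):
# p = prior that a fired LHC destroys the world, f = breakdown chance.
p, f = 0.5, 0.1

# In the attack branch, a surviving observer after the firing date sees
# either "breakdown" or "fired uneventfully"; destroyed worlds report nothing.
see_breakdown_if_dangerous = p * f        # dangerous, but saved by breakdown
see_breakdown_if_safe = (1 - p) * f       # safe, ordinary breakdown

posterior = see_breakdown_if_dangerous / (
    see_breakdown_if_dangerous + see_breakdown_if_safe
)
print(posterior)  # 0.5 == p: the breakdown alone carries no anthropic news
```

The posterior equals the prior for any f, which fits Yvain's answer of "no": surviving observers who see a breakdown after an attack get no evidence from that observation that the LHC is dangerous.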

Yvain2100

"IMHO, the idea that wealth can't usefully be measured is one which is not sufficiently worthwhile to merit further discussion."

The "wealth" idea sounds vulnerable to hidden complexity of wishes. Measure it in dollars and you get hyperinflation. Measure it in resources, and the AI cuts down all the trees and converts them to lumber, then kills all the animals and converts them to oil, even if technology had advanced beyond the point of needing either. Find some clever way to specify the value of all resources, convert them to products and allocate ... (read more)

Yvain2300

I was one of the people who suggested the term h-right before. I'm not great with mathematical logic, and I followed the proof only with difficulty, but I think I understand it and I think my objections remain. I think Eliezer has a brilliant theory of morality and that it accords with all my personal beliefs, but I still don't understand where it stops being relativist.

I agree that some human assumptions like induction and Occam's Razor have to be used partly as their own justification. But an ultimate justification of a belief has to include a reason for... (read more)

2Eli Tyre
This is a very clear articulation. Thank you.
9VAuroch
A way to justify Occam and Induction more explicitly is by appealing to natural selection. Take large groups of anti-Occamian anti-inductors and Occamian inductors, and put them in a partially-hostile environment. The inductors will last much longer. Now, possibly the quality of maximizing inclusive fitness is somehow based on induction or Occam's Razor, but in a lawful universe it will usually be the case that the inductor wins.
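VAuroch's thought experiment is concrete enough to simulate. A minimal sketch, where the berry setup, poison rates, and fatality chance are all invented for illustration: inductors expect observed patterns to continue, anti-inductors expect them to reverse.

```python
import random

POISON_P = 0.8  # invented: red berries poison 80% of the time; blue are safe

def survives(inductive: bool, rounds: int = 20) -> bool:
    """One agent foraging. Inductors expect observed patterns to continue;
    anti-inductors expect them to reverse."""
    red_tried, red_poisoned = 0, 0
    for _ in range(rounds):
        if red_tried == 0:
            eats_red = random.random() < 0.5           # no data yet: guess
        elif inductive:
            eats_red = red_poisoned / red_tried < 0.5  # past looked safe
        else:
            eats_red = red_poisoned / red_tried >= 0.5 # bad past, so "due" safe
        if eats_red:
            red_tried += 1
            if random.random() < POISON_P:
                red_poisoned += 1
                if random.random() < 0.5:              # poisoning sometimes fatal
                    return False
    return True

for label, inductive in (("inductors", True), ("anti-inductors", False)):
    alive = sum(survives(inductive) for _ in range(10_000))
    print(f"{label}: {alive / 10_000:.1%} survive")
```

Run it and the inductors' survival rate dwarfs the anti-inductors', which is the selection pressure VAuroch points at; whether that counts as a non-circular justification of induction is, of course, the contested part.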
Yvain200

...yeah, this was supposed to go in the new article, and I was just checking something in this one and accidentally posted it here. Please ignore. *embarrassed*

Yvain230

I was one of the people who suggested the term h-right before. I'm not great with mathematical logic, and I followed the proof only with difficulty, but I think I understand it and I think my objections remain. I think Eliezer has a brilliant theory of morality and that it accords with all my personal beliefs, but I still don't understand where it stops being relativist.

I agree that some human assumptions like induction and Occam's Razor have to be used partly as their own justification. But an ultimate justification of a belief has to include a reason for... (read more)

Yvain2180

To say that Eliezer is a moral relativist because he realizes that a primality sorter might care about primality rather than morality, is equivalent to calling him a primality relativist because he realizes that a human might care about morality rather than primality.

But by Eliezer's standards, it's impossible for anyone to be a relativist about anything.

Consider what Einstein means when he says time and space are relative. He doesn't mean you can just say whatever you want about them, he means that they're relative to a certain reference frame. An observe... (read more)

2Ghatanathoah
The reason people think that Eliezer is really a relativist is that they see concepts like "good" and "right" as reducing down to mean "the thing that I [the speaker, whoever it is] values." Eliezer is arguing that that is not what they reduce down to. He argues that "good" and "right" reduce down to something like "concepts related to enhancing the wellbeing of conscious eudaemonic life forms." It's not a trick of the language: Eliezer is arguing that "right" refers to [wellbeing related concept] and p-right refers to [primality sorting related concept]. The words "good" and "right" might be relative, but the referent [wellbeing of conscious eudaemonic life forms] is not.

The reason Eliezer focuses on fairness is that the concept of fairness is less nebulous than the concept of "right", so it is easier to see that it is not arbitrary. Pebble sorters and humans can both objectively agree on what it means to enhance the wellbeing of conscious eudaemonic life forms. Where they differ is whether they care about doing it. Pebble sorters don't care about the wellbeing of others. Why would they, unless it happened to help them sort pebbles? Similarly, humans and pebble sorters can both agree on which pebble heaps are prime-numbered. Where they differ is whether they care about sorting pebbles. Humans don't care about pebble-sorting. Why would they, unless it helped them enhance the wellbeing of themselves and others?

So if you define morality as "the thing that I care about," then I suppose it is relative, although I think that is not a proper use of the word "morality." But if you define it as "enhancing the wellbeing of eudaemonic life forms" then it is quite objective. Now, there might be room for moral disagreement in that people care about different aspects of wellbeing more. But that would be grounds for moral pluralism, not moral relativism. Regardless of what specific aspects of morality people focus on, certain things, like torturing the human population for all eternity...
0A1987dM
Until I read that, I thought I understood (and agreed with) Eliezer's point, but that got me thinking. Now, I guess Eliezer would agree that it's easy for Japanese people to speak Japanese, while he wouldn't agree that it's right for Baby-Eaters to keep on eating their children. So there must be something subtler I'm missing.
Yvain250

Why "ought" vs. "p-ought" instead of "h-ought" vs. "p-ought"?

Sure, it might just be terminology. But change

"So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is right."

to

"So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is h-right."

and the difference between "because it is the human one" and "because it is h-right" sounds a lot less convincing.

0MarsColony_in10years
If I see a toddler in the path of a boulder rolling downhill, I don't ask myself "should I help the boulder, or the toddler?" and conclude "the toddler, because it is the human one." If I were to even pause and ask myself a question, it would be "what should I do?" and the answer would be "save the toddler, because it is h-right". Perhaps h-right is "just the human perspective", but that's not the reason I save the toddler. Similarly, the boulder rolls downhill because F = Gm1m2/r^2, not because it is what boulders do. It is what boulders do, but that's different from the question of why they do what they do.
Yvain2180

"But that's clearly not true, except in the sense that it's "arbitrary" to prefer life over death. It's a pretty safe generalization that actions which are considered to be immoral are those which are considered to be likely to cause harm to others."

From a reproductive fitness point of view, or a what-humans-prefer point of view, there's nothing at all arbitrary about morality. Yes, it does mostly contain things that avoid harm. But from an objective point of view, "avoid harm" or "increase reproductive fitness" is as arbitrary... (read more)

Yvain2540

Things I get from this:

  • Things decided by our moral system are not relative, arbitrary or meaningless, any more than it's relative, arbitrary or meaningless to say "X is a prime number"

  • Which moral system the human race uses is relative, arbitrary, and meaningless, just as there's no reason for the pebble sorters to like prime numbers instead of composite numbers, perfect numbers, or even numbers.

  • A smart AI could follow our moral system as well or better than we ourselves can, just as the Pebble-Sorters' AI can hopefully discover that they're

... (read more)
Yvain280

This is something that's bothered me a lot about the free market. Many people, often including myself, believe that a bunch of companies which are profit-maximizers (plus some simple laws against use of force) will cause "nice" results. These people believe the effect is so strong that no possible policy directly aimed at niceness will succeed as well as the profit-maximization strategy does. There seems to be a lot of evidence for this. But it also seems too easy, as if you could take ten paper-clip maximizers competing to convert things into di... (read more)

0[anonymous]
It's really not at all mysterious if you understand the math. Much like how evolution can miraculously create complex life by maximizing "fitness" (i.e. offspring). Also, when you study the math, you will see the many assumptions that make the result go through. Much like evolution, it doesn't always turn out. Markets are stupid. I just googled to find a decent example of the math and this (pdf) is what I came up with. Looks pretty good, but there are many versions of this material available online.
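For readers who don't want to wade through a pdf, the result [anonymous] is gesturing at (the first welfare theorem: price-taking, self-interested trade lands on a Pareto-efficient allocation) can be shown in a toy two-agent, two-good exchange economy. Cobb-Douglas preferences and all numbers here are my own illustrative assumptions, not taken from the linked notes.

```python
# Two agents trade goods (x, y) at a market-clearing price.
# A Cobb-Douglas agent spends a fixed fraction `a` of wealth on x,
# the rest on y (y is the numeraire, price 1).

agents = [
    # (fraction of wealth spent on x, endowment of x, endowment of y)
    (0.7, 10.0, 2.0),   # mostly values x, holds mostly x
    (0.3, 2.0, 10.0),   # mostly values y, holds mostly y
]

# Market clearing for x: total demand sum(a_i * w_i / p) equals total
# endowment, where wealth w_i = p * ex_i + ey_i. Solving for p:
num = sum(a * ey for a, _, ey in agents)
den = sum((1 - a) * ex for a, ex, _ in agents)
p = num / den

print(f"equilibrium price of x: {p:.2f}")
for a, ex, ey in agents:
    wealth = p * ex + ey
    x, y = a * wealth / p, (1 - a) * wealth
    print(f"trades ({ex}, {ey}) -> ({x:.2f}, {y:.2f})")
```

Both agents end up strictly better off by their own lights, and no reallocation can improve one without hurting the other; the many assumptions doing the work (price-taking, no externalities, complete markets) are exactly where the "markets are stupid" failures creep back in.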
2buybuydandavis
First, policies don't aim, actors with intent do. A journalistic peeve of mine. Newspaper writers generally spend the first 10 paragraphs of a story about legislation psycho analyzing the intent of pieces of paper, and rarely will tell you what the pieces of paper actually say. Second, I don't consider this a serious pro free market position. It's not that no "possible" government enforced policy would do better, it's that the political process is generally unlikely to yield a better policy.
Yvain240

No, I still think there's a difference, although the omnipotence suggestion might have been an overly hasty way of explaining it. One side has moving parts, the other is just a big lump of magic.

When a statement is meaningful, we can think of an experiment that confirms it such that the experiment is also built out of meaningful statements. For example, my experiment to confirm the cake-in-the-sun is for a person on August 1 to go to the center of the sun, and see if it tastes delicious. So, IF Y is in the center of the sun, AND IF Y is there on August 1, ... (read more)

Yvain230

There are different shades of positivism, and I think at least some positivists are willing to say any statement for which there is a decision procedure even possible in principle for an omnipotent being is meaningful.

Under this interpretation, as Doug S. says, the omnipotent being can travel back in time, withstand the heat of the sun, and check the status of the cake. The omnipotent being could also teleport to the spaceship past the cosmological horizon and see if it's still there or not.

However, an omnipotent being still wouldn't have a decision proced... (read more)

1Ronny Fernandez
Might algorithmic positivism be a good name for it? As in: if there is an implementable algorithm which decides the truth of the sentence, it is meaningful.
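One way to see what Ronny's proposed criterion would rule in and out (the examples are mine, not his): an arithmetic sentence like "4,294,967,297 is prime" is algorithmically meaningful because a terminating decision procedure exists, while "this program halts on every input" provably has no such decider.

```python
def is_prime(n: int) -> bool:
    """A terminating decision procedure: under the proposed criterion,
    'n is prime' is meaningful because this always returns a verdict."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(4_294_967_297))  # False: 641 * 6_700_417 (Euler's factorization)

# By contrast, Turing showed no analogous decider can exist for
# "this program halts on every input", so that sentence would fail
# the criterion even for an idealized computer.
```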
Yvain280

Wow. And this is the sort of thing you write when you're busy...

I've enjoyed these past few posts, but the parts I've found most interesting are the attempts at evolutionary psychology-based explanations for things, like teenage rebellion and now flowers. Are these your own ideas, or have you taken them from some other source where they're backed up by further research? If the latter, can you tell me what the source is? I would love to read more of them (I've already read "Moral Animal", but most of these are still new to me).

Yvain200

If one defines morality in a utilitarian way, in which a moral person is one who tries for the greatest possible utility of everyone in the world, that sidesteps McCarthy's complaint. In that case, the apex of moral progress is also, by definition, the world in which people are happiest on average.

It's easy to view moral progress up to this point as progress towards that ideal. Ending slavery increases ex-slaves' utility, hopefully more than it hurts ex-slaveowners. Ending cat-burning increases cats' utility, hopefully more than it hurts that of cat-burnin... (read more)

Yvain240

I second Vladimir's "Prince of Nothing" recommendation. It's a great read just as pure fantasy fiction, but it also helped me to understand some of the concepts on this blog. Reading the "chimpanzee - village idiot - Einstein" line of posts, I found myself interpreting them by sticking Anasûrimbor Kellhus at the right end of the spectrum and going from there.

Yvain250

Subhan's explanation is coherent and believable, but he has to bite a pretty big bullet. I happen to like helping people, Hitler happens to like hurting people, and we can both condemn each other if we want but both of our likes are equally valid.

I think most people who think about morality have long realized Subhan's position is a very plausible one, but don't want to bite that bullet. Subhan's arguments confirm that the position is plausible, but they don't make the consequences any more tolerable. I realize that appeal to consequences is a fallacy and that reality doesn't necessarily have to be tolerable, but I don't feel anywhere near like the question has been "dissolved".

Yvain200

It depends.

My morality is my urge to care for other people, plus a systematization of exactly how to do that. You could easily disprove the systematization by telling me, for example, that giving charity to the poor increases their dependence on handouts and only leaves them worse off. I'd happily accept that correction.

I don't think you could disprove the urge to care for other people, because urges don't have truth-values.

The best you could do would be, as someone mentioned above, to prove that everyone else was an NPC without qualia. Prove that, and I'd probably just behave selfishly, except when it was too psychologically troubling to do so.

Yvain200

It depends on how you disproved my morality.

As far as I can tell, my morality consists of an urge to care about others, channeled through a systematization of how to help people most effectively. Someone could easily disprove specifics of the systematization by proving, for example, that giving charity to the poor only encourages their dependence and increases poverty. If you disproved it that way, I would accept your correction and channel my urge to care differently.

But I don't think you could disprove the urge to care itself, since it's an urge and does... (read more)

Yvain230

I took a different route on the "homework".

My thought was that "can" is a way of stating your strength in a given field, relative to some standard. "I can speak Chinese like a native" is saying "My strength in Chinese is equal to the standard of a native level Chinese speaker." "Congress can declare war" means "Congress' strength in the system of American government is equal to the strength needed to declare war."

Algorithmically, it would involve calculating your own strength in a field, and then ... (read more)
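The truncated algorithm is simple enough to sketch. A minimal illustration, where the fields, standards, and numbers are all invented for the example:

```python
# Hypothetical sketch of "can" as a strength comparison, per the comment:
# "X can Y" means X's strength in Y's field meets that field's standard.
STANDARDS = {
    "speak Chinese like a native": 9.0,
    "declare war": 10.0,
    "lift that box": 3.0,
}

def can(strengths: dict[str, float], task: str) -> bool:
    """Return whether the agent's strength in the task's field
    meets or exceeds the standard for that task."""
    return strengths.get(task, 0.0) >= STANDARDS[task]

me = {"lift that box": 5.0, "speak Chinese like a native": 2.5}
print(can(me, "lift that box"))                # True
print(can(me, "speak Chinese like a native"))  # False
```

The design question this raises is where each field's standard comes from, since "can" silently indexes a different one per context.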

0jschulter
I approached it similarly (as part of a more general attempt, since this is a minor use of the word), positing that "I could lift that box over there" is a comparison between the physical prowess necessary to complete the task and the amount I currently possess. In Eliezer's formulation, this is equivalent to determining reachability with constraints, but it's more of an example of the general procedure than an explanation of it, unfortunately. I'm glad to see that someone else was thinking similarly, though.
Yvain2170

"But I'd agree that if a scientific understanding destroyed Keats's sense of wonder, then that was a bug in Keats"

If Keats could turn his wonder on and off like a light switch, then clearly he was being silly in withholding his wonder from science. Since science is clearly true, in order to maximize his wonder Keats should have pressed the "off" button for wonder based on ideas like rainbows being Bifrost the magic bridge to Heaven, and the "on" button for wonder based on science.

But Keats, and the rest of us, can't turn wonde... (read more)

1DanielLC
Shouldn't he have just left it on, all the time?
Yvain2170

I had a professor, David Berman, who believed some people could image well and other people couldn't. He cited studies by Galton and James in which some people completely denied they had imaginative ability, and other people were near-perfect "eidetic" imagers. Then he suggested that psychological theories denying imagination were mostly developed by those who could not themselves imagine. The only online work of his I can find on the subject is http://books.google.co.jp/books?id=fZXoM80K9qgC&pg=PA13&lpg=PA13&ots=Zs03EkNZ-B&sig=2eVzzMm... (read more)

5DilGreen
It's been a few years, but the answer is now - yes. Here's a link to a New Scientist article from earlier this year. I'm afraid there's a pay barrier: https://www.newscientist.com/article/2083706-my-minds-eye-is-blind-so-whats-going-on-in-my-brain/ The article documents recent experiments and thinking about people who are poor at or incapable of forming mental pictures (as opposed to manipulating concepts); about 2 to 3% report this. Test yourself here: http://socrates.berkeley.edu/~kihlstrm/MarksVVIQ.htm

No large-N experiments, but Feynman, in one of his autobiographies, tests this with a friend. One of them hears numbers; the other sees them. They are unable to multitask within the domain they use to process numbers. I, for one, hear numbers: I can count while performing visual tasks. My father sees them, so he cannot. He can, however, speak and count at the same time, which I find amusing.