If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Open thread, Aug. 03 - Aug. 09, 2015

Emotiv EPOC give-away:

So back in March 2013 or so, another LWer gave me a "Special Limited Edition Emotiv EPOC Neuroheadset"/"Research Edition SDK". The idea was that I could maybe use it for QS purposes like meditation or quantifying the mental effects of nootropics. EEG headsets turn out to be a complicated area with a lot of unfamiliar statistics & terminology, and I never quite got around to making any use of it, so it's been sitting on my desk gathering dust ever since.

I'm not doing as much QS stuff these days and it's been over two years without a single use, so it's time I admit that it's unlikely I'm going to use it any time soon, either.

I might as well ship it to another American LWer who might get some use out of it. If you're interested, email me.

EDIT: it's taken

5Elo
Upvoted for good cultural standards.

Here are the slides from my talk on logical counterfactuals at the Cambridge/MIRI workshop in May 2015. I'm planning to give a similar talk tomorrow at the Google Tel Aviv office (meetup link). None of the material is really new, but I hope it shows that basic LWish decision theory can be presented in a mathematically rigorous way.

2Ronny Fernandez
This is super interesting. Is this based on UDT?
4cousin_it
Yeah, it's UDT in a logic setting. I've posted about a similar idea on the MIRI research forum here.

Speed matters: Why working quickly is more important than it seems

An interesting blog post which points out additional benefits of doing things quickly. A sampler:

The obvious benefit to working quickly is that you’ll finish more stuff per unit time. But there’s more to it than that. If you work quickly, the cost of doing something new will seem lower in your mind. So you’ll be inclined to do more.

The converse is true, too. If every time you write a blog post it takes you six months, and you’re sitting around your apartment on a Sunday afternoon thinking of stuff to do, you’re probably not going to think of starting a blog post, because it’ll feel too expensive.

What’s worse, because you blog slowly, you’re liable to continue blogging slowly—simply because the only way to learn to do something fast is by doing it lots of times.

This is true of any to-do list that gets worked off too slowly. A malaise creeps into it. You keep adding items that you never cross off. If that happens enough, you might one day stop putting stuff onto the list.

2Viliam
Also, in terms of reinforcement learning, if you work quickly, you will get more rewards per unit of time, and the rewards will be closer in time to the start of the work (shorter time delay means better reinforcement).
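A toy sketch of the timing effect, assuming exponential discounting with a made-up discount factor:

```python
# Toy illustration (not from the comment): reinforcement felt for finishing
# a task, discounted exponentially by the delay between work and reward.
# The discount factor gamma is made up.
def felt_reinforcement(reward: float, delay_days: float, gamma: float = 0.9) -> float:
    return reward * gamma ** delay_days

print(felt_reinforcement(10, delay_days=2))   # quick turnaround: ~8.1
print(felt_reinforcement(10, delay_days=30))  # slow turnaround: ~0.4
```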

The problem of “me” studies by Joseph Heath

So what I would like to discuss today is just one strand or tendency, that often gets described as political correctness, but that is more precisely known as the problem of “me” studies.

Although described in political terms, biases caused by "me" studies also affect other fields, such as philosophy.

9Vaniver
Very good article. "Me" studies refers to, basically, studying yourself--which gets into topics of identity politics. (Instead of just studying your life, you might, say, study the history of your racial group in your country.) But the core of it is a simple model of how discussions get radicalized when people are studying oppression. The post-script is also fascinating reading, because someone objects to a minor comment in the first post in a way that highlights the underlying dynamics (see the first comment, possibly by the author of the email). The next post on the same topic, On the problem of normative sociology, is also well worth reading. (On an unrelated, but still rationalist, topic, see his post on the implications of psychology on consumerism / climate change.)

see his post on the implications of psychology on consumerism / climate change

Yes, this is a frequent rationality mistake, sometimes so difficult to explain to people outside of LW.

Essentially, the world is a system of gears. To understand some activity that happens in the world, look at the gears, what they do, and how they interact. Don't search for a mysterious spirit responsible for the activity, if the activity can be fully explained by the gears.

This is a simple application of naturalism to economics (and therefore to politics, because often politics = economics + value judgements). Yeah, but many people fail hard at naturalism, even those who call themselves atheists.

Unfortunately, seeing the world as a system of gears is often considered a "right-wing" position; and the "left-wing" position is calling out the various evil spirits. (I am not saying that this is inherently a left-wing approach; possibly just a recent fashion.) As if people fail to coordinate to solve hard problems merely because evil corporate wizards make them do so using magical brainwashing powers, instead of simply everyone optimizing locally for themselves.

"Me" studie

... (read more)
9HungryHippo
You put your finger on something I've been attempting to articulate. There's a similar idea I've seen here on LessWrong. That idea said, approximately, that it's difficult to define what counts as a religion, because not all religions fulfill the same criteria; but a tool that seems to do the job you want is to separate people (and ideas) based on the question "is mind made up of parts, or is it ontologically fundamental?" This seems to separate the woo from the non-woo.

My mutation of this idea is that there are fundamentally two ways of explaining things. One is the "animistic" or "intentional stance" (cf. Daniel Dennett) view of the world; the other is the "clockwork" view of the world.

In the animistic view, you explain events by mental (fundamentally living) phenomena. Your explanations point towards some intention. God holds his guiding hand over this world and saved the baby from the plane crash because he was innocent, and God smote America because of her homosexuals. I won the lottery because I was good. Thunderclaps are caused by the Lightningbird flapping his wings, and lightning-flashes arise when he directs his gaze towards the earth. Or perhaps Thor is angry again, and is riding across the sky. Maybe if we sacrifice something precious to us, a human life, we might appease the gods and collect fair weather and good fortune.

Cause and effect are connected by mind and intention. There can be no unintended consequences, because all consequences are intended, at least by someone. Whatever happens was meant (read: intended) to happen. If you believe that God is good, this gives comfort even when you are under extreme distress. God took your child away from you because he wanted her by his side in heaven, and he is testing you only because he loves you. If you believe in no God, then bad things happen only because some bad person with bad intentions intended them to happen. If only we can replace them with good people with good intentions, the ills
3Lumifer
I don't know if it's a good separation as stated. Let me illustrate with a 2x2 table:

Earthquake in California: God punished sin (animist) -- The tectonic plate moved (clockwork)
Alice went for a coffee: Alice wants coffee (animist) -- A complicated neuro-chemical mix reacting to some set of stimuli made Alice go get coffee (clockwork)

The problem is that I want the clockwork description for the earthquake, but I want the animist description for Alice. The clockwork description for Alice sounds entirely unworkable.

The way you set it up, the animist believes that there is no such thing as "by nature" and that God's will decides all, including who will be friends and who will not. Don't see that. The clockworker believes we will do whatever the gears will push us to do. Clockworkers are determinists, basically.
3Viliam
We should use explanations of the type "the entity is a human, they think and act like a human" for humans, and for nothing else. (Although in some situations it may be useful to also think about a human as a system.) The most frequent error, in my opinion, is modelling a group of humans as a single human. Maybe a useful aid for intuition would be to notice when you are using a grammatical singular for a group of people, and replace it with a plural. E.g. "government" -> "politicians in the government"; "society" -> "individuals in the society"; "educational system" -> "teachers and students", etc.
2Lumifer
I think it's a bit more complicated. I see nothing wrong with modeling a group of humans as a single entity which has, say, particular interests, traditions, incentives, etc. There are big differences between "government" and "politicians in the government" -- an obvious one would be that politicians come and go, but the government (including a very large and very influential class of civil servants) remains. I am not saying that we should anthropomorphise entities, but treating them just as a group of humans doesn't look right either.
4Viliam
Such a model ignores e.g. minorities which don't share the interests of the majority, or the internal fighting between people who have the same interests but compete with each other for scarce resources (such as status within the group). As a result, the group of humans modelled this way will seem like a conspiracy, and -- depending on whether you choose to model all failures of coordination as "this is what the entity really wants" or "this is what the entity doesn't want, but does it anyway" -- either evil or crazy.
4Lumifer
Well, let's step back a little bit. How good a model is cannot be determined without specifying purpose of this model. In particular, there is no universally-correct granularity -- some models track a lot of little details and effects, while others do not and aggregate all of them into a few measures or indicators. Both types can be useful depending on the purpose. In particular, a more granular model is not necessarily a better model. This general principle applies here as well. Sometimes you do want to model a group of humans as a group of distinct humans, and sometimes you want to model a group of humans as a single entity.
1Dagon
It's a bit more complicated, but still basically true: a group is not very well modeled as an individual. Heck, I'm not sure individual humans have sufficient consistency over time to be well-modeled as an individual. I suspect that [Arrow's Theorem](https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem) applies to subpersonal thinking modules as well as it does to whole people. A single entity which can believe and act simultaneously in contradictory ways is not really a single entity, is it?
0Lumifer
See my answer to Viliam...
0HungryHippo
As stated, my comment is more a vague suggestion than a watertight deduction from first principles. What I intend to suggest is that just as humans vary along the dimensions of aggression, empathy, compassion, etc., so too do we vary according to what degree, and when, we give either explanation (animistic/clockwork) primacy over the other. I'm interested in these modes of explanation more from the perspective of their being a psychological tendency than of their giving rise to a self-consistent world view.

In the mental operations of humans there is a tendency to say "here, and no longer, for we have arrived" when we explain phenomena and solve problems. For some this is when they have arrived at a kind of "spirit"; for some it is when we arrive at "gears". For some it is gears six days a week, and spirits on Sunday. The degree to which you seek spirit-explanations depends on the size and complexity of the physical system (a speck of dust, a virus, a bacterium, a single-celled organism, an ant, a frog, a mouse, a cat, a monkey, a human), and also the field of inquiry (particle physics, ..., sociology). And it probably depends on some personal nature-and-nurture quality. Sometimes explanations are phrased in terms of spirits, sometimes gears.

I'm also not saying that the world-views necessarily contradict each other (in that they deny the existence of phenomena the other asserts), only that each world-view seeks different post-hoc rationalizations. The animist will claim that the tectonic plate moved because God was wrathful and intended it. The clockworker will claim that God's wrath is superfluous in his own model of earthquakes. Whether either world-view adds something the other lacks is beside the point; only that each desires to stop at a different destination, psychologically.

In real life I once heard, from an otherwise well-adjusted member of society, that the Devil was responsible for the financial crisis. I did not pry into what he meant by this, but he se
0Lumifer
Do you think your distinction maps to the free will vs. determinism dimension? I think what makes me confused is that religion is heavily mixed into the animistic view. Can the animistic view (in particular with respect to natural phenomena) exist without being based on religion?
0HungryHippo
I don't have any hard and fast answers, so I cannot be completely sure. My guess is that a "spirit" person is more likely to believe in free will, while a "gear" person is more likely to believe in the absence of free will. What free will means precisely, I'm not sure, so it feels forced for me to claim that another person would believe in free will, when I myself am unable to make an argument that is as convincing to me as I'm sure their arguments must be to them.

I haven't thought much about free will, but the only way I'm personally able to conceive of it is that my mind is somehow determined by brain-states which in turn are defined by configurations of elementary particles (my brain/my body/the universe) with known laws, if unknown (in practice) solutions. So personally I'm in the "it's gears all the way down" camp, at least with the caveat that I haven't thought about it much. But there are people who genuinely claim to believe in free will and I take their word for it, whatever those words mean to them. So my guess at the beginning of the paragraph should be interpreted as: if you ask a "spirit" person, he will most likely say "yes, I believe in free will", while a "gear" person will most likely say "no, I do not believe in free will." The factual content of each claim is a separate issue. Whether either world-view can be made self-consistent is a further issue.

I think a "spiritist" would accept the will of someone as a sufficient first cause of a phenomenon, with the will being conceived of only as a "law unto itself". When it comes to determinism, I think "gearists" are more likely to be determinists, since that is what has dominated all of the sciences (except for quantum physics). "Spiritists", on the other hand, I don't know. If God has a plan for everything and everyone, that sounds pretty deterministic. But if you pray for him to grant you this one wish, then you don't know whether he will change the course of the universe for your benefit or no, so I would
3[anonymous]
I'm interested in your characterization of left vs. right, as it seems to me both parties make this mistake equally. What examples were you thinking of when making that characterization?
8Viliam
Different countries have different definitions of left and right. There seems to be some system, but also... well, let me give you an example: In Slovakia, the political party promoting marijuana legalization and homosexual marriage was labeled by its opponents as right-wing, because... well, they also supported the free market, and supporting the free market means opposing communists, and since communists are left-wing, then logically if you oppose them, you must be right-wing. Having exactly the same opinions in the USA would make one left-wing, if I understand it correctly.

This said, the examples I have in mind may be rather atypical for most LW readers. Thinking about my country, I would roughly classify political parties into three groups, listed from most powerful to least powerful:

1) Communists, including some small Nazi-ish parties, because they have a similar ideology (defend the working class, blame evil people for everything bad; the difference is that for Communists the evil people are Americans and entrepreneurs, while for Nazis they are Americans, Hungarians, Jews, and Gypsies; also both are strongly pro-Russia).

2) Liberals/Libertarians, basically anyone who knows Economics 101 and wants to have some free market, and in extreme cases even things like marijuana and gay marriage.

3) Catholics, who only care about more power and money for the Catholic church, and are willing to support either of the previous two groups if they in return give them what they want (so far they have mostly joined the Liberals, but it has always created a lot of tension within the government).

So for me, "left-wing" usually means (1), and "right-wing" usually means (2) + (3). In my country, knowing Economics 101 already gets you labeled "right-wing", and if you say things like "if you increase taxes, you will punish the rich, but you will also make stuff more expensive for the poor" or "if you increase the minimum wage, some people will get higher salaries, but other people will get fired or unable to
8Username
In many non-western countries the very dichotomy between left and right doesn't make any sense. Westerners make a lot of fatal mistakes when they try to project their limited understanding onto non-Western countries.
3Good_Burning_Plastic
On reading that comment on Top Comments Today before having read its ancestors, I thought you were talking about Australian Aboriginal cultures who use compass directions even in everyday situations where Europeans would use relative directions.
4Vaniver
The only American political party like that is the libertarian party, which is consistently considered right-wing. (That is, the combination of marijuana legalization, gay rights, and free-market; you do find people in favor of marijuana legalization, gay rights, and less free market on the left.)
2Viliam
You are right. Well, in Slovakia the libertarian-ish party is the only one that would touch the topics of marijuana and gay rights. We do not have a "marijuana, gay rights, less free market" party, and maybe not even the voters who would vote for such a party. Any kind of freedom is right-wing (although not everything right-wing is pro-freedom).
2[anonymous]
Communists (in the Marxist sense) definitely take a systems-thinking, gear-like approach, not a magical "evil people do evil things" approach. The entire idea behind Marxism is that there's a systemic problem with capitalism where the rich own the means of production. This will lead to systemic unrest among those who don't own the means of production, which will eventually lead to a revolution. I would say the problem is not systems thinking in this case, but lack of empiricism. Communism has been proven again and again to lead to corruption, but that fact is ignored by communists because it contradicts their systemic models. That's not just a leftist problem though. For instance, it's been shown again and again that raising the minimum wage doesn't lead to unemployment, but that's been ignored again and again by the right because it contradicts their systemic model.
6Viliam
True for textbook Communism, but it doesn't work for politicians. What is a Communist politician supposed to say to their voters: "Let's sit here with our hands folded and wait for the inevitable collapse of capitalism"? They must point fingers. They must point fingers more than their competitors for the same role.

And when the Communists rule the country... they empirically can't deliver what Marx promised. So they must find excuses. Stuff like "Socialism is the initial stage of Communism, still containing some elements of the capitalist system such as money; just wait a few years longer and you will see the final stage". In reality, you have a pseudo-capitalist system with a dictatorship of the Communist Party, state-owned factories and regulated prices, mandatory employment and press censorship... and you stay there for decades, because... well, again, you must point fingers. American imperialists, internal traitors, everyone is trying to destroy our 'freedom' and happiness.

EDIT: But probably more important than all this is that you have to "sell" Communism to people who are prone to magical thinking. So whatever the original theory was, as soon as it reaches the masses, your average supporter will think magically.
0[anonymous]
Yes, I agree with all of this. It's essentially restating my initial point, which is that the communists' problem is not that they don't think in systems - it's that they don't update their systems based on empirical results.
4Lumifer
That's not true. Or, rather, it's only true if you cherry-pick your economics papers. In fact, there is considerable debate as to the economic consequences (especially beyond short-term) of the minimum wage and the question is far from settled.
-1[anonymous]
Well, yes. Which still means that the right wing stance is at best, incomplete. Which isn't the stance you see any of the politicians taking, at least not in the US.
-3Lumifer
I haven't seen anyone (well, anyone who isn't drooling or foaming) claim "completeness" :-)
-2[anonymous]
I think it's implied in the arguments you see them making.
2Lumifer
Oh, like this one?
-1[anonymous]
Yes, as I've admitted in the previous comments, that's not true. And yes, I've seen the reverse attitude many times.
1Houshalter
I don't really agree with that link. Like the picture from a random real estate listing he likely cherry-picked, and just assumed the owners were environmentalists because of where they live. And assumed they use a lot of gas because they have ATVs, which makes no sense at all. The mileage on those is actually not terrible, and they are typically not driven very long distances. Those motorcycles are even better than cars on fuel consumption.

But even the idea that people can't be for the environment if they don't own an electric car and live in a tiny house, or whatever. I view environmentalist lifestyles as extremely pointless. Your individual sacrifice won't contribute even the tiniest drop in the bucket. Even if everyone did it, the price of gas and coal would just go down, and other countries would buy it - the total consumption would remain the same. They will keep mining and pumping it out of the ground until it's no longer possible to do so. The only way to solve the problem is to force them to keep it in the ground, or at least heavily tax it as it is taken out. And that's a much more reasonable stance. You might not want to sacrifice individually and pointlessly. But you would be willing to do so if everyone else has to.
3Vaniver
Your central point, that it's a collective action problem, is Heath's main point, as I read his article. He points out that people do not live environmentalist lifestyles as evidence that they will not vote for making non-environmentalist lifestyles expensive, and thus Klein's claim that democracy and local politics will help solve this issue is fundamentally mistaken. While I agree that he doesn't have sufficient information to conclude the amount of gas they use, it's certainly fair to claim that their lifestyle is fossil fuel intensive, which was his claim. I have friends with guns; the amount of gunpowder consumption they require is better measured by how frequently they go to the range, not the number of guns they own. But it still seems fair to argue that using guns is a gunpowder-intensive hobby. ...other countries don't count as everyone?
0Houshalter
This is possibly true, but not necessarily so. We know nothing about these people's political beliefs or what issues they care about. He's making massive judgements about them from a single picture in a real estate listing. Even if we are accepting the premise that people only vote in their self interest (they don't), these people are clearly very wealthy. Increases in energy prices will have a lower impact on them. "intensive" is a pretty strong word. Clearly they use gas, but everyone uses gas. Just because they have ATVs and motorcycles doesn't mean much. As I said, they get good mileage, sometimes better than cars, and they aren't going to be driving them long distances. A single person with a long commute is likely going to use far more gas than them.
0Vaniver
Would you describe this as "necessarily so"? Which is why it might be relevant that this is in a suburb of Toronto--i.e. someone who lives here and works in the city probably has an hour-long commute.
0Houshalter
Because they own a giant, expensive home, with a garage filled with expensive toys. Perhaps, but that is not what the article said at all. He was judging them entirely based on a picture of their garage, and the fact that they owned motorcycles and ATVs. He didn't try to estimate how much gas they use per day. He looked at a photo and noticed it didn't have the aesthetic of environmentalism.
0Vaniver
Wealth typically refers to one's assets minus one's liabilities; evidence of assets does not suffice to demonstrate wealth. My point was that you are putting forward a reasonable general claim that is not necessarily true--even if this particular home seller is underwater on their mortgage, similar people exist that are not and one would expect the latter group to be more likely--at the same time that you are criticizing Heath for putting forward a reasonable general claim that is not necessarily true--people who own multiple ATVs and motorcycles and live an hour from the city probably consume more gasoline than the average Canadian and are unlikely to be a strong supporter of the environmentalist political coalition.
0Lumifer
It seems Heath is talking about what Scott Alexander calls Moloch.
1[anonymous]
Great share
1ChristianKl
Even when a law professor thinks that those purchases should be illegal, I find it hard to imagine that the legal system moves against index funds.
[-][anonymous]100

One of the most, if not the most, effective ways for me to focus on a particular task is to open Paint (on Windows), write in one word what I'm doing right now, e.g. "complexity" for an online course on complexity I'm taking, and leave it like this on the side of my screen (or on the second screen), so that it's always in my field of view but doesn't interfere with anything.

This creates a really weird effect: whenever I want to get distracted by something, it automatically and almost completely effortlessly tells my mind to focus on the task instead, and doesn't let me get distracted.

Can anybody check how well it generalises for them?

1Elo
Look into something called a kanban board. Consider segments of:
* "tasks to do"
* "single task doing now"
* "single next task"
* "tasks awaiting external input"
* "completed tasks"
Where a task X seems hard, break it down into smaller tasks (the task of breaking X down into smaller tasks). As a bonus: make an estimate of how long each task will take. After completing, compare your predicting ability and update your time-guess methods. (A minimal sketch of such a board follows below.)
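A minimal sketch of such a board, assuming the five segments above; the task names and hour estimates are hypothetical:

```python
# Minimal kanban-board sketch with the five segments named above.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    estimate_hours: float      # your guess before starting
    actual_hours: float = 0.0  # filled in when done

@dataclass
class KanbanBoard:
    to_do: list = field(default_factory=list)
    doing_now: list = field(default_factory=list)       # keep at most one task here
    next_up: list = field(default_factory=list)         # keep at most one task here
    awaiting_input: list = field(default_factory=list)
    completed: list = field(default_factory=list)

    def finish_current(self, actual_hours: float) -> None:
        """Move the current task to completed and compare estimate vs. actual."""
        task = self.doing_now.pop()
        task.actual_hours = actual_hours
        self.completed.append(task)
        print(f"{task.name}: estimated {task.estimate_hours}h, took {actual_hours}h")

board = KanbanBoard()
board.doing_now.append(Task("write draft", estimate_hours=2.0))
board.finish_current(actual_hours=3.5)  # use this gap to update your time-guess methods
```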
0[anonymous]
I'm not sure how this is relevant to my comment. I do use Complice for what you're describing, but the gist of my comment is the need for the reminder of the "single task doing right now" to be in the field of view, lest it get forgotten.
1Elo
I thought you were looking for related task focus strategies. My mistake. I have not found the need for the instruction to be in the field of view. I don't imagine very much of an impact if I were to place it in my field of view. I would rather use as much of my field of view as possible for the task at hand, but I will try it and get back to you.
0[anonymous]
That'd be awesome!
1Elo
update: it didn't do much for me. need to try a few more times.
0[anonymous]
Doesn't complice have this exact feature?
0[anonymous]
Almost. It doesn't have a "tasks awaiting external input" section :)
0[anonymous]
I meant, doesn't Complice have the "single task doing right now" function?
0[anonymous]
So I tried using this function and it's way too cluttered and distracting for me, so I guess the answer is no. The purity of Paint turned out to be pretty important.
0[anonymous]
Oh.

Wikipedia on Chalmers, consciousness, and zombies:

Chalmers argues that since such zombies are conceivable to us, they must therefore be logically possible. Since they are logically possible, then qualia and sentience are not fully explained by physical properties alone.

That kind of reasoning allows me to prove so many exciting things! I can imagine a world where gravity is Newtonian but orbits aren't elliptical (my math skills are poor but my imagination is top notch), therefore Newtonian gravity cannot explain elliptical orbits. And so on.

Am I being ... (read more)

4Elo
I believe there is a misunderstanding in the word. If you taboo it, you might get:

Conceivable1 = imaginable
Conceivable2 = describable with a human brain
Conceivable3 = all the requirements for it to be feasible, and the understanding of how to make it so, can be developed within one human brain

where for Conceivable3, if we can conceive3 of it, it is logically possible.
0cousin_it
I used the first meaning. Doesn't Chalmers use it as well?
1Elo
Things that are imaginable are not therefore logically possible. I find it an unreasonable and untrue leap of reasoning. Does that make sense?
4fubarobfusco
In fact, there are quite a lot of concepts that are imaginable but not logically possible. Any time a mathematician uses a proof by contradiction, they're using such a concept. We can state very clearly what it would mean to have an algorithm that solves the halting problem. It is only because we can conceive of such an algorithm, and reason from its properties to a contradiction, that we can prove it is impossible. Or, put another way, yes, we can conceive of halting solvers (or zombies), but it does not follow that our concepts are self-consistent.
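A sketch of that diagonalization in code, assuming a hypothetical halts() oracle; halts is deliberately left undefined, since its impossibility is the point:

```python
# Sketch of the halting-problem diagonalization. halts(program, arg) is the
# conceivable-but-impossible oracle: it is never defined here, because the
# argument shows no such function can exist.

def paradox(program):
    if halts(program, program):  # the assumed oracle
        while True:              # ...then run forever
            pass
    # ...otherwise halt immediately

# Consider paradox(paradox). If halts(paradox, paradox) returns True,
# paradox loops forever, so it does not halt; if it returns False, paradox
# halts. Either answer makes the oracle wrong -- so the perfectly
# conceivable halts() is not logically possible.
```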
0cousin_it
Yeah, that's probably right. I'm not sure what "logically possible" means to philosophers, so I tried to give a reductio ad absurdum of the argument as a whole, which should work for any meaning of "logically possible".
3IffThen
Logically possible just means that "it works in theory" -- that there is no logical contradiction. It is possible to have an idea that is logically possible but not physically possible; e.g., a physicist might come up with an internally consistent theory of a universe in which the speed of light in a vacuum is 3 mph. These are in contrast to logically impossible worlds, the classic example being a world that contains both an unstoppable force and an immovable object; these elements contradict each other, so cannot both occur in the same universe.
2cousin_it
OK. Is a world with Newtonian gravity and non-elliptical orbits logically possible? Is a world where PA proves ¬Con(PA) logically possible? Is a world with p-zombies logically possible? Too often, people confuse "I couldn't find a contradiction in 5 minutes" with "there's provably no contradiction, no matter how long you look". The former is what philosophers seem to use routinely, while the latter is a very high standard. For example, our familiar axioms about the natural numbers provably cannot meet that standard, due to the incompleteness theorems. I'd be very surprised if Chalmers had an argument that showed p-zombies are logically possible in the latter sense.
2IffThen
This is shorthand for "in the two decades that Chalmers has been working on this problem, he has been defending the argument that..." You might look at his arguments and find them lacking, but he has spent much longer than five minutes on the problem.
0[anonymous]
I get the impression Chalmers is using something like Conceivable1 for the "zombies are conceivable" part of the arguments then sneakily switching to something more like Conceivable3 for the "conceivable, therefore logically possible" part.
-4IffThen
I suspect you already know this, but just in case, in philosophy, a zombie is an object that can pass the Turing test but does not have internal experiences or self-awareness. Traditionally, zombies are also physically indistinguishable from humans.
4Manfred
The truth is usually simple, but arguments about it are allowed to be unboundedly complicated :P Which is to say, I bet Chalmers has heard this argument before and formulated a counterargument, which would in turn spawn a counter-counterargument, and so on. So have you "proven" anything in a publicly final sense? I don't think so. Doesn't mean you're wrong, though.
1iarwain1
The question is, how do I tell (without reading all the literature on the topic) if my argument is naive and the counterarguments that I haven't thought of are successful, or if my argument is valid and the counterarguments are just obfuscating the truth in increasingly complicated ways?
1[anonymous]
You either ask an expert, or become an expert. Although I'd be wary of philosophy experts, as there's not really a tight feedback loop in philosophy.
3Kaj_Sotala
My default assumption is that if someone smart says something that sounds obviously false to me, either they're giving their words different meanings than I am, or alternatively the two-sentence version is skipping a lot of inferential steps. Compare the cautionary tale of talking snakes.
1Jiro
If the tale of talking snakes really showed what it is supposed to show, we'd see lots of nonreligious people refuse to accept evolution on the grounds that evolution is so absurd that it's not worth considering. That hardly ever happens; somehow the "absurdity" is only seen as absurd by people who have separate motivations to reject it. I don't think that apes turning into men is any more absurd than matter being composed of invisible atoms, germs causing disease, or nuclear fusion in stars. Normal people say "yeah, that sounds absurd, but scientists endorse them, I guess they know what they're doing".
3Viliam
You certainly are appropriating higher status than you deserve from the academic point of view. :P
0gjm
If you are, then so am I, because that was also my immediate reaction on hearing this conceivability argument.

This is an important paper regarding the foundations of probability, in particular section 2.5, which lists all the papers that previously dealt with fixing the holes in Cox's theorem.

I have heard (from the book Global Catastrophic Risks) that life extension could increase existential risk by giving oppressive regimes increased stability by decreasing how frequently they would need to select successors. However, I think it may also decrease existential risk by giving people a greater incentive to care about the far future (because they could be in it). What are your thoughts on the net effect of life extension?

8pcm
One of the stronger factors influencing the frequency of wars is the ratio of young men to older men. Life extension would change that ratio to imply fewer wars. See http://earthops.org/immigration/Mesquida_Wiener99.pdf. Stable regimes seem to have less need for oppression than unstable ones. So while I see some risk that mild oppression will be more common with life extension, I find it hard to see how that would increase existential risks.
4G0W51
Oppression could cause an existential catastrophe if the oppressive regime is never ended.
3knb
But why do young men cause wars (assuming they do)? If everyone remains biologically 22 forever, are they psychologically more similar to actual 22 year-olds or to whatever their chronological age is? If younger men are more aggressive due to higher testosterone levels (or whatever) agelessness might actually have the opposite effect, increasing the percentage of the male population which is aggressive.
3Username
Radical life extension might lead to overpopulation and wars that might escalate to existential risk level danger.
-3[anonymous]
Is there anything that can't somehow be spun into increasing existential risk? The biggest existential risk is being alive at all in the first place.
0G0W51
Yes, but I'm looking to see if it increases existential risk more than it decreases it, and if the increase is significant.

Seeking plausible-but-surprising fictional ethics

How badly could a reasonably intelligent follower of the selfish creed, "Maximize my QALYs", be manhandled into some unpleasant parallel to a Pascal's Mugging?

How many rules-of-thumb are there, which provide answers to ethical problems such as Trolley Problems, give answers that allow the user to avoid being lynched by an angry mob, and don't require more than moderate mathematical skill to apply?

Could Maslow's Hierarchy of Needs be used to form the basis of a multi-tiered variant of utilitarianism... (read more)

2Illano
For story purposes, using a multi-tiered variant of utilitarianism based on social distance could lead to some interesting results. If the character were to calculate his utility function for a given being by something like Calculated Utility = Utility / (Degrees of Separation from me)^2, it would be really easy to calculate, yet come close to what people really use. The interesting part from a fictional standpoint could be if your character rigidly adheres to this function, such that you can manipulate your utility in their eyes by becoming friends with their friends. (E.g. the utility for me to give a random stranger $10 is 0 (assuming infinite degrees of separation), but if they told me they were my sister's friend, it may have a utility of $10/(2)^2, or $2.50.) It could be fun to play around with the hero's mind by manipulating the social web.
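A minimal sketch of that rule, assuming an undirected friendship graph; the graph, names, and dollar amounts are made up for illustration:

```python
# Inverse-square social-distance discounting over a friendship graph.
from collections import deque

def degrees_of_separation(graph, me, other):
    """Breadth-first search for the shortest path length between two people."""
    seen, queue = {me}, deque([(me, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == other:
            return dist
        for friend in graph.get(person, ()):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return float("inf")  # no connection: utility discounts to zero

def discounted_utility(graph, me, other, raw_utility):
    d = degrees_of_separation(graph, me, other)
    return raw_utility if d == 0 else raw_utility / d**2

# The $10 example: a stranger connected to me only via my sister.
graph = {"me": ["sister"], "sister": ["me", "friend"], "friend": ["sister"]}
print(discounted_utility(graph, "me", "friend", 10.0))  # 10 / 2^2 = 2.5
```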
1DataPacRat
I think I once heard of a variant of this, only using degrees of kinship instead of social connections. Eg, direct offspring and full siblings are discounted to 50%, grandchildren to 25%, and so forth.
0DataPacRat
I was just struck by a thought, which could combine the two approaches, by applying some sort of probability measure to one's acquaintances about how likely they are to become a blood relative of one's descendants. The idea probably needs tweaking, but I don't think I've come across a system quite like it before... Well, at least, not formally. It seems plausible that a number of social systems have ended up applying something like such a heuristic through informal social-evolutionary adaptation, which could provide some fodder for contrasting the Bayesian version against the historically-evolved versions. Anyone have any suggestions on elaborations?
3Illano
Sounds somewhat like the 'gay uncle' theory, where having 4 of your siblings' kids pass on their genes is equivalent to having 2 of your own pass on their genes (each sibling's kid shares a quarter of your genes, versus half for your own child), but with future pairings included, which is interesting. Stephen Baxter wrote a couple of novels that explored the first theory a bit (the Destiny's Children series), where gur pbybal riraghnyyl ribyirq vagb n uvir, jvgu rirelbar fhccbegvat n tebhc bs dhrraf gung gurl jrer eryngrq gb. The addition of future contributors to the bloodline as part of your utility function could make this really interesting if set in a society that has arranged marriages and/or engagement contracts, as one arranged marriage could completely change the outcome of some deal. Though I guess this is how a ton of history played out anyway, just not quite as explicitly.
1DanielLC
They'd be just as subject to it as anyone else. It's just that instead of killing 3^^^3 people, they threaten to torture you for 3^^^3 years. Or offer 3^^^3 years of life or something. It comes from having an unbounded utility function. Not from any particular utility function.
[-][anonymous]50

Does anybody else get the sense that in terms of karma, anecdotes seem to be more popular than statistical analysis when rating comments? It seems like a clear and common source of bias to me. Thoughts?

Does anybody else get the sense that in terms of karma, anecdotes seem to be more popular than statistical analysis when rating comments?

Are you basing this observation on anecdotes or on statistical analysis? :-P

1Username
Bikeshed effect
1[anonymous]
I get the opposite sense.
0satt
Same. I'd guess that ceteris paribus, comments based on statistical analysis would get more upvotes than anecdotes; it's just that ceteris ain't paribus. A big part of a comment's karma is how many (logged-in) people read the comment, and in a given thread early comments tend to get more readers than late comments. Assuming that posting a statistical analysis is more time-consuming than posting an anecdote (and I think on average it is), comments with statistical analysis are systematically disadvantaged because they're posted later. (This has definitely been my anecdotal experience. People seem to like comments where I dredge up statistics, but because I often post them as a thread winds down, or even after it's gone fallow, they're often less upvoted than their more-poorly-sourced parents.)
-5Gunslinger
[-][anonymous]50

Did some 5-min research out of curiosity.

Are major categories in abnormal psychology actually good labels, statistically?

Big 5 personality traits were discovered through factor analysis.

Terms like depression, anxiety, and personality disorder are in, or are entering, the common vernacular, but are of unknown origin.

Google searched for (one of: 'construct validity', 'factor analysis') + (one of: depression, anxiety, personality disorder) and selected relevant results on the visible half of the first page (didn't scroll down more than a flick).

Of those pages, closed tabs ... (read more)

3ChristianKl
The fact that you, as a layperson, don't understand the overlap doesn't indicate that a test doesn't test for a real thing. On the other hand, the DSM-V categories are likely not the best possible labels. Even the NIH declared that they are willing to fund studies that don't use them and try to find new categories. If you want to dig deeper into how to think about such terms, "How to Think Straight About Psychology" by CFAR advisor and professor of psychology Keith Stanovich is a good read.

I'd like a quick peer review of some low-hanging fruit in the area of effective altruism.

I see that donating blood is rarely talked about in effective altruism articles; in fact, I've only found one reference to it on Less Wrong.

I am also told by those organizations that want me to donate blood that each donation (one pint) will save "up to three lives". For all I know all sites are parroting information provided by the Red Cross, and of course the Red Cross is highly motivated to exaggerate the benefit of donating blood; "up to three"... (read more)

4Elo
To my knowledge, the line "up to three lives" is quoted because a blood sample can be separated into 3 parts, or 3 samples, to help with different problems. What is not mentioned often is the shelf-life of blood products: 3 months on the shelf and that pint is in the medical-waste basket. AKA zero lives saved. And further, if a surgery goes wrong and they need multiple transfusions to stabilise a person, the lives saved go into fractional numbers (0.5, 0.33, 0.25...). But those numbers are not pretty. Further, if someone requires multiple transfusions over their life, to save their life multiple times... There are numbers less than 1 (0); there are numbers smaller than a whole; and (not actually a mistake made here) real representative numbers don't often fall to a factor of 5 or 10 (5, 10, 50, 100, 1000).

Anyway, if you are healthy and able to spare some blood then it's probably a great thing to do. Ike's linked article does start to cover adverse effects of blood donation; I wonder if a study has been made into it. (http://www.ihn-org.com/wp-content/uploads/2014/04/Side-effects-of-blood-donation-by-apheresis-by-Hans-Vrielink.pdf comes as a source from Wikipedia on the prevalence of adverse effects.) (Oh shite, that's a lot more common than I expected.)

The risk I see is that donating blood temporarily disables you by a small amount. I would call it akin to being a little tipsy, a little sleep-deprived, a little drowsy, or a little low in blood pressure (oh wait, yea). Nothing bad happens by being a little drowsy, or a little sleep-deprived. It really depends on the whole-case of your situation as to whether something bad happens. (See: Swiss cheese model: https://en.wikipedia.org/wiki/Swiss_cheese_model )

The important question to ask is - can you take it? If yes, then go right ahead. If you are already under pressure from the complexities of life in such a way that you might be adversely burdening yourself to donate blood: Your life is worth more (even for
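A back-of-the-envelope sketch of this fractional accounting, with made-up numbers for the discard rate and pints per recipient:

```python
# Back-of-the-envelope sketch: expected lives saved per pint once discards
# and multi-pint recipients are accounted for. Both rates are hypothetical.
p_discarded = 0.10        # made up: fraction of pints that expire unused
pints_per_recipient = 3.0 # made up: average pints per life-saving episode

expected_lives_per_pint = (1 - p_discarded) / pints_per_recipient
print(expected_lives_per_pint)  # 0.3 -- well under the "up to three" headline
```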
3IffThen
I'm not sure where you got the 3-month figure from; in America we store the blood for less than that, no more than 6 weeks. It is true that the value of your donation is dependent on your blood type, and you may find that your local organization asks you to change your donation type (platelets, plasma, whole blood) if you have a blood type that is less convenient. I do acknowledge that this question is much more relevant for those of us who are type O-.
1Elo
I don't know. The number was in my head that a processed blood sample can last 3 months. entirely possible that it doesn't. " After processing, red cells can be stored for up to 42 days; plasma is frozen and can be stored for up to 12 months;" http://www.donateblood.com.au/faq/about-blood/how-long-until-my-blood-used
2ike
http://acesounderglass.com/2015/04/07/is-blood-donation-effective-yes/
2ChristianKl
Right...
0ike
How would you usually go about calculating marginal effectiveness?
2ChristianKl
In this case it seems like the marginal value of blood donation should be roughly what the organizations like the red cross are willing to pay to get additional blood donations. You could look at how often patients get less blood because of supply issues.
1IffThen
From the Freakonomics blog: "FDA prohibits any gifts to blood donors in excess of $25 in cumulative value". Various articles give different amounts for the price per pint that hospitals pay, but it looks like it's in the range of $125 in most cases.
0ChristianKl
Basically that means that the FDA thinks that putting that limit on blood donations won't reduce the amount of blood donation in a critical way that results in people dying.
0ike
That is briefly mentioned in the post, and in more detail in the comments. It does depend on certain efficiency assumptions about the Red Cross, though.
2ChristianKl
If you don't believe that the Red Cross is doing a good job on this, then researching its actual practices and openly criticising them could be high leverage. There's enough money in the medical system to pay a reasonable price for the blood that's needed.
1NancyLebovitz
I assume the effectiveness of blood donation is affected by whether someone has a rare blood type.
0ChristianKl
A core idea of EA is the marginal value of a donation. The marginal value of an additional person donating blood is certainly less than a life saved. Certainly not. Finding funding to have enough blood donations isn't a problem. Our medical system has enough money to pay people in times of shortage. But it doesn't want to pay people. The average quality of blood from people who have to be bribed is lower than the average quality from people who donate blood to help their fellow citizens.
-1IffThen
I think you are often right about the marginal utility of blood. However, it is worth noting that the Red Cross both pesters people to give blood (a lot, even if you request them directly not to, multiple times), and offers rewards for blood -- usually a t-shirt or a hat, but recently I've been getting $5 gift cards. Obviously, this is not intended to directly indicate the worth of the blood, but these factors do indicate that bribery and coercion are alive and well. EDIT: The FDA prohibits any gifts to blood donors in excess of $25 in cumulative value. It is also worth noting that there is a thriving industry paying for blood plasma, which may indicate that certain types of blood donation are significantly more valuable than others (plasma is limited use, but can be given regardless of blood type).

I stumbled across this document. I believe it may have influenced a young Eliezer Yudkowsky. He's certainly shown reverence for the author before.

This essay includes everything. A rant against frequentism and for the superiority of Bayes. A rant against modern academic institutions. A rant against mainstream quantum physics. A section about how mainstream AI is too ad hoc and not grounded in perfect Bayesian math. A closing section about sticking to your non-mainstream beliefs and ignoring critics.

I'm not really qualified to speak about most of it. The part ab... (read more)

0Manfred
I don't think he suggests Bayesian networks (which, to me, mean the causal networks of Pearl et al.). Rather, he is literally suggesting trying to learn by Bayesian inference. His comments about nonlinearity I think are just to the effect that one shouldn't have to introduce nonlinearity with sigmoid activation functions; one should get nonlinearity naturally from Bayesian updates.

But yeah, I think it's quite impractical. E.g. suppose you wanted to build an email spam filter, and wanted P(spam). A (non-naive) Bayesian approach to this classification problem might involve a prior over some large population of email-generating processes. Every time you get a training email, you update your probability that a generic email comes from a particular process, and what its probability was of producing spam. When run on a test email, the spam filter goes through every single hypothesis, evaluates its probability of producing this email, and then takes a weighted average of the spam probabilities of those hypotheses to get its spam / not-spam verdict. This seems like too much work.
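A toy sketch of that weighted-average scheme; the hypothesis population and all probabilities are made up for illustration:

```python
# Toy Bayesian model averaging for spam filtering: each hypothesis is an
# email-generating process; keep a posterior over hypotheses and average
# their spam verdicts. All numbers are made up.
hypotheses = [
    # (prior, P(this email | hypothesis), P(spam | hypothesis))
    (0.50, 0.001, 0.05),  # "normal correspondent" process
    (0.30, 0.004, 0.90),  # "bulk mailer" process
    (0.20, 0.002, 0.50),  # "mixed" process
]

# Posterior over hypotheses given the email (Bayes' rule, then normalize).
joints = [prior * likelihood for prior, likelihood, _ in hypotheses]
evidence = sum(joints)
posteriors = [j / evidence for j in joints]

# P(spam | email): weighted average of each hypothesis's spam probability.
p_spam = sum(post * spam for post, (_, _, spam) in zip(posteriors, hypotheses))
print(round(p_spam, 3))
```

The "too much work" complaint is visible even here: a realistic version would loop over an enormous hypothesis population for every email.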
0Houshalter
I don't know, that comment really seemed to suggest Bayesian networks. I guess you could allow for a distribution of possible activation functions, but that doesn't really fit what he said about learning the "exact" nonlinear function for every possible function. That fits more with bayes nets, which use a lookup table for every node. Your example sounds like a bayesian net. But it doesn't really fit his description of learning optimal nonlinearities for functions.

The book Global Catastrophic Risks states that it does not appear plausible that molecular manufacturing will not come into existence before 2040 or 2050. I am not at all an expert on molecular manufacturing, but this seems hard to believe, given how little work seems to be going into it. I couldn't find any sources discussing when molecular manufacturing will come into existence. Thoughts?

1[anonymous]
There are reasons very little work is going into it - the concept makes very little sense compared to manipulating biological systems or making systems that work similar to biological systems. See http://www.sciencemag.org/content/347/6227/1221.short or this previous post of mine: http://lesswrong.com/lw/hs5/for_fai_is_molecular_nanotechnology_putting_our/97rl

I realize there are sites dedicated to career discussions, but I like the advice I've seen lurking here. I'm currently interviewing for a remote-work technical position at a well-known Silicon Valley company. I'd be leaving a stable, somewhat boring, high-paying position that I've had for 10 years, for something much more exciting and intellectually challenging. I'm also old (late 40s). This particular company has a reputation for treating its employees well, but with SV's reputation for rampant ageism and other cultural oddities, what questions should I be asking and what advice would you give for evaluating the move, if an offer comes up?

1Dagon
I'm roughly your age, and have been working for 10 years at the same company (in the Seattle area, not SV, but we have offices there). Unlike your position, it's never been boring - I've been able to work on both immediate-impact and far-reaching topics, and there's always more interesting things coming down the pike. I mostly want to address the age-ism and cultural oddities issue. It definitely exists, and is worse in California than other places. However, it's a topic that can't be analyzed by averages or aggregates over a geographic area. It varies so much by company, by position/role within a company, and by individual interaction with the nearby-team cultures that you really can't decide anything based on the region. This is especially true for remote work - your cultural experience will be far different from someone living there. So, the questions you should be asking are about the expectations for your specific interaction with the employer and coworkers, rather than about the general HR-approved culture spiel you'll get if you ask generally.
0Strangeattractor
Yes, as Dagon says, it is very company-specific. Is there a way that you could talk to people who already work at the company who are not involved in the hiring process? If you are on LinkedIn, perhaps you could find out if you have some connections who would talk to you informally over the phone or in person. Even though you would be working remotely, it may be worth it to go visit the place in person to get a feel for things and observe things that they wouldn't tell you explicitly, before making a decision of this magnitude. Also, read the company's annual report. There are clues to its culture in there, and numbers that will help make sense of the company and the direction it is likely to take in the near future. Not enough people read the annual report when applying for a position at a company or evaluating an offer.

Using Prediction Book (or other prediction software) for motivation

Does anyone have experience with documenting things you need to do in PredictionBook (or something similar) and the effects it has on motivation/actually doing those things? Basically, is it possible to boost your productivity by making more optimistic predictions? I've been dabbling with PredictionBook and tried it with two (related) things I had to do, which did not work at all.

Thoughts, experiences?

3btrettel
I've made a fair number of predictions about things I need to do on PredictionBook, and I don't think it has had much of any effect on my motivation. Boosting your productivity might be possible if you make optimistic predictions and are strongly motivated to be well calibrated. Another possible use of PredictionBook for motivation is getting a more objective view on whether you might complete a task by a certain date. If others think you are overconfident, then you could put in place additional things to ensure you complete the task.
1btrettel
Another idea: self-fulfilling prophecies. This seems to be the general name for the phenomenon of a prediction causing itself to be fulfilled. I don't have time to read the Wikipedia entry right now, but I suspect it'll offer some ideas about how to use predictions to your own advantage. Let me know if you think of anything good. I'll post a reply here if I do.
1Viliam
I am afraid that perverse incentives would be harmful here. The easy way to achieve perfect accuracy in predicting your own future actions is to predict failure, and then fail intentionally. Even if one does not consciously go so far, it could still be unconsciously tempting to predict a slightly smaller probability of success, because you can always adjust the outcome downwards. To avoid this effect completely, you (as a hypothetical utility maximizer) would have to care about your success infinitely more than about predicting correctly. In which case, why bother predicting?
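A toy sketch of this incentive, assuming a log scoring rule plus a made-up weight alpha on how much success matters relative to calibration:

```python
# Toy model: total payoff = calibration score + alpha * (1 if you succeed).
# alpha is hypothetical; the point is what happens when it is small.
import math

def total_payoff(p_predicted: float, succeed: bool, alpha: float) -> float:
    calibration = math.log(p_predicted if succeed else 1 - p_predicted)
    return calibration + (alpha if succeed else 0.0)

alpha = 0.3
# Sandbag: predict near-certain failure, then fail on purpose.
print(total_payoff(0.01, succeed=False, alpha=alpha))  # ~ -0.01
# Honest effort: predict a realistic 60% and succeed.
print(total_payoff(0.60, succeed=True, alpha=alpha))   # ~ -0.21
# Unless alpha is large, intentional failure scores better.
```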
1[anonymous]
When you write the predictions, do you simply add optimism without changing the processes to reach a conclusion, or do you try to map out the "how" of making an outcome match more optimistic outcomes?
0ZeitPolizei
Good point. When I wrote down the predictions, I just used my usual unrealistically optimistic estimate of "this is in principle doable in this time and I want to do it", i.e. my usual "planning" mode, without considering how often I usually fail to execute my "plans". So in this case, I think I adjusted neither my optimism nor my plans; I only put my estimate for success into actual numbers for the first time (and hoped that would do the trick).

Are there any good established systems for keeping track of a large number of hypotheses?

I've been using PredictionBook for this. Unfortunately it's hard to compare competing hypotheses. It would be nice to have all related hypotheses on one page, but there really isn't any mechanism to support that (tagging would be a start). The search is quite limited as well. And because comments are short, detailing the evidence for each hypothesis is rare and clumsy. I guess I could figure out how to add tagging and make a pull request at GitHub, but I don't have t... (read more)

2btrettel
Now, after posting this, I see there has been a brief discussion of analysis of competing hypotheses on LessWrong before, from which you can find an open-source software package for the methodology (GitHub). I also see there is other software for this methodology, but none of it seems quite like what I want. I'll have to look closer. Any other systems would be of interest to me. This is a good system for comparing competing hypotheses, but it does nothing for the management of non-competing hypotheses (which could still be related).
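For anyone who wants the flavour of the methodology without installing anything, here is a minimal sketch of an ACH-style consistency matrix (the hypotheses, evidence, and scores below are all made up). The classic ACH heuristic is to rank hypotheses by how much evidence contradicts them, not by how much supports them:

```python
# Minimal sketch of an Analysis of Competing Hypotheses matrix.
# Hypotheses, evidence, and scores are invented; scores follow the
# usual ACH convention: +1 consistent, 0 neutral, -1 inconsistent.

hypotheses = ["H1: server bug", "H2: network outage", "H3: user error"]
evidence = {
    "errors cluster on one host": [+1, -1, 0],
    "other services also down":   [-1, +1, -1],
    "only one account affected":  [+1, -1, +1],
}

# ACH ranks hypotheses by how much evidence *contradicts* them; the
# working hypothesis is the one with the least inconsistent evidence.
inconsistency = {
    h: sum(1 for scores in evidence.values() if scores[i] < 0)
    for i, h in enumerate(hypotheses)
}
for h, n in sorted(inconsistency.items(), key=lambda kv: kv[1]):
    print(f"{h}: {n} inconsistent item(s)")
```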
[-][anonymous]10

Recently I've been thinking about dealing with social problems in the physical world vs. the psychological world, and in the victim's world vs. the perpetrator's world.

Is it more effective to deal with public anxiety over a certain danger than to deal with the anxiety-provoking stimulus itself? For instance, if gun ownership spreads fear and anxiety among a populace, would it be more effective to address those concerns by education about the threat of increased gun ownership (irrespective of any change in the actual level of physical danger), or to remove the stimulus (e.g.... (read more)

1Jiro
Is "which is more effective" even a useful question to ask? Suppose it was found that the most effective way to deal with people's fears of terrorism is to ban Islam. Should we then ban Islam? (Also, if you will do X when doing X is most effective, that creates incentives for people who want X to respond unusually strongly to doing X. You end up creating utility monsters.)
1IffThen
It is definitely a necessary question to ask. You need to have a prediction of how effective your solutions will be. You also need predictions of how practical they are, and it may be that something very effective is not practical -- e.g. banning Islam. You could make a list of things you should ask: how efficient, effective, sustainable, scalable, etc. But effective certainly has a place on the list.
3ChristianKl
Banning religions is generally not an effective move unless your goal is radicalising people. Christianity grew in the Roman Empire at a time when being a Christian was punishable by death.
6Jiro
I don't see any Arians around. Beware survivorship bias. If some religion was suppressed effectively, it's less likely that you'd have heard of it and even if you have, less likely that it would come to mind. At any rate, my point wasn't just about effectiveness. It was that we have ideas about rights and we don't decide to suppress something just because it is effective, if doing the suppression violates someone's rights.
-1ChristianKl
Neither Russia nor China has moved to forbid Islam, even though both have homegrown Muslim terrorists. I don't think their concern was mainly about rights.
1Jiro
That was a hypothetical. The hypothetical was chosen to be something that embodies the same principles but to which most people would find the answer fairly clear. The hypothetical was not chosen to actually be true.
0ChristianKl
In general, the effectiveness of awareness-raising programs intended to shift public perception of a risk is low.
0MrMind
I'm afraid that nobody knows; you will have to dig into sociological studies to find out for sure. I just want to offer you a different perspective, a parameter that might affect your investigation. It might be that cultural and economic influences affect the general level of anxiety in a population, so that even if you ban a stimulus (say, gun ownership), anxiety will just find another object of focus.

Why don't people (outside small groups like LW) advocate the creation of superintelligence much? If it is Friendly, it would have tremendous benefits. If superintelligence's creation isn't being advocated out of fears of it being unFriendly, then why don't more people advocate FAI research? Is it just too long-term for people to really care about? Do people not think managing the risks is tractable?

6MrMind
One answer could be that people don't really think that a superintelligence is possible. It doesn't even enter into their model of the world.
1[anonymous]
Like this? https://youtube.com/watch?v=xKk4Cq56d1Y
0G0W51
I think something else is going on. The responses to this question about the feasibility of strong AI mostly stated that it was possible, though selection bias is probably largely at play, as knowledgeable people would be more likely to answer than the ignorant.
1MrMind
Surely AI is a concept that's more and more present in Western culture, but only as fiction, as far as I can tell. No man in the street takes it seriously, as in "it's really starting to happen". Possibly the media are paving the way for a change in that, as the surge of AI-related movies seems to suggest, but I would bet it's still an idea very far from their realm of possibilities. Also, once the reality of an AI was established, it would still be a jump to believe in the possibility of an intelligence superior to humans', a leap that for me is tiny but that for many, I suspect, would not be so small (self-importance and all that).
0G0W51
But other than self-importance, why don't people take it seriously? Is it otherwise just due to the absurdity and availability heuristics?
4IffThen
FWIW, I have been a long-time reader of SF, have long been a believer in strong AI, am familiar with friendly and unfriendly AIs and the idea of the singularity, but hadn't heard much serious discussion of the development of superintelligence. My experience and beliefs are probably not entirely normal, but arose from a context close to normal. My thought process until I started reading LessWrong and related sites was basically split between "scientists are developing bigger and bigger supercomputers, but they are all assigned to narrow tasks -- playing chess, obscure math problems, managing complicated data traffic" and "intelligence is a difficult task akin to teaching a computer to walk bipedally or recognize complex visual images, which will take forever with lots of dead ends". Most of what I had read in terms of spontaneous AI was fairly silly SF premises (lost packets on the internet become sentient!) or set in the far future, after many decades of work on AI finally resulting in a super-AI. I also believe that science reporting downplays the AI aspects of computer advances. Siri, self-driving cars, etc. are no longer referred to as AI in the way they would have been when I was growing up; AI is by definition something that is science fiction or well off in the future. Anything we have now is framed as just an interesting program, not an 'intelligence' of any sort.
4[anonymous]
If you're not reading about futurism, it's unlikely to come up. There aren't any former presidential candidates giving lectures about it, so most people have never heard of it. Politics isn't about policy, as Robin Hanson likes to say.

IQ is said to correlate with life success. If rationality is about "winning at life", wouldn't it be sensible to define a measure of "life success"? For example, the average increase over time of some life-success metric like income.

4Viliam
It's complicated, but maybe we could make some approximations. For example: "list ten things you care about, create a metric for each of them", providing a list of what people usually care about.
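A crude sketch of what that could look like (the categories, ranges, and weights below are entirely made up): normalise each metric to a 0-1 scale and track the weighted average over the years.

```python
# Crude sketch of a personal "life success" index. The categories,
# weights, and normalisation ranges are all invented for illustration;
# the point is only that the aggregate is a number you can track.

metrics = {
    # name: (current value, worst plausible, best plausible, weight)
    "monthly income ($)":   (3500, 0, 10000, 0.3),
    "close friendships":    (4, 0, 10, 0.3),
    "hours of sleep/night": (6.5, 4, 9, 0.2),
    "days exercised/week":  (2, 0, 7, 0.2),
}

def normalise(value, worst, best):
    # Clamp to [0, 1] so outliers don't dominate the index.
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

index = sum(w * normalise(v, lo, hi) for v, lo, hi, w in metrics.values())
print(f"life success index: {index:.2f}")  # compare year to year
```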
4ZeitPolizei
What purpose would such a measure serve? And would you try to find a universal measure, or one that is individual to every person? Since different people have different goals, you could try to measure how well reality aligns with their goals, but then you just select for people who can accurately predict what they can achieve.
8Viliam
A crude check of how much you are lying to yourself. For example, if you believe that reading LessWrong improved your life, you could enter some data and get the result that no, your life is approximately the same as it was ten years ago. On the other hand, you could also find an improvement you hadn't realized, because of the hedonic treadmill.
3Strangeattractor
Bhutan's Gross National Happiness Index, and various indices inspired by it, attempt to measure this in populations. https://en.wikipedia.org/wiki/Happiness_economics
-4Lumifer
"He who dies with the most toys wins" :-P
0[anonymous]
See, that was before they invented chess...

Cardinal numbers for utilons?

I have a hunch.

Trying to add up utilons or hedons can quickly lead to all sorts of problems, which are probably already familiar to you. However, there are all sorts of wacky and wonderful branches of non-intuitive mathematics, which may prove of more use than elementary addition. I half-remember that regular math can be treated as part of set theory, and there are various branches of set theory which can have some, but not all, of the properties of regular math - for example, being able to say that X < Y, but not necessaril... (read more)

3Manfred
I think the most mathy (and thus, best :P) way to go about this is to think of the properties that these "utility" objects have, and just define them as objects with those properties. For starters, you can compare them for size - The relationship is either bigger, or smaller, or the same. And you can do an operation to them that is a weighted sum - if you have two utilities that are different, you can do this operation to them and get a utility that's in between them, with a third parameter (the probability of one versus the other) distinguishing between different applications of this operation. Actually, I think this sort of thing is pretty much what Savage did.
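Here's a rough sketch of those two properties as code (a toy formalisation of my own, not Savage's actual construction). Representing the objects as reals makes both the comparison and the mixing operation trivial, which is roughly what the representation theorems buy you:

```python
# Toy formalisation of the two properties: utilities can be compared,
# and any two can be mixed with a probability weight.

from dataclasses import dataclass

@dataclass(frozen=True, order=True)  # order=True gives <, >, == for free
class Utility:
    value: float

    def mix(self, other: "Utility", p: float) -> "Utility":
        """Weighted sum: the utility of getting `self` with
        probability p and `other` with probability 1 - p."""
        return Utility(p * self.value + (1 - p) * other.value)

laptop, nothing = Utility(10.0), Utility(0.0)
ticket = laptop.mix(nothing, 0.25)
assert nothing < ticket < laptop   # the mixture lies strictly between
print(ticket)                      # Utility(value=2.5)
```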
2Toggle
Seems to be an established conversation around this point, see: https://en.wikipedia.org/wiki/Ordinal_utility https://en.wikipedia.org/wiki/Cardinal_utility "The idea of cardinal utility is considered outdated except for specific contexts such as decision making under risk, utilitarian welfare evaluations, and discounted utilities for intertemporal evaluations where it is still applied. Elsewhere, such as in general consumer theory, ordinal utility with its weaker assumptions is preferred because results that are just as strong can be derived." Or you could go back to the original Theory of Games proof, which I believe was ordinal; it's going to depend on your axioms. In that document, Von Neumann definitely didn't go so far as to treat utility as simply an integer.
1DataPacRat
Well, I guess coming up with an idea a century-ish old could be considered better than /not/ having come up with something that recent...
2Toggle
When I was a freshman, I invented the electric motor! I think it's something that just happens when you're getting acquainted with a subject and understand it well: you get a sense of what the good questions are, and start asking them without being told.
0roystgnr
That's one of the most amusing phrases on Wikipedia: "specific contexts such as decision making under risk". In general you don't have to make decisions and/or you can predict the future perfectly, I suppose.
2asr
It's a tempting thought. But I think it's hard to make the math work that way. I have a lovely laptop here that I am going to give you. Suppose you assign some utility U to it. Now instead of giving you the laptop, I give you a lottery ticket or the like. With probability P I give you the laptop, and with probability 1 - P you get nothing. (The lottery drawing will happen immediately, so there's no time-preference aspect here.) What utility do you attach to the lottery ticket? The natural answer is P * U, and if you accept some reasonable assumptions about preferences, you are in fact forced to that answer. (This is the basic intuition behind the von Neumann-Morgenstern Expected Utility Theorem.) Given that probabilities are real numbers, it's hard to avoid utilities being real numbers too.
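Spelling that step out as a sketch (normalising the utility of getting nothing to zero):

```latex
% Expected utility of the lottery ticket, with U(\text{nothing}) = 0:
U(\text{ticket}) \;=\; P \cdot U(\text{laptop}) + (1 - P) \cdot U(\text{nothing}) \;=\; P \cdot U
```

Sweeping P continuously over [0, 1] sweeps U(ticket) over every value between 0 and U, which is why a preference ordering satisfying the vNM axioms ends up represented by real numbers.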
0Lumifer
If we are going into VNM utility, it is defined as the output of the utility function and the utility function is defined as returning real numbers.
0DataPacRat
I could try to rescue the idea by throwing in units, the way multiplying distance units by time units gives you speed units... but I'd just be trying to technobabble my way out of the corner. I think the most that I can try to rescue from this failed hunch is that some offbeat and unexpected part of mathematics might be able to be used to generate useful, non-obvious conclusions for utilitarian-style reasoning, in parallel with math based on gambling turning out to be useful for measuring confidence-strengths more generally. Anybody have any suggestions for such a subfield which won't make any actual mathematicians wince, should they read my story?
0Douglas_Knight
The von Neumann-Morgenstern theorem says that if you are uncertain about the world, then you can denominate your utility in probabilities. Since probabilities are real numbers, so are utilities.
0DanielLC
There are various ways to get infinite and infinitesimal utility, but they don't matter in practice. Everything but the most infinite potential producer of utility will only matter as a tiebreaker, which will occur with probability zero. Cardinal numbers also wouldn't work well even as infinite numbers go: you can't have a set with half an element, or with a negative number of elements. And is there a difference between a 50% chance of uncountable utilons and a 100% chance?
0MrMind
I don't think that non-additivity is the only thing that matters about utilons: sometimes they do add, after all. Besides that, yes, infinite cardinal numbers can have the property you cite: since for them X + Z = max(X, Z), it follows that if X < Y and Z < Y, then X + Z < Y.
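For reference, the standard fact being used here (it holds for cardinals, assuming the axiom of choice):

```latex
% For cardinals \kappa, \lambda with at least one of them infinite:
\kappa + \lambda \;=\; \kappa \cdot \lambda \;=\; \max(\kappa, \lambda)
% Hence if \kappa < \mu and \lambda < \mu, then
% \kappa + \lambda = \max(\kappa, \lambda) < \mu.
```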

A voice of reason.

Against Musk, Hawking and all other "pacifists".

5Manfred
Meh. The assumption that bans won't work seems to miss most of the subtlety of reality, which ranges from the failure of U.S. alcohol prohibition to Japan's two gun-related homicides per year.
5ZeitPolizei
Trying to summarize here: the open letter says, "If we allow autonomous weapons, a global arms race will make them much cheaper and much more easily available to terrorists, dictators, etc. We want to prevent this, so we propose to outlaw autonomous weapons." The author of the article argues that the technology gets developed either way and will be cheaply available, and then goes on to say that autonomous weapons would reduce casualties in war. I suspect most people agree that (if used ethically) autonomous weapons reduce casualties. The actual questions are how much (more) damage someone without qualms about ethics can do with autonomous weapons, and whether we can implement policies to minimize their availability to people we don't want to have them. I think the main problem with this whole discussion was already mentioned elsewhere: robotics and AI experts aren't experts on politics, and don't know what the actual effects of an autonomous weapons ban would be.
3Thomas
True. And the experts in politics usually don't even want to consider such childish fantasies as autonomous killing robots. Until, at least, they are here.
1ChristianKl
What does "if used ethically" mean? This is a bit like the debate around tasers. Tasers seem like a good idea because they allow policemen to use less force. In reality, in nearly every case where a policeman would have used a real gun in the past, they still use a real gun; the taser shots come in addition. The US is already using its drones in Pakistan in a way that violates many passages of international law, like shooting at people who rescue the wounded. That's not in line with ethical use. They use the weapons whenever they expect that to produce a military advantage. Elon Musk does politics in the sense that he has experience in lobbying for laws to get passed. He likely has people with deeper knowledge on staff. On the other hand, I don't see that the author of the article has political experience.
1ZeitPolizei
I was thinking mainly along the lines of using them in regular combat vs. indiscriminately killing protesters. Autonomous weapons should eventually be better than humans at (a) hitting targets, thus reducing combatant casualties on the side that uses them, and (b) differentiating between combatants and non-combatants, thus reducing civilian casualties. This is working under the assumption that something like a guard robot would accompany a patrolling squad. Something like a swarm of small drones that sweeps a city to find and subdue all combatants is of course a different matter. I wasn't aware of this, do you have a source on that? Regardless, the number of civilian casualties from drone strikes is definitely too high, from what I know.
4ChristianKl
US drones in Pakistan usually don't strike in regular combat; they strike a house while people sleep in it. If you want to kill protesters you don't need drones; you can simply shoot into the crowd. In most cases, however, that doesn't make sense and is not an effective move. If you want to understand warfare you have to move past the standard spin. http://www.theguardian.com/commentisfree/2012/aug/20/us-drones-strikes-target-rescuers-pakistan The fact that civilian casualties exist doesn't show that a military violates ethical standards. Shooting at rescuers, on the other hand, is a violation of ethical standards. From a military standpoint there's an advantage to be gained by killing the other side's doctors; from an ethical perspective it's bad, and there's international law against it. The US tries to maximize military objectives instead of ethical ones.
0Douglas_Knight
Do you have a source for that? One method would be to look at the number of police killings and see if the trend changed. But it's pretty tough to get the number of American police killings, let alone estimate a trend and determine causes. One could imagine a policy decision to arm people with tasers instead of guns, which would not be subject to your complaint. People are rarely disarmed, but new locations could make different choices about how to arm security guards. But I don't know the proportion of guards armed in various ways, let alone the trends.
2roystgnr
Where did "pacifists" and the scare quotes around it come from?
0[anonymous]
The UFAI debate isn't mainly about military robots.
1MrMind
The article's two main points are: 1 - a ban won't work; 2 - properly programmed autonomous weapons (AW) could reduce casualties. So, the conclusion goes, we should totally dig AW. Point n° 2 is the most fragile: they could just as well reduce or increase casualties, depending on how they are programmed. It's also true that the availability of cheaper soldiers might make for cheaper (i.e., more affordable) wars. But point n° 1 is debatable too: after all, the ban on chemical and biological weapons has worked, sorta.
-2Gunslinger
This sounds like a straw man, right there at the beginning. Stopped there.
[-][anonymous]-20

Rhetorical solution: Multi-armed bandit problem

disclaimer: I'm not a computer scientist. I read up on the problem to see what the takeaways might be for decision theory. Since I'm not trained in any formal logic, I don't know how to represent this solution in symbols. I think of the problem in terms of questions like: am I spending too much time becoming smarter rather than actually doing things that are smart?

  • Exploitation dominates exploration because unless exploration is a subset of exploitation by definition, it would not be optimising expected utility for a given opti

... (read more)
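To make the explore/exploit trade-off in question concrete, here is a minimal epsilon-greedy sketch (the arm payoffs and the epsilon value are made up; this is the textbook toy strategy, not necessarily what the comment above proposes):

```python
import random

# Epsilon-greedy strategy for a toy three-armed bandit: with
# probability epsilon explore a random arm, otherwise exploit the
# arm with the best observed average reward. Payoffs are invented.

true_payoff = [0.3, 0.5, 0.7]   # hidden from the agent
pulls = [0] * 3
total = [0.0] * 3
epsilon = 0.1

for t in range(10_000):
    if t < 3:
        arm = t                 # pull each arm once to initialise
    elif random.random() < epsilon:
        arm = random.randrange(3)                               # explore
    else:
        arm = max(range(3), key=lambda a: total[a] / pulls[a])  # exploit
    reward = 1.0 if random.random() < true_payoff[arm] else 0.0
    pulls[arm] += 1
    total[arm] += reward

print(pulls)  # most pulls should end up on the best arm (index 2)
```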