
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Open thread, 11-17 August 2014
Filipe310

Economist Scott Sumner at Econlog heavily praised Yudkowsky and the quantum physics sequence, and applied lessons from it to economics. Excerpts:

I've recently been working my way through a long set of 2008 blog posts by Eliezer Yudkowsky. It starts with an attempt to make quantum mechanics seem "normal," and then branches out into some interesting essays on philosophy and science. I'm nowhere near as smart as Yudkowsky, so I can't offer any opinion on the science he discusses, but when the posts touched on epistemological issues his views hit home.

and

I used to have a prejudice against math/physics geniuses. I thought that when they were brilliant at high-level math and theory, they were likely to have loony opinions on complex social science issues. Conspiracy theories. Or policy views that the government should wave a magic wand and just ban everything bad. Now that I've read Robin Hanson, Eliezer Yudkowsky and David Deutsch, I realize that I've got it wrong. A substantial number of these geniuses have thought much more deeply about epistemological issues than the average economist. So when Hanson says we put far too little effort into existential risks, or even lesser

...
7Viliam_Bur
Reading the comments... one commenter objects to WMI in a way which I would summarize as: "MWI provides identical experimental predictions to CI, which makes it useless, and also MWI provides wrong experimental predictions (unlike CI), which makes it wrong". The author immediately detects the contradiction: Another commenter says that MWI has a greater complexity of thought, and while it is more useful to explore algorithmic possibilities on quantum computers abstractly, CI wins because it is about the real world. Then the former commenter says (in reaction to the author) that MWI didn't provide useful predictions, and that Casimir force can only be explained by quantum equations and not by classical physics. (Why exactly is that supposed to be an argument against MWI? No idea. Also, if MWI doesn't provide useful predictions, how can it be useful for studying quantum computers? Does it mean that quantum computers are never going to work in, you know, the real life?) Finally, yet another commenter explains things from MWI point of view, saying that "observers" must follow the same fundamental physics as rocks.

What sophisticated ideas did you come up with independently before encountering them in a more formal context?

I'm pretty sure that in my youth I independently came up with rudimentary versions of the anthropic principle and the Problem of Evil. Looking over my Livejournal archive, I was clearly not a fearsome philosophical mind in my late teens (or now, frankly), so it seems safe to say that these ideas aren't difficult to stumble across.

While discussing this at the most recent London Less Wrong meetup, another attendee claimed to have independently arrived at Pascal's Wager. I've seen a couple of different people speculate that cultural and ideological artefacts are subject to selection and evolutionary pressures without ever themselves having come across memetics as a concept.

I'm still thinking about ideas we come up with that stand to reason. Rather than prime you all with the hazy ideas I have about the sorts of ideas people converge on while armchair-theorising, I'd like to solicit some more examples. What ideas of this sort did you come up with independently, only to discover they were already "a thing"?

When I was a teenager, I imagined that if you had just a tiny, infinitesimally small piece of a curve, there would be only one moral way to extend it. Obviously, an extension would have to be connected to it, but also, you would want it to connect without any kinks. And just having straight lines connected to it wouldn't be right; it would have to be curved in the same sort of way - and so on, to higher and higher orders. Later I realized that this is essentially what a Taylor series is.
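A worked form of the idea, for reference - this is just the standard Taylor expansion about a point a, not anything from the comment itself; each successive term matches one more order of "curving in the same sort of way":

$$ f(x) \approx f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n $$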

I also had this idea when I was learning category theory that objects were points, morphisms were lines, composition was a triangle, and associativity was a tetrahedron. It's not especially sophisticated, but it turns out this idea is useful for n-categories.

Recently, I have been learning about neural networks. I was working on implementing a fairly basic one, and I had a few ideas for improving neural networks: making them more modular - so neurons in the next layer are only connected to a certain subset of neurons in the previous layer. I read about V1, and together, these led to the idea that you arrange things so they take into account the topology of the inputs - so for image processing, havi...
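A minimal sketch of the local-connectivity idea being described - my own illustration in Python, with made-up sizes, not the commenter's code. Each output unit is wired only to a small window of neighboring inputs, which for image-like data is essentially a one-dimensional convolutional layer:

```python
import numpy as np

def locally_connected(x, weights, width=3):
    """Each output unit sees only a `width`-sized window of the input,
    rather than every unit in the previous layer."""
    n_out = len(x) - width + 1
    out = np.empty(n_out)
    for i in range(n_out):
        out[i] = np.dot(weights[i], x[i:i + width])  # local window only
    return np.tanh(out)  # elementwise nonlinearity

x = np.random.randn(10)    # e.g. a 1-D strip of pixel intensities
w = np.random.randn(8, 3)  # one small weight vector per output unit
print(locally_connected(x, w))
```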

5HopefullyCreative
I had to laugh at your conclusion. The implementation is the most enjoyable part. "How can I dumb this amazing idea down to the most basic understandable levels so it can be applied?" Sometimes you come up with a solution only to have a feverish fit of maddening genius weeks later, finding a BETTER solution. In my first foray into robotics I needed to write a radio positioning program/system for the little guys, so they would all know where they were - not globally, but relative to each other and the work site. I was completely unable to find the math simply spelled out online, and I'll admit that at this point in my life I was a former marine who was not quite up to college-level math. After banging my head against the table for hours I came up with an initial solution that found a position accounting for three dimensions (allowing for the target object to be in any position relative to the stationary receivers). Eventually I came up with an even better solution, which also led to new ideas for the robot's antenna design and therefore to tweaking the solution even more. That was some of the most fun I have ever had...
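For reference, here is a sketch of one standard approach to this positioning problem - linearized least-squares trilateration from known receiver positions. This is my illustration of the textbook method, not necessarily what the commenter implemented; the receiver layout and target position are made up:

```python
import numpy as np

def trilaterate(receivers, dists):
    """Estimate a 3-D position from receiver coordinates and measured
    distances: subtracting the first sphere equation from the others
    leaves a linear system, solvable by least squares."""
    p1, d1 = receivers[0], dists[0]
    A = 2 * (receivers[1:] - p1)
    b = (d1**2 - dists[1:]**2
         + np.sum(receivers[1:]**2, axis=1) - np.sum(p1**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

rx = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
target = np.array([3.0, 4, 5])
d = np.linalg.norm(rx - target, axis=1)
print(trilaterate(rx, d))  # recovers approximately [3, 4, 5]
```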
3Luke_A_Somers
I did the Taylor series thing too, though with s/moral/natural/
Ander150

I came up with the idea of a Basic Income by myself, by chaining together some ideas:

  • Capitalism is the most efficient economic system for fulfilling the needs of people, provided they have money.

  • The problem is that if lots of people have no money, and no way to get money (or no way to get it without terrible costs to themselves), then the system does not fulfill their needs.

  • In the future, automation will increase economic capacity while also raising the barrier to having a 'valuable skill' that allows you to get money. Society will have an improved capacity to fulfill the needs of people with money, yet the barrier to having useful skills and being able to get money will increase. This leads to a scenario where the society could easily produce the items needed by everyone, yet does not, because many of those people have no money to pay for them.

  • If X% of the benefits accrued from ownership of the capital were taken and redistributed evenly among all humans, then the problem is averted. Average people still have some source of money with which they can purchase the fulfillment of their needs, which are pretty easy to supply in this advanced future society.

  • X=100%, as in

...

Once a Christian friend asked me why I cared so much about what he believed. Without thinking, I came up with, "What you think determines what you choose. If your idea of the world is inaccurate, your choices will fail."

This was years before I found LW and learned about the connection between epistemic and instrumental rationality.

P.S. My friend deconverted himself some years afterwards.

Metus130

This is not a direct answer: every time I come up with an idea in a field I am not very deeply involved in, sooner or later I realise that the phenomenon is either trivial, a misperception, or very well studied. Most recently this happened with pecuniary externalities.

[anonymous]100

Came up with the RNA-world hypothesis on my own when reading about the structure and function of ribosomes in middle school.

Decided long ago that there was a conflict between the age of the universe and the existence of improvements in space travel, meaning that beings such as ourselves would never be able to reach self-replicating interstellar travel. Never came to the conclusion that it meant extinction at all, and am still quite confused by people who assume it's interstellar metastasis or bust.

9Username
Derivatives. I imagined tangent lines traveling along a function curve and thought, 'I wonder what it looks like when we measure that?' And so I would try to visualize the changing slopes of the tangent lines at the same time. I also remember wondering how to reverse it. Obviously I didn't get farther than that, but I remember being very surprised when I took calculus and realized that the mind game I had been playing was hugely important and widespread, and could in fact be calculated.
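The standard formalization of that mind game, for reference (not from the comment itself): the slope of the tangent line is the limit of the slopes of nearby secant lines,

$$ f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} $$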
9James_Miller
For as long as I can remember, I had the idea of a computer upgrading its own intelligence and getting powerful enough to make the world a utopia.
7sediment
Oh, another thing: I remember thinking that it didn't make sense to favour either the many worlds interpretation or the copenhagen interpretation, because no empirical fact we could collect could point towards one or the other, being as we are stuck in just one universe and unable to observe any others. Whichever one was true, it couldn't possibly impact on one's life in any way, so the question should be discarded as meaningless, even to the extent that it didn't really make sense to talk about which one is true. This seems like a basically positivist or postpositivist take on the topic, with shades of Occam's Razor. I was perhaps around twelve. (For the record, I haven't read the quantum mechanics sequence and this remains my default position to this day.)
7niceguyanon
In 6th or 7th grade I told my class that it was obvious that purchasing expensive sneakers is mostly just a way to show how cool you are or that you can afford something that not everyone else could. Many years latter I would read about signalling http://en.wikipedia.org/wiki/Signalling_(economics) The following are not ideas as much as questions I had while growing up, and I was surprised/relieved/happy to find out that other people much smarter than me, spent a lot of time thinking about and is "a thing". For example I really wanted to know if there was a satisfactory way to figure out if Christianity was the one true religion and it bothered me very much that I could not answer that question. Also, I was concerned that the future might not be what I want it to be, and I am not sure that I know what I even want. It turns out that this isn't a unique problem and there are many people thinking about it. Also, what the heck is consciousness? Is there one correct moral theory? Well, someone is working on it.
7bramflakes
At school my explanation for the existence of bullies was that it was (what I would later discover was called) a Nash equilibrium.
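A toy formalization of that observation - the payoffs below are entirely my own invention, purely for illustration. In a game shaped like this, mutual toughness is a Nash equilibrium even though mutual niceness pays everyone more, which is one way bullying can be stable:

```python
# Hypothetical symmetric 2x2 game: first element of the key is my move,
# second is theirs. Being Nice against a Tough player is the worst
# outcome, so neither player gains by deviating from (Tough, Tough).
payoff = {('Nice', 'Nice'): 3, ('Nice', 'Tough'): 0,
          ('Tough', 'Nice'): 4, ('Tough', 'Tough'): 1}

def best_response(mine, theirs):
    """True if my move maximizes my payoff given their fixed move."""
    return all(payoff[(mine, theirs)] >= payoff[(alt, theirs)]
               for alt in ('Nice', 'Tough'))

for a, b in payoff:
    if best_response(a, b) and best_response(b, a):
        print(f"({a}, {b}) is a Nash equilibrium")  # only (Tough, Tough)
```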
6HopefullyCreative
I had drawn up some rather detailed ideas for an atomic powered future: The idea was to solve two major problems. The first was the inherent risk of an over pressure causing such a power plant to explode. The second problem to solve was the looming water shortage facing many nations. The idea was a power plant that used internal sterling technology so as to operate at atmospheric pressures. Reinforcing this idea was basically a design for the reactor to "entomb" itself if it reached temperatures high enough to melt its shell. The top of the sterling engine would have a salt water reservoir that would be boiled off. The water then would be collected and directed in a piping system to a reservoir. The plant would then both produce electricity AND fresh water. Of course then while researching thorium power technology in school I discovered that the South Korean SMART micro reactor does in fact desalinate water. On one level I was depressed that my idea was not "original" however, overall I'm exited that I came up with an idea that apparently had enough merit for people actually go through and make a finished design based upon it. The fact that my idea had merit at all gives me hope for my future as an engineer.
6Unnamed
I'm another independent discoverer of something like utilitarianism, I think when I was in elementary school. My earliest written record of it is from when I was 15, when I wrote: "Long ago (when I was 8?), I said that the purpose of life was to enjoy yourself & to help others enjoy themselves - now & in the future." In high school I did a fair amount of thinking (with relatively little direct outside influence) about Goodhart's law, social dilemmas, and indirect utilitarianism. My journal from then include versions of ideas like the "one thought too many" argument, decision procedures vs. criterion for good, tradeoffs between following an imperfect system and creating exceptions to do better in a particular case, and expected value reasoning about small probabilities of large effects (e.g. voting). On religion, I thought of the problem of evil (perhaps with outside influence on that one) and the Euthyphro argument against divine command theory. 16-year-old me also came up with various ideas related to rationality / heuristics & biases, like sunk costs ("Once you’re in a place, it doesn’t matter how you got there (except in mind - BIG exception)"), selection effects ("Reason for coincidence, etc. in stories - interesting stories get told, again & again"), and the importance of epistemic rationality ("Greatest human power - to change ones mind").
6[anonymous]
I've found that ideas that affect me most fall into two major categories: either they are ideas that hit me completely unprepared, or they are ideas that I knew all along but had not formalized. Many-Worlds and timelessness were the former for me. Utilitarianism and luminosity were the latter.
5TylerJay
After learning the very basics of natural selection, I started thinking about goal systems and reward circuits and ethics. I thought that all of our adaptations were intended to allow us to meet our survival needs so we could pass on our genes. But what should people do once survival needs are met? What's the next right and proper goal to pursue? The Googling prompted by that line of reasoning led me to Eliezer's Levels of Intelligence paper, which in turn led me to Less Wrong. Reading through the sequences, I found so many of the questions that I'd thought about in vague philosophical terms explained and analyzed rigorously, like personal identity vs. continuity of subjective experience under things like teleportation. Part of the reason LW appealed to me so much back then is, I suspect, that I had already thought about so many of the same questions but just wasn't able to frame them correctly.
4RomeoStevens
This made me curious enough to skim through my childhood writing. Convergent and divergent infinite series, quicksort, public choice theory, pulling the rope sideways, normative vs positive statements, curiosity stoppers, the overton window. My Moloch moment is what led me to seek out Overcomingbias.
4wadavis
Tangent thread: What sophisticated idea are you holding on to that you are sure has been formalized somewhere but haven't been able to find? I'll go first: When called to explain and defend my ethics I explained I believe in "Karma, NO not the that BS mysticism Karma, but plain old actions have consequences in our very connected world kind of Karma." If you treat people in a manner of honesty and integrity in all things, you will create a community of cooperation. The world is strongly interconnected and strongly adaptable so the benefits will continue outside your normal community, or if you frequently change communities. The lynchpin assumption of these beliefs is that if I create One Unit of Happiness for others, it will self propagate, grow and reflect, returning me more that One Unit of Happiness over the course of my lifetime. The same applies for One Unit of Misery. I've only briefly studied ethics and philosophy, can someone better read point my to the above in formal context.
4iarwain1
This seems like a good place to ask about something that I'm intensely curious about but haven't yet seen discussed formally. I've wanted to ask about it before, but I figured it's probably an obvious and well-discussed subject that I just haven't gotten to yet. (I only know the very basics of Bayesian thinking, I haven't read more than about 1/5 of the sequences so far, and I don't yet know calculus or advanced math of any type. So there are an awful lot of well-discussed LW-type subjects that I haven't gotten to yet.) I've long conceived of Bayesian belief statements in the following (somewhat fuzzily conceived) way: Imagine a graph where the x-axis represents our probability estimate for a given statement being true and the y-axis represents our certainty that our probability estimate is correct. So if, for example, we estimate a probability of .6 for a given statement to be true but we're only mildly certain of that estimate, then our belief graph would probably look like a shallow bell curve centered on the .6 mark of the x-axis. If we were much more certain of our estimate then the bell curve would be much steeper. I usually think of the height of the curve at any given point as representing how likely I think it is that I'll discover evidence that will change my belief. So for a low bell curve centered on .6, I think of that as meaning that I'd currently assign the belief a probability of around .6 but I also consider it likely that I'll discover evidence (if I look for it) that can change my opinion significantly in any direction. I've found this way of thinking to be quite useful. Is this a well-known concept? What is it called and where can I find out more about it? Or is there something wrong with it?
3Lumifer
I don't understand where the bell curve is coming from. If you have one probability estimate for a given statement with some certainty about it, you would depict it as a single point on your graph. The bell curves in this context usually represent probability distributions. The width of that probability distribution reflects your uncertainty. If you're certain, the distribution is narrow and looks like a spike at the estimate value. If you're uncertain, the distribution is flat(ter). Probability distributions have to sum to 1 under the curve, so the smaller the width of the distribution, the higher the spike is. How likely you are to discover new evidence is neither here nor there. Even if you are very uncertain of your estimate, this does not convert into the probability of finding new evidence.
2iarwain1
I think you're referring to the type of statement that can have many values. Something like "how long will it take for AGI to be developed?". My impression (correct me if I'm wrong) is that this is what's normally graphed with a probability distribution. Each possible value is assigned a probability, and the result is usually more or less a bell curve with the width of the curve representing your certainty. I'm referring to a very basic T/F statement. On a normal probability distribution graph that would indeed be represented as a single point - the probability you'd assign to it being true. But we're often not so confident in our assessment of the probability we've assigned, and that confidence is what I was trying to represent with the y-axis. An example might be, "will AGI be developed within 30 years"? There's no range of values here, so on a normal probability distribution graph you'd simply assign a probability and that's it. But there's a very big difference between saying "I really have not the slightest clue, but if I really must assign it a probability than I'd give it maybe 50%" vs. "I've researched the subject for years and I'm confident in my assessment that there's a 50% probability". In my scheme, what I'm really discussing is the probability distribution of probability estimates for a given statement. So for the 30-year AGI question, what's the probability that you'd consider a 10% probability estimate to be reasonable? What about a 90% estimate? The probability that you'd assign to each probability estimate is depicted as a single point on the graph and the result is usually more or less a bell curve. You're probably correct about this. But I've found the concept of the kind of graph I've been describing to be intuitively useful, and saying that it represents the probability of finding new evidence was just my attempt at understanding what such a graph would actually mean.
4Lumifer
OK, let's rephrase it in the terms of Bayesian hierarchical models. You have a model of event X happening in the future which says that the probability of that event is Y%. Y is a parameter of your model. What you are doing is giving a probability distribution for a parameter of your model (in the general case this distribution can be conditional, which makes it a meta-model, so hierarchical). That's fine, you can do this. In this context the width of the distribution reflects how precise your estimate of the lower-level model parameter is. The only thing is that for unique events ("will AGI be developed within 30 years") your hierarchical model is not falsifiable. You will get a single realization (the event will either happen or it will not), but you will never get information on the "true" value of your model parameter Y. You will get a single update of your prior to a posterior and that's it. Is that what you have in mind?
2iarwain1
I think that is what I had in mind, but it sounds from the way you're saying it that this hasn't been discussed as a specific technique for visualizing belief probabilities. That surprises me since I've found it to be very useful, at least for intuitively getting a handle on my confidence in my own beliefs. When dealing with the question of what probability to assign to belief X, I don't just give it a single probability estimate, and I don't even give it a probability estimate with the qualifier that my confidence in that probability is low/moderate/high. Rather I visualize a graph with (usually) a bell curve peaking at the probability estimate I'd assign and whose width represents my certainty in that estimate. To me that's a lot more nuanced than just saying "50% with low confidence". It has also helped me to communicate to others what my views are for a given belief. I'd also suspect that you can do a lot of interesting things by mathematically manipulating and combining such graphs.
1Lumifer
One problem is that it's turtles all the way down. What's your confidence in your confidence probability estimate? You can represent that as another probability distribution (or another model, or a set of models). Rinse and repeat. Another problem is that it's hard to get reasonable estimates for all the curves that you want to mathematically manipulate. Of course you can wave hands and say that a particular curve exactly represents your beliefs and no one can say it ain't so, but fake precision isn't exactly useful.
1Azathoth123
Taken literally, the concept of "confidence in a probability" is incoherent. You are probably confusing it with one of several related concepts. Lumifer has described one example of such a concept. Another concept is how much you think your probability estimate will change as you encounter new evidence. For example, your estimate for whether the outcome of the coin flip for the 2050 Superbowl will be heads is 1/2, and you are unlikely to encounter evidence that changes it (until 2050 that is). On the other hand, your estimate for the probability AI being developed by 2050 is likely to change a lot as you encounter more evidence.
2VAuroch
I don't know, I think the existence of the 2050 Superbowl is significantly less than 100% likely.
0NancyLebovitz
What's your line of thought?
2VAuroch
It wouldn't be the first time a sport has gone from vastly popular to mostly forgotten within 40 years. Jai alai was the particular example I had in mind; it was once incredibly popular, but quickly descended to the point where it's basically entirely forgotten.
0iarwain1
Why? I thought the way Lumifer expressed it in terms of Bayesian hierarchical models was pretty coherent. It might be turtles all the way down as he says, and it might be hard to use it in a rigorous mathematical way, but at least it's coherent. (And useful, in my experience.) This is pretty much what I meant in my original post by writing: But expressing it in terms of how likely my beliefs are to change given more evidence is probably better. Or to say it in yet another way: how strong new evidence would need to be for me to change my estimate. It seems like the scheme I've been proposing here is not a common one. So how do people usually express the obvious difference between a probability estimate of 50% for a coin flip (unlikely to change with more evidence) vs. a probability estimate of 50% for AI being developed by 2050 (very likely to change with more evidence)?
0Anders_H
I believe you may be confusing the "map of the map" for the "map". If I understand correctly, you want to represent your beliefs about a simple yes/no statement. If that is correct, the appropriate distribution for your prior is Bernoulli. For a Bernoulli distribution, the X axis only has two possible values: True or False. The Bernoulli distribution will be your "map". It is fully described by the parameter "p". If you want to represent your uncertainty about your uncertainty, you can place a hyperprior on p. This is your "map of the map". Generally, people will use a beta distribution for this (rather than a bell-shaped normal distribution). With such a hyperprior, p is on the X-axis and ranges from 0 to 1. I am slightly confused about this part, but it is not clear to me that we gain much from having a "map of the map" in this situation, because no matter how uncertain you are about your beliefs, the hyperprior will imply a single expected value for p.
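A minimal numerical sketch of this hyperprior picture, assuming the beta distribution described above - the parameter values are made up for illustration:

```python
from scipy.stats import beta

# Same expected probability (0.5), very different confidence:
confident = beta(50, 50)  # "researched it for years": tight around p = 0.5
clueless = beta(2, 2)     # "not the slightest clue": broad over [0, 1]
print(confident.mean(), clueless.mean())  # 0.5 0.5
print(confident.std(), clueless.std())    # ~0.05 vs ~0.22

# The width controls sensitivity to evidence: one observed "success"
# updates Beta(a, b) to Beta(a + 1, b), which barely moves the tight prior.
print(beta(51, 50).mean())  # ~0.505
print(beta(3, 2).mean())    # 0.6
```

This also bears on the coin-flip vs. AI-by-2050 contrast raised above: both priors can center on 50%, but one is a spike and the other a broad hump that new evidence moves easily.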
1[anonymous]
The influence of the British Empire on progressivism. There was that book that talked about how North Korea got its methods from the Japanese occupation, and as soon as I saw that, I thought, "well, didn't something similar happen here?" A while after that, I started reading Imagined Communities, got to the part where Anderson talks about Macaulay, looked him up, and went, "aha, I knew it!" But as far as I know, no one's looked at it. Also, I think I stole "culture is an engineering problem" from a Front Porch Republic article, but I haven't been able to find the article, or anyone else writing rigorously about anything closer in ideaspace to that than dynamic geography, except the few people who approach something similar from an HBD or environmental determinism angle.
1buybuydandavis
I believe Rational Self Interest types make similar arguments, though I can't recall anyone breaking it down to marginal gains in utility.
4lmm
I figured out utilitarianism aged ~10 or so. I had some thoughts about the "power" of mathematical proof techniques that I now recognize as pointing towards turing completeness.
3moridinamael
Well, this isn't quite what you were asking for, but, as a young teenager a few days after 9/11, I was struck with a clear thought that went something like: "The American people are being whipped into a blood frenzy, and we are going to massively retaliate against somebody, perpetuating the endless cycle of violence that created the environment which enabled this attack to occur in the first place." But I think it's actually common for young people to be better at realpolitik and we get worse at it as we absorb the mores of our culture.
32ZctE
In middle school I heard a fan theory that Neo had powers over the real world because it was a second layer of the matrix-- the idea of simulations inside simulations was enough for me to come to Bostrom's simulation argument. Also during the same years I ended up doing an over the top version of comfort zone expansion by being really silly publicly. In high school I think I basically argued a crude version of compatibilism before learning the term, although my memory of the conversation is a bit vague
3Gvaerg
1. This happened when I was 12 years old. I was trying to solve a problem at a mathematical contest which involved proving some identity with the nth powers of 5 and 7. I recall thinking vaguely "if you go to n+1 what is added in the left hand side is also in the right hand side" and so I discovered mathematical induction. In ten minutes I had a rigorous proof. Though, I didn't find it so convincing, so I ended with an unsure-of-myself comment "Hence, it is also valid for 3, 4, 5, 6 and so on..." 2. When I was in high school, creationism seemed unsatisfying in the sense of a Deus Ex Machina narrative (I often wonder how theists reconcile the contradiction between the feeling of religious wonder and the feeling of disappointment when facing Deus Ex Machina endings). The evolution "story" fascinated me with its slow and semi-random progression over billions of years. I guess this was my first taste of reductionism. (This is also an example of how optimizing for interestingness instead of truth has led me to the correct answer.)
3[anonymous]
Cartesian skepticism and egoism, when I was maybe eleven. I eventually managed to argue myself out of both -- Cartesian skepticism fell immediately, but egoism took a few years. (In case it isn't obvious from that, I did not have a very good childhood.) I remember coming close to rediscovering pseudoformalism and the American caste system, but I discovered those concepts before I got all the way there.
3Alicorn
I independently conceived of determinism and a vague sort of compatibilism when I was twelveish.
3ahbwramc
I remember being inordinately relieved/happy/satisfied when I first read about determinism around 14 or 15 (in Sophie's World, fwiw). It was like, thank you, that's what I've been trying to articulate all these years! (although they casually dismissed it as a philosophy in the book, which annoyed 14-or-15-year-old me)
1sediment
Good one! I think I also figured out a vague sort of compatibilism about that time.
1Curiouskid
When I was first learning about neural networks, I came up with the idea of de-convolutional networks: http://www.matthewzeiler.com/ Also, I think this is not totally uncommon. I think this suggests that there is low-hanging fruit in crowd-sourcing ideas from non-experts. Another related thing that happens is that I'll be reading a book, and I'll have a question/thought that gets talked about later in the book.
1Dahlen
I rediscovered most of the more widely agreed upon ontological categories (minus one that I still don't believe to adhere to the definition) before I knew they were called that, at about the age of 17. The idea of researching them came to me after reading a question from some stupid personality quiz they gave us in high school, something like "If you were a color, which color would you be?" -- and something about it rubbed me the wrong way, it just felt ontologically wrong, conflating entities with properties like that. (Yes, I did get the intended meaning of the question, I wasn't that much of an Aspie even back then, but I could also see it in the other, more literal way.) I remember it was in the same afternoon that I also split up the verb "to be" into its constituent meanings, and named them. It seemed related.
1iarwain1
Maybe these aren't so sophisticated, but I figured out determinism + a form of compatibilism, and the hard problem of consciousness in 10th grade.
1Luke_A_Somers
In second or third grade, I noticed that (n+1)·(n+1) = (n·n) + n + (n+1).
1ShardPhoenix
I came up with a basic version of Tegmark's level 4 multiverse in high school and wrote an essay about it in English class. By that time though I think I'd already read Permutation City which involves similar ideas.
1sediment
I think I was a de facto utilitarian from a very young age; perhaps eight or so.
0VAuroch
I independently constructed algebra (of the '3*x+7=49. Solve for x.' variety) while being given 'guess and check' word problems in second grade. That's a slightly different variety than most of the other examples here, though.
0[anonymous]
Fun question! Under 8: my sister and I were raised atheist, but we constructed what amounted to a theology around our stuffed animals. The moral authority whom I disappointed most often, more than my parents, was my teddy bear. I believed in parts of our pantheon and ethics system so deeply, devoutly, and sincerely that, had I been raised in a real religion, I doubt my temperament would have ever let me escape. Around 8: my mother rinsed out milk bottles twice, each time using a small amount of water. I asked her why she didn't rinse them out once using twice as much water. She explained that doubling the water roughly doubled the cleansing power, but rinsing the bottle twice roughly squared the cleaning power. The most water-efficient way to clean a milk bottle, I figured, would involve a constant stream of water in and out of the bottle. I correctly modeled how the cleaning rate (per unit water) depends on the current milk residue concentration, but I couldn't figure out what to do next or whether the idea even made sense. Around 14: composition is like multiplication, and unions (or options, or choices) are like addition. University: (1) use Kolmogorov complexity to construct a Bayesian prior over universes, then reason anthropically. When you do this, you will (2) conclude with high probability that you are a very confused wisp of consciousness.
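One way to formalize the rinsing arithmetic - my model, not the commenter's: assume perfect mixing and a fixed residual volume b left in the bottle after each pour-off, so a single rinse with water v leaves a fraction b/(b+v) of the milk. Then:

$$ \underbrace{\frac{b}{b+2v}}_{\text{one rinse with } 2v} \qquad \underbrace{\left(\frac{b}{b+v}\right)^{2}}_{\text{two rinses with } v} \qquad \lim_{n\to\infty}\left(\frac{b}{b+V/n}\right)^{n} = e^{-V/b} $$

The limit is the child's "constant stream": total volume V split into ever-smaller rinses makes the residue decay exponentially in the water used, so under this model the continuous stream is indeed the most water-efficient.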
Metus130

In the last open thread Lumifer linked to a list by the American Statistical Association of points that need to be understood to be considered statistically literate. In the same open thread, in another comment, sixes_and_sevens asked for statements we know are true but the average lay person gets wrong. In response he mainly got examples from the natural sciences and mathematics. Which makes me wonder: can we make a general test of education in all of these fields of knowledge that can be automatically graded? This test would serve as a benchmark for traditional educational methods and for autodidacts checking themselves.

I imagine having simple calculations for some things and multiple-choice tests for other scenarios where intuition suffices.

Edit: Please don't just upvote, try to point to similar ideas in your respective field or critique the idea.

4whales
There are concept inventories in a lot of fields, but these vary in quality and usefulness. The most well-known of these is the Force Concept Inventory for first semester mechanics, which basically aims to test how Aristotelian/Newtonian a student's thinking is. Any physicist can point out a dozen problems with it, but it seems to very roughly measure what it claims to measure. Russ Roberts (host of the podcast EconTalk) likes to talk about the "economic way of thinking" and has written and gathered links about ten key ideas like incentives, markets, externalities, etc. But he's relatively libertarian, so the ideas he chose and his exposition will probably not provide a very complete picture. Anyway, EconTalk has started asking discussion questions after each podcast, some of which aim to test basic understanding along these lines.
4ChristianKl
It seems to me like something that can be solved by a community driven website where users can vote on questions.
2sixes_and_sevens
I've often considered a self-assessment system where the sitter is prompted with a series of terms from the topic at hand, and asked to rate their understanding on a scale of 0-5, with 0 being "I've never heard of this concept", and 5 being "I could build one of these myself from scratch". The terms are provided in a random order, and include red-herring terms that have nothing to do with the topic at hand, but sound plausible. Whoever provides the dictionary of terms should have some idea of the relative difficulty of each term, but you could refine it further and calibrate it against a sample of known diverse users, (novices, high-schoolers, undergrads, etc.) When someone sits the test, you report their overall score relative to your calibrated sitters ("You scored 76, which puts you at undergrad level"), but you also report something like the Spearman rank coefficient of their answers against the difficulty of the terms. This provides a consistency check for their answers. If they frequently claim greater understanding of advanced concepts than basic ones, their understanding of the topic is almost certainly off-kilter (or they're lying). The presence of red-herring terms (which should all have canonical score of 0) means the rank coefficient consistency check is still meaningful for domain experts or people hitting the same value for every term. Actually, this seems like a very good learning-a-new-web-framework dev project. I might give this a go.
4somnicule
Look up Bayesian Truth Serum, not exactly what you're talking about but a generalized way to elicit subjective data. Not certain on its viability for individual rankings, though.
2sixes_and_sevens
This is all sorts of useful. Thanks.
4Luke_A_Somers
One problem that could crop up if you're not careful is a control term being used in an educational source not considered - a class, say, or a nonstandard textbook. I have a non-Euclidean geometry book that uses names for Euclidean geometry features that I certainly never encountered in geometry class. If those terms had been placed as controls, I would provide a non-zero rating for them.
0NancyLebovitz
Who's going to do the rather substantial amount of work needed to put the system together?
4sixes_and_sevens
Do you mean to build the system or to populate it with content? The former would be "me, unless I get bored or run out of time and impetus", and the latter is "whichever domain experts I can convince to list and rank terms from their discipline".
2NancyLebovitz
I was thinking about the work involved in populating it.
[anonymous]80

What are some good paths toward good jobs, other than App Academy?

1beoShaffer
See Mr.Money Mustache's 50 Jobs over $50,000 without a degree and SSC's Floor Employment for a number of suggestions.
0Shmi
I assume you don't consider going to a good school as a good path?
6[anonymous]
It's difficult for people who aren't in exactly the right place -- and I think people like that would be less likely to be around here. Certainly not likely for me; I'm already out of college, and I went to a no-name local school. (Didn't even occur to me to apply up.)

I've just finished the first draft of a series of posts on control theory, the book Behavior: The Control of Perception, and some commentary on its relevance to AI design. I'm looking for people willing to read the second draft next week and provide comments. Send me a PM or an email (I use the same username at gmail) if you're interested.

In particular, I'm looking for:

  • People with no engineering background.
  • People with tech backgrounds but no experience with control theory.
  • People with experience as controls engineers.

(Yes, that is basically a complete...

My son was asked what he'd wish for when he could wish for any one thing whatsoever.

He considered a while and then said: "I have so many small wishes that I'd wish for many wishes."

My ex-wife settled for "I want to be able to conjure magic", reasoning that she could then basically make anything come true.

For me it is obviously "I want a friendly artificial general intelligence" - seems like the safest bet.

So basically we all chose similar things.

4Shmi
Maybe he'll grow up to be a mathematician.
0Gunnar_Zarncke
Naa, he is too practical. Builds real things. It's more likely that one of his youger brothers do. Like the five year old who told me that infinity can be reached only in steps of infinity each (thus one step), not in smaller steps (following some examples how 1000 can be reached in steps of 1, 100, 1000, 200 and other).
2NancyLebovitz
If I only had three wishes, I would still spend one of them on having enough sense to make good wishes. I'd probably do that if I only had two wishes. I might even use my only wish on having significantly better sense. My current situation isn't desperate-- if I only had one wish and were desperate, the best choice might well be to use the wish on dealing with the desperate circumstance as thoroughly as possible.
0DanielLC
But the AI would still be constrained by the laws of physics. Intelligence can't beat thermodynamics. You need to wish for an omnipotent friendly AI.

The Unicorn Fallacy (warning, relates to politics)

Is there an existing name for that one? It's similar to the nirvana fallacy but looks sufficiently different to me...

9Shmi
I am not aware of an existing one, although it is related to Moloch, as described in SSC when applied to the state: What Munger describes as The State, SSC calls Moloch. What your link calls the Munger test, may as well be called the Moloch test:
0Lumifer
I don't know about that. I understand Moloch as a considerably wider and larger system than just a State.
0Shmi
Probably. I think Moloch is a metaphor for the actual, uncaring and often hostile universe, as contrasted with an imagined should-universe (the unicorn).
8jaime2000
No, that's Gnon (Nature Or Nature's God). Moloch is the choice between sacrificing a value to remain competitive against others who have also sacrificed that value, or else to stop existing because you are not competitive. The name comes an ancient god people would sacrifice their children to.
0Shmi
Right, thanks.
3Nornagest
I've been thinking of Moloch as the God of the Perverse Incentives, which doesn't quite cover it (it has the right shape, but strictly speaking a perverse incentive needs to be perverse relative to some incentive-setting agent, which the universe lacks) but has the advantage of fitting the meter of a certain Kipling poem.
4Dagon
This is pretty close to my definition, but I'd simplify it to "Moloch is incentives". Perverse or not, Moloch is the god that gives you near-mode benefits unrelated to your far-mode values.
2Lumifer
Well, not THAT wide :-) My thinking about Moloch is still too fuzzy for good definitions, but I'm inclined to to treat is as emergent system behavior which, according to Finagle's Law, is usually not what you want. Often enough it's not what you expect, too, even if you designed (or tinkered with) the system. The unicorn is also narrower than the whole should-universe -- specifically it's some agent or entity with highly unlikely benevolent properties and the proposal under discussion is entirely reliant on these properties in order to work.
-3Azathoth123
Moloch is based on the neo-reactionaries' Gnon. Notice how Nyan deals with the fuzziness by dividing Gnon into four components, each of which can be analyzed individually. Apparently Yvain's brain went into "basilisk shock" up on exposure to the content, which is why his description is so fuzzy.
6[anonymous]
Maybe genealogically, but Moloch and Gnon are two completely different concepts. Gnon is a personalization of the dictates of reality, as stated in the post defining it. Every city in the world has the death penalty for stepping in front of a bus -- who set that penalty? Gnon did. Civilizations thrive when they adhere to the dictates of Gnon, and collapse when they cease to adhere to them. And so on. The structure is mechanistic/horroristic (same thing, in this case): "Satan is evil, but he still cares about each human soul; while Cthulhu can destroy humanity and never even notice." (in the comments here) Gnon is Cthulhu. Gnon doesn't care what you think about Gnon. Gnon doesn't care about you at all. But if you don't care about Gnon, you can't escape the cost. There's nothing dualistic about Gnon: there's only the spectrum from adherence to rebellion. Moloch vs. Elua, on the other hand, is totally Manichaean: the 'survive-mode' dictates of Gnon are identified with Moloch, the evil god of multipolar traps and survival-necessitated sacrifices, and Moloch must be defeated by creating a new god to take over the world and enforce one specific morality and one specific set of dictates everywhere. (Land, Meltdown: "Philosophy has an affinity with despotism, due to its predilection for Platonic-fascist top-down solutions that always screw up viciously.")
0Emile
"Platonic-fascist top-down solutions" that didn't screw up viciously: universal education, the hospital system, unified monetary systems, unified weights and measures, sewers, enforcement of a common code of laws, traffic signals, municipal street cleaning...
1Azathoth123
A lot of people would argue that this is in fact in the process of screwing up right now. This really didn't develop top-down.
0Nornagest
Strictly speaking I don't think an answer to Moloch has to be in the form of a totalizing ethic, although it sure makes it easier if it is.
0Lumifer
That's not self-evident to me. At the levels of abstraction we're talking about, the idea of opaque, uncaring, often perverse, and sometimes malevolent system/universe/reality is really a very old and widespread meme.
1Azathoth123
Personalizing it in quite this way was based on Gnon. Also the level of abstraction we (i.e., Yvain) are talking about it's impossible to say much of anything meaningful as you yourself noted in the grandparent.
4kalium
Or it's based on the poem "Howl," which uses the term Moloch and is quoted in full in the post.

How morally different are ISIS fighters from us? If we had a similar upbringing would we think it morally correct to kill Yazidi children for having the "wrong" religion? Or might genetics play a role in our differing moral views? I find it hard to think of ISIS members as human, or at least I don't want to belong to the same species as them. But yes I do realize that some of my direct ancestors almost certainly did horrible, horrible things by my current moral standards.

I find it hard to think of ISIS members as human

That's how the ISIS fighters feel about the Yazidi.

9James_Miller
Yes, an uncomfortable symmetry.
5Richard_Kennaway
Symmetry? Do you want to behead the children of ISIS fighters?
4James_Miller
No, so I guess it's not perfect symmetry.
2Azathoth123
What age are we talking about here? ISIS has been recruiting children as young as 9 and 10.
0DanielLC
He finds their children human. Just not the ISIS fighters themselves.

I find it hard to think of ISIS members as human, or at least I don't want to belong to the same species as them.

Beware of refusing to believe undeniable reality just because it's not nice.

1Gunnar_Zarncke
Yes. But in in this case it might be an inkling that the credibility of the sources may be the cause.

How morally different are ISIS fighters from us? If we had a similar upbringing would we think it morally correct to kill Yazidi children for having the "wrong" religion?

A relevant factor which is (intentionally or not) ignored by American media is that, from the point of view of pious Muslims, Yazidis are satanists.

To quote Wikipedia (Taus Melek is basically the chief deity for Yazidis, God the Creator being passive and uninvolved with the world):

As a demiurge figure, Tawûsê Melek is often identified by orthodox Muslims as a Shaitan (Satan), a Muslim term denoting a devil or demon who deceives true believers. The Islamic tradition regarding the fall of "Shaitan" from Grace is in fact very similar to the Yazidi story of Malek Taus – that is, the Jinn who refused to submit to God by bowing to Adam is celebrated as Tawûsê Melek by Yazidis, but the Islamic version of the same story curses the same Jinn who refused to submit as becoming Satan.[38] Thus, the Yazidi have been accused of devil worship.

So, what's the Christianity's historical record for attitude towards devil worshippers?

or at least I don't want to belong to the same species as them

Any particular reason you feel this way about the Sunni armed groups, but not about, say, Russian communists, or Mao's Chinese, or Pol Pot's Cambodians, or Rwandans, or... it's a very long list, y'know?

9Nornagest
The closest parallel might be to Catharism, a Gnostic-influenced sect treating the God of the Old Testament as an entity separate from, and opposed to, the God of the New, and which was denounced as a "religion of Satan" by contemporary Christian authorities. That was bloodily suppressed in the Albigensian Crusade. Manicheanism among other early Gnostic groups was similarly accused as well, but it's much older and less well documented, and reached its greatest popularity (and experienced its greatest persecutions) in areas without Christian majorities. A few explicitly Satanist groups have popped up since the 18th century, but they've universally been small and insignificant, and don't seem to have experienced much persecution outside of social disapproval. Outside of fundamentalist circles they seem to be treated as immature and insincere more than anything else. On the other hand, unfounded accusations of Satanism seem to be fertile ground for moral panics -- from the witch trials of the early modern period (which, Wiccan lore notwithstanding, almost certainly didn't target any particular belief system) to the more recent Satanic ritual abuse panics.
0Lumifer
I would probably say that the closest parallel is the persecution of witches in medieval Europe (including but not limited to the witch trials).
4Nornagest
The persecution of witches targeted individuals or small groups, not (as far as modern history knows) members of any particular religion; and the charges leveled at alleged witches usually involved sorcerous misbehavior of various kinds (blighting crops, causing storms, bringing pestilence...) rather than purely religious accusations. Indeed, for most of the medieval era the Church denied the existence of witches (though, as we've seen above, it was happy to persecute real heretics): witch trials only gained substantial clerical backing well into the early modern period. Seems pretty different to me.
0Lumifer
Charges of being in league with the Devil were a necessary part of accusations against the witches because, I think, sorcery was considered to be possible for humans only through the Devil's help. The witches' covens were perceived as actively worshipping the Devil. I agree that it's not the exact parallel, but do you think a whole community (with towns and everything) of devil worshippers could have survived in Europe or North America for any significant period of time? Compared to Islam, Christianity was just more quick and efficient about eliminating them.
3Nornagest
That veers more into speculation than I'm really comfortable with. That said, though, I think you're giving this devil-worship thing a bit more weight than it should have; sure, some aspects of Melek Taus are probably cognate to the Islamic Shaitan myth, but Yazidi religion as a whole seems to draw in traditions from several largely independent evolutionary paths. We're not dealing here with the almost certainly innocent targets of witch trials or with overenthusiastic black metal fans, nor even with an organized Islamic heresy, but with a full-blown syncretic religion. No similar religions of comparable age survive in Christianity's present sphere of influence, though the example of Gnosticism suggests that the early evolution of the Western branch of Abrahamic faith was pretty damn complicated, and that many were wiped out in Christianity's early expansion or in medieval persecutions. There are a lot of younger ones, however, especially in the New World: Santeria comes to mind. That's only tangentially relevant to the historical parallels I'm trying to outline, though.
1Lumifer
Oh, it certainly is, but the issue is not what we are dealing with -- the issue is how the ISIS fighters perceive it. The whole Middle-East-to-India region is full of smallish religions which look to be, basically, outcomes of "Throw pieces of several distinct religious traditions together, blend on high for a while, then let sit for a few centuries".
4Nornagest
I'm pretty sure their perceptions are closer to an Albigensian Crusader's attitude toward Catharism -- or even your average Chick tract fan's attitude toward Catholicism -- than some shit-kicking medieval peasant's grudge toward the old man down the lane who once scammed him for a folk healing ritual that invoked a couple of barbarous names for shock value. Treating religious opponents as devil-worshippers is pretty much built into the basic structure of (premodern, and some modern) Christianity and Islam, whether or not there's anything to the accusation (though as I note above, the charge is at least as sticky for Catharism as for the Yazidi). The competing presence of a structured religion that's related closely enough to be uncomfortable but not closely enough to be a heresy per se... that's a little more distinctive.
0buybuydandavis
It hasn't been ignored by the American media. I've heard it multiple times. I don't think the term used was Satanist, but "devil worshippers".
0James_Miller
Although I'm a libertarian now, in my youth I was very left-wing and can understand the appeal of communism. For many of the others on the long list, yes they do feel very other to me.
2Sabiola
I too was very left-wing when I was young, and now I feel communism does belong with the others on that list. It fills the same mental space as a religion, and is believed in much the same way (IME).
0[anonymous]
Take some ISIS propaganda and do s/infidels/capitalist exploiters, s/Allah/the revolution, etc.

First you might want to consider propaganda.

http://www.revleft.com/vb/ten-commandments-war-t52907/index.html

http://home.cc.umanitoba.ca/~mkinnear/16_Falsehood_in_wartime.pdf

  1. We do not want war.

  2. The opposite party alone is guilty of war.

  3. The enemy is the face of the devil.

  4. We defend a noble cause, not our own interest.

  5. The enemy systematically commits cruelties; our mishaps are involuntary.

  6. The enemy uses forbidden weapons.

  7. We suffer small losses, those of the enemy are enormous.

  8. Artists and intellectuals back our cause.

  9. Our cause is sacred.

  10. All who doubt our propaganda, are traitors.

It's a little harder to say about the ISIS guys, but I think personality-wise many of us are a lot like the Al Qaeda leadership. Ideology, and jihad for it, appeals.

Most people don't take ideas too seriously. We do. And I think it's largely genetic.

I find it hard to think of ISIS members as human

Human, All Too Human.

Historically, massacring The Other is the rule, not the exception. You don't even need to be particularly ideological for that. People who just go with the flow of their community will set The Other on fire in a public square, and have a picnic watching. Bring their kids. Take grandma out for the big show.

2James_Miller
Excellent point. I wonder if LW readers and Jihadists would give similar answers to the Trolley problem.
5buybuydandavis
I don't think that's the test. It's not that they'd give the same answers to any particular question. I think the test would be a greater likelihood of being unshakeable along the moral modalities that move others who are not so ideological. How "principled" are you? How "extreme" a situation are you willing to assent to, relative to the general population? Largely, how far can you override morality cognitively?
5Nornagest
A hundred bucks says the answer is "no". Religious fundamentalism is not known to encourage consequentialist ethics. There may be certain parallels -- I've read that engineers and scientists, or students of those disciplines, are disproportionately represented among jihadists -- but they probably run deeper than that.
6buybuydandavis
Also disproportionately represented as the principals in the American Revolution. Inventors, engineers, scientists, architects. Franklin, Jefferson, Paine, and Washington all had serious inventions. That's pretty much the first string of the revolution.
4Richard_Kennaway
That might depend on the consequences. A runaway trolley is careering down the tracks and will kill a single infidel if it continues. If you pull a lever, it will be switched to a side track and kill five infidels. Do you pull the lever? The lever is broken, but beside you on the bridge is a very fat man, one of the faithful. Do you push him off the bridge to deflect the trolley and kill five infidels, knowing that he will have his reward for his sacrifice in heaven?
4Prismattic
I've also read this, but I want to know if it corrects for the fact that the educational systems in many of the countries that produce most jihadis don't encourage study of the humanities and certain social sciences. Is it really engineers in particular, or is it the educated-but-stifled, who happen overwhelmingly to be engineers in these countries?
8NancyLebovitz
Part of "us" is our culturally transmitted values. My impression is that ISIS is mostly a new thing-- it's a matter of relatively new memes taken up by adolescents and adults rather than generational transmission. I don't think it's practical to see one's enemies, even those who behave vilely and are ideologically committed to continuing to do so, as non-human. To see them as non-human is to commit oneself as framing them as incomprehensible. More exactly, the usual outcomes seems to be "all they understand is force" or "there's nothing to do but kill them". which makes it difficult to think of how to deal with them if victory by violence isn't a current option.
6Lumifer
On the contrary, that's the attitude specifically trained in modern armies, US included. Otherwise not enough people shoot at the enemy :-/
3NancyLebovitz
You might not be in an army.
0Azathoth123
I'm not sure about modern armies, but ancient and even medieval armies certainly didn't need this attitude to kill their enemies.
7bramflakes
It's possible that more inbred, clannish societies have smaller moral circles than Western outbreeders.

"I against my brother; my brother and I against my cousin; my cousin and I against the stranger." -- Bedouin proverb
[-][anonymous]180

I was talking to someone from Tennessee once, and he said something along the lines of: "When I'm in a bar in western Tennessee, I drink with the guy from western Tennessee and fight the guy from eastern Tennessee. When I'm in a bar in eastern Tennessee, I drink with the guy from Tennessee and fight the guy from Georgia. When I'm in a bar in Georgia, I drink with the guy from the South and fight the guy from New England."

7[anonymous]
The history of the European takeover of the Americas and the damn near genocide of somewhere between tens and hundreds of millions of people in the process, and the history of the resultant societies, should disabuse everyone here of any laughable claims of ethnic superiority in this regard. I also strongly suspect that the European diaspora of the Americas and elsewhere just hasn't had enough time for the massive patchwork of tribalisms to inevitably crystallize out of the liquid wave of disruptive post-genocide settlement that happened over the last few hundred years; so far we only have a few very large groups in this hemisphere that are coming to hate each other. Though sometimes I suspect the small coal mining town my parents escaped from could be induced to have race riots between the Poles and Italians.

Also... Germany. Enough said.

EDIT: Not directed at you, bramflakes, but at the whole thread here... how in all hell am I seeing so much preening smug superiority on display here? Humans are brutal murderous monkeys under the proper conditions. No one here is an exception at all except through accidents of space and time, and even now we all reading this are benefiting from systems which exploit and kill others and are for the most part totally fine with them or have ready justifications for them. This is a human thing.

Humans are brutal murderous monkeys under the proper conditions.

They are also sweetness and light under the proper conditions.

No one here is an exception at all except through accidents of space and time

You seem to be claiming that certain conditions -- those not producing brutal murderous monkeys -- are accidents of space and time, but certain others -- those producing brutal murderous monkeys -- are not. That "brutal murderous monkeys" is our essence and any deviation from that mere accident, in the philosophical sense. That the former is our fundamental nature and the latter mere superficial froth.

There is no actual observation that can be made to distinguish "proper conditions" from "parochial circumstance", "essence" from "accident", "fundamental" from "superficial".

1MrMind
Chimpanzee tribes, given enough resources, can pass from an equilibrium based on violence to an equilibrium based on niceness and sharing. I cannot seem to find the relevant experiment, despite extensive searching, but I remember it vividly. I guess the same thing can happen to humans too.
4Richard_Kennaway
It visibly does. If you're not sitting in a war zone, just look around you. Are the people around you engaged in brutally murdering each other? This is not to say that the better parts of the world are perfect, but to look at those parts and moan about our brutally murderous monkey nature is self-indulgent posturing.
2A1987dM
See “Can the Chain Still Hold You?”.
6James_Miller
We have a right to feel morally superior to ISIS, although probably not on genetic grounds. But is this true? Do some people have genes which strongly predispose them against killing children? It feels to me like I do, but I recognize my inability to properly determine this. As a free-market economist I disagree with this. The U.S. economy does not derive wealth from the killing of others, although as the word "exploit" is hard to define I'm not sure what you mean by that.
3ChristianKl
The Stanford prison experiment suggests that you don't need that much to get people to do immoral things. ISIS evolved over years of hard civil war. ISIS also partly has its present power because the US first destabilised Iraq and later allowed funding of Syrian rebels. The US was very free to avoid fighting the Iraq war. ISIS fighters get killed if they don't fight their civil war.
3fubarobfusco
The Stanford prison "experiment" was a LARP session that got out of control because the GM actively encouraged the players to be assholes to each other.
2Douglas_Knight
I agree with that interpretation of the experiment but "active encouragement" should count as "not that much."
3James_Miller
I am very confident that a college-student version of me taking part in a similar experiment as a guard would not have been cruel to the prisoners, in part because the high-school me (who at the time was very left-wing) decided not to stand up for the Pledge of Allegiance even though everyone else in his high school regularly did, and because that same me refused to participate in a gym game named war-ball because I objected to the name.
7Nornagest
I didn't stand for the Pledge in school either, but in retrospect I think that had less to do with politics or virtue and more to do with an uncontrollable urge to look contrarian. I can see myself going either way in the Stanford prison experiment, which probably means I'd have abused the prisoners.
-3ChristianKl
But you aren't that left-wing anymore, yet you go around teaching people to make decisions based on game theory.
0James_Miller
I moved to the right in my 20s.
-1Lumifer
Who is "we"? and are you comparing individuals to an amorphous military-political movement? Everyone has these genes. It's just that some people can successfully override their biological programming :-/ Killing children is one of the stronger moral taboos, but a lot of kids are deliberately killed all over the world. By the way, the US drone strikes in Pakistan are estimated to have killed 170-200 children.
-1[anonymous]
"Every computer has this code. It's just that some computers can successfully override their programming." What does this statement mean?
2Risto_Saarelma
Suppressing bad instincts. Seems to make sense to me and describe a real thing that's often a big deal in culture and civilization. All it needs to be coherent is that people can have both values and instincts, that the values aren't necessarily that which is gained by acting on instincts, and that people have some capability to reflect on both and not always follow their instincts. For the software analogy, imagine an optimization algorithm that has built-in heuristics, runtime generated heuristics, optimization goals, and an ability to recognize that a built-in heuristic will work poorly to reach the optimization goal in some domain and a different runtime generated heuristic will work better.
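(Editorial aside: a minimal sketch of that software analogy, with everything here invented for illustration -- a toy optimizer that overrides its built-in heuristic whenever a runtime-generated one looks like a better fit for the goal.)

    # Toy optimizer: built-in vs. runtime-generated heuristics.
    # Each step, it scores both against the goal and uses whichever is
    # predicted to make more progress -- the built-in "instinct" gets
    # overridden whenever a learned shortcut serves the goal better.

    def built_in_heuristic(state):
        return state + 1  # hardwired, crude step

    def make_runtime_heuristic(goal):
        return lambda state: state + (goal - state) // 2  # learned shortcut

    def score(heuristic, state, goal):
        return -abs(goal - heuristic(state))  # predicted progress toward goal

    def step(state, goal):
        candidates = [built_in_heuristic, make_runtime_heuristic(goal)]
        return max(candidates, key=lambda h: score(h, state, goal))(state)

    state, goal = 0, 100
    while state != goal:
        state = step(state, goal)
    print(state)  # 100; the learned shortcut does most of the work,
                  # and the built-in step finishes the final unit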
0Lumifer
The usual. The decisions that you make result from a weighted sum of many forces (reasons, motivations, etc.). Some of these forces/motivations are biologically hardwired -- almost all humans have them and they are mostly invariant among different cultures. The fact that they exist does not mean that they always play the decisive role.
0A1987dM
You appear to be implying that all (or nearly all) motivations that are hardwired are universal and vice versa, neither of which seems obvious to me.
-2Lumifer
Hm. I would think that somewhere between many and most of the universal terminal motivations are hardwired. I am not sure why they would be universal otherwise (a similar environment can produce similar responses, but I don't see why it would produce similar motivations). And in reverse, all motivations hardwired into Homo sapiens should be universal, since humanity is a single species.
0A1987dM
Well, about a century ago religion was pretty much universal, and now a sizeable fraction of the population (especially in northern Eurasia) is atheist, even if genetics presumably haven't changed that much. How do we know there aren't more things like that? I'm aware of the theoretical arguments to expect that same species -> same hardwired motivations, but I think they have shortcomings (see the comment thread to that article) and the empirical evidence seems to be against (see this or this).
0Lumifer
Was it? Methinks you forgot about places like China, if you go by usual definitions of "religion". Besides, it has been argued that the pull towards spiritual/mysterious/numinous/godhead/etc. is hardwired in some way. This is a "to which degree" argument. Your link says "Different human populations are likely for biological reasons to have slightly different minds" and I will certainly agree. The issue is what "slightly" means and how significant it is.
0A1987dM
Well, that's a different claim from “all motivations hardwired into Homo sapiens should be universal” (emphasis added) in the great-grandparent.
0Lumifer
If you want to split hairs :-) all motivations hardwired into Homo sapiens should be universal. Motivations hardwired only into certain subsets of the species will not be universal.
0A1987dM
If you mean motivations hardwired into all Homo sapiens, sure, but that's tautological! :-)
3Viliam_Bur
Uhm, taboo "morally different"? Are their memes repulsive to me? Yes, they are. Do they have terminal value as humans (ignoring their instrumental value)? Yes, they do. How about their instrumental value? Uhm, probably negative, since they seem to spend a lot of time killing other humans. Probably yes. I think there can be a genetic influence, but there is much more of "monkey see, monkey do" in humans.
2niceguyanon
Here is a Vice documentary posted today about ISIS: https://news.vice.com/video/the-islamic-state-full-length
2Richard_Kennaway
The question is irrelevant. If it is wrong to behead children for having the "wrong" religion, that is not affected by fictional scenarios in which "we" believed differently. (It's not clear what "we" actually means there, but that's a separate philosophical issue.) Truth is not found by first seeing what you believe, and then saying, "I believe this, therefore it is true." This question is also irrelevant. Well, they are. Start from there.
1ChristianKl
Focusing on "morally correct" might prevent a lot of understanding of the situation. People in war usually don't do things because they are morally correct.

I wonder why we don't see more family fortunes in the U.S. in kin groups that have lived here for generations. Estate taxes tend to inhibit the transmission of wealth down the line, but enough families have figured out how to game the system that they have held on to wealth for a century or more, notably including families which supply a disproportionate number of American politicians; they provide proof of concept of the durable family fortune. Otherwise most Americans seem to live in a futile cycle where their lifetime wealth trajectory starts from zero ...

5NancyLebovitz
Another possibility is that Americans are more individualistic. Maintaining a family fortune means subordinating yourself enough that it isn't spent down.
5Lumifer
"Lacking self-control" is probably what you mean :-) Example: the Vanderbilts.
3wadavis
Supporting the individualistic argument. The family values trend in my prosperous region of Canada is leaning toward successful businessmen and entrepreneurs valuing empowering their children but not supporting their children past adolescence. The accepted end goal IS to die as close to net zero as possible; I've not seen strong obligations to leave a large inheritance behind. The only strong obligation is the empowerment of their upper-middle-class children so they can follow the same zero-to-wealth-to-zero cycle. Where sons stay in the same industry as their fathers, instead of striking out on their own, they work for the father's firm until they have the credit and savings to start taking loans and buying shares of the father's firm. Successful succession planning is when the children can buy 100% of the firm by the time the parents are ready for retirement. (All based on personal observations of a single province and a group of peers, n~20.)
1Lumifer
Is there an exception for real estate? I'm thinking both "regular" houses (reverse mortgages are uncommon) and, in particular, things like summer houses and farmland which tend to stay in the family. I agree that the desire to leave behind a large bank account is... not widespread, but land and houses look sticky to me.
0wadavis
Farmland is far closer to a business asset and ends up treated the same as any other economic asset. Of course, in farming there is a higher ratio of dynasty-minded families (a function of this province's immigration history and strong Eastern European cultural backgrounds). I see what you mean about personal homes and personal land. There may be a mental division between economic assets, which shall not be given away, only sold, and personal assets, which are gifted away. This is a gap in my knowledge; it appears I need to spend more time with close-to-retirement, independently wealthy individuals.
2buybuydandavis
What I'd like to know is how the Brits are doing it.
1sixes_and_sevens
The part of my brain that generates sardonic responses says "Oxbridge and nepotism". At risk of generating explanations for patterns that don't really exist, class, education and assortative mating seem to make for wealthy dynasties.
2Nornagest
I think there's a couple of fairly simple reasons contributing to Americans not having a culture of inheritance: first, that we live a long time by historical standards; and second, that we have a norm of children moving out after maturity. The first means that estates are generally released after children are well into their careers, and sometimes after they're themselves retired. The second means that all but the very wealthiest have to establish their own careers rather than living off the family dime. This wouldn't directly affect actual inheritance, but it does take a lot of the urgency out of establishing a legacy. That lack of urgency might in turn contribute to reductions in real inheritance, given that you can sink a more or less arbitrary amount of money (by middle-class standards) into things like travel and expensive hobbies.
-2Illano
In American society in particular, I would assume a large reason that wealth is not passed from generation to generation currently is the enormous costs associated with end-of-life medical care. You've got to be in the top few percent of Americans to be able to have anything left after medical costs (or die early/unexpectedly, which also tends to work against estate planning efforts).
4Shmi
This only became a thing in the last 50 years or so and would not have been a major expense a century ago. Even now the costs are about $50k to $100k per person, which is in line with what a healthy upper middle-class person spends every year. The wealthy spend a lot more than that, so the palliative care costs are unlikely to make a dent in their fortunes.
0Illano
Good point about the medical costs being a relatively recent development. However, I still think they are a huge hurdle to overcome if wealth staying in a family is to become widespread. Using the number you supplied of $50k/year, the median American at retirement age could afford about 3 years of care. (Not an expert on this, just used numbers from a google search link.) This only applies to the middle class, though; essentially it means that you can't earn a little bit more than average and pass it on to your kids to build up dynastic wealth, since for the middle classes at least, at end-of-life you pretty much hit a reset button.
6Lumifer
I don't think it ever works like this -- saving a bit and accumulating it generation after generation. The variability in your income/wealth/general social conditions is just too high. "Dynastic wealth" is usually formed by one generation striking it absurdly rich and the following generations being good stewards of it.
0Shmi
You seem to be grasping here. The OP talked about passing down old family fortunes, not problems building new ones. Whether EOL care expenses are a significant hurdle to the new wealth accumulation is an interesting but unrelated question. My suspicion is that if it is, then there ought to be an insurance one can buy to limit exposure.
3buybuydandavis
I don't think those costs are relevant for families with fortunes.
-5Izeinwinter

This is not an attempt at an organised meetup, but the World Science Fiction Convention begins tomorrow in London. I'll be there. Anyone else from LessWrong?

I had intended to be at Nineworlds last weekend as well, but a clash came up with something else and I couldn't go. Was anyone else here there?

[-]Shmi40

If any LWer is attending the Quantum Foundations of a Classical Universe workshop at the IBM Watson Research Center, feel free to report!

Several relatively famous experts are discussing anthropics, the Born rule, MWI, Subjective Bayesianism, quantum computers and qualia.

1MrMind
Here is a list of papers about the talks, if you want to get an idea without attending.
0Shmi
I've read most of those I care to, but there is always something about face-to-face discussions that is lost in print.

I am getting the red envelope icon on the right side here, as if I had a message. But then I see it's not for me. It's been like that for a few days now.

Have you ever clicked on the grey envelope icon found at the bottom right of every post and comment? If you do, then immediate replies to it show up in your inbox also. Look at the parent of one of these mysterious messages and see if its envelope is green. If it is, you can click it again to turn it off.

5Thomas
Thanks! I had done this, inadvertently.
6Nornagest
If a reply to one of your comments is deleted before you read it, you'll be alerted but won't get the message. I believe the alert should go away once you check your messages, though.
4drethelin
I think if someone is responding to you in a very downvoted thread it might not show up in your replies?

My brain spontaneously generated an argument for why killing all humans might be the best way to satisfy my values. As far as I know it's original; at any rate, I don't recall seeing it before. I don't think it actually works, and I'm not going to post it on the public internet. I'm happy to just never speak of it again, but is there something else I should do?

is there something else I should do?

Find out how your brain went wrong, with a view to not going so wrong again.

3zzrafz
Playing devil's advocate here, the original poster is not that wrong. Ask any other living species on Earth and they will say their life would be better without humans around.

Apart from the fact that they wouldn't say anything (because generally animals can't speak our languages ;)), nature can be pretty bloody brutal. There are plenty of situations in which our species' existence has made the lives of other animals much better than they would otherwise be. I'm thinking of veterinary clinics that often perform work on wild animals, pets that don't have to be worried about predation, that kind of thing. Also I think there are probably a lot of species that have done alright for themselves since humans showed up, animals like crows and the equivalents in their niche around the world seem to do quite well in urban environments.

As someone who cares about animal suffering, is sympathetic to vegetarianism and veganism, and even somewhat sympathetic to more radical ideas like eradicating the world's predators, I think that humanity represents a very real possibility to decrease suffering including animal suffering in the world, especially as we grow in our ability to shape the world in the way we choose. Certainly, I think that humanity's existence provides real hope in this direction, remembering that the alternative is for animals to continue to suffer on nature's whims perhaps indefinitely, rather than ours perhaps temporarily.

0zzrafz
Never thought of it this way. Guess in the long term it makes sense. So far, though...
9Lumifer
Let's ask a cockroach, a tapeworm, and a decorative-breed dog :-)
2DanielLC
Humans are leading to the extinction of many species. Given the sorts of things that happen to them in the wild, this may be an improvement. This is too distant from the original argument to be an argument for it. I'm just playing devil's advocate recursively.
6Username
It seems I was unclear. I have no intention of attempting to kill all humans. I'm not posting the argument publicly because I don't want to run the (admittedly small) risk that someone else will read it and take it seriously. I'm just wondering if there's anything I can do with this argument that will make the world a slightly better place, instead of just not sharing it (which is mildly negative to me and neutral to everyone else -- unless I've sparked anyone's curiosity, for which I apologise).
2polymathwannabe
What values could possibly lead to such a choice?
[-]satt110

Hardcore negative utilitarianism?

In The Open Society and its Enemies (1945), Karl Popper argued that the principle "maximize pleasure" should be replaced by "minimize pain". He thought "it is not only impossible but very dangerous to attempt to maximize the pleasure or the happiness of the people, since such an attempt must lead to totalitarianism."[67] [...]

The actual term negative utilitarianism was introduced by R. N. Smart as the title to his 1958 reply to Popper[69] in which he argued that the principle would entail seeking the quickest and least painful method of killing the entirety of humanity.

Suppose that a ruler controls a weapon capable of instantly and painlessly destroying the human race. Now it is empirically certain that there would be some suffering before all those alive on any proposed destruction day were to die in the natural course of events. Consequently the use of the weapon is bound to diminish suffering, and would be the ruler's duty on NU grounds.[70]

(Pretty cute wind-up on Smart's part; grab Popper's argument that to avoid totalitarianism we should minimize pain, not maximize happiness, then turn it around on Popper by counterarguing that his argument obliges the obliteration of humanity whenever feasible!)

1Gunnar_Zarncke
Values that value animals as highly, or nearly as highly, as humans.
9Baughn
Not if you account for the typical suffering in nature. Humans remain the animals' best hope of ever escaping that.
2NancyLebovitz
It might not just be about suffering-- there's also the plausible claim that humans lead to less variety in other species.
4DanielLC
I feel like that's a value that only works because of scope insensitivity. If the extinction of a species is as bad as killing x individuals, then when the size of the population is not near x, one of those things will dominate. But people still think about it as if they're both significant.
1Baughn
Why does that, um, matter? I can see valuing animal experience, but that's all about individual animals. Species don't have moral value, and nature as a whole certainly doesn't.
4James_Miller
Would you say the same about groups of humans? Is genocide worse than killing an equal number of humans but not exterminating any one group?
8fubarobfusco
I suspect that the reason we have stronger prohibitions against genocide than against random mass murder of equivalent size is not that genocide is worse, but that it is more common. It's easier to form, motivate, and communicate the idea "Kill all the Foos!" (where there are, say, a million identifiable Foos in the country) than it is to form and communicate "Kill a million arbitrary people."
8Azathoth123
I suspect that's not actually true. The communist governments killed a lot of people in a (mostly) non-genocidal manner. The reason we have stronger prohibitions against genocide is the same reason we have stronger prohibitions against the swastika than against the hammer and sickle. Namely, the Nazis were defeated and no longer able to defend their actions in debates while the communists had a lot of time to produce propaganda.
0Vulture
Wait, what? Did considering genocide more heinous than regular mass murder only start with the end of WWII?
2NancyLebovitz
For what it's worth, the word genocide may have been invented to describe what the Nazis did -- anyone have OED access to check for earlier cites?
0Azathoth123
It existed before, but its use really picked up after WWII.
1Viliam_Bur
Unfortunately, genocides happen all the time. But only one of them got big media attention, which made it the evil. Cynically speaking: if you want the world to not pay attention to a genocide, (a) don't do it in a first-world country, and (b) don't do it during a war with another side which can make condemning the genocide part of its propaganda, especially if at the end you lose the war.
8NancyLebovitz
Alternatively, killing a million people at semi-random (through poverty or war) is less conspicuous than going after a defined group.
0Azathoth123
I don't see why it should be.
3Lumifer
Do particular cultures or, say, languages, have any value to you?
0Vulture
Nailed it. By which I mean, this is the standard argument. I'm surprised nobody brought it up earlier.
-2Azathoth123
Do particular computer systems or, say, programming languages, have any value to you? Compare your attitude to these two questions, what accounts for the difference?
1Lumifer
The fact that I am human. And..?
0Azathoth123
And what? You're a human not a meme, so why are you assigning rights to memes? And why some memes and not others?
3Lumifer
I am not assigning any rights to memes. I am saying that, as a human, I value some memes. I also value the diversity of the meme ecosystem and the potential for me to go and get acquainted with new memes which will be fresh and potentially interesting to me. Why some memes and not others -- well, that flows out of my value system and personal idiosyncrasies. Some things I find interesting and some I don't -- but how is that relevant?
-2Azathoth123
So why should anyone else care about your personal favorite set of favored memes?
4NancyLebovitz
A fair number of people believe that it's a moral issue if people wipe out a species, though I'm not sure if I can formalize an argument for that point of view. Anyone have some thoughts on the subject?
0DanielLC
... one way or another.
0Baughn
Given how long they don't live, I'd be satisfied with just preventing any further generations.
0polymathwannabe
Let's suppose for a moment that's what Username meant. If Username deems other beings to be more valuable than humans, then Username, as a human, will have a hard time convincing hirself of pursuing hir own values. So I guess we're safe.
2Username
I'm not going to say what the values are, beyond that I don't think they would be surprising for a LWer to hold. Also, yes, you're safe. But it seems like you started with disbelief in X, and you were given an example of X, and your reaction should be to now assume that there are more examples of X; and it looks like instead, you're attempting to reason about class X based on features of a particular instance of it.
0polymathwannabe
I thought it was clear that "Username deems other beings to be more valuable than humans" was a particular instance of X, not a description of the entire class.
1NancyLebovitz
I'd say not to worry about it unless it's a repetitive thought.
1buybuydandavis
You should consider that the problem may not be in the argument, but in your beliefs about the values you think you have.
1Username
I have considered that, and I don't think it's a relevant issue in this particular case.
0[anonymous]
Why are you asking this question? If you have larger worries about your mental health or are worried that you might do something Very Bad, you should consider seeking mental assistance. I don't know the best course there (actually, that would be a great page for someone to write up) but I'm sure there are several people here who could point you in a good direction. If your name is Leó Szilárd and you wish to register an Omega-class Dangerous Idea™ with the Secret Society of Sinister Scheme Suppressors, I do not believe they exist. Anyone claiming to be a society representative is actually a 4chan troll who will post the idea on a 30 meter billboard in downtown Hong Kong just to mock you. An argument simple enough to be generated spontaneously in your brain is probably loose in the wild already and not very dangerous. To play it safe, stay quiet and think. If you're asking because you've just thought of this neat thing and you want to share it with someone, but are worried you might look a bit bad, I'm sure plenty of people here would be happy to read your argument in a private message.
0lmm
Do you care about it? It sounds like you're responding appropriately (though IMO it's better that such arguments be public and be refuted publicly, as otherwise they present a danger to people who are smart or lucky enough to think up the argument but not the refutation). If the generation of that argument, or what it implies about your brain, is causing trouble with your life then it's worth investigating, but if it's not bothering you then such investigation might not be worth the cost.
2Username
This is the sort of thing I'm thinking about. The argument seems more robust than the obvious-to-me counterargument, so I feel that it's better to just not set people thinking about it. I'm not sure though.
2[anonymous]
If the argument is simple enough for your brain to generate it spontaneously, someone else has probably thought of it before and not released a mind plague upon humanity. There could even be an established literature on the subject in philosophy journals. Have you done a search? The argument may not have good keywords and be ungooglable. If that's the case, you could (a) discuss it with a friendly neighborhood professional philosopher or (b) pay a philosophy grad student $25 to bounce your idea off them. I quickly brainstormed 6 (rather bad) reasons killing everyone in the world would satisfy someone's values. How do these reasons compare in persuasiveness? If your reason isn't much better than those, I don't think you have much to worry about.
-2zzrafz
Since you won't be able to kill all humans and will eventually get caught and imprisoned, the best move is to abandon your plan, according to utilitarian logic.
0[anonymous]
I'm not so sure this is obvious. How much damage can one intelligent, rational, and extremely devoted person do? Certainly there are a few people in positions that obviously allow them to wipe out large swaths of humanity. Of course, getting to those positions isn't easy (yet still feasible given an early enough start!). But I've thought about this for maybe two minutes; how many non-obvious ways would there be for someone willing to put in decades? The usual way to rule them out without actually putting in the decades is by taking the outside view and pointing at all the failures. But nobody even seems to have seriously tried. If they had, we'd have at least seen partial successes.
-5[anonymous]
2DanielLC
It doesn't seem to be clear whether that's just people of different cultures grouping faces differently, like how they might group colors differently even though their eyes work the same, or if their face/emotion correspondence is different.
[-][anonymous]30

Cryonics question:

For those of you using life insurance to pay your cryonics costs, what sort of policy do you use?

9James_Miller
Whole life via Rudi Hoffman for Alcor.
5Joshua_Blaine
I've not personally finished my own arrangements, but I'll likely be using whole life of some kind. I do know that Rudi Hoffman is an agent well recommended by people who've gone the insurance route, so talking to him will likely get you a much better idea of what choices people make (a small warning: his site is not the prettiest thing). You could also contact the people recommended on Alcor's Insurance Agents page, if you so desire.

I've been noticing a theme of utilitarianism on this site -- can anyone explain this? More specifically: how did you guys rationalize a utilitarian philosophy over an existential, nihilistic, or hedonistic one?

8Dahlen
To put it as simply as I could, LessWrongers like to quantify stuff. A more specific instance of this is the fact that, since this website started off as the brainchild of an AI researcher, the prevalent intellectual trends will be those with applicability in AI research. Computers work easily with quantifiable data. As such, if you want to instill human morality into an AI, chances are you'll at least consider conceptualizing morality in utilitarian terms.
6Richard_Kennaway
The confluence of a number of ideas. Cox's theorem shows that degrees of belief can be expressed as probabilities. The VNM theorem shows that preferences can be expressed as numbers (unique up to a positive affine transformation), usually called utilities. Consequentialism, the idea that actions are to be judged by their consequences, is pretty much taken as axiomatic. Combining these gives the conclusion that the rational action to take in any situation is the one that maximises the resulting expected utility. Your morality is your utility function: your beliefs about how people should live are preferences about how they should live. Add the idea of actually being convinced by arguments (except arguments of the form "this conclusion is absurd, therefore there is likely to be something wrong with the argument", which are merely the absurdity heuristic) and you get LessWrong utilitarianism.
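(Editorial aside: a toy numerical rendering of that decision rule, with made-up probabilities and utilities -- nothing here comes from the comment above. Pick the action whose probability-weighted utility is highest.)

    # Minimal sketch of "maximise expected utility" with invented numbers.
    # p[action][outcome] = P(outcome | action); u = utility of each pair.
    p = {
        "take umbrella":  {"rain": 0.3, "shine": 0.7},
        "leave umbrella": {"rain": 0.3, "shine": 0.7},
    }
    u = {
        ("take umbrella", "rain"): 5,    ("take umbrella", "shine"): 3,
        ("leave umbrella", "rain"): -10, ("leave umbrella", "shine"): 8,
    }

    def expected_utility(action):
        return sum(prob * u[(action, outcome)]
                   for outcome, prob in p[action].items())

    best = max(p, key=expected_utility)
    print(best, expected_utility(best))  # take umbrella: 3.6 (vs 2.6)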
2blacktrance
Utilitarianism is more than just maximizing expected utility, it's maximizing the world's expected utility. Rationality, in the economic or decision-theoretic sense, is not synonymous with utilitarianism.
1Richard_Kennaway
That is a good point, but I think one under-appreciated on LessWrong. It seems to go "rationality, therefore OMG dead babies!!" There is discussion about how to define "the world's expected utility", but it has never reached a conclusion.
0blacktrance
In addition to the problem of defining "the world's expected utility", there is also the separate question of whether it (whatever it is) should be maximized.
0Vulture
I think this is probably literally correct, but misleading. "Maximizing X's utility" is generally taken to mean "maximize your own utility function over X". So in that sense you are quite correct. But if by "maximizing the world's utility" you mean something more like "maximizing the aggregate utility of everyone in the world", then what you say is only true of those who adhere to some kind of preference utilitarianism. Other utilitarians would not necessarily agree.
0blacktrance
Hedonic utilitarians would also say that they want to maximize the aggregate utility of everyone in the world, they would just have a different conception of what that entails. Utilitarianism necessarily means maximizing aggregate utility of everyone in the world, though different utilitarians can disagree about what that means - but they'd agree that maximizing one's own utility is contrary to utilitarianism.
0Vulture
Anyone who believes that "maximizing one's own utility is contrary to utilitarianism" is fundamentally confused as to the standard meaning of at least one of those terms. Not knowing which one, however, I'm not sure what I can say to make the matter more clear.
0blacktrance
Maximizing one's own utility is practical rationality. Maximizing the world's aggregate utility is utilitarianism. The two need not be the same, and in fact can conflict. For example, you may prefer to buy a cone of ice cream, but world utility would be bettered more effectively if you'd donate that money to charity instead. Buying the ice cream would be the rational own-utility-maximizing thing to do, and donating to charity would be the utilitarian thing to do.
0Richard_Kennaway
However, if utilitarianism is your ethics, the world's utility is your utility, and the distinction collapses. A utilitarian will never prefer to buy that ice cream.
0Shmi
It's the old System I (want ice cream!) vs System 2 (want world peace!) friction again.
0ChristianKl
In general this site focuses on the friendly AI problem, a nihilistic or a hedonistic AI might not be friendly to humans. The notion of an existentialist AI seems to be largely unexplored as far as I know.
-2Ef_Re
To the extent that lesswrong has an official ethical system, that system is definitely not utilitarianism.
1James_Miller
I don't agree. LW takes a microeconomics viewpoint of decision theory and this implicitly involves maximizing some weighted average of everyone's utility function.
0Vulture
At some point we really need to come up with more words for this stuff so that the whole consequentialism/hedonic-utilitarianism/etc. confusion doesn't keep coming up.
-12ZctE
To the extent that lesswrong has an official ethical system, that system is utilitarianism with "the fulfillment of complex human values" as a suggested maximand rather than hedons.
0Ef_Re
That would normally be referred to as consequentialism, not utilitarianism.
02ZctE
Huh, I'm not sure, actually. I had been thinking of consequentialism as the general class of ethical theories based on caring about the state of the world, and of utilitarianism as what you get when you try to maximize some definition of utility (which could be human value-fulfillment if you tried to reason about it quantitatively). If my usages are unusual, I more or less inherited them from the Consequentialism FAQ, I think.
0Ef_Re
If you mean Yvain's, while his stuff is in general excellent, I recommend learning about philosophical nomenclature from actual philosophers, not medics.

I posted this in the last open thread but I think it got buried:

I was at Otakon 2014, and there was a panel about philosophy and videogames. The description read like Less Wrongese. I couldn't get in (it was full) but I'm wondering if anyone here was responsible for it.

6David_Gerard
The description: "Philosophy in Video Games [F]: A discussion of philosophical themes present in many different video games. Topics will include epistemology, utilitarianism, philosophy of science, ethics, logic, and metaphysics. All topics will be explained upon introduction and no prior knowledge is necessary to participate!" Did they record all panels?
2Error
According to their FAQ, most panels are not recorded. Google doesn't turn up any immediate evidence that this one was an exception.
[-][anonymous]10

David Collingridge wouldn't have liked Nick Bostrom's "differential technological development" idea.

[This comment is no longer endorsed by its author]

Is it easier for you to tell men or women apart?

Obvious hypothesis: whichever gender you are attracted to, you will find them easier to tell apart.

3kalium
It's easier for me to tell women apart because their hairstyles have more interpersonal variation. (I distinguish people mainly by hair. It takes a few months before I learn to recognize a face.) I'm pretty much just attracted to men though.
1ChristianKl
I don't really know. I'm attracted to women, and if I look back, most cases of confusing one person for another are cases where I danced salsa with a woman for 10 minutes and then see the same woman again months later. I also use gait patterns for recognition, and I sometimes have a hard time deciding whether a photo is of a person I have seen in person if I haven't interacted much with them. As far as attraction goes, it's also worth noting that I sometimes feel emotions that come from having interacted with a person beforehand, but it takes me some time to puzzle together where I met the person before. The emotional part gets handled by different parts of the brain.
0wadavis
Interesting point about the gait recognition. I had an acquaintance of the family recognize my father by his gait at a distance where I couldn't. Anyone else not recognize gaits? Does this vary by person?
0arundelo
If there's a difference (in how well I can discriminate between men versus women) I haven't noticed it. I am attracted to women much more than men.
0bramflakes
What do you mean "tell apart"?
0pianoforte611
I mean: how likely are you to mistake one person for another?
4polymathwannabe
I am bisexual, leaning toward liking men more, and sometimes women seem to me to look all the same. However, if I'm introduced to two obviously distinct people, and their names have the same initial, it'll be months before I get who's who right.

In a world without leap years, how many people should a company have to be reasonably certain that every day will be someone's birthday?

[-]xnn100

See Coupon collector's problem, particularly "tail estimates".
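(Editorial aside: a quick numerical sketch, with "reasonably certain" arbitrarily set to 95%. The exact probability is standard inclusion-exclusion; the union bound 365*(364/365)^n <= eps gives a sufficient headcount directly.)

    # With n people and 365 equally likely birthdays (no leap years),
    # P(every day covered) follows from inclusion-exclusion, and the
    # union-bound tail estimate gives n ~ 365 * ln(365 / eps).
    from math import comb, log

    DAYS = 365

    def p_all_days_covered(n):
        return sum((-1) ** k * comb(DAYS, k) * ((DAYS - k) / DAYS) ** n
                   for k in range(DAYS + 1))

    eps = 0.05                    # tolerate a 5% chance of an empty day
    n = round(DAYS * log(DAYS / eps))
    print(n)                      # 3247 people
    print(p_all_days_covered(n))  # ~0.95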

2polymathwannabe
Thank you.
[-][anonymous]10

If I post a response to someone, and someone replies to me, and they get a single silent downvote before I read their response, I find myself reflexively upvoting them just so they won't think I was the one who gave the single silent downvote. It seems plausible to me that if you have a single downvote and no responses, the most likely explanation is that the person you replied to downvoted you, and I don't want people to think that.

Except, then I seem to have gotten my opinion of the post hopelessly biased before even read...

If I try to assess this more rationally, I get the suggestion 'You're worrying far too much about what other people MIGHT be thinking, based on flimsy evidence."

The suggestion is correct.

4polymathwannabe
It's easy for users to abandon that supposition by themselves after they have spent enough time at LW.
2Xachariah
You don't need to upvote them necessarily. Just flip a coin. If you downvote them too, then it just looks like they made a bad post.
0lmm
I upvote people who reply to me on unpopular threads disproportionately often, because I want to encourage that. I upvote people who I think have an unfairly low score. Given this, behaviour much like yours follows. I think that's fine. I'd recommend always reading before voting though.
0Richard_Kennaway
I think the thought you thought of there is right.
-1ChristianKl
If you want to make it clear that you didn't downvote, just start your post with "(I didn't downvote the above post)".