
Open thread, 11-17 August 2014

5 Post author: David_Gerard 11 August 2014 10:12AM

Previous open thread

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments (268)

Comment author: sixes_and_sevens 11 August 2014 02:57:22PM *  9 points [-]

What sophisticated ideas did you come up with independently before encountering them in a more formal context?

I'm pretty sure that in my youth I independently came up with rudimentary versions of the anthropic principle and the Problem of Evil. Looking over my Livejournal archive, I was clearly not a fearsome philosophical mind in my late teens (or now, frankly), so it seems safe to say that these ideas aren't difficult to stumble across.

While discussing this at the most recent London Less Wrong meetup, another attendee claimed to have independently arrived at Pascal's Wager. I've seen a couple of different people speculate that cultural and ideological artefacts are subject to selection and evolutionary pressures without ever themselves having come across memetics as a concept.

I'm still thinking about ideas we come up with that stand to reason. Rather than prime you all with the hazy ideas I have about the sorts of ideas people converge on while armchair-theorising, I'd like to solicit some more examples. What ideas of this sort did you come up with independently, only to discover they were already "a thing"?

Comment author: Adele_L 11 August 2014 04:13:36PM *  19 points [-]

When I was a teenager, I imagined that if you had just a tiny infinitesimally small piece of a curve - there would only be one moral way to extend it. Obviously, an extension would have to be connected to it, but also, you would want it to connect without any kinks. And just having straight-lines connected to it wouldn't be right, it would have to be curved in the same sort of way - and so on, to higher-and-higher orders. Later I realized that this is essentially what a Taylor series is.
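That "higher-and-higher orders" intuition can be sketched numerically. This is a minimal illustrative example (mine, not from the comment above), rebuilding sin(x) purely from its derivative data at 0 - exactly the kink-free extension described:

```python
import math

def taylor_sin(x, order):
    # Extend sin from its derivatives at 0; only odd-power terms survive.
    # Each extra term matches one more order of "curving the same way".
    total = 0.0
    for k in range(order + 1):
        n = 2 * k + 1
        total += (-1) ** k * x ** n / math.factorial(n)
    return total
```

Each added order imposes one more smoothness condition on the extension, and in the limit the series pins down the function uniquely (within its radius of convergence) - which is the Taylor-series idea.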

I also had this idea when I was learning category theory that objects were points, morphisms were lines, composition was a triangle, and associativity was a tetrahedron. It's not especially sophisticated, but it turns out this idea is useful for n-categories.

Recently, I have been learning about neural networks. I was working on implementing a fairly basic one, and I had a few ideas for improving neural networks: making them more modular - so neurons in the next layer are only connected to a certain subset of neurons in the previous layer. I read about V1, and together these led to the idea that you arrange things so they take into account the topology of the inputs - so for image processing, having neurons connected to small, overlapping circles of inputs. Then I realized you would want multiple neurons with the same inputs detecting different features, and that you could reuse training data for neurons with different inputs detecting the same feature - saving computation cycles. So for the whole network, you would build up from local to global features as you applied more layers - which suggested that sheaf theory may be useful for studying these. I was planning to work out the details and implement as much of this as I could (and still intend to, as an exercise), but the next day I found that this was essentially the idea behind convolutional neural networks. I'm rather pleased with myself, since CNNs are apparently state-of-the-art for many image recognition tasks (some fun examples). The sheaf theory angle seems to be original to me, though, and I hope to see whether applying Goguen's sheaf semantics would be useful/interesting.
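The two core ideas described - local, overlapping receptive fields and reused (shared) weights - fit in a few lines of plain Python. This is an illustrative 1D sketch, not anything from the original comment:

```python
def conv1d(signal, kernel):
    # Each output neuron sees only a small overlapping window of the
    # input (local receptive field), and every window reuses the same
    # weights (weight sharing) -- the two ideas described above.
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A simple "feature detector": responds where the input jumps.
edges = conv1d([0, 0, 0, 1, 1, 1], [-1, 1])  # -> [0, 0, 1, 0, 0]
```

A full CNN stacks many such feature detectors per layer (multiple kernels over the same inputs), which is the "multiple neurons with the same inputs detecting different features" point.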

I really wish I was better at actually implementing/working out the details of my ideas. That part is really hard.

Comment author: HopefullyCreative 13 August 2014 07:21:50AM 3 points [-]

I had to laugh at your conclusion. The implementation is the most enjoyable part. "How can I dumb this amazing idea down to the most basic understandable levels so it can be applied?" Sometimes you come up with a solution, only to have a feverish fit of maddening genius weeks later and find a BETTER solution.

In my first foray into robotics I needed to write a radio positioning program/system for the little guys so they would all know where they were - not globally, but relative to each other and the work site. I was completely unable to find the math simply spelled out online, and I'll admit that at this point in my life I was a former Marine who was not quite up to college-level math. After banging my head against the table for hours I came up with an initial solution that found a position accounting for all three dimensions (allowing the target object to be in any position relative to the stationary receivers). Eventually I arrived at an even better solution, which also suggested new ideas for the robot's antenna design and therefore tweaked the solution even more.

That was some of the most fun I have ever had...

Comment author: Luke_A_Somers 12 August 2014 03:08:05PM 2 points [-]

I did the Taylor series thing too, though with s/moral/natural/

Comment author: Ander 11 August 2014 07:27:08PM *  9 points [-]

I came up with the idea of a Basic Income by myself, by chaining together some ideas:

  • Capitalism is the most efficient economic system for fulfilling the needs of people, provided they have money.

  • The problem is that if lots of people have no money, and no way to get money (or no way to get it without terrible costs to themselves), then the system does not fulfill their needs.

  • In the future, automation will increase economic capacity while also raising the barrier to having a 'valuable skill' that allows you to get money. Society will have an improved capacity to fulfill the needs of people with money, yet the barrier to acquiring useful skills and earning money will rise. This leads to a scenario where society could easily produce the items everyone needs, yet does not, because many of those people have no money to pay for them.

  • If X% of the benefits accrued from ownership of the capital were taken and redistributed evenly among all humans, then the problem is averted. Average people still have some source of money with which they can purchase the fulfillment of their needs, which are pretty easy to supply in this advanced future society.

  • X=100%, as in a strict socialism, is not correct, as then we get the economic failures we saw in the socialist experiments of the past century.

  • X = 0%, as in a strict libertarianism, is not correct, as then everyone whose skills are automated starves.

  • At X = some reasonable number, capitalism still functions correctly (it works today with our current tax rates), and hopefully in our economically progressed future society it provides everyone sufficient money to supply basic needs.

Eventually I found out that my idea was pretty much a Basic Income system.
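The role of the parameter X in the chain of reasoning above can be sketched with a toy calculation (the function and numbers below are purely illustrative, not part of the original comment):

```python
def incomes_after_redistribution(market_incomes, x):
    # Toy model: a fraction x of all market income is pooled and shared
    # equally as a basic income; earners keep the remaining (1 - x).
    # x = 1.0 is strict socialism, x = 0.0 is strict libertarianism.
    pool = x * sum(market_incomes)
    dividend = pool / len(market_incomes)
    return [(1 - x) * m + dividend for m in market_incomes]

# Two people with no marketable skills, two earners, X = 25%:
# [0, 0, 100, 300] -> [25.0, 25.0, 100.0, 250.0]
```

At any X strictly between 0 and 1, everyone has some income (so the system can serve their needs), while earners still keep most of the marginal return on their skills.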

Comment author: Metus 11 August 2014 03:22:54PM 7 points [-]

This is not a direct answer: every time I come up with an idea in a field I am not very deeply involved in, sooner or later I realise that the phenomenon is either trivial, a misperception, or very well studied. Most recently this happened with pecuniary externalities.

Comment author: James_Miller 11 August 2014 05:40:57PM 6 points [-]

For as long as I can remember, I had the idea of a computer upgrading its own intelligence and getting powerful enough to make the world a utopia.

Comment author: sediment 12 August 2014 11:08:13AM 5 points [-]

Oh, another thing: I remember thinking that it didn't make sense to favour either the many-worlds interpretation or the Copenhagen interpretation, because no empirical fact we could collect could point towards one or the other, stuck as we are in just one universe and unable to observe any others. Whichever one was true, it couldn't possibly impact one's life in any way, so the question should be discarded as meaningless - even to the extent that it didn't really make sense to talk about which one is true.

This seems like a basically positivist or postpositivist take on the topic, with shades of Occam's Razor. I was perhaps around twelve. (For the record, I haven't read the quantum mechanics sequence and this remains my default position to this day.)

Comment author: Username 11 August 2014 09:34:48PM 5 points [-]

Derivatives. I imagined tangent lines traveling along a function curve and thought, 'I wonder what it looks like when we measure that?' So I would try to visualize the changing slopes of the tangent lines at the same time. I also remember wondering how to reverse it. Obviously I didn't get farther than that, but I remember being very surprised when I took calculus and realized that the mind game I had been playing was hugely important and widespread, and could in fact be calculated.
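The mind game of measuring the changing tangent slopes corresponds to numerical differentiation - a minimal illustrative sketch (not from the comment):

```python
def slope(f, x, h=1e-6):
    # Slope of the tangent line at x, measured as the limit of a chord
    # over a shrinking interval (central difference).
    return (f(x + h) - f(x - h)) / (2 * h)

# For f(x) = x^2, the tangent slope at x = 3 is (approximately) 6.
```

Tracking slope(f, x) as x travels along the curve is exactly "visualizing the changing slopes at the same time" - it traces out the derivative function.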

Comment author: bramflakes 11 August 2014 03:44:31PM 5 points [-]

At school my explanation for the existence of bullies was that it was (what I would later discover was called) a Nash equilibrium.

Comment author: HopefullyCreative 13 August 2014 07:07:05AM *  4 points [-]

I had drawn up some rather detailed ideas for an atomic-powered future. The idea was to solve two major problems: the first was the inherent risk of an overpressure causing such a power plant to explode; the second was the looming water shortage facing many nations.

The idea was a power plant that used internal Stirling-engine technology so as to operate at atmospheric pressure. Reinforcing this idea was basically a design for the reactor to "entomb" itself if it reached temperatures high enough to melt its shell. The top of the Stirling engine would have a salt water reservoir that would be boiled off. The water would then be collected and directed through a piping system to a reservoir. The plant would thus produce both electricity AND fresh water.

Of course, while researching thorium power technology in school I discovered that the South Korean SMART micro reactor does in fact desalinate water. On one level I was depressed that my idea was not "original"; however, overall I'm excited that I came up with an idea that apparently had enough merit for people to actually go through and make a finished design based upon it. The fact that my idea had merit at all gives me hope for my future as an engineer.

Comment author: Unnamed 12 August 2014 06:19:34AM 4 points [-]

I'm another independent discoverer of something like utilitarianism, I think when I was in elementary school. My earliest written record of it is from when I was 15, when I wrote: "Long ago (when I was 8?), I said that the purpose of life was to enjoy yourself & to help others enjoy themselves - now & in the future."

In high school I did a fair amount of thinking (with relatively little direct outside influence) about Goodhart's law, social dilemmas, and indirect utilitarianism. My journal from then includes versions of ideas like the "one thought too many" argument, decision procedures vs. criteria for good, tradeoffs between following an imperfect system and creating exceptions to do better in a particular case, and expected value reasoning about small probabilities of large effects (e.g. voting).

On religion, I thought of the problem of evil (perhaps with outside influence on that one) and the Euthyphro argument against divine command theory.

16-year-old me also came up with various ideas related to rationality / heuristics & biases, like sunk costs ("Once you’re in a place, it doesn’t matter how you got there (except in mind - BIG exception)"), selection effects ("Reason for coincidence, etc. in stories - interesting stories get told, again & again"), and the importance of epistemic rationality ("Greatest human power - to change ones mind").

Comment author: niceguyanon 11 August 2014 07:59:32PM *  4 points [-]

In 6th or 7th grade I told my class that it was obvious that purchasing expensive sneakers is mostly just a way to show how cool you are, or that you can afford something that not everyone else could. Many years later I would read about signalling: http://en.wikipedia.org/wiki/Signalling_(economics)

The following are not ideas so much as questions I had while growing up, and I was surprised/relieved/happy to find out that people much smarter than me had spent a lot of time thinking about each one, and that each is "a thing". For example, I really wanted to know if there was a satisfactory way to figure out whether Christianity was the one true religion, and it bothered me very much that I could not answer that question. Also, I was concerned that the future might not be what I want it to be, and that I'm not sure I even know what I want. It turns out that this isn't a unique problem and there are many people thinking about it. Also, what the heck is consciousness? Is there one correct moral theory? Well, someone is working on it.

Comment author: [deleted] 11 August 2014 03:36:15PM 4 points [-]

I've found that the ideas that affect me most fall into two major categories: either they are ideas that hit me completely unprepared, or they are ideas that I knew all along but had not formalized. Many-worlds and timelessness were the former for me. Utilitarianism and luminosity were the latter.

Comment author: polymathwannabe 11 August 2014 06:51:36PM 9 points [-]

Once a Christian friend asked me why I cared so much about what he believed. Without thinking, I came up with, "What you think determines what you choose. If your idea of the world is inaccurate, your choices will fail."

This was years before I found LW and learned about the connection between epistemic and instrumental rationality.

P.S. My friend deconverted himself some years afterwards.

Comment author: TylerJay 12 August 2014 05:19:39PM *  3 points [-]

After learning the very basics of natural selection, I started thinking about goal systems, reward circuits, and ethics. I thought that all of our adaptations were intended to let us meet our survival needs so we could pass on our genes. But what should people do once survival needs are met? What's the next right and proper goal to pursue? Googling along that line of reasoning led me to Eliezer's Levels of Intelligence paper, which in turn led me to Less Wrong.

Reading through the sequences, I found so many of the questions that I'd thought about in vague philosophical terms explained and analyzed rigorously, like personal identity vs continuity of subjective experience under things like teleportation. Part of the reason LW appealed to me so much back then is, I suspect, that I had already thought about so many of the same questions but just wasn't able to frame them correctly.

Comment author: RomeoStevens 12 August 2014 04:02:17AM 3 points [-]

This made me curious enough to skim through my childhood writing. Convergent and divergent infinite series, quicksort, public choice theory, pulling the rope sideways, normative vs positive statements, curiosity stoppers, the overton window.

My Moloch moment is what led me to seek out Overcomingbias.

Comment author: wadavis 11 August 2014 08:36:45PM 3 points [-]

Tangent thread: What sophisticated idea are you holding on to that you are sure has been formalized somewhere but haven't been able to find?

I'll go first. When called to explain and defend my ethics, I explained that I believe in "Karma - NO, not that BS mysticism Karma, but the plain old actions-have-consequences-in-our-very-connected-world kind of Karma." If you treat people with honesty and integrity in all things, you will create a community of cooperation. The world is strongly interconnected and strongly adaptable, so the benefits will continue outside your normal community, even if you frequently change communities. The lynchpin assumption of these beliefs is that if I create One Unit of Happiness for others, it will self-propagate, grow, and reflect, returning me more than One Unit of Happiness over the course of my lifetime. The same applies to One Unit of Misery.

I've only briefly studied ethics and philosophy - can someone better-read point me to the above in a formal context?

Comment author: iarwain1 13 August 2014 03:19:45PM *  3 points [-]

This seems like a good place to ask about something that I'm intensely curious about but haven't yet seen discussed formally. I've wanted to ask about it before, but I figured it's probably an obvious and well-discussed subject that I just haven't gotten to yet. (I only know the very basics of Bayesian thinking, I haven't read more than about 1/5 of the sequences so far, and I don't yet know calculus or advanced math of any type. So there are an awful lot of well-discussed LW-type subjects that I haven't gotten to yet.)

I've long conceived of Bayesian belief statements in the following (somewhat fuzzily conceived) way: Imagine a graph where the x-axis represents our probability estimate for a given statement being true and the y-axis represents our certainty that our probability estimate is correct. So if, for example, we estimate a probability of .6 for a given statement to be true but we're only mildly certain of that estimate, then our belief graph would probably look like a shallow bell curve centered on the .6 mark of the x-axis. If we were much more certain of our estimate then the bell curve would be much steeper.

I usually think of the height of the curve at any given point as representing how likely I think it is that I'll discover evidence that will change my belief. So for a low bell curve centered on .6, I think of that as meaning that I'd currently assign the belief a probability of around .6 but I also consider it likely that I'll discover evidence (if I look for it) that can change my opinion significantly in any direction.

I've found this way of thinking to be quite useful. Is this a well-known concept? What is it called and where can I find out more about it? Or is there something wrong with it?
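The graph described - a curve over probability estimates, shallow when unsure and steep when sure - can be sketched with a beta density. This is an illustrative example added for concreteness; the particular parameter values are assumptions, not anything from the comment:

```python
import math

def beta_pdf(p, a, b):
    # Density over probability estimates p in (0, 1): the "belief graph"
    # described above. Larger a + b means a steeper, more confident curve;
    # the peak sits near the favoured probability estimate.
    coeff = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coeff * p ** (a - 1) * (1 - p) ** (b - 1)

# Both curves peak at 0.6, but Beta(31, 21) is the steep (confident) one
# and Beta(4, 3) is the shallow (mildly certain) one.
```

So "a shallow bell curve centered on .6" and "a much steeper one" correspond to two beta densities with the same mode but different concentration.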

Comment author: Lumifer 13 August 2014 03:45:11PM 3 points [-]

Imagine a graph where the x-axis represents our probability estimate for a given statement being true and the y-axis represents our certainty that our probability estimate is correct. So if, for example, we estimate a probability of .6 for a given statement to be true but we're only mildly certain of that estimate, then our belief graph would probably look like a shallow bell curve

I don't understand where the bell curve is coming from. If you have one probability estimate for a given statement with some certainty about it, you would depict it as a single point on your graph.

The bell curves in this context usually represent probability distributions. The width of the distribution reflects your uncertainty: if you're certain, the distribution is narrow and looks like a spike at the estimated value; if you're uncertain, the distribution is flatter. A probability distribution has to integrate to 1, so the narrower the distribution, the higher the spike.

How likely you are to discover new evidence is neither here nor there. Even if you are very uncertain of your estimate, this does not convert into the probability of finding new evidence.

Comment author: iarwain1 13 August 2014 04:17:34PM *  1 point [-]

I think you're referring to the type of statement that can have many values. Something like "how long will it take for AGI to be developed?". My impression (correct me if I'm wrong) is that this is what's normally graphed with a probability distribution. Each possible value is assigned a probability, and the result is usually more or less a bell curve with the width of the curve representing your certainty.

I'm referring to a very basic T/F statement. On a normal probability distribution graph that would indeed be represented as a single point - the probability you'd assign to it being true. But we're often not so confident in our assessment of the probability we've assigned, and that confidence is what I was trying to represent with the y-axis.

An example might be, "will AGI be developed within 30 years"? There's no range of values here, so on a normal probability distribution graph you'd simply assign a probability and that's it. But there's a very big difference between saying "I really have not the slightest clue, but if I really must assign it a probability than I'd give it maybe 50%" vs. "I've researched the subject for years and I'm confident in my assessment that there's a 50% probability".

In my scheme, what I'm really discussing is the probability distribution of probability estimates for a given statement. So for the 30-year AGI question, what's the probability that you'd consider a 10% probability estimate to be reasonable? What about a 90% estimate? The probability that you'd assign to each probability estimate is depicted as a single point on the graph and the result is usually more or less a bell curve.

How likely you are to discover new evidence is neither here nor there. Even if you are very uncertain of your estimate, this does not convert into the probability of finding new evidence.

You're probably correct about this. But I've found the concept of the kind of graph I've been describing to be intuitively useful, and saying that it represents the probability of finding new evidence was just my attempt at understanding what such a graph would actually mean.

Comment author: Azathoth123 14 August 2014 03:54:28AM 4 points [-]

I'm referring to a very basic T/F statement. On a normal probability distribution graph that would indeed be represented as a single point - the probability you'd assign to it being true. But we're often not so confident in our assessment of the probability we've assigned, and that confidence is what I was trying to represent with the y-axis.

Taken literally, the concept of "confidence in a probability" is incoherent. You are probably confusing it with one of several related concepts. Lumifer has described one example of such a concept.

Another concept is how much you think your probability estimate will change as you encounter new evidence. For example, your estimate for whether the outcome of the coin flip for the 2050 Superbowl will be heads is 1/2, and you are unlikely to encounter evidence that changes it (until 2050 that is). On the other hand, your estimate for the probability AI being developed by 2050 is likely to change a lot as you encounter more evidence.
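This difference can be made concrete with beta distributions (an illustrative sketch; the parameter choices are assumptions): a tightly concentrated prior barely moves on one new observation, while a flat one moves a lot.

```python
def beta_mean(a, b):
    # Point estimate implied by a Beta(a, b) distribution over a probability.
    return a / (a + b)

def shift_after_one_success(a, b):
    # How far the estimate moves after a single confirming observation
    # (conjugate Beta-Bernoulli update: a -> a + 1).
    return beta_mean(a + 1, b) - beta_mean(a, b)

# Coin flip, Beta(500, 500): estimate 0.5, moves by ~0.0005 per observation.
# AI-by-2050, Beta(1, 1): estimate 0.5, moves by ~0.167 per observation.
```

Both priors give the same 1/2 estimate, but they encode very different expectations about how much new evidence will move that estimate - which is the distinction drawn above.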

Comment author: VAuroch 14 August 2014 07:26:31AM 1 point [-]

I don't know, I think the existence of the 2050 Superbowl is significantly less than 100% likely.

Comment author: NancyLebovitz 14 August 2014 10:33:34AM 0 points [-]

What's your line of thought?

Comment author: VAuroch 14 August 2014 09:32:11PM 1 point [-]

It wouldn't be the first time a sport has gone from vastly popular to mostly forgotten within 40 years. Jai alai was the particular example I had in mind; it was once incredibly popular, but quickly descended to the point where it's basically entirely forgotten.

Comment author: iarwain1 14 August 2014 02:10:37PM *  0 points [-]

Taken literally, the concept of "confidence in a probability" is incoherent.

Why? I thought the way Lumifer expressed it in terms of Bayesian hierarchical models was pretty coherent. It might be turtles all the way down as he says, and it might be hard to use it in a rigorous mathematical way, but at least it's coherent. (And useful, in my experience.)

Another concept is how much you think your probability estimate will change as you encounter new evidence.

This is pretty much what I meant in my original post by writing:

I usually think of the height of the curve at any given point as representing how likely I think it is that I'll discover evidence that will change my belief. So for a low bell curve centered on .6, I think of that as meaning that I'd currently assign the belief a probability of around .6 but I also consider it likely that I'll discover evidence (if I look for it) that can change my opinion significantly in any direction.

But expressing it in terms of how likely my beliefs are to change given more evidence is probably better. Or to say it in yet another way: how strong new evidence would need to be for me to change my estimate.

It seems like the scheme I've been proposing here is not a common one. So how do people usually express the obvious difference between a probability estimate of 50% for a coin flip (unlikely to change with more evidence) vs. a probability estimate of 50% for AI being developed by 2050 (very likely to change with more evidence)?

Comment author: Lumifer 13 August 2014 04:36:13PM *  4 points [-]

In my scheme, what I'm really discussing is the probability distribution of probability estimates for a given statement.

OK, let's rephrase it in the terms of Bayesian hierarchical models. You have a model of event X happening in the future which says that the probability of that event is Y%. Y is a parameter of your model. What you are doing is giving a probability distribution for a parameter of your model (in the general case this distribution can be conditional, which makes it a meta-model, so hierarchical). That's fine, you can do this. In this context the width of the distribution reflects how precise your estimate of the lower-level model parameter is.

The only thing is that for unique events ("will AGI be developed within 30 years") your hierarchical model is not falsifiable. You will get a single realization (the event will either happen or it will not), but you will never get information on the "true" value of your model parameter Y. You will get a single update of your prior to a posterior and that's it.

Is that what you have in mind?

Comment author: iarwain1 13 August 2014 05:08:48PM *  1 point [-]

I think that is what I had in mind, but it sounds from the way you're saying it that this hasn't been discussed as a specific technique for visualizing belief probabilities.

That surprises me since I've found it to be very useful, at least for intuitively getting a handle on my confidence in my own beliefs. When dealing with the question of what probability to assign to belief X, I don't just give it a single probability estimate, and I don't even give it a probability estimate with the qualifier that my confidence in that probability is low/moderate/high. Rather I visualize a graph with (usually) a bell curve peaking at the probability estimate I'd assign and whose width represents my certainty in that estimate. To me that's a lot more nuanced than just saying "50% with low confidence". It has also helped me to communicate to others what my views are for a given belief. I'd also suspect that you can do a lot of interesting things by mathematically manipulating and combining such graphs.

Comment author: Lumifer 13 August 2014 05:19:00PM *  1 point [-]

One problem is that it's turtles all the way down.

What's your confidence in your confidence probability estimate? You can represent that as another probability distribution (or another model, or a set of models). Rinse and repeat.

Another problem is that it's hard to get reasonable estimates for all the curves that you want to mathematically manipulate. Of course you can wave hands and say that a particular curve exactly represents your beliefs and no one can say it ain't so, but fake precision isn't exactly useful.

Comment author: Anders_H 13 August 2014 04:48:13PM *  0 points [-]

I believe you may be confusing the "map of the map" for the "map".

If I understand correctly, you want to represent your beliefs about a simple yes/no statement. If that is correct, the appropriate distribution for your prior is Bernoulli. For a Bernoulli distribution, the X-axis has only two possible values: True or False. The Bernoulli distribution will be your "map". It is fully described by the parameter "p".

If you want to represent your uncertainty about your uncertainty, you can place a hyperprior on p. This is your "map of the map". Generally, people will use a beta distribution for this (rather than a bell-shaped normal distribution). With such a hyperprior, p is on the X-axis and ranges from 0 to 1.

I am slightly confused about this part, but it is not clear to me that we gain much from having a "map of the map" in this situation, because no matter how uncertain you are about your beliefs, the hyperprior will imply a single expected value for p.
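That last point can be checked numerically - an illustrative sketch (parameter values are assumptions): two beta hyperpriors with very different spreads still imply the same single expected value for p.

```python
def beta_mean(a, b):
    # Expected value of p under a Beta(a, b) hyperprior.
    return a / (a + b)

def beta_var(a, b):
    # Variance of p: how uncertain you are about your uncertainty.
    return a * b / ((a + b) ** 2 * (a + b + 1))

# Beta(1, 1) (flat, maximally unsure) and Beta(50, 50) (tightly peaked)
# have very different variances, yet both give E[p] = 0.5.
```

So for a single yes/no decision, both hyperpriors cash out to the same betting probability; the extra structure only matters for how the estimate responds to evidence.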

Comment author: [deleted] 12 August 2014 03:47:43AM *  1 point [-]

What sophisticated idea are you holding on to that you are sure has been formalized somewhere but haven't been able to find?

The influence of the British Empire on progressivism.

There was that book that talked about how North Korea got its methods from the Japanese occupation, and as soon as I saw that, I thought, "well, didn't something similar happen here?" A while after that, I started reading Imagined Communities, got to the part where Anderson talks about Macaulay, looked him up, and went, "aha, I knew it!" But as far as I know, no one's looked at it.

Also, I think I stole "culture is an engineering problem" from a Front Porch Republic article, but I haven't been able to find the article, or anyone else writing rigorously about anything closer in ideaspace to that than dynamic geography, except the few people who approach something similar from an HBD or environmental determinism angle.

Comment author: buybuydandavis 11 August 2014 09:15:53PM 1 point [-]

I believe Rational Self Interest types make similar arguments, though I can't recall anyone breaking it down to marginal gains in utility.

Comment author: lmm 11 August 2014 07:02:35PM 3 points [-]

I figured out utilitarianism aged ~10 or so.

I had some thoughts about the "power" of mathematical proof techniques that I now recognize as pointing towards Turing completeness.

Comment author: CellBioGuy 12 August 2014 05:09:25AM 6 points [-]

Came up with the RNA-world hypothesis on my own when reading about the structure and function of ribosomes in middle school.

Decided long ago that there was a conflict between the age of the universe and the existence of continual improvements in space travel, meaning that beings such as ourselves would never be able to reach self-replicating interstellar travel. Never came to the conclusion that it meant extinction at all, and am still quite confused by people who assume it's interstellar metastasis or bust.

Comment author: moridinamael 15 August 2014 07:25:38PM 2 points [-]

Well, this isn't quite what you were asking for, but, as a young teenager a few days after 9/11, I was struck with a clear thought that went something like: "The American people are being whipped into a blood frenzy, and we are going to massively retaliate against somebody, perpetuating the endless cycle of violence that created the environment which enabled this attack to occur in the first place."

But I think it's actually common for young people to be better at realpolitik and we get worse at it as we absorb the mores of our culture.

Comment author: 2ZctE 15 August 2014 03:59:17AM *  2 points [-]

In middle school I heard a fan theory that Neo had powers over the real world because it was a second layer of the matrix-- the idea of simulations inside simulations was enough for me to come to Bostrom's simulation argument.

Also during the same years I ended up doing an over the top version of comfort zone expansion by being really silly publicly.

In high school I think I basically argued a crude version of compatibilism before learning the term, although my memory of the conversation is a bit vague.

Comment author: Gvaerg 13 August 2014 03:20:10PM 2 points [-]
  1. This happened when I was 12 years old. I was trying to solve a problem at a mathematical contest which involved proving some identity with the nth powers of 5 and 7. I recall thinking vaguely "if you go to n+1 what is added in the left hand side is also in the right hand side" and so I discovered mathematical induction. In ten minutes I had a rigorous proof. Though, I didn't find it so convincing, so I ended with an unsure-of-myself comment "Hence, it is also valid for 3, 4, 5, 6 and so on..."

  2. When I was in high school, creationism seemed unsatisfying in the sense of a Deus Ex Machina narrative (I often wonder how theists reconcile the contradiction between the feeling of religious wonder and the feeling of disappointment when facing Deus Ex Machina endings). The evolution "story" fascinated me with its slow and semi-random progression over billions of years. I guess this was my first taste of reductionism. (This is also an example of how optimizing for interestingness instead of truth has led me to the correct answer.)

Comment author: Alicorn 11 August 2014 09:29:39PM 2 points [-]

I independently conceived of determinism and a vague sort of compatibilism when I was twelveish.

Comment author: ahbwramc 11 August 2014 11:03:32PM 2 points [-]

I remember being inordinately relieved/happy/satisfied when I first read about determinism around 14 or 15 (in Sophie's World, fwiw). It was like, thank you, that's what I've been trying to articulate all these years!

(although they casually dismissed it as a philosophy in the book, which annoyed 14-or-15-year-old me)

Comment author: sediment 12 August 2014 10:55:17AM 1 point [-]

Good one! I think I also figured out a vague sort of compatibilism about that time.

Comment author: [deleted] 12 August 2014 03:42:22AM 2 points [-]

Cartesian skepticism and egoism, when I was maybe eleven. I eventually managed to argue myself out of both -- Cartesian skepticism fell immediately, but egoism took a few years.

(In case it isn't obvious from that, I did not have a very good childhood.)

I remember coming close to rediscovering pseudoformalism and the American caste system, but I discovered those concepts before I got all the way there.

Comment author: Curiouskid 21 August 2014 03:22:15AM 1 point [-]

When I was first learning about neural networks, I came up with the idea of de-convolutional networks: http://www.matthewzeiler.com/

Also, I think this is not totally uncommon. I think this suggests that there is low-hanging fruit in crowd-sourcing ideas from non-experts.

Another related thing that happens is that I'll be reading a book, and I'll have a question/thought that gets talked about later in the book.

Comment author: Dahlen 14 August 2014 04:04:37PM 1 point [-]

I rediscovered most of the more widely agreed upon ontological categories (minus one that I still don't believe adheres to the definition) before I knew they were called that, at about the age of 17. The idea of researching them came to me after reading a question from some stupid personality quiz they gave us in high school, something like "If you were a color, which color would you be?" -- and something about it rubbed me the wrong way, it just felt ontologically wrong, conflating entities with properties like that. (Yes, I did get the intended meaning of the question, I wasn't that much of an Aspie even back then, but I could also see it in the other, more literal way.)

I remember it was in the same afternoon that I also split up the verb "to be" into its constituent meanings, and named them. It seemed related.

Comment author: iarwain1 13 August 2014 02:49:26PM 1 point [-]

Maybe these aren't so sophisticated, but I figured out determinism + a form of compatibilism, and the hard problem of consciousness in 10th grade.

Comment author: Luke_A_Somers 12 August 2014 03:06:20PM *  1 point [-]

In second or third grade, I noticed that (n+1) * (n+1) = (n * n) + n + (n+1).
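(The identity is just (n+1)^2 = n^2 + 2n + 1 regrouped; a quick mechanical check, purely for illustration:)

```python
# Verify (n+1)*(n+1) == n*n + n + (n+1) for a range of n.
for n in range(100):
    assert (n + 1) * (n + 1) == n * n + n + (n + 1)
print("identity holds for n = 0..99")
```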

Comment author: ShardPhoenix 12 August 2014 09:41:21AM *  1 point [-]

I came up with a basic version of Tegmark's level 4 multiverse in high school and wrote an essay about it in English class. By that time though I think I'd already read Permutation City which involves similar ideas.

Comment author: sediment 11 August 2014 05:21:58PM 1 point [-]

I think I was a de facto utilitarian from a very young age; perhaps eight or so.

Comment author: VAuroch 13 August 2014 06:33:07AM -1 points [-]

I independently constructed algebra (of the '3*x+7=49. Solve for x.' variety) while being given 'guess and check' word problems in second grade. That's a slightly different variety than most of the other examples here, though.

Comment author: Metus 11 August 2014 11:24:02AM *  8 points [-]

In the last open thread Lumifer linked to a list by the American Statistical Association with points that need to be understood to be considered statistically literate. In the same open thread in another comment sixes_and_sevens asked for statements we know are true but the average lay person gets wrong. In response he mainly got examples from the natural sciences and mathematics. Which makes me wonder, can we make a general test of education in all of these fields of knowledge that can be automatically graded? This test would serve as a benchmark for traditional educational methods and for autodidacts checking themselves.

I imagine having simple calculations for some things and multiple-choice tests for other scenarios where intuition suffices.

Edit: Please don't just upvote, try to point to similar ideas in your respective field or critique the idea.

Comment author: ChristianKl 11 August 2014 12:00:22PM 3 points [-]

It seems to me like something that can be solved by a community driven website where users can vote on questions.

Comment author: whales 11 August 2014 06:20:48PM *  2 points [-]

There are concept inventories in a lot of fields, but these vary in quality and usefulness. The most well-known of these is the Force Concept Inventory for first semester mechanics, which basically aims to test how Aristotelian/Newtonian a student's thinking is. Any physicist can point out a dozen problems with it, but it seems to very roughly measure what it claims to measure.

Russ Roberts (host of the podcast EconTalk) likes to talk about the "economic way of thinking" and has written and gathered links about ten key ideas like incentives, markets, externalities, etc. But he's relatively libertarian, so the ideas he chose and his exposition will probably not provide a very complete picture. Anyway, EconTalk has started asking discussion questions after each podcast, some of which aim to test basic understanding along these lines.

Comment author: sixes_and_sevens 12 August 2014 01:52:22PM 1 point [-]

I've often considered a self-assessment system where the sitter is prompted with a series of terms from the topic at hand, and asked to rate their understanding on a scale of 0-5, with 0 being "I've never heard of this concept", and 5 being "I could build one of these myself from scratch".

The terms are provided in a random order, and include red-herring terms that have nothing to do with the topic at hand, but sound plausible. Whoever provides the dictionary of terms should have some idea of the relative difficulty of each term, but you could refine it further and calibrate it against a sample of known diverse users, (novices, high-schoolers, undergrads, etc.)

When someone sits the test, you report their overall score relative to your calibrated sitters ("You scored 76, which puts you at undergrad level"), but you also report something like the Spearman rank coefficient of their answers against the difficulty of the terms. This provides a consistency check for their answers. If they frequently claim greater understanding of advanced concepts than basic ones, their understanding of the topic is almost certainly off-kilter (or they're lying). The presence of red-herring terms (which should all have canonical score of 0) means the rank coefficient consistency check is still meaningful for domain experts or people hitting the same value for every term.

Actually, this seems like a very good learning-a-new-web-framework dev project. I might give this a go.
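A minimal sketch of the rank-coefficient consistency check, with entirely made-up term difficulties and self-ratings (the red-herring terms, with their canonical score of 0, would get a separate check), using nothing beyond the standard library:

```python
def average_ranks(values):
    """1-based ranks; tied values share the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(xs), average_ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

difficulty  = [1, 2, 3, 4, 5]  # canonical difficulty of five real terms
sane_sitter = [5, 4, 3, 2, 1]  # claims less understanding as terms get harder
odd_sitter  = [1, 2, 3, 4, 5]  # claims *more* understanding of advanced terms

print(spearman(difficulty, sane_sitter))  # ≈ -1.0: consistent with difficulty
print(spearman(difficulty, odd_sitter))   # ≈ +1.0: off-kilter (or lying)
```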

Comment author: somnicule 13 August 2014 11:10:00PM 2 points [-]

Look up Bayesian Truth Serum, not exactly what you're talking about but a generalized way to elicit subjective data. Not certain on its viability for individual rankings, though.

Comment author: sixes_and_sevens 14 August 2014 09:12:01AM 1 point [-]

This is all sorts of useful. Thanks.

Comment author: Luke_A_Somers 12 August 2014 02:57:59PM *  2 points [-]

One problem that could crop up if you're not careful is a control term that's actually used in an educational source you didn't consider - a class, say, or a nonstandard textbook. I have a non-Euclidean geometry book that uses names for Euclidean geometry features that I certainly never encountered in geometry class. If those terms had been placed as controls, I would provide a non-zero rating for them.

Comment author: NancyLebovitz 12 August 2014 03:25:08PM 0 points [-]

Who's going to do the rather substantial amount of work needed to put the system together?

Comment author: sixes_and_sevens 12 August 2014 04:59:06PM 2 points [-]

Do you mean to build the system or to populate it with content? The former would be "me, unless I get bored or run out of time and impetus", and the latter is "whichever domain experts I can convince to list and rank terms from their discipline".

Comment author: NancyLebovitz 12 August 2014 07:57:46PM 1 point [-]

I was thinking about the work involved in populating it.

Comment author: [deleted] 16 August 2014 01:46:04AM 6 points [-]

What are some good paths toward good jobs, other than App Academy?

Comment author: beoShaffer 18 August 2014 10:57:17PM 1 point [-]

See Mr. Money Mustache's 50 Jobs over $50,000 without a degree and SSC's Floor Employment for a number of suggestions.

Comment author: shminux 16 August 2014 06:15:40PM 0 points [-]

I assume you don't consider going to a good school as a good path?

Comment author: [deleted] 17 August 2014 01:09:59AM 4 points [-]

It's difficult for people who aren't in exactly the right place -- and I think people like that would be less likely to be around here.

Certainly not likely for me; I'm already out of college, and I went to a no-name local school. (Didn't even occur to me to apply up.)

Comment author: Vaniver 15 August 2014 08:42:05PM *  5 points [-]

I've just finished the first draft of a series of posts on control theory, the book Behavior: The Control of Perception, and some commentary on its relevance to AI design. I'm looking for people willing to read the second draft next week and provide comments. Send me a PM or an email (I use the same username at gmail) if you're interested.

In particular, I'm looking for:

  • People with no engineering background.
  • People with tech backgrounds but no experience with control theory.
  • People with experience as controls engineers.

(Yes, that is basically a complete grouping of people. But somehow people are more likely to think you're looking for them if you specifically say you're looking for them, and I think I can learn different useful things about the post from people in those groups.)

Comment author: Lumifer 12 August 2014 02:56:06PM 5 points [-]

The Unicorn Fallacy (warning, relates to politics)

Is there an existing name for that one? It's similar to the nirvana fallacy but looks sufficiently different to me...

Comment author: shminux 12 August 2014 03:58:36PM 5 points [-]

I am not aware of an existing one, although it is related to Moloch, as described in SSC when applied to the state:

although from a god’s-eye-view everyone knows that eliminating corporate welfare is the best solution, each individual official’s personal incentives push her to maintain it.

What Munger describes as The State, SSC calls Moloch. What your link calls the Munger test, may as well be called the Moloch test:

The Munger test:

In debates, I have found that it is useful to describe this problem as the "unicorn problem," precisely because it exposes a fatal weakness in the argument for statism. If you want to advocate the use of unicorns as motors for public transit, it is important that unicorns actually exist, rather than only existing in your imagination. People immediately understand why relying on imaginary creatures would be a problem in practical mass transit. But they may not immediately see why "the State" that they can imagine is a unicorn. So, to help them, I propose what I (immodestly) call "the Munger test."

Go ahead, make your argument for what you want the State to do, and what you want the State to be in charge of. Then, go back and look at your statement. Everywhere you said "the State" delete that phrase and replace it with "politicians I actually know, running in electoral systems with voters and interest groups that actually exist."

If you still believe your statement, then we have something to talk about.

Comment author: Lumifer 12 August 2014 04:08:37PM 1 point [-]

What Munger describes as The State, SSC calls Moloch

I don't know about that. I understand Moloch as a considerably wider and larger system than just a State.

Comment author: shminux 12 August 2014 05:19:50PM 0 points [-]

Probably. I think Moloch is a metaphor for the actual, uncaring and often hostile universe, as contrasted with an imagined should-universe (the unicorn).

Comment author: jaime2000 14 August 2014 05:30:05PM 5 points [-]

I think Moloch is a metaphor for the actual, uncaring and often hostile universe, as contrasted with an imagined should-universe

No, that's Gnon (Nature Or Nature's God). Moloch is the choice between sacrificing a value to remain competitive against others who have also sacrificed that value, or else to stop existing because you are not competitive. The name comes from an ancient god people would sacrifice their children to.

Comment author: Lumifer 12 August 2014 06:03:17PM 3 points [-]

I think Moloch is a metaphor for the actual, uncaring and often hostile universe

Well, not THAT wide :-)

My thinking about Moloch is still too fuzzy for good definitions, but I'm inclined to treat it as emergent system behavior which, according to Finagle's Law, is usually not what you want. Often enough it's not what you expect, either, even if you designed (or tinkered with) the system.

The unicorn is also narrower than the whole should-universe -- specifically it's some agent or entity with highly unlikely benevolent properties and the proposal under discussion is entirely reliant on these properties in order to work.

Comment author: Azathoth123 13 August 2014 05:15:51AM 1 point [-]

My thinking about Moloch is still too fuzzy for good definitions

Moloch is based on the neo-reactionaries' Gnon. Notice how Nyan deals with the fuzziness by dividing Gnon into four components, each of which can be analyzed individually. Apparently Yvain's brain went into "basilisk shock" upon exposure to the content, which is why his description is so fuzzy.

Comment author: Nornagest 12 August 2014 05:27:27PM 2 points [-]

I've been thinking of Moloch as the God of the Perverse Incentives, which doesn't quite cover it (it has the right shape, but strictly speaking a perverse incentive needs to be perverse relative to some incentive-setting agent, which the universe lacks) but has the advantage of fitting the meter of a certain Kipling poem.

Comment author: Dagon 12 August 2014 08:03:26PM 3 points [-]

This is pretty close to my definition, but I'd simplify it to "Moloch is incentives". Perverse or not, Moloch is the god that gives you near-mode benefits unrelated to your far-mode values.

Comment author: advancedatheist 11 August 2014 03:38:35PM *  5 points [-]

I wonder why we don't see more family fortunes in the U.S. in kin groups that have lived here for generations. Estate taxes tend to inhibit the transmission of wealth down the line, but enough families have figured out how to game the system that they have held on to wealth for a century or more, notably including families which supply a disproportionate number of American politicians; they provide proof of concept of the durable family fortune. Otherwise most Americans seem to live in a futile cycle where their lifetime wealth trajectory starts from zero at birth and returns to zero by death.

Steve Sailer noted on his blog a few months back that in the UK, people with Anglo-Norman surnames in our time have held on to more wealth on average than Brits with surnames suggesting manual-laborer origins. For example, Aubrey de Grey has an Anglo-Norman surname, and he reportedly inherited several million British pounds when his mother died a few years ago. I gather that this doesn't generally happen to ordinary Brits. Apparently the warriors who came over from France with William the Conqueror in 1066, and participated in the division of the spoils, started a way of handling wealth which enabled their descendants to hold on to inherited assets down through the centuries. If the Anglo-Normans could do it, and if some American families have figured out how to do it more recently, then what keeps this practice from becoming widespread in American society?

Comment author: NancyLebovitz 11 August 2014 06:49:32PM 5 points [-]

Another possibility is that Americans are more individualistic. Maintaining a family fortune means subordinating yourself enough that it isn't spent down.

Comment author: Lumifer 11 August 2014 07:03:49PM 6 points [-]

"Lacking self-control" is probably what you mean :-)

Example: the Vanderbilts.

Comment author: wadavis 11 August 2014 08:14:12PM 2 points [-]

Supporting the individualistic argument. The family values trend in my prosperous region of Canada is leaning toward successful businessmen and entrepreneurs valuing empowering their children but not supporting their children past adolescence.

The accepted end goal IS to die as close to net zero as possible; I've not seen strong obligations to leave a large inheritance behind. The only strong obligation is the empowerment of their upper-middle-class children so they can follow the same zero-to-wealth-to-zero cycle.

Where sons stay in the same industry as their fathers, instead of striking out on their own, they work for the father's firm until they have the credit and savings to start taking loans and buying shares of the father's firm. Successful succession planning is when the children can buy 100% of the firm by the time the parents are ready for retirement.

(All based on personal observations of a single province and a group of peers n~20)

Comment author: Lumifer 11 August 2014 08:48:37PM 2 points [-]

The accepted end goal IS to die as close to net zero as possible

Is there an exception for real estate? I'm thinking both "regular" houses (reverse mortgages are uncommon) and, in particular, things like summer houses and farmland which tend to stay in the family.

I agree that the desire to leave behind a large bank account is... not widespread, but land and houses look sticky to me.

Comment author: wadavis 11 August 2014 11:00:40PM 0 points [-]

Farmland is far closer to a business asset and ends up treated the same as any other economic asset. Of course in farming there is a higher ratio of dynasty-minded families (a function of this province's immigration history and strong East European cultural backgrounds).

I see what you mean about personal homes and personal land. There may be a mental division between economic assets, which shall not be given, only sold, and personal assets, which are gifted away. This is a gap in my knowledge; it appears I need to spend more time with close-to-retirement, independently wealthy individuals.

Comment author: buybuydandavis 11 August 2014 09:20:05PM 3 points [-]

What I'd like to know is how the Brits are doing it.

Comment author: sixes_and_sevens 11 August 2014 10:17:21PM 2 points [-]

The part of my brain that generates sardonic responses says "Oxbridge and nepotism". At risk of generating explanations for patterns that don't really exist, class, education and assortative mating seem to make for wealthy dynasties.

Comment author: Nornagest 11 August 2014 05:51:22PM *  2 points [-]

I think there's a couple of fairly simple reasons contributing to Americans not having a culture of inheritance: first, that we live a long time by historical standards; and second, that we have a norm of children moving out after maturity. The first means that estates are generally released after children are well into their careers, and sometimes after they're themselves retired. The second means that all but the very wealthiest have to establish their own careers rather than living off the family dime.

This wouldn't directly affect actual inheritance, but it does take a lot of the urgency out of establishing a legacy. That lack of urgency might in turn contribute to reductions in real inheritance, given that you can sink a more or less arbitrary amount of money (by middle-class standards) into things like travel and expensive hobbies.

Comment author: Filipe 11 August 2014 08:41:38PM *  19 points [-]

Economist Scott Sumner at Econlog praised heavily Yudkowsky and the quantum physics sequence, and applies lessons from it to economics. Excerpts:

I've recently been working my way through a long set of 2008 blog posts by Eliezer Yudkowsky. It starts with an attempt to make quantum mechanics seem "normal," and then branches out into some interesting essays on philosophy and science. I'm nowhere near as smart as Yudkowsky, so I can't offer any opinion on the science he discusses, but when the posts touched on epistemological issues his views hit home.

and

I used to have a prejudice against math/physics geniuses. I thought when they were brilliant at high level math and theory, they were likely to have loony opinions on complex social science issues. Conspiracy theories. Or policy views that the government should wave a magic wand and just ban everything bad. Now that I've read Robin Hanson, Eliezer Yudkowsky and David Deutsch, I realize that I've got it wrong. A substantial number of these geniuses have thought much more deeply about epistemological issues than the average economist. So when Hanson says we put far too little effort into existential risks, or even lesser but still massive threats like solar flares, and Yudkowsky says cryonics is under-appreciated, or when they say AI (or brain ems) is coming faster than we think and will have far more profound effects than we realize, I'm inclined to take them very seriously.

Comment author: Viliam_Bur 13 August 2014 07:06:05AM *  4 points [-]

Reading the comments... one commenter objects to MWI in a way which I would summarize as: "MWI provides identical experimental predictions to CI, which makes it useless, and also MWI provides wrong experimental predictions (unlike CI), which makes it wrong".

The author immediately detects the contradiction:

You know more about it than me. But if it's just equations and you can't empirically test which interpretation is true, then why does the Casimir force make MWI less likely?

Another commenter says that MWI has a greater complexity of thought, and while it is more useful to explore algorithmic possibilities on quantum computers abstractly, CI wins because it is about the real world.

Then the former commenter says (in reaction to the author) that MWI didn't provide useful predictions, and that Casimir force can only be explained by quantum equations and not by classical physics.

(Why exactly is that supposed to be an argument against MWI? No idea. Also, if MWI doesn't provide useful predictions, how can it be useful for studying quantum computers? Does it mean that quantum computers are never going to work in, you know, the real life?)

Finally, yet another commenter explains things from MWI point of view, saying that "observers" must follow the same fundamental physics as rocks.

Comment author: Gunnar_Zarncke 14 August 2014 08:58:51AM *  4 points [-]

My son was asked what he'd wish for when he could wish for any one thing whatsoever.

He considered a while and then said: "I have so many small wishes that I'd wish for many wishes."

My ex-wife settled for "I want to be able to conjure magic" reasoning that then she could basically make any thing come true.

For me it is obviously "I want a friendly artificial general intelligence" - seems like the safest bet.

Thus basically we all chose similar things.

Comment author: shminux 14 August 2014 06:24:38PM *  2 points [-]

Maybe he'll grow up to be a mathematician.

Comment author: Gunnar_Zarncke 15 August 2014 09:54:41PM 0 points [-]

Naa, he is too practical. Builds real things. It's more likely that one of his younger brothers does. Like the five-year-old who told me that infinity can be reached only in steps of infinity each (thus one step), not in smaller steps (following some examples of how 1000 can be reached in steps of 1, 100, 1000, 200 and others).

Comment author: NancyLebovitz 14 August 2014 10:37:03AM 1 point [-]

If I only had three wishes, I would still spend one of them on having enough sense to make good wishes. I'd probably do that if I only had two wishes.

I might even use my only wish on having significantly better sense. My current situation isn't desperate-- if I only had one wish and were desperate, the best choice might well be to use the wish on dealing with the desperate circumstance as thoroughly as possible.

Comment author: DanielLC 14 August 2014 10:16:05PM 0 points [-]

For me it is obviously "I want a friendly artificial general intelligence" - seems like the safest bet.

But the AI would still be constrained by the laws of physics. Intelligence can't beat thermodynamics. You need to wish for an omnipotent friendly AI.

Comment author: RichardKennaway 13 August 2014 11:52:43AM 3 points [-]

This is not an attempt at an organised meetup, but the World Science Fiction Convention begins tomorrow in London. I'll be there. Anyone else from LessWrong?

I had intended to be at Nineworlds last weekend as well, but a clash came up with something else and I couldn't go. Was anyone else here there?

Comment author: shminux 12 August 2014 05:28:39PM *  3 points [-]

If any LWer is attending the Quantum Foundations of a Classical Universe workshop at the IBM Watson Research Center, feel free to report!

Several relatively famous experts are discussing anthropics, the Born rule, MWI, Subjective Bayesianism, quantum computers and qualia.

Comment author: MrMind 13 August 2014 01:05:53PM 1 point [-]

Here is a list of papers about the talks, if you want to get an idea without attending.

Comment author: shminux 13 August 2014 04:54:09PM 0 points [-]

I've read most of those I care to, but there is always something about face-to-face discussions that is lost in print.

Comment author: bramflakes 18 August 2014 01:01:43PM *  2 points [-]
Comment author: DanielLC 19 August 2014 01:23:33AM 1 point [-]

It doesn't seem to be clear whether that's just people of different cultures grouping faces differently, like how they might group colors differently even though their eyes work the same, or if their face/emotion correspondence is different.

Comment author: [deleted] 12 August 2014 02:55:10PM 2 points [-]

Cryonics question:

For those of you using life insurance to pay your cryonics costs, what sort of policy do you use?

Comment author: James_Miller 13 August 2014 03:40:02AM *  5 points [-]

Whole life via Rudi Hoffman for Alcor.

Comment author: Joshua_Blaine 12 August 2014 06:10:27PM 3 points [-]

I've not personally finished my own arrangements, but I'll likely be using whole life of some kind. I do know that Rudi Hoffman is an agent well recommended by people who've gone the insurance route, so talking to him will likely get you a much better idea of what choices people make (a small warning: his site is not the prettiest thing). You could also contact the people recommended on Alcor's Insurance Agents page, if you so desire.

Comment author: Thomas 11 August 2014 05:03:04PM 2 points [-]

I am getting the red envelope sign on the right side here, as if I had a message. But then I see it's not for me. For a few days now.

Comment author: RichardKennaway 11 August 2014 08:43:04PM *  10 points [-]

Have you ever clicked on the grey envelope icon found at the bottom right of every post and comment? If you do, then immediate replies to it show up in your inbox also. Look at the parent of one of these mysterious messages and see if its envelope is green. If it is, you can click it again to turn it off.

Comment author: Thomas 12 August 2014 09:51:07AM 3 points [-]

Thanks! I had done this, inadvertently.

Comment author: Nornagest 11 August 2014 05:17:12PM 3 points [-]

If a reply to one of your comments is deleted before you read it, you'll be alerted but won't get the message. I believe the alert should go away once you check your messages, though.

Comment author: drethelin 11 August 2014 05:13:07PM 2 points [-]

I think if someone is responding to you in a very downvoted thread it might not show up in your replies?

Comment author: James_Miller 11 August 2014 05:59:58PM *  3 points [-]

How morally different are ISIS fighters from us? If we had a similar upbringing would we think it morally correct to kill Yazidi children for having the "wrong" religion? Or might genetics play a role in our differing moral views? I find it hard to think of ISIS members as human, or at least I don't want to belong to the same species as them. But yes I do realize that some of my direct ancestors almost certainly did horrible, horrible things by my current moral standards.

Comment author: polymathwannabe 11 August 2014 07:01:29PM 9 points [-]

I find it hard to think of ISIS members as human, or at least I don't want to belong to the same species as them.

Beware of refusing to believe undeniable reality just because it's not nice.

Comment author: Lumifer 11 August 2014 06:08:07PM *  9 points [-]

How morally different are ISIS fighters from us? If we had a similar upbringing would we think it morally correct to kill Yazidi children for having the "wrong" religion?

A relevant factor which is (intentionally or not) ignored by American media is that, from the point of view of pious Muslims, Yazidis are satanists.

To quote Wikipedia (Taus Melek is basically the chief deity for Yazidis, God the Creator being passive and uninvolved with the world):

As a demiurge figure, Tawûsê Melek is often identified by orthodox Muslims as a Shaitan (Satan), a Muslim term denoting a devil or demon who deceives true believers. The Islamic tradition regarding the fall of "Shaitan" from Grace is in fact very similar to the Yazidi story of Malek Taus – that is, the Jinn who refused to submit to God by bowing to Adam is celebrated as Tawûsê Melek by Yazidis, but the Islamic version of the same story curses the same Jinn who refused to submit as becoming Satan.[38] Thus, the Yazidi have been accused of devil worship.

So, what's the Christianity's historical record for attitude towards devil worshippers?

or at least I don't want to belong to the same species as them

Any particular reason you feel this way about the Sunni armed groups, but not about, say, Russian communists, or Mao's Chinese, or Pol Pot's Cambodians, or Rwandans, or... it's a very long list, y'know?

Comment author: Nornagest 11 August 2014 06:42:23PM *  7 points [-]

from the point of view of pious Muslims, Yazidis are satanists [...] what's the Christianity's historical record for attitude towards devil worshippers?

The closest parallel might be to Catharism, a Gnostic-influenced sect treating the God of the Old Testament as an entity separate from, and opposed to, the God of the New, and which was denounced as a "religion of Satan" by contemporary Christian authorities. That was bloodily suppressed in the Albigensian Crusade. Manichaeism, among other early Gnostic groups, was similarly accused, but it's much older and less well documented, and reached its greatest popularity (and experienced its greatest persecutions) in areas without Christian majorities.

A few explicitly Satanist groups have popped up since the 18th century, but they've universally been small and insignificant, and don't seem to have experienced much persecution outside of social disapproval. Outside of fundamentalist circles they seem to be treated as immature and insincere more than anything else.

On the other hand, unfounded accusations of Satanism seem to be fertile ground for moral panics -- from the witch trials of the early modern period (which, Wiccan lore notwithstanding, almost certainly didn't target any particular belief system) to the more recent Satanic ritual abuse panics.

Comment author: Lumifer 11 August 2014 06:53:52PM 0 points [-]

The closest parallel might be to Catharism

I would probably say that the closest parallel is the persecution of witches in medieval Europe (including but not limited to the witch trials).

Comment author: Nornagest 11 August 2014 07:02:22PM *  4 points [-]

The persecution of witches targeted individuals or small groups, not (as far as modern history knows) members of any particular religion; and the charges leveled at alleged witches usually involved sorcerous misbehavior of various kinds (blighting crops, causing storms, bringing pestilence...) rather than purely religious accusations. Indeed, for most of the medieval era the Church denied the existence of witches (though, as we've seen above, it was happy to persecute real heretics): witch trials only gained substantial clerical backing well into the early modern period.

Seems pretty different to me.

Comment author: Lumifer 11 August 2014 07:10:37PM *  0 points [-]

Charges of being in league with the Devil were a necessary part of accusations against the witches because, I think, sorcery was considered to be possible for humans only through the Devil's help. The witches' covens were perceived as actively worshipping the Devil.

I agree that it's not the exact parallel, but do you think a whole community (with towns and everything) of devil worshippers could have survived in Europe or North America for any significant period of time? Compared to Islam, Christianity was just quicker and more efficient about eliminating them.

Comment author: Nornagest 11 August 2014 08:00:43PM *  2 points [-]

I agree that it's not the exact parallel, but do you think a whole community (with towns and everything) of devil worshippers could have survived in Europe or North America for any significant period of time?

That veers more into speculation than I'm really comfortable with. That said, though, I think you're giving this devil-worship thing a bit more weight than it should have; sure, some aspects of Melek Taus are probably cognate to the Islamic Shaitan myth, but Yazidi religion as a whole seems to draw in traditions from several largely independent evolutionary paths. We're not dealing here with the almost certainly innocent targets of witch trials or with overenthusiastic black metal fans, nor even with an organized Islamic heresy, but with a full-blown syncretic religion.

No similar religions of comparable age survive in Christianity's present sphere of influence, though the example of Gnosticism suggests that the early evolution of the Western branch of Abrahamic faith was pretty damn complicated, and that many were wiped out in Christianity's early expansion or in medieval persecutions. There are a lot of younger ones, however, especially in the New World: Santeria comes to mind.

That's only tangentially relevant to the historical parallels I'm trying to outline, though.

Comment author: Lumifer 11 August 2014 08:41:14PM *  2 points [-]

a full-blown syncretic religion

Oh, it certainly is, but the issue is not what we are dealing with -- the issue is how the ISIS fighters perceive it.

The whole Middle-East-to-India region is full of smallish religions which look to be, basically, outcomes of "Throw pieces of several distinct religious traditions together, blend on high for a while, then let sit for a few centuries".

Comment author: Nornagest 11 August 2014 09:58:13PM *  4 points [-]

Oh, it certainly is, but the issue is not what we are dealing with -- the issue is how the ISIS fighters perceive it.

I'm pretty sure their perceptions are closer to an Albigensian Crusader's attitude toward Catharism -- or even your average Chick tract fan's attitude toward Catholicism -- than some shit-kicking medieval peasant's grudge toward the old man down the lane who once scammed him for a folk healing ritual that invoked a couple of barbarous names for shock value. Treating religious opponents as devil-worshippers is pretty much built into the basic structure of (premodern, and some modern) Christianity and Islam, whether or not there's anything to the accusation (though as I note above, the charge is at least as sticky for Catharism as for the Yazidi). The competing presence of a structured religion that's related closely enough to be uncomfortable but not closely enough to be a heresy per se... that's a little more distinctive.

Comment author: buybuydandavis 11 August 2014 09:34:16PM 0 points [-]

A relevant factor which is (intentionally or not) ignored by American media is that, from the point of view of pious Muslims, Yazidis are satanists.

It hasn't been ignored by the American media. I've heard it multiple times. I don't think the term used was Satanist, but "devil worshippers".

Comment author: James_Miller 11 August 2014 06:26:33PM 0 points [-]

Although I'm a libertarian now, in my youth I was very left-wing and can understand the appeal of communism. For many of the others on the long list, yes they do feel very other to me.

Comment author: bbleeker 12 August 2014 12:41:39PM 2 points [-]

I too was very left-wing when I was young, and now I feel communism does belong with the others on that list. It fills the same mental space as a religion, and is believed in much the same way (IME).

Comment author: DanielLC 11 August 2014 10:04:45PM 16 points [-]

I find it hard to think of ISIS members as human

That's how the ISIS fighters feel about the Yazidi.

Comment author: James_Miller 11 August 2014 10:51:57PM 5 points [-]

Yes, an uncomfortable symmetry.

Comment author: RichardKennaway 12 August 2014 06:53:26AM 4 points [-]

Symmetry? Do you want to behead the children of ISIS fighters?

Comment author: Azathoth123 13 August 2014 04:52:26AM 3 points [-]

What age are we talking about here? ISIS has been recruiting children as young as 9 and 10.

Comment author: James_Miller 12 August 2014 03:03:34PM 2 points [-]

No, so I guess it's not perfect symmetry.

Comment author: DanielLC 12 August 2014 04:48:30PM 0 points [-]

He finds their children human. Just not the ISIS fighters themselves.

Comment author: bramflakes 11 August 2014 07:24:43PM 5 points [-]

Or might genetics play a role in our differing moral views?

It's possible that more inbred clannish societies have smaller moral circles than Western outbreeders.

I against my brother, my brothers and I against my cousins, then my cousins and I against strangers

  • Bedouin proverb
Comment author: [deleted] 12 August 2014 03:52:10AM 12 points [-]

I was talking to someone from Tennessee once, and he said something along the lines of: "When I'm in a bar in western Tennessee, I drink with the guy from western Tennessee and fight the guy from eastern Tennessee. When I'm in a bar in eastern Tennessee, I drink with the guy from Tennessee and fight the guy from Georgia. When I'm in a bar in Georgia, I drink with the guy from the South and fight the guy from New England."

Comment author: CellBioGuy 12 August 2014 05:02:29AM *  4 points [-]

It's possible that more inbred clannish societies have smaller moral circles than Western outbreeders.

The history of the European takeover of the Americas and the damn near genocide of somewhere between tens and hundreds of millions of people in the process, and the history of the resultant societies, should disabuse everyone here of any laughable claims of ethnic superiority in this regard. I also strongly suspect that the European diaspora of the Americas and elsewhere just hasn't had enough time for the massive patchwork of tribalisms to inevitably crystallize out of the liquid wave of disruptive post-genocide settlement that happened over the last few hundred years, and instead we only have a few very large groups in this hemisphere that are coming to hate each other so far. Though sometimes I suspect the small coal mining town my parents escaped from could be induced to have race riots between the Poles and Italians.

Also... Germany. Enough said.

EDIT: Not directed at you, bramflakes, but at the whole thread here... how in all hell am I seeing so much preening smug superiority on display here? Humans are brutal murderous monkeys under the proper conditions. No one here is an exception at all except through accidents of space and time, and even now we all reading this are benefiting from systems which exploit and kill others and are for the most part totally fine with them or have ready justifications for them. This is a human thing.

Comment author: RichardKennaway 12 August 2014 07:16:12AM 10 points [-]

Humans are brutal murderous monkeys under the proper conditions.

They are also sweetness and light under the proper conditions.

No one here is an exception at all except through accidents of space and time

You seem to be claiming that certain conditions -- those not producing brutal murderous monkeys -- are accidents of space and time, but certain others -- those producing brutal murderous monkeys -- are not. That "brutal murderous monkeys" is our essence and any deviation from that mere accident, in the philosophical sense. That the former is our fundamental nature and the latter mere superficial froth.

There is no actual observation that can be made to distinguish "proper conditions" from "parochial circumstance", "essence" from "accident", "fundamental" from "superficial".

Comment author: James_Miller 12 August 2014 03:38:39PM 4 points [-]

how in all hell am I seeing so much preening smug superiority on display here?

We have a right to feel morally superior to ISIS, although probably not on genetic grounds.

No one here is an exception at all except through accidents of space and time

But is this true? Do some people have genes which strongly predispose them against killing children? It feels to me like I do, but I recognize my inability to properly determine this.

and even now we all reading this are benefiting from systems which exploit and kill others and are for the most part totally fine with them or have ready justifications for them.

As a free market economist I disagree with this. The U.S. economy does not derive wealth from the killing of others, although as the word "exploit" is hard to define I'm not sure what you mean by that.

Comment author: ChristianKl 12 August 2014 09:15:53PM 1 point [-]

We have a right to feel morally superior to ISIS, although probably not on genetic grounds.

The Stanford prison experiment suggests that you don't need that much to get people to do immoral things. ISIS evolved over years of hard civil war.

ISIS also partly has their present power because the US first destabilised Iraq and later allowed funding of Syrian rebels. The US was very free to avoid fighting the Iraq war. ISIS fighters get killed if they don't fight their civil war.

Comment author: fubarobfusco 13 August 2014 02:37:22AM 1 point [-]

The Stanford prison experiment suggests that you don't need that much to get people to do immoral things.

The Stanford prison "experiment" was a LARP session that got out of control because the GM actively encouraged the players to be assholes to each other.

Comment author: Douglas_Knight 14 August 2014 01:57:59AM 1 point [-]

I agree with that interpretation of the experiment but "active encouragement" should count as "not that much."

Comment author: James_Miller 12 August 2014 09:25:48PM 0 points [-]

I am very confident that a college-student version of me taking part in a similar experiment as a guard would not have been cruel to the prisoners, in part because the high-school me (who at the time was very left-wing) decided not to stand up for the Pledge of Allegiance even though everyone else in his high school regularly did, and refused to participate in a gym game named war-ball because I objected to the name.

Comment author: Nornagest 12 August 2014 09:44:21PM 5 points [-]

I didn't stand for the Pledge in school either, but in retrospect I think that had less to do with politics or virtue and more to do with an uncontrollable urge to look contrarian.

I can see myself going either way in the Stanford prison experiment, which probably means I'd have abused the prisoners.

Comment author: buybuydandavis 11 August 2014 09:46:27PM 9 points [-]

It's a little harder to say about the ISIS guys, but I think personality-wise many of us are a lot like the Al Qaeda leadership. Ideology, and jihad on its behalf, appeals.

Most people don't take ideas too seriously. We do. And I think it's largely genetic.

I find it hard to think of ISIS members as human

Human, All Too Human.

Historically, massacring The Other is the rule, not the exception. You don't even need to be particularly ideological for that. People who just go with the flow of their community will set The Other on fire in a public square, and have a picnic watching. Bring their kids. Take grandma out for the big show.

Comment author: James_Miller 11 August 2014 10:56:03PM *  1 point [-]

Most people don't take ideas too seriously. We do. And I think it's largely genetic.

Excellent point. I wonder if LW readers and Jihadists would give similar answers to the Trolley problem.

Comment author: buybuydandavis 12 August 2014 02:20:30AM 5 points [-]

I don't think that's the test. It's not that they'd give the same answers to any particular question.

I think the test would be a greater likelihood of being unshakeable by the moral considerations that move others who are not so ideological. How "principled" are you? How "extreme" a situation are you willing to assent to, relative to the general population? Largely, how far can you override morality cognitively?

Comment author: Nornagest 11 August 2014 11:08:37PM *  4 points [-]

I wonder if LW readers and Jihadists would give similar answers to the Trolley problem.

A hundred bucks says the answer is "no". Religious fundamentalism is not known to encourage consequential ethics.

There may be certain parallels -- I've read that engineers and scientists, or students of those disciplines, are disproportionately represented among jihadists -- but they're probably deeper than that.

Comment author: buybuydandavis 12 August 2014 02:36:13AM 5 points [-]

Also disproportionately represented as the principals in the American Revolution. Inventors, engineers, scientists, architects.

Franklin, Jefferson, Paine, and Washington all had serious inventions. That's pretty much the first string of the revolution.

Comment author: RichardKennaway 12 August 2014 07:03:08AM 4 points [-]

A hundred bucks says the answer is "no". Religious fundamentalism is not known to encourage consequential ethics.

That might depend on the consequences.

A runaway trolley is careering down the tracks and will kill a single infidel if it continues. If you pull a lever, it will be switched to a side track and kill five infidels. Do you pull the lever?

The lever is broken, but beside you on the bridge is a very fat man, one of the faithful. Do you push him off the bridge to deflect the trolley and kill five infidels, knowing that he will have his reward for his sacrifice in heaven?

Comment author: Prismattic 12 August 2014 02:02:53AM 2 points [-]

I've read that engineers and scientists, or students of those disciplines, are disproportionately represented among jihadists

I've also read this, but I want to know if it corrects for the fact that the educational systems in many of the countries that produce most jihadis don't encourage study of the humanities and certain social sciences. Is it really engineers in particular, or is the educated-but-stifled who happen overwhelmingly to be engineers in these countries?

Comment author: Viliam_Bur 13 August 2014 07:50:38AM 3 points [-]

How morally different are ISIS fighters from us?

Uhm, taboo "morally different"?

Are their memes repulsive to me? Yes, they are.

Do they have terminal value as humans (ignoring their instrumental value)? Yes, they do.

How about their instrumental value? Uhm, probably negative, since they seem to spend a lot of time killing other humans.

If we had a similar upbringing would we think it morally correct to kill Yazidi children for having the "wrong" religion? Or might genetics play a role in our differing moral views?

Probably yes. I think there can be a genetic influence, but there is much more of "monkey see, monkey do" in humans.

Comment author: Gunnar_Zarncke 11 August 2014 10:34:04PM 5 points [-]

First you might want to consider propaganda.

http://www.revleft.com/vb/ten-commandments-war-t52907/index.html?s=8387131b8a98f6ee7e6ba74cce570d8e

http://home.cc.umanitoba.ca/~mkinnear/16_Falsehood_in_wartime.pdf

  1. We do not want war.

  2. The opposite party alone is guilty of war

  3. The enemy is the face of the devil.

  4. We defend a noble cause, not our own interest.

  5. The enemy systematically commits cruelties; our mishaps are involuntary.

  6. The enemy uses forbidden weapons.

  7. We suffer small losses, those of the enemy are enormous.

  8. Artists and intellectuals back our cause.

  9. Our cause is sacred.

  10. All who doubt our propaganda, are traitors.

Comment author: NancyLebovitz 11 August 2014 07:01:43PM 4 points [-]

Part of "us" is our culturally transmitted values.

My impression is that ISIS is mostly a new thing-- it's a matter of relatively new memes taken up by adolescents and adults rather than generational transmission.

I don't think it's practical to see one's enemies, even those who behave vilely and are ideologically committed to continuing to do so, as non-human. To see them as non-human is to commit oneself to framing them as incomprehensible. More exactly, the usual outcomes seem to be "all they understand is force" or "there's nothing to do but kill them", which makes it difficult to think of how to deal with them if victory by violence isn't a current option.

Comment author: Lumifer 11 August 2014 07:12:11PM 3 points [-]

I don't think it's practical to see one's enemies ... as non-human.

On the contrary, that's the attitude specifically trained in modern armies, US included. Otherwise not enough people shoot at the enemy :-/

Comment author: Azathoth123 13 August 2014 04:40:28AM 2 points [-]

On the contrary, that's the attitude specifically trained in modern armies,

I'm not sure about modern armies, but ancient and even medieval armies certainly didn't need this attitude to kill their enemies.

Comment author: NancyLebovitz 11 August 2014 07:54:08PM 2 points [-]

You might not be in an army.

Comment author: RichardKennaway 12 August 2014 06:51:57AM 2 points [-]

If we had a similar upbringing would we think it morally correct to kill Yazidi children for having the "wrong" religion?

The question is irrelevant. If it is wrong to behead children for having the "wrong" religion, that is not affected by fictional scenarios in which "we" believed differently. (It's not clear what "we" actually means there, but that's a separate philosophical issue.) Truth is not found by first seeing what you believe, and then saying, "I believe this, therefore it is true."

Or might genetics play a role in our differing moral views?

This question is also irrelevant.

I find it hard to think of ISIS members as human

Well, they are. Start from there.

Comment author: niceguyanon 14 August 2014 07:30:13PM 1 point [-]

Here is a Vice documentary posted today about ISIS: https://news.vice.com/video/the-islamic-state-full-length

Comment author: mouseking 15 August 2014 01:29:28AM 2 points [-]

I've been noticing a theme of utilitarianism on this site -- can anyone explain this? More specifically: how did you guys rationalize a utilitarian philosophy over an existential, nihilistic, or hedonistic one?

Comment author: Dahlen 15 August 2014 06:31:26PM 5 points [-]

To put it as simply as I could, LessWrongers like to quantify stuff. A more specific instance of this is the fact that, since this website started off as the brainchild of an AI researcher, the prevalent intellectual trends will be those with applicability in AI research. Computers work easily with quantifiable data. As such, if you want to instill human morality into an AI, chances are you'll at least consider conceptualizing morality in utilitarian terms.

Comment author: RichardKennaway 15 August 2014 01:00:06PM 4 points [-]

The confluence of a number of ideas.

Cox's theorem shows that degree of belief can be expressed as probabilities.

The VNM theorem shows that preferences can be expressed as numbers (unique up to a positive affine transformation), usually called utilities.

Consequentialism, the idea that actions are to be judged by their consequences, is pretty much taken as axiomatic.

Combining these gives the conclusion that the rational action to take in any situation is the one that maximises the resulting expected utility.

Your morality is your utility function: your beliefs about how people should live are preferences about how they should live.

Add the idea of actually being convinced by arguments (except arguments of the form "this conclusion is absurd, therefore there is likely to be something wrong with the argument", which are merely the absurdity heuristic) and you get LessWrong utilitarianism.
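The "maximise the resulting expected utility" step above can be put in toy code. This is only an illustrative sketch: the actions, probabilities, and utility numbers are all invented, not anything from decision theory itself.

```python
# Toy expected-utility maximization: pick the action whose
# probability-weighted utility over outcomes is highest.
# All probabilities and utilities here are invented for illustration.

def expected_utility(action):
    """Sum of P(outcome | action) * U(outcome) over the action's outcomes."""
    return sum(p * u for p, u in action["outcomes"])

actions = [
    # each outcome is a (probability, utility) pair
    {"name": "donate", "outcomes": [(0.9, 10.0), (0.1, -1.0)]},
    {"name": "ice_cream", "outcomes": [(1.0, 2.0)]},
]

best = max(actions, key=expected_utility)
print(best["name"], expected_utility(best))
```

With these made-up numbers, "donate" wins (0.9 * 10 + 0.1 * (-1) = 8.9 versus 2.0); everything controversial is of course hidden in where the probabilities and utilities come from, which is what Cox and VNM are invoked for.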

Comment author: blacktrance 15 August 2014 11:10:50PM 1 point [-]

Utilitarianism is more than just maximizing expected utility, it's maximizing the world's expected utility. Rationality, in the economic or decision-theoretic sense, is not synonymous with utilitarianism.

Comment author: RichardKennaway 16 August 2014 08:03:04AM 1 point [-]

That is a good point, but I think one under-appreciated on LessWrong. It seems to go "rationality, therefore OMG dead babies!!" There is discussion about how to define "the world's expected utility", but it has never reached a conclusion.

Comment author: blacktrance 16 August 2014 08:54:53AM 0 points [-]

In addition to the problem of defining "the world's expected utility", there is also the separate question of whether it (whatever it is) should be maximized.

Comment author: Vulture 17 August 2014 05:13:27PM *  0 points [-]

Utilitarianism is more than just maximizing expected utility, it's maximizing the world's expected utility.

I think this is probably literally correct, but misleading. "Maximizing X's utility" is generally taken to mean "maximize your own utility function over X". So in that sense you are quite correct. But if by "maximizing the world's utility" you mean something more like "maximizing the aggregate utility of everyone in the world", then what you say is only true of those who adhere to some kind of preference utilitarianism. Other utilitarians would not necessarily agree.

Comment author: blacktrance 17 August 2014 08:52:21PM *  0 points [-]

Hedonic utilitarians would also say that they want to maximize the aggregate utility of everyone in the world, they would just have a different conception of what that entails. Utilitarianism necessarily means maximizing aggregate utility of everyone in the world, though different utilitarians can disagree about what that means - but they'd agree that maximizing one's own utility is contrary to utilitarianism.

Comment author: Vulture 18 August 2014 12:34:58AM *  0 points [-]

Anyone who believes that "maximizing one's own utility is contrary to utilitarianism" is fundamentally confused as to the standard meaning of at least one of those terms. Not knowing which one, however, I'm not sure what I can say to make the matter more clear.

Comment author: blacktrance 18 August 2014 01:09:18AM 0 points [-]

Maximizing one's own utility is practical rationality. Maximizing the world's aggregate utility is utilitarianism. The two need not be the same, and in fact can conflict. For example, you may prefer to buy a cone of ice cream, but world utility would be bettered more effectively if you'd donate that money to charity instead. Buying the ice cream would be the rational own-utility-maximizing thing to do, and donating to charity would be the utilitarian thing to do.
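The ice-cream/charity distinction can be shown with toy numbers (all invented): the same menu of actions gives different answers depending on whether you maximize your own utility or the aggregate over everyone.

```python
# Toy contrast between own-utility maximization and utilitarian
# (aggregate) maximization. All utility numbers are invented.

actions = {
    "ice_cream": {"own": 5.0, "others": 0.0},   # tasty, helps nobody else
    "charity":   {"own": 1.0, "others": 20.0},  # mild warm glow, big help
}

# Practical rationality: maximize this agent's own utility.
own_best = max(actions, key=lambda a: actions[a]["own"])

# Utilitarianism: maximize the aggregate utility of everyone.
aggregate_best = max(actions, key=lambda a: actions[a]["own"] + actions[a]["others"])

print(own_best, aggregate_best)
```

With these numbers the own-utility maximizer buys the ice cream (5 > 1) while the utilitarian donates (21 > 5), which is exactly the conflict described above.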

Comment author: RichardKennaway 18 August 2014 06:30:41AM *  0 points [-]

However, if utilitarianism is your ethics, the world's utility is your utility, and the distinction collapses. A utilitarian will never prefer to buy that ice cream.

Comment author: shminux 18 August 2014 06:39:34AM 0 points [-]

It's the old System I (want ice cream!) vs System 2 (want world peace!) friction again.

Comment author: Ef_Re 15 August 2014 01:58:48AM -1 points [-]

To the extent that lesswrong has an official ethical system, that system is definitely not utilitarianism.

Comment author: James_Miller 15 August 2014 02:36:58AM 1 point [-]

I don't agree. LW takes a microeconomics viewpoint of decision theory and this implicitly involves maximizing some weighted average of everyone's utility function.

Comment author: Vulture 17 August 2014 05:22:45PM 0 points [-]

At some point we really need to come up with more words for this stuff so that the whole consequentialism/hedonic-utilitarianism/etc. confusion doesn't keep coming up.

Comment author: 2ZctE 15 August 2014 05:06:41PM 0 points [-]

To the extent that lesswrong has an official ethical system, that system is utilitarianism with "the fulfillment of complex human values" as a suggested maximand rather than hedons

Comment author: Ef_Re 16 August 2014 06:35:30PM 0 points [-]

That would normally be referred to as consequentialism, not utilitarianism.

Comment author: 2ZctE 18 August 2014 03:08:25AM *  0 points [-]

Huh, I'm not sure actually. I had been thinking of consequentialism as the general class of ethical theories based on caring about the state of the world, and of utilitarianism as what you get when you try to maximize some definition of utility (which could be human value-fulfillment if you tried to reason about it quantitatively). If my usage is unusual, I think I more or less inherited it from the Consequentialism FAQ.

Comment author: Ef_Re 22 August 2014 11:46:07PM 0 points [-]

If you mean Yvain's, while his stuff is in general excellent, I recommend learning about philosophical nomenclature from actual philosophers, not medics.

Comment author: ChristianKl 15 August 2014 11:40:36AM 0 points [-]

In general this site focuses on the friendly AI problem, a nihilistic or a hedonistic AI might not be friendly to humans. The notion of an existentialist AI seems to be largely unexplored as far as I know.

Comment author: Username 11 August 2014 01:33:45PM 2 points [-]

My brain spontaneously generated an argument for why killing all humans might be the best way to satisfy my values. As far as I know it's original; at any rate, I don't recall seeing it before. I don't think it actually works, and I'm not going to post it on the public internet. I'm happy to just never speak of it again, but is there something else I should do?

Comment author: RichardKennaway 11 August 2014 02:27:40PM 13 points [-]

is there something else I should do?

Find out how your brain went wrong, with a view to not going so wrong again.

Comment author: zzrafz 11 August 2014 04:20:42PM 0 points [-]

Playing devil's advocate here, the original poster is not that wrong. Ask any other living species on Earth and they will say their life would be better without humans around.

Comment author: Nectanebo 11 August 2014 05:26:11PM *  9 points [-]

Apart from the fact that they wouldn't say anything (because generally animals can't speak our languages ;)), nature can be pretty bloody brutal. There are plenty of situations in which our species' existence has made the lives of other animals much better than they would otherwise be. I'm thinking of veterinary clinics that often perform work on wild animals, pets that don't have to worry about predation, that kind of thing. Also I think there are probably a lot of species that have done alright for themselves since humans showed up; animals like crows and the equivalents in their niche around the world seem to do quite well in urban environments.

As someone who cares about animal suffering, is sympathetic to vegetarianism and veganism, and even somewhat sympathetic to more radical ideas like eradicating the world's predators, I think that humanity represents a very real possibility to decrease suffering including animal suffering in the world, especially as we grow in our ability to shape the world in the way we choose. Certainly, I think that humanity's existence provides real hope in this direction, remembering that the alternative is for animals to continue to suffer on nature's whims perhaps indefinitely, rather than ours perhaps temporarily.

Comment author: zzrafz 11 August 2014 06:31:37PM 0 points [-]

Never thought of it this way. Guess in the long term it makes sense. So far, though...

Comment author: Lumifer 11 August 2014 04:44:25PM 7 points [-]

Ask any other living species on Earth and they will say their life would be better without humans around.

Let's ask a cockroach, a tapeworm, and a decorative-breed dog :-)

Comment author: DanielLC 11 August 2014 09:45:09PM 1 point [-]

Humans are leading to the extinction of many species. Given the sorts of things that happen to them in the wild, this may be an improvement.

This is too distant from the original argument to be an argument for it. I'm just playing devil's advocate recursively.

Comment author: Username 12 August 2014 12:24:58AM 3 points [-]

It seems I was unclear. I have no intention of attempting to kill all humans. I'm not posting the argument publicly because I don't want to run the (admittedly small) risk that someone else will read it and take it seriously. I'm just wondering if there's anything I can do with this argument that will make the world a slightly better place, instead of just not sharing it (which is mildly negative to me and neutral to everyone else - unless I've sparked anyone's curiosity, for which I apologise).

Comment author: polymathwannabe 11 August 2014 02:43:03PM 3 points [-]

What values could possibly lead to such a choice?

Comment author: satt 12 August 2014 12:00:35AM *  8 points [-]

Hardcore negative utilitarianism?

In The Open Society and its Enemies (1945), Karl Popper argued that the principle "maximize pleasure" should be replaced by "minimize pain". He thought "it is not only impossible but very dangerous to attempt to maximize the pleasure or the happiness of the people, since such an attempt must lead to totalitarianism."[67] [...]

The actual term negative utilitarianism was introduced by R.N.Smart as the title to his 1958 reply to Popper[69] in which he argued that the principle would entail seeking the quickest and least painful method of killing the entirety of humanity.

Suppose that a ruler controls a weapon capable of instantly and painlessly destroying the human race. Now it is empirically certain that there would be some suffering before all those alive on any proposed destruction day were to die in the natural course of events. Consequently the use of the weapon is bound to diminish suffering, and would be the ruler's duty on NU grounds.[70]

(Pretty cute wind-up on Smart's part; grab Popper's argument that to avoid totalitarianism we should minimize pain, not maximize happiness, then turn it around on Popper by counterarguing that his argument obliges the obliteration of humanity whenever feasible!)

Comment author: buybuydandavis 11 August 2014 09:00:42PM 2 points [-]

You should consider that the problem may not be in the argument, but in your beliefs about the values you think you have.

Comment author: Username 12 August 2014 12:34:38AM 1 point [-]

I have considered that, and I don't think it's a relevant issue in this particular case.

Comment author: NancyLebovitz 11 August 2014 10:14:01PM 1 point [-]

I'd say not to worry about it unless it's a repetitive thought.

Comment author: [deleted] 11 August 2014 04:57:15PM -2 points [-]

Reform yourself. Killing all humans is axiomatically evil in my playbook, so either (a) you are reasoning from principles which permit Mark!evil (which makes you Mark!evil, and on my watch-list), or (b) you made a mistake. It's probably the latter.

Comment author: lmm 11 August 2014 07:08:23PM 1 point [-]

Do you care about it? It sounds like you're responding appropriately (though IMO it's better that such arguments be public and be refuted publicly, as otherwise they present a danger to people who are smart or lucky enough to think up the argument but not the refutation). If the generation of that argument, or what it implies about your brain, is causing trouble with your life then it's worth investigating, but if it's not bothering you then such investigation might not be worth the cost.

Comment author: Username 12 August 2014 12:44:13AM 1 point [-]

though IMO it's better that such arguments be public and be refuted publicly, as otherwise they present a danger to people who are smart or lucky enough to think up the argument but not the refutation

This is the sort of thing I'm thinking about. The argument seems more robust than the obvious-to-me counterargument, so I feel that it's better to just not set people thinking about it. I'm not sure though.

Comment author: zzrafz 11 August 2014 04:18:21PM -2 points [-]

Since you won't be able to kill all humans and will eventually get caught and imprisoned, the best move is to abandon your plan, according to utilitarian logic.

Comment author: SolveIt 12 August 2014 06:59:53AM 0 points [-]

I'm not so sure this is obvious. How much damage can one intelligent, rational, and extremely devoted person do? Certainly there are a few people in positions that obviously allow them to wipe out large swaths of humanity. Of course, getting to those positions isn't easy (yet still feasible given an early enough start!). But I've thought about this for maybe two minutes; how many nonobvious ways would there be for someone willing to put in decades?

The usual way to rule them out without actually putting in the decades is by taking the outside view and pointing at all the failures. But nobody even seems to have seriously tried. If they had, we'd have at least seen partial successes.

Comment author: David_Gerard 18 August 2014 04:55:46PM 0 points [-]
Comment author: Error 12 August 2014 10:30:04PM 1 point [-]

I posted this in the last open thread but I think it got buried:

I was at Otakon 2014, and there was a panel about philosophy and videogames. The description read like Less Wrongese. I couldn't get in (it was full) but I'm wondering if anyone here was responsible for it.

Comment author: David_Gerard 13 August 2014 11:22:41AM 2 points [-]

The description: "Philosophy in Video Games [F]: A discussion of philosophical themes present in many different video games. Topics will include epistemology, utilitarianism, philosophy of science, ethics, logic, and metaphysics. All topics will be explained upon introduction and no prior knowledge is necessary to participate!"

Did they record all panels?

Comment author: Error 13 August 2014 02:42:02PM 1 point [-]

According to their FAQ, most panels are not recorded. Google doesn't turn up any immediate evidence that this one was an exception.

Comment author: polymathwannabe 12 August 2014 08:48:36PM 1 point [-]

In a world without leap years, how many people should a company have to be reasonably certain that every day will be someone's birthday?

Comment author: xnn 12 August 2014 09:23:34PM *  6 points [-]

See Coupon collector's problem, particularly "tail estimates".
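To make the pointer concrete, here is a small Python sketch. It computes the exact coverage probability by inclusion-exclusion, and uses the union-bound tail estimate (the kind of bound the linked article's "tail estimates" section gives) to find a head count sufficient for 95% confidence. The 95% target and the 365-day year are assumptions for illustration, not part of the original question.

```python
from math import comb, log, ceil

DAYS = 365  # a world without leap years

def p_all_covered(n, days=DAYS):
    """Exact probability that n people with uniform random birthdays
    cover all `days` days, via inclusion-exclusion over empty days."""
    return sum((-1) ** k * comb(days, k) * ((days - k) / days) ** n
               for k in range(days + 1))

# Union bound: P(some day has no birthday) <= days * ((days - 1) / days) ** n.
# Solving days * ((days - 1) / days) ** n <= 0.05 for n gives a sufficient
# (slightly conservative) head count for 95% confidence.
n_bound = ceil(log(DAYS / 0.05) / log(DAYS / (DAYS - 1)))
print(n_bound)                   # a bit over 3200 people
print(p_all_covered(n_bound))    # at least 0.95
```

Note the contrast with the expected waiting time, 365 * H_365 (roughly 2365 people): to be 95% sure rather than right on average, the company needs noticeably more staff, because the last few uncovered days are slow to fill.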

Comment author: polymathwannabe 12 August 2014 09:34:05PM 1 point [-]

Thank you.

Comment author: NancyLebovitz 18 August 2014 08:26:42AM 0 points [-]

30-day experiment with homemade soylent -- mostly positive outcome.

Comment author: pianoforte611 13 August 2014 11:00:45PM 0 points [-]

Is it easier for you to tell men or women apart?

Obvious hypothesis: whichever gender you are attracted to, you will find them easier to tell apart.

Comment author: kalium 14 August 2014 03:03:32AM 0 points [-]

It's easier for me to tell women apart because their hairstyles have more interpersonal variation. (I distinguish people mainly by hair. It takes a few months before I learn to recognize a face.) I'm pretty much just attracted to men though.

Comment author: ChristianKl 14 August 2014 11:23:59AM *  1 point [-]

I don't really know. I'm attracted to women, and if I look back, most cases of confusing one person for another are cases where I dance salsa with a woman for 10 minutes and then months later I see the same woman again.

I also use gait patterns for recognition, and sometimes have a hard time deciding whether a photo is of a person I have seen in person if I haven't interacted much with them.

As far as attraction goes, it's also worth noting that I sometimes do feel emotions that come from having interacted with a person beforehand, but it takes me some time to puzzle together where I met the person before. The emotional part is handled by different parts of the brain.

Comment author: wadavis 18 August 2014 07:07:35PM 0 points [-]

Interesting point about the gait recognition. I had an acquaintance of the family recognize my father by his gait at a distance where I couldn't. Anyone else not recognize gaits? Does this vary by person?

Comment author: arundelo 15 August 2014 02:29:27AM *  0 points [-]

If there's a difference (in how well I can discriminate between men versus women) I haven't noticed it. I am attracted to women much more than men.

Comment author: bramflakes 14 August 2014 12:06:03AM 0 points [-]

What do you mean "tell apart"?