Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Open thread, Apr. 10 - Apr. 16, 2017

2 Post author: MrMind 11 April 2017 06:57AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Comments (123)

Comment author: sone3d 13 April 2017 10:12:27PM *  2 points [-]

You think like a human because you are a human. Not because this is how an intelligent being thinks.

Just a thought.

Comment author: Viliam 19 April 2017 01:25:10PM 0 points [-]
Comment author: Thomas 11 April 2017 07:30:48AM 2 points [-]
Comment author: Good_Burning_Plastic 11 April 2017 11:08:38AM 0 points [-]

Yes, they do. That's where the extra (1 + z) factor in the definition of luminosity distance comes from.

Comment author: Thomas 11 April 2017 11:34:25AM *  0 points [-]

I don't like this solution. There is nowhere the speed of light to be seen there.

OTOH, the "curvature of space" they mention, is not very necessary in our flat space.

But the Lorentz factor would be needed here. Not only for the time dilatation factor, by which the energy output is to be reduced - but also for the relativistic mass increase by the same factor. And for the length contraction as well!

That's the real problem, I think.

Comment author: Good_Burning_Plastic 15 April 2017 05:33:22PM 0 points [-]

relativistic mass

That's not a very useful concept, because it's nothing but the total energy measured in different units. It only has a name of its own for hysterical raisins. A much more useful concept is the invariant mass, which is the square root of the total energy squared minus the total momentum squared (in suitable units), which (as the name suggests) is the same in all frames of references; in particular, it equals the total energy in the frame of reference where the total momentum is zero. Nowadays when people say "mass" they usually mean the invariant mass, because it makes more sense to call the relativistic mass "total energy" instead.
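A quick numerical sketch of the point above, in units where c = 1 (the velocities and boost used are purely illustrative): the "relativistic mass" (total energy) changes from frame to frame, but the invariant mass sqrt(E^2 - p^2) does not.

```python
import math

def invariant_mass(E, p):
    """Invariant mass in units where c = 1: m = sqrt(E^2 - p^2)."""
    return math.sqrt(E * E - p * p)

def boost(E, p, beta):
    """Lorentz-boost an (E, p) pair along the momentum axis by velocity beta."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return gamma * (E - beta * p), gamma * (p - beta * E)

# A particle of rest mass 1 moving at beta = 0.6: E = gamma*m, p = gamma*m*v.
gamma = 1.0 / math.sqrt(1.0 - 0.6 ** 2)
E, p = gamma * 1.0, gamma * 1.0 * 0.6

# The total energy ("relativistic mass") differs in a boosted frame...
E2, p2 = boost(E, p, 0.3)

# ...but the invariant mass is the same in both frames.
print(invariant_mass(E, p))    # ~1.0
print(invariant_mass(E2, p2))  # ~1.0
```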

Comment author: Good_Burning_Plastic 11 April 2017 10:39:46PM *  0 points [-]

I don't like this solution.

But it's the standard way the luminosity distance is defined.

There is nowhere the speed of light to be seen there.

Units with c = 1 are used in the formulas.

OTOH, the "curvature of space" they mention, is not very necessary in our flat space.

Space alone is flat (within measurement uncertainties), but space-time is curved, because space expands with time.

But the Lorentz factor would be needed here.

It's not the easiest way to treat objects moving with the Hubble flow...

Not only for the time dilatation factor, by which the energy output is to be reduced - but also for the relativistic mass increase by the same factor.

Yes, there are two (1+z) factors, one because fewer photons are emitted per unit time because "time was slower back then" (I know, not a very clear way to put it) and one because each photon is redshifted. The luminosity distance is defined with one (1+z) factor so that when you divide by its square you get (1+z)^-2.
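The two factors can be written out explicitly (using d_M for the transverse comoving distance; the notation is an assumption, following common convention):

```latex
F \;=\; \frac{L}{4\pi\, d_M^2\,(1+z)^2}
\qquad\text{so defining}\qquad
d_L \;=\; (1+z)\,d_M
\quad\Rightarrow\quad
F \;=\; \frac{L}{4\pi\, d_L^2}.
```

One factor of (1+z) comes from the reduced photon arrival rate and one from the redshifted photon energies; folding a single (1+z) into the definition of d_L restores the inverse-square form.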

And for the length contraction as well!

No, because we're talking about the total luminosity of the galaxy -- if its length is contracted and its luminosity density is increased by the same factor, nothing changes.

That's the real problem, I think.

What do you mean? It's not like this is an open question in cosmology. The implications of the FLRW metric have been well known for decades.

Comment author: Thomas 12 April 2017 05:43:48AM 0 points [-]

Me: I don't like this solution.

GBP: But it's the standard way the luminosity distance is defined.

Still don't like it.

Me: There is nowhere the speed of light to be seen there.

GBP: Units with c = 1 are used in the formulas.

c = 1, but v isn't. Therefore the gamma factor is NOT a single exponential.

Me: OTOH, the "curvature of space" they mention, is not very necessary in our flat space.

GBP: Space alone is flat (within measurement uncertainties), but space-time is curved, because space expands with time.

At any moment, space has some size and a galaxy has its apparent speed, so its mass, volume and so on are well defined functions of that speed: Lorentz transformations of dimensions like length, clock speed and mass.

Me: But the Lorentz factor would be needed here.

GBP: It's not the easiest way to treat objects moving with the Hubble flow...

I don't care if it is easy or not. I just want to know how it is.

Me: Not only for the time dilatation factor, by which the energy output is to be reduced - but also for the relativistic mass increase by the same factor.

GBP: Yes, there are two (1+z) factors, one because fewer photons are emitted per unit time because "time was slower back then" (I know, not a very clear way to put it) and one because each photon is blueshifted. The luminosity distance is defined with one (1+z) factor so that when you divide by its square you get (1+z)^-2.

A photon is usually redshifted. Some additional redshift should occur due to the mass increase, and then some additional redshift due to the increased density, which is caused by the famous length contraction.

Me: And for the length contraction as well!

GBP: No, because we're talking about the total luminosity of the galaxy -- if its length is contracted and its luminosity density is increased by the same factor, nothing changes.

This is not true. The whole amount of emitted radiation goes down, because the escape velocity goes up. And it is more redshifted again.

Me: That's the real problem, I think.

GBP: What do you mean? It's not like this is an open question in cosmology. The implications of the FLRW metric have been well known for decades.

I am not sure how well known they are, or for how long they have been known. I just ask a question: do we see any relativistic effects on (far away) galaxies? If we do, fine. If we do not, also fine.

Comment author: Good_Burning_Plastic 14 April 2017 04:37:58PM 0 points [-]

A photon is usually redshifted.

Yes. Thanks. Fixed.

Comment author: simon 11 April 2017 03:54:16PM *  0 points [-]

It's all baked into z.

z is defined based on frequency change but the frequency change must also be the amount it appears to be slowed down, since e.g. you could measure the number of peaks in a light wave coming from a galaxy as a measure of time.

For the benefit of others: in making this post Thomas was, I expect, motivated by my responses to his post here:

https://protokol2020.wordpress.com/2013/09/06/embarrassing-images/#comments

In an edit to my last comment, Thomas wrote:

you must get an apparent slowdown proportional to the ratio of frequencies, because the frequency is itself a measurement of time.

That is NOT true AT ALL! Time dilatation IS NOT linear. NOT AT ALL.

And what about blue shift galaxies? Do you think they speed up their internal clock?

I will not approve such comments anymore. You have to know the basic stuff.

In reply about the blue shift galaxies: they will indeed appear to be sped up from our perspective. Something moving toward us is slowed down in our reference frame by time dilation, but also appears sped up because light takes less and less time to get here. As with redshift, both of these effects are baked into z, so the final (apparent) speedup is what you get from z.
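A small numerical sketch of the blueshift case (the speed is illustrative): for an approaching source, time dilation alone would slow the clock by gamma, but the shrinking light-travel time more than compensates, so the apparent rate 1/(1+z) exceeds 1.

```python
import math

def doppler_one_plus_z(beta):
    """1+z for a source receding at speed beta (beta < 0 means approaching), c = 1."""
    return math.sqrt((1.0 + beta) / (1.0 - beta))

beta = -0.5  # approaching at half the speed of light
one_plus_z = doppler_one_plus_z(beta)  # < 1: blueshift
apparent_rate = 1.0 / one_plus_z       # > 1: the clock appears sped up

gamma = 1.0 / math.sqrt(1.0 - beta * beta)
# Time dilation by itself would give an apparent rate of only 1/gamma < 1.
print(one_plus_z, apparent_rate, 1.0 / gamma)
```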

Comment author: Thomas 11 April 2017 04:14:16PM *  0 points [-]

Yes. You were my inspiration for this problem.

How exactly is everything baked into z?

The mass is increased and volume is decreased by factor gamma.

So the density is increased by gamma squared.

Do we really see that?

Comment author: simon 11 April 2017 04:23:45PM *  0 points [-]

You can use a light wave as a clock. The ratio of frequency that the light wave is emitted at to the frequency we perceive is 1+z. Thus, the ratio of the time we observe a galaxy for to the amount of time that elapsed in the galaxy's proper time is also 1+z.

For non-relativistic motion, the speed is approximately proportional to the redshift, but as speeds get higher, that breaks down. Apparent slowdown in terms of v will involve a Lorentz factor, but in terms of z it will not, because of the definition of z being in terms of apparent slowdown (of light).
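A quick sketch of that breakdown, treating the redshift as a pure relativistic Doppler shift (sample speeds are illustrative): z/beta is close to 1 for small beta and grows without bound as beta approaches 1.

```python
import math

def redshift(beta):
    """Relativistic Doppler redshift z for recession speed beta (c = 1)."""
    return math.sqrt((1.0 + beta) / (1.0 - beta)) - 1.0

# z ~ beta for small speeds; the proportionality breaks down at high speeds.
for beta in (0.01, 0.1, 0.5, 0.9):
    z = redshift(beta)
    print(f"beta={beta:4}  z={z:.4f}  z/beta={z / beta:.3f}")
```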

Comment author: Thomas 11 April 2017 04:33:47PM *  0 points [-]

You can use a light wave as a clock.

Not directly. You have to square the velocity of a galaxy, then you must divide it by c (light speed). Then you must divide it by c once again. Then you have to subtract 1, change the sign, compute the square root.

Regardless of the direction of the galaxy in question.

You are very wrong here, I am sorry.

But that's beside the point. We want the right solution and that solution should be in a good agreement with all those Hubble pictures.
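The step-by-step recipe in the comment above amounts to computing sqrt(1 - v^2/c^2), which is the reciprocal of the Lorentz factor; a minimal sketch (the sample velocity is illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma_from_recipe(v):
    """Lorentz factor built step by step as described: square v, divide by c
    twice, subtract from 1 (sign change included), take the square root.
    That yields 1/gamma; the reciprocal is gamma."""
    x = v * v        # square the velocity
    x = x / C        # divide by c...
    x = x / C        # ...and by c once again
    x = 1.0 - x      # subtract 1 and change the sign
    inv_gamma = math.sqrt(x)
    return 1.0 / inv_gamma

print(gamma_from_recipe(0.6 * C))  # 1.25 for v = 0.6c
```

As noted, only the magnitude of the velocity enters, regardless of the galaxy's direction.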

Comment author: simon 11 April 2017 04:43:17PM *  1 point [-]

Let me try to explain more clearly.

Imagine there are aliens in the high-z galaxy. They produce a laser beam with a particular frequency, pointed at us. There are also other events occurring in their galaxy, for simplicity at the beam source. The aliens measure the time between two events as a particular number of cycles of the laser light.

Now when we observe the laser light and the events, we must also measure the time between the events as the same number of cycles of the laser light apart. But, we see the cycles at a lower frequency by a factor of 1+z, so we also see the two events an increased time apart by a factor of 1+z.

Now, it seems to me that maybe the issue is disagreement on what exactly we are measuring. What I am talking about is what we see when we look through a telescope. But it seems to me that maybe what you are talking about is what is "really" there in "our" reference frame. Unfortunately that latter thing is ambiguous since you can extend our reference frame to the other galaxy in different ways.

It's true that you can view far away galaxies as actually moving away from us rather than stationary in an expanding universe - both are valid ways of looking at reality in general relativity. But, there's a reason most astronomers use metrics in which galaxies are (almost) stationary: it's much simpler and less confusing.

Comment author: Thomas 11 April 2017 04:47:32PM 0 points [-]

Do you think the Lorentz factor is somehow present in z, or not?


Comment author: simon 11 April 2017 05:04:25PM *  0 points [-]

z represents frequency differences which is the same as apparent slowdown (or speedup for blueshift). Note, this is apparent slowdown in the sense of what we see through a telescope, not how much it is "really" slowed down in "our" reference frame.

Now, when we imagine extending our reference frame to that other galaxy in a particular way, such that in that extension of our reference frame the slowdown is caused by motion rather than by universe expansion, then we use the relativistic doppler shift formula to get a speed. That formula involves a Lorentz factor (or rather a sqrt((1+v/c)/(1-v/c)) factor).

Edit: for clarity, I think the relativistic doppler formula is better represented as (1+v/c)/sqrt(1-(v/c)^2). This makes it clearer that it is a Lorentz factor (the denominator) representing the relativistic time dilation, in combination with the numerator, which represents the non-relativistic doppler effect (due to the time it takes light to get here increasing as the thing moves farther away).

Another later edit: We actually don't want to just use a doppler formula, at least if in the standard picture the expansion rate of the universe is changing. That's because the expansion rate changes via a gravitational effect that would also be expected to have a gravitational doppler effect. So in a no-expansion picture we want a combination of doppler effect and gravitational redshift (at least for a changing expansion rate), just nothing from stretching of space.

Comment author: entirelyuseless 14 April 2017 03:21:54PM 1 point [-]

This is a response to this comment.

Can you clarify what you mean by phenomenological and existentialist stances, and what you mean by saying that there is no true ontology? I agree that we could use somewhat different models of the world. For example, we don't have to divide between dogs and wolves, but could just call them one common name. I don't see what difference this makes. Dogs and wolves still exist in the world and would be potentially distinguishable in the way that we do, even if we did not distinguish them, and likewise the common thing would still exist even if we did not explicitly think of it.

Many opinions that are not normally counted as moral realism are in fact forms of moral realism, if moral realism is understood to mean "moral statements make claims about the facts in the world, and the ones that people accept normally make true claims." For example, if someone says that saying that it is good to do something means that he wants to do it, and saying that something is bad means that he doesn't want to do it or want other people to do it, then when he says, "murder is bad," he is making a true claim about the world, namely that he does not want to murder and does not want other people to murder. Likewise, Eliezer's theory is morally realist in this sense. However, there are other opinions which say that moral statements are either meaningless or false, like error theory, which would say that they are false. It was my impression that you were denying moral realism in this stronger sense.

I think that moral realism is true and in a stronger sense than in Eliezer's theory, but the facts a statement would depend on in order to be true in my theory are very much like the facts that make such statements true according to him.

Pointing to some aspects where my theory is different from his:

  • in my theory, the universe and life are good in themselves, not indifferent.
  • "good" is thought of as the cause of desire, not as the output of a function. This of course is a common sense way of thinking about good, but it seems backwards to many people after thinking about it. But it is exactly right: for example, the fact that food is good for us is the cause, over geological time, of the fact that we desire it. Likewise if you are standing in front of an ice cream shop and see the ice cream, it is physically the light coming from the ice cream which begins the chain of physical causes that end in you desiring it.
  • these things imply that although good is relative in the sense that what is good for me is different from what is good for you, and what is good for humans is different from what is good for e.g. babyeaters, all of those things fall under the concept of good, even as applied by me. I do not say, "This is babyeaterish for babyeaters," like Eliezer; I say, "this is good for babyeaters, although not for us." That implies e.g. that I do not want to impose human values on babyeaters, and I think that would be an evil thing.
  • human life has an objective purpose. Eliezer's theory sort of has this implication but not in a robust sense, since he thinks it only has that purpose from a human point of view, and babyeaters would not accept it. I think that informed babyeaters would accept my moral theory, and therefore they would agree with us about the purpose of human life.
Comment author: gworley 14 April 2017 08:39:00PM 0 points [-]

By the phenomenological stance I mean that I believe the world is only known through experience. This reduces down in terms of physics to something like "all information is generated by observation" where "observation" is the technical term used to mean the sort of physical measurement we encounter in quantum physics where entropy is generated. If there is anything more going on that's fine, but we still won't know about it except through the standard process by which classical information is generated.

By the existential stance I mean simply that I believe the world exists first. This seems sort of obvious, but the alternative is essentialism, which assumes there is some structure to the world that determines its existence. The question is which comes first, ontology or metaphysics. Existentialism says ontology comes first, and through ontology we can discover metaphysics. Essentialism says the opposite, that metaphysics reveals ontology (naturally for this reason metaphysics and ontology are often not clearly distinct in essentialist perspectives).

I think it's worth noting that both these perspectives are often only nominally or shallowly respected. I think a lot of this is because the phenomenological stance implies that we only have an inside view, and any "outside" view of the world we obtain is necessarily an inference from our inside view of the world. But it's quite easy to accidentally conclude the outside view we've inferred is timeless (this is, after all, seen by many as the entire point of philosophy: to discover timeless truths), so there is a risk of short circuiting both phenomenology and existentialism to produce ontological realism and essentialism, respectively.

I believe the combination of these two is necessary. Accepting the phenomenological stance we are forced either into Husserl's idealism and transcendental phenomenology or realism. Since idealism makes untestable claims, even if it is true I can't really say much about it, so I must take the realist stance. And based on my knowledge of the world, I'm forced into existentialism because I can find no strong evidence that there is a structure preceding existence and there seems no evidence suggesting a real world does not exist (solipsism). Existentialism is basically what's left after eliminating the possibilities that don't fit the evidence.

To summarize, I take the view that things exist prior to knowing about them, and the only way we know about them is through experience.

The consequence of both on my epistemology is that I have no conception of "truth" as the word is normally used. The only recovery for "truth" is something like correspondence theory but through the lens of phenomenology, so I can at most say I have knowledge that leads me to believe a statement has some likelihood of corresponding with reality but only insofar as I can observe the correspondence through experience. We cannot even talk about the "true" probability that a statement corresponds to reality, since doing so introduces a side channel for gaining information that is not through experience.

So where this leaves me with morality is that I must naturally reject moral realism in the sense that there are no true statements, let alone true moral statements. I further don't find notions of "good" and "bad" meaningful because linguistically they imply a moving of meaning from strictly residing in the ontology to being part of the metaphysics, thus they make poor choices for technical terms for me because of their connotations.

What I can say is that there are intersubjective beliefs about reality and those inform our preferences and it is our collective willingness to hold certain preferences and categorize those preferences under labels like "good" or "bad" that creates "morality", but this morality is strictly speaking only resident in ontology and seems to imply little about metaphysics.

I'm not exactly sure what to call this metaethical stance. It's not quite moral nihilism or non-cognitivism because I'm not wholly rejecting the notion that we might come to agreement on particular preference norms, but it also seems not moral realism or cognitivism because the place where there is agreement to come to resides in experience, and thus ontology only, not the external reality of metaphysics that exists outside experience.

Perhaps this should be classified as moral realism? Although doing so to me seems to lump it closer to theories it is more dissimilar from, whereas it is fairly close, especially in its application, to moral nihilism, except that it is grounded in the intersubjective rather than simply not in the objective.

Comment author: entirelyuseless 16 April 2017 04:21:46PM *  0 points [-]

I agree with the summary statement that "things exist prior to knowing about them, and the only way we know about them is through experience," but I probably understand that differently from the way that you do.

I agree that we only know through experience, but your reference to how this cashes out in physical terms suggests that we might mean something different by knowing through experience. That is, I do not disagree that in fact this is how it cashed out. But the fact that it does is a fact that we learned by experience, and from the point of view that we had before those experiences, it could have cashed out quite differently. From the point of view after learning about this, it is easy to suppose that "it had to be something like this," but in fact we have no way to exclude scenarios where things would have been radically different. I don't know for a fact that this means that our positions differ: but I would not have mentioned such a physical account, and it seems to me that bringing that physical account into an account of epistemology will tend to lead people astray, i.e. to judging not from their experience but from other things instead; or, to put this another way, to misusing experience, since in one sense it is impossible to judge from anything except experience, even by mistake, because you have no access to anything but experience.

It is obvious that the world exists before we know it. But here I more clearly disagree with you because it seems to me that what you are calling essentialism is simply straightforwardly true. I don't see the connection with existing before we know it, though, because essentialism does not mean (even as you have described it) that we know the world before it exists, but that the world has a structure that precedes existence, not in time, but logically. You say that you do not accept such a structure because there is no strong evidence for this.

I think there is conclusive proof of such a structure: some things are not other things (take any example you like: my desk is not my chair), and this is a fact that does not in any way depend on me. If it depended on me, then we could say this is my way of knowing, not the structure of the world, and essentialism might turn out to be false. But as it is, it does not depend on me, and this proves that the world has a structure on which its existence logically depends.

In fact, overall you seem to me to be asserting a position like that of Parmenides: being exists, and all apparent distinction is illusion. I very much doubt you will agree that you are in agreement with him, but I don't know how to otherwise understand what you have said above.

I agree about realism, but I pretty much fully disagree with what you conclude about truth: that is, I accept the prima facie argument that it is absurd to say that there are no true statements, because "there are no true statements" would in this way be a true statement. I also would be surprised if you can find any reasonable number of academic philosophers who would call a position "realist" if it says that there are no true statements. But also your argument does not seem even intended to establish this: it seems at most to establish that we cannot have ultimate and conclusive knowledge, once and for all, that a particular statement is true. I agree, but that is hardly the same as showing that there are not in fact true statements.

As I just said, I agree about not having any ultimate knowledge, and this points to a partial agreement about truth as well: I think the concept of truth, and all human concepts, are intrinsically imperfect ways of knowing the world. Supporting this, I think that e.g. the liar paradox, or even the paradox of the heap, do not and cannot have valid and satisfying solutions. All solutions are artificial, and this is because the paradoxes in fact follow logically from our idea of truth, which only imperfectly points to something in the world.

I don't think it would be fair to say that your ethical view is non-realist because of your idea about truth except to the degree that someone concludes that your view of the world is non-realist as a whole (and your view does seem suggestive of this but you at least denied it). The question would be about how ethical statements compare to other statements: if there are no true statements about morals in exactly the same way that there are no true statements about cows and trees, then it would not be fair to count that as morally non-realist.

I can't draw a conclusion about this myself from what you have said: perhaps you could compare yourself e.g. statements about ethics and statements about money, which are clearly intersubjective. I find it hard to imagine someone who is really and truly non-realist about money: that is, who believes that when he says, "I have 50 dollars in my wallet," the statement is strictly speaking false, because he actually has just a few pieces of paper in his wallet, and much less than 50. But perhaps this is no different from the fact that it is hard to accept that people who claim to be moral non-realists, actually are so.

Comment author: gworley 17 April 2017 07:33:19PM 0 points [-]

Hmm, so some of this sounds like I may misunderstand the terminology of academic philosophy. I'm trying to learn it, but I generally lack a lot of context for how the terminology is used so I largely have to go with what I find to be the definitions suggested by summary articles as I find I want to talk about some subject. In many cases I feel like the terminology is accidentally ignoring parts of theory space I'd like to point to, though I'm not sure if that's because I'm confused or academic philosophy is confused. Yet it seems to be the primary shared language I have available for talking about these subjects other than going "full-Heidegger" and being deliberately subtle to hide my meaning from all who would not bother to do the work to think my thoughts.

On some particular points:

I agree that we only know through experience, but your reference to how this cashes out in physical terms suggests that we might mean something different by knowing through experience. That is, I do not disagree that in fact this is how it cashed out. But the fact that it does, is a fact that we learned by experience, and from the point of view that we had before those experiences, it could have cashed out quite differently.

Sure, I only included the physical explanation because I wanted to be clear that I'm talking about a fundamental kind of thing here by "experience" and not, say, the common use of the word "experience". Unfortunately existing phenomenology lacks, from what I can tell, a rigorous way of talking about experience as generic information transfer.

I think there is conclusive proof of such a structure: some things are not other things (take any example you like: my desk is not my chair), and this is a fact that does not in any way depend on me. If it depended on me, then we could say this is my way of knowing, not the structure of the world, and essentialism might turn out to be false. But as it is, it does not depend on me, and this proves that the world has a structure on which its existence logically depends.

This is one such case where maybe the terminology fails me. Perhaps the existentialist/essentialist divide is not the one I mean. I want to separate those theories that conflate ontology, especially teleological aspects of ontology, with metaphysics from those that view them as separate. Once we have them separate, then we seem to be able to talk about idealism and realism from a perspective of structure creates reality or reality creates structure (i.e. ontology determines metaphysics or metaphysics determines ontology). It is this latter case I mean to be in: ontology (which is necessarily discovered only through experience) is the lens through which we can try to discover metaphysics, but metaphysics is ultimately about the stuff that exists prior to the understanding of its structure, and there is literally nothing you can say about reality except through the lens of ontology, because you have no other way to know the world and make sense of the experience of it.

In fact, overall you seem to me to be asserting a position like that of Parmenides

I'd say Parmenides has the same flavor as me, although I'd have to do some heavy interpretation to make what evidence we have of his position fit mine.

I can't draw a conclusion about this myself from what you have said: perhaps you could compare yourself e.g. statements about ethics and statements about money, which are clearly intersubjective. I find it hard to imagine someone who is really and truly non-realist about money: that is, who believes that when he says, "I have 50 dollars in my wallet," the statement is strictly speaking false, because he actually has just a few pieces of paper in his wallet, and much less than 50. But perhaps this is no different from the fact that it is hard to accept that people who claim to be moral non-realists, actually are so.

I'd say there's nothing so special about talking about ethics versus money other than they have differences in meaning and purpose for us, i.e. teleological differences. There is a useful sense in which I can say "I have 50 dollars in my wallet" or "murder is bad" but this is also all understood through multiple layers of structure heaped on top of reality that, without interpretation via experience, would have no meaning. Perhaps "truth" has a broader meaning than I think in academic philosophy, but it seems to me if we're talking about ways of experiencing the experience of reality then we've left the realm of what most people seem to mean by the word "truth". But perhaps this is a definitional dispute?

Comment author: entirelyuseless 21 April 2017 03:20:10PM 1 point [-]

I think I understand your position a little better now. I still think it is at least expressed in a way which is more skeptical than necessary.

I want to separate those theories that conflate ontology, especially teleological aspects of ontology, with metaphysics from those that view them as separate.

In my theory, the teleological aspects of things are pretty directly derived from metaphysics. Galileo somewhere says that inertia is the "laziness" of a body, or in other words the answer to "Why does this continue to move?" is "Because it continues to remain what it is." Once you have this sort of thing, it is easy enough to see why you get the origin of life, which seems to have purpose, and then the evolution of complex life, which seems to have complex purposes. In this way, ultimately all questions of final cause, "for what purpose," reduce to this answer: because things tend to remain what they are. Now maybe we can't explain the metaphysics behind things remaining what they are, but it is surely something metaphysical.

metaphysics is ultimately about the stuff that exists prior to the understanding of its structure, and that there is literally nothing you can say about reality except through the lens of ontology because you have no other way to know the world and make sense of the experience of it.

I think I mostly agree with that, actually, but I don't think we should conclude that there aren't true statements. I'll say more about this in the context of money vs ethics below.

There is a useful sense in which I can say "I have 50 dollars in my wallet" or "murder is bad" but this is also all understood through multiple layers of structure heaped on top of reality that, without interpretation via experience, would have no meaning. Perhaps "truth" has a broader meaning than I think in academic philosophy, but it seems to me if we're talking about ways of experiencing the experience of reality then we've left the realm of what most people seem to mean by the word "truth". But perhaps this is a definitional dispute?

Dan Dennett is always arguing against "essentialism," and I find myself agreeing mostly with his arguments while disagreeing with the anti-essentialist conclusion. Basically his main point, in almost every case, is that things have vague boundaries, not permanent, once-and-for-all black-and-white boundaries. He takes this as an argument against essentialism because he takes essentialism to mean a description of the world where you reduce everything to a complex of "A, B, C, etc." and A is there or not, B is there or not, C is there or not. Everything is black or white. I agree that the world is not like that, but I disagree with his conclusion about how it is, or rather it seems that he has no alternative -- "the world is not like that," but he cannot say in any sense how it is instead.

I agree that boundaries are vague; in fact, I would assert that all verbal boundaries are vague, including the boundaries of words that we use to define mathematical and logical ideas. If this is so, it follows that these kinds of vague boundaries will come up in everything we talk about, not only in things like whether a person is "tall" or "short." For example, we may or may not be able to find something which is "kinda sorta" a carbon atom, rather than definitely being one or definitely not being one. But even if we can't, this is like the fact that we don't find all of the evolutionary intermediate forms between living things: the fact that we don't find them in practice does not mean they are impossible. Or at any rate, if there are some boundaries that cannot be vague, we have no way of proving that they cannot be, but we can simply say, "We haven't found any examples yet where such and such a boundary is vague."

I'm discussing this in relation to the question, "perhaps this is a definitional dispute?" I don't think there is or can be a rigid line between definitional disputes and disputes about the world. In some cases, we can clearly say that people are arguing about words. In other cases, we can clearly say they are arguing about facts. But this is no different from the fact that we can say that some particular person is definitely bald and some other is not: the boundary between being bald and not being bald remains a vague one, and likewise the boundary between arguing about facts and arguing about words is a vague one.

And unfortunately your question may be very near that boundary. Looking at this verbally, I would say that "it is useful to say this," and "it is true to say this," are very close, although not identical. We could put it this way: a statement is true if it is useful because it points at reality. This is to exclude, of course, the usefulness of lying and self deceiving. These things may be useful, but they get their utility from pointing away from reality. If a statement is useful because it points at reality, I would say that to that extent it is true (to that extent, because it might also have some falsehood insofar as it might have some disutility in addition to its utility.)

The statement about money (and about ethics), in my opinion, is useful because it points at reality. Your argument is that it points more directly to our interpretations of reality. Fine: but those interpretations themselves point at reality as well. It isn't easy to see how you could redescribe this as those interpretations pointing away from reality, which is what would be needed to say that the statement is false.

Comment author: gworley 23 April 2017 07:53:09PM 0 points [-]

And unfortunately your question may be very near that boundary. Looking at this verbally, I would say that "it is useful to say this," and "it is true to say this," are very close, although not identical. We could put it this way: a statement is true if it is useful because it points at reality. This is to exclude, of course, the usefulness of lying and self deceiving. These things may be useful, but they get their utility from pointing away from reality. If a statement is useful because it points at reality, I would say that to that extent it is true (to that extent, because it might also have some falsehood insofar as it might have some disutility in addition to its utility.)

This, I think, gets at why I don't want to acknowledge "true" and "false", because it seems to me the only way to salvage those terms is to make them teleological to the purpose of likelihood of matching experiences of reality. I guess this is fine but it's not really what most people mean when they say "true" and "false" as far as I can tell, so it seems better to reject the notions of "true" and "false" to avoid confusion about what we're discussing.

Comment author: entirelyuseless 23 April 2017 10:08:04PM *  0 points [-]

This, I think, gets at why I don't want to acknowledge "true" and "false", because it seems to me the only way to salvage those terms is to make them teleological to the purpose of likelihood of matching experiences of reality.

This is at least very close to what I meant. Consider this situation: you are walking along, and you see a man in the distance. "That looks like a pretty tall fellow," you say. When he approaches you, you can see how tall he is. Was your statement true or false? It is obvious that "pretty tall fellow" does not name a specific height or even give a minimum. So what determines whether your statement was true or not? You will almost certainly say that you were right if you do not find yourself surprised by his height compared to what you expected, or if you find him surprisingly tall, and similarly you will say that you were wrong if you find him surprisingly short compared to what you expected.

I guess this is fine but it's not really what most people mean when they say "true" and "false" as far as I can tell

But what do you think people really mean instead? I think pretty much everyone would agree with the above example: you are mistaken if you are surprised in the wrong direction, and you are right if you are not surprised, or if you are surprised in the right direction.

I suppose theoretically someone could say that truth and falsity mean that there is a bit somewhere in his metaphysical structure which has the value of 0 or 1, in such a way that "he is tall" is true if the bit is set to 1, and false if the bit is set to 0. But it seems obvious that this is not what people would normally mean at least when talking about this situation, even if they might sometimes say abstract things that sound sort of like this. And people will sometimes explicitly assert that there is something like such a bit in a particular case, e.g. whether or not something is human. This assertion is almost certainly false, but it is not some special kind of falsity about the existence of truth and falsity; they are simply mistakenly asserting the existence of such a bit in roughly the same way someone is mistaken if the person called tall turns out to be 4'11".

So I don't see how people mean something different from this by truth and falsity, or at least significantly different.

so it seems better to reject the notions of "true" and "false" to avoid confusion about what we're discussing.

I think that doxastic voluntarism is true in general, but even if it is not, one aspect of it certainly is: we can use words to mean what we choose to use them to mean. And insofar as this is a matter of choice, practical considerations will be involved in deciding to use a word one way or another. You are pointing to this here: what benefit would we get from using "truth" in the above way, compared to using it in other ways?

I think most people will take the denial of truth to be a denial that the world is real. As I said earlier, if anything seems like a denial of realism, the denial of truth does. And most people, coming to the conclusion that there is no truth, will conclude that they should not bother to spend much time thinking about things. Obviously you haven't drawn that conclusion or you wouldn't be spending time on Less Wrong, but I think most people would draw that conclusion. So for someone who thinks that thinking is valuable, rejecting truth does not seem helpful.

In terms of avoiding confusion, you may be seeking an unattainable goal. The ability to understand is in a way limited, but also in a way not. As I said in another comment recently, we can think about anything; if not, just think about "what you can't think about." But this means we will always be confused when we attempt to think about the things on the boundaries of our understanding. Your visual field is limited, but you cannot see the edges of it, because if you could, they would not be the edges. In a similar way, your understanding is bounded, but you cannot directly understand the boundaries, because if you could, they would not be the boundaries. That implies there will always be an "edge of understanding" where you are going to be confused.

Comment author: gworley 24 April 2017 03:03:24AM 0 points [-]

So I don't see how people mean something different from this by truth and falsity, or at least significantly different.

Right, I don't expect my position to make much of a difference to most people most of the time. Perhaps this is a matter of how I perceive the context of my readers, but I generally expect them to be more likely to mistake, even accidentally, what I might call "true" and "false" for the "hard essentialist" version of truth (there are truth bits in the universe) when discussing sufficiently abstract topics.

what benefit would we get from using "truth" in the above way, compared to using it in other ways?

It seems mostly to matter when I want to give a precise accounting of my thoughts (or more precisely my experience of my thoughts).

I think most people will take the denial of truth to be a denial that the world is real. As I said earlier, if anything seems like a denial of realism, the denial of truth does. And most people, coming to the conclusion that there is no truth, will conclude that they should not bother to spend much time thinking about things. Obviously you haven't drawn that conclusion or you wouldn't be spending time on Less Wrong, but I think most people would draw that conclusion. So for someone who thinks that thinking is valuable, rejecting truth does not seem helpful.

This gets at why I feel "in-between" in many ways: rejecting truth the way nihilists and solipsists do is not where I mean to end up, but not rejecting truth in at least some form seems to me to deny the skepticism I think we must take given the intentional appearance of experience. Building from "no truth" to "some kind of truth" seems a better approach to me than backing down from "yes truth".

This may be because I find myself in a society where idealism and dualism are common and rationalists and other folks who favor realism often express it in terms of strict materialism that often denies phenomenological intentionality (even if unintentionally). Maybe I am too far removed from general society these days, but I feel it more important to accentuate intentionality over the strict materialism I perceive my target readers are likely to hold if they don't already get what I'm pointing at. You seem to be evidence, though, that this is a misunderstanding, although I suspect you are an outlier given how much we agree.

That implies there will always be an "edge of understanding" where you are going to be confused.

Agreed. I expect us all to remain confused in a technical sense of having beliefs that do not fully predict reality. But I also believe it virtuous to minimize that confusion where possible and practical.

Comment author: moridinamael 13 April 2017 06:06:07PM *  1 point [-]

Sometimes we talk about unnecessarily complex potential karma/upvote systems, so I thought I would throw out an idea along those lines:

Every time you post, you're prompted to predict the upvote/downvote ratio of your post.

Instead of being scored on raw upvotes, you're scored on something more like how accurately you predicted the future upvote/downvote ratio.

So if you write a good post that you expect to be upvoted, then you predict a high upvote/downvote ratio, and if you're well calibrated to your audience, then you actually achieve the ratio you predicted, and you're rewarded "extra" by the system.

And here's the cool part. If you write a lazy low-effort post, or if you're trolling, or you write any kind of post that you expect to be poorly received, then you have two options. You can either lie about the expected upvote/downvote ratio, input a high expected ratio, and then the system penalizes you even more when you turn out to get a low u/d ratio, and considers you to be a poorly calibrated poster. Or you can be honest about the u/d ratio you expect, in which case the system can just preemptively tell you not to bother posting stuff like that, or hide it, or penalize it in some other way.

Overall you end up with a system that rewards users who (1) are well-calibrated regarding the quality of their posts and (2) refrain from posting content they know to be bad by explicitly making them admit that it's bad before they post it and also maybe hiding the content.
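The scoring rule described above could be made concrete with something like a Brier-style penalty on the predicted upvote ratio. This is only a toy illustration of the idea, not part of the original proposal; the function name and the squared-error penalty are my own assumptions:

```python
def calibration_score(predicted_ratio, upvotes, downvotes):
    """Brier-style penalty: 0 is a perfect prediction, 1 is the worst possible.

    predicted_ratio is the poster's claimed upvote fraction, in [0, 1]."""
    total = upvotes + downvotes
    if total == 0:
        return None  # no votes yet, nothing to score against
    observed_ratio = upvotes / total
    return (predicted_ratio - observed_ratio) ** 2

# A well-calibrated poster predicting 0.9 on a post that lands 9 up / 1 down:
print(calibration_score(0.9, 9, 1))   # ~0.0, rewarded by the system

# A troll claiming 0.9 on a post that lands 1 up / 9 down:
print(calibration_score(0.9, 1, 9))   # ~0.64, penalized "extra"
```

An honest troll who predicts 0.1 and gets 0.1 scores well on calibration, which is why the proposal needs the second mechanism: preemptively hiding or discouraging posts whose predicted ratio is low.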

Comment author: entirelyuseless 14 April 2017 01:53:58AM 3 points [-]

refrain from posting content they know to be bad

Knowing that your post will get a low score is not equivalent to knowing that it is bad.

Comment author: moridinamael 14 April 2017 05:50:15PM *  0 points [-]

That's true. But there are few circumstances that would warrant posting a comment that you know most people in your community will think is bad.

If you want to say something you expect to be unpopular, you can almost always phrase it in a way that contextualizes why you are saying it, and urges people to consider the extenuating context before downvoting. If you don't do this, then you're just doing exactly what you shouldn't be doing, if your goal was to make some kind of change.

edit: Another possibility would be this: instead of suppressing posts that you have predicted to be poorly received, the system simply forces you to sit on them for an hour or so before posting. This should reduce the odds that you are writing something in the heat of the moment and increase the relative odds that your probably-controversial post is actually valuable.

Comment author: gilch 14 April 2017 03:52:28AM 2 points [-]

I get the feeling that LW has a lot of lurkers with interesting things to say, but who are too afraid to say them. They may eventually build up the courage they need to contribute to the community, but this system would scare them off. They don't yet have enough data to predict how well their posts would be received. We need to be doing the opposite and remove some of the barriers to joining in.

On the other hand, trolls don't care that much about karma. They'll just exploit sock puppets.

Comment author: moridinamael 14 April 2017 05:51:52PM 2 points [-]

Yeah, LW would probably not be the place to try this. I would guess that most potential karma systems only truly function correctly with a sufficient population of users, a sufficient number of people reading each post. LW has atrophied too much for this.

Comment author: ProofOfLogic 14 April 2017 10:24:47PM 0 points [-]

I really like the idea, but agree that it is sadly not the right thing here. It would be a fun addition to an Arbital-like site.

Comment author: tristanm 14 April 2017 04:40:08PM 0 points [-]

The thing is that without downvotes, there aren't actually that many barriers to joining in. If someone has a problem with something you say, they have to actually say so, instead of just downvoting, which is what often happens on Reddit. And I think this is better because it forces negative reward to be associated with feedback, so that people who either have misunderstandings or are poor articulators of their views can get better over time. The worst thing is getting downvoted without knowing why. I don't know if this has been tried anywhere, but maybe a system where every vote would necessitate a comment would work better, so that it would be well understood why the community received a remark the way it did.

Comment author: MrMind 11 April 2017 06:59:31AM 1 point [-]

Sorry for the delay in the creation of this open thread. Yesterday I didn't even check, usually someone steps up to the task. Anyway, it's here.

Comment author: Lumifer 17 April 2017 07:46:27PM 0 points [-]

So...

Google News, US edition, front page, science section:

Russia's Fedor robot has learned to shoot guns with impressive precision. How do companies like Google, groups and individuals try to stop killer robots from taking over the world?

...are you happy now?

Comment author: morganism 16 April 2017 08:56:27PM 0 points [-]

kickstarting as a funding method of scientific research.

"In Bollen’s system, scientists no longer have to apply; instead, they all receive an equal share of the funding budget annually—some €30,000 in the Netherlands, and $100,000 in the United States—but they have to donate a fixed percentage to other scientists whose work they respect and find important. “Our system is not based on committees’ judgments, but on the wisdom of the crowd,”

Bollen and his colleagues have tested their idea in computer simulations. If scientists allocated 50% of their money to colleagues they cite in their papers, research funds would roughly be distributed the way funding agencies currently do, they showed in a paper last year—but at much lower overhead costs."

http://www.sciencemag.org/news/2017/04/new-system-scientists-never-have-write-grant-application-again

and an article

http://johnhawks.net/weblog/topics/metascience/funding/bollen-grant-money-allocation-2017.html
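The allocation scheme quoted above amounts to a fixed-point computation over the donation graph. Here is a toy sketch under my own assumptions (the function name, the tiny three-scientist graph, and the iterate-to-convergence approach are all illustrative, not taken from the paper):

```python
def steady_state_funding(weights, base_grant=100_000, donate_frac=0.5, iters=100):
    """Iterate received[i] = base + donate_frac * sum_j received[j] * weights[j][i]
    toward a fixed point (a PageRank-like computation over the donation graph).

    weights[j][i] is the fraction of scientist j's donations sent to scientist i;
    each row sums to 1, and the diagonal is 0 (nobody donates to themselves)."""
    n = len(weights)
    received = [float(base_grant)] * n
    for _ in range(iters):
        received = [
            base_grant + donate_frac * sum(received[j] * weights[j][i] for j in range(n))
            for i in range(n)
        ]
    return received

# Three scientists: A splits donations between B and C; B and C both send to A.
weights = [
    [0.0, 0.5, 0.5],  # A
    [1.0, 0.0, 0.0],  # B
    [1.0, 0.0, 0.0],  # C
]
print([round(r) for r in steady_state_funding(weights)])  # ~[266667, 166667, 166667]
```

In this toy, A (the scientist everyone donates to) ends up with the largest share, which is the "citation-weighted" redistribution the proposal describes, and also hints at the collusion incentives discussed below: the weights are chosen by the scientists themselves.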

Comment author: Viliam 19 April 2017 02:39:54PM *  2 points [-]

Let's make predictions about what kind of bad incentives this will create. ;)

My guess: If scientists can choose who receives what fraction, they will donate 99% to their friends (who in return will donate 99% to them). If instead it depends on the number of cited works, or something like that, scientists will try to publish research in as many small articles as possible, hoping that multiple articles will be cited instead of one. (But they already do this, don't they?) If multiple articles from the same author count as one, scientists will trade parts of their research with other scientists, like: "I will let you publish the second half of my article, if you let me publish the second half of your article" (hoping that both parts get cited).

Comment author: madhatter 15 April 2017 10:03:14PM 0 points [-]

I have said before that I think consciousness research is not getting enough attention in EA, and I want to add another argument for this claim:

Suppose we find compelling evidence that consciousness is merely "how information feels from the inside when it is being processed in certain complex ways", as Max Tegmark claims (and Dan Dennett and others agree). Then, I argue, we should be compelled from a utilitarian perspective to create a superintelligent AI that is provably conscious, regardless of whether it is safe, and regardless of whether it kills us humans (or worse), if we know it will try to maximize the subjective happiness of itself and the subagents it creates.

The above isn't my argument (Sam Harris mentioned someone else arguing this) but I am claiming this is one reason why consciousness research is ethically important.

Comment author: simon 16 April 2017 01:11:36AM 5 points [-]

I would consider the option of creating a utility monster to be a reductio ad absurdum of utilitarianism.

Comment author: madhatter 16 April 2017 01:01:20PM 0 points [-]

Why?

Comment author: simon 16 April 2017 10:26:40PM 3 points [-]

Because it doesn't seem right to me to create something that will kill off all of humanity even if it would have higher utility.

There are (I feel confident enough to say) 7 billion plus of us actually existing people who are NOT OK with you building something to exterminate us, no matter how good it would feel about doing it.

So, you claim you want to maximize utility, even if that means building something that will kill us all. I doubt that's really what you'd want if you thought it through. Most of the rest of us don't want that. But let's imagine you really do want that. Now let's imagine you try to go ahead anyway. Then some peasants show up at your Mad Science Laboratory with torches and pitchforks demanding you stop. What are you going to say to them?

Comment author: entirelyuseless 17 April 2017 01:30:13PM 1 point [-]

Because it doesn't seem right to me to create something that will kill off all of humanity even if it would have higher utility.

This isn't really about utility monsters. The same argument will apply, equally well or equally badly, to any situation where we ask, "What do you think about replacing humanity with something better?"

Probably dinosaurs would have objected, if they could, to being replaced by humans which are presumably better than them, but it does not change the fact that the resulting situation is better. And likewise, whether or not humans object to being replaced by something better, it would still be better if it happens.

Comment author: simon 17 April 2017 04:01:08PM *  4 points [-]

"Better" from whose perspective?

If it's "This thing is so great that even all of us humans agree that it killing us off is a good thing" then fine. But if it's "Better according to an abstract concept (utility maximization) that only a minority of humans agree with, but fuck the rest of humanity, we know what's better" then that's not so good.

Sure, we're happy that the dinosaurs were killed off given that it allows us to replace them. That doesn't mean the dinosaurs should have welcomed that.

Comment author: entirelyuseless 18 April 2017 02:07:57PM *  0 points [-]

I meant better from the point of view of objective truth, but if you disagree that better in that way is meaningful, we can change it to this:

Something, let's call it X, comes into existence and replaces humanity. It is better for X to be X, than for humans to be humans.

That is a meaningful comparison in exactly the same way that it is meaningful to say that being a human being is better (for human beings of course) than being a dinosaur is (for dinosaurs of course.)

That does not mean that humans would want X to come into existence, just as dinosaurs might not have wanted to be wiped out. But from a pretty neutral point of view (if we assume being human is better for humans than being a dinosaur is for dinosaurs), there has been improvement since the dinosaurs, and there would be more if X came into existence.

Also, there's another issue. You seem to be assuming that humans have the possibility of not being replaced. That is not a real possibility. Believing that the human race is permanent is exactly the same kind of wishful thinking as believing that you have an immortal soul. You are going to die, and no part of you will outlive that; and likewise the human race will end, and will not outlive that. So the question is not whether humanity is going to be replaced. It is just whether it will be replaced by something better, or something inferior. I would rather be replaced by something better.

Comment author: Lumifer 18 April 2017 02:28:20PM 2 points [-]

it will be replaced by something better, or something inferior

Better or inferior from which point of view?

Comment author: entirelyuseless 19 April 2017 01:28:21AM 0 points [-]

Since I said I would rather be replaced by something better, I meant from my point of view. But one way or another, since we will be replaced by something different, it will be better or worse from pretty much any point of view, except the "nothing matters" point of view.

Comment author: simon 18 April 2017 03:06:54PM *  1 point [-]

Regarding your first 4 paragraphs: as it happens, I am human.

Regarding your last paragraph: yes most likely, but we can assess our options from our own point of view. Most likely our own point of view will include, as one part of what we consider, the point of view of what we are choosing to replace us. But it won't likely be the only consideration.

Comment author: entirelyuseless 19 April 2017 01:28:44AM 0 points [-]

Sure. I don't disagree with that.

Comment author: madhatter 17 April 2017 11:06:14AM 0 points [-]

Haha, yea I agree there are some practical problems.

I just think that, in the abstract, ad absurdum arguments are a logical fallacy. And of course most people on Earth (including myself) are intuitively appalled by the idea, but we really shouldn't be trusting our intuitions on something like this.

Comment author: simon 17 April 2017 04:06:30PM *  1 point [-]

If 100% of humanity are intuitively appalled with an idea, but some of them go ahead and do it anyway, that's just insanity. If the people going ahead with it think that they need to do it because that's the morally obligatory thing to do, then they're fanatic adherents of an insane moral system.

It seems to me that you think that utilitarianism is just abstractly The Right Thing to Do, independently of practical problems, any intuitions to the contrary including your own, and all that. So, why do you think that?

Comment author: Dagon 17 April 2017 06:48:16PM 0 points [-]

If 100% of humanity are intuitively appalled with an idea, but some of them go ahead and do it anyway, that's just insanity.

Really? I think almost everyone has things that are intuitively appalling, but that they do anyway. Walking by a scruffy, hungry-looking beggar? Drinking alcohol? There's something that your intuition and your actions disagree on.

Personally, I'm not a utilitarian because I don't think ANYTHING is the Right Thing to Do - it's all preferences and private esthetics. But really, if you are a moral realist, you shouldn't claim that other human's moral intuitions are binding, you should Do The Right Thing regardless of any disagreement or reprisals. (note: you're still allowed to not know the Right Thing, but even then you should have some justification other than "feels icky" for whatever you do choose to do).

Comment author: simon 17 April 2017 11:08:57PM *  1 point [-]

OK, I guess I was equivocating on intuition.

But on your second paragraph: I don't think I actually disagree with you about what actually exists.

Here are some things that I'm sure you'll agree exist (or at least can exist):

  • preferences and esthetics (as you mentioned)
  • tacitly agreed on patterns of behaviour, or overt codes, that reduce conflict
  • game theoretic strategies that encourage others to cooperate, and commitment to them either innately or by choice

Now, the term "morality", and related terms like "right" or "wrong", could be used to refer to things that don't exist, or they could be used to refer to things that do exist, like maybe some or all of the things in that list or other things that are like them and also exist.

Now, let's consider someone who thinks, "I'm intuitively appalled by this idea, as is everyone else, but I'm going to do it anyway, because that's the morally obligatory thing to do even though most people don't think so" and analyze that in terms of things that actually exist.

Some things that actually exist that would be in favour of this point of view are:

  • an aesthetic preference for a conceptually simple system combined with a willingness to bite really large bullets
  • a willingness to sacrifice oneself for the greater good
  • a willingness to sacrifice others for the greater good
  • a perhaps unconscious tendency to show loyalty for one's tribe (EA) by sticking to tribal beliefs (Utilitarianism) in the face of reasons to the contrary

Perhaps you could construct a case for that position out of these or other reasons in a way that does not add up to "fanatic adherent of insane moral system" but that's what it's looking like to me.

Comment author: g_pepper 17 April 2017 08:52:46PM 0 points [-]

But, even a moral realist should not have 100% confidence that he/she is correct with respect to what is objectively right to do. The fact that 100% of humanity is morally appalled with an action should at a minimum raise a red flag that the action may not be morally correct.

Similarly, "feeling icky" about something can be a moral intuition that is in disagreement with the course of action dictated by one's reasoned moral position. It seems to me that "feeling icky" about something is a good reason for a moral realist to reexamine the line of reasoning that led him/her to believe that course of action was morally correct in the first place.

It seems to me that it is folly for a moral realist to ignore his/her own moral intuitions or the moral intuitions of others. Moral realism is about believing that there are objective moral truths. But a person with 100% confidence that he/she knows what those truths are and is unwilling to reconsider them is not just a moral realist, he/she is also a fanatic.

Comment author: g_pepper 17 April 2017 01:22:27PM 0 points [-]

we really shouldn't be trusting our intuitions on something like this.

I don't see why not; after all, a person relies on his/her ethical intuitions when selecting a metaethical system like utilitarianism in the first place. Surely someone's ethical intuition regarding an idea like the one that you propose is at least as relevant as the ethical intuition that would lead a person to choose utilitarianism.

I just think that, in the abstract, ad absurdum arguments are a logical fallacy.

I don't see why. It appears that you and simon agree that utilitarianism leads to the idea that creating utility monsters is a good idea. But whereas you conclude from your intuition that utilitarianism is correct that we should create utility monsters, simon argues from his intuition that creating a utility monster as you describe is a bad idea to the conclusion that utilitarianism is not a good metaethical system. It would appear that simon's reasoning mirrors your own.

Like the saying goes - one person's modus ponens is another person's modus tollens.

Comment author: Viliam 19 April 2017 02:45:45PM 1 point [-]

Are we actually optimizing for "subjective happiness"? That's the wireheading scenario. I would say that wireheading humans seems better than killing humans and creating a wireheaded machine, but... both scenarios seem suboptimal.

And if you instead want to make a machine that is much better at "human values" (not just "subjective happiness") than humans... I guess the tricky part is making the machine that is good at human values.

Comment author: denimalpaca 11 April 2017 06:51:15PM 0 points [-]

Maybe this has been discussed ad absurdum, but what do people generally think about Facebook being an arbiter of truth?

Right now, Facebook does very little to identify content, only provide it. They faced criticism for allowing fake news to spread on the site, they don't push articles that have retractions, and they just now have added a "contested" flag that's less informative than Wikipedia's.

So the questions are: does Facebook have any responsibility to label/monitor content given that it can provide so much? If so, how? If not, why doesn't this great power (showing you anything you want) come with great responsibility? Finally, if you were to build a site from ground-up, how would you design around the issue of spreading false information?

Comment author: Lumifer 11 April 2017 07:00:50PM *  6 points [-]

what do people generally think about Facebook being an arbiter of truth?

It's a horrible idea.

does Facebook have any responsibility to label/monitor content

No.

great power (showing you anything you want)

You're confusing FB and Google (and a library, etc.)

how would you design around the issue of spreading false information?

I wouldn't.

I recommend acquiring some familiarity with the concept of the freedom of speech.

Comment author: denimalpaca 12 April 2017 06:19:26PM 3 points [-]

I'm actually very familiar with freedom of speech and I'm getting more familiar with your dismissive and elitist tone.

Freedom of speech applies, in the US, to the relationship between the government and the people. It doesn't apply to the relationship between Facebook and users, as exemplified by their terms of use.

I'm not confusing Facebook and Google, Facebook also has a search feature and quite a lot of content can be found within Facebook itself.

But otherwise thanks for your reply; its stunning lack of detail gave me no insight whatsoever.

Comment author: Lumifer 12 April 2017 06:42:22PM *  5 points [-]

I'm actually very familiar with freedom of speech

Freedom of speech applies, in the US, to the relationship between the government and the people

You seem to be mistaken about your familiarity with the freedom of speech. In particular, you're confusing it with the 1st Amendment to the US Constitution. That's a category error.

elitist tone

LOL. Would you assert that you represent the masses?

its stunning lack of detail gave me no insight whatsoever

A stunning example of narcissism :-P Hint: it's not all about you and your lack of insight.

Comment author: Osho 13 April 2017 04:54:14PM 1 point [-]

So are you going to actually explain why "freedom of speech" (not a negative right, but platform owners allowing users to post whatever they want) is a good thing?

Comment author: Lumifer 13 April 2017 05:22:13PM 1 point [-]

Sniff... sniff... smells like a bad-faith question. You don't imagine you're setting a trap for me or anything like that?

Comment author: MrMind 12 April 2017 07:32:27AM 1 point [-]

I don't think that freedom of speech is enforceable inside a privately-owned network.

Comment author: Lumifer 12 April 2017 03:16:48PM 4 points [-]

We're talking about "should", the normative approach. A private entity can do a lot of things -- it doesn't mean that it should do these things.

Freedom of speech is not just a legal term, it's also a very important component of a civil society.

Comment author: MrMind 13 April 2017 07:31:18AM *  1 point [-]

Still: should Lesswrong allow the discussion of any off-topic subject just because "free speech"?

Comment author: Lumifer 13 April 2017 02:55:08PM 2 points [-]

...did anyone claim anything like that?

Comment author: MrMind 14 April 2017 07:51:11AM 0 points [-]

You did, implicitly.

Comment author: Lumifer 14 April 2017 04:15:16PM 0 points [-]

I did not. You read me wrong.

Comment author: username2 12 April 2017 03:14:23PM 2 points [-]

Lumifer didn't say anything about enforceability. E.g. the Boy Scouts have the right (as a private group, if you accept that a group with the U.S. president as its figurehead is in fact private) to disallow membership based on gender, sexual orientation, or religion. That doesn't mean it is right for them to do so. One should expect that in a civilized society groups like the Boy Scouts shouldn't discriminate based on things like sexual orientation. But that doesn't necessarily imply that there should be regulatory action to enforce that.

Likewise, Facebook should be a public commons where freedom of speech is respected. But that doesn't mean I'd call for regulatory enforcement of that.

Comment author: MrMind 13 April 2017 07:38:52AM 0 points [-]

One should expect that in a civilized society groups like the Boy Scouts shouldn't discriminate based on things like sexual orientation.

Agreed in principle, but there are certain situations where the boundaries are much less clear. Should a gentlemen's club admit women? Obviously not, and it's not even discrimination.

Should LessWrong allow the discussion of theology? Obviously not, and no one should, in the normative sense, invoke freedom of speech to allow trolling.

At the same time, I can create a social network which is devoted to the dissemination of only carefully verified news, and no one should be able to invoke freedom of speech to hijack this mission.

Comment author: Lumifer 13 April 2017 02:54:26PM *  3 points [-]

Should I in Lesswrong allow the discussion of theology? Obviously not

<snort>

LW discusses theology all the time, it just uses weird terminology and likes to reinvent the wheel a lot.

The whole FAI problem is better phrased as "We will create God, how do we make sure He likes us?". The Simulation Hypothesis is straight-up creationism: we were created by some, presumably higher, beings for their purposes. Etc.

Comment author: MrMind 14 April 2017 07:50:38AM 0 points [-]

You are strawmanning both positions a lot...

Comment author: Lumifer 14 April 2017 04:14:40PM *  3 points [-]

No, I'm being quite literal here.

I see no meaningful difference between a god and a fully-realized (in the EY sense) AI. And the Simulation Hypothesis is literally creationism. Not necessarily Christian creationism (or any particular historic one), but creationism nonetheless.

Comment author: tukabel 13 April 2017 07:18:59PM 0 points [-]

Hell yeah, bro. Sufficiently advanced Superintelligence is indistinguishable from God.

Comment author: ChristianKl 13 April 2017 11:27:17AM 2 points [-]

Should LessWrong allow the discussion of theology? Obviously not, and no one should, in the normative sense, invoke freedom of speech to allow trolling.

I don't think we have any ban on discussion on theology or that it was mentioned in any discussion we had about what might be valid reasons to ban a post.

Comment author: MrMind 14 April 2017 08:01:03AM 0 points [-]

Theology was just an example, but a relevant one: in a forum devoted to the improvement of rationality, discussing a flavor of thought that has long been proved irrational should count as trolling. I'm not talking about trying to rationally justify theism - that had, and might still have, a place here - but discussing theology as if theism were true shouldn't be allowed.
On the other hand, you cannot explicitly ban everything that is off-topic, so the fact that a ban isn't written down anywhere shouldn't count as proof against it.

Comment author: ChristianKl 14 April 2017 02:01:01PM 0 points [-]

On the other hand, you cannot explicitly ban everything that is off-topic, so the fact that a ban isn't written down anywhere shouldn't count as proof against it.

LW never used to have an explicit or implicit ban against being off-topic. Off-topic posts used to get downvoted and not banned.

We delete spam, we delete advocacy of illegal violence and the Basilisk got deleted under the idea that it's a harmful idea.

An off-topic post about theism would be noise and not harmful, so it's not worth banning under our philosophy for banning posts.

In addition, I don't think that it's even true that a post about theology has to be off-topic. It's quite common on LW that people use replacement Gods like Omega for exploring thought experiments. Those discussions do pretend that "Omega exists" is true, and that doesn't make them problematic in any way. Taking a more traditional God instead of Omega wouldn't be a problem.

It's also not even clear that theism has been proved irrational. In the census a significant portion allocates more than 0 percent to it being true. I think at the first Double Crux we did at LW Berlin someone updated in the direction of theism. A CFAR person did move to theism after an elaborate experiment with the Reverse Turing test. LW likely wouldn't have existed if it weren't for the philanthropic efforts of a certain Evangelical Christian.

David Chapman made the point in his posts about post-rationality that his investigation of religious ideas like Tantra allowed him to make advances in AI while at MIT that he likely otherwise wouldn't have made.

Comment author: username2 13 April 2017 11:19:30AM 0 points [-]

Actually neither of those are obvious to me.

Comment author: MrMind 14 April 2017 07:55:37AM 0 points [-]

That's a weird position to have: basically you're saying that there's no morally acceptable way to limit the topics of, or access to, a closed group.
Am I representing you correctly? If not, where would you put the boundaries?

Comment author: username2 14 April 2017 02:00:50PM *  0 points [-]

Those specific examples are bad examples.

Gentlemen's clubs are actually concentrations of power where informal deals happen. Admitting women to these institutions is vital to having gender equality at the highest echelons of civic power.

And theology is discussed all the time on LW, even if it is often the subject of criticism.

I was just saying that those particular examples were poorly chosen. But since you have me engaged here: the problem with taking an absolutist view arises when a private communication medium, e.g. Facebook, becomes a medium of debate over civic issues. Somewhere along the way it becomes a public commons vital to democracy, where matters of free speech must be protected. In some countries (not the USA) this is arguably already the case with Facebook.

Comment author: tristanm 14 April 2017 04:53:39PM 0 points [-]

It's a horrible idea.

Can you at least try to articulate why you believe this? When you make a statement like this with very few arguments, in response to a genuine question, it doesn't matter if you feel the post you're responding to is incredibly misguided or based on poor understanding. It's simply condescending to respond this way. Now, as of my writing this comment, your response has 6 upvotes. For a forum with a lot of posts with zero votes, it's pretty rare to have posts with this many upvotes, unless a lot of community members feel your response added a lot of light to the conversation. So if anyone is reading this who upvoted Lumifer's post, can you explain why you felt it was worthy? This is a pretty deep mystery for me on a forum where people who argue things in depth, like Eliezer or Yvain, are usually held up as people we should try to emulate.

Comment author: Lumifer 15 April 2017 12:24:30AM *  2 points [-]

Can you at least try to articulate why you believe this? When you make a statement like this with very few arguments, in response to a genuine question, it doesn't matter if you feel the post you're responding to is incredibly misguided or based on poor understanding. It's simply condescending to respond this way.

No, I don't think so. A short answer does not implicitly accuse the question of being stupid or misguided.

It was a simple direct question. I have a simple direct answer to it without much in the way of hedging or iterating through hands or anything like that.

If someone asks you "vanilla or chocolate?" and you're a chocoholic, you answer with one word and not with a three-page essay on how and why your love for chocolate arose and developed.

Now your question of "why?" could easily lead to multiple pages but tl;dr would be that I like freedom, I don't like the Ministry of Truth, and I think that power corrupts.

why you felt it was worthy?

I would offer a guess that the upvotes say "I agree" and not "this was the most insightful thing evah!" :-)

Comment author: madhatter 11 April 2017 10:47:34PM 0 points [-]

I agree there is a big danger of slipping down the free speech slope if we fight too hard against fake news, but I also think we need to consider a (successful) campaign effort of another nation to undermine the legitimacy of our elections as an act of hostile aggression, and in times of war most people agree some measured limitation of free speech can be justified.

Comment author: Lumifer 12 April 2017 03:13:54PM 5 points [-]

You shouldn't uncritically ingest all the crap the media is feeding you. It's bad for your health.

in times of war

So we are at war with Russia? War serious enough to necessitate suspending the Constitution?

Comment author: madhatter 12 April 2017 05:44:12PM 0 points [-]

No, at least not yet. That's a good point. But Facebook is a private company, so filtering content that goes against their policy need not necessarily violate the constitution, right? I don't know the legal details, though, I could be completely wrong.

Comment author: Lumifer 12 April 2017 05:56:19PM 2 points [-]

Facebook can filter the content, yes, but we're not discussing the legalities, we're discussing whether this is a good idea.

Comment author: ChristianKl 13 April 2017 11:51:56AM 4 points [-]

All of the information submitted to Wikileaks was real. Even if it came from Russia it was nothing to do with Fake News.

Comment author: lmn 13 April 2017 06:19:18AM 3 points [-]

I agree there is a big danger of slipping down the free speech slope if we fight too hard against fake news, but I also think we need to consider a (successful) campaign effort of another nation to undermine the legitimacy of our elections as an act of hostile aggression,

You know, your campaign against fake news might be taken slightly more seriously if you didn't immediately follow it up by asserting a piece of fake news as fact.

Comment author: skeptical_lurker 12 April 2017 08:21:47PM 2 points [-]

I've just been skimming the wiki page on Russian involvement in the US election.

SecureWorks stated that the actor group was operating from Russia on behalf of the Russian government with "moderate" confidence level

The other claims seem to just be that there was Russian propaganda. If propaganda and possible spying count as "war" then we will always be at war, because there is always propaganda (as if the US doesn't do the same thing!). The parallels with 1984 go without saying, but I really think that the risk of totalitarianism isn't Trump, it's people overreacting to Trump.

Also, there are similar allegations of corruption between Clinton and Saudi Arabia.

Comment author: skeptical_lurker 12 April 2017 08:28:40PM 4 points [-]

Facebook is full of bullshit because it is far quicker to share something than to fact-check it, not that anyone cares about facts anyway. A viral alarmist meme with no basis in truth will be shared more than a boring, balanced view that doesn't go all out to fight the other tribe.

But Facebook has always been full of bullshit and no one cared until after the US election, when everyone decided to pin Trump's victory on fake news. So it's pretty clear that good epistemology is not the genuine concern here.

Not that I'm saying that Facebook is worse than any other social media - the problem isn't Facebook, the problem is human nature.

Comment author: ChristianKl 13 April 2017 10:54:14AM 1 point [-]

Half of the US voted for Trump. If Facebook made a move that censored a lot of pro-Trump lies, it would risk losing a significant portion of its audience.

Finally, if you were to build a site from ground-up, how would you design around the issue of spreading false information?

I'm not sure whether the function of verifying the quality of news articles is best fulfilled by a traditional social network. If I wanted to solve the problem, I would build a browser plugin that provides quality ratings of articles and websites. Users can vote, and a machine learning algorithm translates the user votes into a good quality metric.
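The vote-aggregation step doesn't even need machine learning to get off the ground; a minimal, hypothetical version might just use the lower bound of the Wilson score interval on the upvote ratio as a conservative quality score (the function and names here are illustrative, not an existing plugin API):

```python
import math

def wilson_lower_bound(upvotes, total_votes, z=1.96):
    """Conservative quality score for an article: the lower bound of the
    Wilson score interval for the upvote ratio at ~95% confidence.
    Items with few votes score low even if every vote is positive."""
    if total_votes == 0:
        return 0.0
    phat = upvotes / total_votes
    denom = 1 + z * z / total_votes
    centre = phat + z * z / (2 * total_votes)
    margin = z * math.sqrt(
        (phat * (1 - phat) + z * z / (4 * total_votes)) / total_votes
    )
    return (centre - margin) / denom
```

An article with 90 upvotes out of 100 then outranks one with 9 out of 10, even though both have the same upvote ratio, because the larger sample justifies more confidence.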

Comment author: lmn 13 April 2017 06:30:01AM 1 point [-]

A better question is why we should trust Facebook to do so honestly, rather than abusing that power to declare lies that benefit Mark Zuckerberg to be "facts". Given the amount of ethics, or rather the lack thereof, that his actions have shown so far, I see very little reason to trust him.

Comment author: DryHeap 12 April 2017 02:59:02PM *  1 point [-]

Right now, Facebook does very little to identify content, only provide it.

They certainly do identify content, and indeed alter the way that certain messages are promoted.

Example.

They faced criticism for allowing fake news to spread on the site

Who decides what is and is not fake news?

Comment author: denimalpaca 12 April 2017 06:31:22PM 0 points [-]

Not quite what I meant about identifying content but fair point.

As for fake news, the most reliable way to tell is whether the piece states information as verifiable fact, and if that fact is verified. Basically, there should be at least some sort of verifiable info in the article, or else it's just narrative. While one side's take may be "real" to half the world and the other side's take "real" to the other half, there should be some piece of actual information that both sides look at and agree is real.

Comment author: ChristianKl 13 April 2017 10:31:09AM 1 point [-]

That means if you have an investigative reporter with non-public sources, that's fake news because the other side has no access to his non-public sources?

Comment author: lmn 13 April 2017 06:14:38AM 1 point [-]

As for fake news, the most reliable way to tell is whether the piece states information as verifiable fact, and if that fact is verified.

Verified by whom? There is a long history of "facts verified by official sources" turning out to be false.

Comment author: MrMind 12 April 2017 07:29:59AM *  1 point [-]

"Arbiter of truth" is too big of a word.
People easily forget two important things:

  1. Facebook is a social medium, emphasis on medium: it allows the dissemination of content, it does not produce it;

  2. Facebook is a private, for profit enterprise: it exists to generate a revenue, not to provide a service to citizens.

Force 1 obviously acts against any censoring or control beyond what is strictly illegal, but force 2 pushes for the creation of an environment that is customer-friendly. That is the only reason why there is some form of control over the content published: because doing otherwise would lose customers.

People are silly if they delegate the responsibility of verifying the truth of content to the transport layer, and the only reason that a flag button is present is because doing otherwise would lose customers.
That said, to answer your question:

No, Facebook does not have any responsibility beyond what is strictly illegal. That from power comes responsibility is a silly implication written in a comic book, but it's not true in real life (it's almost the opposite). As a general rule of life, do not acquire your facts from comics.

Comment author: Lumifer 12 April 2017 03:32:47PM 3 points [-]

That is the only reason why there is some form of control on the content published: because doing otherwise would lose customers.

Since we're talking about Facebook, it's worth reminding that the customers are the advertisers. All y'all are just the product being sold.

Comment author: MrMind 13 April 2017 07:27:42AM 0 points [-]

Right, the chain has one more step, but still: if people start unsubscribing from Facebook, then money goes elsewhere and so do the advertisers.

Comment author: denimalpaca 12 April 2017 06:36:11PM 0 points [-]

"That from power comes responsibility is a silly implication written in a comic book, but it's not true in real life (it's almost the opposite). "

Evidence? I 100% disagree with your claim. Looking at governments or businesses, the people with more power tend to have a lot of responsibility, both to other people in the gov't/company and to the gov't/company itself. The only kind of power I can think of that doesn't come with some responsibility is gun ownership. Even Facebook's power of content distribution comes with a responsibility to monetize, which then has downstream responsibilities.

Comment author: MrMind 13 April 2017 07:24:07AM 0 points [-]

You're looking only at the walled garden of institutions inside a democracy. But if you look at past history, authoritarian governments or muddled legal situations (say some global corporations), you'll find out that as long as the structure of power is kept intact, people in power can do pretty much as they please with little or no backlash.

Comment author: Viliam 19 April 2017 03:58:36PM 0 points [-]

Let's try to frame this with as little politics as possible...

You build a medium where people can exchange content. Your original goal is to make money, so you want to make it as popular as possible -- in the perfect case, the Schelling point for anyone debating anything.

But you notice that certain messages, optimized for virality, make up a disproportionate fraction of your content. You don't like this... either because you realize you actually have values beyond "making money"... or because you realize that in the long term this could have a negative impact on your medium if people start to associate it with low-quality viral messages -- you aim to be a king of all content, not only yellow journalism. There is a risk your competitor would make a competing medium that is more pleasant to read, at least at the beginning, and gradually take over your readers.

Some quick ideas:

a) censor specific ideas
a.1) completely, e.g. all kitten videos get deleted
a.2) penalize kitten videos in content aggregation

Problem: This will get noticed, and people who love kitten videos will move to your competitors.

b) target virality itself
b.1) make it more difficult to share content

This goes too strongly against your goal of being an addictive website for simpletons.

b.2) penalize mindless sharing

For example, you have one-click-sharing functionality, but you can optionally add your own comment. Shares with hand-written comments will get much higher priority than shares without them. The easier to share, the faster to disappear.

b.3) penalize articles with too much shares (globally)

Your advantage, as a huge website, is that you know which articles are popular worldwide. Unfortunately, there will soon be SEO techniques to circumvent any action you take, such as showing the same content to different users under different URLs (or whatever will make your system believe it is different content).

c) distributed "censorship"

You could make functionality of voluntary "content rating" or "content filtering", where anyone can register as a rating/filtering authority, and people can voluntarily subscribe to them. The authorities will flag the content, and you can choose to either see the content flagged, or have it automatically removed. Important: make the user interface really simple (for the subscribers).

But I guess most people wouldn't use this anyway.

d) allow different "profiles" or "channels" for users

Not sure about details, but suppose there are different channels for politics, kitten videos, programming, etc... and you can turn them on and off. Many people would not turn on the politics channel, making the political news less viral.

Potential problems: does "JavaScript inventor fired for political donation" belong under "programming" or under "politics"? Who defines the ontology? Etc.
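Ideas (b.2) and (b.3) could be combined into a toy priority function; the weights and names below are made up for illustration, not a description of any real feed algorithm:

```python
from dataclasses import dataclass

@dataclass
class Share:
    base_score: float       # raw engagement signal for the shared item
    comment: str = ""       # text the sharer added by hand, if any
    global_shares: int = 0  # worldwide share count of the underlying item

def feed_priority(share, comment_boost=2.0, viral_penalty=0.001):
    """Boost shares that carry a hand-written comment (idea b.2) and
    damp items that are already viral everywhere (idea b.3)."""
    score = share.base_score
    if share.comment.strip():
        score *= comment_boost  # effortful shares rank higher
    # the more the item has been shared globally, the stronger the damping
    score /= 1.0 + viral_penalty * share.global_shares
    return score
```

A one-click share of a globally viral item then sinks in the feed, while the same item shared early, with a personal comment, still surfaces.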

Comment author: Lumifer 19 April 2017 04:59:11PM *  1 point [-]

Relevant: today's discussion on HN of how Facebook shapes the feeds on its platform and what do various people think about it.