Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: gworley 24 May 2017 02:22:35AM 0 points [-]

If we are looking for intentional communities that do work, we need look no further than modern organizations like corporations. We may not like the communities they create, but we can't deny that the corporations that survive for long tend to have some reason they are able to do it, and it must involve coordinating the actions of thousands of people. WalMart is perhaps the most successful intentional community of all time.

Comment author: gworley 22 May 2017 10:57:23PM 0 points [-]

This feels like it matches what I've seen. I wonder if there are other studies replicating similar effects?

Comment author: gworley 22 May 2017 10:56:17PM 1 point [-]

Abstract:

Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts, and they performed better as a result. Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1–3). In fact, our results suggest that participants’ preference for modifiable algorithms was indicative of a desire for some control over the forecasting outcome, and not a desire for greater control over the forecasting outcome, as participants’ preference for modifiable algorithms was relatively insensitive to the magnitude of the modifications they were able to make (Study 2). Additionally, we found that giving participants the freedom to modify an imperfect algorithm made them feel more satisfied with the forecasting process, more likely to believe that the algorithm was superior, and more likely to choose to use an algorithm to make subsequent forecasts (Study 3). This research suggests that one can reduce algorithm aversion by giving people some control—even a slight amount—over an imperfect algorithm’s forecast.
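The "restricted modification" condition the abstract describes can be sketched as a toy model. Everything here is my own illustration, not from the paper: the function name, the adjustment cap, and the numbers are all hypothetical.

```python
def combined_forecast(algo_forecast, human_adjustment, cap=2.0):
    """Toy model of a 'modifiable algorithm' forecast.

    The forecaster may nudge the algorithm's output, but only within
    +/- cap, mirroring the restricted-modification condition in
    Studies 1-3. The cap value is purely illustrative.
    """
    bounded = max(-cap, min(cap, human_adjustment))
    return algo_forecast + bounded

# Even a tightly capped adjustment may be enough, since the study found
# the preference for modifiable algorithms was relatively insensitive
# to how large the allowed modification was.
print(combined_forecast(50.0, 10.0))  # adjustment clipped from +10 to +2
```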

Comment author: gworley 15 May 2017 09:11:59PM 0 points [-]

In what ways do you see meta decision theory as different from the notion of alignment? At the current level of detail they sound to me like they are describing the same thing, although alignment is perhaps less pre-committed to decision theory as a solution, even if that's what alignment research is focused on now.

Maybe the intended interpretation is that they are functionally equivalent, but since you didn't specify I wanted to clarify.

Comment author: woodchopper 06 May 2017 05:59:18PM 0 points [-]

and more specifically you should not find yourself personally living in a universe where the history of your experience is lost. I say this because this is evidence that we will likely avoid a failure in AI alignment that destroys us, or at least not find ourselves in a universe where AI destroys us all, because alignment will turn out to be practically easier than we expect it to be in theory.

Can you elaborate on this idea? What do you mean by 'the history of your experience is lost'? Can you supply some links to read on this whole theory?

Comment author: gworley 08 May 2017 08:43:20PM 0 points [-]
Comment author: gworley 24 April 2017 03:40:06AM 0 points [-]

It's short, but here's the money quote/tl;dr

MHC takes developmental psychology and reverses its normal etiology. Rather than positing stages that give rise to behaviors, MHC considers individual behaviors and then gives a system of classifying their complexity. The classification is hierarchical, so behaviors in higher complexity classes are necessarily constructed from combinations of less complex behaviors, and the classifications match the shape of developmental psychology as discovered by Piaget and Erikson, but the hierarchy is derived independently via a mathematical abstraction.
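The hierarchical-construction idea here can be sketched in a few lines of code. This is a toy model only: the behavior names are hypothetical, and the order numbers are not MHC's actual stage definitions, just the recursive "built from less complex behaviors" structure the quote describes.

```python
from dataclasses import dataclass, field


@dataclass
class Behavior:
    """A behavior is either primitive or coordinates less complex behaviors."""
    name: str
    parts: list = field(default_factory=list)

    def order(self):
        # A behavior's complexity order is one more than the highest order
        # among the behaviors it coordinates; primitives sit at order 0.
        if not self.parts:
            return 0
        return 1 + max(p.order() for p in self.parts)


count = Behavior("count")
add = Behavior("add", [count])                    # coordinates counting
multiply = Behavior("multiply", [add])            # coordinates addition
distribute = Behavior("distributivity", [add, multiply])

print(distribute.order())  # 3
```

The point of the sketch is that the ordering falls out of composition alone, which is the sense in which the hierarchy is "derived independently via a mathematical abstraction" rather than posited as stages up front.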

[Link] Unstaging Developmental Psychology

1 gworley 24 April 2017 03:39AM
Comment author: entirelyuseless 23 April 2017 10:08:04PM *  0 points [-]

This, I think, gets at why I don't want to acknowledge "true" and "false", because it seems to me the only way to salvage those terms is to make them teleological to the purpose of likelihood of matching experiences of reality.

This is at least very close to what I meant. Consider this situation: you are walking along, and you see a man in the distance. "That looks like a pretty tall fellow," you say. When he approaches you, you can see how tall he is. Was your statement true or false? It is obvious that "pretty tall fellow" does not name a specific height or even give a minimum. So what determines whether your statement was true or not? You will almost certainly say that you were right if you do not find yourself surprised by his height compared to what you expected, or if you find him surprisingly tall, and similarly you will say that you were wrong if you find him surprisingly short compared to what you expected.

I guess this is fine but it's not really what most people mean when they say "true" and "false" as far as I can tell

But what do you think people really mean instead? I think pretty much everyone would agree with the above example: you are mistaken if you are surprised in the wrong direction, and you are right if you are not surprised, or if you are surprised in the right direction.

I suppose theoretically someone could say that truth and falsity mean that there is a bit somewhere in his metaphysical structure which has the value of 0 or 1, in such a way that "he is tall" is true if the bit is set to 1, and false if the bit is set to 0. But it seems obvious that this is not what people would normally mean, at least when talking about this situation, even if they might sometimes say abstract things that sound sort of like this. And people will sometimes explicitly assert that there is something like such a bit in a particular case, e.g. whether or not something is human. This assertion is almost certainly false, but it is not some special kind of falsity about the existence of truth and falsity; they are simply mistakenly asserting the existence of such a bit, in roughly the same way someone is mistaken if the person called tall turns out to be 4'11".

So I don't see how people mean something different from this by truth and falsity, or at least significantly different.

so it seems better to reject the notions of "true" and "false" to avoid confusion about what we're discussing.

I think that doxastic voluntarism is true in general, but even if it is not, one aspect of it certainly is: we can use words to mean what we choose to use them to mean. And insofar as this is a matter of choice, practical considerations will be involved in deciding to use a word one way or another. You are pointing to this here: what benefit would we get from using "truth" in the above way, compared to using it in other ways?

I think most people will take the denial of truth to be a denial that the world is real. As I said earlier, if anything seems like a denial of realism, the denial of truth does. And most people, coming to the conclusion that there is no truth, will conclude that they should not bother to spend much time thinking about things. Obviously you haven't drawn that conclusion or you wouldn't be spending time on Less Wrong, but I think most people would draw that conclusion. So for someone who thinks that thinking is valuable, rejecting truth does not seem helpful.

In terms of avoiding confusion, you may be seeking an unattainable goal. The ability to understand is in a way limited, but also in a way not. As I said in another comment recently, we can think about anything; if not, just think about "what you can't think about." But this means we will always be confused when we attempt to think about the things on the boundaries of our understanding. Your visual field is limited, but you cannot see the edges of it, because if you could, they would not be the edges. In a similar way, your understanding is bounded, but you cannot directly understand the boundaries, because if you could, they would not be the boundaries. That implies there will always be an "edge of understanding" where you are going to be confused.

Comment author: gworley 24 April 2017 03:03:24AM 0 points [-]

So I don't see how people mean something different from this by truth and falsity, or at least significantly different.

Right, I don't expect my position to make much of a difference to most people most of the time. Perhaps this is a matter of how I perceive the context of my readers, but when topics are sufficiently abstract I generally expect them to be more likely to make the mistake of taking, even accidentally, what I might call "true" and "false" for what we might call the "hard essentialist" version of truth (that there are truth bits in the universe).

what benefit would we get from using "truth" in the above way, compared to using it in other ways?

It seems mostly to matter when I want to give a precise accounting of my thoughts (or more precisely my experience of my thoughts).

I think most people will take the denial of truth to be a denial that the world is real. As I said earlier, if anything seems like a denial of realism, the denial of truth does. And most people, coming to the conclusion that there is no truth, will conclude that they should not bother to spend much time thinking about things. Obviously you haven't drawn that conclusion or you wouldn't be spending time on Less Wrong, but I think most people would draw that conclusion. So for someone who thinks that thinking is valuable, rejecting truth does not seem helpful.

This gets at why I feel "in-between" in many ways: rejecting truth the way nihilists and solipsists do is not where I mean to end up, but not rejecting truth in at least some form seems to me to deny the skepticism I think we must take given the intentional appearance of experience. Building from "no truth" to "some kind of truth" seems a better approach to me than backing down from "yes truth".

This may be because I find myself in a society where idealism and dualism are common, and rationalists and other folks who favor realism often express it in terms of a strict materialism that denies phenomenological intentionality (even if unintentionally). Maybe I am too far removed from general society these days, but I feel it more important to accentuate intentionality over the strict materialism I perceive my target readers are likely to hold if they don't already get what I'm pointing at. You seem to be evidence, though, that this is a misunderstanding, although I suspect you are an outlier given how much we agree.

That implies there will always be an "edge of understanding" where you are going to be confused.

Agreed. I expect us all to remain confused in a technical sense of having beliefs that do not fully predict reality. But I also believe it virtuous to minimize that confusion where possible and practical.

Comment author: gworley 23 April 2017 08:24:30PM 2 points [-]

I like this because it helps better answer the anthropics problem of existential risk, namely that we should not expect to find ourselves in a universe that gets destroyed, and more specifically you should not find yourself personally living in a universe where the history of your experience is lost. I say this because this is evidence that we will likely avoid a failure in AI alignment that destroys us, or at least not find ourselves in a universe where AI destroys us all, because alignment will turn out to be practically easier than we expect it to be in theory. That alignment seems necessary for this still makes it a worthy pursuit, since progress on the problem increases our measure, but it also fixes the problem of expecting the low-probability event of finding yourself in a universe where you don't continue to exist.

Comment author: entirelyuseless 21 April 2017 03:20:10PM 1 point [-]

I think I understand your position a little better now. I still think it is at least expressed in a way which is more skeptical than necessary.

I want to separate those theories that conflate ontology, especially teleological aspects of ontology, with metaphysics from those that view them as separate.

In my theory, the teleological aspects of things are pretty directly derived from metaphysics. Galileo somewhere says that inertia is the "laziness" of a body, or in other words the answer to "Why does this continue to move?" is "Because it continues to remain what it is." Once you have this sort of thing, it is easy enough to see why you get the origin of life, which seems to have purpose, and then the evolution of complex life, which seems to have complex purposes. In this way, ultimately all questions of final cause, "for what purpose," reduce to this answer: because things tend to remain what they are. Now maybe we can't explain the metaphysics behind things remaining what they are, but it is surely something metaphysical.

metaphysics is ultimately about the stuff that exists prior to the understanding of its structure, and that there is literally nothing you can say about reality except through the lens of ontology because you have no other way to know the world and make sense of the experience of it.

I think I mostly agree with that, actually, but I don't think we should conclude that there aren't true statements. I'll say more about this in the context of money vs ethics below.

There is a useful sense in which I can say "I have 50 dollars in my wallet" or "murder is bad" but this is also all understood through multiple layers of structure heaped on top of reality that, without interpretation via experience, would have no meaning. Perhaps "truth" has a broader meaning than I think in academic philosophy, but it seems to me if we're talking about ways of experiencing the experience of reality then we've left the realm of what most people seem to mean by the word "truth". But perhaps this is a definitional dispute?

Dan Dennett is always arguing against "essentialism," and I find myself agreeing mostly with his arguments while disagreeing with the anti-essentialist conclusion. Basically his main point, in almost every case, is that things have vague boundaries, not permanent white and black once and for all boundaries. He takes this as an argument against essentialism because he takes essentialism to mean a description of the world where you reduce everything to a complex of "A, B, C, etc." and A is there or not, B is there or not, C is there or not. Everything is black or white. I agree that the world is not like that, but I disagree with his conclusion about how it is, or rather it seems that he has no alternative -- "the world is not like that," but he cannot say in any sense how it is instead.

I agree that boundaries are vague; in fact, I would assert that all verbal boundaries are vague, including the boundaries of words that we use to define mathematical and logical ideas. If this is so, it follows that these kinds of vague boundaries will come up in everything we talk about, not only in things like whether a person is "tall" or "short." For example, we may or may not be able to find something which is "kinda sorta" a carbon atom, rather than definitely being one or definitely not being one. But even if we can't, this is like the fact that we don't find all of the evolutionary intermediate forms between living things: the fact that we don't find them in practice does not mean they are impossible. Or at any rate, if there are some boundaries that cannot be vague, we have no way of proving that they cannot be, but we can simply say, "We haven't found any examples yet where such and such a boundary is vague."

I'm discussing this in relation to the question, "perhaps this is a definitional dispute?" I don't think there is or can be a rigid line between definitional disputes and disputes about the world. In some cases, we can clearly say that people are arguing about words. In other cases, we can clearly say they are arguing about facts. But this is no different from the fact that we can say that some particular person is definitely bald and some other is not: the boundary between being bald and not being bald remains a vague one, and likewise the boundary between arguing about facts and arguing about words is a vague one.

And unfortunately your question may be very near that boundary. Looking at this verbally, I would say that "it is useful to say this," and "it is true to say this," are very close, although not identical. We could put it this way: a statement is true if it is useful because it points at reality. This is to exclude, of course, the usefulness of lying and self deceiving. These things may be useful, but they get their utility from pointing away from reality. If a statement is useful because it points at reality, I would say that to that extent it is true (to that extent, because it might also have some falsehood insofar as it might have some disutility in addition to its utility.)

The statement about money (and about ethics), in my opinion, is useful because it points at reality. Your argument is that it points more directly to our interpretations of reality. Fine: but those interpretations themselves point at reality as well. It isn't easy to see how you could redescribe this as those interpretations pointing away from reality, which is what would be needed to say that the statement is false.

Comment author: gworley 23 April 2017 07:53:09PM 0 points [-]

And unfortunately your question may be very near that boundary. Looking at this verbally, I would say that "it is useful to say this," and "it is true to say this," are very close, although not identical. We could put it this way: a statement is true if it is useful because it points at reality. This is to exclude, of course, the usefulness of lying and self deceiving. These things may be useful, but they get their utility from pointing away from reality. If a statement is useful because it points at reality, I would say that to that extent it is true (to that extent, because it might also have some falsehood insofar as it might have some disutility in addition to its utility.)

This, I think, gets at why I don't want to acknowledge "true" and "false", because it seems to me the only way to salvage those terms is to make them teleological to the purpose of likelihood of matching experiences of reality. I guess this is fine but it's not really what most people mean when they say "true" and "false" as far as I can tell, so it seems better to reject the notions of "true" and "false" to avoid confusion about what we're discussing.
