Comments

Erebus30

I have recently had the unpleasant experience of being subjected to the kind of dishonest emotional manipulation that is recommended here. A (former) friend tried to convert me to his religion by using these tricks, and I can attest that they are effective if the person on the receiving end is trusting enough and doesn't realize that they are being manipulated. In my case the absence and avoidance of rational argument eventually led to the failure of the conversion attempt, but not before severe emotional distress had been inflicted on me by a person I used to trust.

Needless to say, I find it unpleasant that these kinds of techniques are mentioned without also noting that they are indeed manipulative, dishonest, and very easy to abuse.

Erebus20

Solomonoff's universal prior assigns a probability to every individual Turing machine. Usually the interesting statements or hypotheses about which machine we are dealing with are more like "the 10th output bit is 1" than "the machine has the number 643653". The first statement describes an infinite number of different machines, and its probability is the sum of the probabilities of those Turing machines that produce 1 as their 10th output bit (as the probabilities of mutually exclusive hypotheses can be summed). This probability is not directly related to the K-complexity of the statement "the 10th output bit is 1" in any obvious way. The second statement, on the other hand, has probability exactly equal to the probability assigned to the Turing machine number 643653, and its K-complexity is essentially (that is, up to an additive constant) equal to the K-complexity of the number 643653.

So the point is that generic statements usually describe a huge number of different specific individual hypotheses, and that the complexity of a statement needed to delineate a set of Turing machines is not (necessarily) directly related to the complexities of the individual Turing machines in the set.
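To make the contrast concrete (my own notation, not from the original exchange; I'm assuming the usual formulation in which a program $p$ for the universal machine $U$ receives prior weight $2^{-\ell(p)}$, where $\ell(p)$ is its description length):

$$
P(\text{the 10th output bit is } 1) \;=\; \sum_{p \;:\; U(p)\text{ has } 1 \text{ as its 10th output bit}} 2^{-\ell(p)},
\qquad
P(\text{machine number } 643653) \;=\; 2^{-\ell(p_{643653})}.
$$

The sum on the left runs over infinitely many programs, and its value has no obvious relation to the K-complexity of the statement that picks them out; the quantity on the right is determined by a single program, whose length is, up to an additive constant, the K-complexity of the number 643653.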

Erebus60

> Of course it is still valid, unless X corresponds directly to some observable and clearly identifiable element of physical reality, so that its existence is not Platonic, but physically real. Obviously it wouldn't make sense to discuss whether someone has, say, committed theft if there didn't exist a precise and agreed-upon definition of what counts as theft -- or otherwise we would be hunting for some objectively existing Platonic idea of "theft" in order to see whether it applies.

Of course? There must be a miscommunication.

Do you think it makes sense to discuss, say, intelligence, friendship or morality? Do you think these exist either as physically real things or Platonic ideas, or can you supply precise and agreed-upon definitions for them?

I don't count any of my three examples as physically real in the sense of being a clearly identifiable part of physical reality. Of course they reduce to physical things at the bottom, but only in the trivial sense in which everything does. Knowing that the reduction exists is one thing, but we don't judge things as intelligent, friendly or moral based on their physical configuration, but on higher-order abstractions. I'm not expecting us to have a disagreement here. I wouldn't consider any of the examples a Platonic idea either. Our concepts and intuitions do not have their source in some independently existing ideal world of perfections. Since you seemed to point to Platonism as a fallacy, we probably don't disagree here either.

So I'm led to expect that you think that to sensibly discuss whether a given behaviour is intelligent, friendly or moral, we need to be able to give precise definitions for intelligence, friendship and morality. But I can only think that this is fundamentally misguided: the discussions around these concepts are relevant precisely because we do not have such definitions at hand. We can try to unpack our intuitions about what we think of as a concept, for example by tabooing the word for it. But this is completely different from giving a definition.

> However, to use the same example again, when people are accused of theft, in the overwhelming majority of cases, the only disagreement is whether the facts of the accusation are correct, and it's only very rarely that even after the facts are agreed upon, there is significant disagreement over whether what happened counts as theft. In contrast, when people are accused of sexism, a discussion almost always immediately starts about whether what they did was really and truly "sexist," even when there is no disagreement at all about what exactly was said or done.

This only reflects on the easiest ways of making or defending against particular kinds of accusations, not at all on the content of the accusations. Morality is similar to sexism in this respect, but it still makes sense to discuss morality without being a Platonist about it or without giving a precise agreed-upon definition.

Erebus30

> [...] Discussing whether some institution, act, or claim is "sexist" makes sense only if at least one of these two conditions applies:
>
> 1. There is some objectively existing Platonic idea of "sexism," [...]
>
> 2. There is a precise and agreed-upon definition of "sexism," [...]

Replace "sexism" by "X". Do you think this alternative is still valid?

Or maybe you should elaborate on why you think "sexism" gives rise to this alternative.

Erebus170

I am troubled by the vehemence with which people seem to reject the notion of using the language of the second-order simulacrum -- especially in communities that should be intimately aware of the concept that the map is not the territory.

Understanding signaling in communication is almost as basic as understanding the difference between the map and the territory.

A choice of words always contains an element of signaling. Generalizing statements are not always made in order to describe the territory with a simpler map; they are also made in order to signal that the exceptions to the general case are not worth mentioning. This element of signaling is present even if the generalization is made out of a simple desire not to "waste space": indeed, the exceptional cases were not mentioned! Thus a sweeping generalization is evidence for the proposition that the speaker doesn't consider the exceptions to the stated general rule worth much (the trouble of mentioning them gives an upper bound on that worth). And when dealing with matters of personal identity, not all explanations for the small worth of the set of exceptional people are as charitable as a supposedly small size of the set.

Erebus10

Maybe I misinterpreted your first comment. I agree almost completely with this one, especially the part

> (...) not relying on some magic future technology that will solve the existing problems.

Erebus30

What would be the point of criticizing technology on the basis of its appropriate use?

Technologies do not exist in a vacuum, and even if they did, there'd be nobody around to use them. Thus restricting attention to the "technology itself" alone is bound to miss the point of the criticism of technology. When considering the potential effects of future technology we need to take into account how the technologies will be used, and it is certainly reasonable to believe that some technologies have been and will be used to cause more harm than good. That a critical argument takes into account the relevant features of the society that uses the technology is not a flaw of the argument, but rather the opposite.

Erebus10

The argument is that simple numbers like 3^^^3 should be considered much more likely than random numbers of similar size, since they have short descriptions, and so the mechanisms by which that many people (or whatever) hang in the balance are less complex.

Consider the options A = "a proposed action affects 3^^^3 people" and B = "the number 3^^^3 was made up to make a point". Given my knowledge about the mechanisms that affect people in the real world and about the mechanisms people use to make points in arguments, I would say that the likelihood of A versus B is hugely in favor of B. This is because the relevant probabilities (for large values and to a first-order approximation) fall off with the size of the number in question for option A, but only with the complexity of the number for option B. I didn't read de Blanc's paper further than the abstract, but from that and your description of the paper it seems that its setting is far more abstract and uninformative than the setting of Pascal's mugging, in which we also have the background knowledge of our usual life experience.
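As a rough formalization of that scaling claim (my notation, not taken from the thread or from de Blanc's paper): writing $N$ for the number in question and $K(N)$ for its Kolmogorov complexity,

$$
P(A) \;\lesssim\; f(N), \qquad P(B) \;\approx\; c \cdot 2^{-K(N)},
$$

where $f$ is some function that decreases as $N$ grows (the prior probability that an action genuinely affects $N$ people shrinks with $N$) and $c$ is a normalization constant. For $N$ = 3^^^3 the right-hand quantity dwarfs the left-hand one, since $K(N)$ is small (the number has a very short description) while $N$ itself is vast, which is why the odds come out hugely in favor of B.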

Erebus00

I can't make it before mid-August, so waiting for me is probably not a good idea.

Erebus00

A mailing list is a fine idea. With the amount of traffic on the front page these days, a dedicated mailing list might be a more reliable way of contacting less active readers. Assuming, of course, that we can get them to sign up on the list :)

Unfortunately I'm unable to participate in the meetup this time, as I'll be out of the country for quite some time starting on Friday.
