
Comment author: madhatter 21 March 2017 10:28:54PM 1 point

Can someone explain why UDT wasn't good enough? In what case does UDT fail? (Or is it just hard to approximate with algorithms?)

Comment author: dogiv 24 March 2017 02:26:12PM 0 points

I've been trying to understand the differences between TDT, UDT, and FDT, but they are not clearly laid out in any one place. The blog post that went along with the FDT paper sheds a little bit of light on it--it says that FDT is a generalization of UDT intended to capture the shared aspects of several different versions of UDT while leaving out the philosophical assumptions that typically go along with it.

That post also describes the key difference between TDT and UDT by saying that TDT "makes the mistake of conditioning on observations," which I think is a reference to Gary Drescher's objection that in some cases TDT would make you decide as if you can choose the output of a pre-defined mathematical operation that is not part of your decision algorithm. I am still working on understanding Wei Dai's UDT solution to that problem, but presumably FDT solves it in the same way.
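For what it's worth, the standard concrete case for why "conditioning on observations" goes wrong is counterfactual mugging. Below is a minimal sketch in Python -- my own toy framing, not something taken from the FDT paper or from Drescher, with the payoffs and function name chosen purely for illustration -- showing how evaluating the whole policy from the prior differs from updating on the observation first.

```python
# Toy model of counterfactual mugging (illustrative numbers and names, not from the paper).
# Omega flips a fair coin. On tails it asks you for $100; on heads it pays you $10,000
# iff it predicts that you would have paid when asked on tails.

def expected_value(pay_on_tails: bool) -> float:
    heads_payoff = 10_000 if pay_on_tails else 0   # reward depends on the predicted policy
    tails_payoff = -100 if pay_on_tails else 0     # cost of actually paying when asked
    return 0.5 * heads_payoff + 0.5 * tails_payoff

print(expected_value(True))   # 4950.0 -> evaluated from the prior, "pay" wins
print(expected_value(False))  # 0.0

# An agent that first conditions on the observation "the coin came up tails" sees only
# the -100 on offer and refuses to pay; that is the kind of mistake the quoted phrase
# points at, and UDT/FDT-style reasoning avoids it by evaluating the policy from the prior.
```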

Comment author: dogiv 22 March 2017 09:43:12PM 7 points

It does seem like a past tendency to overbuild things is the main cause. Why are the pyramids still standing almost five thousand years later? Because the only way they knew to build a giant building back then was to make it essentially a squat mound of solid stone. If you wanted to build a pyramid the same size today, you could probably do it for 1/1000 of the cost, but it would be hollow and it wouldn't last even 500 years.

Even when cars were new, they couldn't be overbuilt the way buildings were in antiquity, because they still had to be able to move themselves around. Washing machines are somewhere in between, I guess. But I don't think rich people demand less durability. If anything, rich people have more capital to spend up front on a quality product and more leisure to research which one is a good long-term investment.

Comment author: username2 22 March 2017 12:31:42AM 0 points

I agree with your concern, but I think that you shouldn't limit your fear to party-aligned attacks.

For example, the Thirty Meter Telescope in Hawaii was delayed by protests from a group of people who are most definitely "liberal" on the liberal/conservative spectrum (in fact, "ultra-liberal"). The effect of the protests has been significant: while it's debatable how close the TMT came to cancellation, the current plan is to grant no more land to astronomy atop Mauna Kea.

Comment author: dogiv 22 March 2017 05:06:49PM 0 points

Agreed. There are plenty of cases where liberals reject scientific evidence for ideological reasons--I'll refrain from giving examples to avoid getting too political, but it's not a one-sided issue.

Comment author: Viliam 21 March 2017 01:31:28PM 0 points

I have a feeling that perhaps in some sense politics is self-balancing. You attack things that are associated with your enemy, which means that your enemy will defend them. Assuming you are an entity that only cares about scoring political points, if your enemy uses rationality as an applause light, you will attack rationality, but if your enemy uses postmodernism as an applause light, you will attack postmodernism and perhaps defend (your interpretation of) rationality.

That means that the real risk for rationality is not that everyone will attack it. As soon as the main political players all turn against rationality, fighting rationality will become less important to them, because attacking things the others consider sacred will be more effective. You will soon get rationality apologists saying "rationality per se is not bad, it's only rationality as practiced by our political opponents that leads to horrible things".

But if some group of idiots chooses "rationality" as their applause light while doing it completely wrong, and everyone else therefore turns against rationality, that would cause much more damage. (Similar to how Stalin is often used as an example against "atheism". Now imagine a not-so-implausible parallel universe where Stalin used "rationality" -- interpreted as 1984-style obedience to the Communist Party -- as the official applause light of his regime. In that world, non-communists hate the word "rationality" because it is associated with communism, and communists insist that the only true meaning of rationality is blind obedience to the Party. Imagine trying to teach people x-rationality in that universe.)

Comment author: dogiv 21 March 2017 05:34:02PM 0 points

This may be partially what has happened with "science" but in reverse. Liberals used science to defend some of their policies, conservatives started attacking it, and now it has become an applause light for liberals--for example, the "March for Science" I keep hearing about on Facebook. I am concerned about this trend because the increasing politicization of science will likely result in both reduced quality of science (due to bias) and decreased public acceptance of even those scientific results that are not biased.

Comment author: dogiv 20 March 2017 08:43:50PM 2 points

Interesting piece. It seems like coming up with a good human-checkable way to evaluate parsing is pretty fundamental to the problem. You may have noticed already, but Ozora is the only one that didn't figure out that "easily" goes with "parse".

Comment author: markan 20 March 2017 06:29:30PM 1 point

I've been writing about effective altruism and AI and would be interested in feedback: Effective altruists should work towards human-level AI

Comment author: dogiv 20 March 2017 06:58:18PM 0 points

The idea that friendly superintelligence would be massively useful is implicit (and often explicit) in nearly every argument in favor of AI safety efforts, certainly including those of EY and Bostrom. But you seem to be making the much stronger claim that we should therefore altruistically expend effort to accelerate its development. I am not convinced.

Your argument rests on the proposition that current research on AI is so specific that its contribution toward human-level AI is very small, so small that the modest efforts of EAs (compared to all the massive corporations working on narrow AI) will speed things up significantly. In support of that, you mainly discuss vision--and I will agree with you that vision is not necessary for general AI, though some form of sensory input might be. However, another major focus of corporate AI research is natural language processing, which is much more closely tied to general intelligence. It is not clear whether we could call any system generally intelligent without it.

If you accept that mainstream AI research is making some progress toward human-level AI, even though it's not the main intention, then it quickly becomes clear that EA efforts would have greater marginal benefit in working on AI safety, something that mainstream research largely rejects outright.

Comment author: skeptical_lurker 09 March 2017 09:50:20PM 1 point

I said a few weeks back that I would publicly precommit to going a week without politics. Well, I partially succeeded: I did start reading an SSC article on politics, for example, because it popped up in my RSS feed, but I stopped when I remembered that I was ignoring politics. The main thing is that I managed to avoid any long time-wasting sessions of reading about politics on the net. And I think this has partially broken some bad habits of compulsive web browsing I was developing.

So next I think I shall avoid all stupid politics for a month. No Facebook or Reddit, but perhaps one reasonably short and high-quality article on politics per day. Speaking of which, can anyone recommend any short, intelligent, rational writings on feminism, for instance? My average exposure to anti-feminist thought is fairly intelligent, while my average exposure to pro-feminist thought is "How can anyone disagree with me? Don't they realise that their opinions are just wrong? Women can be firefighters and Viking warriors! BTW, could you open this jar for me, I'm not strong enough." And this imbalance is not good from a rationalist POV. I am especially interested in whether feminists have tackled the argument that if feminists have fewer children, then all the genes that predispose one to being feminist (and to anything else that correlates) will be selected against. I mean, this isn't a concern for people who think that the singularity is near(tm) or who don't care what happens a few generations in the future, but I doubt either of these applies to many feminists, or to people in general.

Comment author: dogiv 10 March 2017 04:23:17PM 0 points

I haven't seen any feminists addressing that particular argument (most are concerned with cultural issues rather than genetic ones), but my initial sense is something like this: a successful feminist society would have 1) education and birth control easily available to all women, and 2) a roughly equal division of the burden of child-rearing between men and women. These changes would remove most of the current incentives that seem likely to cause a lower birth rate among feminists than non-feminists. Of course, it could remain true that feminists tend to be more educated, more independent, less traditional, etc.--traits that might correlate with reduced desire for children. However, I suspect we already have that issue (for both men and women) entirely separately from feminism. Some highly-educated countries try to increase fertility with tax incentives and ad campaigns (Denmark, for instance), but I'm not sure how successful they are. In the end, the only good solution to such selection effects may be genetic engineering.

Comment author: 9eB1 08 March 2017 08:15:55PM 0 points

I have sometimes mused that accumulating political power (or, more generally, being able to socially engineer) is the closest thing to magic that we have in the real world. It's the kind of force multiplier that magic gives a single protagonist in fiction. Most people who want magic also do not pursue political careers. Of course, this is only a musing, because there are lots of differences. No matter how much power you accumulate, you are still beholden to someone or something, so if independence is a big part of your magical power fantasy then it won't help.

Comment author: dogiv 09 March 2017 09:13:48PM 1 point

I would argue that the closest real-world analogue is computer hacking. It is a rare ability, but it can bestow a large amount of power on an individual who puts in enough effort and skill. Like magic, it requires almost no help from anyone else. The infrastructure has to be there, but since the infrastructure isn't designed to allow hacking, having the infrastructure doesn't make the ability available to everyone who can pay (the way, say, airplanes make flight available to anyone who can buy a ticket). If you look at the more fantasy-style sci-fi, science is often treated like magic--one smart scientist can do all sorts of cool stuff on their own. But it's never plausible. With hacking, that romanticization isn't nearly as far from reality.

Comment author: dogiv 09 March 2017 08:54:30PM 0 points

It seems like the key problem described here is that coalitions of rational people, when they form around scientific propositions, cause the group to become non-scientific out of a desire to support the coalition. The example that springs to my mind is climate change, where there is social pressure for scientifically minded people (or even those who just approve of science) to back the rather specific policy of reducing greenhouse gas emissions, rather than to probe other aspects of the problem or potential solutions and adaptations.

I wonder if we might solve problems like this by substituting some rational principle that is not subject to re-evaluation. Ultimate goals (CEV, or the like) would fit the bill in principle, but in practice, even if enough people could agree on them, I suspect they are too vague and remote to form a coalition around. The EA movement may be closer to succeeding: its key idea is not an ultimate goal but rather the general technique of quantitatively evaluating opportunities to achieve altruistic objectives. Still, it's difficult to extend a coalition like that to a broader population, since most people can't easily identify with it.

Perhaps the middle ground is to start with a goal that is controversial enough to distinguish coalition members from outsiders, but too vague to form a strong coalition around--say, aggregative consequentialism or something. Then find a clear practical implication of the goal that has the necessary emotional impact. As long as the secondary goal follows easily enough from the first goal that it won't need to be re-evaluated later on, the coalition can hold together and make progress toward the original goal without much danger of becoming irrational. Can't think of a good example for the sub-goal, though.

In response to Am I Really an X?
Comment author: math 06 March 2017 05:21:46PM 0 points

As I understand it, there is a phenomenon among transgender people where no matter what they do they can't help but ask themselves the question, "Am I really an [insert self-reported gender category here]?"

The obvious answer is "No". In fact, this experience seems suspiciously like trying to make oneself believe that one believes one's gender to be X.

Humans universally make inferences about their typicality with respect to their self-reported gender. Check Google Scholar for 'self-perceived gender typicality' for further reading. So when I refer to a transman, by my model, I mean, "A human whose self-reporting algorithm returns the gender category 'male', but whose self-perceived gender typicality checker returns 'Highly atypical!'"

And the word 'human' at the beginning of that sentence is important. I do not mean "A human that is secretly, essentially a girl," or "A human that is secretly, essentially a boy,"; I just mean a human. I postulate that there are not boy typicality checkers and girl typicality checkers; there are typicality checkers that take an arbitrary gender category as input and return a measure of that human's self-perceived typicality with regard to the category.

While we're assigning categories in complete defiance of common sense and evidence, why are we so sure that the category "human" is applicable?

In response to comment by math on Am I Really an X?
Comment author: dogiv 06 March 2017 05:44:02PM 0 points

I really wish we could have discussions like this without anyone questioning anyone else's humanity.
