kip1981

Although I'm a lawyer, I've developed my own pet meta-approach to philosophy. I call it the "Cognitive Biases Plus Semantic Ambiguity" approach (CB+SA). Both prongs (CB and SA) help explain the amazing lack of progress in philosophy.

First, cognitive biases - or (roughly speaking) cognitive illusions - are persistent by nature. It is no coincidence that cognitive illusions (like visual illusions) are persistent and that philosophy problems are persistent. Philosophy problems cluster around topics that involve cognitive illusions (positive outcome bias, the just-world phenomenon, the Lake Wobegon effect, the fundamental attribution error, etc.). I see this in my favorite topic area (the free will problem), but I believe it likely applies broadly across philosophy.

Second, semantic ambiguity creates persistent problems if it is not identified and fixed. The solutions to several of Hilbert's 23 problems are "no answer - the problem statement is not well defined." That approach is unsexy and emotionally dissatisfying (all of this work, yet we get no answer!). Perhaps for that reason, philosophers (but not mathematicians) seem completely incapable of doing it. Only on the rarest occasions do philosophers suggest that some term ("good", "morality", "rationalism", "free will", "soul", "knowledge") might not possess a definition precise enough to do the work that we ask of it. In fact, as with CB, philosophy problems tend to cluster around those that persist because of SA. (If the problems didn't persist, they might be considered trivial or boring.)

kip1981

My biggest criticism of SI is that I cannot decide between:

A. promoting AI and FAI issues awareness will decrease the chance of UFAI catastrophe; or

B. promoting AI and FAI issues awareness will increase the chance of UFAI catastrophe

This criticism seems distinct from the ones that Holden makes. But it is my primary concern. (Perhaps the closest example is Holden's analogy that SI is trying to develop Facebook before the Internet.)

A seems intuitive. Basically everyone associated with SI assumes that A is true, as far as I can tell. But A is not obviously true to me. It seems to me at least plausible that:

A1. promoting AI and FAI issues will get lots of scattered groups around the world more interested in creating AGI

A2. one of these groups will develop AGI faster than otherwise due to A1

A3. the world will be at greater risk of UFAI catastrophe than otherwise due to A2 (i.e. the group creates AGI faster than otherwise, and fails at FAI)

More simply: SI's general efforts, albeit well intended, might accelerate the creation of AGI, and the acceleration of AGI might decrease the odds of the first AGI being friendly. This is one path by which B, not A, would be true.

SI might reply that, although it promotes AGI, it very specifically limits its promotion to FAI. Although that is SI's intention, it is not at all clear that promoting FAI will not have the unintended consequence of accelerating UFAI. By analogy, if a responsible older brother goes around promoting gun safety all the time, the little brother might be more likely to accidentally blow his face off than if the older brother had just kept his mouth shut. Maybe the older brother shouldn't have kept his mouth shut, maybe he should have... it's not clear either way.

If B is more true than A, the best thing that SI could do would probably be to develop clandestine missions to assassinate people who try to develop AGI. SI does almost the exact opposite.

SI's efforts are based on the assumption that A is true. But it's far from clear to me that A, instead of B, is true. Maybe it is, maybe it isn't. SI seems overconfident that A is true. I've never heard anyone at SI (or elsewhere) really address this criticism.
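To make the A-versus-B question concrete, here is a minimal toy sketch (my own illustration, not anything SI has published). It decomposes catastrophe risk into "chance AGI arrives soon" times "chance that AGI is unfriendly"; the function name and every number are hypothetical placeholders, chosen only to show that outreach can lower or raise the overall risk depending on which factor it moves more.

```python
# Toy decomposition of the A-vs-B question. All parameters are
# hypothetical placeholders for illustration only.

def p_catastrophe(p_agi_soon, p_unfriendly_given_agi):
    """Chance of UFAI catastrophe under this toy decomposition."""
    return p_agi_soon * p_unfriendly_given_agi

# Baseline: no outreach.
baseline = p_catastrophe(p_agi_soon=0.10, p_unfriendly_given_agi=0.80)

# Scenario A: outreach mainly improves safety awareness among AGI builders.
scenario_a = p_catastrophe(p_agi_soon=0.11, p_unfriendly_given_agi=0.60)

# Scenario B: outreach mainly accelerates AGI work (the A1 -> A2 -> A3 chain).
scenario_b = p_catastrophe(p_agi_soon=0.20, p_unfriendly_given_agi=0.75)

print(round(baseline, 3), round(scenario_a, 3), round(scenario_b, 3))
# 0.08 0.066 0.15
```

Whether A or B wins is exactly the question of which parameter shift is larger, and that is what seems unclear to me.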

kip1981

He didn't count on the stupidity of mankind.

"Two things are infinite: the universe and human stupidity; and I'm not sure about the the universe."

kip1981

But why/how is it better spent that way?

kip1981

"I consistently refuse to be drawn into running the Singularity Institute. I have an overwhelming sense of doom about what happens if I start going down that road."

This strikes me as pretty strange. I would like to hear more about it.

Certainly, one can obtain status by other means, such as posting at OB and LW, presenting at conferences, etc. Are there other reasons why you don't want to "run" the Singularity Institute?

kip1981

I think these are great predictions.

kip1981

I agree that that's one reasonable interpretation.

I just want to emphasize that that standard is very different from the weaker "if I had to guess, I would say that the person actually committed the crime." The first standard is higher. Also, the law might forbid you from considering certain facts/evidence, even if you know in the back of your mind that the evidence is there and suggestive. There are probably other differences between the standards that I'm not thinking of.

kip1981

By guilty, do we mean "committed or significantly contributed to the murder"?

Or do we mean "committed or significantly contributed to the murder AND there is enough evidence showing that to satisfy the beyond-a-reasonable-doubt (or Italian equivalent) standard of proof for murder"?

The comments don't seem to make that distinction, but I think it could make a big difference.

kip1981

I would be surprised if Eliezer would cite Joshua Greene's moral anti-realist view with approval.

kip1981

GEB has always struck me as more clever than intelligent.
