Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: Daniel_Burfoot 04 May 2017 07:19:31PM 7 points [-]

Most of the pessimistic people I talk to don't think the government will collapse. It will just get increasingly stagnant, oppressive and incompetent, and that incompetence will make it impossible for individual or corporate innovators to do anything worthwhile. Think European-style tax rates, with American-style low quality of public services.

There will also be a blurring of the line between the government and big corporations. Corporations will essentially become extensions of the bureaucracy. Because of this they will never go out of business and they will also never innovate. Think of a world where all corporations are about as competent as Amtrak.

Comment author: WalterL 02 May 2017 02:58:05PM 2 points [-]

Thanks for the advice, but your username makes me reluctant to take it :).

Comment author: Daniel_Burfoot 03 May 2017 01:02:01AM 0 points [-]

LessWrong: kind of an odd place to find references to Christian ethical literature.

Comment author: Lumifer 26 April 2017 01:16:30AM *  1 point [-]

America, for all its terrible problems, is the world's leading producer of new technology.

True.

That means there is an enormous ethical rationale for trying to help American society continue to prosper.

Not true. There's a rationale for helping America continue to be inventive, but that's not the same thing at all as "continue to prosper," since the US looks at the moment like an empire in decline -- one that will continue to prosper for a while, but will be too ossified and sclerotic to continue innovating.

Note that it's received wisdom in Silicon Valley (and elsewhere) that you need to innovate in the world of bits because the world of atoms is too locked-down. There are some exceptions (see e.g. Musk), but overall the difference between innovations in bits and innovations in atoms is huge and stark.

Currently the most serious threat to the stability of American society is the culture war

Not true at all. Even in Berkeley what you have is young males playing political-violence LARP games (that's how you get laid, amirite?) and that's about it.

Read less media -- it optimizes for outrage.

Comment author: Daniel_Burfoot 26 April 2017 05:21:49AM 0 points [-]

and that's about it.

We can agree to disagree, but my view is that the US has dozens or hundreds of problems we can't solve - education, criminal justice, the deficit, the military-industrial complex - because the government is paralyzed by partisan hatred.

Comment author: Dagon 25 April 2017 11:12:31PM 0 points [-]

Please expand on "Currently the most serious threat to the stability of American society is the culture war", and provide some reasoning for "stability" being a driver of producing beneficial technology.

I dispute (or perhaps just don't understand) both premises. I also am not sure if you mean "end the culture war" or "win the culture war for my side". Is surrendering your recommended course of action?

Comment author: Daniel_Burfoot 25 April 2017 11:48:54PM 1 point [-]
  1. I live in Berkeley, where there are literally armed gangs fighting each other in the streets.
  2. Stability isn't intrinsically valuable. The point is that we know our current civilizational formula is a pretty good one for innovation and most others aren't, so we should stick to the current formula more or less.
  3. My recommendation is a political ceasefire. Even if we could just decrease the volume of partisan hate speech, without solving any actual problems, that seems like it would have a lot of benefits.
Comment author: Daniel_Burfoot 25 April 2017 10:52:21PM 2 points [-]

Claim: EAs should spend a lot of energy and time trying to end the American culture war.

America, for all its terrible problems, is the world's leading producer of new technology. Most of the benefits of the new technology actually accrue to people who are far removed from America in both time and space. Most computer technology was invented in America, and that technology has already done worlds of good for people in places like China, India, and Africa; and it's going to continue to help people all over the world in the centuries and millennia to come. Likewise for medical technology. If an American company discovers a cure for cancer, that will benefit people all over the globe... and it will also benefit the citizens of Muskington, the capital of the Mars colony, in the year 4514.

It should be obvious to any student of history that most societies, in most historical eras, are not very innovative. Europe in the 1000s was not very innovative. China in the 1300s was not very innovative, India in the 1500s was not very innovative, etc etc. France was innovative in the 1700s and 1800s but not so much today. So the fact that the US is innovative today is pretty special: the ability to innovate is a relatively rare property of human societies.

So the US is innovative, and that innovation is enormously beneficial to humanity, but it's naive to expect that the current phase of American innovation will last forever. And in fact there are a lot of signs that it is about to die out. Certainly if there were some large scale social turmoil in the US, like revolution, civil war, or government collapse, it would pose a serious threat to America's ability to innovate.

That means there is an enormous ethical rationale for trying to help American society continue to prosper. There's a first-order rationale: Americans are humans, and helping humans prosper is good. But more important is the second-order rationale: Americans are producing technology that will benefit all humanity for all time.

Currently the most serious threat to the stability of American society is the culture war: the intense partisan political hatred that characterizes our political discourse. EAs could have a big impact by trying to reduce partisanship and tribalism in America, thereby helping to lengthen and preserve the era of American innovation.

Comment author: Daniel_Burfoot 19 April 2017 02:13:42AM *  2 points [-]

I really want self-driving cars to be widely adopted as soon as possible. There are many reasons; the one that occurred to me today while walking down the street is this: look at all the cars on the street. Now imagine all the parked cars disappear, and only the moving cars remain. A lot less clutter, right? What could we do with all that space? That's the future we could have if SDCs appear (assuming that most people will use services like Lyft/Uber with robotic drivers instead of owning their own car).

Comment author: Daniel_Burfoot 08 April 2017 08:14:35PM 2 points [-]

I agree with the broad sentiment, but I think it's increasingly unrealistic to believe that the liberal/conservative distinction is based on a fundamental philosophical difference instead of just raw partisan tribal hatred. In theory people would develop an ethical philosophy and then join the party that best represents the philosophy, but in practice people pick a tribe and then adopt the values of that tribe.

Comment author: Daniel_Burfoot 05 April 2017 11:09:24PM *  2 points [-]

If there's anything we can do now about the risks of superintelligent AI, then OpenAI makes humanity less safe.

I feel quite strongly that people in the AI risk community are overly affected by the availability or vividness bias relating to an AI doom scenario. In this scenario some groups get into an AI arms race, build a general AI without solving the alignment problem, the AGI "fooms" and then proceeds to tile the world with paper clips. This scenario could happen, but some others could also happen:

  • An asteroid is incoming and going to destroy Earth. AI solves a complex optimization problem to allow us to divert the asteroid.
  • Terrorists engineer a virus to kill all persons with genetic trait X. An AI agent helps develop a vaccine before billions die.
  • By analyzing systemic risk in the markets, an AI agent detects and allows us to prevent the Mother of all Financial Meltdowns, which would have led to worldwide economic collapse.
  • An AI agent helps SpaceX figure out how to build a Mars colony for two orders of magnitude less money than otherwise, thereby enabling the colony to be built.
  • An AI system trained on vast amounts of bioinformatics and bioimaging data discovers the scientific cause of aging and also how to prevent it.
  • An AI climate analyzer figures out how to postpone climate change for millennia by diverting heat into the deep oceans, and gives us an inexpensive way to do so.
  • etc etc etc

These scenarios are equally plausible, involve vast benefit to humanity, and require only narrow AI. Why should we believe that these positive scenarios are less likely than the negative scenario?

Comment author: dogiv 20 March 2017 08:43:50PM 2 points [-]

Interesting piece. It seems like coming up with a good human-checkable way to evaluate parsing is pretty fundamental to the problem. You may have noticed already, but Ozora is the only one that didn't figure out that "easily" goes with "parse".

Comment author: Daniel_Burfoot 20 March 2017 09:11:37PM 0 points [-]

Good catch. Adverbial attachment is really hard, because there aren't a lot of rules about where adverbs can go.

Actually, Ozora's parse has another small problem, which is that it interprets "complex" as an NN with a "typeadj" link, instead of as a JJ with an "adject" link. The typeadj link is used for noun-noun pairings such as "police officer", "housing crisis", or "oak tree".

For words that can function as both NN and JJ (e.g. "complex"), it is quite hard to disambiguate the two patterns.
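To make the NN/JJ ambiguity concrete, here is a minimal toy sketch of how a rule-based disambiguator might decide between a "typeadj" (noun-noun) link and an "adject" (adjective-noun) link. This is not Ozora's actual algorithm; the lexicons and fallback rule are invented purely for illustration:

```python
# Toy NN-vs-JJ disambiguation for a prenominal modifier such as "complex".
# NOT Ozora's method: the lexicons and the fallback rule are hypothetical.

# Modifier+head pairs we treat as attested noun-noun compounds.
KNOWN_COMPOUNDS = {
    ("police", "officer"),
    ("housing", "crisis"),
    ("oak", "tree"),
}

# Words attested as adjectives in our tiny, hypothetical lexicon.
ADJECTIVES = {"complex", "red", "tall"}

def link_type(modifier: str, head: str) -> str:
    """Guess whether modifier+head is a noun-noun pairing ("typeadj")
    or an adjective-noun pairing ("adject")."""
    if (modifier, head) in KNOWN_COMPOUNDS:
        return "typeadj"          # known compound wins outright
    if modifier in ADJECTIVES:
        return "adject"           # otherwise prefer the adjective reading
    return "typeadj"              # default unknown modifiers to noun-noun

print(link_type("police", "officer"))    # typeadj
print(link_type("complex", "problem"))   # adject
```

The hard cases are exactly the ones this sketch papers over: a word like "complex" appears in both lexicons ("apartment complex" vs. "complex problem"), so a real parser needs context beyond a pair lookup.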

[Link] Chuckling a Bit at Microsoft and the PCFG Formalism

5 Daniel_Burfoot 20 March 2017 07:37PM
