
Comment author: gworley 23 April 2017 08:24:30PM 2 points [-]

I like this because it helps answer the anthropics problem of existential risk, namely that we should not expect to find ourselves in a universe that gets destroyed, and more specifically that you should not find yourself personally living in a universe where the history of your experience is lost. I say this because it is evidence that we will likely avoid a failure of AI alignment that destroys us, or at least that we will not find ourselves in a universe where AI destroys us all, because alignment will turn out to be easier in practice than we expect it to be in theory. That alignment seems necessary for this still makes it a worthy pursuit, since progress on the problem increases our measure, but it also fixes the problem of believing in the low-probability event of finding yourself in a universe where you don't continue to exist.

Comment author: woodchopper 06 May 2017 05:59:18PM 0 points [-]

and more specifically that you should not find yourself personally living in a universe where the history of your experience is lost. I say this because it is evidence that we will likely avoid a failure of AI alignment that destroys us, or at least that we will not find ourselves in a universe where AI destroys us all, because alignment will turn out to be easier in practice than we expect it to be in theory.

Can you elaborate on this idea? What do you mean by 'the history of your experience is lost'? Can you supply some links to read on this whole theory?

Comment author: contravariant 24 April 2017 06:17:45PM 0 points [-]

Evolution is smarter than you.

Could you qualify that statement? If I were given a full-time job to find the best way to increase some bacterium's fitness, I'm sure I could study the necessary microbiology and find at least some improvement well before evolution could. Yes, evolution created things that we don't yet understand, but then again, she had a planet's worth of processing power and seven orders of magnitude more time to do it - and yet we can still see many obvious errors. Evolution has much more processing power than me, sure, but I wouldn't say she is smarter than me. There's nothing evolution created over all its history that humans weren't able to overpower in an eyeblink of time. Things like lack of foresight and the inability to reuse knowledge or exchange it among species mean that most of this processing power is squandered.

Comment author: woodchopper 06 May 2017 05:57:17PM 0 points [-]

Could you qualify that statement?

Can you make an AGI given only primordial soup?

Comment author: tukabel 22 April 2017 10:54:15PM 4 points [-]

Welcome to the world of Memetic Supercivilization of Intelligence... living on top of the humanimal substrate.

It appears in maybe less than a percent of the population and produces all these ideas/science and subsequent inventions/technologies. This usually happens in a completely counter-evolutionary way, as the individuals responsible mostly get very little profit (or even recognition) from it and would do much better (in evolutionary terms) to use their abilities a bit more "practically". Even the motivation is usually completely memetic: typically it runs along lines like "it is interesting" to study something, think about this and that, research some phenomenon or mystery.

Worse, they give stuff more or less for free and without any control to the ignorant mass of humanimals (especially those in power), empowering them far beyond their means, in particular their abilities to control and use these powers "wisely"... since they are governed by their DeepAnimal brain core and resulting reward functions (that's why humanimal societies function the same way for thousands and thousands of years - politico-oligarchical predators living off the herd of mental herbivores, with the help of mindfcukers, from ancient shamans, through the stone age religions like the catholibanic one, to the currently popular socialist religion).

AI is not a problem, humanimals are.

Our sole purpose in the Grand Theatre of the Evolution of Intelligence is to create our (first nonbio) successor before we manage to self-destruct. Nukes were already too much, and once nanobots arrive, it's over (worse than a one-dollar DIY nuclear grenade that any teenager or terrorist could assemble in a garage).

Singularity should hurry up; there are maybe just a few decades left.

Do you really want to "align" AI with humanimal "values"? Especially if nobody knows what we are really talking about when using this magic word? Not to mention defining it.

Comment author: woodchopper 06 May 2017 05:55:06PM 0 points [-]

An AI will have a utility function. What utility function do you propose to give it?

What values would we give an AI if not human ones? Giving it human values doesn't necessarily mean giving it the values of our current society. It will probably mean distilling our most core moral beliefs.

If you take issue with that, all you are saying is that you want an AI to have your values, rather than humanity's as a whole.
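As a minimal sketch of what "giving an AI a utility function distilled from human values" could mean in practice, here is a toy version in Python. The value dimensions, weights, and plans are all hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass

# Hypothetical value dimensions an outcome might be scored on.
@dataclass
class Outcome:
    wellbeing: float  # welfare the action produces, in [0, 1]
    autonomy: float   # respect for people's own choices, in [0, 1]

# Two (made-up) humans who share the same core values but weight them differently.
human_value_weights = [
    {"wellbeing": 0.8, "autonomy": 0.2},
    {"wellbeing": 0.4, "autonomy": 0.6},
]

def distilled_utility(outcome: Outcome) -> float:
    """Average each human's weighted score: one crude 'distillation'."""
    scores = [
        w["wellbeing"] * outcome.wellbeing + w["autonomy"] * outcome.autonomy
        for w in human_value_weights
    ]
    return sum(scores) / len(scores)

# The agent picks whichever (hypothetical) plan maximizes the distilled function.
plans = {"plan_a": Outcome(0.9, 0.3), "plan_b": Outcome(0.5, 0.8)}
best = max(plans, key=lambda p: distilled_utility(plans[p]))
print(best)  # plan_a under these weights
```

Averaging weights like this is exactly where the disagreement bites, of course: whose weights go in, and why should averaging be the aggregation rule?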

In response to AI arms race
Comment author: RyanCarey 06 May 2017 09:18:59AM *  1 point [-]

Someone pointed out to me that we should probably stop calling superintelligence a possible "arms race". In an "arms race", you're competing to have a stronger force than the other side. You want to keep your nose in front in case of a fight.

Developing superintelligence, on the other hand, is just a plain old race. A technology race. You simply want to get to the destination first.

(Likewise with developing the first nuke, which also involved arms but was not an arms race.)

In response to comment by RyanCarey on AI arms race
Comment author: woodchopper 06 May 2017 05:04:59PM 0 points [-]

Developing an AGI (and then an ASI) will likely involve a series of steps involving lower intelligences. There's already an AI arms race between several large technology companies, and keeping your nose in front is already standard practice because there's a lot of utility in having the best AI so far.

So it isn't true to say that it's simply a race without important intermediate steps. You don't just want to get to the destination first; you want to make sure your AI is the best for most of the race, for a whole heap of reasons.

Comment author: Lumifer 28 April 2017 02:55:57PM 1 point [-]

It takes more than prosperity for innovation to happen. It takes a combination of factors that nobody really understands.

I don't know about that. People have been discussing how an innovation hub (like Silicon Valley) appears and how one might create one -- that is a difficult problem, partly because starting a virtuous circle is hard.

But general innovation in a society? Lemme throw in some factors off the top of my head:

  • Low barriers to entry (to experimentation, to starting up businesses, etc.). That includes a permissive legal environment and a light regulatory hand.
  • A properly Darwinian environment where you live or die (quickly) by market success and not by whether you managed to bribe the right bureaucrat.
  • Relatively low stigma attached to failure.
  • Sufficient numbers of high-IQ people who are secure enough to take risks.
  • Enough money floating around to fund high-risk ventures.
  • For basic science, enough money coupled with the willingness to throw it at very high-IQ people and say "Make something interesting with it".

Comment author: woodchopper 30 April 2017 05:21:46AM 1 point [-]

That's a partial list. It also takes good universities, a culture that produces a willingness to take risks, a sufficient market for good products, and I suspect a litany of other things.

I think that once a society has genuinely started innovating, it can be hard to kill that off, but it can be done and has been done. The problem is, as you mentioned, that very few societies have ever been particularly innovative.

It's easy to use established technology to build a very prosperous first-world society: Australia, Canada, and Sweden, for example. But it's much harder for a society to genuinely drive humanity forward, and in the history of humanity it has only happened a few times. We forget that for a very long time, very little invention happened anywhere in human society.

Comment author: Daniel_Burfoot 25 April 2017 10:52:21PM 2 points [-]

Claim: EAs should spend a lot of energy and time trying to end the American culture war.

America, for all its terrible problems, is the world's leading producer of new technology. Most of the benefits of the new technology actually accrue to people who are far removed from America in both time and space. Most computer technology was invented in America, and that technology has already done worlds of good for people in places like China, India, and Africa; and it's going to continue to help people all over the world in the centuries and millennia to come. Likewise for medical technology. If an American company discovers a cure for cancer, that will benefit people all over the globe... and it will also benefit the citizens of Muskington, the capital of the Mars colony, in the year 4514.

It should be obvious to any student of history that most societies, in most historical eras, are not very innovative. Europe in the 1000s was not very innovative. China in the 1300s was not very innovative, India in the 1500s was not very innovative, etc etc. France was innovative in the 1700s and 1800s but not so much today. So the fact that the US is innovative today is pretty special: the ability to innovate is a relatively rare property of human societies.

So the US is innovative, and that innovation is enormously beneficial to humanity, but it's naive to expect that the current phase of American innovation will last forever. And in fact there are a lot of signs that it is about to die out. Certainly if there were some large scale social turmoil in the US, like revolution, civil war, or government collapse, it would pose a serious threat to America's ability to innovate.

That means there is an enormous ethical rationale for trying to help American society continue to prosper. There's a first-order rationale: Americans are humans, and helping humans prosper is good. But more important is the second-order rationale: Americans are producing technology that will benefit all humanity for all time.

Currently the most serious threat to the stability of American society is the culture war: the intense partisan political hatred that characterizes our political discourse. EAs could have a big impact by trying to reduce partisanship and tribalism in America, thereby helping to lengthen and preserve the era of American innovation.

Comment author: woodchopper 28 April 2017 11:03:44AM 2 points [-]

I think it's an interesting point about innovation actually being very rare, and I agree. It takes a special combination of things for it to happen, and that combination doesn't come around much. Britain was extremely innovative a few hundred years ago. In fact, they started the industrial revolution, literally revolutionising humanity. But today they do not strike me as particularly innovative even with that history behind them.

I don't think America's ability to innovate is coming to an end all that soon. But even if America continues to prosper, will that mean it continues to innovate? It takes more than prosperity for innovation to happen. It takes a combination of factors that nobody really understands. It takes a particular culture, a particular legal system, and much more.

Comment author: Dagon 27 October 2016 06:07:31PM 0 points [-]

I think we can all agree that an entity's anticipated future experiences matter to that entity. I hope (but would be interested to learn otherwise) that imaginary events such as fiction don't matter. In between, there is a hugely wide range of how much it's worth caring about distant events.

I'd argue that outside your light-cone is pretty close to imaginary in terms of care level. I'd also argue that events after your death are pretty unlikely to affect you (modulo basilisk-like punishment or reward).

I actually buy the idea that you care about (and are willing to expend resources on) subjunctive realities on behalf of not-quite-real other people. You get present value from imagining good outcomes for imagined-possible people even if they're not you. This has to get weaker as it gets more distant in time and more tenuous in connection to reality, though.

But that's not even the point I meant to make. Even if you care deeply about the far future for some reason, why is it reasonable to prefer weak, backward, stupid entities over more intelligent and advanced ones? Preferring them just because they're made of similar meat-substance to you seems a bit parochial, and hypocritical given the way you treat slightly less-capable organic beings like lettuce.

Woodchopper's post indicated that he'd violently interfere with (indirectly, via criminalization) activities that make it infinitesimally more likely that we'll be identified and located by ETs. This is well beyond reason, even if I overstated my long-term lack of care.

Comment author: woodchopper 28 October 2016 02:05:25PM 0 points [-]

You have failed to answer my question. Why does anything at all matter? Why does anything care about anything at all? Why don't I want my dog to die? Obviously, when I'm actually dead, I won't want anything at all. But there is no reason I cannot have preferences now regarding events that will occur after I am dead. And I do.

Comment author: Dagon 26 October 2016 02:22:02PM 1 point [-]

A lot of humans care (or at least signal that they care in far-mode) about what happens in the future. That doesn't make it sane or reasonable.

Why does it matter to anyone today whether the beings inhabiting Earth's solar system in 20 centuries are descended from apes, or made of silicon, or came from elsewhere?

Comment author: woodchopper 27 October 2016 12:04:37AM 0 points [-]

Why does anything at all matter?

Comment author: skeptical_lurker 10 October 2016 06:14:36PM 2 points [-]

We live in an increasingly globalised world, where moving between countries is both easier in terms of transport costs and more socially acceptable. Once translation reaches near-human levels, language barriers will be far less of a problem. I'm wondering to what extent evaporative cooling might happen to countries, both in terms of values and economically.

I read that France and Greece lost 3% and 5% of their millionaires last year (or possibly the year before), with the emigrants citing economic depression and rising racial/religious tension, and with the most popular destination being Australia (which has the 1st or 2nd highest HDI in the world). 3-5% may not seem like a lot, but sustained for several years it quickly piles up. The feedback effects are obvious: the wealthier members of society find it easier to leave and perhaps have more of a motive to flee an economic collapse, which decreases tax revenue, which deepens the collapse, etc. On the flip side, Australia attracts these people and its economy grows, making it even more attractive...
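As a quick arithmetic check on how fast that piles up, a minimal sketch; the 3% and 5% rates are the ones quoted above, while the ten-year horizon is an assumption for illustration:

```python
# Compounding a sustained annual outflow of millionaires over ten years.
for rate in (0.03, 0.05):
    remaining = (1 - rate) ** 10  # fraction left after 10 years
    print(f"{rate:.0%}/yr for 10 years: {remaining:.0%} remain, {1 - remaining:.0%} gone")
# 3%/yr: 74% remain, 26% gone
# 5%/yr: 60% remain, 40% gone
```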

Socially, the same effect as described in EY's essay I linked happens on a national scale - if the 'blue' people leave, the country becomes 'greener' which attracts more greens and forces out more blues. And social/economic factors feed into each other too - economic collapses cause extremism of all sorts, while I imagine a wealthy society attracting elites would be more able to handle or avoid conflicts.

Now, this is not automatically a bad thing; or at least it might be bad locally for some people, but perhaps not globally. Any thoughts as to what sort of outcomes there might be? And incidentally, how many people can you fit in Australia? I know it's very big, but it also has a lot of desert.

Comment author: woodchopper 26 October 2016 10:54:41AM 0 points [-]

In Australia we currently produce enough food for 60 million people. This is without any intensive farming techniques at all. This could be scaled up by a factor of ten if it were really necessary, but quality of life per capita would suffer.

I think smaller nations are, as a general rule, governed much better, so I don't see any positives in increasing our population beyond the current 24 million people.

Comment author: Houshalter 10 October 2016 08:15:41PM 0 points [-]

Friendly AI is an AI which maximizes human values. We know what it is, we just don't know how to build one. Yet, anyway.

Comment author: woodchopper 26 October 2016 10:50:51AM 1 point [-]

Each human differs in their values. So it is impossible to build the machine of which you speak.
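A concrete way to see the difficulty is Condorcet's voting paradox: individually coherent preferences can aggregate into a cycle that no single ranking, and hence no single utility function, can represent. A minimal sketch, with hypothetical voters and options:

```python
# Condorcet's paradox: three voters with individually coherent rankings.
# Lower index in the tuple = more preferred.
voters = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x: str, y: str) -> bool:
    """True if a majority of voters rank x above y."""
    return sum(r.index(x) < r.index(y) for r in voters) > len(voters) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# All three lines print True: A beats B, B beats C, C beats A.
# The majority preference is a cycle, so no single ranking (and hence no
# single utility function over these options) agrees with "the group".
```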
