
Comment author: Riteofwhey 24 July 2015 03:36:06PM 1 point [-]

Verification seems like a strictly simpler problem. If we can't prove properties for a web server, how are we going to do anything about a completely unspecified AI?

The AI takeover scenarios I've heard almost always involve some kind of hacking, because today hacking is easy. I don't see why that would necessarily be the case a decade from now. We could prove some operating-system security guarantees, for instance.

Comment author: gjm 24 July 2015 05:21:54PM 2 points [-]

Yes, verification is a strictly simpler problem, and one that's fairly thoroughly addressed by existing research -- which is why people working specifically on AI safety are paying attention to other things.

(Maybe they should actually be working on doing verification better first, but that doesn't seem obviously a superior strategy.)

Some AI takeover scenarios involve hacking (by the AI, of other systems). We might hope to make AI safer by making that harder, but that would require securing all the other important computer systems in the world. Even though making an AI safe is really hard, it may well be easier than that.

Comment author: jacob_cannell 23 July 2015 11:14:55PM *  1 point [-]

One question/concern I have been monitoring for a while now is the response from conservative Christianity. It's not looking good. Google "Singularity image of the beast" to get an idea.

What kind of problems do you think this will lead to, down the line?

Hopefully none - but the conservative Protestant faction seems to have considerable political power in the US, which could lead to policy blunders. Due to that one stupid book (Revelation), the xian biblical worldview is almost programmed to lash out at any future system which offers actual immortality. The controversy over stem cells and cloning is perhaps just the beginning.

On the other hand, out of all religions, liberal xtianity is perhaps closest to transhumanism, and could be its greatest ally.

As an example, consider this quote:

It is a serious thing to live in a society of possible gods and goddesses, to remember that the dullest and most uninteresting person you talk to may one day be a creature which, if you saw it now, you would be strongly tempted to worship.

This sounds like something a transhumanist might say, but it's actually from C.S. Lewis:

The command Be ye perfect is not idealistic gas. Nor is it a command to do the impossible. He is going to make us into creatures that can obey that command. He said (in the Bible) that we were "gods" and He is going to make good His words. If we let Him—for we can prevent Him, if we choose—He will make the feeblest and filthiest of us into a god or goddess, dazzling, radiant, immortal creature, pulsating all through with such energy and joy and wisdom and love as we cannot now imagine, a bright stainless mirror which reflects back to God perfectly (though, of course, on a smaller scale) His own boundless power and delight and goodness. The process will be long and in parts very painful; but that is what we are in for. Nothing less. He meant what He said.

Divinization or apotheosis is one of the main belief currents underlying xtianity, emphasized to varying degrees across sub-variations and across time.

..

[We already create lots of new agents with different beliefs ...]

This is true, but:

  1. I'm not comparing ANN-based AGI to the status quo, but to a future with some sort of near-optimal FAI.

The practical real world FAI that we can create is going to be a civilization that evolves from what we have now - a complex system of agents and hierarchies of agents. ANN-based AGI is a new component, but there is more to a civilization than just the brain hardware.

  2. The new agents we currently create aren't much more powerful than ourselves, and cannot take over the universe and foreclose the possibility of a better outcome.

Humanity today is enormously more powerful than our ancestors from, say, a few thousand years ago. AGI just continues the exponential time-acceleration trend; it doesn't necessarily change the trend.

From the perspective of humanity of a thousand years ago, friendliness mainly boils down to a single factor: will the future posthuman civ resurrect them into a heaven sim?

  3. Humans or humanity as a whole seem capable of making moral and philosophical progress, and this capability is likely to persist in future generations. I'm not sure the same will be true of ANN-based AGIs.

Why not?

One of the main implications of the brain being a ULM is that friendliness is not just a hardware issue. There is a hardware component in terms of the value learning subsystem, but once you solve that, it is mostly a software issue. It's a culture/worldview/education issue. The memetic software of humanity is the same software that we will instill into AGI.

That being said, I do believe that the AGI we create will be far more aligned with our values than our children are.

I look forward to your post explaining this, but again my fear is that since to a large extent I don't know what my own values are (especially when it comes to post-Singularity problems like how to reorganize the universe on a large scale ...

I don't see how that is a problem. You may not know yourself completely, but you have some estimate of, or distribution over, your values. As long as you continue to exist into the future, and as long as you have a significant share in the future decision structure (i.e. wealth or voting rights), that should suffice - you will have time to figure out your long-term values.

Are you not worried that during this time, the AGIs will take over the universe and reorganize it according to their imperfect understanding of our values, which will look disastrous when we become superintelligences ourselves and figure out what we really want?

This is a potential worry, but it can probably be prevented.

The brain is reasonably efficient in terms of intelligence per unit of energy. Brains evolved from the bottom up, and biological cells are near-optimal nanocomputers - near optimal both in storage density (DNA) and in energy cost per irreversible bit operation (DNA copying, protein computations). The energetic cost of computation in brains and modern computers alike is dominated by wire energy dissipation (bits/J/mm). Moore's law is approaching its end, which will result in hardware that is on par with or a little better than the brain. With huge investments into software cleverness, we can close the gap and achieve AGI. In 5 years or so, let's say that 1 AGI runs amortized on 1 GPU (neuromorphics doesn't change this picture dramatically). That means an AGI will only require 100 watts of power and, say, $1,000/year. That is about a 100x productivity increase - even in a pinch, humans need around $10,000 a year just to survive.

Today the foundry industry produces about 10 million mid-to-high-end GPUs per year. There are about 100 million human births per year, around 4 million of them in the US. If we consider only humans with IQ > 135, there are only about 1 million such births per year. This puts some constraints on the transition time, which is likely measured in years.
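
For concreteness, here is a rough sketch of that arithmetic in Python. The 100 W, $1,000/year, GPU-production and birth-rate figures come from the comment itself; the electricity price and the ~$100,000/year cost of a typical human worker are assumptions added purely for illustration.

    # Back-of-envelope check of the figures above.
    WATTS_PER_AGI = 100              # claimed power draw (one GPU)
    KWH_PRICE = 0.10                 # assumed electricity price, $/kWh
    HOURS_PER_YEAR = 24 * 365

    energy_cost = WATTS_PER_AGI / 1000 * HOURS_PER_YEAR * KWH_PRICE
    print(f"energy cost/year:  ${energy_cost:,.0f}")      # ~$88, well under $1,000

    agi_cost = 1_000                 # claimed all-in cost (amortized hardware + energy)
    human_typical = 100_000          # assumed cost of a typical worker
    human_subsistence = 10_000       # from the comment
    print(f"vs typical worker: {human_typical / agi_cost:.0f}x cheaper")       # ~100x
    print(f"vs subsistence:    {human_subsistence / agi_cost:.0f}x cheaper")   # ~10x

    gpus_per_year = 10_000_000       # mid-to-high-end GPUs produced per year
    births_per_year = 100_000_000    # human births per year
    high_iq_births = 1_000_000       # births with IQ > 135 (~1%)
    print(f"GPU 'births' per human birth:   {gpus_per_year / births_per_year:.1f}")   # 0.1
    print(f"GPU 'births' per high-IQ birth: {gpus_per_year / high_iq_births:.0f}")    # 10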

We don't need to instill values so perfectly that we can rely on our AGI to solve all of our problems until the end of time - we just need AGI to be similar enough to us that it can function as at least a replacement for future human generations and fulfill the game theoretic pact across time of FAI/god/resurrection.

Comment author: gjm 24 July 2015 11:29:28AM 3 points [-]

liberal xtianity is perhaps closest to transhumanism, and could be its greatest ally

There's some truth in the first half of that, but I'm not so sure about the second. Expecting that God will at some point transform us into something beyond present-day humanity is a very different thing from planning to make that transformation ourselves. That whole "playing God" accusation probably gets worse, rather than better, if you're actually expecting God to do the thing in question on his own terms and his own schedule.

For a far-from-perfect analogy, consider the interaction between creationism and climate change. You might say: Those who fear that human activity might lead to disastrous changes in the climate, including serious harm to humanity, should find their greatest allies in those who believe that in the past God brought about a disastrous change in the earth's climate and wrought serious harm to humanity. But, no, of course it doesn't actually work that way; what actually happens is that creationists say "human activity can't harm the climate much; God promised no more worldwide floods" or "the alleged human influence on climate is on a long timescale, and God will be wrapping everything up soon anyway".

Comment author: Clarity 23 July 2015 11:48:53PM 1 point [-]

The healthcare startup scene surprises me.

Why doesn't the free home doctor service put free (bulk-billed) medical clinics out of business?

Why did MetaMed go out of business?

Comment author: gjm 24 July 2015 11:22:13AM 4 points [-]

MetaMed's service was expensive. I would guess they didn't find enough takers.

Comment author: Riteofwhey 24 July 2015 02:47:37AM *  5 points [-]

Thanks for doing this. A lack of self-criticism about AI risk is one of the reasons I don't take it too seriously.

I generally agree with http://su3su2u1.tumblr.com/ , but it may not be organized enough to be helpful.

As for MIRI specifically, I think you'd be much better served by mainstream software verification and cryptography research. I've never seen anyone address why that is not the case.

I have a bunch of disorganized notes about why I'm not convinced of AI risk, if you're interested I could share more.

Comment author: gjm 24 July 2015 11:18:17AM 3 points [-]

I've never seen anyone address why that is not the case.

It's solving a different problem.

Problem One: You know exactly what you want your software to do, at a level of detail sufficient to write the software, but you are concerned that you may introduce bugs in the implementation or that it may be fed bad data by a malicious third party, and that in that case terrible consequences will ensue.

Problem Two: You know in a vague, handwavy way what you want your software to do, but you don't yet know it with enough precision to write the software. You are concerned that if you get this wrong, the software will do something subtly different from what you really wanted, and terrible consequences will ensue.

Software verification and crypto address Problem One. AI safety is an instance of Problem Two, and potentially an exceptionally difficult one.

Comment author: Lumifer 21 July 2015 05:12:12AM 1 point [-]

People will still write short replies.

Andthenfilltheremainderof500characterswithtrashjustsothatthestupidmachinebesatisfiedandtheywouldnothavetopaythe-1karmapricesinceit'seasytojustfillupspacewatchme:ooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahooohaaahphew.

Comment author: gjm 21 July 2015 02:01:09PM 1 point [-]

And then get a storm of downvotes that cancels out the benefit they hoped to gain by padding their comment. And then probably not do it again.

What I'd be more worried about is that short comments may be more valuable than you would think from their average karma -- e.g., perhaps in some cases short not-exceptionally-insightful comments form (as it were) the skeleton of a discussion, within which insights might emerge. Or perhaps if everyone felt they mustn't post short comments unless they were exceptionally insightful, the barrier to participation would feel high enough that scarcely anyone would ever post anything, and LW would just wither.

Comment author: Clarity 21 July 2015 04:02:56AM *  3 points [-]

Towards trainable mental skills for domain-neutral high-performance cognitive reappraisal.

Blessed with the capacity for cognitive reappraisal, one is constantly confronted with some degree of freedom over the emotion one experiences in a given moment of clarity. How does one decide upon an emotion?

Therapeutic considerations dominate the literature on cognitive reappraisal; however, performance considerations take a share of the pie too. To illustrate the latter, sports psychologists have identified certain emotions as higher-performance and lower-performance emotions in sport. Some of the results are counter-intuitive and partially incompatible with the therapeutic research. Importantly, it appears that students' performance emotions in the classroom differ from athletes'. This makes it difficult to arrive at a general theory of performance emotion which can be applied to any arbitrary situation, from a political negotiation, to editing a Wikipedia page, to conducting a semiotic analysis in one's mental space.

But hope is not lost. The Yerkes-Dodson law is a generalised 'theory'/law of learning and performance as predicted from the stress (anxiety) response. Perhaps if we stratified the stress response of students in classrooms, athletes on the track, and other performance scenarios, then mathematically transformed the data to model the stress/anxiety-to-performance relationship, we might be able to classify emotions by their impact on human performance in tasks where novelty, unpredictability, low self-efficacy or a threat of negative social evaluation can be predicted.
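
As a very rough sketch of what that modelling might look like - using entirely made-up numbers rather than data from any study - one could pool normalised stress/performance observations and fit an inverted-U curve in the spirit of the Yerkes-Dodson law:

    # Toy inverted-U ("Yerkes-Dodson style") fit on made-up data.
    import numpy as np

    # hypothetical (stress, performance) pairs, e.g. pooled from classrooms
    # and track athletes after normalising each setting's scores to 0-1
    stress      = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
    performance = np.array([0.30, 0.45, 0.60, 0.72, 0.75, 0.70, 0.55, 0.40, 0.25])

    # quadratic fit: performance ~ a*stress^2 + b*stress + c
    a, b, c = np.polyfit(stress, performance, deg=2)
    optimum = -b / (2 * a)           # vertex of the inverted U
    print(f"fitted curve: {a:.2f}*s^2 + {b:.2f}*s + {c:.2f}")
    print(f"predicted optimal stress level: {optimum:.2f}")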

anxiety risk management

I've been toying with the idea of a risk management framework for rationalists.

The question is: how do you align your willingness to take risks with your ability to do so?

I frame risks in terms of threats to mental well-being, and figure anxiety is a good catch-all. Then I practice defensive pessimism to identify what I'm willing to lose before my anxiety level rises to a point of internal dissent. Then I gamble all of that on a diversified portfolio of risky activities with the highest potential rewards I can muster. I try to convert ~70% of these rewards into non-property gains (e.g. learning, happiness, relationships) and reinvest the rest in future gambles, in so far as my baseline anxiety tolerance hasn't risen or fallen.

Comments?

Comment author: gjm 21 July 2015 01:54:02PM 1 point [-]

You have two links ("cognitive appraisal" and "Yerkes-Dodson law") to www.en.wikipedia.org, which doesn't exist; they should go to en.wikipedia.org.

Comment author: cleonid 20 July 2015 10:15:41PM 3 points [-]

It’s an interesting possibility. But I have looked at the data and for all ten users the comments above 1000 characters get higher average ratings than shorter comments.

Comment author: gjm 20 July 2015 10:17:14PM 0 points [-]

Aha, excellent.

Comment author: James_Miller 19 July 2015 04:10:02PM *  4 points [-]

Zoltan is articulate, extremely good-looking, and willing to put in a lot of work to become president. Imagine one or both of the major U.S. political parties become discredited and Zoltan gets significant financial support from a high-tech billionaire. He could then have a non-trivial chance of becoming president, although the odds of this ever happening are still under 0.1%.

Comment author: gjm 20 July 2015 05:08:39PM 0 points [-]

I'd guess less than 5% chance for each major party to get discredited, maybe 50% chance that after that a high-tech billionaire decides it's a good time to try to shape politics, maybe a 2% chance that s/he chooses Zoltan, and no more than a 20% chance that Zoltan wins after all that happens. I make that about a 0.0005% chance, being quite generous.
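
The figure follows if the 5% is applied independently to each of the two major parties (so both must be discredited). A quick check, with the probabilities as stated above:

    # Rough reproduction of the estimate, assuming the 5% applies
    # independently to each of the two major parties.
    p_each_party_discredited = 0.05
    p_billionaire_steps_in = 0.50
    p_billionaire_picks_zoltan = 0.02
    p_zoltan_then_wins = 0.20

    p = (p_each_party_discredited ** 2) * p_billionaire_steps_in \
        * p_billionaire_picks_zoltan * p_zoltan_then_wins
    print(f"{p:.7f} = {p * 100:.4f}%")   # 0.0000050 = 0.0005%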

So, yeah, "remotely theoretically possible" is about as far as it goes.

Comment author: gjm 20 July 2015 05:04:58PM 9 points [-]

In the "top 10" aggregate, you are at risk of the following Simpsonian problem: you have two posters A and B; one writes longer comments than the other and also happens to be cleverer / more interesting / funnier / better at appealing to the prejudices of the LW crowd. So in the whole group there is a positive correlation between length and quality, but actually everyone likes A's shorter comments better and everyone likes B's shorter comments better. (Or, of course, likewise but with "longer" and "shorter" switched.)

Comment author: Val 18 July 2015 01:03:03PM 3 points [-]

This effect also exists in software development:

http://thedailywtf.com/articles/The-Defect-Black-Market

Comment author: gjm 18 July 2015 03:44:34PM 1 point [-]

Famous Dilbert cartoon on this topic.
