
Comment author: evand 27 April 2017 04:27:38PM 0 points [-]

On the other hand... what level do you want to examine this at?

We actually have pretty good control of our web browsers. We load random untrusted programs, and they mostly behave ok.

It's far from perfect, but it's a lot better than the desktop OS case. Asking why one case seems to be so much farther along than the other might be instructive.

Comment author: whpearson 28 April 2017 07:29:30AM 0 points [-]

In some ways the browser is better, but it is also more limited. It still has problems like CSRF and XSS, which can be seen as failures of the user to control their systems. Those are getting better; for CSRF, by making servers more wary about what they accept as legitimate requests.
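To make "more wary about what they accept as legitimate requests" concrete, here is a minimal sketch of per-session CSRF token checking. The session object and function names are illustrative, not any particular framework's API.

    import hmac
    import secrets

    def issue_csrf_token(session):
        """Generate a per-session token, remember it server-side, and embed it in outgoing forms."""
        token = secrets.token_urlsafe(32)
        session["csrf_token"] = token
        return token

    def is_legitimate_request(session, submitted_token):
        """Accept a state-changing request only if it echoes back this session's token.

        A forged cross-site request rides on the victim's cookies, but it cannot
        know this token, so the server can reject it.
        """
        expected = session.get("csrf_token")
        if not expected or not submitted_token:
            return False
        return hmac.compare_digest(expected, submitted_token)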

I'll write an article this weekend on the two main system design patterns to avoid. *spoilers* Ambient authority, because it causes the confused deputy problem, and global namespaces. It is namespaces that browsers have improved: web pages downloaded by the browser can't interact with each other at all, so each one is a little island. That makes some things hard and leaves the user very reliant on external servers.
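As a toy illustration of the ambient-authority point (the function names and file path below are made up for this comment): a routine that opens files by global path wields whatever rights the whole process has and can be tricked into misusing them on behalf of a less-privileged caller, which is the confused deputy; a routine that is handed an already-open file object can only touch what its caller explicitly gave it, much like an isolated web page that only sees what its own server sends.

    # Ambient authority: reaches into the global filesystem namespace with the
    # whole process's rights, so a caller can point it at files it shouldn't touch.
    def log_message_ambient(path, message):
        with open(path, "a") as f:
            f.write(message + "\n")

    # Capability style: the caller passes in an already-open file object, so the
    # function can only write to the one resource it was explicitly given.
    def log_message_capability(log_file, message):
        log_file.write(message + "\n")

    # The caller decides exactly which resource is exposed.
    with open("app.log", "a") as log_file:
        log_message_capability(log_file, "hello")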

Comment author: madhatter 26 April 2017 02:49:04AM 2 points [-]

This is a cool idea! My intuition says you probably can't completely solve the normal control problem without training the system to become generally intelligent, but I'm not sure. Also, I was under the impression there is already a lot of work on this front from antivirus firms (e.g. spam filters, etc.).

Also, quick nitpick: We do for the moment "control our computers" in the sense that each system is corrigible. We can pull the plug or smash it with a sledgehammer.

Comment author: whpearson 26 April 2017 08:22:13AM 1 point [-]

I think there are different aspects of the normal control problem. Stopping it from having malware that bumps it into desks is probably easier than stopping it from having malware that exfiltrates sensitive data. But having a gradual progression and focusing on control seems like the safest way to build these things.

All the advancements in spam filtering I've heard of recently have been about things like DKIM and DMARC, so not based on user feedback. I'm sure Google does some things based on users clicking "spam" on mail, but that has not filtered into the outside world much. Most malware detection (AFAIK) is based on looking at the signatures of binaries, not on behaviour; to do the latter you would have to have some idea of what the user wants the system to do.
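A minimal sketch of what signature-based detection amounts to, assuming a placeholder set of known-bad hashes; note that nothing in it models what the user actually wants the machine to do, which is what behaviour-based detection would require.

    import hashlib

    # Placeholder signatures, not real malware hashes.
    KNOWN_BAD_SHA256 = {"0" * 64}

    def looks_malicious(path):
        """Flag a binary if its hash matches a known-bad signature."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in KNOWN_BAD_SHA256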

Also, quick nitpick: We do for the moment "control our computers" in the sense that each system is corrigible. We can pull the plug or smash it with a sledgehammer.

I'll update the control of computers section to say I'm talking about subtler control than wiping/smashing hard disks and starting again. Thanks.

Defining the normal computer control problem

3 whpearson 25 April 2017 11:49PM

There has been a lot of focus on controlling superintelligent artificial intelligence; however, we currently can't even control our un-agenty computers without resorting to formatting and other large-scale interventions.

Solving the normal computer control problem might help us solve the superintelligence control problem, or allow us to work towards safe intelligence augmentation.

Comment author: eternal_neophyte 25 April 2017 06:57:51PM *  0 points [-]

The key ingredient for a MAD situation, as far as I can tell, is some technology with high destructive potential distributed among multiple agents who cannot trust each other. To reduce my whole argument to its cartoon outlines: serious brain augmentation seems about as good an idea as handing everyone their own nuclear arsenal.

Comment author: whpearson 25 April 2017 11:17:33PM 0 points [-]

I think there is a whole long discussion about whether individual or small numbers of brain augments can somehow hope to outsmart whole societies of brain augments that are all working together to improve their augmentations. And also discussions around how much smarter pure AIs would be compared to normal augments.

Comment author: eternal_neophyte 24 April 2017 10:15:53PM 0 points [-]

The more equitably intelligence augmentation is spread, the less consequence-free power over others there is likely to be.

That is not apparent to me, though. It seems like it would lead to a MAD-style situation where no agent is able to take any action that might be construed as malicious without being punished. Every agent would have to be suspicious of the motives of every other agent, since advanced agents may do a very good job of hiding their own malicious intent, making any coordinated development very difficult. Some agents might reason that it is better to risk a chance of destruction for the chance of forming a singleton.

It seems to me very hard to reason about the behaviour of advanced agents without ultimately resorting to mathematics ( e.g. situations involving mutual-policing should be formalizable in game-theoretic terms ).

Comment author: whpearson 25 April 2017 06:54:15PM 0 points [-]

I think I am unsure what properties of future tech you think will lead to more MAD style situations than we have currently. Is it hard takeoff?

Comment author: eternal_neophyte 24 April 2017 08:48:37PM *  0 points [-]

Privately manufactured bombs are common enough to be a problem - and there is a very plausible threat of life imprisonment ( or possibly execution ) for anyone who engages in such behaviour. That an augmented brain with the inclination to do something analogous would be effectively punishable is open to doubt - they may well find ways of either evading the law or of raising the cost of any attempted punishment to a prohibitive level.

I'd say it's more useful to think of power in terms of things you can do with a reasonable chance of getting away with it rather than just things you can do. Looking at the former class of things - there are many things that people do that are harmful to others that they do nevertheless because they can get away with it easily: littering, lying, petty theft, deliberately encouraging pathological interpersonal relationship dynamics, going on the internet and getting into an argument and trying to bully the other guy into feeling stupid... ( no hint intended to be dropped here, just for clarity's sake ).
Many, in my estimation probably most, human beings do in fact have at least some consequence-free power over others and do choose to abuse that minute level of power.

Comment author: whpearson 24 April 2017 10:00:05PM 0 points [-]

The more equitably intelligence augmentation is spread, the less consequence-free power over others there is likely to be. Intelligence augmentation would allow you to collect more data and communicate with more people about the actions you see other people taking.

There are worlds where IA is a lot easier than standalone AI; I think that is what Elon is optimizing for. He has publicly stated he wants to spread it around when it is created (probably why he is investing in OpenAI as well).

This world feels more probable to me as well, currently. It conflicts somewhat with the need for secrecy in singleton AI scenarios.

Comment author: eternal_neophyte 24 April 2017 08:06:28PM *  0 points [-]

even with increasing power

At the individual level? By what metric?

these do not seem the correct things for maths to be trying to tackle

Is that a result of mathematics or of philosophy? :P

Comment author: whpearson 24 April 2017 08:41:26PM 0 points [-]

At the individual level? By what metric?

Knowledge and the ability to direct energy. There are a lot more people who could probably put together a half-decent fertilizer bomb nowadays, but we are not in a continual state of trying to assassinate leaders and overthrow governments.

Comment author: ChristianKl 24 April 2017 08:20:04PM 0 points [-]

As far as I understand the EA Funds, there's no cashing out of the money that's donated to them.

Comment author: whpearson 24 April 2017 08:31:04PM 0 points [-]

Sorry, I misread this thread (I thought it was talking about investment funds).

Comment author: eternal_neophyte 24 April 2017 06:26:47PM 0 points [-]

For this tactic to be effectual it requires that a society of augmented human brains will converge on a pattern of aggregate behaviours that maximizes some idea of humanity's collective values, or at least doesn't optimize anything that is counter to such an idea. If the degree to which human values can vary between unaugmented brains reflects some difference between them that would be infeasible to change, then it's not likely that a society of augmented minds would be any more coordinated in values than a society of unaugmented ones.

In one sense I do believe a designed AI is better - the theorems a human being devised can stand or fall independently of the man who devised them. The risk increases inversely with our ability to follow trustworthy inference procedures in reasoning about designing AIs. With brain-augmentation the risk increases inversely with our aggregate ability to avoid the temptation of power. Humanity has produced many examples of great mathematicians. Trustworthy but powerful men are rarer.

Comment author: whpearson 24 April 2017 07:23:22PM 0 points [-]

We have been gradually getting more peaceful, even with increasing power. So I think there is an argument that brain augmentation is like literacy and so could increase that trend.

A lot depends on how hard a take off is possible.

I like maths. I like maths safely in the theoretical world, occasionally brought out to bear on select problems that have proven to be amenable to it. Also, I've worked with computers enough to know that maths is not enough: they are imperfectly modelled physical systems.

I really don't like maths trying to be in charge of everything in the world, dealing with knotty problems of philosophy. Questions like what is a human, what is life, what are a human's values; these do not seem the correct things for maths to be trying to tackle.

Comment author: ChristianKl 24 April 2017 08:28:13AM 0 points [-]

That still leaves the question of why you think people expect funds to report on the success of their investments but don't expect it from GiveWell.

Comment author: whpearson 24 April 2017 06:20:48PM 0 points [-]

There is no cashing out of the money given to GiveWell. At no point will you be able to go to it and (easily) find out how much good it has done. If it turns out GiveWell did poorly, all you have lost is the opportunity to have donated to another charity, which also probably isn't reporting its successes objectively.

For a fund, you have skin in the game. You make plans (retirement, housing, a yacht) where the value has to be going up, or, if it isn't, you have to alter your plans. That puts it on a different mental level.
