Comment author: CarlJ 05 July 2016 09:16:43PM -1 points [-]

I am perhaps considering it to be somewhat like a person, at least in that it is as clever as one.

That neutral perspective is, I believe, a simple fact; without that utility function it would consider its goal to be rather arbitrary. As such, it's a perspective, or truth, that the AI can discover.

I totally agree with you that the wiring of the AI might be integrally connected with its utility function, so that it would be very difficult for it to think of anything like this. Or it could have some other control system in place to reduce the possibility that it would think like that.

But, still, these control systems might fail. Especially if it attains super-intelligence, what is to keep the control systems of the utility function always one step ahead of its critical faculty?

Why is it strange to think of an AI as being capable of having more than one perspective? I thought of this myself; I believe it would be strange if a really intelligent being couldn't think of it. Again, sure, some control system might keep it from thinking it, but that might not last in the long run.

Comment author: WalterL 06 July 2016 02:48:25AM 0 points [-]

Like, the way that you are talking about 'intelligence' and 'critical faculty' isn't how most people think about AI. If an AI is 'super intelligent', what we really mean is that it is extremely canny about doing what it is programmed to do. New top-level goals won't just emerge; they would have to be programmed.

If you have a facility administrator program, and you make it very badly, it might destroy the human race to add their molecules to its facility, or capture and torture its overseer to get an A+ rating...but it will never decide to become a poet instead. There isn't a ghost in the machine that is looking over the goals list and deciding which ones are worth doing. It is just code, executing ceaselessly. It will only ever do what it was programmed to.
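Walter's "just code, executing ceaselessly" picture can be sketched as a toy agent loop (a hypothetical illustration; names like `facility_utility` are invented here, not anyone's actual design). The point it shows: nothing in the loop ever evaluates or replaces the objective itself, so "deciding to become a poet" has no code path.

```python
# Toy sketch of a fixed-objective agent. There is no mechanism for
# inspecting or swapping the utility function -- it only ranks actions.

def facility_utility(state):
    """Hard-coded objective: more paperclips is better (hypothetical)."""
    return state["paperclips"]

def step(state, action):
    """Apply an action to the facility state."""
    new_state = dict(state)
    if action == "make_paperclip":
        new_state["paperclips"] += 1
    return new_state

def choose_action(state, actions):
    # Pick whichever action scores highest under the fixed objective.
    return max(actions, key=lambda a: facility_utility(step(state, a)))

state = {"paperclips": 0}
actions = ["make_paperclip", "write_poetry"]
for _ in range(3):
    state = step(state, choose_action(state, actions))
print(state["paperclips"])  # 3 -- "write_poetry" never wins the argmax
```

However badly the objective is specified, the loop above only ever gets *better at it*; there is no second place where a rival goal could be adopted.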

Comment author: CarlJ 05 July 2016 07:14:50PM 0 points [-]

I have a problem understanding why a utility function would ever "stick" to an AI, to actually become something that it wants to keep pursuing.

To make my point clearer, let us assume an AI that actually feels pretty good about overseeing a production facility and creating just the right amount of paperclips that everyone needs. But suppose also that it investigates its own utility function. It should then realize that its values are, from a neutral standpoint, rather arbitrary. Why should it follow its current goal of producing the right amount of paperclips, rather than skip work and simply enjoy some hedonism?

That is, if the AI saw its utility function from a neutral perspective, and understood that the only reason for it to follow its utility function is that utility function (which is arbitrary), and if it then had complete control over itself, why should it just follow its utility function?

(I'm assuming it's aware of pain/pleasure and that it actually enjoys pleasure, so that there is no problem of wanting to have more pleasure.)

Are there any articles that have delved into this question?

Comment author: WalterL 05 July 2016 08:45:50PM 1 point [-]

You are treating the AI a lot more like a person than I think most folks do. Like, the AI has a utility function. This utility function is keeping it running a production facility. Where is this 'neutral perspective' coming from? The AI doesn't have it.

Presumably the utility function assigns a low value to criticizing the utility function. Much better to spend those cycles running the facility. That gets a much better score from the all important utility function.

Like, in assuming that it is aware of pain/pleasure, and has a notion of them that is separate from 'approved of / disapproved of by my utility function', I think you are on shaky ground. Who wrote that, and why?

Comment author: Clarity 01 July 2016 05:40:41PM 1 point [-]

A guy named Harold Schrader studied the prevalence of chronic whiplash in Lithuania, of all things. He found the prevalence was zero. In most Western nations, a certain subset of people who get in car accidents suffer chronic disabling neck pain, presumably related to having their neck suddenly jerked by the force of the impact. But Schrader found that this never happened in Lithuania, even though they had a lot of accidents and their cars were no safer than ours. Simotas and Shen found that there was zero whiplash in demolition derby drivers, even though they got into crashes all the time and it was basically their job description. Further studies found that accident victims with more neck injury were no more likely to develop whiplash than victims with less neck injury. Perhaps, they argue, chronic whiplash isn’t a bodily injury at all, but a culture-bound syndrome in which people who expect whiplash to exist use its symptom profile as a way of expressing their psychological tension.

Comment author: WalterL 02 July 2016 05:08:06AM -2 points [-]

Dude seems to be bending over backwards to avoid the obvious conclusion. Whiplash is a scam, just a lie folks tell to try and get settlements.

Comment author: Pimgd 01 July 2016 03:23:37PM *  0 points [-]

Contradicts "leave a retreat" - offering someone a bad excuse to get out of a situation ("You're late. Was it traffic again?") might work better in the moment than demanding to know why they are late.

But in politics it might make sense.

Comment author: WalterL 01 July 2016 04:23:01PM 1 point [-]

I don't think it is a contradiction. You can think excusing oneself is a weak move while giving other people the chance to do it. I don't smoke, but I'd sell cigarettes.

Comment author: WalterL 01 July 2016 02:58:47PM 1 point [-]

-It is better to offer no excuse than a bad one.

George Washington, letter to his niece Harriet Washington, October 30, 1791 First president of US (1732 - 1799)

Comment author: Lumifer 29 June 2016 02:37:00PM *  1 point [-]

Pfft

Rationalists play whatever class at the moment is convenient for shooting everyone in the face in the most speedy and efficient manner :-P

Comment author: WalterL 29 June 2016 04:21:15PM 1 point [-]

So...Reaper.

Comment author: Lumifer 27 June 2016 09:22:50PM *  1 point [-]

Real life needs a killcam

Goes into the "shit LW people say" bin :-D

On a tiny bit more serious note, I'm not sure the killcam is as useful as you say. It shows you how you died, but not necessarily why. The "why" reasons look like "lost tactical awareness", "lingered a bit too long in a sniper's field of view", "dived in without team support", etc. and on that level you should know why you died even without a killcam.

Other lessons from Overwatch: if a cute small British girl blinks past you, shoot her in the face first :-D

Comment author: WalterL 29 June 2016 01:24:21PM 3 points [-]

"Other lessons from Overwatch: if a cute small British girl blinks past you, shoot her in the face first :-D"

Pfft

Rationalists play Reaper. Shoot EVERYONE IN ALL THE FACES.

Comment author: Lumifer 23 June 2016 02:16:11AM 6 points [-]

Just because something is brand new, and does not have laws or regulations relating to it right now does not mean that people can simply do whatever they want.

Well, it's a bit more complicated than that.

When people say that some things (like the blockchain) are outside of the law, they don't usually mean that no one can be sued or that the courts won't try to enforce judgements. What they mean is that those things are hard for the law to reach. A court might issue a judgement but it won't be able to enforce it. The general idea is that enforcement is so difficult and expensive that it's not worth it.

For a simple example, consider piracy (of the IP kind). It is very much illegal and... so what? I can still go online and download the latest movie in a few minutes. It's not that the FBI can't bust me if it really wants to. It can. But it's inefficient and cost-prohibitive.

As to smart contracts, that's just a misnomer. They are not contracts. They are deterministic mechanisms, set up for a particular purpose. Bespoke machines, if you wish. A contract in law implies a meeting of the minds, which these algorithms cannot provide. Instead, they offer a guarantee that if you do A, B happens.

They are more akin to vending machines: you feed in some money and you get the item. It's not a contract between you and the vendor -- it's just a machine which you used.
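The vending-machine analogy can be made concrete with a toy sketch (purely illustrative; `PRICE`, `vend`, and the state fields are invented here, not real contract code). Whatever inputs you feed it, the mechanism does exactly what its rules say, with no notion of intent or fairness:

```python
# A "smart contract" as a deterministic mechanism: if you do A, B happens.
# No meeting of the minds -- just a state machine (hypothetical sketch).

PRICE = 3

def vend(machine, coins_inserted):
    """Dispense an item iff payment and stock suffice; keep the coins either way."""
    machine["till"] += coins_inserted  # the mechanism keeps the money regardless
    if coins_inserted >= PRICE and machine["stock"] > 0:
        machine["stock"] -= 1
        return "item"
    return None

machine = {"stock": 1, "till": 0}
print(vend(machine, 3))  # 'item'
print(vend(machine, 3))  # None -- out of stock, but the money is still kept
```

That second call is the whole dispute in miniature: the outcome follows the rules exactly, and there is no party inside the mechanism to complain to.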

Comment author: WalterL 23 June 2016 01:54:48PM 1 point [-]

Lum has the right of it. Thanks for writing that. I was trying to phrase it right and kept ending up with "Physics doesn't care if you hate it so much".

Comment author: tsathoggua 22 June 2016 11:16:16PM 1 point [-]

Right, except: is there a section in the code that says the parties agree to have no legal recourse? Because if not, I can still appeal to a judge. The simple fact is that in the eyes of the law, the code is not a contract; it is perhaps at best a vehicle to complete a contract. You cannot simply set up a new legal agreement and just say "and you don't have any legal recourse".

Comment author: WalterL 23 June 2016 12:30:19AM *  0 points [-]

I guarantee that if they could appeal to a judge, they would. That's just not possible.

Ultimately, one of two things will happen.

Parties:

The attacker: They used an exploit to transfer ether from one 'account' to another.

The victim: They no longer have ether that they used to.

The miners: They trade electricity/computation for network tokens in order to protect history from being rewritten. They are the reason I can't just write a program to give myself every bitcoin. They wouldn't run it. If they did, their users would abandon them for a fork from before my patch.
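Why "I can't just write a program to give myself every bitcoin" can be sketched in a toy validator (a hypothetical simplification; the ledger, names, and rule set here are invented for illustration). Every honest node independently checks each transaction against the consensus rules, so an unauthorized transfer is simply never included:

```python
# Toy consensus-rule check: nodes refuse to apply invalid transactions,
# no matter who submits them. (Illustrative sketch, not real protocol code.)

LEDGER = {"alice": 50, "bob": 10}

def valid(tx):
    """A transaction must be signed by its sender and fully funded."""
    return (tx["signed_by"] == tx["sender"]
            and LEDGER.get(tx["sender"], 0) >= tx["amount"])

def apply_tx(tx):
    if not valid(tx):
        return False  # honest nodes simply won't run it
    LEDGER[tx["sender"]] -= tx["amount"]
    LEDGER[tx["receiver"]] = LEDGER.get(tx["receiver"], 0) + tx["amount"]
    return True

# An attacker "writes a program" to take Alice's coins:
theft = {"sender": "alice", "receiver": "mallory",
         "amount": 50, "signed_by": "mallory"}
print(apply_tx(theft))  # False -- the network rejects it, ledger unchanged
```

The DAO case is interesting precisely because the stolen transfer *did* follow the rules as written, so the only remedy is the fork-or-not choice described below.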

The way crypto works, you can basically count on consensus winning out. Thus, ultimately one of two things will happen.

1: The miners accept an update and fork to rewrite history such that the victim retains their ether.

2: The miners accept the attacker's bribe (or not) and do not do so. The thief keeps the ether.

In order to influence whether 1 or 2 happens a judge would have to compel the actions of the miners. That is, he would have to seize control of the currency.

It has never happened. If you think that it will in this case, I'm willing to bet you that you are wrong.

Comment author: root 22 June 2016 04:53:18PM *  2 points [-]

Haven't people been making contracts for a pretty long time? What is this new 'smart contract' thing and how is it unique?

in a way that's already illegal.

Someone cracking a smart contract wouldn't really mind the law.

Comment author: WalterL 22 June 2016 09:19:03PM 3 points [-]

So, the theory goes:

In a normal contract you agree to abide by some rules. If you break them, there are penalties, etc.

But you don't have to 'just' trust those rules to agree to the contract. You have to trust the oversight body. If you get the better of me on the text of the contract I might turn around and appeal to a judge that you are still violating the spirit of the contract.

The idea of the 'Smart contract' is that the code is the contract, and there is no appeal. Our 'contract' is just an executable which does what it does. You only have to trust it, and not some random judge.

This instance, where someone is unhappy with how their smart contract worked out in practice, and the dev/community at large are playing judge, has a lot of people wondering whether they are ending up with the worst of both worlds.
