
Comment author: tadasdatys 21 July 2017 10:26:29AM 0 points [-]

How do you know?

The same way I know what a chair is.

Does a falling rock also observe the gravitational field?

I'd have to say no here, but if you asked about plants observing light or even ice observing heat, I'd say "sure, why not". There are various differences between what ice does, what a roomba does, and what I do; however, they are mostly quantitative, and using one word for them all should be fine.

Comment author: lmn 23 July 2017 06:16:25PM 0 points [-]

I'd have to say no here, but if you asked about plants observing light or even ice observing heat, I'd say "sure, why not". There are various differences between what ice does, what a roomba does, and what I do; however, they are mostly quantitative, and using one word for them all should be fine.

What are you basing this distinction on? More importantly, how is whatever you're basing this distinction on relevant to grounding the concept of empirical reality?

Using Eliezer's formulation of "making beliefs pay rents in anticipated experiences" may make the relevant point clearer here. Specifically, what's an "experience"?

Comment author: Lumifer 21 July 2017 08:06:33PM 2 points [-]

That's not terribly hard -- e.g. you can see the Earth's curvature from a normal commercial airliner -- but misses the real point. If there's a general conspiracy of such magnitude and pervasiveness, whether Earth is actually flat is likely to be the least of my concerns.

Comment author: lmn 22 July 2017 05:52:01PM 0 points [-]

Science is based on the principle of nullius in verba (take no one's word for it). So your attitude is anti-scientific and likely to fall afoul of Goodhart's law.

Comment author: username2 22 July 2017 03:50:16AM 0 points [-]

On a central command and control server it owns, and pays bitcoin to maintain.

Comment author: lmn 22 July 2017 07:35:26AM 0 points [-]

Ok, so where does it store the administrator password to said server?

Comment author: tadasdatys 20 July 2017 06:03:27AM 0 points [-]

"observation" is what your roomba does to find the dirt on your floor.

Comment author: lmn 20 July 2017 10:31:17PM 1 point [-]

How do you know? Does a falling rock also observe the gravitational field?

Comment author: username2 19 July 2017 07:48:08AM 0 points [-]

With bitcoin botnet mining this was briefly possible. Also see "google eats itself."

Comment author: lmn 20 July 2017 12:58:43AM 0 points [-]

I don't think this could work. Where would the virus keep its private key?
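To make the problem concrete, here is a toy sketch (Python, using the third-party ecdsa package; not real Bitcoin transaction code, and the addresses and messages are made up). Spending coins means producing a signature, and producing a signature means the raw private key has to sit on whatever machine does the signing, where that machine's owner can read it:

    from ecdsa import SigningKey, SECP256k1

    # The virus's wallet key. Wherever this object lives, whoever controls
    # that machine can copy it and drain the wallet.
    sk = SigningKey.generate(curve=SECP256k1)

    tx = b"pay 1 BTC to exploit-developer-address"  # stand-in for a real transaction
    signature = sk.sign(tx)  # needs the private key itself, not just the public half

    # Anyone can verify with the public key; only the key holder can sign.
    assert sk.verifying_key.verify(signature, tx)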

Comment author: turchin 18 July 2017 08:22:17PM 1 point [-]

What worries me is that if a ransomware virus could own money, it could pay some human to install it on other people's computers, and also pay programmers for finding new exploits and even for the improvement of the virus.

But for such development, legal personhood is not needed, only an illegal one.

Comment author: lmn 20 July 2017 12:52:48AM 0 points [-]

even for the improvement of the virus.

I don't think this would work. It requires some way for the virus to keep the human it has entrusted with editing its programming from modifying it to simply send him all the money it acquires.

Comment author: tadasdatys 17 July 2017 08:24:26AM 0 points [-]

The three examples deal with different kinds of things.

Knowing X mostly means believing in X, or having a memory of X. Ideally beliefs would influence actions, but even if they don't, they should be physically stored somehow. In that sense they are the most real of the three.

Having a mental skill to do X means that you can do X with less time and effort than other people. With honest subjects, you could try measuring this somehow, but, obviously, you may find some subject who claims to have the skill performing slower than another who claims not to. Ultimately, "I have a skill to do X" means "I believe I'm better than most at X", and while that is a belief as good as the previous one, it's a little less direct.

Finally, being conscious doesn't mean anything at all. It has no relationship to reality. At best, "X is conscious" means "X has behaviors in some sense similar to a human's". If a computationalist answers "no" to the first two questions and "yes" to the last one, they're not being inconsistent; they have merely accepted that the usual concept of consciousness is entirely bullshit and replaced it with something more real. That, by the way, is similar to what compatibilists do with free will.

Comment author: lmn 19 July 2017 11:56:55PM 1 point [-]

Finally, being conscious doesn't mean anything at all. It has no relationship to reality.

What do you mean by "reality"? If you're an empiricist, as it looks like you are, you mean "that which influinces our observations". Now what is an "observation"? Good luck answering that question without resorting to qualia.

Comment author: cousin_it 27 June 2017 03:45:54PM *  1 point [-]

Yeah, Schelling's "Strategy of Conflict" deals with many of the same topics.

A: "I would have an advantage in war so I demand a bigger share now" B: "Prove it" A: "Giving you the info would squander my advantage" B: "Let's agree on a procedure to check the info, and I precommit to giving you a bigger share if the check succeeds" A: "Cool"

Comment author: lmn 28 June 2017 03:57:02AM 0 points [-]

A: "I would have an advantage in war so I demand a bigger share now" B: "Prove it" A: "Giving you the info would squander my advantage" B: "Let's agree on a procedure to check the info, and I precommit to giving you a bigger share if the check succeeds" A: "Cool"

Simply by telling B about the existence of an advantage, A is giving B info that could weaken it. Also, what if the advantage is a way to partially cheat in precommitments?

Comment author: cousin_it 27 June 2017 12:31:27PM *  1 point [-]

Even if A is FAI and B is a paperclipper, as long as both use correct decision theory, they will instantly merge into a new SI with a combined utility function. Avoiding arms races and any other kind of waste (including waste due to being separate SIs) is in their mutual interest. I don't expect rational agents to fail to achieve their mutual interest. If you expect that, your idea of rationality leads to predictably suboptimal utility, so it shouldn't be called "rationality". That's covered in the Sequences.
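As a purely illustrative sketch of what a "combined utility function" could look like, assuming the bargaining step settles on a weight for each side (the weight, the outcomes, and both utility functions below are made up for the example):

    # Hypothetical illustration: merge two utility functions as a weighted sum,
    # with the weight settled by bargaining (chosen arbitrarily here).

    def u_friendly(outcome):
        # A's utility: cares only about human welfare in this toy model.
        return outcome["human_welfare"]

    def u_paperclipper(outcome):
        # B's utility: cares only about paperclips in this toy model.
        return outcome["paperclips"]

    w = 0.6  # A's bargaining weight; B gets 1 - w

    def u_merged(outcome):
        return w * u_friendly(outcome) + (1 - w) * u_paperclipper(outcome)

    # The merged SI simply picks the feasible outcome that scores highest.
    options = [
        {"human_welfare": 10.0, "paperclips": 0.0},
        {"human_welfare": 4.0, "paperclips": 12.0},
    ]
    best = max(options, key=u_merged)
    print(best)

Whether the weight (and the common scale it presupposes) is well-defined at all is, of course, the contentious part.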

Comment author: lmn 28 June 2017 03:50:08AM 0 points [-]

Even if A is FAI and B is a paperclipper, as long as both use correct decision theory, they will instantly merge into a new SI with a combined utility function.

What combined utility function? There is no way to combine utility functions.

Comment author: cousin_it 27 June 2017 08:24:46AM *  2 points [-]

I don't believe it. War wastes resources. The only reason war happens is that two agents have different beliefs about the likely outcome of war, which means at least one of them has wrong and self-harming beliefs. Sufficiently rational agents will never go to war; instead, they'll agree about the likely outcome of war and trade resources in that proportion. Maybe you can't think of a way to set up such trade, because emails can be faked etc, but I believe that superintelligences will find a way to achieve their mutual interest. That's one reason why I'm interested in AI cooperation and bargaining.
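As a toy example with made-up numbers: suppose both sides agree A would win a war with probability 0.7, and that fighting would destroy 20% of the total resources. Splitting peacefully in the 70/30 proportion gives A 0.7 and B 0.3 of the pie, whereas actually fighting gives expected shares of only 0.56 and 0.24 (0.7 and 0.3 of the surviving 80%). Both sides prefer the peaceful split.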

Comment author: lmn 28 June 2017 03:47:58AM 1 point [-]

Maybe you can't think of a way to set up such trade, because emails can be faked etc, but I believe that superintelligences will find a way to achieve their mutual interest.

They'll also find ways of faking whatever communication methods are being used.
