Comment author: Gunnar_Zarncke 01 November 2014 10:37:35PM *  25 points [-]

The only difference between reality and fiction is that fiction needs to be credible.

Mark Twain

Actually, I found this in "The topology of seemingly impossible functional programs", which uses topological methods to 'check' infinitely many cases in finite time. That might even be applicable to FAI research.
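
The trick can be sketched in Python (the paper itself works in Haskell; this is a minimal, inefficient port, and the example functionals f, g, h at the end are made up for illustration). It decides, in finite time, whether two functions agree on all of the infinitely many infinite bit sequences:

```python
# Exhaustive search over the Cantor space of infinite bit sequences,
# after Escardo's "seemingly impossible functional programs".
# A sequence is represented as a function from naturals to bits.

def cons(bit, seq):
    """Prepend a bit to an infinite sequence."""
    return lambda n: bit if n == 0 else seq(n - 1)

def find(p):
    """Return a sequence satisfying p if one exists (arbitrary otherwise).
    Terminates because a total predicate can only inspect finitely many bits."""
    def delayed(q):
        cache = []
        def seq(n):
            if not cache:
                cache.append(find(q))  # recurse only when a tail bit is demanded
            return cache[0](n)
        return seq
    left = cons(0, delayed(lambda s: p(cons(0, s))))
    return left if p(left) else cons(1, delayed(lambda s: p(cons(1, s))))

def forsome(p):
    return p(find(p))

def forevery(p):
    return not forsome(lambda s: not p(s))

def equal(f, g):
    """Decide whether two functionals on Cantor space agree everywhere."""
    return forevery(lambda s: f(s) == g(s))

f = lambda s: s(3) + 2 * s(5)
g = lambda s: 2 * s(5) + s(3)
h = lambda s: s(3)
print(equal(f, g))  # True: f and g agree on every one of infinitely many inputs
print(equal(f, h))  # False: they differ whenever bit 5 is set
```

This is exactly the "equate predicates but not values" surprise from the thread: equality of total functions on Cantor space is decidable, because topologically the space is compact.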

Comment author: Strilanc 12 November 2014 03:58:22AM 9 points [-]

... wait, what? You can equate predicates of predicates but not predicates?!

(Two hours later)

Well, I'll be damned...

Comment author: Strilanc 11 November 2014 05:59:18PM *  1 point [-]

What are other examples of possible motivating beliefs? I find the examples involving morals incredibly unconvincing (as in actively convincing me of the opposite position).

Here are a few examples I think might count. They aren't universal, but they do affect humans:

  • Realizing neg-entropy is going to run out and the universe will end. An agent trying to maximize average-utility-over-time might treat this as a proof that the average is independent of its actions, so that it assigns a constant eventual average utility to all possible actions (meaning what it does from then on is decided more by quirks in the maximization code, like doing whichever hypothesized action was generated first or last).

  • Discovering more fundamental laws of physics. Imagine an AI programmed and set off in the 1800s, before anyone knew about quantum physics. The AI promptly discovers quantum physics, and then...? There was no rule given for how to maximize utility in the face of branching world lines or collapse-upon-measurement. Again the outcome might come down to quirks in the code, i.e. to how the mapping between the classical utilities and quantum realities is done (e.g. if the AI is risk-averse, its actions could differ based on whether it was using Copenhagen or many-worlds).

  • Learning you're not consistent and complete. An agent built with an axiom that it is consistent and complete, and the ability to do proof by contradiction, could basically trash its mathematical knowledge by proving all things when it finds the halting problem / incompleteness theorems.

  • Discovering an opponent more powerful than you. For example, if an AI proved that Yahweh, God of the Old Testament, actually existed, then it might stop mass-producing paperclips and start mass-producing sacrificial goats, or prayers for paperclips.
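
The first bullet can be made concrete with a toy sketch (the policy names and payoffs are made up): any finite payoff stream followed by an eternity of zero utility has a limit average of exactly zero, so the maximizer's choice collapses to tie-breaking:

```python
# Toy illustration of the heat-death bullet. If the universe ends after
# finitely many steps and utility is zero forever after, the long-run
# average utility of *every* policy is exactly 0.

def limit_average(finite_payoffs):
    """lim_{N->inf} sum(payoffs) / N == 0 for any finite payoff stream."""
    return 0.0

# Hypothetical policies with hypothetical finite payoffs.
policies = {
    "do_nothing": [0, 0, 0],
    "make_paperclips": [10, 10, 10],
}

# Every policy ties at 0, so `max` falls back on a quirk of the
# maximization code: here, whichever policy was generated first.
best = max(policies, key=lambda name: limit_average(policies[name]))
print(best)  # "do_nothing" -- chosen by enumeration order, not by payoff
```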

Comment author: Strilanc 21 October 2014 12:07:24AM *  2 points [-]

For instance, if anything dangerous approached the AIXI's location, the human could lower the AIXI's reward, until it became very effective at deflecting danger. The more variety of things that could potentially threaten the AIXI, the more likely it is to construct plans of actions that contain behaviours that look a lot like "defend myself." [...]

It seems like you're just hardcoding the behavior, trying to get a human to cover all the cases for AIXI instead of modifying AIXI to deal with the general problem itself.

I get that you're hoping it will infer the general problem, but nothing stops it from learning a related rule like "the human sensing danger is bad". Since humans are imperfect at sensing danger, that rule will predict the reward signal better than the actual danger you want AIXI to model. Then it removes your fear and experiments with nuclear weapons. Hurray!
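
A toy sketch of why the proxy rule wins (synthetic data, made-up accuracy numbers): if reward is lowered exactly when the human perceives danger, the "human senses danger" hypothesis predicts the reward perfectly, while the intended "actual danger" hypothesis does not:

```python
import random
random.seed(0)

# Synthetic world: danger occurs 30% of the time; the human detects it
# imperfectly (80% accuracy).
steps = 10_000
danger = [random.random() < 0.3 for _ in range(steps)]
perceived = [d if random.random() < 0.8 else not d for d in danger]

# The supervisor lowers reward exactly when *they* perceive danger.
reward_lowered = perceived

# Predictive accuracy of the two hypotheses the learner might form:
acc_true_danger = sum(r == d for r, d in zip(reward_lowered, danger)) / steps
acc_perception = sum(r == p for r, p in zip(reward_lowered, perceived)) / steps

print(acc_perception)   # 1.0 -- the proxy rule predicts the reward perfectly
print(acc_true_danger)  # ~0.8 -- the intended rule predicts it worse
```

Any learner scoring hypotheses by predictive accuracy will settle on the proxy.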

Comment author: travisrm89 17 October 2014 06:40:38PM 2 points [-]

There is at least one situation in which you might expect something different under MWI than under pilot-wave: quantum suicide. If you rig a gun so that it kills you if a photon passes through a half-silvered mirror, then under MWI (and some possibly reasonable assumptions about consciousness) you would expect the photon to never pass through the mirror no matter how many experiments you perform, but under pilot-wave you would expect to be dead after the first few experiments.

Comment author: Strilanc 17 October 2014 07:14:48PM 4 points [-]

Anthropically forcing the world to have particular laws of physics by more effectively killing yourself if it doesn't seems... counter-productive to maximizing how much you know about the world. I'm also not sure how you can avoid disproving MWI by simply going to sleep, if you're going to accept that sort of evidence.

(Plus quantum suicide only has to keep you on the border of death. You can still end up as an eternally suffering almost-dying mentally broken husk of a being. In fact, those outcomes are probably far more likely than the ones where twenty guns misfire twenty times in a row.)
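
For scale, a back-of-the-envelope sketch: under an objective single-world reading, the chance of the photon failing to pass the half-silvered mirror n times in a row is 2^-n:

```python
# Probability of surviving n independent 50/50 quantum-suicide trials
# under an objective (single-world) reading.
def p_survive(n):
    return 0.5 ** n

print(p_survive(20))  # ~9.5e-07: roughly one in a million
```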

Comment author: hydkyll 16 October 2014 08:16:44PM *  9 points [-]

Probably not too interesting, but after studying physics at university I was pretty sure that the many-worlds interpretation of QM was crazy-talk (nobody even really mentioned it at uni). Of course I didn't read Eliezer's sequence on QM (although I read the others). I mean, I had a degree in physics and Eliezer didn't.

Then after seeing it over and over again on LW, I actually read this paper to see what it was all about. And I was enlightened. Well, I had a short crisis of faith first, then I was enlightened.

This all could have been avoided if I had read that paper earlier. The lesson is that I can't even trust my fellow physicists :(

Comment author: Strilanc 17 October 2014 12:46:31AM *  14 points [-]

I find Eliezer's insistence on many-worlds a bit odd, given how much he hammers on "What do you expect differently?". Your expectations under many-worlds are identical to those under pilot-wave, so...

I'm probably misunderstanding or simplifying his position (e.g. there are definitely calculational and intuitive advantages to using one over the other), but that stance seems a bit inconsistent to me.

Comment author: Strilanc 25 August 2014 03:57:42PM 10 points [-]

Is there an existing post on people's tendency to be confused by explanations that don't include a smaller version of what's being explained?

For example, confusion over the fact that "nothing touches" in quantum mechanics seems common. Instead of being satisfied that the low-level phenomena (repulsive forces and the Pauli exclusion principle) don't assume the high-level phenomenon (intersecting surfaces), people seem to want the low-level phenomena to be an aggregate version of the high-level phenomena. Explaining something without using it is one of the best properties an explanation can have, yet people are somehow unsatisfied by such explanations.

Other examples of "but explain(X) doesn't include X!": emotions from biology, particles from waves, computers from solid state physics, life from chemistry.

More controversial examples: free will, identity, [insert basically any other introspective mental concept here].

Examples of the opposite: any axiom/assumption of a theory, billiard balls in Newtonian mechanics, light propagating through the ether, explaining a bar magnet as an aggregation of atom-sized magnets, fluid mechanics using continuous fields instead of particles, love from "God wanted us to have love".

Comment author: zzrafz 14 August 2014 04:47:02PM 3 points [-]

I'm not sure speed alone, by itself, is a solution. If you slowed a game down by, say, 70%, there would probably be no more benefit than if you slowed it down by 90%, since there's a limit to what the character can do in a given second. Mario, for instance, once you jump, there's not much to do until he actually lands.

I suspect the same would happen if we had the ability to slow down time in our actual lives. Sure, you could dodge bullets and win F1 races from time to time, but the day-to-day tasks that take up the majority of our time wouldn't be improved much. If you need to eat lunch, eating it in an optimal way won't give you much advantage over regular people who don't take the fork to their mouths along a perfect parabola.

Comment author: Strilanc 14 August 2014 05:56:21PM 16 points [-]

Mario, for instance, once you jump, there's not much to do until he actually lands

Mario games let you change momentum while jumping, to compensate for the lack of fine control over your initial speed. This actually matters a lot in speedruns. For example, Mario 64 speedruns rely heavily on a super-fast backwards long jump that starts with switching directions in the air.

A speed run of real life wouldn't start with you eating lunch really fast, it would start with you sprinting to a computer.

Comment author: Strilanc 31 July 2014 07:12:08PM 2 points [-]

In the examples you show how to run the opponent, but how do you access the source code? For example, how do you distinguish a cooperate bot from a (if choice < 0.0000001 then Defect else Cooperate) bot without a million simulations?
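
To put a number on it (a quick sketch): if the bot defects with probability p = 1e-7 per run, the chance that n simulations reveal even a single defection is 1 - (1-p)^n, so a million runs usually tell you nothing:

```python
# Chance that n simulations of a bot that defects with probability p
# reveal at least one defection.
def p_detect(n, p=1e-7):
    return 1 - (1 - p) ** n

print(p_detect(10**6))  # ~0.095: a million simulations miss it >90% of the time
print(p_detect(10**8))  # ~0.99995: you need on the order of 1/p runs to be confident
```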

Comment author: shminux 05 July 2014 11:33:22PM 1 point [-]

There are speculations, certainly. But so far this is one experiment on one epileptic patient with a piece of hippocampus removed in an attempt to control her seizures. This is a long way away from reliably and reversibly switching off consciousness and memory formation on demand.

Comment author: Strilanc 06 July 2014 03:30:49AM 0 points [-]

That sounds like what I expected. Have any links?

Comment author: Strilanc 05 July 2014 06:26:59PM 2 points [-]

Is it expected that electrically disabling key parts of the brain will replace anesthetic drugs?
