Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: shminux 28 July 2014 08:54:41PM 0 points [-]

Selection bias: when you are presented with a specific mugging scenario, you ought to realize that there are many more extremely unlikely scenarios where the payoff is just as high, so selecting just one of them to act on is suboptimal.

As for the level at which to stop calculating, bounded computational power is a good heuristic. But I suspect that there is a better way to detect the cutoff (known as an infrared cutoff in physics). If you plot the number of choices against their (log) probability, once you go low enough, the number of choices explodes. My guess is that for reasonably low probabilities you would get exponential growth in the number of outcomes, but for very low probabilities the growth in the number of outcomes becomes super-exponential. This is, of course, speculation; I would love to see some calculation or modeling, but this is what my intuition tells me.

Comment author: Pentashagon 29 July 2014 03:29:12AM 3 points [-]

for very low probabilities the growth in the number of outcomes becomes super-exponential

There can't be more than 2^n outcomes each with probability 2^-n.
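The bound follows from normalization alone; spelled out, the counting argument is:

```latex
\text{If } k \text{ disjoint outcomes each have } P_i \ge 2^{-n}, \text{ then }
1 \;\ge\; \sum_{i=1}^{k} P_i \;\ge\; k \cdot 2^{-n}
\quad\Rightarrow\quad k \le 2^{n}.
```

So the number of outcomes can grow at most exponentially in the number of bits of improbability, not super-exponentially.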

Comment author: shminux 28 July 2014 09:04:01PM 0 points [-]

If you use a bounded utility function, it will inevitably be saturated by unlikely but high-utility possibilities, rendering it useless.

Comment author: Pentashagon 29 July 2014 03:09:47AM 2 points [-]

For any possible world W, |P(W) * BoundedUtility(W)| < |P(W) * UnboundedUtility(W)| as P(W) goes to zero.
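The point can be made numerically. A minimal sketch with hypothetical utility functions; the cap of 10^6 and the mugger-style 2/p payoff are arbitrary stand-ins, not anyone's actual utility function:

```python
# As P(W) shrinks, the expected contribution of a bounded utility
# vanishes, while an unbounded utility that grows like 1/P(W) keeps
# contributing a constant amount.
BOUND = 1e6

def bounded_utility(p):
    # Any utility capped at BOUND, regardless of how extreme the world is.
    return BOUND

def unbounded_utility(p):
    # A mugger's offer that scales faster than the probability shrinks.
    return 2.0 / p

for p in (1e-3, 1e-9, 1e-15):
    # Bounded contribution shrinks with p; unbounded stays at 2.0.
    print(p, p * bounded_utility(p), p * unbounded_utility(p))
```

This is why a bounded utility function's expected value cannot be dominated by ever-less-likely possibilities, while an unbounded one can.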

Comment author: TheAncientGeek 08 July 2014 02:06:49PM *  0 points [-]

There's a difference between causation and reduction. The idea that qualia have physical causes is compatible with dualism, the idea that they are not reducible to physics.

Knowing what causes non-standard qualia, or where they diverge, still doesn't tell you how non-standard qualia feel to the person having them.

For that reason, we are not going to have private language dictionaries any time soon. Looking at brain scans of someone with non-standard qualia is not going to tell me what their qualia are as qualia.

Comment author: Pentashagon 09 July 2014 02:41:50AM 0 points [-]

Granted; we won't have definitive evidence for or against dualism until we're at the point that we can fully test the (non-) reductive nature of qualia. If people who have access to each other's private language dictionaries still report meta-feeling that the other person feels different qualia from the same mental stimulus then I'll have more evidence for dualism than I do now. True, that won't help with incomparable qualia, but it would be kind of...convenient...if the only incomparable qualia are the ones that people report feeling differently.

Comment author: roystgnr 08 July 2014 05:05:43PM 13 points [-]

I'm slightly torn here.

My first impulse was to point out that, just because there happened to be seven people on this site with the poor sense to upvote this unintentional self-parody, that doesn't justify an eighth person having the poor sense to unilaterally delete it.

Then I remembered kuro5hin's decline and death. A self-moderated forum can transition from "temporary lull in activity" to "permanent death spiral" if a critical mass of trolls pounce on the lull, and it's not a pretty thing to witness. The "critical mass" doesn't have to be very large, either, since self-moderated forums generally weight their users' opinions proportionately to amount-of-available-free-time, which most trolls have in surplus. I suspect it wouldn't have taken too many swings of the banhammer to save that site.

There's got to be a better solution, though. I'd hope there's even a simple better solution. Maybe a "trash" category alongside "main" and "discussion"? Then moderators can move posts between categories, while users can upvote/downvote within each. That would still allow moderators to keep the new users' queue cleaned up in a way that can't be "gamed", but makes moderator mistakes much less significant. "They're deleting me!" is a half-decent rallying cry; "They're calling me names!" not so much.

Comment author: Pentashagon 09 July 2014 02:31:13AM 2 points [-]

Then I remembered kuro5hin's decline and death.

kuro5hin is still a vibrant, thriving community.

Comment author: TheAncientGeek 06 July 2014 02:46:08PM *  0 points [-]

If you can write a non open-ended goal...

I believe I've done that every time I've used Google maps.

Comment author: Pentashagon 07 July 2014 06:33:12AM 0 points [-]

I believe I've done that every time I've used Google maps.

"How do I get from location A to location B" is more open-ended than "How do I get from location A to location B in an automobile", which is still much more open-ended than "How do I get from a location near A to a location near B, obeying all traffic laws, in a reasonably minimal time, while operating a widely available automobile (that can't fly, jump over traffic jams, ford rivers, rappel, etc.)"

Google is drastically narrowing the search space for achieving your goal, and presumably doing it manually and not with an AGI they told to host a web page with maps of the world that tells people how to quickly get from one location to another. Google is not alone in sending drivers off cliffs, into water, the wrong way down one way streets, or across airport tarmacs.

Safely narrowing the search space is the hard problem.

Comment author: leplen 01 June 2014 05:26:59PM 7 points [-]

I'm broadly interested in the question, what physical limits if any, will a superintelligence face? What problems will it have to solve and which ones will it struggle with?

Eliezer Yudkowsky has made the claim: "A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass."

I can't see how this is true. It isn't obvious to me that one could conclude anything from a video like that without substantial prior knowledge of mathematical physics. Seeing a red, vaguely circular object move across a screen tells me nothing unless I already know an enormous amount.

We can put absolute physical limits on the energy cost of a computation, at least in classical physics. How many computations would we expect an AI to need in order to do X or Y? Can we effectively box an AI by only giving it a 50W power supply?
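As a rough back-of-the-envelope sketch: the Landauer limit, kT·ln 2 joules per irreversible bit operation, bounds what such a power supply buys. The constants below are standard; the 50 W and 300 K are the scenario's assumptions:

```python
import math

# Landauer limit: minimum energy to erase one bit is k*T*ln(2).
# Dividing a 50 W budget by it bounds the irreversible bit operations
# per second at room temperature.
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K
power = 50.0        # watts (joules per second)

energy_per_bit = k_B * T * math.log(2)
ops_per_second = power / energy_per_bit
print(f"{ops_per_second:.2e} bit erasures per second")  # roughly 1.7e22
```

Note this bounds only irreversible operations, so a superintelligence using reversible computing could in principle do far more logical work within the same power budget.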

I think there are some interesting questions at the intersection of information theory/physics/computer science that seem like they would be relevant for the AI discussion that I haven't seen addressed anywhere. There's a lot of hand-waving, and arguments about things that seem true, but "seem true" is a pretty terrible argument. Unlike math, "seem true" pretty reliably yields whatever you wanted to believe in the first place.

I'm making slow progress on some of these questions, and I'll eventually write it up, but encouragement, suggestions, etc. would be pretty welcome, because it's a lot of work and it's pretty difficult to justify the time/effort expenditure.

Comment author: Pentashagon 03 June 2014 09:03:28AM 4 points [-]

I can't see how this is true. It isn't obvious to me that one could conclude anything from a video like that without a substantial prior knowledge of mathematical physics. Seeing a red, vaguely circular object, move across a screen tells me nothing unless I already know an enormous amount.

This DeepMind paper describes a neural network that learns from an emulated Atari 2600 display as its only input and eventually learns to drive the emulated Atari controls directly, doing very well at several games. The network was not built with prior knowledge of Atari game systems or the games in question, except that training used the internal game score as a direct measure of success.

More than 3 frames from the display were used for training, but it arguably wasn't a superintelligence looking at them.
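For intuition about what "score as the only supervision" means, here is a toy tabular Q-learning sketch; this is not DeepMind's deep Q-network, and the 5-state chain "game", its dynamics, and all hyperparameters are invented for illustration:

```python
import random

# The "game" is a 1-D chain of 5 states; the agent sees only the raw
# state index and the score, with no model of the environment's rules.
N = 5                # states 0..4; entering state 4 scores a point
ACTIONS = (-1, +1)   # move left or right

def step(state, action):
    """Environment dynamics, hidden from the learner."""
    nxt = min(max(state + action, 0), N - 1)
    reward = 1.0 if nxt == N - 1 else 0.0
    return nxt, reward, nxt == N - 1

def train(episodes=500, alpha=0.5, gamma=0.9, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            a = rng.choice(ACTIONS)  # behave randomly; learn off-policy
            nxt, r, done = step(s, a)
            best_next = max(Q[(nxt, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = nxt
            if done:
                break
    return Q

Q = train()
# Greedy policy recovered purely from the score signal: always move right.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)}
```

The agent never sees the rules of `step`; like the Atari network, it infers what to do entirely from state observations and the score.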

Comment author: jimrandomh 09 May 2014 07:50:54PM 13 points [-]

Three months into the process, the economist came back, and sold them both digital signs that update every hour.

Comment author: Pentashagon 11 May 2014 07:04:14AM 0 points [-]

Isn't the moral of the story that the gas station owners colluded to form an illegal cartel to raise gasoline prices?

Comment author: NancyLebovitz 19 April 2014 11:14:34PM 2 points [-]

If you could come up with an organization with as much emotional oomph as the Catholic Church that took cryonics seriously, that would be very impressive, but I don't think it's possible.

On the other side, what would it take to convince the Catholic Church that frozen people were alive enough that care should be taken to keep them frozen until they can be revived?

Comment author: Pentashagon 20 April 2014 06:43:41AM *  9 points [-]

In Dignitas Personae, sections 18 and 19, the Catholic Church asserts the personhood of cryopreserved embryos. Although it objects to IVF and other techniques for several other reasons, a major objection is that many cryopreserved embryos are never revived. It specifically objects to cryopreservation carrying the risk of death for human embryos, implying that they are either living or at least not-dead; it suggests the possibility of "prenatal adoption", and also objects to any medical use or destruction of the embryos.

So, in a narrow sense, they already believe that frozen people are alive enough to be worth keeping frozen or reviving.

Comment author: Pentashagon 09 February 2014 06:23:32AM 0 points [-]

Consider a Turing Machine whose input is the encoded state of the world as a binary integer, S, of maximum value 2^N-1, and which seeks S*N positions into its binary tape and outputs the next N bits from the tape. Does that Turing Machine cause the same experience as an equivalent Turing Machine that simulates 10 seconds of the laws of physics for the world state S and outputs the resulting world state? I posit that the former TM actually causes no experience at all when run, despite the equivalence. So there probably exist l-zombies that would act as if they have experience when they are run but are wrong.
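The two machines' extensional equivalence is easy to exhibit in miniature; in this sketch the 3-bit "world" and its transition rule are hypothetical stand-ins:

```python
# Same input/output behavior produced two ways: by simulating a
# transition rule, and by a precomputed lookup table that merely seeks
# to position S*N on a tape and reads N bits.
N = 3  # bits per world state; states are integers 0..7

def simulate(s):
    # A stand-in for "10 seconds of physics": a fixed permutation of states.
    return (5 * s + 3) % (2 ** N)

# Build the lookup "tape": entry S occupies tape bits S*N .. S*N + N-1.
tape = []
for s in range(2 ** N):
    tape.extend(int(b) for b in format(simulate(s), f"0{N}b"))

def lookup(s):
    # Seek S*N positions into the tape, read the next N bits.
    bits = tape[s * N : s * N + N]
    return int("".join(map(str, bits)), 2)

# The two machines are extensionally identical on every input...
assert all(lookup(s) == simulate(s) for s in range(2 ** N))
```

...yet `lookup` performs no state-evolution at all when run, which is exactly the intuition behind positing that it causes no experience.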

Your last paragraph brings up an interesting question. I assumed that transitions between l-zombie and person are one-way. Once run, how do you un-run something? It seems to imply that you could construct a TM that is and is not an l-zombie.

Comment author: Brillyant 27 January 2014 09:17:11PM 1 point [-]

The last line in the article is my favorite:

"Evolution, we could say, has found a simpler solution yet: reproduction. You get new people with the genetic heritage of the species, but neotenous and adaptable to the current environment."

It is ironic to me that death, as a part of the mechanism of natural selection, has brought about creatures who seek to invent methods to eliminate it.

Death, after reproduction, works as part of a process that advances a given species' fitness.

Comment author: Pentashagon 31 January 2014 04:21:11AM 0 points [-]

It is ironic to me that death, as a part of the mechanism of natural selection, has brought about creatures who seek to invent methods to eliminate it.

The irony is that DNA and its associated machinery, as close as it is to a Turing Machine, did not become sentient and avoid the concept of individual death. The universe would make much more sense if we were DNA-based computers that cared about our genes because they were literally our own thoughts and memories and internal experience.

Or perhaps DNA did become sentient and decided to embark on a grand AGI project that resulted in Unfriendly multi-cellular life...
