
Comment author: TheAncientGeek 08 July 2014 02:06:49PM *  0 points [-]

There's a difference between causation and reduction. The idea that qualia have physical causes is compatible with dualism, the idea that they are not reducible to physics.

Knowing what causes non-standard qualia, or where they diverge, still doesn't tell you how non-standard qualia feel to the person having them.

For that reason, we are not going to have private language dictionaries any time soon. Looking at brain scans of someone with non-standard qualia is not going to tell me what their qualia are as qualia.

Comment author: Pentashagon 09 July 2014 02:41:50AM 0 points [-]

Granted; we won't have definitive evidence for or against dualism until we can fully test the (non-)reductive nature of qualia. If people who have access to each other's private language dictionaries still report meta-feeling that the other person feels different qualia from the same mental stimulus, then I'll have more evidence for dualism than I do now. True, that won't help with incomparable qualia, but it would be kind of...convenient...if the only incomparable qualia are the ones that people report feeling differently.

Comment author: roystgnr 08 July 2014 05:05:43PM 13 points [-]

I'm slightly torn here.

My first impulse was to point out that, just because there happened to be seven people on this site with the poor sense to upvote this unintentional self-parody, that doesn't justify an eighth person having the poor sense to unilaterally delete it.

Then I remembered kuro5hin's decline and death. A self-moderated forum can transition from "temporary lull in activity" to "permanent death spiral" if a critical mass of trolls pounce on the lull, and it's not a pretty thing to witness. The "critical mass" doesn't have to be very large, either, since self-moderated forums generally weight their users' opinions proportionately to amount-of-available-free-time, which most trolls have in surplus. I suspect it wouldn't have taken too many swings of the banhammer to save that site.

There's got to be a better solution, though. I'd hope there's even a simple better solution. Maybe a "trash" category alongside "main" and "discussion"? Then moderators can move posts between categories, while users can upvote/downvote within each. That would still allow moderators to keep the new users' queue cleaned up in a way that can't be "gamed", but makes moderator mistakes much less significant. "They're deleting me!" is a half-decent rallying cry; "They're calling me names!" not so much.
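A minimal sketch of that scheme, with placeholder names: posts carry a category that only moderators may change, while voting works the same way in every category, "trash" included.

    from dataclasses import dataclass

    CATEGORIES = {"main", "discussion", "trash"}

    @dataclass
    class Post:
        title: str
        category: str = "discussion"
        votes: int = 0

    def move_post(post: Post, new_category: str, is_moderator: bool) -> None:
        """Moderators may recategorize a post, 'trash' included -- nothing is deleted."""
        if not is_moderator:
            raise PermissionError("only moderators can recategorize posts")
        if new_category not in CATEGORIES:
            raise ValueError("unknown category: " + new_category)
        post.category = new_category

    def vote(post: Post, delta: int) -> None:
        """Anyone can vote within any category, including 'trash'."""
        post.votes += delta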

Comment author: Pentashagon 09 July 2014 02:31:13AM 2 points [-]

Then I remembered kuro5hin's decline and death.

kuro5hin is still a vibrant, thriving community.

Comment author: TheAncientGeek 06 July 2014 02:46:08PM *  0 points [-]

If you can write a non-open-ended goal i

I believe I've done that every time I've used Google maps.

Comment author: Pentashagon 07 July 2014 06:33:12AM 0 points [-]

I believe I've done that every time I've used Google maps.

"How do I get from location A to location B" is more open ended than "How do I get from location A to location B in an automobile" which is even still much more open ended than "How do I get from a location near A to a location near B obeying all traffic laws in a reasonably minimal time while operating a widely available automobile (that can't fly, jump over traffic jams, ford rivers, rappel, etc.)"

Google is drastically narrowing the search space for achieving your goal, and presumably doing it manually rather than with an AGI they told to host a web page with maps of the world that tells people how to quickly get from one location to another. Google is not alone in sending drivers off cliffs, into water, the wrong way down one-way streets, or across airport tarmacs.

Safely narrowing the search space is the hard problem.
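As a toy illustration of what narrowing the search space looks like in code, here is the same shortest-path search run with and without a constraint predicate pruning edges; the graph and field names are hypothetical.

    import heapq

    def shortest_path(graph, start, goal, edge_ok=lambda e: True):
        """Dijkstra over a road graph; `edge_ok` narrows the search space
        by rejecting edges (ferries, runways, one-way violations, ...)."""
        frontier = [(0, start, [start])]
        seen = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for edge in graph.get(node, []):
                if not edge_ok(edge):          # the "safety" constraints live here
                    continue
                heapq.heappush(frontier, (cost + edge["cost"], edge["to"], path + [edge["to"]]))
        return None

    # Hypothetical toy graph: the unconstrained search happily routes across a tarmac.
    graph = {
        "A": [{"to": "tarmac", "cost": 1, "legal": False},
              {"to": "highway", "cost": 3, "legal": True}],
        "tarmac": [{"to": "B", "cost": 1, "legal": False}],
        "highway": [{"to": "B", "cost": 3, "legal": True}],
    }
    print(shortest_path(graph, "A", "B"))                                  # (2, ['A', 'tarmac', 'B'])
    print(shortest_path(graph, "A", "B", edge_ok=lambda e: e["legal"]))    # (6, ['A', 'highway', 'B'])

The hard part the comment points at is not running the pruned search; it is writing an `edge_ok` that actually captures everything we care about.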

Comment author: leplen 01 June 2014 05:26:59PM 7 points [-]

I'm broadly interested in the question, what physical limits if any, will a superintelligence face? What problems will it have to solve and which ones will it struggle with?

Eliezer Yudkowsky has made the claim "A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass."

I can't see how this is true. It isn't obvious to me that one could conclude anything from a video like that without substantial prior knowledge of mathematical physics. Seeing a red, vaguely circular object move across a screen tells me nothing unless I already know an enormous amount.

We can put absolute physical limits on the energy cost of a computation, at least in classical physics. How many computations would we expect an AI to need in order to do X or Y? Can we effectively box an AI by only giving it a 50W power supply?
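As a back-of-the-envelope check on that last question, Landauer's bound puts the minimum energy of an irreversible bit operation at kT ln 2. A rough sketch, where the 50 W figure comes from the question above and everything else is standard constants:

    import math

    k_B = 1.380649e-23          # Boltzmann constant, J/K
    T = 300.0                   # room temperature, K
    power = 50.0                # watts available to the boxed AI (from the question above)

    # Landauer's bound: minimum energy to erase one bit of information.
    energy_per_bit = k_B * T * math.log(2)      # ~2.9e-21 J

    # Upper bound on irreversible bit operations per second at 50 W.
    bit_ops_per_second = power / energy_per_bit
    print("%.2e bit erasures per second" % bit_ops_per_second)   # ~1.7e22

Reversible computing can in principle evade this bound, so it limits irreversible operations rather than computation as such.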

I think there are some interesting questions at the intersection of information theory/physics/computer science that seem like they would be relevant for the AI discussion that I haven't seen addressed anywhere. There's a lot of hand-waving, and arguments about things that seem true, but "seem true" is a pretty terrible argument. Unlike math, "seem true" pretty reliably yields whatever you wanted to believe in the first place.

I'm making slow progress on some of these questions, and I'll eventually write it up, but encouragement, suggestions, etc. would be pretty welcome, because it's a lot of work and it's pretty difficult to justify the time/effort expenditure.

Comment author: Pentashagon 03 June 2014 09:03:28AM 4 points [-]

I can't see how this is true. It isn't obvious to me that one could conclude anything from a video like that without substantial prior knowledge of mathematical physics. Seeing a red, vaguely circular object move across a screen tells me nothing unless I already know an enormous amount.

This DeepMind paper describes a neural network that learned from an emulated Atari 2600 display as its only input and eventually drove the emulated Atari controls directly, doing very well at several games. The network was not built with prior knowledge of Atari game systems or the games in question, except that training used the internal game score as a direct measure of success.

More than 3 frames from the display were used for training, but it arguably wasn't a superintelligence looking at them.
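For reference, a minimal tabular Q-learning toy on a stand-in one-dimensional "game"; this is not the paper's deep network, just the kind of update that network approximates, with raw pixels standing in for states.

    import random

    ACTIONS = ["left", "right"]
    N, START, GOAL = 6, 0, 5          # states 0..5, reward only at the goal
    alpha, gamma = 0.1, 0.95

    Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

    def step(state, action):
        nxt = max(0, state - 1) if action == "left" else min(N - 1, state + 1)
        reward = 1.0 if nxt == GOAL else 0.0     # the "game score"
        return nxt, reward, nxt == GOAL

    for episode in range(500):
        s, done = START, False
        while not done:
            a = random.choice(ACTIONS)           # pure exploration; Q-learning is off-policy
            s2, r, done = step(s, a)
            best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
            # Move Q(s, a) toward reward + discounted best next value.
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2

    print(max(ACTIONS, key=lambda a: Q[(START, a)]))   # learns "right"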

Comment author: jimrandomh 09 May 2014 07:50:54PM 13 points [-]

Three months into the process, the economist came back, and sold them both digital signs that update every hour.

Comment author: Pentashagon 11 May 2014 07:04:14AM 0 points [-]

Isn't the moral of the story that the gas station owners colluded to form an illegal cartel to raise gasoline prices?

Comment author: NancyLebovitz 19 April 2014 11:14:34PM 2 points [-]

If you could come up with an organization with as much emotional oomph as the Catholic Church that took cryonics seriously, that would be very impressive, but I don't think it's possible.

On the other side, what would it take to convince the Catholic Church that frozen people were alive enough that care should be taken to keep them frozen until they can be revived?

Comment author: Pentashagon 20 April 2014 06:43:41AM *  9 points [-]

In Dignitas Personae, sections 18 and 19, the Catholic Church asserts the personhood of cryopreserved embryos; although it objects to IVF and related techniques for several other reasons, a major objection is that many cryopreserved embryos are never revived. It specifically objects to cryopreservation because it carries the risk of death for human embryos, implying that they are either living or at least not dead; it suggests the possibility of "prenatal adoption", and also objects to any medical use or destruction of the embryos.

So, in a narrow sense, they already believe that frozen people are alive enough to be worth keeping frozen or reviving.

Comment author: Pentashagon 09 February 2014 06:23:32AM 0 points [-]

Consider a Turing Machine whose input is the encoded state of the world as a binary integer S, of maximum value 2^N - 1, and which seeks S·N positions into its binary tape and outputs the next N bits from the tape. Does that Turing Machine cause the same experience as an equivalent Turing Machine that simulates 10 seconds of the laws of physics for the world state S and outputs the resulting world state? I posit that the former TM actually causes no experience at all when run, despite the equivalence. So there probably exist l-zombies that would act as if they have experience when they are run, but are wrong.
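A minimal sketch of the contrast, with a toy update rule standing in for "the laws of physics": the two machines compute the same function, but only one of them performs the intermediate steps.

    N = 8  # world states are N-bit integers

    def simulate_physics(state, steps=10):
        """Stand-in for 'simulate 10 seconds of physics': a fixed,
        deterministic update rule applied step by step."""
        for _ in range(steps):
            state = (state * 5 + 3) % (2 ** N)
        return state

    # The lookup-table machine: its "tape" is every answer precomputed.
    tape = [simulate_physics(s) for s in range(2 ** N)]

    def lookup_machine(state):
        """Seek S*N bits into the tape and read N bits -- here, index the list."""
        return tape[state]

    s = 42
    assert lookup_machine(s) == simulate_physics(s)   # extensionally the same function
    # ...but only simulate_physics ever computes the intermediate world states.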

Your last paragraph brings up an interesting question. I assumed that transitions between l-zombie and person are one-way. Once run, how do you un-run something? It seems to imply that you could construct a TM that is and is not an l-zombie.

Comment author: Brillyant 27 January 2014 09:17:11PM 1 point [-]

The last line in the article is my favorite:

"Evolution, we could say, has found a simpler solution yet: reproduction. You get new people with the genetic heritage of the species, but neotenous and adaptable to the current environment."

It is ironic to me that death, as a part of the mechanism of natural selection, has brought about creatures who seek to invent methods to eliminate it.

Death, after reproduction, works as part of a process that advances a given species' fitness.

Comment author: Pentashagon 31 January 2014 04:21:11AM 0 points [-]

It is ironic to me that death, as a part of the mechanism of natural selection, has brought about creatures who seek to invent methods to eliminate it.

The irony is that DNA and its associated machinery, as close as it is to a Turing Machine, did not become sentient and avoid the concept of individual death. The universe would make much more sense if we were DNA-based computers that cared about our genes because they were literally our own thoughts and memories and internal experience.

Or perhaps DNA did become sentient and decided to embark on a grand AGI project that resulted in Unfriendly multi-cellular life...

Comment author: Pentashagon 17 January 2014 03:38:07AM 0 points [-]

Is behaviorism the right way to judge experience? Suppose you simply recorded the outcome of the sleeping beauty problem for a computer X, and then replayed the scenario a few times using the cached choice of X instead of actually running X again each time. For that matter, suppose you just accurately predict what X will conclude and never actually run X at all. Does X experience the same thing the same number of times in all these instances? I don't see a behavioral difference between running two thin computers layered on top of each other and using one thin computer and one cached/predicted result.
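A toy sketch of that replay idea, with a stand-in deterministic function playing the role of computer X: from the outside, the cached run is indistinguishable from a fresh run.

    def computer_x(observation):
        """Stand-in for 'computer X' deliberating: a deterministic decision procedure."""
        # ...imagine arbitrarily much internal computation here...
        return "tails" if observation % 2 else "heads"

    # First run: actually execute X and record its choices.
    cache = {obs: computer_x(obs) for obs in range(4)}

    def replay(observation):
        """Later 'runs' just emit the cached choice; X is never executed again."""
        return cache[observation]

    # Behaviorally, the two are indistinguishable from the outside:
    assert all(replay(o) == computer_x(o) for o in range(4))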

Another way to ask this is: how thin can you slice the computer before it stops having experience? From a behaviorist perspective you can thin it all the way down to just cached or predicted responses.

If we can't differentiate between how much experience a computer has when it's running thin or thick, then that implies we can't truly measure how much experience a computer has at different times or places. It's just an assumption that a computer now and a computer 5 minutes from now have the same measure of experience, but I don't see a way to test that assumption.

Comment author: MugaSofer 12 January 2014 06:58:10PM -1 points [-]

The AI's primary goal is specifically to have a reduced impact, as opposed to AI boxing where the AI's goals are artificially restricted by the box. So long as the AI is good at goal-preservation it can self-improve while continuing to have a goal of reduced impact. Increasing intelligence/power makes reduced impact more effective but makes AI-boxing less effective.

Yes, it's the equivalent of making the AI design its own box - the box is better-designed the smarter the AI is.

But it doesn't matter how well-designed your box is. It's probably not impossible to design a box an arbitrarily smart entity can't escape (don't quote me on that). But they will escape anyway, because you put a channel in for them to give you stuff - advice, nanotech plans, whatever - and they will escape out that channel. The channel out - the human sitting at a terminal somewhere - is the weak point, not the durability of the box itself.

Comment author: Pentashagon 13 January 2014 03:42:03AM 0 points [-]

But they will escape anyway, because you put a channel in for them to give you stuff - advice, nanotech plans, whatever - and they will escape out that channel. The channel out - the human sitting at a terminal somewhere - is the weak point, not the durability of the box itself.

I almost completely agree; don't make the channel too large, otherwise the AI has too much choice over the future. In the simplest and safest case, however, we could start up a reduced-impact AI whose goal is to alter the world only through a single binary choice (a 1-bit channel). Since the goal is not to alter the world more than a 1-bit choice could allow, it would be virtually impossible for the AI to escape through that channel. For instance, the AI would attempt to prevent a human from releasing it, since that would have a much larger than 1-bit effect on the world.

That is not a terribly useful AI, but perhaps we could have it approve or reject critical plans for space exploration, nanotechnology, the reduced-impact safety of other AIs (though this is probably the most dangerous use; if it agrees with the goals but not the reduced-impact nature of the new AI, of course it will approve it), etc. Each time it was used, one more bit would be given to it to influence the future, so we would still have to be very careful that it didn't escape over a long history of use.

I don't have a good strategy for preventing the latter. Perhaps we only ask it for unique answers to NP-hard problems, so that we can verify the solutions are not an attempt to use tiny parts of the channel to form an aggregate message to escape, and make each question small enough that we can stop using it after it appears to be trying to abuse the channel, without it being able to actually escape. I don't know the shortest bit-sequence that an AI could use to escape, so I have no upper bound for a safe channel size.
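To illustrate the verification step: for NP problems, checking a claimed certificate is cheap even when finding one is hard. A minimal sketch for subset-sum, with a made-up instance:

    def verify_subset_sum(numbers, target, certificate):
        """Cheap check of a claimed solution: `certificate` is a list of distinct
        indices into `numbers` whose elements must sum to `target`."""
        return (len(set(certificate)) == len(certificate)
                and all(0 <= i < len(numbers) for i in certificate)
                and sum(numbers[i] for i in certificate) == target)

    # Finding a subset that hits the target is the hard direction;
    # verifying the boxed AI's claimed answer takes linear time.
    numbers = [13, 42, 7, 29, 8]
    print(verify_subset_sum(numbers, 49, [1, 2]))   # True: 42 + 7 == 49
    print(verify_subset_sum(numbers, 49, [0, 3]))   # False: 13 + 29 != 49

Of course, verifying that an answer is correct is not the same as verifying that it is not also a message; that part remains open.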
