
Comment author: Michaelos 27 August 2014 02:31:03PM 3 points [-]

Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker. In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry. What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point. So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc. But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.
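A minimal sketch of the "computing on data you can never read" flavor, using a toy additively homomorphic scheme rather than real FHE — it is neither secure nor Gentry's construction, and every name and number below is an illustrative assumption:

```python
# Toy *additively* homomorphic scheme -- NOT real FHE and not secure;
# it only illustrates "computing on data you can't read".
# Ciphertexts are (m + k) mod N for a random key k, so adding ciphertexts
# adds the hidden plaintexts; only the key holder can decrypt the result.
import random

N = 2**61 - 1  # arbitrary modulus; all values are integers mod N

def keygen():
    return random.randrange(N)

def encrypt(key, m):
    return (m + key) % N

def decrypt(key, c):
    return (c - key) % N

data = [3, 14, 15, 92]
keys = [keygen() for _ in data]
ciphertexts = [encrypt(k, m) for k, m in zip(keys, data)]

# The untrusted worker adds numbers that look like random noise to it...
result = sum(ciphertexts) % N

# ...but whoever holds the keys recovers the true answer.
combined_key = sum(keys) % N
assert decrypt(combined_key, result) == sum(data)
print(decrypt(combined_key, result))  # 124
```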

You can probably see where this is going. What if we homomorphically encrypted a simulation of your brain? And what if we hid the only copy of the decryption key, let’s say in another galaxy? Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?

Okay, I think my bright dilettante answer to this is the following: The key is what allows you to prove that the FHE is conscious. It is not, itself, the FHE's consciousness, which is probably still silently running (although that can no longer be proven). Proof of consciousness and consciousness are different things, although they are clearly related, and something may or may not have proved its consciousness in the past before losing the ability to do so.

I used the following thought experiment while thinking about this:

Assume Bob, Debra, and Flora work at a company with a number of FHEs. Everyone at the company has to wear their FHE's decryption key and keep it with them at all times.

Alice is an FHE simulation in the middle of calculating a problem for Bob. It will take about 5 minutes to solve. Charlie is a separate FHE simulation in the middle of calculating a separate problem for Debra. It will also take 5 minutes to solve.

Bob and Debra both remove their keys, go to the bathroom, and come back. That takes 4 minutes.

Debra plugs the key back in, and sure enough FHE Charlie reports that it needs 1 more minute to solve the problem. A minute later Charlie solves it, and gives Debra the answer.

Bob comes in and tells Debra that he appears to have gotten water on his key and it is no longer working, so all he can get from Alice is just random gibberish. Bob is going to shut Alice down.

"Wait a minute." Debra tells Bob. "Remember, the problem we were working on was 'Are you conscious?' and the answer Charlie gave me was 'Yes. And here is a novel and convincing proof.' I read the proof and it is novel and convincing. Alice was meant to independently test the same question, because she has the same architecture as Charlie, just different specific information, like how you and I have the same architecture but different information. It doesn't seem plausible that Charlie would be conscious and Alice wouldn't."

"True." Bob says, reading the paper. "But the difference is, Charlie has now PROVED he's conscious, at least to the extent that can be done by this novel and convincing proof. Alice may or may not have had consciousness in the first place. She may have had a misplaced semicolon and outputted a recipe for blueberry pie. I can't tell."

"But she was similar to Charlie in every way prior to you breaking the encryption key. It doesn't make sense that she would lose consciousness when you had a bathroom accident." Debra says.

"Let's rephrase. She didn't LOSE conciousness, but she did lose the ability to PROVE she's conscious." Bob says.

"Hey guys?" Flora, a coworker says. "Speaking of bathroom accidents, I just got water on my key and it stopped working."

"We need to waterproof these! We don't have spares." Debra says shaking her head. "What happened with your FHE, Edward?"

"Well, he proved he was conscious with a novel and convincing proof." Flora says. handing a decrypted printout of it over to Debra. "After I read it, I was going to have a meeting with our boss to share the good news, and I wanted to hit the bathroom first... and then this happened."

Debra and Bob read the proof. "This isn't the same as Charlie's proof. It really is novel." Debra notes.

"Well, clearly Edward is conscious." Bob says. "At least, he was at the time of this proof. If he lost consciousness in the near future, and started outputting random gibberish we wouldn't be able to tell."

FHE Charlie chimes in. "Since I'm working, and you still have a decryption key for me, you can at least test that I don't start producing random gibberish in the near future. Since we're based on similar architecture, the same reasoning should apply to Alice and Edward. Also, Debra, could you please waterproof your key ASAP? I don't want people to take a broken key as an excuse to shut me down."

End thought experiment.

Now that I've come up with that, and I don't see any holes in it myself, I guess I need to start finding out what I'm missing, as someone who is only a dilettante at this. If I were to guess, it might be somewhere in the statement 'Proof of consciousness and consciousness are different things.' That seems to be a likely weak point, but I'm not sure how to address it immediately.

Comment author: Michaelos 22 August 2014 06:09:58PM *  6 points [-]

I think I have an idea of what they might be attempting to model, but if that is the case, a few phrases on the site aren't clear about it.

There are three possibilities I think they are attempting to model:

A: Defense strikes you. (Because you seem to favor the plaintiff too much)

B: Plaintiff strikes you. (Because you seem to favor the defense too much)

C: Neither side strikes you. You remain on the jury.

What they might be trying to say is that income < $50k might increase the chance of A and income >= $50k might increase the chance of C.

So 'No effect on either Lawyer' might be better phrased as 'Given that answer, you may be more likely to remain on the jury.'

Some answers would presumably have to indicate that because the two lawyers can't strike everyone.
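If that is the model, here is a minimal sketch of what it might look like; the baseline probabilities and the per-answer adjustments are invented purely for illustration, not anything the site actually publishes:

```python
# Hypothetical three-outcome strike model. All numbers are made up.
BASELINE = {"defense_strike": 0.25, "plaintiff_strike": 0.25, "remain": 0.50}

# Each answer nudges the probability of some outcomes up or down.
ANSWER_EFFECTS = {
    "income_under_50k":   {"defense_strike": +0.10, "remain": -0.10},
    "income_50k_or_more": {"defense_strike": -0.05, "remain": +0.05},
}

def outcome_probabilities(answers):
    probs = dict(BASELINE)
    for answer in answers:
        for outcome, delta in ANSWER_EFFECTS.get(answer, {}).items():
            probs[outcome] += delta
    # Renormalize so the three outcomes still sum to 1.
    total = sum(probs.values())
    return {k: v / total for k, v in probs.items()}

print(outcome_probabilities(["income_50k_or_more"]))
# 'remain' goes up and 'defense_strike' goes down: in other words,
# "given that answer you may be more likely to remain on the jury."
```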

Comment author: SolveIt 19 August 2014 08:37:03AM 2 points [-]

But is it clear that automation hasn't caused long-term unemployment?

Comment author: Michaelos 19 August 2014 01:11:15PM 4 points [-]

Something occurred to me while reading this comment that I'm now considering, though it isn't necessarily a direct response to the comment itself:

Automation doesn't actually have to be the sole cause of long-term unemployment problems to be a problem. If automation just slows the rate at which reemployment occurs after something else (perhaps a recession) causes the unemployment in the first place, that would still be problematic.

For instance, suppose we don't recover to the pre-recession peak of employment before a second recession hits, and we don't recover to the pre-second-recession peak before a third recession hits, and so on. That would be a downward spiral in employment with large economic effects, and every single one of the sudden downward drops could be caused by recessions, with automation merely hampering reemployment.
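A toy simulation of that dynamic; all the rates below are invented purely for illustration, not calibrated to anything:

```python
# Recessions knock employment down sharply; between recessions it climbs
# back toward a ceiling. If recovery is slow (automation-hampered), each
# pre-recession peak is lower than the last -- a downward ratchet.
def simulate(years, recession_every, drop, recovery_per_year, ceiling=0.95):
    employment = ceiling  # fraction of the workforce employed
    history = []
    for year in range(1, years + 1):
        if year % recession_every == 0:
            employment -= drop                        # sudden shock
        else:
            employment = min(ceiling, employment + recovery_per_year)
        history.append(round(employment, 3))
    return history

# Fast recovery: employment is back at the ceiling before the next shock.
print(simulate(20, recession_every=7, drop=0.05, recovery_per_year=0.02))

# Slow recovery: the shocks are identical, but each peak is lower than
# the one before it -- the downward spiral described above.
print(simulate(20, recession_every=7, drop=0.05, recovery_per_year=0.005))
```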

I'm kind of surprised I didn't think of something like this before, because it sounds much more accurate than my previous thinking. Thank you for helping me think about this.

Comment author: TylerJay 18 August 2014 03:12:17PM *  6 points [-]

Here's a video on AI job automation, intended to be accessible to a nontechnical audience, but still interesting:

http://qz.com/250154/still-think-robots-cant-do-your-job-this-video-may-change-your-mind/

Comment author: Michaelos 18 August 2014 08:01:54PM 0 points [-]

I saw that video from an entirely different site shortly before reading this article for the first time. I think it's a fairly good video and would recommend it to people that have 15 minutes.

In response to Truth vs Utility
Comment author: Michaelos 13 August 2014 06:53:07PM 1 point [-]

I'm making the assumption that, since #2 comes with 'no strings attached', it implies safety measures such as 'the answer does not involve the delivery of a star-sized supercomputer that kills you with its gravity well', since that feels like a string, while #1 does not have such safety measures (one interpretation is that you get infinite utility because you have been turned into a paperclipper in simulated paperclippium). With that assumption, I find myself trying to ponder ways of getting the idealized results of #1 with the safety measures of #2, such as:

"If you were willing to answer an unlimited number of questions, and I asked you all the questions I could think of, What are all question answer pairs where I would consider any set of those question answer pairs a net gain in utility, answered in order from highest net gain of utility to smallest net gain of utility?"

Keeping in mind that questions such as the one below would be part of the hilariously meta question above:

"Exactly, in full detail without compression and to the full extent of time, what would all of my current and potentially new senses experience like if I took the simulation in Option 1?"

It was simply an idea I found interesting and wanted to put into writing. Thank you for reading.

This was an interesting idea to read! (Even if I don't think my interpretation was what you had in mind.) Thank you for writing!

Comment author: Michaelos 12 August 2014 12:59:09PM 2 points [-]

If I post a response to someone, and someone replies to me, and their reply gets a single silent downvote before I read it, I find myself reflexively upvoting them just so they won't think I was the one who gave the single silent downvote. It seems plausible to me that if you have a single downvote and no responses, the most likely explanation is that the person you replied to downvoted you, and I don't want people to think that of me.

Except then my opinion of the post is hopelessly biased before I've even read it, because I'd feel bad revoking the upvote, let alone actually downvoting them, and I feel like I can't get back to the status quo of them just having a zero-point or positive post.

It also doesn't seem like it would have the same effect if someone replied to me and was heavily downvoted, but I don't actually recall that happening.

If I try to assess this more rationally, I get the suggestion: "You're worrying far too much about what other people MIGHT be thinking, based on flimsy evidence."

Thoughts?

Comment author: Michaelos 11 August 2014 01:34:45PM 0 points [-]

A radical social movement needs one charismatic radical who enunciates appealing, impractical ideas, and another figure who can appropriate all of the energy and devotion generated by the first figure's idealism, yet not be held to their impractical ideals. It's a two-step process that is almost necessary, to protect the pretty ideals that generate popular enthusiasm from the grit and grease of institution and government.

Should we add 'followers' to this list? A substantial difference between MLP:FIM and other works of fiction isn't necessarily the lack of an idealist, or the lack of a large company behind that idealist. There are other examples of productions that have both. It's the lack of Bronies (or, using some of the other examples on your list, Christians, Socialists, Nazis, Mormons, Scientologists, Revolutionaries, Objectivists...).

Comment author: Michaelos 11 August 2014 01:20:51PM *  0 points [-]

I think you may have some typos such as 'for the greatest number for the greatest number' in paragraph 1 and 'will start end' in paragraph 2. But that aside, if I throw in some concrete numbers:

GG1: 1 year of fun, starting today.

GG2: 1 year of fun, starting 3 years from now and ending 4 years from now.

GG3: 2 years of fun, starting 1 year from now and ending 3 years from now.

If you were to heavily time-discount, you would probably pick GG1.

If you simply wanted the most person-years of fun, you would probably pick GG3.

If you were under the impression that this was heavily focused on a survival analysis (Dr. Dystopia, unless stopped, will cause absolutely no effects until the fun starts, and then at the end of the fun period will exterminate everyone forever), then you might want to pick GG2, since that gives the most time to come up with a plan to stop Dr. Dystopia.
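The rough arithmetic behind those three readings, with an assumed discount factor thrown in for concreteness (nothing below comes from the original post beyond the GG numbers):

```python
# (start year, years of fun) for each scenario
scenarios = {"GG1": (0, 1), "GG2": (3, 1), "GG3": (1, 2)}

DISCOUNT = 0.5  # heavy annual time discounting, chosen to make the contrast obvious

for name, (start, length) in scenarios.items():
    discounted = sum(DISCOUNT ** (start + t) for t in range(length))
    person_years = length          # the population is the same in every scenario
    time_to_doom = start + length  # years to plan before Dr. Dystopia strikes
    print(name, round(discounted, 3), person_years, time_to_doom)

# GG1: 1.0,   1 year,  1 year  -> wins under heavy time discounting
# GG2: 0.125, 1 year,  4 years -> wins if you want the most time to stop Dr. Dystopia
# GG3: 0.75,  2 years, 3 years -> wins on raw (undiscounted) years of fun
```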

If three people all have comparable utility beliefs, except that one heavily time-discounts, one heavily values person-years, and one heavily thinks in terms of survival analysis, and they need to vote on these scenarios, presumably they have different priors and can begin discussing the various types of evidence they have for those beliefs and attempt to come to an accurate conclusion.

Does that help? I'm a bit concerned I'm not addressing the core question, but I'm not sure what else to say yet.

Comment author: Slider 08 August 2014 08:57:35PM *  1 point [-]

I would say yes. One of Albert's values is to be transparent about his cognitive process.

but you are reading that as if self-awareness were one of Albert's values. The reason he wants to be self-aware is to raise the probability of safe self-edits. Being transparent is about raising the ease of verification by the programmers. Self-awareness doesn't work to this end.

Hiding one channel bears no implication on the visibility of any generated channels.

The only real downside is if he becomes too reliant on such "telepathy" and doesn't explicitly communicate it through official channels. I would reckon that pondering high-utility questions could soon become correlated with programmer presence.

Comment author: Michaelos 11 August 2014 12:54:08PM 0 points [-]

Hiding one channel bears no implication on the visibility of any generated channels.

I think this is a good key point.

If the programmers wish to have a hidden channel, and Albert's code independently suggests an identical channel that isn't hidden (because Albert just came up with the idea), then it is perfectly fine to just implement the open channel and have Albert remember that fact. The entire reason to have the hidden channel is to prevent Albert from going below a certain level of transparent communication.

If Albert voluntarily communicates more, that's great, but you would still want to leave the hidden channel in as safety code.
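A minimal sketch of that "hidden channel as a transparency floor" idea; the class and method names below are invented for illustration and are not from the original posts:

```python
class ThoughtMonitor:
    def __init__(self):
        self.hidden_log = []  # programmers' hidden channel: always records
        self.open_log = []    # the open channel Albert proposed and knows about

    def record(self, thought, albert_reports_openly):
        self.hidden_log.append(thought)          # captured no matter what
        if albert_reports_openly:
            self.open_log.append(thought)        # voluntary, redundant reporting

    def missed_reports(self):
        # What the programmers would have lost without the hidden channel.
        return [t for t in self.hidden_log if t not in self.open_log]

monitor = ThoughtMonitor()
monitor.record("considering self-edit #42", albert_reports_openly=True)
monitor.record("considering self-edit #43", albert_reports_openly=False)
print(monitor.missed_reports())  # ['considering self-edit #43']
```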

Comment author: ChristianKl 08 August 2014 07:07:31PM 0 points [-]

I'm not sure if identifying high impact utility calculations is that easy. A lot of Albert's decisions might be high utility.

Comment author: Michaelos 08 August 2014 08:44:05PM 0 points [-]

I was going by the initial description from Douglas_Reay:

Albert is a relatively new AI, who under the close guidance of his programmers is being permitted to slowly improve his own cognitive capability.

That does not sound like an entity that should be handling a lot of high-impact utility calculations. If an entity was described that way and was constantly announcing that it was making high-impact utility decisions, that either sounds like a bug, or people are giving it things it isn't meant to deal with yet.
