
Meetup : Atlanta September Meetup - Self Awareness

1 Adele_L 23 September 2014 01:02AM

Discussion article for the meetup : Atlanta September Meetup - Self Awareness

WHEN: 27 September 2014 07:00:00PM (-0400)

WHERE: 2388 Lawrenceville Hwy. Unit L, Decatur, GA 30033

At this meetup we'll be discussing self-awareness and reflection. We'll be talking about ideas from the Living Luminously sequence. As usual, there will be snacks and games as well as plenty of interesting conversation! There are cats at the location. Please park in a spot labeled "visitor", as parking in the other spots runs the risk of getting towed. Hope to see you then!


Comment author: bramflakes 15 September 2014 05:53:19PM 11 points

I think this thread is for opinions that are contrarian relative to LW, and not to the mainstream.

e.g. my opinion on open borders is something that a great majority of people share but is contrarian here, as shown by the fact that, at the time of writing, it is tied for highest-voted in the thread.

Comment author: Adele_L 15 September 2014 06:08:30PM 1 point

I think it's still a problem relative to LW.

Comment author: Adele_L 15 September 2014 04:20:57PM 8 points

Meta

I think LW is already too biased towards contrarian ideas - we don't need to encourage them more with threads like this.

Comment author: James_Miller 13 September 2014 07:52:47PM * 0 points

According to the definition of the game, the clone will happen to defect if you defect and will happen to cooperate if you cooperate.

You have to consider off-the-equilibrium-path behavior. If I'm the type of person who will always cooperate, what would happen if I went off the equilibrium path and did defect, even if my defecting is a zero-probability event?

Comment author: Adele_L 13 September 2014 08:38:06PM 1 point

You can consider it, but conditioned on the information that you are playing against your clone, you should assign this a very low probability of happening, and weight it in your decision accordingly.
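
A minimal sketch of that weighting (my own illustration, not anything from the thread), assuming the standard textbook Prisoner's Dilemma payoffs T=5, R=3, P=1, S=0 and writing eps for the probability that the clone fails to mirror your choice:

```python
# Toy expected-value check for the "playing against your clone" dilemma.
# The payoff numbers are the standard textbook values (an assumption, not
# something specified in the thread): T=5 temptation, R=3 mutual cooperation,
# P=1 mutual defection, S=0 sucker's payoff.

T, R, P, S = 5.0, 3.0, 1.0, 0.0

def expected_payoffs(eps):
    """Expected payoff of each action when the clone mirrors your choice
    with probability 1 - eps (eps = chance of an off-the-path mismatch)."""
    eu_cooperate = (1 - eps) * R + eps * S  # clone mirrors you, or defects against you
    eu_defect = (1 - eps) * P + eps * T     # clone mirrors you, or you exploit it
    return eu_cooperate, eu_defect

for eps in (0.0, 0.01, 0.1, 0.3):
    c, d = expected_payoffs(eps)
    print(f"eps={eps:<4} EU(cooperate)={c:.2f}  EU(defect)={d:.2f}")

# With these numbers, cooperation is the better bet whenever eps < 2/7, so a
# "very low probability" of mismatch is enough to favor cooperating.
```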

Comment author: Warrigal3 13 September 2014 02:13:10PM 2 points

So, I read textbooks "wrong".

The "standard" way of reading a textbook (a math textbook or something) is, at least I imagine, to read it in order. When you get to exercises, do them until you don't think you'd get any value out of the remaining exercises. If you come across something that you don't want to learn, skip forwards. If you come across something that's difficult to understand because you don't fully understand a previous concept, skip backwards.

I almost never read textbooks this way. I essentially read them in an arbitrary order. I tend to start near the beginning and move forwards. If I encounter something boring, I tend to skip it even if it's something I expect to have to understand eventually. If I encounter something I have difficulty understanding because I don't fully understand a previous concept, I skip backwards in order to review the previous concept. Or I skip forwards in the hopes that the previous concept will somehow become clear later. Or I forget about it and skip to an arbitrary different interesting section. I don't do exercises unless either they seem particularly interesting, or I feel like I have to do them in order to understand the material.

I know that I can sometimes get away with the second method even when other people wouldn't be able to. If I were to read a first-year undergraduate physics textbook, I imagine I could read it in essentially any order without trouble, even though I never took undergraduate physics. But I tend to use this method for all textbooks, including textbooks that are at or above my level (Awodey's Category Theory, Homotopy Type Theory, David Tong's Quantum Field Theory, Figure Drawing for All It's Worth).

Is the second method a perfectly good alternative to the "standard" method? Am I completely shooting myself in the foot by using the second method for difficult textbooks? Is the second method actually better than the "standard" method?

Comment author: Adele_L 13 September 2014 03:13:49PM * 1 point

This is how I read too, usually. I think it's one of those things that works for some people but not others. I've tried reading things the standard way, and it works for some books, but for others I just get too bored trudging through the boring parts.

BTW, I've also been reading HoTT, so if you want to talk about it or something feel free to message me!

Comment author: skeptical_lurker 12 September 2014 05:10:34PM 1 point

My 30-day karma just jumped over 40 points since I checked LW this morning. Either I've said something really popular (but none of my recent comments have karma that high), or there's a bug.

Comment author: Adele_L 12 September 2014 10:12:12PM * 3 points

My guess is that someone with a similar political ideology to you upvoted forty of your comments on the recent political post.

ETA: Well, I've been struck by the mysterious mass-upvoter as well! I'm pretty sure the political-motivation hypothesis is wrong now.

Comment author: NancyLebovitz 09 September 2014 09:30:23PM 1 point

Another risk of polyamory is increasing the odds of getting involved with someone who is very bad news.

On the other hand, if you choose to be monogamous, then the consequences of a bad partner are more serious.

Comment author: Adele_L 09 September 2014 10:06:50PM -1 points

Also, having other good partners while dealing with a bad one can make the situation a lot easier, and can help you recognize what's happening and get out of it faster.

Comment author: Matthew_Opitz 05 September 2014 01:34:23AM 3 points

Okay, wow, I don't know if I quite understand any of this, but this part caught my attention:

The Omohundrian/Yudkowskian argument is not that we can take an arbitrary stupid young AI and it will be smart enough to self-modify in a way that preserves its values, but rather that most AIs that don't self-destruct will eventually end up at a stable fixed-point of coherent consequentialist values. This could easily involve a step where, e.g., an AI that started out with a neural-style delta-rule policy-reinforcement learning algorithm, or an AI that started out as a big soup of self-modifying heuristics, is "taken over" by whatever part of the AI first learns to do consequentialist reasoning about code.

I have sometimes wondered whether the best way to teach an AI a human's utility function would be not to program it into the AI directly (since that would require figuring out what we really want in a precisely defined way, which seems like a gargantuan task), but rather to "raise" the AI like a kid, at a stage where the AI would have only minimal and restricted ways of interacting with human society (to minimize harm...much as a toddler thankfully does not have the muscles of Arnold Schwarzenegger to use during its temper tantrums). We would then "reward" or "punish" the AI for seeming to demonstrate a better or worse understanding of our utility function.

It always seemed to me that this strategy had the fatal flaw that we would not be able to tell if the AI was really already superintelligent and was just playing dumb and telling us what we wanted to hear so that we would let it loose, or if the AI really was just learning.

In addition to that fatal flaw, it seems to me that the above quote suggests another fatal flaw to the "raising an AI" strategy—that there would be a limited time window in which the AI's utility function would still be malleable. It would appear that, as soon as part of the AI figures out how to do consequentialist reasoning about code, then its "critical period" in which we could still mould its utility function would be over. Is this the right way of thinking about this, or is this line of thought waaaay too amateurish?

Comment author: Adele_L 05 September 2014 03:32:33AM 6 points

It always seemed to me that this strategy had the fatal flaw that we would not be able to tell if the AI was really already superintelligent and was just playing dumb and telling us what we wanted to hear so that we would let it loose, or if the AI really was just learning.

In addition to that fatal flaw, it seems to me that the above quote suggests another fatal flaw to the "raising an AI" strategy—that there would be a limited time window in which the AI's utility function would still be malleable. It would appear that, as soon as part of the AI figures out how to do consequentialist reasoning about code, then its "critical period" in which we could still mould its utility function would be over. Is this the right way of thinking about this, or is this line of thought waaaay too amateurish?

This problem is essentially what MIRI has been calling corrigibility. A corrigible AI is one that understands and accepts that it or its utility function is not yet complete.
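
As a very rough, purely illustrative sketch (the class names and acceptance rule below are hypothetical, not MIRI's actual formalism), the distinction can be caricatured as whether an agent will accept a correction to its utility function even when its current utility function scores that correction as a loss:

```python
# Hypothetical toy contrast, not MIRI's formalism: a corrigible agent treats
# its current utility function as provisional, so it does not veto a proposed
# correction just because the correction looks bad by its current lights.

def current_utility(outcome):
    # Stand-in utility function; assumed to be an incomplete first attempt.
    return outcome.get("paperclips", 0)

class IncorrigibleAgent:
    def accepts_correction(self, proposed_utility, test_outcome):
        # Resists any change that scores worse under its *current* values.
        return proposed_utility(test_outcome) >= current_utility(test_outcome)

class CorrigibleAgent:
    def accepts_correction(self, proposed_utility, test_outcome):
        # Defers to the programmers' correction, because it models its own
        # utility function as not yet complete.
        return True
```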

Comment author: MathiasZaman 28 August 2014 10:46:13AM 4 points

I'll try to update it before Sunday. Tumblr made spaghetti code out of the HTML version of the list, making updating it more laborious than it should be. It'll take some time to sort out, but I'll solve the problem by saving a neat version on my laptop.

Comment author: Adele_L 29 August 2014 05:12:12PM 2 points

Sounds good. Guess I should request to be on it before then!

Comment author: Adele_L 26 August 2014 07:41:28PM 7 points

I haven't read it yet, but I think the bright dilettante caveat applies less strongly than usual, given that it is disclaimed with "My talk is for entertainment purposes only; it should not be taken seriously by anyone," and I think it's weird that you felt it was necessary to bring it up for this post specifically. Do you want people to take this more seriously than Scott seems to? Anyway, I feel more suspicious going into the post than I would otherwise because of this.
