Comment author: meta_ark 18 April 2011 12:10:32AM 0 points [-]

Would you be able to make it if we moved to a slightly later time, so Oklord and Erratio could come after work?

Comment author: zemaj 19 April 2011 05:26:35AM 0 points [-]

Sure. Just post here what you decide. I'll check this page before turning up.

Comment author: zemaj 15 April 2011 01:26:44AM 1 point [-]

Calendared. I plan to be there!

In response to comment by michaelhoney on Where are we?
Comment author: MattFisher 03 April 2009 03:54:35AM 0 points [-]

Sydney, Australia

But I could make it to Canberra ;)

In response to comment by MattFisher on Where are we?
Comment author: zemaj 21 September 2010 08:52:44AM 0 points [-]

+1!

Comment author: [deleted] 18 August 2010 03:46:37PM *  8 points [-]

I think a largish fraction of the population has worries about human extinction / the end of the world. Very few associate this with the phrase "existential risk" -- I for one had never heard the term until after I had started reading about the technological singularity and related ideas. Perhaps a rebranding of sorts would help you further the cause. Ditto for FAI -- I think 'Ethical Artificial Intelligence' would get the idea across well enough and might sound less flaky to certain audiences.

In response to comment by [deleted] on Existential Risk and Public Relations
Comment author: zemaj 19 August 2010 10:09:06AM 9 points [-]

"Ethical Artificial Intelligence" sounds great and makes sense without having to know the background of the technological singularity as "Friendly Artificial Intelligence" does. Every time I try to mention FAI to someone without any background on the topic I always have to take two steps back in the conversation and it becomes quickly confusing. I think I could mention Ethical AI and then continue on with whatever point I was making without any kind of background and it would still make the right connections.

I also expect it would appeal to a demographic likely to support the concept. People who worry about ethical food, business, healthcare, etc. would be likely to worry about existential risk on many levels.

In fact I think I'll just go ahead and start using Ethical AI from now on. I'm sure people in the FAI community would understand what I'm talking about.

Comment author: Jack 14 May 2010 12:42:40AM 31 points [-]

Maybe you should write a post that describes the same effect but without the pictures, citations or good grammar.

Comment author: zemaj 15 May 2010 12:04:42AM 8 points [-]

Maybe u shld write a post that describes the same effect but wihout the pics, citaations or grammar.

Comment author: zemaj 19 April 2010 01:47:05AM 5 points [-]

Hi

Been reading Less Wrong religiously for about 6 months, but still definitely in the consume-not-contribute phase.

It feels like Less Wrong has pretty dramatically changed my life. I'm doing pretty well with overcoming akrasia (or at least identifying it where I haven't yet overcome it). I'm also significantly happier all round, understanding the decisions I make and, most importantly, exercising my ability to control them. I'm doing a lot of things I would have avoided before, just because I realise that my reasons for avoiding them were not rational. My boundaries are much more sensible now and getting better weekly. Still a work in progress, but I'm incredibly happy with where things are going.

So, a big thanks to everyone who contributes here. Can't thank you enough :)

In response to comment by ata on City of Lights
Comment author: pjeby 01 April 2010 07:37:56PM 4 points [-]

That reminds me a bit of PJ Eby's list of ways people sometimes do his RMI technique wrong. (PJ, if you're reading this, would you mind if I posted it? I'm referring to the list from page 55 of TTD.)

That's fine; I've posted a similar list here previously, too.

I know RMI isn't exactly the same as what Alicorn is talking about,

It's sort of the same, in that the same basic mental state applies. It's simply a question of utilization.

My model differs in that I assume there are really only two "parts" to speak of:

  1. The "near" brain, composed of a network of possibly-conflicting interests, and a warehouse of mental/physical motor programs, classified by context and expected effects on important variables (such as SASS-derived variables).

  2. The logical, confabulating, abstract, verbal "far" brain... whose main role sometimes seems to be trying to distract you from actually observing your motivations!

Anyway, the near brain doesn't have a personality - it embodies personalities, and can play whatever role you can remember or imagine. That's why I consider the exercise a waste of time in the general case, even though there are useful ways to do role-playing. If you merely play roles, you run the risk of simply confabulating, because your brain can play any role, whether or not it's related to what you actually do.

And it's not so much that it's fanfiction, per se (as it would be if you used only the "far" brain to write the dialogs). What you roleplay is real, in the sense that you are using the same equipment (if you're doing it right) that also plays the role of your "normal" personality! The near brain can play any role you want it to, so you are already corrupting the state of what you're trying to inspect by bringing roles into it in the first place.

IOW, it's a (relative) waste of time to have elaborate dialogs about your internal conflicts, even though there's a very good chance you'll stumble onto insights that lead you to fix things from time to time.

In effect, self-anthropomorphism is like spending time talking to chatbots, when what you need to do is directly inspect their source code and pull out their goal lists.

The things that seem to be "parts" or "personalities" are really just roles that you can play -- like mimicking a close friend or pretending to be Yoda or Darth Vader. You're essentially putting costumes on yourself and acting things out, rather than simply inspecting the raw material these roles are based on.

To put it another way, instead of pretending to be Darth Vader, what you want to be inspecting are the life events of Anakin Skywalker... unpleasant though that may be. ;-) (And even as unpleasant as it may be to watch little Ani's traumas, it's probably safer than asking to have a sit-down with Vader himself...)

So, the point of inner dialoging (IMO) is to identify those interests that are based on outdated attempts to seek SASS (Status, Affiliation, Safety, or Stimulation) in contexts where the desired behavior will not actually bring you those things. You can then surface that, and drop the mental rules that link SASS threats to a desired behavior, or SASS rewards to an undesired one.

(That, I guess, would be the alchemy/chemistry distinction that Roko was alluding to previously.)

In response to comment by pjeby on City of Lights
Comment author: zemaj 02 April 2010 02:08:06AM 1 point [-]

I agree. I worry that anthropomorphising these conflicting thoughts just strengthens the divide.

I like your comment: "All this has very little to do with actual agency or the workings of akrasia, though, and tends to interfere with the process of a person owning up to the goals that they want to dissociate from. By pretending it's another agency that wants to surf the net, you get to maintain moral superiority... and still hang onto your problem. The goal of virtually any therapy that involves multiple agencies is to integrate them, but the typical person, on getting hold of the metaphor, uses it to maintain the separation."

Comment author: zemaj 20 March 2010 07:12:52AM 2 points [-]

"How does thinking, in general, feel to you?" Do you mean this metaphorically? Can you give some examples of how thinking feels to you?

Comment author: zemaj 20 March 2010 07:35:32AM 3 points [-]

Hmmm... Thinking feels to me like poking leaves floating down a river.

In response to Living Luminously
Comment author: zemaj 17 March 2010 12:42:55PM 0 points [-]

Brilliant idea for a series! I spend a lot of time thinking about this: trying to understand my thoughts and consequently hack them.

It's really interesting how much variation there is in people's ability to comprehend the origin of their thoughts. It's also surprising how little control, or desire for control, some people have over their decisions. It certainly seems like something that can be learnt and changed over time; I've seen some significant improvements myself over the past 12 months without many external environmental changes.

The main hurdle I run up against is confidence in my conclusions - introspection can't be scientific by definition. I find it really difficult to measure improvement over time. I'm definitely interested to see how you deal with this!
