Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Rukiedor 15 October 2015 05:00:26PM 7 points [-]

What does 'difficulty concentrating' feel like for you? I often find that value words, like 'good', 'bad', 'difficult', 'happy', 'sad', mean different things for me than for most people.

I spend much of my free time working on a game that I intend to sell at some point. The indie game community talks a lot about focusing, overcoming difficulties concentrating, etc. But I've never seen someone describe in detail what 'difficulty concentrating' or 'difficulty getting to work' feels like. I find myself wondering if they are talking about what I think they are talking about. It's possible that their tips don't often work because we are thinking about different things.

Akrasia gets talked about a lot here, as well as ways of improving productivity, and I'm really curious what akrasia or difficulty concentrating actually feels like for other people. Taboo the words 'akrasia', 'procrastination', 'distraction' and other similar words, and tell me what it feels like.

Here's what 'difficulty getting to work' typically feels like for me: I look at my list of tasks and I get a strong feeling of despair. Starting work on the list feels like I'm chaining myself to an assembly line in a grey factory in a grey world making grey featureless joyless objects, and I'm going to be there for the rest of eternity. It's strange because I actually feel like what I'm producing is colorful, beautiful and interesting. I'm not sure if it is related to the length of the list. I thought it was perhaps due to the nebulous definition of the task leaving uncertainty as to what the finished task was supposed to look like, but I've had the same problems even with well-defined tasks.

Here's what 'difficulty concentrating' feels like: Imagine that you've got a good-sized dog, and you're trying to make it look at something. You grab its head and hold it down to look at whatever it is, and the dog fights you the whole time. Sometimes this will pass if I start with a simple task and get going. Other times it feels like every line of code I write is a continual struggle to hold the dog's head in place. Or imagine that you've really got to go to the bathroom, and you're trying to ignore it and just work. There's a pressure that demands release. It's almost like there's a voice in my head (not an actual voice, it's not schizophrenia) screaming for me to turn off my brain and play a video game or do something else that requires no brain power.

I would estimate that I have difficulty with these feelings at least 30% of the time I try to sit down and work. Sometimes these bother me at my day job, but they show up most frequently when I'm working in the evenings and weekends.

So what does it feel like for you?

I imagine someone will ask about this at some point: I have been diagnosed with Bipolar Type II, I currently take lamotrigine, quetiapine, and bupropion to manage it. I've had problems like this at least since my early teens.

Comment author: MathiasZaman 15 October 2015 06:11:53PM 4 points [-]

Disclaimer: I've been diagnosed with ADHD.

difficulty concentrating

This XKCD is a fair visualization of what difficulty concentrating feels like. I can be doing an activity (even a pleasurable one), but I get a lot of other stimuli coming in that link to different activities that also need doing or would also be fun or pleasurable. Or while doing an activity or trying to think about one specific thing, my mind jumps to other (often related) topics, and this has a tendency to escalate. Think about the way people describe going to TV Tropes. You start out reading about the film you just saw, and before you know it, your browser is filled up with dozens of tabs (all of which have links that you'll probably also click).


Akrasia feels, to me, a lot like inertia. Sometimes in a very physical way. It's a feeling of "being stuck" and often translates to physically being stuck, without anything specific holding you physically in place. It's like the space between thinking "Doing X would be a good idea right about now" and actually doing X is a steep, uphill climb.

Comment author: [deleted] 15 October 2015 12:50:14PM 0 points [-]

Hmm, I wonder whether we as a community should volunteer a set of guidelines and norms about karmic behaviour so as to aid the interpretation of karma. On the other hand, perhaps an intuitive system has its own charm.

In response to comment by [deleted] on Stupid questions thread, October 2015
Comment author: MathiasZaman 15 October 2015 05:37:54PM 0 points [-]

There's no real way to enforce that. Even with those guidelines you'll mostly end up with an intuitive system that's maybe influenced by the guidelines.

Comment author: [deleted] 15 October 2015 06:52:00AM *  0 points [-]

Interviewer: So, Mr. Larity, you seem like a great fit for this job so far. Do your values align with those of our company?

Clarity: (hmmm, I remember reading about values on the LessWrong wiki) ...

It is not known whether humans have terminal values that are clearly distinct from another set of instrumental values. Humans appear to adopt different values at different points in life. Nonetheless, if the theory of terminal values applies to humans, then their system of terminal values is quite complex. The values were forged by evolution in the ancestral environment to maximize inclusive genetic fitness. These values include survival, health, friendship, social status, love, joy, aesthetic pleasure, curiosity, and much more. Evolution's implicit goal is inclusive genetic fitness, but humans do not have inclusive genetic fitness as a goal. Rather, these values, which were instrumental to inclusive genetic fitness, have become humans' terminal values (an example of subgoal stomp).

Humans cannot fully introspect their terminal values. Humans' terminal values are often mutually contradictory, inconsistent, and changeable.

Interviewer: Carlos, I was asking you about values?

Clarity: Oh yeah, I reckon I have those values, so yeah, I'd make a great fit...

How do you communicate in the instrumental-rationality real world when your mind is immersed in the epistemic-rationality world, if that makes sense? Hopefully the situation I've described illustrates what I'm trying to say.

In response to comment by [deleted] on Stupid questions thread, October 2015
Comment author: MathiasZaman 15 October 2015 09:52:29AM 7 points [-]

Answer the question the interviewer means, not the question as you'd break it down on Less Wrong. Or more broadly: adapt your communication to the intended argument and goal.

In this particular example, you should know the values of the company before you end up at the interview, so the answer should be: yes, followed by one or two examples showing that your values match those of the company.

Comment author: [deleted] 15 October 2015 06:53:51AM 1 point [-]

Is retributive downvoting a thing on other forums, or is it just a LW thing? Do we have more retributive downvoting than other sites? Can anyone think of some relationship between rationality and vindictiveness? I feel like if anything we should be above that and have far less...

In response to comment by [deleted] on Stupid questions thread, October 2015
Comment author: MathiasZaman 15 October 2015 09:14:16AM 3 points [-]

Is retributive downvoting a thing on other forums, or is it just a LW thing?

Reddit also has it. I don't frequent other forums that use voting, but a forum I used to be part of had a user who would delve into the history of people he disagreed with and report year-old comments to get those people banned.

Given that it's an easy way to hinder "opponents" I very much doubt it's LW exclusive.

Can anyone think of some relationship between rationality and vindictiveness?

Apart from willingness to use tools others would think immoral, no. I also don't think we need to go that far as an explanation. You only need one person doing it in a community as small as this one for it to become noticeable.

Comment author: Fuglinnavon 14 October 2015 02:19:21PM 0 points [-]

Is humankind made to live in such big societies?

Comment author: MathiasZaman 14 October 2015 07:35:26PM *  4 points [-]

(Is "Are big societies optimal for human happiness/quality of life," a fair rephrasing of your question?)

I've been asking myself similar questions lately. As pointed out "made to live" implies things that never happened, in that humans weren't created, nor were the current societies/civilizations ever consciously designed or created. They just sort of happened.

Since both humans and societies got to where they are through mostly unthinking processes, it's easy to see how things didn't end up optimal.

Humans were hunter-gatherers for most of their existence. It's hard to intuitively grasp how long a time that is, but I find this quote helpful (source):

If the history of the human race began at midnight, then we would now be almost at the end of our first day. We lived as hunter-gatherers for nearly the whole of that day, from midnight through dawn, noon, and sunset. Finally, at 11:54 p.m. we adopted agriculture.

Without wanting to get into bad evolutionary sciences, I think it's reasonably fair that even modern humans are mostly adapted for the hunter-gatherer life, with a couple of more modern modules thrown in. It's also reasonably fair that humans were mostly "made" to live in small tribes, hunting and gathering.

Agriculture (and later writing, the printing press, the Industrial Revolution, computers...) gave us reasons to not be hunter-gatherers any more and my naive assessment is that a good number of those reasons are good ones. It's just that our bodies and brains haven't caught up.

So where am I going with this? I'm not sure. What I'm trying to say is that I think it's better to say that (our) big societies weren't made for humans (at least, they're not optimal for humans), rather than saying that humans weren't made for big societies.

Comment author: Lumifer 14 October 2015 04:17:24PM 0 points [-]

A better way to put this is something like "there is no starvation left that could be treated by government programs."

So you are saying that there is no starvation that could be treated by government programs, but there is starvation that could be eliminated by UBI?


Comment author: MathiasZaman 14 October 2015 07:18:47PM 0 points [-]

UBI means every citizen gets a sum of money in their account each month. Current government programs mean people need to jump through multiple hoops in order to get food. I don't think UBI is a panacea, but I don't think it's a stretch to say it'll reach people who aren't being helped by the current welfare systems.

Comment author: cousin_it 13 October 2015 12:16:22PM *  0 points [-]

The only way I think you could see the Superhappies' solution as acceptable is if you don't think jokes or fiction (or other sorts of art involving "deception") are something humans would value as part of their utility function.

Um, that's the opposite of how utility functions work. They don't have sacred components. You can and should trade off one component for a larger gain in another component. That's exactly what the super happies were offering.

Comment author: MathiasZaman 13 October 2015 01:05:37PM 2 points [-]

What I'm saying is that humans aren't wrong in trading off some amount of comfort so they can have jokes, fiction, art and romantic love.

Comment author: cousin_it 13 October 2015 10:40:40AM *  6 points [-]

I was just rereading Three Worlds Collide today and noticed that my feelings about the ending have changed over the last few years. It used to be obvious to me that the "status quo" ending was better. Now I feel that the "super happy" ending is better, and it's not just a matter of feelings - it's somehow axiomatically better, based on what I know about decision theory.

Namely, the story says that the super happies are smarter and understand humanity's utility function better, and also that they are moral and wouldn't offer a deal unless it was beneficial according to both utility functions being merged (not just according to their value of happiness). Under these conditions, accepting the deal seems like the right thing to do.

Comment author: MathiasZaman 13 October 2015 11:26:34AM 4 points [-]

Does the story actually say the Superhappies really know humanity's utility function better? As in, does an omniscient narrator tell it, or is it a Superhappy or one of the crew that says this? That changes a lot, to me. Of course the Superhappies would believe they know our utility function better than we do, just like how the humans assumed they knew what was better for the Babyeaters.

Similarly, the Superhappies are moral, for their idea of morality. They were perfectly willing to use force (not physical, but force nonetheless) to encourage humans to see their point of view. They threatened humanity and were willing to forcibly change human children, even if the adults could continue to feel pain. While humans also employ threats and force to change behavior, in most cases we would be hard-pressed to call that "moral."

From a meta-perspective, I'd find it odd if Yudkowsky wrote it like that. He's not careless enough to make that mistake, and as far as I know, he thinks humanity's utility function goes beyond mere bliss.

The only way I think you could see the Superhappies' solution as acceptable is if you don't think jokes or fiction (or other sorts of art involving "deception") are something humans would value as part of their utility function. Which I personally would find very hard to understand.

Comment author: helldalgo 12 October 2015 11:02:58AM 0 points [-]

I didn't know that there was a rationalist tumblr sphere. I should look into that.

Comment author: MathiasZaman 12 October 2015 05:52:57PM *  1 point [-]

This would be a good place to start looking. It's a list that holds most of the (self-proclaimed) rationalists on tumblr, although I can't guarantee the quality or level of activity of each tumblr. Notable absences are Scott's tumblr and theunitofcaring.

Comment author: MathiasZaman 09 October 2015 12:32:36PM 4 points [-]

The rationalist tumblr sphere helped me a lot. It's a lot more approachable for newcomers than this site is and has a very low barrier for making low-effort, high-emotion posts, which was something I could really use at the time. It also helped that I could see rationalist practices and their results in (more-or-less) real time, which were highly available examples (I've always learned better with good, tangible examples) and showed me that rationality could be practised by "real" people, rather than mythical figures like Jeffreyssai, the Defence Professor or Eliezer Yudkowsky.

Yudkowsky's fiction also helped because it provided easy-to-read content that teaches the basics. For the same reasons, I Shall Wear Midnight (by Terry Pratchett) was useful.
